Module Docs (#2624)
* update whisper
* update llama2c
* update t5
* update phi and t5
* add a blip model
* qlamma doc
* add two new docs
* add docs and emoji
* additional models
* openclip
* pixtral
* edits on the model docs
* update yu
* update a few more models
* add persimmon
* add model-level doc
* names
* update module doc
* links in hiera
* remove empty URL
* update more hyperlinks
* updated hyperlinks
* more links
* Update mod.rs

---------

Co-authored-by: Laurent Mazare <laurent.mazare@gmail.com>
@@ -1,8 +1,11 @@
 //! Based on the BLIP paper from Salesforce Research.
 //!
-//! See "BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation"
-//! - [Arxiv](https://arxiv.org/abs/2201.12086)
-//! - [Github](https://github.com/salesforce/BLIP)
+//! The blip-image-captioning model can generate captions for an input image.
+//!
+//! - ⚡ [Interactive Wasm Example](https://huggingface.co/spaces/radames/Candle-BLIP-Image-Captioning)
+//! - 💻 [GH Link](https://github.com/salesforce/BLIP)
+//! - 🤗 [HF Link](https://huggingface.co/Salesforce/blip-image-captioning-base)
+//! - 📝 [Paper](https://arxiv.org/abs/2201.12086)
 //!
 
 use super::blip_text;
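For context, the captioning flow these docs point at is: encode the image once, then decode caption tokens autoregressively against the image embeddings. Below is a minimal sketch following the shape of candle's BLIP example (candle-examples/examples/blip); the method names (`Config::image_captioning_large`, `vision_model`, `text_decoder`), the start token id (30522), the `[SEP]` id (102), and the 384×384 input size are assumptions taken from that example and should be checked against the candle version and tokenizer in use. It uses plain greedy decoding where the real example samples through a `LogitsProcessor`.

```rust
use candle_core::{DType, Device, Result, Tensor};
use candle_nn::VarBuilder;
use candle_transformers::models::blip;

// Sketch of greedy caption generation; `image` is assumed to be a
// (3, 384, 384) f32 tensor preprocessed the way the BLIP example does it.
fn caption(image: &Tensor, weights: &str, device: &Device) -> Result<Vec<u32>> {
    let vb = unsafe { VarBuilder::from_mmaped_safetensors(&[weights], DType::F32, device)? };
    let config = blip::Config::image_captioning_large();
    let mut model = blip::BlipForConditionalGeneration::new(&config, vb)?;

    // Encode the image once; its embeddings condition every decoding step.
    let image_embeds = image.unsqueeze(0)?.apply(model.vision_model())?;

    let mut token_ids = vec![30522u32]; // assumed [DEC] start-of-caption id
    for step in 0..64 {
        // The decoder caches past keys/values, so after the first step
        // only the newest token is fed in.
        let context = if step > 0 { 1 } else { token_ids.len() };
        let start = token_ids.len() - context;
        let input = Tensor::new(&token_ids[start..], device)?.unsqueeze(0)?;
        let logits = model.text_decoder().forward(&input, &image_embeds)?;
        let logits = logits.squeeze(0)?;
        let last = logits.get(logits.dim(0)? - 1)?;
        let next = last.argmax(0)?.to_scalar::<u32>()?;
        if next == 102 {
            break; // assumed [SEP] id terminates the caption
        }
        token_ids.push(next);
    }
    Ok(token_ids) // decode to text with the matching tokenizer
}
```

The returned ids would be turned back into text with the tokenizer shipped alongside the checkpoint (the HF link above), mirroring how the interactive Wasm example presents its captions.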