Module Docs (#2624)

* update whisper

* update llama2c

* update t5

* update phi and t5

* add a blip model

* quantized llama doc

* add two new docs

* add docs and emoji

* additional models

* openclip

* pixtral

* edits on the model docs

* update yi

* update a few more models

* add persimmon

* add model-level doc

* names

* update module doc

* links in hiera

* remove empty URL

* update more hyperlinks

* updated hyperlinks

* more links

* Update mod.rs

---------

Co-authored-by: Laurent Mazare <laurent.mazare@gmail.com>
Author: zachcp
Date: 2024-11-18 08:19:23 -05:00
Committed by: GitHub
Parent commit: 12d7e7b145
Commit: 386fd8abb4
39 changed files with 170 additions and 115 deletions

candle-transformers/src/models/blip.rs

@@ -1,8 +1,11 @@
 //! Based on the BLIP paper from Salesforce Research.
 //!
-//! See "BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation"
-//! - [Arxiv](https://arxiv.org/abs/2201.12086)
-//! - [Github](https://github.com/salesforce/BLIP)
+//! The blip-image-captioning model can generate captions for an input image.
+//!
+//! - ⚡ [Interactive Wasm Example](https://huggingface.co/spaces/radames/Candle-BLIP-Image-Captioning)
+//! - 💻 [GH Link](https://github.com/salesforce/BLIP)
+//! - 🤗 [HF Link](https://huggingface.co/Salesforce/blip-image-captioning-base)
+//! - 📝 [Paper](https://arxiv.org/abs/2201.12086)
 //!
 use super::blip_text;
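
For reference, the captioning flow these docs point at looks roughly like the sketch below, adapted from the candle-examples blip example. The constructor and accessor names (`Config::image_captioning_large`, `vision_model`, `text_decoder`) and the token-id constants are assumptions taken from that example, not from this diff, and may differ at this commit.

```rust
use candle::{Device, Tensor};
use candle_nn::VarBuilder;
use candle_transformers::models::blip::{BlipForConditionalGeneration, Config};

// Assumed [SEP]/end-of-caption token id, taken from the candle-examples blip example.
const SEP_TOKEN_ID: u32 = 102;

fn caption_ids(image: &Tensor, vb: VarBuilder, device: &Device) -> candle::Result<Vec<u32>> {
    // Assumed config constructor; the example uses the captioning-large checkpoint.
    let config = Config::image_captioning_large();
    let mut model = BlipForConditionalGeneration::new(&config, vb)?;

    // Encode the (3, 384, 384) image once; the text decoder cross-attends to it.
    let image_embeds = image.unsqueeze(0)?.apply(model.vision_model())?;

    let mut token_ids: Vec<u32> = vec![30522]; // assumed BOS id from the example
    for index in 0..60 {
        // Assuming the text decoder keeps an internal KV cache, only the newest
        // token needs to be fed after the first step.
        let context_size = if index > 0 { 1 } else { token_ids.len() };
        let start = token_ids.len().saturating_sub(context_size);
        let input_ids = Tensor::new(&token_ids[start..], device)?.unsqueeze(0)?;
        let logits = model.text_decoder().forward(&input_ids, &image_embeds)?;
        let logits = logits.squeeze(0)?;
        // Take the logits of the last position and pick the argmax (greedy decoding).
        let last = logits.get(logits.dim(0)? - 1)?;
        let token = last.argmax(0)?.to_scalar::<u32>()?;
        if token == SEP_TOKEN_ID {
            break;
        }
        token_ids.push(token);
    }
    Ok(token_ids)
}
```

The design point the new doc comment alludes to: the image is embedded once, and the decoder reuses those embeddings via cross-attention at every generation step, so captioning cost is dominated by the autoregressive text loop rather than repeated vision passes.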