Module Docs (#2624)

* update whisper

* update llama2c

* update t5

* update phi and t5

* add a blip model

* qlamma doc

* add two new docs

* add docs and emoji

* additional models

* openclip

* pixtral

* edits on the model docs

* update yu

* update a few more models

* add persimmon

* add model-level doc

* names

* update module doc

* links in hiera

* remove empty URL

* update more hyperlinks

* updated hyperlinks

* more links

* Update mod.rs

---------

Co-authored-by: Laurent Mazare <laurent.mazare@gmail.com>
zachcp
2024-11-18 08:19:23 -05:00
committed by GitHub
parent 12d7e7b145
commit 386fd8abb4
39 changed files with 170 additions and 115 deletions


@@ -3,7 +3,11 @@
 //! Open Contrastive Language-Image Pre-Training (OpenCLIP) is an architecture trained on
 //! pairs of images with related texts.
 //!
-//! - [GH Link](https://github.com/mlfoundations/open_clip)
+//! - 💻 [GH Link](https://github.com/mlfoundations/open_clip)
+//! - 📝 [Paper](https://arxiv.org/abs/2212.07143)
+//!
+//! ## Overview
+//!
+//! ![](https://raw.githubusercontent.com/mlfoundations/open_clip/main/docs/CLIP.png)
 pub mod text_model;
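The diff above uses Rust's inner doc comments (`//!`), which attach documentation to the enclosing module rather than to the item that follows (that is what outer `///` comments do). A minimal sketch of the same module-doc pattern, using a hypothetical module and links purely for illustration:

```rust
//! Example Model (hypothetical — not a real candle model)
//!
//! One-line description of the architecture, trained on example data.
//!
//! - 💻 [GH Link](https://github.com/example/example_model)
//! - 📝 [Paper](https://example.org/paper)
//!
//! ## Overview
//!
//! ![](https://example.org/architecture.png)

/// Outer doc comment: documents the item below it (`///` vs `//!`).
pub fn model_name() -> &'static str {
    "example-model"
}

fn main() {
    println!("{}", model_name());
}
```

Rustdoc renders these comments as Markdown, so the emoji bullets, `## Overview` heading, and image link in the commit's diff all appear on the generated docs page for the module.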