Documentation Pass for Models (#2617)

* links in chinese_clip

* links for clip model

* add mod docs for flux and llava

* module docs for MMDiT and Mimi

* add docs for a few more models

* mod docs for bert, based, and beit

* add module docs for convmixer, colpali, codegeex, and chatglm

* add another series of module docs

* add module docs fastvit -> llama2_c

* module docs mamba -> mobileone

* module docs from moondream -> phi3

* mod docs for quantized and qwen

* update to yi

* fix long names

* Update llama2_c.rs

* Update llama2_c_weights.rs

* Fix the link for mimi + tweaks

---------

Co-authored-by: Laurent Mazare <laurent.mazare@gmail.com>
Author: zachcp
Date: 2024-11-15 02:30:15 -05:00
Committed by: GitHub
Parent: 0ed24b9852
Commit: f689ce5d39
94 changed files with 1001 additions and 51 deletions

@@ -1,3 +1,20 @@
//! Mixtral Model, a sparse mixture-of-experts model based on the Mistral architecture
//!
//! See Mixtral model details at:
//! - [Hugging Face](https://huggingface.co/docs/transformers/model_doc/mixtral)
//! - [Mixtral-8x7B Blog Post](https://mistral.ai/news/mixtral-of-experts/)
//!
//! The model uses a mixture-of-experts architecture with:
//! - 8 experts per layer
//! - Top 2 expert routing
//! - Sliding window attention
//! - RoPE embeddings
//!
//! References:
//! - [Hugging Face Implementation](https://github.com/huggingface/transformers/blob/main/src/transformers/models/mixtral/modeling_mixtral.py)
//!
use crate::models::with_tracing::{linear_no_bias, Linear, RmsNorm};
/// Mixtral Model
/// https://github.com/huggingface/transformers/blob/main/src/transformers/models/mixtral/modeling_mixtral.py
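The "Top 2 expert routing" bullet in the new module docs is the distinctive part of this architecture, so here is a minimal sketch of that gating step in plain Rust. The function name `route_top2` and the scalar `&[f32]` interface are illustrative assumptions for this sketch only; candle's real router operates on `Tensor`s inside the model's forward pass.

```rust
/// Illustrative top-2 gating as described in the module docs above:
/// pick the two highest-scoring experts for a token and renormalize
/// their router logits with a softmax over just that pair. A sketch,
/// not candle's actual implementation (which runs on `Tensor`s).
fn route_top2(router_logits: &[f32]) -> [(usize, f32); 2] {
    // Sort expert indices by logit, descending (Mixtral has 8 experts).
    let mut idx: Vec<usize> = (0..router_logits.len()).collect();
    idx.sort_unstable_by(|&a, &b| router_logits[b].total_cmp(&router_logits[a]));
    let (i0, i1) = (idx[0], idx[1]);
    // Numerically stable softmax over the two selected logits only; this
    // is equivalent to a softmax over all experts followed by renormalizing
    // the top-2 weights, as in the Hugging Face reference code.
    let (l0, l1) = (router_logits[i0], router_logits[i1]);
    let m = l0.max(l1);
    let (e0, e1) = ((l0 - m).exp(), (l1 - m).exp());
    let z = e0 + e1;
    [(i0, e0 / z), (i1, e1 / z)]
}

fn main() {
    // Router logits for a single token over 8 experts.
    let logits = [0.1f32, 2.0, -0.5, 1.5, 0.0, 0.3, -1.0, 0.7];
    // Experts 1 and 3 win; their two weights sum to 1.0.
    println!("{:?}", route_top2(&logits));
}
```

Each token's hidden state is then sent through only the two selected expert FFNs and their outputs are combined with these weights, which is what makes the model sparse at inference time.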