Documentation Pass for Models (#2617)

* links in chinese_clip

* links for clip model

* add mod docs for flux and llava

* module doc for MMDIT and MIMI

* add docs for a few more models

* mod docs for bert naser and beit

* add module docs for convmixer, colpali, codegeex and chatglm

* add another series of moddocs

* add fastvit-llama2_c

* module docs mamba -> mobileone

* module docs from moondream-phi3

* mod docs for quantized and qwen

* update to yi

* fix long names

* Update llama2_c.rs

* Update llama2_c_weights.rs

* Fix the link for mimi + tweaks

---------

Co-authored-by: Laurent Mazare <laurent.mazare@gmail.com>
Author: zachcp
Date: 2024-11-15 02:30:15 -05:00
Committed by: GitHub
Parent: 0ed24b9852
Commit: f689ce5d39
94 changed files with 1001 additions and 51 deletions


@@ -1,4 +1,18 @@
-/// https://huggingface.co/01-ai/Yi-6B/blob/main/modeling_yi.py
+//! Yi model implementation.
+//!
+//! Yi is a decoder-only large language model trained by 01.AI.
+//! It follows a standard transformer architecture similar to Llama.
+//!
+//! Key characteristics:
+//! - Multi-head attention with rotary positional embeddings
+//! - RMS normalization
+//! - SwiGLU activation in feed-forward layers
+//! - Grouped-query attention for efficient inference
+//!
+//! References:
+//! - [Yi Model](https://huggingface.co/01-ai/Yi-6B)
+//! - [Hugging Face](https://huggingface.co/01-ai/Yi-6B/blob/main/modeling_yi.py)
 use crate::models::with_tracing::{linear_no_bias, Linear, RmsNorm};
 use candle::{DType, Device, Module, Result, Tensor, D};
 use candle_nn::{Activation, VarBuilder};
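
For context, the SwiGLU feed-forward layer called out in the new module doc is the usual Llama-style gate/up/down projection. The sketch below is illustrative only: the names MlpSketch, hidden_size and intermediate_size are placeholders, it is not the actual candle-transformers yi module, and it uses plain candle_nn layers rather than the with_tracing wrappers imported in the diff above.

use candle::{Module, Result, Tensor};
use candle_nn::{linear_no_bias, ops::silu, Linear, VarBuilder};

// Illustrative SwiGLU feed-forward block; names are placeholders.
struct MlpSketch {
    gate_proj: Linear,
    up_proj: Linear,
    down_proj: Linear,
}

impl MlpSketch {
    fn new(hidden_size: usize, intermediate_size: usize, vb: VarBuilder) -> Result<Self> {
        Ok(Self {
            gate_proj: linear_no_bias(hidden_size, intermediate_size, vb.pp("gate_proj"))?,
            up_proj: linear_no_bias(hidden_size, intermediate_size, vb.pp("up_proj"))?,
            down_proj: linear_no_bias(intermediate_size, hidden_size, vb.pp("down_proj"))?,
        })
    }
}

impl Module for MlpSketch {
    fn forward(&self, xs: &Tensor) -> Result<Tensor> {
        // SwiGLU: silu(gate(x)) * up(x), then project back to the hidden size.
        let gate = silu(&self.gate_proj.forward(xs)?)?;
        let up = self.up_proj.forward(xs)?;
        self.down_proj.forward(&(gate * up)?)
    }
}

In the real module this pattern sits alongside the grouped-query attention and the RmsNorm / linear_no_bias wrappers from with_tracing shown in the diff.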