Mirror of https://github.com/huggingface/candle.git, synced 2025-06-16 18:48:51 +00:00
Documentation Pass for Models (#2617)
* links in chinese_clip
* links for clip model
* add mod docs for flux and llava
* module doc for MMDIT and MIMI
* add docs for a few more models
* mod docs for bert naser and beit
* add module docs for convmixer colpali codegeex and chatglm
* add another series of moddocs
* add fastvit-llama2_c
* module docs mamba -> mobileone
* module docs from moondream-phi3
* mod docs for quantized and qwen
* update to yi
* fix long names
* Update llama2_c.rs
* Update llama2_c_weights.rs
* Fix the link for mimi + tweaks

---------

Co-authored-by: Laurent Mazare <laurent.mazare@gmail.com>
@@ -1,4 +1,18 @@
-/// https://huggingface.co/01-ai/Yi-6B/blob/main/modeling_yi.py
+//! Yi model implementation.
+//!
+//! Yi is a decoder-only large language model trained by 01.AI.
+//! It follows a standard transformer architecture similar to Llama.
+//!
+//! Key characteristics:
+//! - Multi-head attention with rotary positional embeddings
+//! - RMS normalization
+//! - SwiGLU activation in feed-forward layers
+//! - Grouped-query attention for efficient inference
+//!
+//! References:
+//! - [Yi Model](https://huggingface.co/01-ai/Yi-6B)
+//! - [Hugging Face](https://huggingface.co/01-ai/Yi-6B/blob/main/modeling_yi.py)
+
 use crate::models::with_tracing::{linear_no_bias, Linear, RmsNorm};
 use candle::{DType, Device, Module, Result, Tensor, D};
 use candle_nn::{Activation, VarBuilder};