mirror of
https://github.com/huggingface/candle.git
synced 2025-06-20 20:09:50 +00:00
Documentation Pass for Models (#2617)
* links in chinese_clip
* links for clip model
* add mod docs for flux and llava
* module doc for MMDIT and MIMI
* add docs for a few more models
* mod docs for bert naser and beit
* add module docs for convmixer colpali codegeex and chatglm
* add another series of mod docs
* add fastvit-llama2_c
* module docs mamba -> mobileone
* module docs from moondream-phi3
* mod docs for quantized and qwen
* update to yi
* fix long names
* Update llama2_c.rs
* Update llama2_c_weights.rs
* Fix the link for mimi + tweaks

---------

Co-authored-by: Laurent Mazare <laurent.mazare@gmail.com>
@ -1,3 +1,21 @@
//! RWKV v6 model implementation with quantization support.
//!
//! RWKV is a linear attention model that combines the efficiency of RNNs
//! with the parallelizable training of Transformers. Version 6 builds on previous
//! versions with further optimizations.
//!
//! Key characteristics:
//! - Linear attention mechanism
//! - Time mixing layers
//! - Channel mixing layers
//! - RMSNorm for normalization
//! - Support for 8-bit quantization
//!
//! References:
//! - [RWKV Architecture](https://github.com/BlinkDL/RWKV-LM)
//! - [RWKV v6 Release](https://huggingface.co/BlinkDL/rwkv-6)
//!
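The linear attention mentioned in the doc comment can be summarized as a recurrence: instead of attending over all past tokens, the model keeps a running state that is decayed and updated once per step, giving O(1) work per token. A toy single-channel Rust sketch (illustrative only; the function name `linear_attention` and the scalar decay are made up here and this is not the candle implementation):

```rust
/// Toy RWKV-style linear attention as an RNN: at each step the running
/// key-value state is decayed by `w` (time mixing) and the receptance `r`
/// gates what is read out. Scalar per-channel version for clarity.
fn linear_attention(w: f64, ks: &[f64], vs: &[f64], rs: &[f64]) -> Vec<f64> {
    let mut state = 0.0; // running decayed sum of k*v
    let mut out = Vec::with_capacity(ks.len());
    for i in 0..ks.len() {
        state = w * state + ks[i] * vs[i]; // O(1) update per token
        out.push(rs[i] * state); // receptance gates the state
    }
    out
}

fn main() {
    // step 0: state = 1*2 = 2; step 1: state = 0.5*2 + 1*4 = 5
    let y = linear_attention(0.5, &[1.0, 1.0], &[2.0, 4.0], &[1.0, 1.0]);
    println!("{:?}", y); // → [2.0, 5.0]
}
```

Because the state is carried forward rather than recomputed, inference cost per token stays constant regardless of sequence length, which is the efficiency the doc comment attributes to the RNN side of RWKV.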

use crate::{
    quantized_nn::{layer_norm, linear_no_bias as linear, Embedding, Linear},
    quantized_var_builder::VarBuilder,
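The "8-bit quantization" the doc comment lists stores weights as small integers plus a scale instead of full-precision floats. A minimal symmetric round-trip sketch (illustrative assumption only; candle's quantized kernels use block-based GGML formats, not this simple per-tensor scheme, and `quantize`/`dequantize` here are hypothetical names):

```rust
/// Symmetric 8-bit quantization: map floats into [-127, 127] with a single
/// scale derived from the largest absolute value.
fn quantize(xs: &[f32]) -> (Vec<i8>, f32) {
    let max = xs.iter().fold(0.0f32, |m, &x| m.max(x.abs()));
    let scale = if max == 0.0 { 1.0 } else { max / 127.0 };
    let qs = xs.iter().map(|&x| (x / scale).round() as i8).collect();
    (qs, scale)
}

/// Recover approximate floats from the quantized values and the scale.
fn dequantize(qs: &[i8], scale: f32) -> Vec<f32> {
    qs.iter().map(|&q| f32::from(q) * scale).collect()
}

fn main() {
    let (qs, scale) = quantize(&[0.1, -0.5, 1.0]);
    let back = dequantize(&qs, scale);
    // Round-trip is lossy but close; the largest value is exact by construction.
    println!("{:?}", back);
}
```

The trade-off is the usual one for quantized inference: roughly 4x less memory than f32 weights at the cost of a small, bounded rounding error per value.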