Documentation Pass for Models (#2617)
* links in chinese_clip
* links for clip model
* add mod docs for flux and llava
* module doc for MMDIT and MIMI
* add docs for a few more models
* mod docs for bert naser and beit
* add module docs for convmixer colpali codegeex and chatglm
* add another series of mod docs
* add fastvit-llama2_c
* module docs mamba -> mobileone
* module docs from moondream-phi3
* mod docs for quantized and qwen
* update to yi
* fix long names
* Update llama2_c.rs
* Update llama2_c_weights.rs
* Fix the link for mimi + tweaks

---------

Co-authored-by: Laurent Mazare <laurent.mazare@gmail.com>
@@ -1,3 +1,19 @@
+//! Quantized Llama2 model implementation.
+//!
+//! This provides an 8-bit quantized implementation of Meta's LLaMA2 language model
+//! for reduced memory usage and faster inference.
+//!
+//! Key characteristics:
+//! - Decoder-only transformer architecture
+//! - RoPE position embeddings
+//! - Grouped Query Attention
+//! - 8-bit quantization of weights
+//!
+//! References:
+//! - [LLaMA2 Paper](https://arxiv.org/abs/2307.09288)
+//! - [LLaMA2 Technical Report](https://ai.meta.com/research/publications/llama-2-open-foundation-and-fine-tuned-chat-models/)
+//!
+
 use super::llama2_c::{Cache, Config};
 use crate::quantized_nn::{linear_no_bias as linear, Embedding, Linear, RmsNorm};
 pub use crate::quantized_var_builder::VarBuilder;
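For context, a minimal usage sketch of how a quantized checkpoint might be loaded through the re-exported `VarBuilder`. This is not part of the commit: only `VarBuilder::from_gguf` and the `Config`/`Cache` imports appear in the diff above; the `QLlama::load` constructor, the `Config::tiny_15m()` helper, and the "model-q8_0.gguf" file name are assumptions for illustration.

// Minimal sketch (not part of the commit), assuming a `QLlama::load`
// constructor analogous to the non-quantized `Llama::load`.
use candle_core::Device;
use candle_transformers::quantized_var_builder::VarBuilder;

fn main() -> candle_core::Result<()> {
    let device = Device::Cpu;
    // `from_gguf` reads the quantized tensors from a GGUF file; weights stay
    // in their quantized form and are dequantized on the fly during matmuls,
    // which is what gives the reduced memory usage mentioned in the mod docs.
    let vb = VarBuilder::from_gguf("model-q8_0.gguf", &device)?; // file name is illustrative

    // Hypothetical follow-up: build the model from the quantized weights.
    // let config = Config::tiny_15m();          // assumed config helper
    // let model = QLlama::load(vb, config)?;    // assumed constructor
    let _ = vb;
    Ok(())
}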