Documentation Pass for Models (#2617)
* links in chinese_clip
* links for clip model
* add mod docs for flux and llava
* module doc for MMDIT and MIMI
* add docs for a few more models
* mod docs for bert naser and beit
* add module docs for convmixer colpali codegeex and chatglm
* add another series of moddocs
* add fastvit-llama2_c
* module docs mamba -> mobileone
* module docs from moondream-phi3
* mod docs for quantized and qwen
* update to yi
* fix long names
* Update llama2_c.rs
* Update llama2_c_weights.rs
* Fix the link for mimi + tweaks

Co-authored-by: Laurent Mazare <laurent.mazare@gmail.com>
@@ -1,5 +1,22 @@
-// This implementation is based on the python version from huggingface/transformers.
-// https://github.com/huggingface/transformers/blob/b109257f4fb8b1166e7c53cc5418632014ed53a5/src/transformers/models/recurrent_gemma/modeling_recurrent_gemma.py#L2
+//! Recurrent Gemma model implementation
+//!
+//! Recurrent Gemma is a version of the Gemma language model that incorporates recurrent memory.
+//! This allows the model to maintain state between predictions and have longer-range memory.
+//!
+//! Key characteristics:
+//! - Real-gated linear recurrent units (RGLRU)
+//! - 1D convolution for local context
+//! - RMSNorm for layer normalization
+//! - Rotary positional embeddings (RoPE)
+//! - Grouped query attention
+//!
+//! References:
+//! - [Gemma: Open Models Based on Gemini Technology](https://blog.google/technology/developers/gemma-open-models/)
+//! - [Recurrent Memory model architecture](https://arxiv.org/abs/2402.00441)
+//!
+//! This implementation is based on the python version from huggingface/transformers.
+//! https://github.com/huggingface/transformers/blob/b109257f4fb8b1166e7c53cc5418632014ed53a5/src/transformers/models/recurrent_gemma/modeling_recurrent_gemma.py#L2
+//!
 use candle::{DType, Device, IndexOp, Module, Result, Tensor, D};
 use candle_nn::{linear_b as linear, Linear, VarBuilder};
 use std::sync::Arc;
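The new module doc names the real-gated linear recurrent unit (RGLRU) as the model's defining component. As a rough illustration of that recurrence, the sketch below runs one RG-LRU time step over plain f32 slices, following the update rule described in the RecurrentGemma/Griffin papers. It is not the candle implementation: the function names, the way gate pre-activations are passed in, and the scaling constant C are assumptions made for this example.

// Illustrative sketch only: in the real model the gate pre-activations come
// from learned linear layers, and everything is done on tensors, not slices.
fn sigmoid(x: f32) -> f32 {
    1.0 / (1.0 + (-x).exp())
}

/// One RG-LRU time step updating the state `h` in place from the input `x`.
/// `recurrence_gate` and `input_gate` are the gate pre-activations;
/// `recurrence_param` is the per-channel pre-sigmoid recurrence parameter
/// (the Lambda of the paper).
fn rglru_step(
    h: &mut [f32],
    x: &[f32],
    recurrence_gate: &[f32],
    input_gate: &[f32],
    recurrence_param: &[f32],
) {
    const C: f32 = 8.0; // scaling constant used in the paper (assumed here)
    for i in 0..h.len() {
        let r = sigmoid(recurrence_gate[i]);
        let g = sigmoid(input_gate[i]);
        // a_t = sigmoid(Lambda)^(C * r_t)
        let a = sigmoid(recurrence_param[i]).powf(C * r);
        // h_t = a_t * h_{t-1} + sqrt(1 - a_t^2) * (g_t * x_t)
        h[i] = a * h[i] + (1.0 - a * a).sqrt() * (g * x[i]);
    }
}

fn main() {
    let mut h = vec![0.0f32; 4];
    let x = vec![0.5f32; 4];
    let r_gate = vec![0.0f32; 4];
    let i_gate = vec![0.0f32; 4];
    let lambda = vec![1.0f32; 4];
    rglru_step(&mut h, &x, &r_gate, &i_gate, &lambda);
    println!("h after one step: {h:?}");
}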