mirror of
https://github.com/huggingface/candle.git
synced 2025-06-18 19:47:12 +00:00
Documentation Pass for Models (#2617)
* links in chinese_clip
* links for clip model
* add mod docs for flux and llava
* module doc for MMDIT and MIMI
* add docs for a few more models
* mod docs for bert naser and beit
* add module docs for convmixer colpali codegeex and chatglm
* add another series of moddocs
* add fastvit-llama2_c
* module docs mamba -> mobileone
* module docs from moondream-phi3
* mod docs for quantized and qwen
* update to yi
* fix long names
* Update llama2_c.rs
* Update llama2_c_weights.rs
* Fix the link for mimi + tweaks

---------

Co-authored-by: Laurent Mazare <laurent.mazare@gmail.com>
@@ -1,10 +1,9 @@
 //! Based from the Stanford Hazy Research group.
 //!
 //! See "Simple linear attention language models balance the recall-throughput tradeoff", Arora et al. 2024
-//! <https://arxiv.org/abs/2402.18668>
-//!
-//! Original code:
-//! https://github.com/HazyResearch/based
+//! - [Arxiv](https://arxiv.org/abs/2402.18668)
+//! - [Github](https://github.com/HazyResearch/based)
+//!
 
 use candle::{DType, Device, IndexOp, Module, Result, Tensor, D};
 use candle_nn::{
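The paper cited in the module docs above describes linear attention, which replaces the softmax attention matrix with a feature map so the causal pass runs in O(n) with a carried running state. A minimal toy sketch of that recurrence, assuming plain `f32` vectors; this is not candle's implementation (the real Based model works on `Tensor`s, uses an expanded second-order Taylor feature map over the q·k dot product rather than the elementwise stand-in below, and adds a sliding-window attention branch):

```rust
/// Elementwise stand-in for the paper's Taylor feature map
/// (the real map approximates exp(q.k) via 1 + q.k + (q.k)^2 / 2).
fn feature_map(x: &[f32]) -> Vec<f32> {
    x.iter().map(|&v| 1.0 + v + 0.5 * v * v).collect()
}

/// Causal linear attention over (query, key, value) triples.
/// O(n) in sequence length: carries a running state
/// s = sum_i phi(k_i) v_i^T and normalizer z = sum_i phi(k_i)
/// instead of materializing the n x n attention matrix.
fn linear_attention(qs: &[Vec<f32>], ks: &[Vec<f32>], vs: &[Vec<f32>]) -> Vec<Vec<f32>> {
    let d_k = feature_map(&qs[0]).len();
    let d_v = vs[0].len();
    let mut state = vec![vec![0.0f32; d_v]; d_k]; // s: d_k x d_v
    let mut z = vec![0.0f32; d_k]; // running normalizer
    let mut out = Vec::with_capacity(qs.len());
    for ((q, k), v) in qs.iter().zip(ks).zip(vs) {
        let (fq, fk) = (feature_map(q), feature_map(k));
        // Fold the current key/value into the running state.
        for i in 0..d_k {
            z[i] += fk[i];
            for j in 0..d_v {
                state[i][j] += fk[i] * v[j];
            }
        }
        // y_t = phi(q_t)^T s / (phi(q_t) . z)
        let denom: f32 = fq.iter().zip(&z).map(|(a, b)| a * b).sum();
        let y: Vec<f32> = (0..d_v)
            .map(|j| {
                fq.iter()
                    .enumerate()
                    .map(|(i, a)| a * state[i][j])
                    .sum::<f32>()
                    / denom
            })
            .collect();
        out.push(y);
    }
    out
}

fn main() {
    // At the first timestep the output must equal v_0 exactly,
    // since numerator and denominator share the factor phi(q).phi(k).
    let qs = vec![vec![0.5, -0.2], vec![0.3, 0.1]];
    let ks = vec![vec![0.1, 0.3], vec![-0.4, 0.2]];
    let vs = vec![vec![2.0, 4.0], vec![1.0, -1.0]];
    let out = linear_attention(&qs, &ks, &vs);
    assert!((out[0][0] - 2.0).abs() < 1e-5);
    assert!((out[0][1] - 4.0).abs() < 1e-5);
    println!("outputs: {:?}", out);
}
```

The design point the paper makes is visible here: `state` and `z` are fixed-size regardless of sequence length, which is what trades recall for throughput relative to full softmax attention.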