Module Docs (#2620)

* update bert docs

* update based

* update bigcode

* add pixtral

* add flux as well
Commit a3f200e369 (parent 00d8a0c178) by zachcp, 2024-11-16 03:09:17 -05:00, committed via GitHub.
5 changed files with 126 additions and 10 deletions


@@ -1,9 +1,9 @@
 //! Based from the Stanford Hazy Research group.
 //!
-//! See "Simple linear attention language models balance the recall-throughput tradeoff", Arora et al. 2024
-//! - [Arxiv](https://arxiv.org/abs/2402.18668)
-//! - [Github](https://github.com/HazyResearch/based)
-//!
+//! - Simple linear attention language models balance the recall-throughput tradeoff. [Arxiv](https://arxiv.org/abs/2402.18668)
+//! - [Github Rep](https://github.com/HazyResearch/based)
+//! - [Blogpost](https://hazyresearch.stanford.edu/blog/2024-03-03-based)
 use candle::{DType, Device, IndexOp, Module, Result, Tensor, D};
 use candle_nn::{
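
The hunk above documents Based, a linear-attention language model. As a rough illustration of the mechanism the docs reference (not candle's actual implementation), here is a hypothetical sketch of causal linear attention in plain Rust, using an assumed elu(x)+1 feature map so the normalizer stays positive; all names here are illustrative:

```rust
// Positive feature map: elu(x) + 1 (an assumed choice, common in linear attention).
fn feature_map(x: &[f64]) -> Vec<f64> {
    x.iter()
        .map(|&v| if v > 0.0 { v + 1.0 } else { v.exp() })
        .collect()
}

// Causal linear attention over a sequence of query/key/value rows.
// out_i = phi(q_i)^T * sum_{j<=i} phi(k_j) v_j^T  /  phi(q_i)^T * sum_{j<=i} phi(k_j)
// Running sums `s` and `z` make this O(n * d * dv) rather than O(n^2).
fn linear_attention(q: &[Vec<f64>], k: &[Vec<f64>], v: &[Vec<f64>]) -> Vec<Vec<f64>> {
    let d = q[0].len();   // feature dimension
    let dv = v[0].len();  // value dimension
    let mut s = vec![vec![0.0; dv]; d]; // s[f][u] = sum_j phi(k_j)[f] * v_j[u]
    let mut z = vec![0.0; d];           // z[f]    = sum_j phi(k_j)[f]
    let mut out = Vec::with_capacity(q.len());
    for i in 0..q.len() {
        // Fold the current key/value pair into the running state.
        let pk = feature_map(&k[i]);
        for f in 0..d {
            z[f] += pk[f];
            for u in 0..dv {
                s[f][u] += pk[f] * v[i][u];
            }
        }
        // Read out with the current query.
        let pq = feature_map(&q[i]);
        let denom: f64 = pq.iter().zip(&z).map(|(a, b)| a * b).sum::<f64>().max(1e-9);
        let mut row = vec![0.0; dv];
        for f in 0..d {
            for u in 0..dv {
                row[u] += pq[f] * s[f][u];
            }
        }
        for u in 0..dv {
            row[u] /= denom;
        }
        out.push(row);
    }
    out
}
```

With a single token the attention weights sum to one, so the output equals that token's value row; the actual Based model layers this mechanism with sliding-window attention and learned projections, which this sketch omits.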