mirror of https://github.com/huggingface/candle.git
synced 2025-06-18 19:47:12 +00:00
Module Docs (#2620)
* update bert docs
* update based
* update bigcode
* add pixtral
* add flux as well
@@ -1,9 +1,9 @@
 //! Based from the Stanford Hazy Research group.
 //!
-//! See "Simple linear attention language models balance the recall-throughput tradeoff", Arora et al. 2024
-//! - [Arxiv](https://arxiv.org/abs/2402.18668)
-//! - [Github](https://github.com/HazyResearch/based)
 //!
+//! - Simple linear attention language models balance the recall-throughput tradeoff. [Arxiv](https://arxiv.org/abs/2402.18668)
+//! - [Github Rep](https://github.com/HazyResearch/based)
+//! - [Blogpost](https://hazyresearch.stanford.edu/blog/2024-03-03-based)
 
 use candle::{DType, Device, IndexOp, Module, Result, Tensor, D};
 use candle_nn::{
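The new doc header links to the Based paper on linear attention. As background, the sketch below illustrates the non-causal linear-attention computation the paper's title refers to, written against candle's public `Tensor` API. It is a hypothetical illustration under simplifying assumptions: the `relu(x) + 1` feature map and the `linear_attention` helper are inventions for this sketch (the paper uses a quadratic Taylor approximation of exp), not part of candle-transformers' `based` module.

```rust
// A minimal sketch of (non-causal) linear attention, the technique behind
// the Based paper cited above. Illustrative background only, not the
// crate's `models::based` implementation.
use candle::{Device, Result, Tensor};

/// Linear attention: O = phi(Q) (phi(K)^T V) / (phi(Q) phi(K)^T 1).
/// Computing the (d x d) state phi(K)^T V once makes the cost linear in
/// sequence length t, instead of quadratic as in softmax attention.
fn linear_attention(q: &Tensor, k: &Tensor, v: &Tensor) -> Result<Tensor> {
    // phi(x) = relu(x) + 1: a positive feature map, an assumption made
    // here so the normalizer stays strictly positive.
    let phi = |x: &Tensor| x.relu()?.affine(1.0, 1.0);
    let (q, k) = (phi(q)?, phi(k)?);
    let kv = k.t()?.matmul(v)?; // (b, d, d) state: phi(K)^T V
    let num = q.matmul(&kv)?; // (b, t, d) numerator
    let z = k.sum(1)?.unsqueeze(2)?; // (b, d, 1): sum of feature-mapped keys
    let den = q.matmul(&z)?; // (b, t, 1) per-position normalizer
    num.broadcast_div(&den)
}

fn main() -> Result<()> {
    let dev = Device::Cpu;
    let (b, t, d) = (1, 8, 4);
    let q = Tensor::randn(0f32, 1f32, (b, t, d), &dev)?;
    let k = Tensor::randn(0f32, 1f32, (b, t, d), &dev)?;
    let v = Tensor::randn(0f32, 1f32, (b, t, d), &dev)?;
    let o = linear_attention(&q, &k, &v)?;
    println!("output shape: {:?}", o.shape()); // [1, 8, 4]
    Ok(())
}
```

Because `phi(K)^T V` is a fixed (d x d) state, each output position costs O(d^2) regardless of sequence length; this constant-size recurrent state is the source of the recall-throughput tradeoff named in the paper's title.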