Documentation Pass for Models (#2617)
* links in chinese_clip
* links for clip model
* add mod docs for flux and llava
* module doc for MMDIT and MIMI
* add docs for a few more models
* mod docs for bert naser and beit
* add module docs for convmixer colpali codegeex and chatglm
* add another series of mod docs
* add fastvit-llama2_c
* module docs mamba -> mobileone
* module docs from moondream-phi3
* mod docs for quantized and qwen
* update to yi
* fix long names
* Update llama2_c.rs
* Update llama2_c_weights.rs
* Fix the link for mimi + tweaks
---------
Co-authored-by: Laurent Mazare <laurent.mazare@gmail.com>
@@ -1,3 +1,20 @@
+//! Parler Model implementation for parler_tts text-to-speech synthesis
+//!
+//! Implements a transformer-based decoder architecture for generating audio tokens
+//! from text using discrete tokens. The model converts text into audio segments
+//! using multiple codebooks of quantized audio tokens.
+//!
+//! The model architecture includes:
+//! - Multi-head attention layers for text and audio processing
+//! - Feed-forward networks
+//! - Layer normalization
+//! - Positional embeddings
+//! - Multiple codebook prediction heads
+//!
+//! The implementation follows the original parler_tts architecture while focusing
+//! on audio token generation for text-to-speech synthesis.
+//!
+
 use crate::generation::LogitsProcessor;
 use crate::models::t5;
 use candle::{IndexOp, Result, Tensor};
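To make the new module documentation concrete, below is a minimal, self-contained sketch of the two ideas it describes: per-codebook prediction heads that map a decoder hidden state to logits, and sampling discrete audio tokens from those logits with the LogitsProcessor imported in the hunk. The struct name CodebookHeads, the external crate paths, and all shapes and hyperparameters are illustrative assumptions for this sketch, not the actual parler_tts implementation added by the commit.

// A hedged sketch, not the real parler_tts code: it only illustrates the
// "multiple codebook prediction heads" idea from the doc comment and the
// use of LogitsProcessor to sample discrete audio tokens from logits.
// Assumes the external crate names candle-core, candle-nn and candle-transformers.
use candle_core::{DType, Device, Result, Tensor};
use candle_nn::{linear, Linear, Module, VarBuilder};
use candle_transformers::generation::LogitsProcessor;

// One linear head per codebook, each mapping a hidden state to logits over
// that codebook's token vocabulary (sizes are made up for the example).
struct CodebookHeads {
    heads: Vec<Linear>,
}

impl CodebookHeads {
    fn new(hidden: usize, vocab: usize, num_codebooks: usize, vb: VarBuilder) -> Result<Self> {
        let heads = (0..num_codebooks)
            .map(|i| linear(hidden, vocab, vb.pp(format!("head_{i}"))))
            .collect::<Result<Vec<_>>>()?;
        Ok(Self { heads })
    }

    // Returns one logits tensor per codebook for the given hidden state.
    fn forward(&self, hidden_state: &Tensor) -> Result<Vec<Tensor>> {
        self.heads.iter().map(|h| h.forward(hidden_state)).collect()
    }
}

fn main() -> Result<()> {
    let dev = Device::Cpu;
    let (hidden, vocab, num_codebooks) = (64, 1024, 4); // illustrative sizes
    let vb = VarBuilder::zeros(DType::F32, &dev);
    let heads = CodebookHeads::new(hidden, vocab, num_codebooks, vb)?;

    // A stand-in decoder hidden state for a single time step.
    let hidden_state = Tensor::rand(0f32, 1f32, (1, hidden), &dev)?;
    let logits_per_codebook = heads.forward(&hidden_state)?;

    // Sample one discrete audio token per codebook; the seed, temperature and
    // top-p values below are arbitrary example choices.
    let mut sampler = LogitsProcessor::new(42, Some(0.7), Some(0.9));
    for (i, logits) in logits_per_codebook.iter().enumerate() {
        let token = sampler.sample(&logits.squeeze(0)?)?;
        println!("codebook {i}: token {token}");
    }
    Ok(())
}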