* qwen-moe rebase
* lint
* fixed rebase error
* swapped normal MoE model with CausalMoE Model in example, and swapped the tie word embeddings if statement
* updated readme
* add Qwen3.rs
* fixed compile error
* attempting to get PR 2903 working with qwen weights
* different qwen variants working
* added moe model
* clippy
* added additional eos token
* translated Korean comments to English as well as I can
* removed specialized Qwen3RmsNorm and replaced with generic Candle RmsNorm
* replaced custom repeat_kv implementation with candle's repeat_kv implementation
* replaced linear with linear_b in attention initialization
* replaced custom kv_cache implementation with candle kv_cache
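The custom `repeat_kv` that was dropped duplicated the grouped-query-attention head expansion that candle already provides. A minimal, self-contained sketch of the same logic on plain vectors (candle's version does this on a `Tensor`):

```rust
// Sketch of the repeat_kv logic used by grouped-query attention: each of the
// n_kv_heads key/value heads is repeated n_rep times so the key/value head
// count matches the query head count. Plain Vecs stand in for candle Tensors.
fn repeat_kv(kv_heads: Vec<Vec<f32>>, n_rep: usize) -> Vec<Vec<f32>> {
    kv_heads
        .into_iter()
        .flat_map(|head| std::iter::repeat(head).take(n_rep))
        .collect()
}

fn main() {
    // 2 kv heads expanded to match 4 query heads (n_rep = 2).
    let kv = vec![vec![1.0_f32, 2.0], vec![3.0, 4.0]];
    let expanded = repeat_kv(kv, 2);
    assert_eq!(expanded.len(), 4);
    assert_eq!(expanded[0], expanded[1]);
    println!("{expanded:?}");
}
```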
* style
* replaced explicit broadcast add with normal add in decoder layer
* removed keeping the Rotary embedding layer in the model struct
* used tie_word_embeddings bool from config instead of relying on existence of lm head weights in CausalLM
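The change above amounts to branching on the config flag rather than probing whether the lm head weights exist in the checkpoint. A toy sketch of that control flow (names are illustrative, not the actual candle API; the real model uses candle Tensors and a Linear lm head):

```rust
// Illustrative only: choose the output-projection weights from the config
// flag, not from whether lm_head weights happen to be in the checkpoint.
struct Config {
    tie_word_embeddings: bool,
}

fn lm_head_weights<'a>(cfg: &Config, embed_tokens: &'a [f32], lm_head: &'a [f32]) -> &'a [f32] {
    if cfg.tie_word_embeddings {
        // Reuse the input embedding matrix as the output projection.
        embed_tokens
    } else {
        lm_head
    }
}

fn main() {
    let embed = [1.0_f32, 2.0];
    let head = [9.0_f32, 9.0];
    let tied = Config { tie_word_embeddings: true };
    assert_eq!(lm_head_weights(&tied, &embed, &head), &embed[..]);
    println!("tied config -> embedding matrix reused");
}
```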
* removed duplicate code from qwen3_moe
* removed sliding window from qwen3 attention
* removed MoE code
* removed unused option
* Fixed Typo
Co-authored-by: Laurent Mazare <laurent.mazare@gmail.com>
* fixed tie word embeddings to use the correct embedding weights instead of the opposite
---------
Co-authored-by: Max <naturale@hufs.ac.kr>
Co-authored-by: Laurent Mazare <laurent.mazare@gmail.com>
* removed scale factor from computation and made quantized gemma3 work similarly to non-quantized gemma3
* created default consts, replaced is_sliding with Option holding a window_size
* gemma3: changed RotaryEmbedding base freq based on layer and sliding window
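A sketch of per-layer rotary base selection in the spirit of that change: gemma3 uses a lower rope base for sliding-window (local) layers and a higher one for full-attention (global) layers. The 10_000 / 1_000_000 constants follow gemma3's config; the helper itself is illustrative, not the candle API:

```rust
// Compute rotary inverse frequencies with a base that depends on whether the
// layer uses sliding-window (local) or full (global) attention.
fn rope_inv_freqs(head_dim: usize, is_sliding: bool) -> Vec<f32> {
    let base: f32 = if is_sliding { 10_000.0 } else { 1_000_000.0 };
    (0..head_dim / 2)
        .map(|i| 1.0 / base.powf(2.0 * i as f32 / head_dim as f32))
        .collect()
}

fn main() {
    let local = rope_inv_freqs(8, true);
    let global = rope_inv_freqs(8, false);
    assert_eq!(local.len(), 4);
    // Both start at 1.0; the larger global base decays frequencies faster.
    assert!(global[3] < local[3]);
    println!("local: {local:?}\nglobal: {global:?}");
}
```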
* Changed attention mask per layer, either normal or sliding
* made attention mask creation slightly more efficient by only creating them once per model iteration
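The once-per-iteration mask creation can be sketched as building both mask variants a single time per forward pass and handing each decoder layer the one it needs, instead of rebuilding a mask inside every layer. Boolean Vecs stand in for the candle Tensors of the real model:

```rust
// Build a causal mask, optionally restricted to a sliding window: position q
// may attend to position k if k <= q and (when windowed) k is within the
// last `w` positions.
fn causal_mask(seq_len: usize, window: Option<usize>) -> Vec<Vec<bool>> {
    (0..seq_len)
        .map(|q| {
            (0..seq_len)
                .map(|k| k <= q && window.map_or(true, |w| k > q || q - k < w))
                .collect()
        })
        .collect()
}

fn main() {
    // Created once per model iteration, then shared across all layers.
    let full = causal_mask(4, None);
    let sliding = causal_mask(4, Some(2));
    assert!(full[3][0] && !sliding[3][0]); // old positions fall out of the window
    assert!(sliding[3][2] && sliding[3][3]);
    println!("full: {full:?}\nsliding: {sliding:?}");
}
```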
* changed is_sliding to an Option
* clippy
* changed generation to stop on either <eos> or <end_of_turn> rather than only one of them
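The dual stop-token check amounts to treating both ids as terminators, whichever appears first; a trivial hedged sketch (the token ids below are hypothetical, for illustration only):

```rust
// Both <eos> and <end_of_turn> end generation.
fn is_stop_token(token: u32, eos_id: u32, end_of_turn_id: u32) -> bool {
    token == eos_id || token == end_of_turn_id
}

fn main() {
    // Hypothetical ids, not the real tokenizer's values.
    let (eos, eot) = (1_u32, 106_u32);
    assert!(is_stop_token(1, eos, eot));
    assert!(is_stop_token(106, eos, eot));
    assert!(!is_stop_token(42, eos, eot));
    println!("stop-token check ok");
}
```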
* implemented quantized-gemma, inference not working
* Fixed a few modeling bugs: was outputting the correct tokens for a few iterations, then garbage
* lint
* clippy
* quantized-gemma3 example working
* added readme
* clippy
* Initial commit: model weights working, prediction incorrect
* moved distilbertformaskedlm into distilbert modeling file
* made MaskedLM example like the bert example, still incorrect predictions
* finally not getting NaNs, fixed attention mask
* getting correct output sentences
* get top k predictions
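The top-k step for the masked-token example can be sketched self-contained: pick the k highest logits and return their indices as candidate token ids. The real example works on a candle Tensor; plain slices are used here for illustration:

```rust
// Return the indices of the k largest logits, best first.
fn top_k(logits: &[f32], k: usize) -> Vec<usize> {
    let mut idx: Vec<usize> = (0..logits.len()).collect();
    // Sort indices by descending logit value.
    idx.sort_by(|&a, &b| logits[b].partial_cmp(&logits[a]).unwrap());
    idx.truncate(k);
    idx
}

fn main() {
    let logits = [0.1_f32, 2.0, -1.0, 0.5];
    assert_eq!(top_k(&logits, 2), vec![1, 3]);
    println!("top-2 token ids: {:?}", top_k(&logits, 2));
}
```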
* fixed output formatting slightly
* added default arg for model_id
* lint
* moved masked token example code from distilbertformaskedlm example to distilbert example
* lint
* removed distilbertformaskedlm example
* cleanup
* clippy
* removed embedding normalization from example
* made output and model dependent on args instead of prompt
* lint
* replaced ok_or anyhow error with anyhow context
* changed error message for mask token not found
* Add the SNAC audio tokenizer.
* More snac.
* Again more snac.
* Add some example code for snac.
* Get the weights to load.
* Add to the snac model.
* Fixes.
* Get round-tripping to work.
* Save/load code files.
* Clippy fix.
* Fmt fix.
* Add the CSM model.
* Add some code to load the model.
* Load the text tokenizer.
* Add frame generation.
* Get the sampling to work.
* Rope fix.
* Autoregressive generation.
* Generate some audio file.
* Use the actual prompt.
* Support multiple turns.
* Add a very barebone readme.
* Move some of the shared bits to the model.
* added new language pairs to marian-mt
* lint
* separated the Python code for converting tokenizers into its own file, added a requirements.txt for dependencies, and updated the readme instructions to include the Python version
* Cleanup.
---------
Co-authored-by: Laurent <laurent.mazare@gmail.com>
* Update main.rs
* Update codegeex4_9b.rs
* Get things to compile.
* Add some default for when rope_ratio is missing.
---------
Co-authored-by: Laurent <laurent.mazare@gmail.com>
* add xlm-roberta-base
* Add task enum for fill-mask and reranker in xlm-roberta example; update README and fix attention mask dimensions
- Introduced a new `Task` enum to replace string task identifiers in the xlm-roberta example.
- Updated the logic in `main.rs` to handle tasks using the new enum.
- Enhanced README with example output for fill-mask task.
- Fixed dimension retrieval in `prepare_4d_attention_mask` function for better clarity and safety.
* Clippy fix.
---------
Co-authored-by: laurent <laurent.mazare@gmail.com>
* Fix bug in whisper transformer due to num_threads going to zero in the single-threaded case
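The num_threads fix is the classic clamp-to-one pattern: a thread count derived by dividing the available parallelism can round down to zero on a single-core machine. A self-contained sketch (the division by two is illustrative, not necessarily the whisper code's exact formula):

```rust
use std::thread;

// Derive a worker-thread count from the available parallelism, clamped so it
// can never reach zero on a single-core machine.
fn worker_threads() -> usize {
    let cores = thread::available_parallelism().map_or(1, |n| n.get());
    (cores / 2).max(1)
}

fn main() {
    let n = worker_threads();
    assert!(n >= 1);
    println!("using {n} worker threads");
}
```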
* Apply rustfmt.
---------
Co-authored-by: Laurent <laurent.mazare@gmail.com>
* change: BertEncoder struct to public
* change: make certain fields in Config struct public
* change: all fields in bert config struct to be public
* change: add clone to bert encoder and others
* Clippy fix.
---------
Co-authored-by: Laurent <laurent.mazare@gmail.com>
* Adds support for the stella_en_v5 embedding model (400M variant)
* Unified stella
* WIP: Unified Stella
* Combined stella for both 1.5B and 400M variants
* Cargo fmt for the CI
* removed redundant stella-400m model and example after merge into stella-en-v5
* cargo fmt --all
---------
Co-authored-by: Anubhab Bandyopadhyay <4890833+AnubhabB@users.noreply.github.com>
Co-authored-by: laurent <laurent.mazare@gmail.com>
* module docs
* varbuilder gguf docs
* add a link to gguf files
* small additional mod doc titles
* safetensor docs
* more core docs
* more module docs in candle_core
* 2 more link fixes