huggingface/candle
Mirror of https://github.com/huggingface/candle.git, synced 2025-06-21 12:20:46 +00:00
Path: candle/candle-transformers/src/models
Latest commit: c2007ac88fb0dd6fa6f82f6624693a0095db2edb, "W fixes. (#862)" by Laurent Mazare, 2023-09-15 15:11:11 +01:00
Name                Last commit message                                                            Date
segment_anything/   Return the low res mask in the wasm segment-anything module. (#798)            2023-09-10 13:03:02 +01:00
stable_diffusion/   Add the attention block. (#846)                                                2023-09-14 15:40:09 +01:00
whisper/            Use softmax-last-dim in whisper. (#810)                                        2023-09-11 11:05:05 +01:00
wuerstchen/         W fixes. (#862)                                                                2023-09-15 15:11:11 +01:00
bert.rs             Move some models to candle-transformers so that it's easier to re-use. (#794)  2023-09-10 09:40:27 +01:00
bigcode.rs          Move some models to candle-transformers so that it's easier to re-use. (#794)  2023-09-10 09:40:27 +01:00
dinov2.rs           Move more models to candle-transformers (#796)                                 2023-09-10 10:20:18 +01:00
efficientnet.rs     Move more models to candle-transformers (#796)                                 2023-09-10 10:20:18 +01:00
falcon.rs           Move some models to candle-transformers so that it's easier to re-use. (#794)  2023-09-10 09:40:27 +01:00
llama.rs            Move some models to candle-transformers so that it's easier to re-use. (#794)  2023-09-10 09:40:27 +01:00
mod.rs              Start adding the Wuerstchen diffusion pipeline (#843)                          2023-09-14 10:56:07 +01:00
quantized_llama.rs  Use softmax-last-dim in the quantized example. (#848)                          2023-09-14 17:29:24 +01:00
t5.rs               Add support to flan-t5 (#840)                                                  2023-09-13 19:27:20 +02:00