huggingface/candle
mirror of https://github.com/huggingface/candle.git synced 2025-06-15 18:28:24 +00:00
candle/candle-examples/examples @ ad7c53953b6aadf7d741b2263aa2fd32223b023b
Latest commit: ad7c53953b "Add a verbose-prompt mode, similar to llama.cpp. (#489)" by Laurent Mazare, 2023-08-17 15:26:44 +01:00
bert                  Add a cuda kernel for upsampling. (#441)                    2023-08-14 13:12:17 +01:00
bigcode               Add a cuda kernel for upsampling. (#441)                    2023-08-14 13:12:17 +01:00
custom-ops            Relax the requirements on CustomOp. (#486)                  2023-08-17 11:12:05 +01:00
falcon                Add a cuda kernel for upsampling. (#441)                    2023-08-14 13:12:17 +01:00
llama                 Layer norm tweaks (#482)                                    2023-08-17 10:07:13 +01:00
llama2-c              Layer norm tweaks (#482)                                    2023-08-17 10:07:13 +01:00
llama_multiprocess    Relax the requirements on CustomOp. (#486)                  2023-08-17 11:12:05 +01:00
mnist-training        Add the candle-datasets crate (#322)                        2023-08-05 08:56:50 +01:00
musicgen              Remove the embedding ops in favor of index-select. (#299)   2023-08-02 05:42:11 +01:00
quantized             Add a verbose-prompt mode, similar to llama.cpp. (#489)     2023-08-17 15:26:44 +01:00
stable-diffusion      F16 support for stable diffusion (#488)                     2023-08-17 13:48:56 +01:00
whisper               Add a cuda kernel for upsampling. (#441)                    2023-08-14 13:12:17 +01:00