huggingface/candle
mirror of https://github.com/huggingface/candle.git
Path: candle/candle-examples/examples
Revision: b8263aa15cf2d8d0f425e25bae296ea4e96aeb88

Latest commit: b8263aa15c "Quantized support for f16 and f32" (#457) by Laurent Mazare, 2023-08-15 21:09:37 +01:00
* Add f32 as a quantized type.
* Add f16 as a quantized type too.
Directory           Last commit                                                 Date
bert                Add a cuda kernel for upsampling. (#441)                    2023-08-14 13:12:17 +01:00
bigcode             Add a cuda kernel for upsampling. (#441)                    2023-08-14 13:12:17 +01:00
custom-ops          Use bail rather than wrapping a string where possible. (#249)  2023-07-26 15:42:46 +01:00
falcon              Add a cuda kernel for upsampling. (#441)                    2023-08-14 13:12:17 +01:00
ggml                Quantized support for f16 and f32 (#457)                    2023-08-15 21:09:37 +01:00
llama               Tweak the llama example. (#450)                             2023-08-15 12:18:20 +01:00
llama2-c            Support the Accelerate BLAS on macOS. (#325)                2023-08-05 17:25:24 +01:00
llama_multiprocess  Remove the checkpoint conversion script. (#405)             2023-08-11 05:59:48 +01:00
mnist-training      Add the candle-datasets crate (#322)                        2023-08-05 08:56:50 +01:00
musicgen            Remove the embedding ops in favor of index-select. (#299)   2023-08-02 05:42:11 +01:00
stable-diffusion    Track the conv2d operations in stable-diffusion. (#431)     2023-08-13 15:58:26 +01:00
whisper             Add a cuda kernel for upsampling. (#441)                    2023-08-14 13:12:17 +01:00