huggingface/candle
Mirror of https://github.com/huggingface/candle.git (synced 2025-06-15 18:28:24 +00:00)
candle/candle-examples/examples at commit 102fa4c2e3e833199517a9400d0c2310ce18d62e
Latest commit: 102fa4c2e3 Fixing llamav1 (Nicolas Patry, 2023-08-16 14:53:29 +02:00)
Directory           Last commit                                                     Last updated
bert                Add a cuda kernel for upsampling. (#441)                        2023-08-14 13:12:17 +01:00
bigcode             Add a cuda kernel for upsampling. (#441)                        2023-08-14 13:12:17 +01:00
custom-ops          Use bail rather than wrapping a string where possible. (#249)   2023-07-26 15:42:46 +01:00
falcon              Add a cuda kernel for upsampling. (#441)                        2023-08-14 13:12:17 +01:00
ggml                Get the ggml based llama to generate some text. (#464)          2023-08-16 12:41:07 +01:00
llama               Fixing llamav1                                                  2023-08-16 14:53:29 +02:00
llama2-c            Support the Accelerate BLAS on macOS. (#325)                    2023-08-05 17:25:24 +01:00
llama_multiprocess  Remove the checkpoint conversion script. (#405)                 2023-08-11 05:59:48 +01:00
mnist-training      Add the candle-datasets crate (#322)                            2023-08-05 08:56:50 +01:00
musicgen            Remove the embedding ops in favor of index-select. (#299)       2023-08-02 05:42:11 +01:00
stable-diffusion    Track the conv2d operations in stable-diffusion. (#431)         2023-08-13 15:58:26 +01:00
whisper             Add a cuda kernel for upsampling. (#441)                        2023-08-14 13:12:17 +01:00
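Each of these subdirectories is a self-contained example for the candle crate. As a hedged pointer (the run command is not shown on this page, but candle-examples follows the standard Cargo example layout), an example such as whisper is typically run from the repository root with `cargo run --example whisper --release`.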