huggingface/candle
Mirror of https://github.com/huggingface/candle.git (synced 2025-06-16 18:48:51 +00:00)
312 commits, 74 branches, 16 tags
Latest commit: ed4d0959d37f365deb804706143fc06d08937e20
laurent, 2023-06-30 15:01:39 +01:00: Add a const to easily tweak the dtype used for llama internal computations.
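
The commit message above describes a single constant that controls the dtype used for llama's internal computations. The snippet below is a hypothetical sketch of that pattern, not the actual code in candle-core; the constant name and the `candle_core` crate path are assumptions, and the crate layout may differ at this commit.

```rust
// Hypothetical sketch of the pattern described in the commit message:
// one const controls the dtype used for internal computations, so the
// precision can be tweaked in a single place. Names are assumptions and
// this is not the actual candle-core code.
use candle_core::{DType, Result, Tensor};

// Flip to DType::BF16 or DType::F16 to experiment with lower precision.
pub const INTERNAL_DTYPE: DType = DType::F32;

// Cast inputs once at the boundary so internal math runs in INTERNAL_DTYPE.
fn to_internal(t: &Tensor) -> Result<Tensor> {
    t.to_dtype(INTERNAL_DTYPE)
}
```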
Path | Latest commit | Date
.cargo | Add the topological sort for backprop. | 2023-06-20 19:15:39 +01:00
.github/workflows | Windows 2019 is slower to load (I guess less availability). | 2023-06-28 22:21:38 +00:00
candle-core | Add a const to easily tweak the dtype used for llama internal computations. | 2023-06-30 15:01:39 +01:00
candle-hub | Merge pull request #19 from LaurentMazare/llama_safetensors | 2023-06-29 12:49:26 +02:00
candle-kernels | Bugfix: remove the u8/bf16 conversion kernel as it is ambiguous. | 2023-06-30 10:43:32 +01:00
.gitignore | Improve how we check that the dims are in bounds. | 2023-06-30 09:11:00 +01:00
.pre-commit-config.yaml | Fixing tokenizers dep. | 2023-06-22 12:25:58 +02:00
Cargo.toml | Revert the new profile. | 2023-06-29 19:08:50 +01:00
Makefile | Fix two cuda bugs (matmul and where_cond). | 2023-06-27 11:31:04 +01:00
README.md | And add back some readme :) | 2023-06-27 15:50:43 +01:00

README.md

candle

Minimalist ML framework for Rust
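
Since the README describes candle only as a minimalist ML framework for Rust, a short usage sketch may help. It is based on the tensor API documented upstream (`candle_core::{Device, Tensor}`, `Tensor::randn`, `matmul`); the exact crate name and API may differ at this early commit.

```rust
// Minimal sketch: build two random matrices on the CPU and multiply them.
// Based on candle's upstream documentation; the crate name and API may
// differ at this commit.
use candle_core::{Device, Tensor};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let device = Device::Cpu;
    let a = Tensor::randn(0f32, 1.0, (2, 3), &device)?;
    let b = Tensor::randn(0f32, 1.0, (3, 4), &device)?;
    let c = a.matmul(&b)?;
    println!("{c}");
    Ok(())
}
```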

Repository size: 41 MiB
Languages: Rust 82%, Metal 5.9%, Cuda 4.2%, C++ 3%, Python 2.2%, Other 2.7%