huggingface/candle
mirror of https://github.com/huggingface/candle.git synced 2025-06-17 11:08:52 +00:00
1,941 Commits 74 Branches 16 Tags
Commit: 30b145150f47cc21b51e04adf03ce41995ff729f
Commit Graph

666 Commits

Author | SHA1 | Message | Date
laurent | 19183b8e4f | Factor out the gemm bits. | 2023-06-28 08:51:13 +01:00
laurent | 0417d9cec8 | Add more cuda testing again. | 2023-06-28 08:33:43 +01:00
laurent | 395c84e80a | Also run the backprop tests on cuda. | 2023-06-28 08:15:03 +01:00
laurent | b0f5f2d22d | Add some display tests + bugfixes. | 2023-06-27 21:37:28 +01:00
laurent | 8c81a70170 | PyTorch like display implementation. | 2023-06-27 21:16:35 +01:00
laurent | 934655a60d | Add squeeze/unsqueeze/stack. | 2023-06-27 19:32:00 +01:00
laurent | 1d504cc6b3 | Rework the debug trait. | 2023-06-27 19:10:30 +01:00
laurent | 684f66326d | Add the get method. | 2023-06-27 17:39:58 +01:00
laurent | c44e5346f4 | Add some helper functions. | 2023-06-27 17:37:09 +01:00
laurent | dbe3e4e7c0 | Add some test utils module. | 2023-06-27 16:20:28 +01:00
laurent | e221d38819 | Factor the slicing code in cuda. | 2023-06-27 15:45:59 +01:00
laurent | 07a682c2ff | Run the tensor tests for the cuda backend too. | 2023-06-27 15:37:01 +01:00
laurent | ca6aa8ff12 | Use num-cpus to enable parallelism. | 2023-06-27 14:42:26 +01:00
laurent | 318503cd38 | Cache the causal mask in llama. | 2023-06-27 12:21:08 +01:00
laurent | 380d61e990 | Fix two cuda bugs (matmul and where_cond). | 2023-06-27 11:31:04 +01:00
Nicolas Patry | d7f729fb8f | Refactor the hierarchy. | 2023-06-27 11:57:27 +02:00