huggingface/candle
Mirror of https://github.com/huggingface/candle.git (synced 2025-06-18 03:28:50 +00:00)
Commit Graph at 87c5aab005f0c4b7777e16e89ed97e55240c45da

15 Commits

Author  SHA1  Message  Date
laurent  87c5aab005  More llama fixes.  2023-06-25 18:08:41 +01:00
laurent  60a5598c8b  Fix some shape errors.  2023-06-25 17:56:59 +01:00
laurent  817e4b5005  Rework the embeddings so that it works on non-contiguous weights + factor out some code.  2023-06-25 17:37:47 +01:00
laurent  334524e2c4  Take as input slices of tensors as well as slices of &Tensors.  2023-06-25 17:07:09 +01:00
laurent  118cc30908  Add some currently broken tests.  2023-06-25 14:55:25 +01:00
laurent  90c140ff4b  Start sketching the llama example.  2023-06-25 13:51:20 +01:00
laurent  6463d661d8  Tweaks.  2023-06-22 20:25:14 +01:00
laurent  aebffcfc13  Add a matmul cuda example.  2023-06-22 19:44:26 +01:00
laurent  5276755fb3  Add cuda support for unary ops.  2023-06-22 15:12:59 +01:00
laurent  e1eb86db61  Add some first binary op (add).  2023-06-22 13:52:02 +01:00
laurent  87a37b3bf3  Retrieve data from the gpu.  2023-06-22 11:01:49 +01:00
laurent  97d9142dee  Add a first kernel.  2023-06-21 20:48:22 +01:00
laurent  fcb4e6b84f  Use a reference for the device.  2023-06-21 19:55:57 +01:00
laurent  c654ecdb16  Add a specific example for cuda.  2023-06-21 18:56:04 +01:00
laurent  b3eb57cd0a  Avoid some duplication using a macro + add some basic example to make debugging easier.  2023-06-21 10:08:41 +01:00