Commit Graph

2079 Commits

| SHA1 | Message | Date |
|------|---------|------|
| 1ce3843cab | Add the relu op. | 2023-06-28 09:38:54 +01:00 |
| b805c4114b | Merge pull request #22 from LaurentMazare/more-cuda-testing2: Again more cuda testing. | 2023-06-28 09:01:25 +01:00 |
| 19183b8e4f | Factor out the gemm bits. | 2023-06-28 08:51:13 +01:00 |
| 0417d9cec8 | Add more cuda testing again. | 2023-06-28 08:33:43 +01:00 |
| 64c6bc4f5e | Merge pull request #21 from LaurentMazare/more-cuda-tests: Also run the backprop tests on cuda. | 2023-06-28 08:19:01 +01:00 |
| 395c84e80a | Also run the backprop tests on cuda. | 2023-06-28 08:15:03 +01:00 |
| a457020d50 | Merge pull request #20 from LaurentMazare/tensor-display: Add some pretty print display to Tensors | 2023-06-27 21:53:09 +01:00 |
| b0f5f2d22d | Add some display tests + bugfixes. | 2023-06-27 21:37:28 +01:00 |
| 8c81a70170 | PyTorch like display implementation. | 2023-06-27 21:16:35 +01:00 |
| 934655a60d | Add squeeze/unsqueeze/stack. | 2023-06-27 19:32:00 +01:00 |
| 1d504cc6b3 | Rework the debug trait. | 2023-06-27 19:10:30 +01:00 |
| d28bf64ed6 | Merge pull request #18 from LaurentMazare/tensor-helper: Add some helper functions | 2023-06-27 17:43:04 +01:00 |
| 684f66326d | Add the get method. | 2023-06-27 17:39:58 +01:00 |
| c44e5346f4 | Add some helper functions. | 2023-06-27 17:37:09 +01:00 |
| efc39b71c5 | Merge pull request #17 from LaurentMazare/cuda-test-utils: Add some test utils module. | 2023-06-27 16:24:04 +01:00 |
| dbe3e4e7c0 | Add some test utils module. | 2023-06-27 16:20:28 +01:00 |
| aa35c418a5 | Merge pull request #16 from LaurentMazare/cuda-tests: Run the tensor tests for the cuda backend too. | 2023-06-27 15:51:28 +01:00 |
| 47937650aa | And add back some readme :) | 2023-06-27 15:50:43 +01:00 |
| e221d38819 | Factor the slicing code in cuda. | 2023-06-27 15:45:59 +01:00 |
| 07a682c2ff | Run the tensor tests for the cuda backend too. | 2023-06-27 15:37:01 +01:00 |
| b3622c972f | Merge pull request #15 from LaurentMazare/num-cpus: Use num-cpus to enable parallelism in matmul's cpu version. | 2023-06-27 14:45:08 +01:00 |
| ca6aa8ff12 | Use num-cpus to enable parallelism. | 2023-06-27 14:42:26 +01:00 |
| 64ae526af4 | Merge pull request #11 from LaurentMazare/add_hub: Adding candle-hub | 2023-06-27 15:37:52 +02:00 |
| 70a90a1465 | Clippy without features. | 2023-06-27 14:04:20 +02:00 |
| 75e0905832 | Adding fully offline version. | 2023-06-27 13:58:23 +02:00 |
| 1a82bc50c9 | [Tmp] Adding candle-hub | 2023-06-27 13:58:23 +02:00 |
| 8371890996 | Merge pull request #12 from LaurentMazare/fix_ci: Does this prevent `candle-kernels` test suite from being run ? | 2023-06-27 13:58:01 +02:00 |
| c2edaf83eb | Ignoring candle-kernels during CI. | 2023-06-27 13:53:23 +02:00 |
| 140a8edf01 | Merge pull request #14 from LaurentMazare/llama-opt: Cache the causal mask in llama. | 2023-06-27 12:21:31 +01:00 |
| 318503cd38 | Cache the causal mask in llama. | 2023-06-27 12:21:08 +01:00 |
| 527a71fdad | Merge pull request #13 from LaurentMazare/cuda-bugfixes: Fix two cuda bugs (matmul and where_cond). | 2023-06-27 11:32:26 +01:00 |
| 380d61e990 | Fix two cuda bugs (matmul and where_cond). | 2023-06-27 11:31:04 +01:00 |
| 0fed864bbf | Does this prevent candle-kernels test suite from being run ? | 2023-06-27 12:14:53 +02:00 |
| d7f729fb8f | Refactor the hierarchy. | 2023-06-27 11:57:27 +02:00 |
| 6c4a960b15 | Embedding bugfix. | 2023-06-27 09:56:19 +01:00 |
| 18707891b7 | Fix an error message. | 2023-06-27 09:45:38 +01:00 |
| bb262ecc99 | More casting kernels. | 2023-06-27 09:36:35 +01:00 |
| ee3d290f8b | Cuda support for dtype conversions. | 2023-06-27 09:15:46 +01:00 |
| 51640ba7e6 | Merge pull request #10 from LaurentMazare/f16: Add support for f16 and bf16 | 2023-06-27 05:59:59 +01:00 |
| e152c1273d | Add more context for missing cuda kernels. | 2023-06-27 05:56:19 +01:00 |
| 4d19889acc | where_cond for f16. | 2023-06-26 22:14:32 +01:00 |
| a6a7477bea | Matmul cublas support for f16. | 2023-06-26 22:08:22 +01:00 |
| 36a4749e95 | Add the f16 affine kernel. | 2023-06-26 22:05:31 +01:00 |
| 53fdbda683 | Add the f16 sum kernel (fix). | 2023-06-26 22:02:22 +01:00 |
| 93e24f29f4 | Add the f16 sum kernel. | 2023-06-26 22:01:29 +01:00 |
| d204f1c7c0 | Cuda support for embedding f16. | 2023-06-26 21:58:15 +01:00 |
| becb822ce0 | Support more types in the cpu matmul. | 2023-06-26 21:37:41 +01:00 |
| 7cfa4c307c | Handle f16/bf16 in npy. | 2023-06-26 21:10:03 +01:00 |
| de1f612645 | Remove the default features from the CI as cuda is not available. | 2023-06-26 20:56:13 +01:00 |
| 22da2c7e02 | More f16 and bf16 support. | 2023-06-26 20:52:01 +01:00 |