Commit Graph

2350 Commits

SHA1 Message Date
318503cd38 Cache the causal mask in llama. 2023-06-27 12:21:08 +01:00
527a71fdad Merge pull request #13 from LaurentMazare/cuda-bugfixes: Fix two cuda bugs (matmul and where_cond). 2023-06-27 11:32:26 +01:00
380d61e990 Fix two cuda bugs (matmul and where_cond). 2023-06-27 11:31:04 +01:00
0fed864bbf Does this prevent the candle-kernels test suite from being run? 2023-06-27 12:14:53 +02:00
d7f729fb8f Refactor the hierarchy. 2023-06-27 11:57:27 +02:00
6c4a960b15 Embedding bugfix. 2023-06-27 09:56:19 +01:00
18707891b7 Fix an error message. 2023-06-27 09:45:38 +01:00
bb262ecc99 More casting kernels. 2023-06-27 09:36:35 +01:00
ee3d290f8b Cuda support for dtype conversions. 2023-06-27 09:15:46 +01:00
51640ba7e6 Merge pull request #10 from LaurentMazare/f16: Add support for f16 and bf16. 2023-06-27 05:59:59 +01:00
e152c1273d Add more context for missing cuda kernels. 2023-06-27 05:56:19 +01:00
4d19889acc where_cond for f16. 2023-06-26 22:14:32 +01:00
a6a7477bea Matmul cublas support for f16. 2023-06-26 22:08:22 +01:00
36a4749e95 Add the f16 affine kernel. 2023-06-26 22:05:31 +01:00
53fdbda683 Add the f16 sum kernel (fix). 2023-06-26 22:02:22 +01:00
93e24f29f4 Add the f16 sum kernel. 2023-06-26 22:01:29 +01:00
d204f1c7c0 Cuda support for embedding f16. 2023-06-26 21:58:15 +01:00
becb822ce0 Support more types in the cpu matmul. 2023-06-26 21:37:41 +01:00
7cfa4c307c Handle f16/bf16 in npy. 2023-06-26 21:10:03 +01:00
de1f612645 Remove the default features from the CI as cuda is not available. 2023-06-26 20:56:13 +01:00
22da2c7e02 More f16 and bf16 support. 2023-06-26 20:52:01 +01:00
a31411fd91 Start adding f16/bf16 support. 2023-06-26 19:37:47 +01:00
36a1a48ba0 Avoid a cast when no conversion is required. 2023-06-26 18:16:19 +01:00
46789c403c Cublas fixes. 2023-06-26 17:59:27 +01:00
1ad5baecc5 Handle transposed matrices in cublas. 2023-06-26 17:49:29 +01:00
3761f02aa8 Use atomicAdd as a quick workaround for a cuda synchronisation issue. 2023-06-26 16:31:24 +01:00
f2ac5547fc Avoid the race condition on cuda sums. 2023-06-26 16:19:06 +01:00
687c5beb6a Decompose the softmax op so that it can be run on cuda. 2023-06-26 15:36:21 +01:00
33c0234a33 (Properly) add the where kernels. 2023-06-26 13:25:56 +01:00
cd2a171c06 Add the where kernels. 2023-06-26 13:25:02 +01:00
b1d6e264da Sketch the where_cond cuda kernel wrapper. 2023-06-26 13:11:14 +01:00
95a2c8e7da Add helper functions for fortran contiguous data. 2023-06-26 13:02:06 +01:00
f6104c4b64 Add the reduce-sum kernel. 2023-06-26 12:35:26 +01:00
16f0f5b9d2 Add a cuda kernel for embeddings. 2023-06-26 11:47:57 +01:00
5952c3fa91 Clean up the broadcast setup. 2023-06-26 10:49:34 +01:00
217bdcdf4d Fix the error message. 2023-06-26 10:14:34 +01:00
59a59f41a6 Add the cuda mode to llama. 2023-06-26 10:06:44 +01:00
512d12e38d Avoid copying the data around when loading weights. 2023-06-26 08:09:03 +01:00
4ad5d17d8c Slightly more efficient weight loading. 2023-06-26 07:56:25 +01:00
11696e6377 Faster model weight loading. 2023-06-26 07:40:11 +01:00
d867155ef2 Load the weights for llama. 2023-06-26 07:23:59 +01:00
7a3101f15f Llama bugfix. 2023-06-26 07:07:56 +01:00
97424289d1 Fix the llama causal mask inversion. 2023-06-25 21:16:54 +01:00
117f014b55 Add where_cond and properly apply the causal mask. 2023-06-25 21:08:03 +01:00
25bcad290e Fix the causal mask computation. 2023-06-25 20:19:30 +01:00
8e404eb125 Get a first inference to work on llama. 2023-06-25 18:26:15 +01:00
87c5aab005 More llama fixes. 2023-06-25 18:08:41 +01:00
60a5598c8b Fix some shape errors. 2023-06-25 17:56:59 +01:00
817e4b5005 Rework the embeddings so that it works on non-contiguous weights + factor out some code. 2023-06-25 17:37:47 +01:00
334524e2c4 Take as input slices of tensors as well as slices of &Tensors. 2023-06-25 17:07:09 +01:00