Commit Graph

460 Commits

SHA1 Message Date
49f4a77ffd Put them back. 2023-07-10 15:11:48 +02:00
38ac50eeda Add some docs + extend stack to work with extra final dimensions. 2023-07-10 14:51:10 +02:00
868743b8b9 Expand the README a bit 2023-07-10 12:51:37 +02:00
9ce0f1c010 Sketch the candle-nn crate. (#115)
* Sketch the candle-nn crate.

* Tweak the cuda dependencies.

* More cuda tweaks.
2023-07-10 08:50:09 +01:00
270997a055 Add the elu op. (#113) 2023-07-09 21:56:31 +01:00
eb64ad0d4d Cuda kernel for the conv1d op (#111)
* Boilerplate code for conv1d.

* Boilerplate code for conv1d.

* More boilerplate for conv1d.

* Conv1d work.

* Get the conv1d cuda kernel to work.

* Conv1d support when no batch dim.
2023-07-08 18:13:25 +01:00
5c3864f9f7 Add more sum tests. (#110)
* Add some tests for the sum.

* More sum testing.
2023-07-08 13:15:36 +01:00
e676f85f00 Sketch a fast cuda kernel for reduce-sum. (#109)
* Sketch a fast cuda kernel for reduce-sum.

* Sketch the rust support code for the fast sum kernel.

* More work on the fast kernel.

* Add some testing ground.

* A couple fixes for the fast sum kernel.
2023-07-08 12:43:56 +01:00
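The commits above sketch a fast CUDA kernel for reduce-sum. The usual trick behind such kernels is to combine elements in a logarithmic tree rather than one long serial chain, so partial sums can be computed independently (on the GPU, by separate threads). A minimal sequential Rust sketch of that access pattern, mirroring the idea rather than the actual kernel:

```rust
// Tree reduction: each pass folds the upper half of the buffer into the
// lower half, halving the active length until one element remains.
// On a GPU, each iteration of the inner loop would be a separate thread.
fn tree_reduce_sum(data: &[f32]) -> f32 {
    let mut buf = data.to_vec();
    let mut n = buf.len();
    while n > 1 {
        let half = (n + 1) / 2;
        for i in 0..n / 2 {
            // "Thread" i folds element i + half into element i.
            buf[i] += buf[i + half];
        }
        n = half;
    }
    buf.first().copied().unwrap_or(0.0)
}

fn main() {
    assert_eq!(tree_reduce_sum(&[1.0, 2.0, 3.0, 4.0, 5.0]), 15.0);
    assert_eq!(tree_reduce_sum(&[]), 0.0);
    println!("ok");
}
```

Beyond parallelism, the tree shape also improves floating-point accuracy over a naive left-to-right sum, since partial sums stay closer in magnitude.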
33479c5f1b Add some very simple sum benchmark. (#108)
* Add some very simple sum benchmark.

* Rename the file.
2023-07-08 08:39:27 +01:00
02b5c38049 Use cublas bf16. (#101) 2023-07-07 08:00:12 +01:00
666c6f07a1 Merge pull request #88 from LaurentMazare/fix_unsafe_loads
Fixing unsafe slow load (memcpy).
2023-07-07 00:09:56 +02:00
ce27073feb Update candle-core/src/safetensors.rs 2023-07-06 23:59:54 +02:00
4afa461b34 Sketch the Falcon model. (#93)
* Sketch the Falcon model.

* Add more substance to the falcon example.

* Falcon (wip).

* Falcon (wip again).

* Falcon inference.

* Get the weights from the api and properly generate the model.

* Use the proper model.

* Fix the file/revision names.

* Fix bias handling.

* Recompute the rot embeddings.

* Fix the input shape.

* Add the release-with-debug profile.

* Silly bugfix.

* More bugfixes.

* Stricter shape checking in matmul.
2023-07-06 19:01:21 +01:00
f1e29cd405 Allow using mkl in tests. (#90) 2023-07-06 13:25:05 +01:00
054717e236 Fixing unsafe slow load (memcpy).
- Without the annotation, I think the Rust compiler assumes it's all u8.

It did segfault trying to load `Roberta`.
2023-07-06 13:14:33 +02:00
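The commit above describes a segfault caused by reinterpreting a raw byte buffer with the wrong element type during a safetensors load. A minimal sketch of the safe alternative, converting little-endian bytes element by element (the helper name is hypothetical, not candle's actual API):

```rust
// Convert a raw little-endian byte buffer into f32 values, element by
// element. This sidesteps the alignment and length pitfalls that an
// unchecked memcpy-style cast (treating everything as u8) can hit.
fn bytes_to_f32(raw: &[u8]) -> Option<Vec<f32>> {
    // Reject buffers whose length is not a whole number of elements.
    if raw.len() % std::mem::size_of::<f32>() != 0 {
        return None;
    }
    Some(
        raw.chunks_exact(4)
            .map(|c| f32::from_le_bytes([c[0], c[1], c[2], c[3]]))
            .collect(),
    )
}

fn main() {
    // 1.0f32 (0x3F800000) and 2.0f32 (0x40000000), little-endian.
    let raw = [0u8, 0, 128, 63, 0, 0, 0, 64];
    assert_eq!(bytes_to_f32(&raw), Some(vec![1.0, 2.0]));
    assert!(bytes_to_f32(&raw[..3]).is_none()); // truncated buffer rejected
    println!("ok");
}
```

The per-element copy is slower than a single memcpy, but it is valid for any source alignment, which is exactly what a memory-mapped file does not guarantee.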
dd60bd84bb MKL adjustments. (#87) 2023-07-06 11:37:27 +01:00
c297a50960 Add mkl support for matrix multiply. (#86)
* Fix some rebase issues.

* Use mkl instead.

* Use mkl in bert.

* Add the optional mkl feature.

* Conditional compilation based on the mkl feature.

* Add more mkl support.
2023-07-06 11:05:05 +01:00
e2bfbcb79c Support dim indexes in cat. 2023-07-05 20:39:08 +01:00
2c3d871b2e Add a simpler way to specify the dim index for some ops. 2023-07-05 20:22:43 +01:00
6d1e79d378 Bugfix for to_scalar (use the proper start offset). 2023-07-05 06:42:29 +01:00
459e2e1ae3 Properly handle the stride in conv1d. 2023-07-04 15:05:04 +01:00
b3d4d0fd0f Very inefficient conv1d implementation. 2023-07-04 13:50:41 +01:00
950b4af49e Proper conv1d dispatch. 2023-07-04 11:29:28 +01:00
a424d95473 Add more of the conv1d op. 2023-07-04 11:15:45 +01:00
3aac1047fe Sketch the conv1d op. 2023-07-04 10:52:34 +01:00
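The conv1d commits above build up from a sketch to a "very inefficient" direct implementation with stride handling, before the CUDA kernel lands. A minimal single-channel sketch in that spirit (the function name and the single-channel simplification are assumptions, not candle's actual API):

```rust
// Naive, direct 1D convolution (really cross-correlation, as in most ML
// frameworks) over a single channel, with stride and no padding.
fn conv1d_naive(input: &[f32], kernel: &[f32], stride: usize) -> Vec<f32> {
    let k = kernel.len();
    if input.len() < k || stride == 0 {
        return Vec::new();
    }
    // Number of valid output positions: floor((len - k) / stride) + 1.
    let out_len = (input.len() - k) / stride + 1;
    (0..out_len)
        .map(|o| {
            let start = o * stride;
            // Dot product of the kernel with the input window.
            input[start..start + k]
                .iter()
                .zip(kernel)
                .map(|(x, w)| x * w)
                .sum()
        })
        .collect()
}

fn main() {
    let input = [1.0, 2.0, 3.0, 4.0, 5.0];
    let kernel = [1.0, 1.0]; // moving sum of adjacent pairs
    assert_eq!(conv1d_naive(&input, &kernel, 1), vec![3.0, 5.0, 7.0, 9.0]);
    assert_eq!(conv1d_naive(&input, &kernel, 2), vec![3.0, 7.0]);
    println!("ok");
}
```

A real conv1d additionally loops over batch, input, and output channels; the stride fix in commit 459e2e1ae3 corresponds to the `o * stride` window start above.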
a57b314780 Add a batch dimension on the bert example. 2023-07-04 06:10:52 +01:00
86d691c74c Better handling of the batch dimension in matmul. 2023-07-03 22:51:40 +01:00
ad52b0377c Add the varbuilder + check shapes. 2023-07-03 15:32:20 +01:00
fd682a94f8 Merge pull request #62 from LaurentMazare/safetensors_integration
Adding saving capabilities.
2023-07-03 15:40:00 +02:00
0b3cc215f1 Address comments. 2023-07-03 13:52:27 +02:00
5bc66c68fa Adding saving capabilities. 2023-07-03 13:39:24 +02:00
fdb1acd2ff Move llama in a cargo-examples directory. 2023-07-03 11:30:58 +01:00
81cec86e75 Adding a bit more docs around safety. 2023-07-03 11:55:54 +02:00
639270b796 Use the patched gemm for the time being. 2023-07-03 10:29:15 +01:00
899c76de75 Handle more types in safetensors. 2023-07-03 10:09:46 +01:00
783b7054ee Move more safetensors bits to the shared module. 2023-07-03 09:34:08 +01:00
fe2c07e368 Add the ST error. 2023-07-03 08:44:00 +01:00
cf2789fb81 Move some safetensors bits in the candle-core crate. 2023-07-03 08:37:46 +01:00
78871ffe38 Add dtype support. 2023-07-02 20:12:26 +01:00
7c65e2d187 Add a flag for custom prompt. 2023-07-01 06:36:22 +01:00
bbe0c5fbaa Do not use rayon for a single thread (bis). 2023-06-30 18:47:22 +01:00
6b67d25d9f Do not use rayon for a single thread. 2023-06-30 18:46:32 +01:00
679b6987b6 Early conversion for the llama weights. 2023-06-30 16:42:53 +01:00
ed4d0959d3 Add a const to easily tweak the dtype used for llama internal computations. 2023-06-30 15:01:39 +01:00
fbc329ed85 Add the verbose cpu cast operations. 2023-06-30 10:33:29 +01:00
8ad47907f3 Add the kernels. 2023-06-30 10:26:56 +01:00
19cbbc5212 Improve how we check that the dims are in bounds. 2023-06-30 09:11:00 +01:00
f6152e74b6 Tweak the kv-cache flag. 2023-06-29 22:16:40 +01:00
ae3f202f3b Add a flag. 2023-06-29 22:12:15 +01:00
23389b1bd7 Enable the KV cache after fixing the caching length and the rope bits. 2023-06-29 22:00:57 +01:00