Commit Graph

792 Commits

SHA1 Message Date
2c3d871b2e Add a simpler way to specify the dim index for some ops. 2023-07-05 20:22:43 +01:00
6d1e79d378 Bugfix for to_scalar (use the proper start offset). 2023-07-05 06:42:29 +01:00
459e2e1ae3 Properly handle the stride in conv1d. 2023-07-04 15:05:04 +01:00
b3d4d0fd0f Very inefficient conv1d implementation. 2023-07-04 13:50:41 +01:00
950b4af49e Proper conv1d dispatch. 2023-07-04 11:29:28 +01:00
a424d95473 Add more of the conv1d op. 2023-07-04 11:15:45 +01:00
3aac1047fe Sketch the conv1d op. 2023-07-04 10:52:34 +01:00
a57b314780 Add a batch dimension on the bert example. 2023-07-04 06:10:52 +01:00
86d691c74c Better handling of the batch dimension in matmul. 2023-07-03 22:51:40 +01:00
ad52b0377c Add the varbuilder + check shapes. 2023-07-03 15:32:20 +01:00
fd682a94f8 Merge pull request #62 from LaurentMazare/safetensors_integration: Adding saving capabilities. 2023-07-03 15:40:00 +02:00
0b3cc215f1 Address comments. 2023-07-03 13:52:27 +02:00
5bc66c68fa Adding saving capabilities. 2023-07-03 13:39:24 +02:00
fdb1acd2ff Move llama in a cargo-examples directory. 2023-07-03 11:30:58 +01:00
81cec86e75 Adding a bit more docs around safety. 2023-07-03 11:55:54 +02:00
639270b796 Use the patched gemm for the time being. 2023-07-03 10:29:15 +01:00
899c76de75 Handle more types in safetensors. 2023-07-03 10:09:46 +01:00
783b7054ee Move more safetensors bits to the shared module. 2023-07-03 09:34:08 +01:00
fe2c07e368 Add the ST error. 2023-07-03 08:44:00 +01:00
cf2789fb81 Move some safetensors bits in the candle-core crate. 2023-07-03 08:37:46 +01:00
78871ffe38 Add dtype support. 2023-07-02 20:12:26 +01:00
7c65e2d187 Add a flag for custom prompt. 2023-07-01 06:36:22 +01:00
bbe0c5fbaa Do not use rayon for a single thread (bis). 2023-06-30 18:47:22 +01:00
6b67d25d9f Do not use rayon for a single thread. 2023-06-30 18:46:32 +01:00
679b6987b6 Early conversion for the llama weights. 2023-06-30 16:42:53 +01:00
ed4d0959d3 Add a const to easily tweak the dtype used for llama internal computations. 2023-06-30 15:01:39 +01:00
fbc329ed85 Add the verbose cpu cast operations. 2023-06-30 10:33:29 +01:00
8ad47907f3 Add the kernels. 2023-06-30 10:26:56 +01:00
19cbbc5212 Improve how we check that the dims are in bounds. 2023-06-30 09:11:00 +01:00
f6152e74b6 Tweak the kv-cache flag. 2023-06-29 22:16:40 +01:00
ae3f202f3b Add a flag. 2023-06-29 22:12:15 +01:00
23389b1bd7 Enable the KV cache after fixing the caching length and the rope bits. 2023-06-29 22:00:57 +01:00
b50bd880ce Only narrow when needed + deactivate the kv cache. 2023-06-29 19:07:52 +01:00
3232df9458 Add some KV cache to llama. 2023-06-29 15:29:40 +01:00
889f7e0971 Merge pull request #39 from LaurentMazare/anyhow-backtrace: Add backtraces. 2023-06-29 13:17:53 +01:00
e27ee98d3f Add backtraces. 2023-06-29 13:17:20 +01:00
78ec40b077 Typo. 2023-06-29 12:09:53 +00:00
de48e6fd59 Putting back main. 2023-06-29 12:08:35 +00:00
0958c588f6 Putting back seed. 2023-06-29 12:07:21 +00:00
c5e8f788be Revert some changes. 2023-06-29 12:05:53 +00:00
e63ed6aaa3 Remove unwrap. 2023-06-29 12:04:25 +00:00
2fe1d3e36d Moving llama to f16. 2023-06-29 12:00:16 +00:00
b4dc9f6108 Add a seed parameter to llama. 2023-06-29 12:47:19 +01:00
53628db3a9 Merge pull request #36 from LaurentMazare/fix_example: Simple example fix. 2023-06-29 13:36:05 +02:00
1913512f42 Simple example fix. 2023-06-29 11:10:57 +00:00
2741b39ad3 Use broadcasted scalars for const tensors. 2023-06-29 11:56:40 +01:00
3872dc4751 Merge pull request #19 from LaurentMazare/llama_safetensors: Llama safetensors. 2023-06-29 12:49:26 +02:00
b4aab7b95f Put more requirements on the withdtype trait. 2023-06-29 11:37:42 +01:00
c9c468e1aa Use Map2 for binary ops. 2023-06-29 10:09:15 +01:00
83c7d660ca Add Map2. 2023-06-29 10:05:06 +01:00