c297a50960
Add mkl support for matrix multiply. ( #86 )
* Fix some rebase issues.
* Use mkl instead.
* Use mkl in bert.
* Add the optional mkl feature.
* Conditional compilation based on the mkl feature.
* Add more mkl support.
2023-07-06 11:05:05 +01:00
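The commits above gate the MKL-backed matmul behind an optional Cargo feature. A minimal sketch of that pattern, assuming illustrative names (the naive fallback and the `matmul` signature are not candle's actual implementation):

```rust
// Sketch of dispatching matmul on an optional `mkl` Cargo feature.
// Without `--features mkl`, the naive loop below is compiled; with it,
// the stub standing in for the MKL binding would be compiled instead.

/// Multiply an (m x k) matrix `a` by a (k x n) matrix `b`, both row-major.
#[cfg(not(feature = "mkl"))]
fn matmul(a: &[f32], b: &[f32], m: usize, k: usize, n: usize) -> Vec<f32> {
    let mut out = vec![0f32; m * n];
    for i in 0..m {
        for p in 0..k {
            let lhs = a[i * k + p];
            for j in 0..n {
                out[i * n + j] += lhs * b[p * n + j];
            }
        }
    }
    out
}

// Hypothetical stand-in for the MKL path: the real code would call into
// MKL's sgemm through the crate's mkl bindings.
#[cfg(feature = "mkl")]
fn matmul(_a: &[f32], _b: &[f32], _m: usize, _k: usize, _n: usize) -> Vec<f32> {
    unimplemented!("call MKL sgemm via the mkl bindings here")
}

fn main() {
    // (1 2; 3 4) * (5 6; 7 8) = (19 22; 43 50)
    let c = matmul(&[1., 2., 3., 4.], &[5., 6., 7., 8.], 2, 2, 2);
    println!("{c:?}"); // [19.0, 22.0, 43.0, 50.0]
}
```

Callers see one `matmul` either way; the feature flag only swaps the backend at compile time, which is why the commits can add MKL support without changing call sites.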
fdb1acd2ff
Move llama in a cargo-examples directory.
2023-07-03 11:30:58 +01:00
81cec86e75
Adding a bit more docs around safety.
2023-07-03 11:55:54 +02:00
783b7054ee
Move more safetensors bits to the shared module.
2023-07-03 09:34:08 +01:00
cf2789fb81
Move some safetensors bits in the candle-core crate.
2023-07-03 08:37:46 +01:00
7c65e2d187
Add a flag for custom prompt.
2023-07-01 06:36:22 +01:00
679b6987b6
Early conversion for the llama weights.
2023-06-30 16:42:53 +01:00
ed4d0959d3
Add a const to easily tweak the dtype used for llama internal computations.
2023-06-30 15:01:39 +01:00
f6152e74b6
Tweak the kv-cache flag.
2023-06-29 22:16:40 +01:00
ae3f202f3b
Add a flag.
2023-06-29 22:12:15 +01:00
23389b1bd7
Enable the KV cache after fixing the caching length and the rope bits.
2023-06-29 22:00:57 +01:00
b50bd880ce
Only narrow when needed + deactivate the kv cache.
2023-06-29 19:07:52 +01:00
3232df9458
Add some KV cache to llama.
2023-06-29 15:29:40 +01:00
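The KV-cache commits above amortize decoding: keys and values for past positions are kept around so each step only processes the newest token. A minimal sketch of the idea, with illustrative names and softmax omitted (plain dot-product weights), not candle's actual llama code:

```rust
// Per-layer key/value cache for autoregressive decoding. Each `step`
// appends the new token's key and value, then attends the new query
// against every cached position instead of recomputing the whole prefix.

struct KvCache {
    keys: Vec<Vec<f32>>,   // one key vector per cached position
    values: Vec<Vec<f32>>, // one value vector per cached position
}

impl KvCache {
    fn new() -> Self {
        Self { keys: Vec::new(), values: Vec::new() }
    }

    /// Process one new token: cache its key/value, then combine the cached
    /// values weighted by query-key dot products (softmax elided for brevity).
    fn step(&mut self, q: &[f32], k: Vec<f32>, v: Vec<f32>) -> Vec<f32> {
        self.keys.push(k);
        self.values.push(v);
        let mut out = vec![0f32; q.len()];
        for (key, value) in self.keys.iter().zip(&self.values) {
            let w: f32 = q.iter().zip(key).map(|(a, b)| a * b).sum();
            for (o, x) in out.iter_mut().zip(value) {
                *o += w * x;
            }
        }
        out
    }
}

fn main() {
    let mut cache = KvCache::new();
    // First token: the cache holds a single position.
    let y1 = cache.step(&[1., 0.], vec![1., 0.], vec![2., 0.]);
    // Second token: its query also attends to the first cached position.
    let y2 = cache.step(&[0., 1.], vec![0., 1.], vec![0., 3.]);
    println!("{y1:?} {y2:?} cached={}", cache.keys.len());
}
```

The "caching length" fix mentioned in the enabling commit matters because the rotary-embedding offsets and the mask must account for how many positions are already cached, not just the current input length.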
78ec40b077
Typo.
2023-06-29 12:09:53 +00:00
de48e6fd59
Putting back main.
2023-06-29 12:08:35 +00:00
0958c588f6
Putting back seed.
2023-06-29 12:07:21 +00:00
c5e8f788be
Revert some changes.
2023-06-29 12:05:53 +00:00
e63ed6aaa3
Remove unwrap.
2023-06-29 12:04:25 +00:00
2fe1d3e36d
Moving llama to f16.
2023-06-29 12:00:16 +00:00
b4dc9f6108
Add a seed parameter to llama.
2023-06-29 12:47:19 +01:00
1913512f42
Simple example fix.
2023-06-29 11:10:57 +00:00
3872dc4751
Merge pull request #19 from LaurentMazare/llama_safetensors
Llama safetensors
2023-06-29 12:49:26 +02:00
122e334d0c
Simplify the pattern matching logic in the cuda backend.
2023-06-29 09:21:11 +01:00
ece3ec6167
Final updates -> moving to deterministic for easier comparison.
2023-06-28 14:56:39 +00:00
926fffa0b7
Ok.
2023-06-28 14:56:39 +00:00
e29dae044d
Tmp.
2023-06-28 14:56:38 +00:00
c44e5346f4
Add some helper functions.
2023-06-27 17:37:09 +01:00
318503cd38
Cache the causal mask in llama.
2023-06-27 12:21:08 +01:00
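Caching the causal mask, as the commit above does for llama, avoids rebuilding the same triangular mask on every forward pass. A hedged sketch keyed by sequence length; the type and names are illustrative, not candle's actual code:

```rust
use std::collections::HashMap;

// Cache of causal masks, one per sequence length. mask[i * t + j] == 1
// means position i may NOT attend to position j (strictly future tokens).
struct MaskCache {
    masks: HashMap<usize, Vec<u8>>, // seq_len -> flattened (t x t) mask
}

impl MaskCache {
    fn new() -> Self {
        Self { masks: HashMap::new() }
    }

    /// Return the causal mask for `t` tokens, building it only on first use.
    fn get(&mut self, t: usize) -> &Vec<u8> {
        self.masks.entry(t).or_insert_with(|| {
            (0..t)
                .flat_map(|i| (0..t).map(move |j| u8::from(j > i)))
                .collect()
        })
    }
}

fn main() {
    let mut cache = MaskCache::new();
    // For 3 tokens, row i masks the columns j > i.
    println!("{:?}", cache.get(3)); // [0, 1, 1, 0, 0, 1, 0, 0, 0]
    // A second request for the same length reuses the cached entry.
    cache.get(3);
    assert_eq!(cache.masks.len(), 1);
}
```

With a KV cache in play, prompt processing tends to hit only a few distinct lengths, so the map stays small while the per-step mask construction cost disappears.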
d7f729fb8f
Refactor the hierarchy.
2023-06-27 11:57:27 +02:00