Commit Graph

72 Commits

SHA1 Message Date
ed4d0959d3 Add a const to easily tweak the dtype used for llama internal computations. 2023-06-30 15:01:39 +01:00
f6152e74b6 Tweak the kv-cache flag. 2023-06-29 22:16:40 +01:00
ae3f202f3b Add a flag. 2023-06-29 22:12:15 +01:00
23389b1bd7 Enable the KV cache after fixing the caching length and the rope bits. 2023-06-29 22:00:57 +01:00
b50bd880ce Only narrow when needed + deactivate the kv cache. 2023-06-29 19:07:52 +01:00
3232df9458 Add some KV cache to llama. 2023-06-29 15:29:40 +01:00
78ec40b077 Typo. 2023-06-29 12:09:53 +00:00
de48e6fd59 Putting back main. 2023-06-29 12:08:35 +00:00
0958c588f6 Putting back seed. 2023-06-29 12:07:21 +00:00
c5e8f788be Revert some changes. 2023-06-29 12:05:53 +00:00
e63ed6aaa3 Remove unwrap. 2023-06-29 12:04:25 +00:00
2fe1d3e36d Moving llama to f16. 2023-06-29 12:00:16 +00:00
b4dc9f6108 Add a seed parameter to llama. 2023-06-29 12:47:19 +01:00
1913512f42 Simple example fix. 2023-06-29 11:10:57 +00:00
3872dc4751 Merge pull request #19 from LaurentMazare/llama_safetensors: Llama safetensors 2023-06-29 12:49:26 +02:00
122e334d0c Simplify the pattern matching logic in the cuda backend. 2023-06-29 09:21:11 +01:00
ece3ec6167 Final updates -> moving to deterministic for easier comparison. 2023-06-28 14:56:39 +00:00
926fffa0b7 Ok. 2023-06-28 14:56:39 +00:00
e29dae044d Tmp. 2023-06-28 14:56:38 +00:00
c44e5346f4 Add some helper functions. 2023-06-27 17:37:09 +01:00
318503cd38 Cache the causal mask in llama. 2023-06-27 12:21:08 +01:00
d7f729fb8f Refactor the hierarchy. 2023-06-27 11:57:27 +02:00