Mirror of https://github.com/huggingface/candle.git (synced 2025-06-14 18:06:36 +00:00)

* Add some flash-attn kernel, import the code for flash-attn v2 from Dao-AILab.
* More flash attn.
* Set up the flash attn parameters.
* Get things to compile locally.
* Move the flash attention files to a different directory.
* Build the static C library with nvcc.
* Add more flash attention.
* Update the build part.
* Better caching.
* Exclude flash attention from the default workspace.
* Put flash-attn behind a feature gate.
* Get the flash attn kernel to run.
* Move the flags to a more appropriate place.
* Enable flash attention in llama.
* Use flash attention in llama.
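As a minimal sketch of the feature-gate pattern described above: the llama example can dispatch to the kernel only when the crate is built with a flash-attn Cargo feature, and refuse to run otherwise. The candle_flash_attn::flash_attn entry point takes the q/k/v tensors, a softmax scale and a causal flag; the candle crate alias and the wrapper name below are illustrative, not a verbatim copy of the example.

use candle::{Result, Tensor};

// With the `flash-attn` feature enabled, dispatch to the CUDA kernel
// provided by the candle-flash-attn crate.
#[cfg(feature = "flash-attn")]
fn flash_attn(q: &Tensor, k: &Tensor, v: &Tensor, softmax_scale: f32, causal: bool) -> Result<Tensor> {
    candle_flash_attn::flash_attn(q, k, v, softmax_scale, causal)
}

// Without the feature, fail loudly rather than silently falling back,
// so it is obvious the binary was built without the kernel.
#[cfg(not(feature = "flash-attn"))]
fn flash_attn(_q: &Tensor, _k: &Tensor, _v: &Tensor, _scale: f32, _causal: bool) -> Result<Tensor> {
    unimplemented!("compile with --features flash-attn to use the flash attention kernel")
}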
[submodule "candle-examples/examples/flash-attn/cutlass"]
	path = candle-flash-attn/cutlass
	url = https://github.com/NVIDIA/cutlass.git
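The flash-attn build compiles against these cutlass sources, so the submodule must be checked out: after cloning, run git submodule update --init (or clone with --recursive) to populate candle-flash-attn/cutlass.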