e82fcf1c59
Add more example readmes. ( #828 )
...
* Add more readmes.
* Add a readme for dinov2.
* Add some skeleton files for a couple more examples.
* More whisper details.
2023-09-12 17:21:24 +01:00
805bf9ffa7
Implement top_p / nucleus sampling ( #819 )
...
* Implement top_p / nucleus sampling
* Update changelog
* rustfmt
* Add tests
* Fix clippy warning
* Fix another clippy error
2023-09-12 18:10:16 +02:00
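A minimal, dependency-free sketch of nucleus (top-p) sampling as introduced in the commit above; the `top_p` threshold, the plain `Vec<f32>` probabilities, and the externally supplied uniform draw are illustrative assumptions, not candle's actual API.

```rust
// Nucleus (top-p) sampling sketch: keep the smallest set of tokens whose
// cumulative probability exceeds `top_p`, then renormalize and sample.
fn sample_top_p(probs: &[f32], top_p: f32, uniform: f32) -> usize {
    // Sort token indices by probability, descending.
    let mut indices: Vec<usize> = (0..probs.len()).collect();
    indices.sort_by(|&a, &b| probs[b].partial_cmp(&probs[a]).unwrap());

    // Find the cutoff where the cumulative probability first reaches top_p.
    let mut cumulative = 0.0;
    let mut cutoff = indices.len();
    for (i, &idx) in indices.iter().enumerate() {
        cumulative += probs[idx];
        if cumulative >= top_p {
            cutoff = i + 1;
            break;
        }
    }

    // Sample from the renormalized nucleus using a uniform draw in [0, 1).
    let nucleus = &indices[..cutoff];
    let total: f32 = nucleus.iter().map(|&i| probs[i]).sum();
    let mut target = uniform * total;
    for &idx in nucleus {
        target -= probs[idx];
        if target <= 0.0 {
            return idx;
        }
    }
    nucleus[nucleus.len() - 1]
}

fn main() {
    let probs = [0.5, 0.3, 0.15, 0.05];
    // With top_p = 0.9 only the three most likely tokens are candidates.
    println!("{}", sample_top_p(&probs, 0.9, 0.42));
}
```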
d3f05eae8c
Move some models to candle-transformers so that it's easier to re-use. ( #794 )
...
* Move some models to candle-transformers so that they can be shared.
* Also move falcon.
* Move Llama.
* Move whisper (partial).
2023-09-10 09:40:27 +01:00
26e1b40992
Repeat-penalty in the falcon example. ( #634 )
2023-08-28 08:13:40 +01:00
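A small sketch of a repetition penalty of the kind referenced above, applied to raw logits before sampling; the penalty value and the plain slice types are assumptions for illustration, not the falcon example's exact code.

```rust
// Repeat-penalty sketch: dampen the logits of tokens that already appeared
// in the generated context so the model is less likely to repeat itself.
fn apply_repeat_penalty(logits: &mut [f32], penalty: f32, context: &[usize]) {
    for &token in context {
        if let Some(logit) = logits.get_mut(token) {
            // Positive logits are divided, negative ones multiplied, so the
            // penalty always pushes the score towards "less likely".
            if *logit >= 0.0 {
                *logit /= penalty;
            } else {
                *logit *= penalty;
            }
        }
    }
}

fn main() {
    let mut logits = vec![2.0, -1.0, 0.5];
    apply_repeat_penalty(&mut logits, 1.3, &[0, 1]);
    println!("{logits:?}"); // tokens 0 and 1 are now less likely
}
```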
c78ce76501
Add a simple Module trait and implement it for the various nn layers ( #500 )
...
* Start adding the module trait.
* Use the module trait.
* Implement module for qmatmul.
2023-08-18 09:38:22 +01:00
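A sketch of what a simple `Module` trait and one layer implementing it might look like; the `Tensor` stand-in and the toy `Scale` layer are placeholders, not candle's real types or signatures.

```rust
// Stand-in tensor type so the sketch compiles on its own.
#[derive(Debug, Clone)]
struct Tensor(Vec<f32>);

// A minimal "Module" trait: anything with a forward pass from tensor to tensor.
trait Module {
    fn forward(&self, xs: &Tensor) -> Tensor;
}

// A toy layer implementing the trait (elementwise scale + bias).
struct Scale {
    weight: f32,
    bias: f32,
}

impl Module for Scale {
    fn forward(&self, xs: &Tensor) -> Tensor {
        Tensor(xs.0.iter().map(|x| x * self.weight + self.bias).collect())
    }
}

fn main() {
    let layer = Scale { weight: 2.0, bias: 1.0 };
    let out = layer.forward(&Tensor(vec![1.0, 2.0, 3.0]));
    println!("{out:?}");
}
```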
c84883ecf2
Add a cuda kernel for upsampling. ( #441 )
...
* Add a cuda kernel for upsampling.
* Update for the latest tokenizers version.
2023-08-14 13:12:17 +01:00
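The log does not spell out the interpolation mode, so purely as an illustration, here is a CPU reference for nearest-neighbor 2D upsampling, the kind of computation such a kernel performs; the single-channel, row-major layout is an assumption.

```rust
// Nearest-neighbor 2D upsampling reference (single channel, row-major).
fn upsample_nearest(src: &[f32], h: usize, w: usize, out_h: usize, out_w: usize) -> Vec<f32> {
    let mut dst = vec![0.0; out_h * out_w];
    for oy in 0..out_h {
        // Map each output coordinate back to the closest source coordinate.
        let sy = (oy * h) / out_h;
        for ox in 0..out_w {
            let sx = (ox * w) / out_w;
            dst[oy * out_w + ox] = src[sy * w + sx];
        }
    }
    dst
}

fn main() {
    let src = [1.0, 2.0, 3.0, 4.0]; // 2x2 image
    println!("{:?}", upsample_nearest(&src, 2, 2, 4, 4));
}
```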
9aca398a4f
More accelerate optimizations ( #427 )
...
* Add more tracing to the whisper example.
* Support accelerate in more examples.
* Use accelerate for pointwise functions.
* Use accelerate for binary operations too.
* Bugfix for binary operation: use the rhs before the lhs.
2023-08-13 12:53:34 +01:00
4bf2ebf836
Use u8 tensors for masks. ( #273 )
2023-07-29 11:32:58 +01:00
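A sketch of how a u8 mask might be applied to attention scores: positions where the mask is 0 are pushed to negative infinity before the softmax so they get zero weight. The slice-based layout here is an assumption for illustration.

```rust
// Apply a u8 mask to attention scores: masked positions (0) become -inf so
// they receive zero weight after the softmax.
fn masked_fill(scores: &mut [f32], mask: &[u8]) {
    for (score, &keep) in scores.iter_mut().zip(mask) {
        if keep == 0 {
            *score = f32::NEG_INFINITY;
        }
    }
}

fn main() {
    let mut scores = vec![0.1, 0.7, 0.2, 0.4];
    let mask = [1u8, 1, 0, 0]; // causal-style mask: later positions hidden
    masked_fill(&mut scores, &mask);
    println!("{scores:?}");
}
```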
3eb2bc6d07
Softmax numerical stability. ( #267 )
...
* Softmax numerical stability.
* Fix the flash-attn test.
2023-07-28 13:13:01 +01:00
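A small illustration of the numerically stable softmax: subtracting the per-row maximum before exponentiating avoids overflow for large logits without changing the result. Plain slices stand in for tensors here.

```rust
// Numerically stable softmax: exp(x - max) / sum(exp(x - max)).
// Subtracting the max keeps the exponentials in a safe range; the result is
// mathematically identical to the naive version.
fn softmax(xs: &[f32]) -> Vec<f32> {
    let max = xs.iter().copied().fold(f32::NEG_INFINITY, f32::max);
    let exps: Vec<f32> = xs.iter().map(|x| (x - max).exp()).collect();
    let sum: f32 = exps.iter().sum();
    exps.iter().map(|e| e / sum).collect()
}

fn main() {
    // Naive exp(1000.0) would overflow to infinity; this stays finite.
    println!("{:?}", softmax(&[1000.0, 999.0, 998.0]));
}
```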
ca479a873e
Upgrading hf-hub to 0.2.0
...
(Modified API to not pass the Repo around all the time)
2023-07-27 20:05:02 +02:00
43c7223292
Rename the .r functions to .dims so as to be a bit more explicit. ( #220 )
2023-07-22 10:39:27 +01:00
439321745a
Removing candle-hub internal to extract into hf-hub standalone.
2023-07-19 15:04:38 +02:00
66750f9827
Add a 'cuda-if-available' helper function. ( #172 )
2023-07-15 08:25:15 +01:00
4ed56d7861
Removing cuda default.
...
Seems very important for many users exploring the library, usually on laptops without GPUs.
Adding more README instructions in a follow-up.
2023-07-14 16:52:15 +02:00
3c02ea56b0
Add a cli argument to easily switch the dtype. ( #161 )
2023-07-13 19:18:49 +01:00
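A hedged sketch of such a CLI switch using clap's derive API; the flag name and the set of accepted values are assumptions, not the exact options used by the candle examples.

```rust
use clap::Parser;

/// Toy CLI showing a `--dtype` switch (flag name and values are illustrative).
#[derive(Parser)]
struct Args {
    /// Data type to run the model in, e.g. "f16", "bf16" or "f32".
    #[arg(long, default_value = "f32")]
    dtype: String,
}

fn main() {
    let args = Args::parse();
    match args.dtype.as_str() {
        "f16" | "bf16" | "f32" => println!("running with dtype {}", args.dtype),
        other => eprintln!("unsupported dtype: {other}"),
    }
}
```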
50b0946a2d
Tensor mutability ( #154 )
...
* Working towards tensor mutability.
* Use a ref-cell to provide tensor mutability.
2023-07-13 11:04:40 +01:00
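A minimal sketch of interior mutability along the lines mentioned above: storage shared behind `Rc<RefCell<...>>` so a cheaply clonable handle can still update its buffer in place. The `Tensor` here is a toy stand-in, not candle's implementation.

```rust
use std::cell::RefCell;
use std::rc::Rc;

// Toy tensor handle: clones share the same underlying storage, and the
// RefCell allows in-place updates through a shared reference.
#[derive(Clone)]
struct Tensor {
    storage: Rc<RefCell<Vec<f32>>>,
}

impl Tensor {
    fn new(data: Vec<f32>) -> Self {
        Self { storage: Rc::new(RefCell::new(data)) }
    }

    // In-place update, e.g. what an optimizer step needs.
    fn add_assign(&self, delta: f32) {
        for v in self.storage.borrow_mut().iter_mut() {
            *v += delta;
        }
    }
}

fn main() {
    let t = Tensor::new(vec![1.0, 2.0]);
    let alias = t.clone(); // shares storage with `t`
    alias.add_assign(0.5);
    println!("{:?}", t.storage.borrow()); // both views see the update
}
```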
ba35d895e7
Sketch the candle-transformers crate. ( #147 )
...
* Sketch the candle-transformers crate.
* Format the empty files.
2023-07-12 13:49:31 +01:00
eae646d322
Use arange in the examples. ( #146 )
2023-07-12 12:12:34 +01:00
674eb35e10
Remove some dead-code pragmas. ( #137 )
2023-07-11 09:33:59 +01:00
b46c28a2ac
VarBuilder path creation ( #131 )
...
* Use a struct for the safetensor+routing.
* Group the path and the var-builder together.
* Fix for the empty path case.
2023-07-10 22:37:34 +01:00
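A sketch of grouping a weight path with the builder so nested prefixes compose automatically; the method names (`pp`, `get`) and the in-memory map are illustrative assumptions rather than candle's exact API.

```rust
use std::collections::HashMap;

// Toy var-builder: carries a dotted path prefix alongside the weight store.
#[derive(Clone)]
struct VarBuilder {
    path: Vec<String>,
    weights: HashMap<String, Vec<f32>>,
}

impl VarBuilder {
    // Push a path component and return a builder scoped to that prefix.
    fn pp(&self, name: &str) -> Self {
        let mut path = self.path.clone();
        path.push(name.to_string());
        Self { path, weights: self.weights.clone() }
    }

    // Look up a weight under the current prefix, e.g. "encoder.layer0.weight".
    fn get(&self, name: &str) -> Option<&Vec<f32>> {
        let key = if self.path.is_empty() {
            name.to_string()
        } else {
            format!("{}.{name}", self.path.join("."))
        };
        self.weights.get(&key)
    }
}

fn main() {
    let mut weights = HashMap::new();
    weights.insert("encoder.layer0.weight".to_string(), vec![1.0, 2.0]);
    let vb = VarBuilder { path: vec![], weights };
    println!("{:?}", vb.pp("encoder").pp("layer0").get("weight"));
}
```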
1aa7fbbc33
Move the var-builder in a central place. ( #130 )
2023-07-10 20:49:50 +01:00
b06e1a7e54
[nn] Move the Embedding and Activation parts. ( #116 )
...
* Share the Embedding and Activation parts.
* Tweak some activations.
2023-07-10 10:24:52 +01:00
9ce0f1c010
Sketch the candle-nn crate. ( #115 )
...
* Sketch the candle-nn crate.
* Tweak the cuda dependencies.
* More cuda tweaks.
2023-07-10 08:50:09 +01:00
ea5dfa69bc
Sketching the musicgen model. ( #66 )
...
* Skeleton files for musicgen.
* Add a musicgen model module.
* Sketch the model loading.
* Start adding the forward pass.
* More forward pass.
* Positional embeddings.
* Forward for the decoder layers.
* Add an empty function.
* Fix the musicgen weight names.
* More musicgen modeling.
* Add the T5 loading bits.
* Add the encodec config.
* Add the encodec module hierarchy.
* More Encodec modeling.
* Encodec modeling.
* Encodec modeling.
* Add more to the encodec modeling.
* Load the weights.
* Populate the resnet blocks.
* Also load the conv transpose weights.
* Split musicgen in multiple files.
2023-07-09 19:53:35 +01:00
f35cfc5e97
Sample with temperature. ( #106 )
2023-07-07 18:12:25 +01:00
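A short illustration of temperature scaling: dividing the logits by the temperature before the softmax sharpens (T < 1) or flattens (T > 1) the distribution that is then sampled from. Plain slices again stand in for tensors.

```rust
// Temperature scaling: logits / T, then softmax. T < 1 sharpens the
// distribution, T > 1 flattens it towards uniform.
fn softmax_with_temperature(logits: &[f32], temperature: f32) -> Vec<f32> {
    let scaled: Vec<f32> = logits.iter().map(|l| l / temperature).collect();
    let max = scaled.iter().copied().fold(f32::NEG_INFINITY, f32::max);
    let exps: Vec<f32> = scaled.iter().map(|x| (x - max).exp()).collect();
    let sum: f32 = exps.iter().sum();
    exps.iter().map(|e| e / sum).collect()
}

fn main() {
    let logits = [2.0, 1.0, 0.1];
    println!("T=0.5 {:?}", softmax_with_temperature(&logits, 0.5));
    println!("T=2.0 {:?}", softmax_with_temperature(&logits, 2.0));
}
```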
03dffe9ecc
Use F32 for the reduce ops. ( #105 )
2023-07-07 17:55:21 +01:00
e923b3adc2
Add a KV cache to falcon. ( #104 )
2023-07-07 17:24:38 +01:00
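A minimal sketch of what a KV cache does during decoding: keys and values computed for earlier positions are kept around so each new token only needs its own projection. The nested `Vec` layout and method names are assumptions, not falcon's actual cache.

```rust
// Toy per-layer KV cache: append the new step's key/value and hand back the
// full history so attention can cover all previous positions.
struct KvCache {
    keys: Vec<Vec<f32>>,   // one entry per generated position
    values: Vec<Vec<f32>>,
}

impl KvCache {
    fn new() -> Self {
        Self { keys: Vec::new(), values: Vec::new() }
    }

    fn append(&mut self, k: Vec<f32>, v: Vec<f32>) -> (&[Vec<f32>], &[Vec<f32>]) {
        self.keys.push(k);
        self.values.push(v);
        (&self.keys, &self.values)
    }
}

fn main() {
    let mut cache = KvCache::new();
    for step in 0..3 {
        // In a real decoder these come from the new token's K/V projections.
        let (keys, _values) = cache.append(vec![step as f32], vec![step as f32]);
        println!("step {step}: attending over {} positions", keys.len());
    }
}
```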
05ff1cff66
Add some caching to the causal mask. ( #103 )
2023-07-07 12:56:44 +01:00
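A sketch of caching the causal mask by sequence length so it is only built once per shape; the `HashMap` keyed on length is an illustrative choice, not necessarily how the example caches it.

```rust
use std::collections::HashMap;

// Build (or reuse) a lower-triangular causal mask for a given sequence length.
// 1 means "may attend", 0 means "masked out".
struct MaskCache {
    masks: HashMap<usize, Vec<u8>>,
}

impl MaskCache {
    fn new() -> Self {
        Self { masks: HashMap::new() }
    }

    fn get(&mut self, seq_len: usize) -> &Vec<u8> {
        self.masks.entry(seq_len).or_insert_with(|| {
            let mut mask = vec![0u8; seq_len * seq_len];
            for i in 0..seq_len {
                for j in 0..=i {
                    mask[i * seq_len + j] = 1;
                }
            }
            mask
        })
    }
}

fn main() {
    let mut cache = MaskCache::new();
    println!("{:?}", cache.get(3)); // computed once
    println!("{:?}", cache.get(3)); // served from the cache
}
```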
2df044f9a1
Clippy after rebase.
2023-07-07 09:22:09 +02:00
1ec221a749
Fixing falcon example.
2023-07-07 09:13:55 +02:00
d38a926c14
Convert the logits to f32 before extracting them. ( #102 )
2023-07-07 08:07:57 +01:00
bac4ef40f3
Add a text generation pipeline for falcon. ( #98 )
2023-07-07 06:34:22 +01:00
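A compressed sketch of the kind of generation loop such a pipeline wraps; the `forward` closure standing in for the model and the simple argmax decoding are assumptions for illustration (a real pipeline would sample with temperature, top-p, and a repeat penalty as in the sketches above).

```rust
// Toy generation loop: feed tokens, pick the next one from the logits,
// append it, and stop at a fixed length or an end-of-sequence token.
fn argmax(logits: &[f32]) -> usize {
    logits
        .iter()
        .enumerate()
        .max_by(|a, b| a.1.partial_cmp(b.1).unwrap())
        .map(|(i, _)| i)
        .unwrap()
}

fn generate(
    forward: impl Fn(&[usize]) -> Vec<f32>,
    prompt: &[usize],
    max_len: usize,
    eos: usize,
) -> Vec<usize> {
    let mut tokens = prompt.to_vec();
    while tokens.len() < max_len {
        let logits = forward(&tokens);
        let next = argmax(&logits); // a real pipeline would sample instead
        tokens.push(next);
        if next == eos {
            break;
        }
    }
    tokens
}

fn main() {
    // Fake "model": always prefers the token after the last one, wrapping at 5.
    let forward = |tokens: &[usize]| {
        let mut logits = vec![0.0f32; 5];
        logits[(tokens.last().unwrap() + 1) % 5] = 1.0;
        logits
    };
    println!("{:?}", generate(forward, &[0], 6, 4));
}
```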
2b8e8c9f14
Bugfixes. ( #97 )
2023-07-06 23:26:11 +01:00
a3f3b93d16
Add the call to dense in the attention layer. ( #96 )
2023-07-06 23:22:08 +01:00
0f679fe42e
Fix some shape issues in falcon. ( #95 )
...
* Fix some shape issues.
* Use different dtypes.
2023-07-06 19:23:54 +01:00
4afa461b34
Sketch the Falcon model. ( #93 )
...
* Sketch the Falcon model.
* Add more substance to the falcon example.
* Falcon (wip).
* Falcon (wip again).
* Falcon inference.
* Get the weights from the api and properly generate the model.
* Use the proper model.
* Fix the file/revision names.
* Fix bias handling.
* Recompute the rot embeddings.
* Fix the input shape.
* Add the release-with-debug profile.
* Silly bugfix.
* More bugfixes.
* Stricter shape checking in matmul.
2023-07-06 19:01:21 +01:00
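The commit above mentions recomputing the rotary embeddings; as a generic illustration (not Falcon's exact layout or dtype handling), here is the standard RoPE rotation applied to consecutive pairs of a query or key vector at a given position.

```rust
// Rotary position embedding sketch: rotate each (even, odd) pair of the
// vector by an angle that depends on the position and the pair's frequency.
fn apply_rope(x: &mut [f32], position: usize, base: f32) {
    let dim = x.len();
    for i in (0..dim).step_by(2) {
        let freq = 1.0 / base.powf(i as f32 / dim as f32);
        let angle = position as f32 * freq;
        let (sin, cos) = angle.sin_cos();
        let (x0, x1) = (x[i], x[i + 1]);
        x[i] = x0 * cos - x1 * sin;
        x[i + 1] = x0 * sin + x1 * cos;
    }
}

fn main() {
    let mut q = vec![1.0, 0.0, 1.0, 0.0];
    apply_rope(&mut q, 3, 10_000.0);
    println!("{q:?}");
}
```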