Commit Graph

20 Commits

SHA1 Message Date
a1812f934f Add a yolo-v3 example. (#528)
* Add a couple functions required for yolo.

* Add the yolo-v3 example.

* Add minimum and maximum.

* Use the newly introduced maximum.

* Cuda support for min/max + add some testing.

* Allow for more tests to work with accelerate.

* Fix a typo.
2023-08-20 18:19:37 +01:00
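
The min/max additions above are elementwise tensor ops. A minimal usage sketch, assuming today's candle_core names (the crate layout and exact signatures may have differed at this commit):

```rust
use candle_core::{Device, Result, Tensor};

fn main() -> Result<()> {
    let device = Device::Cpu;
    let a = Tensor::new(&[1f32, 5., 3.], &device)?;
    let b = Tensor::new(&[4f32, 2., 3.], &device)?;
    // Elementwise maximum/minimum, the kind of op yolo-v3 box handling relies on.
    let hi = a.maximum(&b)?; // [4., 5., 3.]
    let lo = a.minimum(&b)?; // [1., 2., 3.]
    println!("{hi}\n{lo}");
    Ok(())
}
```
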
c78ce76501 Add a simple Module trait and implement it for the various nn layers (#500)
* Start adding the module trait.

* Use the module trait.

* Implement module for qmatmul.
2023-08-18 09:38:22 +01:00
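
The Module trait introduced above is a one-method interface. A hedged sketch of implementing it for a custom layer, using current candle_core/candle_nn names; `Scale` is a made-up example type, not part of candle:

```rust
use candle_core::{Result, Tensor};
use candle_nn::Module;

// A toy layer that scales its input.
struct Scale {
    factor: f64,
}

impl Module for Scale {
    fn forward(&self, xs: &Tensor) -> Result<Tensor> {
        // affine computes xs * factor + 0.
        xs.affine(self.factor, 0.0)
    }
}
```
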
4b3bd79fbd Remove the embedding ops in favor of index-select. (#299)
* Remove the embedding ops in favor of index-select.

* Also remove the cuda kernels.
2023-08-02 05:42:11 +01:00
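
The change above expresses an embedding lookup as a row gather. A sketch of the equivalent index_select call (not the commit's exact code):

```rust
use candle_core::{Device, Result, Tensor};

fn main() -> Result<()> {
    let device = Device::Cpu;
    // A (vocab_size, hidden_size) embedding table and a few token ids.
    let table = Tensor::rand(0f32, 1., (10, 4), &device)?;
    let ids = Tensor::new(&[3u32, 7, 3], &device)?;
    // Gathering rows along dim 0 replaces the dedicated embedding op.
    let embedded = table.index_select(&ids, 0)?;
    println!("{:?}", embedded.dims()); // [3, 4]
    Ok(())
}
```
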
3eb2bc6d07 Softmax numerical stability. (#267)
* Softmax numerical stability.

* Fix the flash-attn test.
2023-07-28 13:13:01 +01:00
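
The stability fix is the usual max-subtraction trick: softmax(x) = softmax(x - max(x)), which keeps exp from overflowing for large logits. A sketch of the pattern with today's tensor ops (not necessarily the commit's exact implementation):

```rust
use candle_core::{D, Result, Tensor};

fn stable_softmax(xs: &Tensor) -> Result<Tensor> {
    // Subtracting the row-wise max does not change the result mathematically
    // but keeps exp() in a safe numerical range.
    let max = xs.max_keepdim(D::Minus1)?;
    let diff = xs.broadcast_sub(&max)?;
    let num = diff.exp()?;
    let den = num.sum_keepdim(D::Minus1)?;
    num.broadcast_div(&den)
}
```
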
43c7223292 Rename the .r functions to .dims so as to be a bit more explicit. (#220) 2023-07-22 10:39:27 +01:00
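
The rename above concerns the shape accessors. A small usage sketch with the current names (the helpers at that commit may have looked slightly different):

```rust
use candle_core::{DType, Device, Result, Tensor};

fn main() -> Result<()> {
    let t = Tensor::zeros((2, 3, 4), DType::F32, &Device::Cpu)?;
    println!("{:?}", t.dims()); // [2, 3, 4]
    // dims3 destructures a rank-3 tensor and errors on any other rank.
    let (b, h, w) = t.dims3()?;
    println!("{b} {h} {w}");
    Ok(())
}
```
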
66750f9827 Add a 'cuda-if-available' helper function. (#172) 2023-07-15 08:25:15 +01:00
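
The helper picks a GPU when one is usable and otherwise falls back to the CPU; a short usage sketch, assuming the current Device::cuda_if_available name:

```rust
use candle_core::{Device, Result, Tensor};

fn main() -> Result<()> {
    // Falls back to Device::Cpu when no CUDA device (or no cuda feature) is available.
    let device = Device::cuda_if_available(0)?;
    let t = Tensor::new(&[1f32, 2., 3.], &device)?;
    println!("{:?}", t.device());
    Ok(())
}
```
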
4ed56d7861 Removing cuda default.
This seems important for the many users exploring the library, who are usually on laptops without GPUs.

More README instructions will be added in a follow-up.
2023-07-14 16:52:15 +02:00
a2f72edc0d Simplify the parameters used by sum and sum_keepdim. (#165) 2023-07-14 08:22:08 +01:00
2bfa791336 Use the same default as pytorch for sum. (#164) 2023-07-13 21:32:32 +01:00
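
The two sum-related commits above shape the reduction API into its present form: sum drops the reduced dimensions (the PyTorch default) while sum_keepdim keeps them with size 1. A sketch assuming today's signatures:

```rust
use candle_core::{Device, Result, Tensor};

fn main() -> Result<()> {
    let device = Device::Cpu;
    let t = Tensor::new(&[[1f32, 2.], [3., 4.]], &device)?;
    let squeezed = t.sum(1)?;     // shape [2]: the reduced dim is dropped
    let kept = t.sum_keepdim(1)?; // shape [2, 1]: the reduced dim stays
    println!("{:?} {:?}", squeezed.dims(), kept.dims());
    Ok(())
}
```
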
50b0946a2d Tensor mutability (#154)
* Working towards tensor mutability.

* Use a ref-cell to provide tensor mutability.
2023-07-13 11:04:40 +01:00
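
For readers unfamiliar with the pattern the message refers to, interior mutability lets a cheaply-cloned handle swap out the data behind it. This is only a generic illustration of the Rc/RefCell idiom, not candle's actual tensor layout:

```rust
use std::cell::RefCell;
use std::rc::Rc;

// A shared handle whose storage can be replaced through any clone of it.
#[derive(Clone)]
struct Handle {
    storage: Rc<RefCell<Vec<f32>>>,
}

impl Handle {
    fn set(&self, data: Vec<f32>) {
        // borrow_mut gives temporary mutable access even though `self` is shared.
        *self.storage.borrow_mut() = data;
    }
}
```
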
a3663ce2f2 Encodec forward pass (#153)
* Sketch the forward pass for encodec.

* Forward pass for the encodec resnet block.

* Encodec decoding.
2023-07-13 08:18:39 +01:00
6c75a98ad2 Add the forward pass for the T5 model. (#152)
* Add the forward pass for the T5 model.

* More t5 forward pass.
2023-07-12 22:02:40 +01:00
674eb35e10 Remove some dead-code pragmas. (#137) 2023-07-11 09:33:59 +01:00
0e9d3afd77 Simplify the var-builder layer setup. (#133) 2023-07-10 23:22:58 +01:00
6fc1ab4f0d MusicGen var-store path cleanup. (#132) 2023-07-10 23:13:11 +01:00
1aa7fbbc33 Move the var-builder in a central place. (#130) 2023-07-10 20:49:50 +01:00
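
The var-builder commits above converge on the VarBuilder that routes weights by prefixed path. A sketch with the current candle_nn API; the details may have differed at the time:

```rust
use candle_core::{DType, Device, Result, Tensor};
use candle_nn::{Module, VarBuilder};

fn main() -> Result<()> {
    let device = Device::Cpu;
    // A zero-initialized builder; real models load weights from a checkpoint instead.
    let vb = VarBuilder::zeros(DType::F32, &device);
    // pp pushes a prefix, so this layer's weight is named "encoder.proj.weight".
    let proj = candle_nn::linear(4, 2, vb.pp("encoder").pp("proj"))?;
    let xs = Tensor::zeros((1, 4), DType::F32, &device)?;
    let ys = proj.forward(&xs)?;
    println!("{:?}", ys.dims()); // [1, 2]
    Ok(())
}
```
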
89a5b602a6 Move the conv1d layer to candle_nn. (#117) 2023-07-10 11:02:06 +01:00
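
And the conv1d layer now goes through the same constructor style; a hedged sketch under the same assumptions:

```rust
use candle_core::{DType, Device, Result, Tensor};
use candle_nn::{Conv1dConfig, Module, VarBuilder};

fn main() -> Result<()> {
    let device = Device::Cpu;
    let vb = VarBuilder::zeros(DType::F32, &device);
    // 3 input channels, 8 output channels, kernel size 5, default stride/padding.
    let conv = candle_nn::conv1d(3, 8, 5, Conv1dConfig::default(), vb.pp("conv"))?;
    let xs = Tensor::zeros((1, 3, 20), DType::F32, &device)?; // (batch, channels, length)
    let ys = conv.forward(&xs)?;
    println!("{:?}", ys.dims()); // [1, 8, 16]
    Ok(())
}
```
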
b06e1a7e54 [nn] Move the Embedding and Activation parts. (#116)
* Share the Embedding and Activation parts.

* Tweak some activations.
2023-07-10 10:24:52 +01:00
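
The Embedding layer mentioned above wraps the index_select lookup shown earlier; a usage sketch with the current constructor (assumed):

```rust
use candle_core::{DType, Device, Result, Tensor};
use candle_nn::{Module, VarBuilder};

fn main() -> Result<()> {
    let device = Device::Cpu;
    let vb = VarBuilder::zeros(DType::F32, &device);
    // Vocabulary of 100 tokens, hidden size 16.
    let emb = candle_nn::embedding(100, 16, vb.pp("tok_emb"))?;
    let ids = Tensor::new(&[1u32, 42, 7], &device)?;
    let xs = emb.forward(&ids)?;
    println!("{:?}", xs.dims()); // [3, 16]
    Ok(())
}
```
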
9ce0f1c010 Sketch the candle-nn crate. (#115)
* Sketch the candle-nn crate.

* Tweak the cuda dependencies.

* More cuda tweaks.
2023-07-10 08:50:09 +01:00
ea5dfa69bc Sketching the musicgen model. (#66)
* Skeleton files for musicgen.

* Add a musicgen model module.

* Sketch the model loading.

* Start adding the forward pass.

* More forward pass.

* Positional embeddings.

* Forward for the decoder layers.

* Add an empty function.

* Fix the musicgen weight names.

* More musicgen modeling.

* Add the T5 loading bits.

* Add the encodec config.

* Add the encodec module hierarchy.

* More Encodec modeling.

* Encodec modeling.

* Encodec modeling.

* Add more to the encodec modeling.

* Load the weights.

* Populate the resnet blocks.

* Also load the conv transpose weights.

* Split musicgen in multiple files.
2023-07-09 19:53:35 +01:00