Commit Graph

28 Commits

SHA1 Message Date
3eb2bc6d07 Softmax numerical stability. (#267)
* Softmax numerical stability.

* Fix the flash-attn test.
2023-07-28 13:13:01 +01:00
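The "softmax numerical stability" fix refers to the standard max-subtraction trick: subtracting the row maximum before exponentiating leaves the result unchanged mathematically but prevents `exp` from overflowing on large logits. A minimal plain-Rust sketch of the idea (illustrative only; candle applies it over tensors, and this helper is not candle's API):

```rust
// Numerically stable softmax: subtract the max before exponentiating so
// exp never overflows for large inputs. Shifting by a constant does not
// change the result, since the factor exp(-max) cancels in the ratio.
fn softmax(xs: &[f32]) -> Vec<f32> {
    let max = xs.iter().cloned().fold(f32::NEG_INFINITY, f32::max);
    let exps: Vec<f32> = xs.iter().map(|&x| (x - max).exp()).collect();
    let sum: f32 = exps.iter().sum();
    exps.iter().map(|&e| e / sum).collect()
}

fn main() {
    // Without the max subtraction, exp(1000.0) overflows to infinity
    // and the result becomes NaN; with it, both entries are 0.5.
    let probs = softmax(&[1000.0, 1000.0]);
    assert!((probs[0] - 0.5).abs() < 1e-6);
    assert!((probs.iter().sum::<f32>() - 1.0).abs() < 1e-6);
}
```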
8435a99edd Added comment about offsets. 2023-07-27 20:11:57 +02:00
952eca6b54 Fixing slice errors + comments. 2023-07-27 16:59:32 +02:00
7c7e6ba201 Removing inner dependency on safetensors. 2023-07-27 09:58:47 +02:00
1735e4831e TP sharding v2 2023-07-27 09:58:14 +02:00
1f26042693 Move some shared functions to the nn module. (#221) 2023-07-22 13:25:11 +01:00
43c7223292 Rename the .r functions to .dims so as to be a bit more explicit. (#220) 2023-07-22 10:39:27 +01:00
dfd624dbd3 [Proposal] Remove SafeTensor wrapper (allows finer control for users). 2023-07-19 16:25:44 +02:00
2a74019ec6 Vision dataset (#179)
* Add some readers for the mnist dataset.

* Import the cifar and mnist dataset.
2023-07-16 23:43:55 +01:00
104f89df31 Centralize the dependency versions and inherit them. (#177) 2023-07-16 07:47:17 +01:00
4ed56d7861 Removing cuda default.
Seems very important for many users exploring the project, often on
laptops without GPUs.

Adding more README instructions in a follow-up.
2023-07-14 16:52:15 +02:00
d88b6cdca9 Add backtrace information to errors where relevant. (#166)
* Add backtrace information to errors where relevant.

* More backtrace information.

* Add to the FAQ.
2023-07-14 09:31:25 +01:00
a2f72edc0d Simplify the parameters used by sum and sum_keepdim. (#165) 2023-07-14 08:22:08 +01:00
2bfa791336 Use the same default as pytorch for sum. (#164) 2023-07-13 21:32:32 +01:00
57be3638d8 Add the pytorch version of the linear regression as a comment. (#163)
* Add the pytorch version of the linear regression.

* Typo.
2023-07-13 21:05:57 +01:00
23e105cd94 Add the gradient for reduce-sum. (#162)
* Add the gradient for reduce-sum.

* And add the gradient for the broadcast ops.

* Add some backprop tests.

* Add a linear regression example.
2023-07-13 20:14:10 +01:00
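The reduce-sum gradient added here follows from `y = Σ xᵢ` having `∂y/∂xᵢ = 1` for every element, so backprop simply broadcasts the upstream gradient back over the input shape. A plain-Rust sketch of that rule (function names are illustrative, not candle's API):

```rust
// Forward: reduce-sum collapses the input to a scalar.
fn reduce_sum(xs: &[f32]) -> f32 {
    xs.iter().sum()
}

// Backward: each partial derivative is 1, so the upstream gradient is
// broadcast unchanged to every input position (the "broadcast op" dual
// of the reduction).
fn reduce_sum_backward(upstream_grad: f32, input_len: usize) -> Vec<f32> {
    vec![upstream_grad; input_len]
}

fn main() {
    let xs = [1.0_f32, 2.0, 3.0];
    assert_eq!(reduce_sum(&xs), 6.0);
    // With an upstream gradient of 1.0, every element gets gradient 1.0.
    assert_eq!(reduce_sum_backward(1.0, xs.len()), vec![1.0, 1.0, 1.0]);
}
```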
ded93a1169 Add the SGD optimizer (#160)
* Add the nn::optim and some conversion traits.

* Add the backward_step function for SGD.

* Get the SGD optimizer to work and add a test.

* Make the test slightly simpler.
2023-07-13 19:05:44 +01:00
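The SGD optimizer's `backward_step` boils down to the update rule `p ← p − lr · ∇p`. A minimal sketch of that update in plain Rust (candle's real SGD operates on tensors through the `nn::optim` traits; this helper is only illustrative):

```rust
// One SGD step: subtract the learning rate times the gradient from each
// parameter, in place.
fn sgd_step(params: &mut [f32], grads: &[f32], lr: f32) {
    for (p, g) in params.iter_mut().zip(grads) {
        *p -= lr * *g;
    }
}

fn main() {
    let mut w = vec![1.0_f32, 2.0];
    // Gradients of opposite signs move the parameters in opposite
    // directions: w becomes approximately [0.95, 2.05].
    sgd_step(&mut w, &[0.5, -0.5], 0.1);
    assert!((w[0] - 0.95).abs() < 1e-6);
    assert!((w[1] - 2.05).abs() < 1e-6);
}
```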
465fc8c0c5 Add some documentation and test to the linear layer. (#151)
* Add some documentation and test to the linear layer.

* Layer norm doc.

* Minor tweaks.
2023-07-12 20:24:23 +01:00
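The linear layer being documented computes the affine map `y = W·x + b`, with `W` of shape `[out, in]`. A plain-Rust sketch of that forward pass (illustrative only; `candle_nn`'s `Linear` does this over tensors, and the helper name here is made up):

```rust
// Linear (fully connected) forward pass: each output is the dot product
// of a weight row with the input, plus that row's bias.
fn linear_forward(x: &[f32], w: &[Vec<f32>], b: &[f32]) -> Vec<f32> {
    w.iter()
        .zip(b.iter())
        .map(|(row, bias)| {
            row.iter().zip(x).map(|(wi, xi)| wi * xi).sum::<f32>() + bias
        })
        .collect()
}

fn main() {
    // 2 outputs, 3 inputs: the first row picks out x[0], the second sums
    // x[1] and x[2].
    let w = vec![vec![1.0, 0.0, 0.0], vec![0.0, 1.0, 1.0]];
    let b = vec![0.5, -0.5];
    let y = linear_forward(&[1.0, 2.0, 3.0], &w, &b);
    assert_eq!(y, vec![1.5, 4.5]);
}
```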
a76ec797da Clean up the main crate error and add a couple of dedicated ones (#142)
* Cosmetic cleanups to the error enum.

* More error cleanup.

* Proper error handling rather than panicking.

* Add some dedicated conv1d errors.
2023-07-12 09:17:08 +01:00
fa760759e5 Allow for lazy loading of npz files, use it in llama to reduce memory usage in the cpu version. (#141) 2023-07-11 20:22:34 +01:00
37cad85869 Resurrect the llama npy support. (#140) 2023-07-11 19:32:10 +01:00
b31a3bbdcb Sketch the tensor initialization module. (#134) 2023-07-11 07:41:46 +01:00
b46c28a2ac VarBuilder path creation (#131)
* Use a struct for the safetensor+routing.

* Group the path and the var-builder together.

* Fix for the empty path case.
2023-07-10 22:37:34 +01:00
1aa7fbbc33 Move the var-builder in a central place. (#130) 2023-07-10 20:49:50 +01:00
71cd3745a9 Add some layer-norm tests. (#121) 2023-07-10 14:43:04 +01:00
89a5b602a6 Move the conv1d layer to candle_nn. (#117) 2023-07-10 11:02:06 +01:00
b06e1a7e54 [nn] Move the Embedding and Activation parts. (#116)
* Share the Embedding and Activation parts.

* Tweak some activations.
2023-07-10 10:24:52 +01:00
9ce0f1c010 Sketch the candle-nn crate. (#115)
* Sketch the candle-nn crate.

* Tweak the cuda dependencies.

* More cuda tweaks.
2023-07-10 08:50:09 +01:00