Commit Graph

536 Commits

SHA1 Message Date
3c02ea56b0 Add a cli argument to easily switch the dtype. (#161) 2023-07-13 19:18:49 +01:00
ded93a1169 Add the SGD optimizer (#160)
* Add the nn::optim and some conversion traits.

* Add the backward_step function for SGD.

* Get the SGD optimizer to work and add a test.

* Make the test slightly simpler.
2023-07-13 19:05:44 +01:00
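The SGD commit above adds a `backward_step` that applies the classic update rule. A minimal sketch of that rule on plain `Vec<f64>` buffers rather than candle tensors — `sgd_step` and its parameter names are illustrative here, not the crate's `nn::optim` API:

```rust
// Plain-Rust sketch of one SGD update: theta <- theta - lr * grad(theta).
// In the crate this is driven by the autograd graph; here the gradients
// are supplied directly for illustration.
fn sgd_step(params: &mut [f64], grads: &[f64], learning_rate: f64) {
    for (p, g) in params.iter_mut().zip(grads.iter()) {
        *p -= learning_rate * g;
    }
}
```

With `params = [1.0, 2.0]`, `grads = [2.0, -4.0]`, and a learning rate of `0.1`, one step moves the parameters to `[0.8, 2.4]`.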
5ee3c95582 Move the variable creation to the variable module. (#159)
* Move the variable creation to the variable module.

* Make it possible to set a variable.

* Add some basic gradient descent test.

* Get the gradient descent test to work.
2023-07-13 16:55:40 +01:00
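The gradient-descent test added above exercises settable variables in a training loop. The same idea on a scalar, with the gradient written out by hand instead of computed by backprop — the function name and constants are illustrative only:

```rust
// Minimize f(x) = (x - target)^2 by stepping against its gradient
// 2 * (x - target); with a small enough learning rate x converges
// to `target`.
fn gradient_descent(mut x: f64, target: f64, lr: f64, steps: usize) -> f64 {
    for _ in 0..steps {
        let grad = 2.0 * (x - target);
        x -= lr * grad;
    }
    x
}
```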
6991036bc5 Introduce the variables api used for adjusting parameters during the training loop. (#158)
* Add the variable api.

* And add a comment.
2023-07-13 14:09:51 +01:00
7adc8c903a Expose the storage publicly. (#157) 2023-07-13 13:52:36 +01:00
21aa29ddce Use a rwlock for inner mutability. (#156)
* Use a rw-lock.

* Make clippy happier.
2023-07-13 11:25:24 +01:00
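The rw-lock commit above swaps the interior-mutability primitive so that many readers or one writer can guard shared state. A stdlib-only sketch of the pattern — `Storage` is a stand-in struct, not candle's actual storage type:

```rust
use std::sync::{Arc, RwLock};

// Stand-in for shared tensor storage behind Arc<RwLock<..>>.
struct Storage(Vec<f32>);

// Takes the write lock only for the duration of the mutation.
fn scale_in_place(storage: &Arc<RwLock<Storage>>, factor: f32) {
    let mut guard = storage.write().unwrap();
    for v in guard.0.iter_mut() {
        *v *= factor;
    }
}

// Readers take the shared lock; multiple readers may hold it at once.
fn sum(storage: &Arc<RwLock<Storage>>) -> f32 {
    storage.read().unwrap().0.iter().sum()
}
```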
dfabc708f2 Fix a comment. (#155) 2023-07-13 11:11:37 +01:00
50b0946a2d Tensor mutability (#154)
* Working towards tensor mutability.

* Use a ref-cell to provide tensor mutability.
2023-07-13 11:04:40 +01:00
a3663ce2f2 Encodec forward pass (#153)
* Sketch the forward pass for encodec.

* Forward pass for the encodec resnet block.

* Encodec decoding.
2023-07-13 08:18:39 +01:00
6c75a98ad2 Add the forward pass for the T5 model. (#152)
* Add the forward pass for the T5 model.

* More t5 forward pass.
2023-07-12 22:02:40 +01:00
465fc8c0c5 Add some documentation and test to the linear layer. (#151)
* Add some documentation and test to the linear layer.

* Layer norm doc.

* Minor tweaks.
2023-07-12 20:24:23 +01:00
f09d7e5653 Merge pull request #150 from LaurentMazare/add_cheatsheet
Adding cheatsheet + expand on other ML frameworks.
2023-07-12 19:54:07 +02:00
91817b9e57 Tweak the table formatting. 2023-07-12 18:43:52 +01:00
a86ec4b9f0 Add more documentation and examples. (#149)
* Add more documentation and examples.

* More documentation and tests.

* Document more tensor functions.

* Again more examples and tests.
2023-07-12 17:40:17 +01:00
6e938cfe9d Adding cheatsheet + expand on other ML frameworks. 2023-07-12 18:35:34 +02:00
8aab787384 Test the index op + bugfix. (#148) 2023-07-12 15:42:36 +01:00
ba35d895e7 Sketch the candle-transformers crate. (#147)
* Sketch the candle-transformers crate.

* Format the empty files.
2023-07-12 13:49:31 +01:00
eae646d322 Use arange in the examples. (#146) 2023-07-12 12:12:34 +01:00
20599172ac Add from_iter and arange, use it in the doctests. (#145) 2023-07-12 12:03:01 +01:00
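The `arange` added above gives the doctests a compact way to build small test tensors. Its half-open-range semantics, sketched as a free function over a flat buffer (candle exposes this as a `Tensor` constructor; the signature here is illustrative):

```rust
// Collect the half-open range [start, end) into a flat buffer,
// mirroring the numpy-style arange semantics.
fn arange(start: u32, end: u32) -> Vec<u32> {
    (start..end).collect()
}
```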
b3b39cca92 Llama batch (#144)
* Add a batch dimension to llama.

* Bugfixes.
2023-07-12 11:38:19 +01:00
bcf96e3cf3 Implement the backend trait for the cpu backend. (#143) 2023-07-12 09:54:33 +01:00
a76ec797da Clean up the main crate error and add a couple of dedicated ones (#142)
* Cosmetic cleanups to the error enum.

* More error cleanup.

* Proper error handling rather than panicking.

* Add some dedicated conv1d errors.
2023-07-12 09:17:08 +01:00
fa760759e5 Allow for lazy loading of npz files, use it in llama to reduce memory usage in the cpu version. (#141) 2023-07-11 20:22:34 +01:00
37cad85869 Resurrect the llama npy support. (#140) 2023-07-11 19:32:10 +01:00
760f1d7055 Refactor the llama example to make it more in sync with the other ones. (#139)
* Refactor the llama example to make it more in sync with the other ones.

* Make clippy happy.

* Properly load the safetensor weights.

* Get llama back to a working state for the safetensors case.
2023-07-11 17:20:55 +01:00
64264d97c1 Modular backends (#138)
* Add some trait to formalize backends.

* Use the generic backend trait.
2023-07-11 11:17:02 +01:00
674eb35e10 Remove some dead-code pragmas. (#137) 2023-07-11 09:33:59 +01:00
ae79c00e48 Allow for uniform initialization in a single step. (#136) 2023-07-11 08:52:29 +01:00
b31a3bbdcb Sketch the tensor initialization module. (#134) 2023-07-11 07:41:46 +01:00
0e9d3afd77 Simplify the var-builder layer setup. (#133) 2023-07-10 23:22:58 +01:00
6fc1ab4f0d MusicGen var-store path cleanup. (#132) 2023-07-10 23:13:11 +01:00
b46c28a2ac VarBuilder path creation (#131)
* Use a struct for the safetensor+routing.

* Group the path and the var-builder together.

* Fix for the empty path case.
2023-07-10 22:37:34 +01:00
1aa7fbbc33 Move the var-builder in a central place. (#130) 2023-07-10 20:49:50 +01:00
2be09dbb1d Macroify the repeating bits. (#129) 2023-07-10 19:44:06 +01:00
23849cb6e6 Merge pull request #124 from LaurentMazare/new_doc
Squeeze/unsqueeze/reshape
2023-07-10 20:43:23 +02:00
fba07d6b6b Merge pull request #127 from LaurentMazare/tensor_indexing
`i(..)` indexing sugar (partial).
2023-07-10 19:56:34 +02:00
1ad235953b Clippy ? 2023-07-10 19:34:38 +02:00
c9d354f5ae Update candle-core/src/tensor.rs 2023-07-10 19:29:22 +02:00
f29b77ec19 Random initializers. (#128)
* Random initialization.

* CPU rng generation.
2023-07-10 18:26:21 +01:00
5ea747c047 Update candle-core/src/indexer.rs 2023-07-10 19:02:35 +02:00
ef0375d8bc i(..) indexing sugar (partial).
- Only range, and select (no tensor_select)
- No negative indexing
2023-07-10 17:34:04 +02:00
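The partial `i(..)` sugar above covers range indexing on a dimension. For a contiguous row-major buffer, selecting rows `a..b` reduces to slicing a contiguous byte span — sketched here with an illustrative helper, not the actual `Tensor::i` implementation:

```rust
use std::ops::Range;

// Range-index the first dimension of a contiguous row-major buffer:
// keeping rows `range.start..range.end` is a single contiguous slice.
fn index_rows(data: &[f32], row_len: usize, range: Range<usize>) -> Vec<f32> {
    data[range.start * row_len..range.end * row_len].to_vec()
}
```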
e2807c78a4 Enable the doctests to run with mkl (though they are broken for now). (#126) 2023-07-10 16:27:46 +01:00
548b1df7ea Remove the dependency to blas and use mkl directly. (#125) 2023-07-10 15:52:03 +01:00
e01d099b71 Squeeze/unsqueeze/reshape 2023-07-10 16:40:25 +02:00
221b1aff65 Support dgemm in mkl matmul. (#122) 2023-07-10 15:02:37 +01:00
71cd3745a9 Add some layer-norm tests. (#121) 2023-07-10 14:43:04 +01:00
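The layer-norm tests above check the usual normalize-then-affine computation. What they verify, sketched on a plain slice with scalar scale/shift (the real layer uses per-element weight and bias tensors):

```rust
// Normalize to zero mean / unit variance over the last dimension,
// then apply the affine transform gamma * x_hat + beta. `eps` keeps
// the division stable when the variance is near zero.
fn layer_norm(x: &[f64], gamma: f64, beta: f64, eps: f64) -> Vec<f64> {
    let n = x.len() as f64;
    let mean = x.iter().sum::<f64>() / n;
    let var = x.iter().map(|v| (v - mean).powi(2)).sum::<f64>() / n;
    x.iter()
        .map(|v| gamma * (v - mean) / (var + eps).sqrt() + beta)
        .collect()
}
```

For input `[1.0, 3.0]` with `gamma = 1.0`, `beta = 0.0`, the mean is 2 and the variance 1, so the output is `[-1.0, 1.0]`.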
dc58259679 Merge pull request #120 from LaurentMazare/some_doc_plus_fix_stack
Adding some doc + Extended `stack` to work with extra final dimensions.
2023-07-10 15:21:24 +02:00
9a667155fd Removed commented deny 2023-07-10 15:18:23 +02:00
2c8fbe8155 oops. 2023-07-10 15:13:52 +02:00
49f4a77ffd Put them back. 2023-07-10 15:11:48 +02:00