acb2f90469
Broadcasting performance optimization (cpu) ( #182 )
...
* Avoid recomputing the index from scratch each time.
* More performance optimisations.
2023-07-17 13:41:09 +01:00
5b1c0bc9be
Performance improvement. ( #181 )
2023-07-17 11:07:14 +01:00
28e1c07304
Process unary functions per block ( #180 )
...
* Process unary functions per block.
* Add some inline hints.
2023-07-17 10:22:33 +01:00
104f89df31
Centralize the dependency versions and inherit them. ( #177 )
2023-07-16 07:47:17 +01:00
18ea92d83b
Iteration over strided blocks ( #175 )
...
* Introduce the strided blocks.
* Use the strided blocks to speed up the copy.
* Add more testing.
2023-07-15 21:30:35 +01:00
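The strided-blocks idea: even a non-contiguous tensor usually decomposes into equally-sized contiguous runs that can be copied with bulk slice operations instead of one element at a time. A minimal sketch of the pattern, with illustrative types that are not the crate's actual ones:

```rust
// Illustrative sketch, not the crate's actual types: a strided layout often
// decomposes into equally-sized contiguous runs that can be copied in bulk.
enum StridedBlocks {
    SingleBlock { start: usize, len: usize },
    MultipleBlocks { starts: Vec<usize>, block_len: usize },
}

// Copying per block replaces a per-element strided loop with slice copies.
fn copy_strided(src: &[f32], dst: &mut [f32], blocks: &StridedBlocks) {
    match blocks {
        StridedBlocks::SingleBlock { start, len } => {
            dst[..*len].copy_from_slice(&src[*start..*start + *len]);
        }
        StridedBlocks::MultipleBlocks { starts, block_len } => {
            for (i, s) in starts.iter().enumerate() {
                dst[i * block_len..(i + 1) * block_len]
                    .copy_from_slice(&src[*s..*s + *block_len]);
            }
        }
    }
}

fn main() {
    let src = [1f32, 2., 3., 4., 5., 6.];
    let mut dst = [0f32; 4];
    // Two contiguous runs of length 2, starting at offsets 0 and 4.
    let blocks = StridedBlocks::MultipleBlocks { starts: vec![0, 4], block_len: 2 };
    copy_strided(&src, &mut dst, &blocks);
    assert_eq!(dst, [1., 2., 5., 6.]);
}
```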
66750f9827
Add a 'cuda-if-available' helper function. ( #172 )
2023-07-15 08:25:15 +01:00
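If the helper behaves as described, usage would look roughly like this (a sketch; the `candle_core` path is today's crate name and the exact signature is assumed):

```rust
use candle_core::{DType, Device, Result, Tensor};

fn main() -> Result<()> {
    // Falls back to the CPU when no CUDA device (or no cuda feature) is available.
    let device = Device::cuda_if_available(0)?;
    let t = Tensor::zeros((2, 3), DType::F32, &device)?;
    println!("{t}");
    Ok(())
}
```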
3672e1a46f
Revert "Testing fmt CI check behind cuda feature flag."
...
This reverts commit b9605310b1.
2023-07-14 15:18:14 +00:00
b9605310b1
Testing fmt CI check behind cuda feature flag.
2023-07-14 15:14:52 +00:00
dcb4a9291e
Explaining how to enable cuda.
2023-07-14 17:08:05 +02:00
4ed56d7861
Removing cuda default.
...
Seems very important for the many exploring users who are typically on laptops
without GPUs.
Adding more README instructions in a follow-up.
2023-07-14 16:52:15 +02:00
88f666781f
Wasm proof of concept. ( #167 )
...
* Wasm proof of concept.
* Run whisper inference in the browser.
* Some fixes.
* Move the wasm example.
* Change the tokenizer config.
2023-07-14 14:51:46 +01:00
d88b6cdca9
Add backtrace information to errors where relevant. ( #166 )
...
* Add backtrace information to errors where relevant.
* More backtrace information.
* Add to the FAQ.
2023-07-14 09:31:25 +01:00
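The standard-library mechanism for this is `std::backtrace::Backtrace`. A sketch of the technique, not the crate's actual error type:

```rust
use std::backtrace::Backtrace;

// Illustrative: carry a captured backtrace alongside the error payload.
#[derive(Debug)]
struct ErrorWithBacktrace {
    msg: String,
    backtrace: Backtrace,
}

impl ErrorWithBacktrace {
    fn new(msg: impl Into<String>) -> Self {
        Self {
            msg: msg.into(),
            // Only captures when RUST_BACKTRACE or RUST_LIB_BACKTRACE is set.
            backtrace: Backtrace::capture(),
        }
    }
}

fn main() {
    let err = ErrorWithBacktrace::new("shape mismatch in matmul");
    println!("{}\n{}", err.msg, err.backtrace);
}
```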
a2f72edc0d
Simplify the parameters used by sum and sum_keepdim. ( #165 )
2023-07-14 08:22:08 +01:00
2bfa791336
Use the same default as pytorch for sum. ( #164 )
2023-07-13 21:32:32 +01:00
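Taken together with #165 above, this suggests `sum` squeezes the reduced dimensions (the pytorch default) while `sum_keepdim` keeps them with size 1. A hedged sketch of the resulting API:

```rust
use candle_core::{Device, Result, Tensor};

fn main() -> Result<()> {
    let t = Tensor::new(&[[1f32, 2., 3.], [4., 5., 6.]], &Device::Cpu)?;
    let s = t.sum(1)?;         // reduced dim dropped: shape [2]
    let k = t.sum_keepdim(1)?; // reduced dim kept:    shape [2, 1]
    assert_eq!(s.dims(), &[2]);
    assert_eq!(k.dims(), &[2, 1]);
    Ok(())
}
```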
23e105cd94
Add the gradient for reduce-sum. ( #162 )
...
* Add the gradient for reduce-sum.
* And add the gradient for the broadcast ops.
* Add some backprop tests.
* Add some linear regression example.
2023-07-13 20:14:10 +01:00
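With reduce-sum and broadcast gradients in place, a broadcasted affine model with a summed squared error becomes differentiable end to end. A hedged sketch of what the linear-regression example boils down to, written against today's `Var` API (the variable interface at this commit differed slightly):

```rust
use candle_core::{Device, Result, Tensor, Var};

fn main() -> Result<()> {
    let dev = Device::Cpu;
    let x = Tensor::new(&[[1f32], [2.], [3.], [4.]], &dev)?;
    let y = Tensor::new(&[[3f32], [5.], [7.], [9.]], &dev)?; // y = 2x + 1
    let w = Var::new(&[[0f32]], &dev)?;
    let b = Var::new(0f32, &dev)?;
    // Forward pass: broadcast_add for the bias, sum_all for the scalar loss.
    let pred = x.matmul(&w)?.broadcast_add(&b)?;
    let loss = (pred - &y)?.sqr()?.sum_all()?;
    // Backward pass populates a gradient store keyed by tensor.
    let grads = loss.backward()?;
    let grad_w = grads.get(&w).expect("no gradient for w");
    println!("dL/dw = {grad_w}");
    Ok(())
}
```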
ded93a1169
Add the SGD optimizer ( #160 )
...
* Add the nn::optim and some conversion traits.
* Add the backward_step function for SGD.
* Get the SGD optimizer to work and add a test.
* Make the test slightly simpler.
2023-07-13 19:05:44 +01:00
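`backward_step` folds the backward pass and the parameter update into a single call. A usage sketch with assumed signatures (today's `candle_nn` exposes `SGD` through an `Optimizer` trait):

```rust
use candle_core::{Device, Result, Tensor, Var};
use candle_nn::{Optimizer, SGD};

fn main() -> Result<()> {
    let dev = Device::Cpu;
    let x = Tensor::new(&[[1f32], [2.], [3.], [4.]], &dev)?;
    let y = Tensor::new(&[[3f32], [5.], [7.], [9.]], &dev)?;
    let w = Var::new(&[[0f32]], &dev)?;
    let b = Var::new(0f32, &dev)?;
    let mut sgd = SGD::new(vec![w.clone(), b.clone()], 0.004)?;
    for _step in 0..200 {
        let pred = x.matmul(&w)?.broadcast_add(&b)?;
        let loss = (pred - &y)?.sqr()?.sum_all()?;
        // Computes gradients and applies v <- v - lr * grad for each var.
        sgd.backward_step(&loss)?;
    }
    println!("w = {}, b = {}", w.as_tensor(), b.as_tensor());
    Ok(())
}
```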
5ee3c95582
Move the variable creation to the variable module. ( #159 )
...
* Move the variable creation to the variable module.
* Make it possible to set a variable.
* Add some basic gradient descent test.
* Get the gradient descent test to work.
2023-07-13 16:55:40 +01:00
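In today's crate this surface lives on `Var`: construction in the variable module plus an explicit `set` for in-place updates, which is exactly what a hand-rolled gradient-descent step needs. A sketch (the API at this commit may have differed):

```rust
use candle_core::{Device, Result, Tensor, Var};

fn main() -> Result<()> {
    let dev = Device::Cpu;
    let v = Var::new(&[1f32, 2., 3.], &dev)?;
    // Overwrite the variable's storage in place rather than rebinding it.
    let update = Tensor::new(&[0.9f32, 1.8, 2.7], &dev)?;
    v.set(&update)?;
    println!("{}", v.as_tensor());
    Ok(())
}
```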
6991036bc5
Introduce the variables api used for adjusting parameters during the training loop. ( #158 )
...
* Add the variable api.
* And add a comment.
2023-07-13 14:09:51 +01:00
7adc8c903a
Expose the storage publicly. ( #157 )
2023-07-13 13:52:36 +01:00
21aa29ddce
Use a rwlock for inner mutability. ( #156 )
...
* Use a rw-lock.
* Make clippy happier.
2023-07-13 11:25:24 +01:00
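The pattern here is standard Rust: share the storage behind `Arc<RwLock<_>>` so many readers can proceed in parallel while writes take exclusive access. A generic sketch, not the crate's actual types:

```rust
use std::sync::{Arc, RwLock};

fn main() {
    // Cheap-to-clone handle to shared, internally mutable storage.
    let storage = Arc::new(RwLock::new(vec![0f32; 4]));

    // Any number of concurrent readers...
    let len = storage.read().unwrap().len();
    println!("len = {len}");

    // ...but writes take the lock exclusively.
    storage.write().unwrap()[0] = 1.0;
}
```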
dfabc708f2
Fix a comment. ( #155 )
2023-07-13 11:11:37 +01:00
50b0946a2d
Tensor mutability ( #154 )
...
* Working towards tensor mutability.
* Use a ref-cell to provide tensor mutability.
2023-07-13 11:04:40 +01:00
a86ec4b9f0
Add more documentation and examples. ( #149 )
...
* Add more documentation and examples.
* More documentation and tests.
* Document more tensor functions.
* Again more examples and tests.
2023-07-12 17:40:17 +01:00
8aab787384
Test the index op + bugfix. ( #148 )
2023-07-12 15:42:36 +01:00
ba35d895e7
Sketch the candle-transformers crate. ( #147 )
...
* Sketch the candle-transformers crate.
* Format the empty files.
2023-07-12 13:49:31 +01:00
20599172ac
Add from_iter and arange, use it in the doctests. ( #145 )
2023-07-12 12:03:01 +01:00
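Doctest-style usage of the two constructors (hedged; signatures as in today's crate):

```rust
use candle_core::{Device, Result, Tensor};

fn main() -> Result<()> {
    let dev = Device::Cpu;
    // Build a 1D tensor from any iterator of scalars.
    let a = Tensor::from_iter(0u32..6, &dev)?;
    // Half-open range [0, 6) with a unit step.
    let b = Tensor::arange(0u32, 6u32, &dev)?;
    assert_eq!(a.to_vec1::<u32>()?, b.to_vec1::<u32>()?);
    Ok(())
}
```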
b3b39cca92
Llama batch ( #144 )
...
* Add a batch dimension to llama.
* Bugfixes.
2023-07-12 11:38:19 +01:00
bcf96e3cf3
Implement the backend trait for the cpu backend. ( #143 )
2023-07-12 09:54:33 +01:00
a76ec797da
Cleanup the main crate error and add a couple dedicated ones ( #142 )
...
* Cosmetic cleanups to the error enum.
* More error cleanup.
* Proper error handling rather than panicking.
* Add a dedicated conv1d error.
2023-07-12 09:17:08 +01:00
fa760759e5
Allow for lazy loading of npz files, use it in llama to reduce memory usage in the cpu version. ( #141 )
2023-07-11 20:22:34 +01:00
37cad85869
Resurrect the llama npy support. ( #140 )
2023-07-11 19:32:10 +01:00
64264d97c1
Modular backends ( #138 )
...
* Add some trait to formalize backends.
* Use the generic backend trait.
2023-07-11 11:17:02 +01:00
674eb35e10
Remove some dead-code pragmas. ( #137 )
2023-07-11 09:33:59 +01:00
ae79c00e48
Allow for uniform initialization in a single step. ( #136 )
2023-07-11 08:52:29 +01:00
2be09dbb1d
Macroify the repeating bits. ( #129 )
2023-07-10 19:44:06 +01:00
23849cb6e6
Merge pull request #124 from LaurentMazare/new_doc
...
Squeeze/unsqueeze/reshape
2023-07-10 20:43:23 +02:00
fba07d6b6b
Merge pull request #127 from LaurentMazare/tensor_indexing
...
`i(..)` indexing sugar (partial).
2023-07-10 19:56:34 +02:00
1ad235953b
Clippy?
2023-07-10 19:34:38 +02:00
c9d354f5ae
Update candle-core/src/tensor.rs
2023-07-10 19:29:22 +02:00
f29b77ec19
Random initializers. ( #128 )
...
* Random initialization.
* CPU rng generation.
2023-07-10 18:26:21 +01:00
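A hedged sketch of the two initializers as they exist in today's crate, uniform and normal:

```rust
use candle_core::{Device, Result, Tensor};

fn main() -> Result<()> {
    let dev = Device::Cpu;
    // Uniform samples drawn from [0, 1).
    let u = Tensor::rand(0f32, 1f32, (2, 3), &dev)?;
    // Normal samples with the given mean and standard deviation.
    let n = Tensor::randn(0f32, 1f32, (2, 3), &dev)?;
    println!("{u}\n{n}");
    Ok(())
}
```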
5ea747c047
Update candle-core/src/indexer.rs
2023-07-10 19:02:35 +02:00
ef0375d8bc
`i(..)` indexing sugar (partial).
...
- Only range and select (no tensor_select)
- No negative indexing
2023-07-10 17:34:04 +02:00
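Given the stated limits (ranges and select only, no negative indexing), the sugar presumably behaves like this; a sketch using the `IndexOp` trait as it exists in today's crate:

```rust
use candle_core::{Device, IndexOp, Result, Tensor};

fn main() -> Result<()> {
    let t = Tensor::new(&[[1f32, 2., 3.], [4., 5., 6.]], &Device::Cpu)?;
    let row = t.i(1)?;           // select along dim 0: shape [3]
    let cols = t.i((.., 0..2))?; // a range per dimension: shape [2, 2]
    println!("{row}\n{cols}");
    Ok(())
}
```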
e2807c78a4
Enable the doctests to run with mkl (though they are broken for now). ( #126 )
2023-07-10 16:27:46 +01:00
548b1df7ea
Remove the dependency to blas and use mkl directly. ( #125 )
2023-07-10 15:52:03 +01:00
e01d099b71
Squeeze/unsqueeze/reshape
2023-07-10 16:40:25 +02:00
221b1aff65
Support dgemm in mkl matmul. ( #122 )
2023-07-10 15:02:37 +01:00
9a667155fd
Removed commented deny
2023-07-10 15:18:23 +02:00
2c8fbe8155
oops.
2023-07-10 15:13:52 +02:00
49f4a77ffd
Put them back.
2023-07-10 15:11:48 +02:00
38ac50eeda
Adding some doc + Extended stack to work with extra final dimensions.
2023-07-10 14:51:10 +02:00
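`stack` concatenates its inputs along a fresh dimension; the change described above lets the inputs carry additional trailing dimensions. A hedged usage sketch against today's API:

```rust
use candle_core::{DType, Device, Result, Tensor};

fn main() -> Result<()> {
    let dev = Device::Cpu;
    let a = Tensor::zeros((2, 3), DType::F32, &dev)?;
    let b = Tensor::ones((2, 3), DType::F32, &dev)?;
    // Stacking along a new leading dimension: two [2, 3] -> [2, 2, 3].
    let s = Tensor::stack(&[&a, &b], 0)?;
    assert_eq!(s.dims(), &[2, 2, 3]);
    Ok(())
}
```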