6bd8c2d34b
Adding benchmark.
2023-08-29 17:01:40 +02:00
dfd624dbd3
[Proposal] Remove SafeTensor wrapper (allows finer control for users).
2023-07-19 16:25:44 +02:00
67e20c3792
Sum over more dims. ( #197 )
2023-07-19 06:46:32 +01:00
76dcc7a381
Test the broadcasting binary ops. ( #196 )
2023-07-19 06:18:36 +01:00
fd55fc9592
Add an optimized case when performing the softmax over the last dimension. ( #195 )
2023-07-18 17:59:50 +01:00
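The optimized case above relies on the last dimension of a row-major tensor being contiguous, so the softmax reduces to linear scans over flat slices. A minimal sketch in plain Rust (illustrative only, not the candle source):

```rust
// Softmax over the last dimension of a contiguous row-major buffer.
// Each row is one flat slice, so the max, exponentiation, and
// normalization are simple linear scans the compiler can vectorize.
fn softmax_last_dim(data: &mut [f32], last_dim: usize) {
    for row in data.chunks_mut(last_dim) {
        // Subtract the row max for numerical stability.
        let max = row.iter().copied().fold(f32::NEG_INFINITY, f32::max);
        let mut sum = 0f32;
        for v in row.iter_mut() {
            *v = (*v - max).exp();
            sum += *v;
        }
        for v in row.iter_mut() {
            *v /= sum;
        }
    }
}

fn main() {
    let mut data = vec![1f32, 2., 3., 1., 2., 3.];
    softmax_last_dim(&mut data, 3);
    let row_sum: f32 = data[..3].iter().sum();
    assert!((row_sum - 1.0).abs() < 1e-5);
}
```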
6623c227d8
Allow the compiler to vectorize some broadcasting loops. ( #194 )
* Allow the compiler to vectorize some broadcasting loops.
* Improve the symmetrical broadcasting case.
2023-07-18 17:12:32 +01:00
79a5b686d0
Properly use the offset when broadcasting on a narrow slice. ( #193 )
2023-07-18 16:36:23 +01:00
a45a3f0312
Optimize the sum for the contiguous case. ( #192 )
2023-07-18 14:57:06 +01:00
3307db204a
Mklize more unary ops. ( #191 )
* Mklize more unary ops.
* Even more unary ops.
2023-07-18 13:32:49 +01:00
ff61a42ad7
Use mkl to accelerate binary ops. ( #190 )
* Vectorized binary ops with mkl.
* Improve the binary op mkl support.
* Push the support for mkl binary ops.
* Proper vectorization of binary ops.
* Proper mkl'isation when broadcasting binary ops.
2023-07-18 12:04:39 +01:00
b706f32839
Add a `TryInto` implementation for shapes. ( #189 )
* Add the TryInto trait for shapes.
* Use the vectorized operations in block mode too.
2023-07-18 10:52:16 +01:00
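The idea of a `TryInto` for shapes is a fallible conversion from a dynamically ranked shape into a fixed-arity form, failing when the rank does not match. A hypothetical sketch (the `Shape` and `ShapeError` types here are illustrative, not candle's actual definitions):

```rust
// A shape backed by a dynamic Vec<usize> that can try-convert into a
// fixed-arity tuple, erroring at runtime on a rank mismatch.
#[derive(Debug, Clone, PartialEq)]
struct Shape(Vec<usize>);

#[derive(Debug)]
struct ShapeError {
    expected: usize,
    got: usize,
}

impl TryFrom<&Shape> for (usize, usize) {
    type Error = ShapeError;
    fn try_from(s: &Shape) -> Result<Self, ShapeError> {
        match s.0.as_slice() {
            // Exactly rank 2: the conversion succeeds.
            &[a, b] => Ok((a, b)),
            dims => Err(ShapeError { expected: 2, got: dims.len() }),
        }
    }
}

fn main() {
    let s = Shape(vec![2, 3]);
    // TryFrom automatically provides TryInto on the source type.
    let (rows, cols): (usize, usize) = (&s).try_into().unwrap();
    assert_eq!((rows, cols), (2, 3));
    // Wrong rank: the conversion fails instead of panicking.
    assert!(<(usize, usize)>::try_from(&Shape(vec![4])).is_err());
}
```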
d73df74cb2
Preliminary support for mkl based gelu. ( #187 )
* Preliminary support for mkl based gelu.
* Add the vectorized function for unary ops.
* Get the mkl specialized gelu to work.
2023-07-18 07:48:48 +01:00
c3a73c583e
Add support for mkl tanh. ( #185 )
2023-07-17 22:06:43 +01:00
acb2f90469
Broadcasting performance optimization (cpu) ( #182 )
* Avoid recomputing the index from scratch each time.
* More performance optimisations.
2023-07-17 13:41:09 +01:00
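"Avoid recomputing the index from scratch each time" refers to carrying the multi-dimensional index along like an odometer, touching only the dimensions that roll over, instead of doing div/mod on the linear position for every element. A sketch of the idea in plain Rust (illustrative, not the candle implementation):

```rust
// Advance a multi-dimensional index in place, odometer-style: increment
// the innermost dimension and only carry into outer ones on rollover.
// Returns false once the whole index space has been visited.
fn next_index(index: &mut [usize], dims: &[usize]) -> bool {
    for (i, d) in index.iter_mut().zip(dims).rev() {
        *i += 1;
        if *i < *d {
            return true; // no carry needed beyond this dimension
        }
        *i = 0; // roll over and carry into the next dimension
    }
    false // wrapped around: iteration is done
}

fn main() {
    let dims = [2, 3];
    let mut idx = [0, 0];
    let mut visited = vec![idx.to_vec()];
    while next_index(&mut idx, &dims) {
        visited.push(idx.to_vec());
    }
    // All 2 * 3 = 6 positions, in row-major order.
    assert_eq!(visited.len(), 6);
    assert_eq!(visited[3], vec![1, 0]);
}
```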
5b1c0bc9be
Performance improvement. ( #181 )
2023-07-17 11:07:14 +01:00
28e1c07304
Process unary functions per block ( #180 )
* Process unary functions per block.
* Add some inline hints.
2023-07-17 10:22:33 +01:00
104f89df31
Centralize the dependency versions and inherit them. ( #177 )
2023-07-16 07:47:17 +01:00
18ea92d83b
Iteration over strided blocks ( #175 )
* Introduce the strided blocks.
* Use the strided blocks to speed up the copy.

* Add more testing.
2023-07-15 21:30:35 +01:00
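The strided-blocks idea: when the trailing dimensions of a strided tensor are laid out contiguously, a copy can move whole blocks at a time instead of one element at a time, letting `copy_from_slice` (effectively a memcpy) do the work. A minimal 2-D sketch, assuming a unit inner stride (illustrative, not the candle code):

```rust
// Copy a strided 2-D view into a contiguous destination, one contiguous
// row-block at a time. Each row of `cols` elements is one block; only
// the jump between rows uses `row_stride`.
fn copy_strided_2d(src: &[f32], dst: &mut [f32], rows: usize, cols: usize, row_stride: usize) {
    for r in 0..rows {
        let block = &src[r * row_stride..r * row_stride + cols];
        dst[r * cols..(r + 1) * cols].copy_from_slice(block);
    }
}

fn main() {
    // A 2x2 view (cols = 2) of a 2x3 buffer (row_stride = 3): a narrow slice.
    let src = [0f32, 1., 2., 3., 4., 5.];
    let mut dst = [0f32; 4];
    copy_strided_2d(&src, &mut dst, 2, 2, 3);
    assert_eq!(dst, [0., 1., 3., 4.]);
}
```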
66750f9827
Add a 'cuda-if-available' helper function. ( #172 )
2023-07-15 08:25:15 +01:00
3672e1a46f
Revert "Testing fmt CI check behind cuda feature flag."
This reverts commit b9605310b1.
2023-07-14 15:18:14 +00:00
b9605310b1
Testing fmt CI check behind cuda feature flag.
2023-07-14 15:14:52 +00:00
dcb4a9291e
Explain how to enable cuda.
2023-07-14 17:08:05 +02:00
4ed56d7861
Removing cuda default.
This seems important for the many exploring users, who are usually on laptops
without GPUs.
More README instructions will be added in a follow-up.
2023-07-14 16:52:15 +02:00
88f666781f
Wasm proof of concept. ( #167 )
* Wasm proof of concept.
* Run whisper inference in the browser.
* Some fixes.
* Move the wasm example.
* Change the tokenizer config.
2023-07-14 14:51:46 +01:00
d88b6cdca9
Add backtrace information to errors where relevant. ( #166 )
* Add backtrace information to errors where relevant.
* More backtrace information.
* Add to the FAQ.
2023-07-14 09:31:25 +01:00
a2f72edc0d
Simplify the parameters used by sum and sum_keepdim. ( #165 )
2023-07-14 08:22:08 +01:00
2bfa791336
Use the same default as pytorch for sum. ( #164 )
2023-07-13 21:32:32 +01:00
23e105cd94
Add the gradient for reduce-sum. ( #162 )
* Add the gradient for reduce-sum.
* And add the gradient for the broadcast ops.
* Add some backprop tests.
* Add some linear regression example.
2023-07-13 20:14:10 +01:00
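The gradient of reduce-sum follows from the fact that every input element contributes to the output with coefficient 1, so the backward pass just broadcasts the upstream gradient back across the summed dimension. A small sketch of the math on flat buffers (not candle's actual backprop code):

```rust
// Forward: sum each row of a rows x cols matrix stored row-major.
fn sum_rows(x: &[f32], rows: usize, cols: usize) -> Vec<f32> {
    (0..rows)
        .map(|r| x[r * cols..(r + 1) * cols].iter().sum::<f32>())
        .collect()
}

// Backward: broadcast each row's upstream gradient over the `cols`
// entries that were summed together.
fn sum_rows_backward(grad_out: &[f32], cols: usize) -> Vec<f32> {
    grad_out
        .iter()
        .flat_map(|&g| std::iter::repeat(g).take(cols))
        .collect()
}

fn main() {
    let x = [1f32, 2., 3., 4., 5., 6.];
    assert_eq!(sum_rows(&x, 2, 3), vec![6., 15.]);
    let gx = sum_rows_backward(&[1., 10.], 3);
    assert_eq!(gx, vec![1., 1., 1., 10., 10., 10.]);
}
```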
ded93a1169
Add the SGD optimizer ( #160 )
* Add the nn::optim and some conversion traits.
* Add the backward_step function for SGD.
* Get the SGD optimizer to work and add a test.
* Make the test slightly simpler.
2023-07-13 19:05:44 +01:00
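The `backward_step` of plain SGD moves each parameter against its gradient, scaled by the learning rate. A minimal sketch of that update rule (illustrative; candle's optimizer API works on variables, not raw slices):

```rust
// One SGD step: p <- p - lr * g for every (parameter, gradient) pair.
fn sgd_step(params: &mut [f32], grads: &[f32], lr: f32) {
    for (p, g) in params.iter_mut().zip(grads) {
        *p -= lr * g;
    }
}

fn main() {
    // Minimize f(w) = w^2 starting from w = 1.0; the gradient is 2w,
    // so repeated steps shrink w geometrically toward 0.
    let mut w = [1.0f32];
    for _ in 0..100 {
        let g = [2.0 * w[0]];
        sgd_step(&mut w, &g, 0.1);
    }
    assert!(w[0].abs() < 1e-3);
}
```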
5ee3c95582
Move the variable creation to the variable module. ( #159 )
* Move the variable creation to the variable module.
* Make it possible to set a variable.
* Add some basic gradient descent test.
* Get the gradient descent test to work.
2023-07-13 16:55:40 +01:00
6991036bc5
Introduce the variables API, used for adjusting parameters during the training loop. ( #158 )
* Add the variable api.
* And add a comment.
2023-07-13 14:09:51 +01:00
7adc8c903a
Expose the storage publicly. ( #157 )
2023-07-13 13:52:36 +01:00
21aa29ddce
Use a rwlock for inner mutability. ( #156 )
* Use a rw-lock.
* Make clippy happier.
2023-07-13 11:25:24 +01:00
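The `RwLock` pattern above gives a shared tensor handle interior mutability: many readers can access the storage concurrently, while in-place updates (e.g. from an optimizer) take an exclusive write lock. A hypothetical sketch of the shape of it (the `Storage` type here is illustrative, not candle's):

```rust
use std::sync::{Arc, RwLock};

// A handle that shares its storage behind Arc<RwLock<_>>; clones point
// at the same buffer, so an update through one handle is visible to all.
#[derive(Clone)]
struct Storage(Arc<RwLock<Vec<f32>>>);

impl Storage {
    fn new(data: Vec<f32>) -> Self {
        Storage(Arc::new(RwLock::new(data)))
    }
    fn sum(&self) -> f32 {
        // Shared read access: other readers may proceed concurrently.
        self.0.read().unwrap().iter().sum()
    }
    fn add_assign(&self, v: f32) {
        // Exclusive write access for the in-place update.
        for x in self.0.write().unwrap().iter_mut() {
            *x += v;
        }
    }
}

fn main() {
    let s = Storage::new(vec![1., 2., 3.]);
    let handle = s.clone(); // same underlying buffer
    handle.add_assign(1.0);
    assert_eq!(s.sum(), 9.0);
}
```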
dfabc708f2
Fix a comment. ( #155 )
2023-07-13 11:11:37 +01:00
50b0946a2d
Tensor mutability ( #154 )
* Working towards tensor mutability.
* Use a ref-cell to provide tensor mutability.
2023-07-13 11:04:40 +01:00
a86ec4b9f0
Add more documentation and examples. ( #149 )
* Add more documentation and examples.
* More documentation and tests.
* Document more tensor functions.
* Again more examples and tests.
2023-07-12 17:40:17 +01:00
8aab787384
Test the index op + bugfix. ( #148 )
2023-07-12 15:42:36 +01:00
ba35d895e7
Sketch the candle-transformers crate. ( #147 )
* Sketch the candle-transformers crate.
* Format the empty files.
2023-07-12 13:49:31 +01:00
20599172ac
Add from_iter and arange, use it in the doctests. ( #145 )
2023-07-12 12:03:01 +01:00
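The semantics are the familiar ones: `arange` builds the half-open range `[start, end)` with unit steps, and a `from_iter`-style constructor collects any iterator of values into flat storage. A sketch on plain vectors (function names here mirror the commit, not candle's exact signatures, which take a device and dtype):

```rust
// Build [start, end) with unit steps, like a minimal arange.
fn arange(start: f32, end: f32) -> Vec<f32> {
    // Float-to-int casts saturate, so an empty range yields n = 0.
    let n = (end - start).ceil().max(0.0) as usize;
    (0..n).map(|i| start + i as f32).collect()
}

fn main() {
    assert_eq!(arange(0., 4.), vec![0., 1., 2., 3.]);
    // from_iter-style construction: collect an iterator into storage.
    let squares: Vec<f32> = arange(0., 4.).into_iter().map(|v| v * v).collect();
    assert_eq!(squares, vec![0., 1., 4., 9.]);
}
```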
b3b39cca92
Llama batch ( #144 )
* Add a batch dimension to llama.
* Bugfixes.
2023-07-12 11:38:19 +01:00
bcf96e3cf3
Implement the backend trait for the cpu backend. ( #143 )
2023-07-12 09:54:33 +01:00
a76ec797da
Clean up the main crate error and add a couple of dedicated ones ( #142 )
* Cosmetic cleanups to the error enum.
* More error cleanup.
* Proper error handling rather than panicking.
* Add some conv1d dedicated error.
2023-07-12 09:17:08 +01:00
fa760759e5
Allow for lazy loading of npz files; use it in llama to reduce memory usage in the cpu version. ( #141 )
2023-07-11 20:22:34 +01:00
37cad85869
Resurrect the llama npy support. ( #140 )
2023-07-11 19:32:10 +01:00
64264d97c1
Modular backends ( #138 )
* Add some trait to formalize backends.
* Use the generic backend trait.
2023-07-11 11:17:02 +01:00
674eb35e10
Remove some dead-code pragmas. ( #137 )
2023-07-11 09:33:59 +01:00
ae79c00e48
Allow for uniform initialization in a single step. ( #136 )
2023-07-11 08:52:29 +01:00
2be09dbb1d
Macroify the repeating bits. ( #129 )
2023-07-10 19:44:06 +01:00
23849cb6e6
Merge pull request #124 from LaurentMazare/new_doc
Squeeze/unsqueeze/reshape
2023-07-10 20:43:23 +02:00
fba07d6b6b
Merge pull request #127 from LaurentMazare/tensor_indexing
`i(..)` indexing sugar (partial).
2023-07-10 19:56:34 +02:00