Commit Graph

601 Commits

SHA1 Message Date
43c7223292 Rename the .r functions to .dims so as to be a bit more explicit. (#220) 2023-07-22 10:39:27 +01:00
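
A minimal sketch of the renamed shape accessors, assuming the `dims`/`dims2` methods as exposed on `Tensor` in today's candle-core:

```rust
use candle_core::{DType, Device, Result, Tensor};

fn shapes() -> Result<()> {
    let t = Tensor::zeros((2, 3), DType::F32, &Device::Cpu)?;
    // `dims` returns the shape as a slice of dimension sizes.
    assert_eq!(t.dims(), &[2, 3]);
    // The arity-checked variant returns a tuple and errors out on a rank mismatch.
    let (rows, cols) = t.dims2()?;
    assert_eq!((rows, cols), (2, 3));
    Ok(())
}
```
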
52c5d8c087 Add the gather op. (#219)
* Start adding gather.

* Gather cpu implementation + use in simple training.

* Add scatter_add for the gradient of gather.

* Simple cpu implementation of scatter_add.

* Use gather in the simple-training backprop.
2023-07-22 07:21:28 +01:00
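
A usage sketch of the new gather op and its scatter_add counterpart (the adjoint used for the gather gradient), assuming the `Tensor::gather`/`Tensor::scatter_add` signatures of the current candle-core:

```rust
use candle_core::{DType, Device, Result, Tensor};

fn gather_example() -> Result<()> {
    let dev = Device::Cpu;
    let values = Tensor::new(&[[1f32, 2., 3.], [4., 5., 6.]], &dev)?;
    // Pick one element per row along dim 1; the index tensor has the output shape.
    let ids = Tensor::new(&[[2u32], [0u32]], &dev)?;
    let picked = values.gather(&ids, 1)?; // [[3.], [4.]]
    // scatter_add adds `picked` back into a zero tensor at the gathered positions,
    // which is exactly the shape of the gather gradient.
    let zeros = Tensor::zeros((2, 3), DType::F32, &dev)?;
    let spread = zeros.scatter_add(&ids, &picked, 1)?;
    println!("{picked}\n{spread}");
    Ok(())
}
```
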
6eeea1b04e Polish the index-add op and use it in the index-select backprop (#218)
* Add the cpu version of index-add.

* More cpu support for index-add.

* Use index-add in the backprop.
2023-07-22 05:31:46 +01:00
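
A short sketch of index-add, assuming the current `Tensor::index_add` signature (accumulator, 1-D u32 indexes, source, dim); repeated indexes accumulate:

```rust
use candle_core::{DType, Device, Result, Tensor};

fn index_add_example() -> Result<()> {
    let dev = Device::Cpu;
    let acc = Tensor::zeros((4, 2), DType::F32, &dev)?;
    let src = Tensor::new(&[[1f32, 1.], [2., 2.], [3., 3.]], &dev)?;
    // Rows 0 and 2 of `src` both target row 1 of `acc`, so their values accumulate.
    let ids = Tensor::new(&[1u32, 3, 1], &dev)?;
    let out = acc.index_add(&ids, &src, 0)?;
    println!("{out}"); // row 1 = [4., 4.], row 3 = [2., 2.]
    Ok(())
}
```
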
27174a82aa Start adding index-add. 2023-07-21 20:12:48 +01:00
5cc843550d Add binary and ternary custom ops. (#217) 2023-07-21 17:29:50 +01:00
4a100875bf Use a macro to handle the dtype pattern matching. (#215) 2023-07-21 16:03:51 +01:00
a6bcdfb269 Custom ops with a single argument (#214)
* Add the CustomOp1 trait.

* Add an example of custom op.

* Polish the custom op example.

* Add some backward pass test for custom ops.
2023-07-21 15:18:05 +01:00
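
A hedged sketch of a single-argument custom op. The trait and entry-point names below (`CustomOp1`, `cpu_fwd`, `apply_op1`) follow a recent candle-core and may differ from the code as of this commit; `Relu6` is a made-up op for illustration and only the contiguous f32 CPU case is handled:

```rust
use candle_core::{CpuStorage, CustomOp1, Layout, Result, Shape, Tensor};

// A toy single-argument custom op: clamp values into [0, 6].
struct Relu6;

impl CustomOp1 for Relu6 {
    fn name(&self) -> &'static str {
        "relu6"
    }

    // Only the contiguous f32 CPU case is handled in this sketch.
    fn cpu_fwd(&self, storage: &CpuStorage, layout: &Layout) -> Result<(CpuStorage, Shape)> {
        let (start, end) = match layout.contiguous_offsets() {
            Some(offsets) => offsets,
            None => candle_core::bail!("relu6 only supports contiguous inputs"),
        };
        let data = &storage.as_slice::<f32>()?[start..end];
        let out: Vec<f32> = data.iter().map(|v| v.clamp(0.0, 6.0)).collect();
        Ok((CpuStorage::F32(out), layout.shape().clone()))
    }
}

fn run(xs: &Tensor) -> Result<Tensor> {
    xs.apply_op1(Relu6)
}
```
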
b02229ce92 Add some epsilon tolerance to grad tests so that they work on cuda / mkl. (#213) 2023-07-21 12:45:14 +01:00
410654525f Refactor the reduce ops in order to introduce argmin/argmax. (#212)
* Refactor the reduce ops in order to introduce argmin/argmax.

* Clippy fixes.

* Use the newly introduced argmax.

* Fix the strided case.

* Handle the non-contiguous case.
2023-07-21 11:41:08 +01:00
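
A minimal example of the newly introduced arg-reductions, assuming the current `Tensor::argmax`/`Tensor::argmin` methods (one index per slice along the reduced dim, returned as u32):

```rust
use candle_core::{Device, Result, Tensor};

fn arg_reduce() -> Result<()> {
    let t = Tensor::new(&[[3f32, 1., 4.], [1., 5., 9.]], &Device::Cpu)?;
    // Index of the largest / smallest element along dim 1, one result per row.
    let amax = t.argmax(1)?; // [2, 2]
    let amin = t.argmin(1)?; // [1, 0]
    println!("{amax} {amin}");
    Ok(())
}
```
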
c60831aad4 Add more gradient tests + bugfixes. (#211)
* Add more gradient tests + bugfixes.

* More tests and fixes.

* More tests.
2023-07-21 06:52:39 +01:00
4845d5cc64 More realistic training setup. (#210)
* More realistic training setup.

* Compute the model accuracy.

* Very inefficient backprop for index select.

* More backprop.

* Fix some backprop issues.

* Backprop fix.

* Another broadcasting backprop fix.

* Better backprop for reducing ops.

* Training again.

* Add some gradient tests.

* Get the training to work.
2023-07-20 18:25:41 +01:00
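
The training setup above computes model accuracy; a hedged sketch of how that is typically done with argmax plus a comparison, assuming f32 logits and u32 labels (the `accuracy` helper name is illustrative, not the example's own):

```rust
use candle_core::{DType, Result, Tensor, D};

// Fraction of rows where the argmax of the logits matches the label.
fn accuracy(logits: &Tensor, labels: &Tensor) -> Result<f32> {
    let predictions = logits.argmax(D::Minus1)?; // u32 indices, same dtype as labels
    let correct = predictions
        .eq(labels)?
        .to_dtype(DType::F32)?
        .mean_all()?;
    correct.to_scalar::<f32>()
}
```
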
fa08fb3126 Add the index-select op. (#209)
* Add the index-select op.

* Cpu implementation of index-select.

* Add the cpu implementation for index-select.
2023-07-20 14:01:03 +01:00
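
A short sketch of index-select used as an embedding-style row lookup, assuming the current `Tensor::index_select` signature with 1-D u32 indexes:

```rust
use candle_core::{Device, Result, Tensor};

fn embedding_lookup() -> Result<()> {
    let dev = Device::Cpu;
    // A toy 4 x 2 "embedding table"; index_select picks whole rows by id.
    let table = Tensor::new(&[[0f32, 0.], [1., 1.], [2., 2.], [3., 3.]], &dev)?;
    let ids = Tensor::new(&[2u32, 0, 2], &dev)?;
    let rows = table.index_select(&ids, 0)?; // shape (3, 2)
    assert_eq!(rows.dims(), &[3, 2]);
    Ok(())
}
```
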
2a8f28d687 Op refactor (#208)
* Add the binary and unary op enums to factorize some code.

* Bugfix.
2023-07-20 12:28:45 +01:00
e9c052bf94 Add the comparison operations. (#207)
* Add the comparison operations.

* Add the helper functions on the tensor side.

* More cmp operations.

* Cpu implementation for the comparison operations.
2023-07-20 09:40:31 +01:00
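
A minimal sketch of the element-wise comparison helpers on the tensor side, assuming the `eq`/`lt`/`ge` methods as exposed in today's candle-core (results are u8 masks of 0s and 1s):

```rust
use candle_core::{Device, Result, Tensor};

fn comparisons() -> Result<()> {
    let dev = Device::Cpu;
    let a = Tensor::new(&[1f32, 2., 3.], &dev)?;
    let b = Tensor::new(&[3f32, 2., 1.], &dev)?;
    // Element-wise comparisons return a u8 mask of 0s and 1s.
    let eq = a.eq(&b)?; // [0, 1, 0]
    let lt = a.lt(&b)?; // [1, 0, 0]
    let ge = a.ge(&b)?; // [0, 1, 1]
    println!("{eq} {lt} {ge}");
    Ok(())
}
```
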
dc416243a3 Bump the hf-hub dependency to 0.1.3. (#206) 2023-07-20 07:27:52 +01:00
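
The hf-hub API has changed since 0.1.3; against a recent release the blocking API looks roughly like the sketch below. The repo id and filename are placeholders, and `anyhow` is used only for error plumbing in this sketch:

```rust
use hf_hub::api::sync::Api;

fn fetch_config() -> anyhow::Result<std::path::PathBuf> {
    // Downloads (or reuses the cached copy of) a file from the Hugging Face Hub.
    let api = Api::new()?;
    let repo = api.model("bert-base-uncased".to_string());
    let path = repo.get("config.json")?;
    Ok(path)
}
```
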
12d6dc018d Support for MQA for llama v2. (#205)
* Support for MQA for llama v2.

* More llama-v2.

* Move the rotary embedding precomputation in the cache.

* Add a v2 flag.

* Use the hf model.
2023-07-20 06:39:04 +01:00
c34f932319 Fix the mkl build. (#204)
* Fix the mkl build.

* Fix the build properly.
2023-07-19 19:41:11 +01:00
536c5e702e Cuda kernels for fast min/max reductions (#203)
* Add the min/max cuda kernels.

* Better integration of the cuda kernels.
2023-07-19 18:12:27 +01:00
001f9a59ce Merge pull request #201 from LaurentMazare/remove_wrapper
[Proposal] Remove SafeTensor wrapper (allows finer control for users).
2023-07-19 19:02:37 +02:00
9515e8ea6c Merge branch 'main' into remove_wrapper 2023-07-19 18:53:55 +02:00
ad12e20f6b Add cpu support for min and max. (#202)
* Add cpu support for min and max.

* Add min/max all.
2023-07-19 17:11:44 +01:00
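
A short sketch of the per-dimension and whole-tensor min/max reductions, assuming the `min`/`max`/`min_all`/`max_all` methods of the current candle-core:

```rust
use candle_core::{Device, Result, Tensor};

fn min_max() -> Result<()> {
    let t = Tensor::new(&[[3f32, 1., 4.], [1., 5., 9.]], &Device::Cpu)?;
    // Reduce along dim 1: one value per row, the reduced dim is squeezed away.
    let row_max = t.max(1)?; // [4., 9.]
    let row_min = t.min(1)?; // [1., 1.]
    // The *_all variants reduce over every dimension down to a scalar tensor.
    let global_max = t.max_all()?.to_scalar::<f32>()?; // 9.0
    let global_min = t.min_all()?.to_scalar::<f32>()?; // 1.0
    println!("{row_max} {row_min} {global_max} {global_min}");
    Ok(())
}
```
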
e6584476c4 Merge pull request #200 from LaurentMazare/removing_candle_hub
Remove the internal `candle-hub` crate and extract it into the standalone `hf-hub` crate.
2023-07-19 17:27:55 +02:00
cb687b4897 Add some more developed training examples. (#199)
* Use contiguous tensors for variables.

* Sketch the mnist example.

* Start adding the reduce ops.

* Renaming.

* Refactor the reduce operations.

* Bugfix for the broadcasting vectorization.
2023-07-19 15:37:52 +01:00
dfd624dbd3 [Proposal] Remove SafeTensor wrapper (allows finer control for users). 2023-07-19 16:25:44 +02:00
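
With the wrapper gone, safetensors files are loaded straight into a name-to-tensor map. A hedged sketch assuming the `candle_core::safetensors::load` helper of the current crate; the file path is a placeholder:

```rust
use candle_core::{Device, Result};

fn load_weights() -> Result<()> {
    let device = Device::Cpu;
    // Load every tensor in the file into a HashMap<String, Tensor>.
    // "model.safetensors" is a placeholder path for this sketch.
    let tensors = candle_core::safetensors::load("model.safetensors", &device)?;
    for (name, tensor) in tensors.iter() {
        println!("{name}: {:?}", tensor.dims());
    }
    Ok(())
}
```
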
439321745a Remove the internal candle-hub crate and extract it into the standalone hf-hub crate. 2023-07-19 15:04:38 +02:00
67e20c3792 Sum over more dims. (#197) 2023-07-19 06:46:32 +01:00
76dcc7a381 Test the broadcasting binary ops. (#196) 2023-07-19 06:18:36 +01:00
fd55fc9592 Add an optimized case when performing the softmax over the last dimension. (#195) 2023-07-18 17:59:50 +01:00
6623c227d8 Allow the compiler to vectorize some broadcasting loops. (#194)
* Allow the compiler to vectorize some broadcasting loops.

* Improve the symmetrical broadcasting case.
2023-07-18 17:12:32 +01:00
79a5b686d0 Properly use the offset when broadcasting on a narrow slice. (#193) 2023-07-18 16:36:23 +01:00
a45a3f0312 Optimize the sum for the contiguous case. (#192) 2023-07-18 14:57:06 +01:00
3307db204a Mklize more unary ops. (#191)
* Mklize more unary ops.

* Even more unary ops.
2023-07-18 13:32:49 +01:00
ff61a42ad7 Use mkl to accelerate binary ops. (#190)
* Vectorized binary ops with mkl.

* Improve the binary op mkl support.

* Push the support for mkl binary ops.

* Proper vectorization of binary ops.

* Proper mkl'isation when broadcasting binary ops.
2023-07-18 12:04:39 +01:00
b706f32839 Add Shape try into (#189)
* Add the TryInto trait for shapes.

* Use the vectorized operations in block mode too.
2023-07-18 10:52:16 +01:00
d6313d2447 Add more tracing details to bert. (#188) 2023-07-18 08:11:05 +01:00
d73df74cb2 Preliminary support for mkl based gelu. (#187)
* Preliminary support for mkl based gelu.

* Add the vectorized function for unary ops.

* Get the mkl specialized gelu to work.
2023-07-18 07:48:48 +01:00
b8abe2bb4b Factorize the tokenizers version in the workspace cargo def. (#186) 2023-07-18 06:48:13 +01:00
c3a73c583e Add support for mkl tanh. (#185) 2023-07-17 22:06:43 +01:00
f0cccd08f0 Bert tracing (#184)
* Add some tracing to bert.

* More tracing.

* Add a flag for tracing.
2023-07-17 19:40:42 +01:00
49ea09c73c Gemm update (#183)
* Update the gemm dependency.

* Update the comment too.

* Pin the sha256 dependency.
2023-07-17 14:05:39 +01:00
acb2f90469 Broadcasting performance optimization (cpu) (#182)
* Avoid recomputing the index from scratch each time.

* More performance optimisations.
2023-07-17 13:41:09 +01:00
5b1c0bc9be Performance improvement. (#181) 2023-07-17 11:07:14 +01:00
28e1c07304 Process unary functions per block (#180)
* Process unary functions per block.

* Add some inline hints.
2023-07-17 10:22:33 +01:00
2a74019ec6 Vision dataset (#179)
* Add some readers for the mnist dataset.

* Import the cifar and mnist dataset.
2023-07-16 23:43:55 +01:00
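
These dataset readers have since moved around the workspace; against the current candle-datasets crate, loading MNIST looks roughly like the sketch below (field names on the returned dataset are as in today's crate and assumed here):

```rust
use candle_core::Result;
use candle_datasets::vision::mnist;

fn load_mnist() -> Result<()> {
    // Fetches/loads MNIST and returns train/test images and labels as tensors.
    let m = mnist::load()?;
    println!("train images: {:?}", m.train_images.dims()); // (60000, 784) expected
    println!("train labels: {:?}", m.train_labels.dims());
    println!("test images:  {:?}", m.test_images.dims());
    Ok(())
}
```
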
6de7345e39 Improve the wasm ui. (#178)
* Improve the wasm ui.

* Improve the UI.

* Cosmetic changes.
2023-07-16 14:22:40 +01:00
104f89df31 Centralize the dependency versions and inherit them. (#177) 2023-07-16 07:47:17 +01:00
3fb1c4ea96 Add more profiling information for the wasm example. (#176) 2023-07-16 07:18:34 +01:00
18ea92d83b Iteration over strided blocks (#175)
* Introduce the strided blocks.

* Use the strided blocks to speed up the copy.

* Add more testing.
2023-07-15 21:30:35 +01:00
ad91415b4f Add some wasm profiling. (#173) 2023-07-15 09:16:15 +01:00
66750f9827 Add some 'cuda-if-available' helper function. (#172) 2023-07-15 08:25:15 +01:00
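
A minimal sketch of the cuda-if-available helper, assuming the `Device::cuda_if_available` function as exposed in the current candle-core:

```rust
use candle_core::{DType, Device, Result, Tensor};

fn pick_device() -> Result<()> {
    // Falls back to the CPU when no CUDA device (or CUDA support) is available.
    let device = Device::cuda_if_available(0)?;
    let t = Tensor::ones((2, 2), DType::F32, &device)?;
    println!("running on {:?}: {t}", device);
    Ok(())
}
```
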