12d6dc018d
Support for MQA for llama v2. ( #205 )
* Support for MQA for llama v2.
* More llama-v2.
* Move the rotary embedding precomputation into the cache.
* Add a v2 flag.
* Use the hf model.
2023-07-20 06:39:04 +01:00
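A minimal sketch of the head-sharing idea behind MQA (and the grouped variant the larger llama-v2 models use), under the assumption that groups of query heads map onto a smaller set of key/value heads; the names and example config below are illustrative, not candle's API:

```rust
// Multi-query / grouped-query attention shares K/V heads across query heads:
// with `n_head` query heads and `n_kv_head` <= `n_head` key/value heads, each
// group of `n_head / n_kv_head` query heads reads the same K/V head, which
// shrinks the KV cache by the same factor.
fn kv_head_for_query_head(q_head: usize, n_head: usize, n_kv_head: usize) -> usize {
    assert_eq!(n_head % n_kv_head, 0, "query heads must split evenly over kv heads");
    q_head / (n_head / n_kv_head)
}

fn main() {
    // A llama-v2-70B-like config: 64 query heads sharing 8 kv heads.
    let (n_head, n_kv_head) = (64, 8);
    for q in [0, 7, 8, 63] {
        println!("query head {q} -> kv head {}", kv_head_for_query_head(q, n_head, n_kv_head));
    }
}
```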
c34f932319
Fix the mkl build. ( #204 )
* Fix the mkl build.
* Fix the build properly.
2023-07-19 19:41:11 +01:00
536c5e702e
Cuda kernels for fast min/max reductions ( #203 )
* Add the min/max cuda kernels.
* Better integration of the cuda kernels.
2023-07-19 18:12:27 +01:00
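The kernels from #203 are CUDA, but the two-level shape such reductions follow can be sketched on the CPU; the block size and structure below are illustrative:

```rust
// CPU analogue of a two-level GPU reduction: each "block" reduces its chunk
// to a partial max, then the partial results are reduced to the final value.
fn max_reduce(xs: &[f32]) -> f32 {
    const BLOCK: usize = 256; // stand-in for the CUDA thread-block size
    xs.chunks(BLOCK)
        .map(|block| block.iter().copied().fold(f32::NEG_INFINITY, f32::max))
        .fold(f32::NEG_INFINITY, f32::max)
}

fn main() {
    let xs: Vec<f32> = (0..1000).map(|i| (i as f32).sin()).collect();
    println!("max = {}", max_reduce(&xs));
}
```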
001f9a59ce
Merge pull request #201 from LaurentMazare/remove_wrapper
[Proposal] Remove SafeTensor wrapper (allows finer control for users).
2023-07-19 19:02:37 +02:00
9515e8ea6c
Merge branch 'main' into remove_wrapper
2023-07-19 18:53:55 +02:00
ad12e20f6b
Add cpu support for min and max. ( #202 )
* Add cpu support for min and max.
* Add min/max all.
2023-07-19 17:11:44 +01:00
e6584476c4
Merge pull request #200 from LaurentMazare/removing_candle_hub
Removing `candle-hub` internal to extract into `hf-hub` standalone.
2023-07-19 17:27:55 +02:00
cb687b4897
Add some more developed training examples. ( #199 )
* Use contiguous tensors for variables.
* Sketch the mnist example.
* Start adding the reduce ops.
* Renaming.
* Refactor the reduce operations.
* Bugfix for the broadcasting vectorization.
2023-07-19 15:37:52 +01:00
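As a dependency-free illustration of what a linear-regression training example like the one in #199 computes (candle's version goes through its tensor and backprop machinery; this is just the underlying math):

```rust
// One SGD step for the model y = w * x + b under mean-squared error.
fn sgd_step(w: &mut f32, b: &mut f32, xs: &[f32], ys: &[f32], lr: f32) {
    let n = xs.len() as f32;
    let (mut grad_w, mut grad_b) = (0.0f32, 0.0f32);
    for (&x, &y) in xs.iter().zip(ys) {
        let err = *w * x + *b - y;
        grad_w += 2.0 * err * x / n; // d(mse)/dw
        grad_b += 2.0 * err / n;     // d(mse)/db
    }
    *w -= lr * grad_w;
    *b -= lr * grad_b;
}

fn main() {
    let xs: Vec<f32> = vec![0.0, 1.0, 2.0, 3.0];
    let ys: Vec<f32> = vec![1.0, 3.0, 5.0, 7.0]; // y = 2x + 1
    let (mut w, mut b) = (0.0, 0.0);
    for _ in 0..2000 {
        sgd_step(&mut w, &mut b, &xs, &ys, 0.05);
    }
    println!("w ≈ {w:.3}, b ≈ {b:.3}");
}
```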
dfd624dbd3
[Proposal] Remove SafeTensor wrapper (allows finer control for users).
2023-07-19 16:25:44 +02:00
439321745a
Removing `candle-hub` internal to extract into `hf-hub` standalone.
2023-07-19 15:04:38 +02:00
67e20c3792
Sum over more dims. ( #197 )
2023-07-19 06:46:32 +01:00
76dcc7a381
Test the broadcasting binary ops. ( #196 )
2023-07-19 06:18:36 +01:00
fd55fc9592
Add an optimized case when performing the softmax over the last dimension. ( #195 )
2023-07-18 17:59:50 +01:00
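A sketch of why the last-dimension case admits a fast path: each row is a contiguous slice, so the three passes (max, exp-and-accumulate, normalize) need no stride arithmetic. The contiguous (rows, dim) layout assumed below is illustrative:

```rust
// Softmax over the last dim of a contiguous buffer: just plain slice passes
// that the optimizer handles well.
fn softmax_last_dim(xs: &mut [f32], dim: usize) {
    for row in xs.chunks_mut(dim) {
        // Subtract the row max for numerical stability.
        let max = row.iter().copied().fold(f32::NEG_INFINITY, f32::max);
        let mut sum = 0.0f32;
        for v in row.iter_mut() {
            *v = (*v - max).exp();
            sum += *v;
        }
        for v in row.iter_mut() {
            *v /= sum;
        }
    }
}
```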
6623c227d8
Allow the compiler to vectorize some broadcasting loops. ( #194 )
* Allow the compiler to vectorize some broadcasting loops.
* Improve the symmetrical broadcasting case.
2023-07-18 17:12:32 +01:00
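A sketch of the kind of loop shape that auto-vectorizes (illustrative, not the actual kernel): when one operand repeats, the inner loop reduces to a straight slice traversal with no per-element index computation:

```rust
// Broadcasting a scalar over a contiguous buffer as a plain zipped loop;
// iterator traversal avoids bounds checks and is SIMD friendly.
fn broadcast_add_scalar(lhs: &[f32], rhs: f32, out: &mut [f32]) {
    for (o, &l) in out.iter_mut().zip(lhs) {
        *o = l + rhs;
    }
}
```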
79a5b686d0
Properly use the offset when broadcasting on a narrow slice. ( #193 )
2023-07-18 16:36:23 +01:00
a45a3f0312
Optimize the sum for the contiguous case. ( #192 )
2023-07-18 14:57:06 +01:00
3307db204a
Mklize more unary ops. ( #191 )
* Mklize more unary ops.
* Even more unary ops.
2023-07-18 13:32:49 +01:00
ff61a42ad7
Use mkl to accelerate binary ops. ( #190 )
* Vectorized binary ops with mkl.
* Improve the binary op mkl support.
* Extend the support for mkl binary ops.
* Proper vectorization of binary ops.
* Proper mkl'isation when broadcasting binary ops.
2023-07-18 12:04:39 +01:00
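A hedged sketch of the dispatch: `vsAdd` is MKL VML's vectorized float addition, but the feature name and surrounding structure below are illustrative rather than candle's actual code:

```rust
// With mkl enabled, hand contiguous buffers to VML's vectorized add.
#[cfg(feature = "mkl")]
fn binary_add(lhs: &[f32], rhs: &[f32], out: &mut [f32]) {
    extern "C" {
        fn vsAdd(n: i32, a: *const f32, b: *const f32, y: *mut f32);
    }
    unsafe { vsAdd(out.len() as i32, lhs.as_ptr(), rhs.as_ptr(), out.as_mut_ptr()) }
}

// Plain scalar fallback otherwise; the compiler can still auto-vectorize it.
#[cfg(not(feature = "mkl"))]
fn binary_add(lhs: &[f32], rhs: &[f32], out: &mut [f32]) {
    for ((o, &l), &r) in out.iter_mut().zip(lhs).zip(rhs) {
        *o = l + r;
    }
}
```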
b706f32839
Add Shape try into ( #189 )
* Add the TryInto trait for shapes.
* Use the vectorized operations in block mode too.
2023-07-18 10:52:16 +01:00
d6313d2447
Add more tracing details to bert. ( #188 )
2023-07-18 08:11:05 +01:00
d73df74cb2
Preliminary support for mkl based gelu. ( #187 )
* Preliminary support for mkl based gelu.
* Add the vectorized function for unary ops.
* Get the mkl specialized gelu to work.
2023-07-18 07:48:48 +01:00
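For reference, the function being specialized is gelu(x) = 0.5 · x · (1 + erf(x/√2)). The scalar sketch below uses the common tanh approximation since std has no erf; an mkl build can instead run VML's vectorized erf/exp over whole buffers:

```rust
// Tanh approximation of gelu; sqrt(2/pi) ≈ 0.7978845608.
fn gelu(x: f32) -> f32 {
    const SQRT_2_OVER_PI: f32 = 0.797_884_56;
    0.5 * x * (1.0 + (SQRT_2_OVER_PI * (x + 0.044_715 * x * x * x)).tanh())
}
```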
b8abe2bb4b
Factor the tokenizers version into the workspace cargo def. ( #186 )
2023-07-18 06:48:13 +01:00
c3a73c583e
Add support for mkl tanh. ( #185 )
2023-07-17 22:06:43 +01:00
f0cccd08f0
Bert tracing ( #184 )
* Add some tracing to bert.
* More tracing.
* Add a flag for tracing.
2023-07-17 19:40:42 +01:00
49ea09c73c
Gemm update ( #183 )
* Update the gemm dependency.
* Update the comment too.
* Pin the sha256 dependency.
2023-07-17 14:05:39 +01:00
acb2f90469
Broadcasting performance optimization (cpu) ( #182 )
* Avoid recomputing the index from scratch each time.
* More performance optimisations.
2023-07-17 13:41:09 +01:00
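A sketch of the "avoid recomputing the index from scratch" idea: carry the multi-dimensional index along and bump it odometer-style, instead of re-deriving it from a flat offset for every element (the function shape here is illustrative):

```rust
// Odometer-style increment: O(1) amortized per element. Returns false once
// the whole shape has been visited.
fn next_index(index: &mut [usize], dims: &[usize]) -> bool {
    for (i, d) in index.iter_mut().zip(dims).rev() {
        *i += 1;
        if *i < *d {
            return true;
        }
        *i = 0; // carry into the next slower-moving dimension
    }
    false
}

fn main() {
    let dims = [2, 3];
    let mut index = vec![0, 0];
    loop {
        println!("{index:?}"); // [0,0], [0,1], [0,2], [1,0], [1,1], [1,2]
        if !next_index(&mut index, &dims) {
            break;
        }
    }
}
```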
5b1c0bc9be
Performance improvement. ( #181 )
2023-07-17 11:07:14 +01:00
28e1c07304
Process unary functions per block ( #180 )
* Process unary functions per block.
* Add some inline hints.
2023-07-17 10:22:33 +01:00
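A sketch of the per-block structure with the inline hint (the real ops and block selection differ; this only shows the shape of the change):

```rust
// Apply a unary function one contiguous block at a time; the inline hint
// encourages the compiler to specialize and vectorize the inner loop.
#[inline(always)]
fn unary_block(block: &mut [f32], f: impl Fn(f32) -> f32) {
    for v in block.iter_mut() {
        *v = f(*v);
    }
}

fn unary_per_block(data: &mut [f32], block_len: usize, f: impl Fn(f32) -> f32 + Copy) {
    for block in data.chunks_mut(block_len) {
        unary_block(block, f);
    }
}
```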
2a74019ec6
Vision dataset ( #179 )
* Add some readers for the mnist dataset.
* Import the cifar and mnist dataset.
2023-07-16 23:43:55 +01:00
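A hedged sketch of the kind of MNIST idx reader such a dataset module needs; it assumes an already-decompressed file and simplifies the error handling:

```rust
use std::fs::File;
use std::io::{Read, Result};

// Reads an MNIST idx image file: a 16-byte big-endian header (magic
// 0x00000803, then count, rows, cols) followed by raw u8 pixels.
fn read_mnist_images(path: &str) -> Result<(usize, usize, usize, Vec<u8>)> {
    let mut file = File::open(path)?;
    let mut header = [0u8; 16];
    file.read_exact(&mut header)?;
    let be = |b: &[u8]| u32::from_be_bytes([b[0], b[1], b[2], b[3]]) as usize;
    assert_eq!(be(&header[0..4]), 0x803, "not an idx u8 image file");
    let (n, rows, cols) = (be(&header[4..8]), be(&header[8..12]), be(&header[12..16]));
    let mut pixels = vec![0u8; n * rows * cols];
    file.read_exact(&mut pixels)?;
    Ok((n, rows, cols, pixels))
}
```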
6de7345e39
Improve the wasm ui. ( #178 )
* Improve the wasm ui.
* Improve the UI.
* Cosmetic changes.
2023-07-16 14:22:40 +01:00
104f89df31
Centralize the dependency versions and inherit them. ( #177 )
2023-07-16 07:47:17 +01:00
3fb1c4ea96
Add more profiling information for the wasm example. ( #176 )
2023-07-16 07:18:34 +01:00
18ea92d83b
Iteration over strided blocks ( #175 )
* Introduce the strided blocks.
* Use the strided blocks to speed up the copy.
* Add more testing.
2023-07-15 21:30:35 +01:00
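A sketch of how strided blocks speed up a copy (block discovery is elided; `block_starts` would come from walking the strides):

```rust
// Copy a strided tensor block by block: when the innermost dims are
// contiguous, each block moves with one slice copy instead of a
// per-element loop.
fn copy_strided_blocks(src: &[f32], dst: &mut [f32], block_len: usize, block_starts: &[usize]) {
    for (i, &start) in block_starts.iter().enumerate() {
        dst[i * block_len..(i + 1) * block_len]
            .copy_from_slice(&src[start..start + block_len]);
    }
}
```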
ad91415b4f
Add some wasm profiling. ( #173 )
2023-07-15 09:16:15 +01:00
66750f9827
Add some 'cuda-if-available' helper function. ( #172 )
2023-07-15 08:25:15 +01:00
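A hedged usage sketch; the helper name comes from the commit title, and the `candle_core` crate path is the crate's current name, which may differ from the layout at the time:

```rust
use candle_core::{Device, Result};

fn main() -> Result<()> {
    // Picks the first cuda device when compiled with cuda support and a GPU
    // is present, and quietly falls back to the CPU otherwise.
    let device = Device::cuda_if_available(0)?;
    println!("running on {device:?}");
    Ok(())
}
```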
2ddda706bd
Switch to using trunk. ( #171 )
2023-07-14 22:06:40 +01:00
d1f5d44c04
Reenable pyo3 in the workspace list ( #170 )
* Re-enable pyo3.
* Adapt the CI.
2023-07-14 19:54:38 +01:00
d1f6fad84a
Whisper example in wasm. ( #169 )
* Whisper example in wasm.
* Load the model.
* Get the whisper demo to work (though slowly).
* Polish the UI a bit.
* Minor tweak.
* More UI.
* Add the progress bar.
2023-07-14 19:33:36 +01:00
3f930bdaee
Merge pull request #168 from LaurentMazare/remove_cuda_default
Removing cuda default.
2023-07-14 18:08:52 +02:00
3672e1a46f
Revert "Testing fmt CI check behind cuda feature flag."
This reverts commit b9605310b1.
2023-07-14 15:18:14 +00:00
b9605310b1
Testing fmt CI check behind cuda feature flag.
2023-07-14 15:14:52 +00:00
dcb4a9291e
Make it explicit how to enable cuda.
2023-07-14 17:08:05 +02:00
2d5e952cf9
Updating the example to reflect new command lines.
2023-07-14 16:56:33 +02:00
4ed56d7861
Removing cuda default.
Seems very important for the many exploring users, who are usually on laptops
without GPUs.
More README instructions will be added in a follow-up.
2023-07-14 16:52:15 +02:00
88f666781f
Wasm proof of concept. ( #167 )
* Wasm proof of concept.
* Run whisper inference in the browser.
* Some fixes.
* Move the wasm example.
* Change the tokenizer config.
2023-07-14 14:51:46 +01:00
d88b6cdca9
Add backtrace information to errors where relevant. ( #166 )
* Add backtrace information to errors where relevant.
* More backtrace information.
* Add to the FAQ.
2023-07-14 09:31:25 +01:00
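A sketch of the pattern, assuming a hand-rolled error enum; the variant and field names are illustrative:

```rust
use std::backtrace::Backtrace;

#[derive(Debug)]
enum Error {
    ShapeMismatch {
        expected: Vec<usize>,
        got: Vec<usize>,
        backtrace: Backtrace,
    },
}

fn shape_mismatch(expected: &[usize], got: &[usize]) -> Error {
    Error::ShapeMismatch {
        expected: expected.to_vec(),
        got: got.to_vec(),
        // Only actually captured when RUST_BACKTRACE or RUST_LIB_BACKTRACE is set.
        backtrace: Backtrace::capture(),
    }
}
```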
a2f72edc0d
Simplify the parameters used by sum and sum_keepdim. ( #165 )
2023-07-14 08:22:08 +01:00
2bfa791336
Use the same default as pytorch for sum. ( #164 )
2023-07-13 21:32:32 +01:00
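An illustrative shape rule tying these two commits together (not candle's actual signatures): summing over a dim drops it, matching pytorch's default, while the keepdim variant pins it to 1:

```rust
fn sum_shape(dims: &[usize], dim: usize, keepdim: bool) -> Vec<usize> {
    let mut out = dims.to_vec();
    if keepdim {
        out[dim] = 1; // sum_keepdim: the reduced dim stays, with size 1
    } else {
        out.remove(dim); // sum: the reduced dim disappears
    }
    out
}

fn main() {
    assert_eq!(sum_shape(&[2, 3], 1, false), vec![2]);
    assert_eq!(sum_shape(&[2, 3], 1, true), vec![2, 1]);
}
```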
57be3638d8
Add the pytorch version of the linear regression as a comment. ( #163 )
* Add the pytorch version of the linear regression.
* Typo.
2023-07-13 21:05:57 +01:00
23e105cd94
Add the gradient for reduce-sum. ( #162 )
* Add the gradient for reduce-sum.
* And add the gradient for the broadcast ops.
* Add some backprop tests.
* Add some linear regression example.
2023-07-13 20:14:10 +01:00
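A sketch of the two rules this commit implements: since d(sum(x))/dx_i = 1 for every i, reduce-sum's backward pass broadcasts the upstream gradient back over the inputs, and dually a broadcast's backward pass sums the incoming gradients back down:

```rust
// Backward of a full reduce-sum: each input slot simply receives the
// upstream gradient.
fn sum_backward(upstream_grad: f32, input_len: usize) -> Vec<f32> {
    vec![upstream_grad; input_len]
}

// Backward of broadcasting a scalar over n slots: the n incoming gradients
// are summed back down to the scalar.
fn broadcast_backward(upstream_grads: &[f32]) -> f32 {
    upstream_grads.iter().sum()
}
```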