b5bb5e056d
Add more conv2d support. ( #340 )
...
* Add more conv2d support.
* Conv2d cpu work.
* Conv2d output shape.
2023-08-08 06:04:32 +01:00
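As a note on the conv2d output-shape work above, the usual output-size arithmetic can be sketched as follows; this is an illustrative standalone formula, not candle's actual implementation, and the parameter names are assumptions:

```rust
/// Standard conv2d output-size formula (illustrative sketch, not candle's API):
/// out = (in + 2 * padding - dilation * (kernel - 1) - 1) / stride + 1
fn conv2d_out_dim(input: usize, kernel: usize, padding: usize, stride: usize, dilation: usize) -> usize {
    (input + 2 * padding - dilation * (kernel - 1) - 1) / stride + 1
}

fn main() {
    // A 3x3 kernel with padding 1 and stride 1 preserves the spatial size.
    assert_eq!(conv2d_out_dim(224, 3, 1, 1, 1), 224);
    // Stride 2 roughly halves it.
    assert_eq!(conv2d_out_dim(224, 3, 1, 2, 1), 112);
}
```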
2345b8ce3f
Skeleton for the avg-pool2d and upsample-nearest2d ops. ( #337 )
...
* Skeleton for the avg-pool2d and upsample-nearest2d ops.
* Preliminary conv2d support.
2023-08-07 16:15:38 +01:00
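For the upsample-nearest2d skeleton above, the core index mapping is plain nearest-neighbor scaling; a minimal 1D sketch of that mapping (an assumption about the intended semantics, shown in plain Rust rather than as the actual op):

```rust
// Nearest-neighbor upsampling in 1D: each output index maps back to the
// nearest source index via integer scaling, i.e. floor(i * in_len / out_len).
fn upsample_nearest_1d(xs: &[f32], out_len: usize) -> Vec<f32> {
    (0..out_len).map(|i| xs[i * xs.len() / out_len]).collect()
}

fn main() {
    assert_eq!(upsample_nearest_1d(&[1.0, 2.0], 4), vec![1.0, 1.0, 2.0, 2.0]);
}
```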
f53a333ea9
Simple pad support. ( #336 )
...
* Simple pad support.
* Fix the tensor indexing when padding.
2023-08-07 15:24:56 +01:00
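The indexing fix above concerns where the original values land after padding; a minimal 1D zero-padding sketch (illustrative only):

```rust
// Zero-pad a 1D slice: the original values are written starting at offset `left`.
fn pad1d(xs: &[f32], left: usize, right: usize) -> Vec<f32> {
    let mut out = vec![0.0; left + xs.len() + right];
    out[left..left + xs.len()].copy_from_slice(xs);
    out
}

fn main() {
    assert_eq!(pad1d(&[1.0, 2.0], 1, 2), vec![0.0, 1.0, 2.0, 0.0, 0.0]);
}
```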
2c9f605976
Add rand-like/randn-like. ( #333 )
2023-08-06 21:51:08 +01:00
166bfd5847
Add the recip op + use it in stable-diffusion. ( #331 )
...
* Add the recip unary op.
* Fix the cuda kernel.
* Use the recip op in sigmoid.
2023-08-06 21:14:52 +01:00
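The sigmoid rewrite above relies on the identity sigmoid(x) = recip(1 + exp(-x)); a scalar sketch of that identity (plain Rust, not the tensor kernels):

```rust
// sigmoid(x) = 1 / (1 + exp(-x)) = recip(1 + exp(-x)); scalar illustration only,
// the real ops work element-wise on tensors.
fn recip(x: f32) -> f32 {
    1.0 / x
}

fn sigmoid(x: f32) -> f32 {
    recip(1.0 + (-x).exp())
}

fn main() {
    assert!((sigmoid(0.0) - 0.5).abs() < 1e-6);
    assert!(sigmoid(10.0) > 0.999);
}
```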
d34039e352
Add a stable diffusion example ( #328 )
...
* Start adding a stable-diffusion example.
* Proper computation of the causal mask.
* Add the chunk operation.
* Work in progress: port the attention module.
* Add some dummy modules for conv2d and group-norm, get the attention module to compile.
* Re-enable the 2d convolution.
* Add the embeddings module.
* Add the resnet module.
* Add the unet blocks.
* Add the unet.
* And add the variational auto-encoder.
* Use the pad function from utils.
2023-08-06 17:49:43 +01:00
51e51da896
Rename the candle crate to candle-core ( #301 )
...
* Rename to candle-core.
* More candle-core renaming.
2023-08-02 08:20:22 +01:00
4b3bd79fbd
Remove the embedding ops in favor of index-select. ( #299 )
...
* Remove the embedding ops in favor of index-select.
* Also remove the cuda kernels.
2023-08-02 05:42:11 +01:00
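Dropping the dedicated embedding ops works because an embedding lookup is just an index-select of rows from the weight matrix; a rough sketch of that equivalence (names are illustrative, not the crate's API):

```rust
// An embedding lookup is an index-select over the rows of the weight matrix:
// for each token id, copy the corresponding weight row into the output.
fn index_select_rows(weights: &[Vec<f32>], ids: &[usize]) -> Vec<Vec<f32>> {
    ids.iter().map(|&id| weights[id].clone()).collect()
}

fn main() {
    let weights = vec![
        vec![0.0, 0.1], // token 0
        vec![1.0, 1.1], // token 1
        vec![2.0, 2.1], // token 2
    ];
    let embedded = index_select_rows(&weights, &[2, 0]);
    assert_eq!(embedded, vec![vec![2.0, 2.1], vec![0.0, 0.1]]);
}
```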
16c33383eb
Improve the mnist training example. ( #276 )
...
* Improve the mnist training example.
* Add some initialization routine that can be used for nn.
* Proper initialization in the mnist example.
2023-07-29 16:28:22 +01:00
3eb2bc6d07
Softmax numerical stability. ( #267 )
...
* Softmax numerical stability.
* Fix the flash-attn test.
2023-07-28 13:13:01 +01:00
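The stability fix above is the standard max-subtraction trick: softmax(x) = softmax(x - max(x)), so exp never sees large inputs. A minimal sketch:

```rust
// Numerically stable softmax: subtracting the row maximum leaves the result
// unchanged but prevents exp() from overflowing on large inputs.
fn softmax(xs: &[f32]) -> Vec<f32> {
    let max = xs.iter().copied().fold(f32::NEG_INFINITY, f32::max);
    let exps: Vec<f32> = xs.iter().map(|&x| (x - max).exp()).collect();
    let sum: f32 = exps.iter().sum();
    exps.iter().map(|&e| e / sum).collect()
}

fn main() {
    // Without the max subtraction, exp(1000.0) overflows to infinity and yields NaNs.
    let probs = softmax(&[1000.0, 1000.0]);
    assert!((probs[0] - 0.5).abs() < 1e-6);
}
```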
6475bfadfe
Simplify Tensor::randn. ( #255 )
...
* Simplify Tensor::randn.
* Also switch Tensor::rand to use a generic dtype.
* Support sampling for f16.
* Cleanup.
2023-07-27 07:40:36 +01:00
c97d51243c
Add an abstract backprop op type ( #240 )
...
* Start adding the backprop op type.
* More backprop ops.
* Finish the backprop op.
2023-07-25 14:07:40 +01:00
be9c26180c
Avoid keeping track of the copy ops when not necessary. ( #239 )
2023-07-25 10:06:01 +01:00
18cc73954a
Add some testing for index-add ( #237 )
...
* Add some testing for index-add.
* Fix the cpu implementation for index-add.
2023-07-25 08:38:33 +01:00
fe87778223
Add the copy op. ( #227 )
...
* Add the copy op.
* Tweak some cat error messages.
* Handle the contiguous case in to_vec1.
* Fast variant for to_vec2.
* Add a faster to_vec3 variant.
2023-07-23 18:06:47 +01:00
43c7223292
Rename the .r functions to .dims so as to be a bit more explicit. ( #220 )
2023-07-22 10:39:27 +01:00
52c5d8c087
Add the gather op. ( #219 )
...
* Start adding gather.
* Gather cpu implementation + use in simple training.
* Add scatter_add for the gradient of gather.
* Simple cpu implementation of scatter_add.
* Use gather in the simple-training backprop.
2023-07-22 07:21:28 +01:00
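The gather/scatter_add pairing above follows from the chain rule: gather copies src[idx[i]] into each output slot, so its gradient accumulates back into those same slots. A 1D sketch of the two ops (illustrative only):

```rust
// gather: out[i] = src[idx[i]]
fn gather(src: &[f32], idx: &[usize]) -> Vec<f32> {
    idx.iter().map(|&i| src[i]).collect()
}

// scatter_add: dst[idx[i]] += src[i]; this is how the gradient of gather is
// accumulated back onto the original tensor (repeated indices sum up).
fn scatter_add(dst: &mut [f32], idx: &[usize], src: &[f32]) {
    for (&i, &v) in idx.iter().zip(src.iter()) {
        dst[i] += v;
    }
}

fn main() {
    let src = [10.0, 20.0, 30.0];
    let idx = [2, 2, 0];
    assert_eq!(gather(&src, &idx), vec![30.0, 30.0, 10.0]);

    // Backprop: a gradient of 1 on each gathered element accumulates per source index.
    let mut grad = [0.0f32; 3];
    scatter_add(&mut grad, &idx, &[1.0, 1.0, 1.0]);
    assert_eq!(grad, [1.0, 0.0, 2.0]);
}
```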
6eeea1b04e
Polish the index-add op and use it in the index-select backprop ( #218 )
...
* Add the cpu version of index-add.
* More cpu support for index-add.
* Use index-add in the backprop.
2023-07-22 05:31:46 +01:00
27174a82aa
Start adding index-add.
2023-07-21 20:12:48 +01:00
5cc843550d
Add binary and ternary custom ops. ( #217 )
2023-07-21 17:29:50 +01:00
a6bcdfb269
Custom ops with a single argument ( #214 )
...
* Add the CustomOp1 trait.
* Add an example of custom op.
* Polish the custom op example.
* Add some backward pass test for custom ops.
2023-07-21 15:18:05 +01:00
b02229ce92
Add some epsilon tolerance to grad tests so that they work on cuda / mkl. ( #213 )
2023-07-21 12:45:14 +01:00
410654525f
Refactor the reduce ops in order to introduce argmin/argmax. ( #212 )
...
* Refactor the reduce ops in order to introduce argmin/argmax.
* Clippy fixes.
* Use the newly introduced argmax.
* Fix the strided case.
* Handle the non-contiguous case.
2023-07-21 11:41:08 +01:00
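Framing argmax as a reduction, as the refactor above does, amounts to folding a (best index, best value) accumulator over the elements; a plain sketch:

```rust
// argmax expressed as a reduction: fold a (best_index, best_value) accumulator
// over the elements, keeping the first index on ties.
fn argmax(xs: &[f32]) -> Option<usize> {
    xs.iter()
        .enumerate()
        .fold(None, |best, (i, &v)| match best {
            Some((_, best_v)) if best_v >= v => best,
            _ => Some((i, v)),
        })
        .map(|(i, _)| i)
}

fn main() {
    assert_eq!(argmax(&[1.0, 3.0, 2.0, 3.0]), Some(1));
    assert_eq!(argmax(&[]), None);
}
```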
4845d5cc64
More realistic training setup. ( #210 )
...
* More realistic training setup.
* Compute the model accuracy.
* Very inefficient backprop for index select.
* More backprop.
* Fix some backprop issues.
* Backprop fix.
* Another broadcasting backprop fix.
* Better backprop for reducing ops.
* Training again.
* Add some gradient tests.
* Get the training to work.
2023-07-20 18:25:41 +01:00
fa08fb3126
Add the index-select op. ( #209 )
...
* Add the index-select op.
* Cpu implementation of index-select.
* Add the cpu implementation for index-select.
2023-07-20 14:01:03 +01:00
2a8f28d687
Op refactor ( #208 )
...
* Add the binary and unary op enums to factorize some code.
* Bugfix.
2023-07-20 12:28:45 +01:00
e9c052bf94
Add the comparison operations. ( #207 )
...
* Add the comparison operations.
* Add the helper functions on the tensor side.
* More cmp operations.
* Cpu implementation for the comparison operations.
2023-07-20 09:40:31 +01:00
ad12e20f6b
Add cpu support for min and max. ( #202 )
...
* Add cpu support for min and max.
* Add min/max all.
2023-07-19 17:11:44 +01:00
cb687b4897
Add some more developed training examples. ( #199 )
...
* Use contiguous tensors for variables.
* Sketch the mnist example.
* Start adding the reduce ops.
* Renaming.
* Refactor the reduce operations.
* Bugfix for the broadcasting vectorization.
2023-07-19 15:37:52 +01:00
18ea92d83b
Iteration over strided blocks ( #175 )
...
* Introduce the strided blocks.
* Use the strided blocks to speed up the copy.
* Add more testing.
2023-07-15 21:30:35 +01:00
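A simplified 2D illustration of the strided-blocks idea above (not the crate's actual iterator): when the inner dimension is contiguous, each row can be copied as one block instead of element by element.

```rust
// Copy a strided 2D view into a contiguous buffer. With an inner stride of 1
// the row is contiguous, so it can be copied as a single block.
fn copy_strided_2d(src: &[f32], dims: (usize, usize), strides: (usize, usize)) -> Vec<f32> {
    let (rows, cols) = dims;
    let (row_stride, col_stride) = strides;
    let mut dst = Vec::with_capacity(rows * cols);
    for r in 0..rows {
        let start = r * row_stride;
        if col_stride == 1 {
            // Fast path: one contiguous block per row.
            dst.extend_from_slice(&src[start..start + cols]);
        } else {
            // Slow path: walk the stride element by element.
            for c in 0..cols {
                dst.push(src[start + c * col_stride]);
            }
        }
    }
    dst
}

fn main() {
    // A 2x2 view over a 2x3 buffer (row stride 3, column stride 1).
    let buf = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0];
    assert_eq!(copy_strided_2d(&buf, (2, 2), (3, 1)), vec![1.0, 2.0, 4.0, 5.0]);
}
```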
d88b6cdca9
Add backtrace information to errors where relevant. ( #166 )
...
* Add backtrace information to errors where relevant.
* More backtrace information.
* Add to the FAQ.
2023-07-14 09:31:25 +01:00
a2f72edc0d
Simplify the parameters used by sum and sum_keepdim. ( #165 )
2023-07-14 08:22:08 +01:00
2bfa791336
Use the same default as pytorch for sum. ( #164 )
2023-07-13 21:32:32 +01:00
5ee3c95582
Move the variable creation to the variable module. ( #159 )
...
* Move the variable creation to the variable module.
* Make it possible to set a variable.
* Add some basic gradient descent test.
* Get the gradient descent test to work.
2023-07-13 16:55:40 +01:00
7adc8c903a
Expose the storage publicly. ( #157 )
2023-07-13 13:52:36 +01:00
21aa29ddce
Use a rwlock for inner mutability. ( #156 )
...
* Use a rw-lock.
* Make clippy happier.
2023-07-13 11:25:24 +01:00
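The inner-mutability change above follows the usual Arc<RwLock<...>> pattern; a minimal sketch under assumed names (SharedTensor, Storage, and the methods here are illustrative, not the crate's actual types):

```rust
use std::sync::{Arc, RwLock};

// Illustrative only: shared tensor storage behind an RwLock so reads can
// proceed concurrently while in-place updates take a write lock.
struct Storage(Vec<f32>);

#[derive(Clone)]
struct SharedTensor(Arc<RwLock<Storage>>);

impl SharedTensor {
    fn new(data: Vec<f32>) -> Self {
        SharedTensor(Arc::new(RwLock::new(Storage(data))))
    }
    fn set(&self, data: Vec<f32>) {
        self.0.write().unwrap().0 = data;
    }
    fn sum(&self) -> f32 {
        self.0.read().unwrap().0.iter().sum()
    }
}

fn main() {
    let t = SharedTensor::new(vec![1.0, 2.0]);
    let t2 = t.clone(); // shares the same underlying storage
    t2.set(vec![3.0, 4.0]);
    assert_eq!(t.sum(), 7.0);
}
```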
dfabc708f2
Fix a comment. ( #155 )
2023-07-13 11:11:37 +01:00
50b0946a2d
Tensor mutability ( #154 )
...
* Working towards tensor mutability.
* Use a ref-cell to provide tensor mutability.
2023-07-13 11:04:40 +01:00
a86ec4b9f0
Add more documentation and examples. ( #149 )
...
* Add more documentation and examples.
* More documentation and tests.
* Document more tensor functions.
* Again more examples and tests.
2023-07-12 17:40:17 +01:00
20599172ac
Add from_iter and arange, use it in the doctests. ( #145 )
2023-07-12 12:03:01 +01:00
a76ec797da
Cleanup the main crate error and add a couple dedicated ones ( #142 )
...
* Cosmetic cleanups to the error enum.
* More error cleanup.
* Proper error handling rather than panicking.
* Add a dedicated conv1d error.
2023-07-12 09:17:08 +01:00
64264d97c1
Modular backends ( #138 )
...
* Add some trait to formalize backends.
* Use the generic backend trait.
2023-07-11 11:17:02 +01:00
ae79c00e48
Allow for uniform initialization in a single step. ( #136 )
2023-07-11 08:52:29 +01:00
23849cb6e6
Merge pull request #124 from LaurentMazare/new_doc
...
Squeeze/unsqueeze/reshape
2023-07-10 20:43:23 +02:00
fba07d6b6b
Merge pull request #127 from LaurentMazare/tensor_indexing
...
`i(..)` indexing sugar (partial).
2023-07-10 19:56:34 +02:00
c9d354f5ae
Update candle-core/src/tensor.rs
2023-07-10 19:29:22 +02:00
f29b77ec19
Random initializers. ( #128 )
...
* Random initialization.
* CPU rng generation.
2023-07-10 18:26:21 +01:00
ef0375d8bc
`i(..)` indexing sugar (partial).
...
- Only range and select (no tensor_select)
- No negative indexing
2023-07-10 17:34:04 +02:00
e01d099b71
Squeeze/unsqueeze/reshape
2023-07-10 16:40:25 +02:00
9a667155fd
Removed commented-out deny
2023-07-10 15:18:23 +02:00