5cc843550d
Add binary and ternary custom ops. (#217)
2023-07-21 17:29:50 +01:00
a6bcdfb269
Custom ops with a single argument (#214)
* Add the CustomOp1 trait.
* Add an example of custom op.
* Polish the custom op example.
* Add some backward pass tests for custom ops.
2023-07-21 15:18:05 +01:00
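The `CustomOp1` trait added above lets users plug a single-argument op, with its own forward and optional backward pass, into the framework. A minimal plain-Rust sketch of the idea over a flat `f32` buffer — the trait name mirrors candle's `CustomOp1`, but the simplified signatures here are assumptions for illustration, not candle's actual API (the real trait works on storage and layout types):

```rust
// Toy model of a single-argument custom op: a forward pass and an
// optional backward pass. Not candle's real CustomOp1 signature.
trait CustomOp1 {
    fn name(&self) -> &'static str;
    // Forward pass over a flat buffer.
    fn cpu_fwd(&self, xs: &[f32]) -> Vec<f32>;
    // Backward pass: input gradient given the incoming output gradient.
    // Default: no backward available.
    fn bwd(&self, _xs: &[f32], _grad: &[f32]) -> Option<Vec<f32>> {
        None
    }
}

struct Square;

impl CustomOp1 for Square {
    fn name(&self) -> &'static str {
        "square"
    }
    fn cpu_fwd(&self, xs: &[f32]) -> Vec<f32> {
        xs.iter().map(|x| x * x).collect()
    }
    fn bwd(&self, xs: &[f32], grad: &[f32]) -> Option<Vec<f32>> {
        // d(x^2)/dx = 2x, chained with the incoming gradient.
        Some(xs.iter().zip(grad).map(|(x, g)| 2.0 * x * g).collect())
    }
}

fn apply_op1(op: &dyn CustomOp1, xs: &[f32]) -> Vec<f32> {
    op.cpu_fwd(xs)
}

fn main() {
    let ys = apply_op1(&Square, &[1.0, 2.0, 3.0]);
    assert_eq!(ys, vec![1.0, 4.0, 9.0]);
    let grads = Square.bwd(&[1.0, 2.0, 3.0], &[1.0, 1.0, 1.0]).unwrap();
    assert_eq!(grads, vec![2.0, 4.0, 6.0]);
    println!("{} ok", Square.name());
}
```

The binary and ternary variants in #217 follow the same pattern with two and three input buffers.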
b02229ce92
Add some epsilon tolerance to grad tests so that they work on cuda / mkl. (#213)
2023-07-21 12:45:14 +01:00
410654525f
Refactor the reduce ops in order to introduce argmin/argmax. (#212)
* Refactor the reduce ops in order to introduce argmin/argmax.
* Clippy fixes.
* Use the newly introduced argmax.
* Fix the strided case.
* Handle the non-contiguous case.
2023-07-21 11:41:08 +01:00
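The bullets above mention fixing the strided and non-contiguous cases for the new argmin/argmax. A plain-Rust sketch of why a stride parameter is what makes one argmax routine cover both layouts (illustrative only, not candle's kernel):

```rust
// Argmax over a strided view of a flat buffer: with stride 1 this is the
// contiguous case; a larger stride walks a column of a row-major matrix.
fn argmax_strided(data: &[f32], offset: usize, len: usize, stride: usize) -> usize {
    let mut best = 0;
    let mut best_v = f32::NEG_INFINITY;
    for i in 0..len {
        let v = data[offset + i * stride];
        if v > best_v {
            best_v = v;
            best = i;
        }
    }
    best
}

fn main() {
    // Contiguous: stride 1.
    assert_eq!(argmax_strided(&[1.0, 5.0, 3.0], 0, 3, 1), 1);
    // Column 2 of a row-major 2x3 matrix: offset 2, stride = 3 columns.
    let m = [1.0, 9.0, 2.0,
             4.0, 0.0, 8.0];
    assert_eq!(argmax_strided(&m, 2, 2, 3), 1); // column is [2.0, 8.0]
    println!("ok");
}
```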
4845d5cc64
More realistic training setup. (#210)
* More realistic training setup.
* Compute the model accuracy.
* Very inefficient backprop for index select.
* More backprop.
* Fix some backprop issues.
* Backprop fix.
* Another broadcasting backprop fix.
* Better backprop for reducing ops.
* Training again.
* Add some gradient tests.
* Get the training to work.
2023-07-20 18:25:41 +01:00
fa08fb3126
Add the index-select op. (#209)
* Add the index-select op.
* Cpu implementation of index-select.
* Add the cpu implementation for index-select.
2023-07-20 14:01:03 +01:00
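Index-select gathers whole slices along a dimension by an index list — it is what the training example above uses for embedding lookups, and its backward pass is the scatter-add that #210 first implemented "very inefficiently". A sketch of the dim-0 semantics on a row-major matrix (illustrative, not candle's implementation):

```rust
// Index-select along dim 0: gather whole rows of a row-major
// (rows x cols) matrix, in the order given by `ids`.
fn index_select_dim0(data: &[f32], cols: usize, ids: &[usize]) -> Vec<f32> {
    let mut out = Vec::with_capacity(ids.len() * cols);
    for &i in ids {
        out.extend_from_slice(&data[i * cols..(i + 1) * cols]);
    }
    out
}

fn main() {
    let m = [0.0, 1.0,  // row 0
             2.0, 3.0,  // row 1
             4.0, 5.0]; // row 2
    // Select rows 2 then 0; indices may repeat or reorder freely.
    assert_eq!(index_select_dim0(&m, 2, &[2, 0]), vec![4.0, 5.0, 0.0, 1.0]);
    println!("ok");
}
```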
2a8f28d687
Op refactor (#208)
* Add the binary and unary op enums to factorize some code.
* Bugfix.
2023-07-20 12:28:45 +01:00
e9c052bf94
Add the comparison operations. (#207)
* Add the comparison operations.
* Add the helper functions on the tensor side.
* More cmp operations.
* Cpu implementation for the comparison operations.
2023-07-20 09:40:31 +01:00
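Elementwise comparisons on tensors conventionally return a 0/1 mask rather than booleans (candle uses `u8` tensors for this). A minimal sketch of two of the cmp ops over flat buffers (illustrative only):

```rust
// Elementwise eq / ge producing a 0/1 u8 mask, the usual tensor convention.
fn eq_mask(a: &[f32], b: &[f32]) -> Vec<u8> {
    a.iter().zip(b).map(|(x, y)| u8::from(x == y)).collect()
}

fn ge_mask(a: &[f32], b: &[f32]) -> Vec<u8> {
    a.iter().zip(b).map(|(x, y)| u8::from(x >= y)).collect()
}

fn main() {
    assert_eq!(eq_mask(&[1.0, 2.0, 3.0], &[1.0, 0.0, 3.0]), vec![1, 0, 1]);
    assert_eq!(ge_mask(&[1.0, 2.0, 3.0], &[2.0, 2.0, 2.0]), vec![0, 1, 1]);
    println!("ok");
}
```

A u8 mask composes directly with arithmetic ops (e.g. multiply by the mask to zero out entries), which is why the helper functions on the tensor side are worth having.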
ad12e20f6b
Add cpu support for min and max. (#202)
* Add cpu support for min and max.
* Add min/max all.
2023-07-19 17:11:44 +01:00
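The "min/max all" bullet refers to reducing over every element rather than along one dimension. Both flavors sketched in plain Rust (illustrative only, not candle's kernels):

```rust
// Reduce max over the whole buffer ("max all") ...
fn max_all(xs: &[f32]) -> f32 {
    xs.iter().copied().fold(f32::NEG_INFINITY, f32::max)
}

// ... and along dim 1 (one result per row of a row-major matrix).
fn max_dim1(data: &[f32], cols: usize) -> Vec<f32> {
    data.chunks(cols).map(max_all).collect()
}

fn main() {
    let m = [1.0, 7.0, 3.0,
             4.0, 0.0, 5.0];
    assert_eq!(max_all(&m), 7.0);
    assert_eq!(max_dim1(&m, 3), vec![7.0, 5.0]);
    println!("ok");
}
```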
cb687b4897
Add some more developed training examples. (#199)
* Use contiguous tensors for variables.
* Sketch the mnist example.
* Start adding the reduce ops.
* Renaming.
* Refactor the reduce operations.
* Bugfix for the broadcasting vectorization.
2023-07-19 15:37:52 +01:00
18ea92d83b
Iteration over strided blocks (#175)
* Introduce the strided blocks.
* Use the strided blocks to speed up the copy.
* Add more testing.
2023-07-15 21:30:35 +01:00
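The strided-blocks idea: when the innermost dimension of a strided view is contiguous, a copy can move whole runs at once (memcpy-style) instead of element by element. A sketch of the technique, copying a sub-view out of a wider row-major buffer one contiguous row-block at a time (illustrative, not candle's `StridedBlocks` iterator):

```rust
// Copy a (rows x cols) sub-view out of a row-major buffer whose rows are
// `src_stride` elements apart. Each row is one contiguous block, so
// extend_from_slice moves it in a single memcpy-like operation.
fn copy_blocks(src: &[f32], src_stride: usize, rows: usize, cols: usize) -> Vec<f32> {
    let mut dst = Vec::with_capacity(rows * cols);
    for r in 0..rows {
        let start = r * src_stride;
        dst.extend_from_slice(&src[start..start + cols]);
    }
    dst
}

fn main() {
    // A 2x4 buffer from which we copy the leftmost 2x2 view.
    let src = [0.0, 1.0, 2.0, 3.0,
               4.0, 5.0, 6.0, 7.0];
    assert_eq!(copy_blocks(&src, 4, 2, 2), vec![0.0, 1.0, 4.0, 5.0]);
    println!("ok");
}
```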
d88b6cdca9
Add backtrace information to errors where relevant. (#166)
* Add backtrace information to errors where relevant.
* More backtrace information.
* Add to the FAQ.
2023-07-14 09:31:25 +01:00
a2f72edc0d
Simplify the parameters used by sum and sum_keepdim. (#165)
2023-07-14 08:22:08 +01:00
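The split between `sum` and `sum_keepdim` follows the convention PyTorch uses (which #164 below aligns with, as I read that commit): plain `sum` drops the reduced dimension from the shape, while `sum_keepdim` keeps it with size 1 so the result still broadcasts against the input. Sketched for dim 1 of a row-major matrix, returning (data, shape) pairs (illustrative, not candle's API):

```rust
// sum along dim 1, dropping the reduced dimension: shape [rows].
fn sum_dim1(data: &[f32], cols: usize) -> (Vec<f32>, Vec<usize>) {
    let rows = data.len() / cols;
    let sums: Vec<f32> = data.chunks(cols).map(|row| row.iter().sum()).collect();
    (sums, vec![rows])
}

// sum_keepdim along dim 1: same data, shape [rows, 1].
fn sum_keepdim1(data: &[f32], cols: usize) -> (Vec<f32>, Vec<usize>) {
    let rows = data.len() / cols;
    let (sums, _) = sum_dim1(data, cols);
    (sums, vec![rows, 1])
}

fn main() {
    let m = [1.0, 2.0, 3.0,
             4.0, 5.0, 6.0];
    assert_eq!(sum_dim1(&m, 3), (vec![6.0, 15.0], vec![2]));
    assert_eq!(sum_keepdim1(&m, 3), (vec![6.0, 15.0], vec![2, 1]));
    println!("ok");
}
```

Keeping the dim matters for expressions like `x / x.sum_keepdim(1)`: the `[rows, 1]` shape broadcasts over the columns, whereas a `[rows]` result would not line up.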
2bfa791336
Use the same default as pytorch for sum. (#164)
2023-07-13 21:32:32 +01:00
5ee3c95582
Move the variable creation to the variable module. (#159)
* Move the variable creation to the variable module.
* Make it possible to set a variable.
* Add some basic gradient descent test.
* Get the gradient descent test to work.
2023-07-13 16:55:40 +01:00
7adc8c903a
Expose the storage publicly. (#157)
2023-07-13 13:52:36 +01:00
21aa29ddce
Use an RwLock for inner mutability. (#156)
* Use a rw-lock.
* Make clippy happier.
2023-07-13 11:25:24 +01:00
dfabc708f2
Fix a comment. (#155)
2023-07-13 11:11:37 +01:00
50b0946a2d
Tensor mutability (#154)
* Working towards tensor mutability.
* Use a ref-cell to provide tensor mutability.
2023-07-13 11:04:40 +01:00
a86ec4b9f0
Add more documentation and examples. (#149)
* Add more documentation and examples.
* More documentation and tests.
* Document more tensor functions.
* Again more examples and tests.
2023-07-12 17:40:17 +01:00
20599172ac
Add from_iter and arange, use them in the doctests. (#145)
2023-07-12 12:03:01 +01:00
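The two constructors named above are simple 1D builders: `arange` fills a half-open range with a step, and `from_iter` collects any iterator into the backing buffer. A plain-Rust sketch of their semantics (illustrative, not candle's signatures):

```rust
// arange: values from `start` (inclusive) to `end` (exclusive) by `step`.
fn arange(start: f32, end: f32, step: f32) -> Vec<f32> {
    let mut out = Vec::new();
    let mut v = start;
    while v < end {
        out.push(v);
        v += step;
    }
    out
}

fn main() {
    assert_eq!(arange(0.0, 5.0, 1.0), vec![0.0, 1.0, 2.0, 3.0, 4.0]);
    // from_iter is essentially collecting an iterator into the buffer.
    let from_iter: Vec<f32> = (0..4).map(|i| i as f32 * 0.5).collect();
    assert_eq!(from_iter, vec![0.0, 0.5, 1.0, 1.5]);
    println!("ok");
}
```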
a76ec797da
Clean up the main crate error and add a couple of dedicated ones (#142)
* Cosmetic cleanups to the error enum.
* More error cleanup.
* Proper error handling rather than panicking.
* Add some conv1d dedicated error.
2023-07-12 09:17:08 +01:00
64264d97c1
Modular backends (#138)
* Add some trait to formalize backends.
* Use the generic backend trait.
2023-07-11 11:17:02 +01:00
ae79c00e48
Allow for uniform initialization in a single step. (#136)
2023-07-11 08:52:29 +01:00
23849cb6e6
Merge pull request #124 from LaurentMazare/new_doc
Squeeze/unsqueeze/reshape
2023-07-10 20:43:23 +02:00
fba07d6b6b
Merge pull request #127 from LaurentMazare/tensor_indexing
`i(..)` indexing sugar (partial).
2023-07-10 19:56:34 +02:00
c9d354f5ae
Update candle-core/src/tensor.rs
2023-07-10 19:29:22 +02:00
f29b77ec19
Random initializers. (#128)
* Random initialization.
* CPU rng generation.
2023-07-10 18:26:21 +01:00
ef0375d8bc
`i(..)` indexing sugar (partial).
- Only range, and select (no tensor_select)
- No negative indexing
2023-07-10 17:34:04 +02:00
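Per the bullets, the initial `i(..)` sugar covers exactly two cases: a range (which keeps the dimension) and a single index (select, which drops it), with negative indexing deliberately left out. A sketch of those two behaviors for dim 0 of a row-major matrix (function names here are illustrative, not candle's):

```rust
use std::ops::Range;

// Range indexing: keeps dim 0, so the result is still a (len x cols) matrix.
fn i_range(data: &[f32], cols: usize, r: Range<usize>) -> Vec<f32> {
    data[r.start * cols..r.end * cols].to_vec()
}

// Select indexing: drops dim 0, so the result is a single 1D row.
fn i_select(data: &[f32], cols: usize, idx: usize) -> Vec<f32> {
    data[idx * cols..(idx + 1) * cols].to_vec()
}

fn main() {
    let m = [0.0, 1.0,
             2.0, 3.0,
             4.0, 5.0];
    assert_eq!(i_range(&m, 2, 1..3), vec![2.0, 3.0, 4.0, 5.0]);
    assert_eq!(i_select(&m, 2, 0), vec![0.0, 1.0]);
    println!("ok");
}
```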
e01d099b71
Squeeze/unsqueeze/reshape
2023-07-10 16:40:25 +02:00
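All three ops added here are metadata-only: they change the shape, never the data. A sketch of their rules as operations on a shape vector (illustrative):

```rust
// squeeze: remove a dimension, which must have size 1.
fn squeeze(shape: &[usize], dim: usize) -> Vec<usize> {
    assert_eq!(shape[dim], 1, "can only squeeze a size-1 dimension");
    let mut s = shape.to_vec();
    s.remove(dim);
    s
}

// unsqueeze: insert a size-1 dimension at `dim`.
fn unsqueeze(shape: &[usize], dim: usize) -> Vec<usize> {
    let mut s = shape.to_vec();
    s.insert(dim, 1);
    s
}

// reshape: any new shape with the same total element count.
fn reshape(shape: &[usize], new_shape: &[usize]) -> Vec<usize> {
    let n: usize = shape.iter().product();
    assert_eq!(n, new_shape.iter().product::<usize>(), "element count must match");
    new_shape.to_vec()
}

fn main() {
    assert_eq!(squeeze(&[2, 1, 3], 1), vec![2, 3]);
    assert_eq!(unsqueeze(&[2, 3], 0), vec![1, 2, 3]);
    assert_eq!(reshape(&[2, 3], &[3, 2]), vec![3, 2]);
    println!("ok");
}
```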
9a667155fd
Removed commented deny
2023-07-10 15:18:23 +02:00
2c8fbe8155
oops.
2023-07-10 15:13:52 +02:00
49f4a77ffd
Put them back.
2023-07-10 15:11:48 +02:00
38ac50eeda
Adding some doc + Extended stack to work with extra final dimensions.
2023-07-10 14:51:10 +02:00
270997a055
Add the elu op. (#113)
2023-07-09 21:56:31 +01:00
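For reference, elu is the standard exponential linear unit: `x` for positive inputs, `alpha * (exp(x) - 1)` otherwise. A scalar sketch:

```rust
// elu(x) = x              if x > 0
//        = a * (e^x - 1)  otherwise
fn elu(x: f32, alpha: f32) -> f32 {
    if x > 0.0 { x } else { alpha * (x.exp() - 1.0) }
}

fn main() {
    assert_eq!(elu(2.0, 1.0), 2.0);
    // elu(-1) with alpha = 1 is exp(-1) - 1 ≈ -0.632.
    assert!((elu(-1.0, 1.0) + 0.632_120_5).abs() < 1e-5);
    assert_eq!(elu(0.0, 1.0), 0.0);
    println!("ok");
}
```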
4afa461b34
Sketch the Falcon model. (#93)
* Sketch the Falcon model.
* Add more substance to the falcon example.
* Falcon (wip).
* Falcon (wip again).
* Falcon inference.
* Get the weights from the api and properly generate the model.
* Use the proper model.
* Fix the file/revision names.
* Fix bias handling.
* Recompute the rot embeddings.
* Fix the input shape.
* Add the release-with-debug profile.
* Silly bugfix.
* More bugfixes.
* Stricter shape checking in matmul.
2023-07-06 19:01:21 +01:00
e2bfbcb79c
Support dim indexes in cat.
2023-07-05 20:39:08 +01:00
2c3d871b2e
Add a simpler way to specify the dim index for some ops.
2023-07-05 20:22:43 +01:00
6d1e79d378
Bugfix for to_scalar (use the proper start offset).
2023-07-05 06:42:29 +01:00
950b4af49e
Proper conv1d dispatch.
2023-07-04 11:29:28 +01:00
a424d95473
Add more of the conv1d op.
2023-07-04 11:15:45 +01:00
3aac1047fe
Sketch the conv1d op.
2023-07-04 10:52:34 +01:00
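The three commits above build up the conv1d op from a sketch to a proper dispatch. The core computation being dispatched is, in its simplest single-channel, stride-1, valid-padding form, a sliding dot product (deep-learning frameworks compute cross-correlation rather than flipping the kernel); a naive sketch:

```rust
// Naive valid-padding conv1d (cross-correlation), single channel, stride 1.
// Output length = input_len - kernel_len + 1.
fn conv1d(input: &[f32], kernel: &[f32]) -> Vec<f32> {
    let out_len = input.len() + 1 - kernel.len();
    (0..out_len)
        .map(|i| kernel.iter().zip(&input[i..]).map(|(k, x)| k * x).sum())
        .collect()
}

fn main() {
    let out = conv1d(&[1.0, 2.0, 3.0, 4.0], &[1.0, 0.5]);
    // [1*1 + 2*0.5, 2*1 + 3*0.5, 3*1 + 4*0.5]
    assert_eq!(out, vec![2.0, 3.5, 5.0]);
    println!("ok");
}
```

The real op additionally handles batching, input/output channels, stride, and padding, which is what the "proper dispatch" commit wires up.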
19cbbc5212
Improve how we check that the dims are in bounds.
2023-06-30 09:11:00 +01:00
b50bd880ce
Only narrow when needed + deactivate the kv cache.
2023-06-29 19:07:52 +01:00
2741b39ad3
Use broadcasted scalars for const tensors.
2023-06-29 11:56:40 +01:00
122e334d0c
Simplify the pattern matching logic in the cuda backend.
2023-06-29 09:21:11 +01:00
3f0d9fbb25
Adapt the cuda bits.
2023-06-28 15:43:03 +01:00
caafef6cc1
Get the cpu tests to run.
2023-06-28 14:32:02 +01:00
14449ff80c
Get the cpu backend to compile.
2023-06-28 14:12:38 +01:00
303b853098
Propagate the layout refactoring.
2023-06-28 13:42:23 +01:00