5ee3c95582
Move the variable creation to the variable module. ( #159 )
* Move the variable creation to the variable module.
* Make it possible to set a variable.
* Add some basic gradient descent test.
* Get the gradient descent test to work.
2023-07-13 16:55:40 +01:00
6991036bc5
Introduce the variables api used for adjusting parameters during the training loop. ( #158 )
* Add the variable api.
* And add a comment.
2023-07-13 14:09:51 +01:00
7adc8c903a
Expose the storage publicly. ( #157 )
2023-07-13 13:52:36 +01:00
21aa29ddce
Use a RwLock for interior mutability. ( #156 )
* Use a rw-lock.
* Make clippy happier.
2023-07-13 11:25:24 +01:00
dfabc708f2
Fix a comment. ( #155 )
2023-07-13 11:11:37 +01:00
50b0946a2d
Tensor mutability ( #154 )
* Working towards tensor mutability.
* Use a ref-cell to provide tensor mutability.
2023-07-13 11:04:40 +01:00
a86ec4b9f0
Add more documentation and examples. ( #149 )
* Add more documentation and examples.
* More documentation and tests.
* Document more tensor functions.
* Again more examples and tests.
2023-07-12 17:40:17 +01:00
8aab787384
Test the index op + bugfix. ( #148 )
2023-07-12 15:42:36 +01:00
ba35d895e7
Sketch the candle-transformers crate. ( #147 )
* Sketch the candle-transformers crate.
* Format the empty files.
2023-07-12 13:49:31 +01:00
20599172ac
Add from_iter and arange, use it in the doctests. ( #145 )
2023-07-12 12:03:01 +01:00
b3b39cca92
Llama batch ( #144 )
* Add a batch dimension to llama.
* Bugfixes.
2023-07-12 11:38:19 +01:00
bcf96e3cf3
Implement the backend trait for the cpu backend. ( #143 )
2023-07-12 09:54:33 +01:00
a76ec797da
Clean up the main crate error and add a couple of dedicated ones ( #142 )
* Cosmetic cleanups to the error enum.
* More error cleanup.
* Proper error handling rather than panicking.
* Add some conv1d dedicated error.
2023-07-12 09:17:08 +01:00
fa760759e5
Allow for lazy loading of npz files, use it in llama to reduce memory usage in the cpu version. ( #141 )
2023-07-11 20:22:34 +01:00
37cad85869
Resurrect the llama npy support. ( #140 )
2023-07-11 19:32:10 +01:00
64264d97c1
Modular backends ( #138 )
* Add some trait to formalize backends.
* Use the generic backend trait.
2023-07-11 11:17:02 +01:00
674eb35e10
Remove some dead-code pragmas. ( #137 )
2023-07-11 09:33:59 +01:00
ae79c00e48
Allow for uniform initialization in a single step. ( #136 )
2023-07-11 08:52:29 +01:00
2be09dbb1d
Macroify the repeating bits. ( #129 )
2023-07-10 19:44:06 +01:00
23849cb6e6
Merge pull request #124 from LaurentMazare/new_doc
Squeeze/unsqueeze/reshape
2023-07-10 20:43:23 +02:00
fba07d6b6b
Merge pull request #127 from LaurentMazare/tensor_indexing
`i(..)` indexing sugar (partial).
2023-07-10 19:56:34 +02:00
1ad235953b
Clippy?
2023-07-10 19:34:38 +02:00
c9d354f5ae
Update candle-core/src/tensor.rs
2023-07-10 19:29:22 +02:00
f29b77ec19
Random initializers. ( #128 )
* Random initialization.
* CPU rng generation.
2023-07-10 18:26:21 +01:00
5ea747c047
Update candle-core/src/indexer.rs
2023-07-10 19:02:35 +02:00
ef0375d8bc
i(..) indexing sugar (partial).
- Only range, and select (no tensor_select)
- No negative indexing
2023-07-10 17:34:04 +02:00
e2807c78a4
Enable the doctests to run with mkl (though they are broken for now). ( #126 )
2023-07-10 16:27:46 +01:00
548b1df7ea
Remove the dependency to blas and use mkl directly. ( #125 )
2023-07-10 15:52:03 +01:00
e01d099b71
Squeeze/unsqueeze/reshape
2023-07-10 16:40:25 +02:00
221b1aff65
Support dgemm in mkl matmul. ( #122 )
2023-07-10 15:02:37 +01:00
9a667155fd
Removed commented deny
2023-07-10 15:18:23 +02:00
2c8fbe8155
oops.
2023-07-10 15:13:52 +02:00
49f4a77ffd
Put them back.
2023-07-10 15:11:48 +02:00
38ac50eeda
Adding some doc + Extended stack to work with extra final dimensions.
2023-07-10 14:51:10 +02:00
868743b8b9
Expanding a bit the README
2023-07-10 12:51:37 +02:00
9ce0f1c010
Sketch the candle-nn crate. ( #115 )
* Sketch the candle-nn crate.
* Tweak the cuda dependencies.
* More cuda tweaks.
2023-07-10 08:50:09 +01:00
270997a055
Add the elu op. ( #113 )
2023-07-09 21:56:31 +01:00
eb64ad0d4d
Cuda kernel for the conv1d op ( #111 )
* Boilerplate code for conv1d.
* Boilerplate code for conv1d.
* More boilerplate for conv1d.
* Conv1d work.
* Get the conv1d cuda kernel to work.
* Conv1d support when no batch dim.
2023-07-08 18:13:25 +01:00
5c3864f9f7
Add more sum tests. ( #110 )
* Add some tests for the sum.
* More sum testing.
2023-07-08 13:15:36 +01:00
e676f85f00
Sketch a fast cuda kernel for reduce-sum. ( #109 )
* Sketch a fast cuda kernel for reduce-sum.
* Sketch the rust support code for the fast sum kernel.
* More work on the fast kernel.
* Add some testing ground.
* A couple fixes for the fast sum kernel.
2023-07-08 12:43:56 +01:00
33479c5f1b
Add some very simple sum benchmark. ( #108 )
* Add some very simple sum benchmark.
* Rename the file.
2023-07-08 08:39:27 +01:00
02b5c38049
Use cublas bf16. ( #101 )
2023-07-07 08:00:12 +01:00
666c6f07a1
Merge pull request #88 from LaurentMazare/fix_unsafe_loads
Fixing unsafe slow load (memcpy).
2023-07-07 00:09:56 +02:00
ce27073feb
Update candle-core/src/safetensors.rs
2023-07-06 23:59:54 +02:00
4afa461b34
Sketch the Falcon model. ( #93 )
* Sketch the Falcon model.
* Add more substance to the falcon example.
* Falcon (wip).
* Falcon (wip again).
* Falcon inference.
* Get the weights from the api and properly generate the model.
* Use the proper model.
* Fix the file/revision names.
* Fix bias handling.
* Recompute the rot embeddings.
* Fix the input shape.
* Add the release-with-debug profile.
* Silly bugfix.
* More bugfixes.
* Stricter shape checking in matmul.
2023-07-06 19:01:21 +01:00
f1e29cd405
Allow using mkl in tests. ( #90 )
2023-07-06 13:25:05 +01:00
054717e236
Fixing unsafe slow load (memcpy).
- Without the annotation, I think the Rust compiler assumes it's all u8.
It did segfault trying to load `Roberta`.
2023-07-06 13:14:33 +02:00
dd60bd84bb
MKL adjustments. ( #87 )
2023-07-06 11:37:27 +01:00
c297a50960
Add mkl support for matrix multiply. ( #86 )
* Fix some rebase issues.
* Use mkl instead.
* Use mkl in bert.
* Add the optional mkl feature.
* Conditional compilation based on the mkl feature.
* Add more mkl support.
2023-07-06 11:05:05 +01:00
e2bfbcb79c
Support dim indexes in cat.
2023-07-05 20:39:08 +01:00