ae79c00e48
Allow for uniform initialization in a single step. (#136)
2023-07-11 08:52:29 +01:00
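The commit above lets a tensor be filled uniformly in `[lo, hi)` in one call. As a dependency-free sketch of the idea (a toy LCG standing in for a real RNG, plain `Vec<f32>` standing in for a tensor buffer; none of this is candle's actual API):

```rust
// Minimal linear congruential generator so the example needs no crates.
// Candle uses a proper RNG; this only illustrates mapping raw random
// bits into a [lo, hi) buffer in a single step.
struct Lcg(u64);

impl Lcg {
    fn next_f32(&mut self) -> f32 {
        // Knuth/Numerical-Recipes LCG constants; keep the top 24 bits.
        self.0 = self
            .0
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        ((self.0 >> 40) as f32) / (1u64 << 24) as f32 // in [0, 1)
    }
}

/// Fill a buffer of `n` elements uniformly in [lo, hi) in one call.
fn rand_uniform(lo: f32, hi: f32, n: usize, rng: &mut Lcg) -> Vec<f32> {
    (0..n).map(|_| lo + (hi - lo) * rng.next_f32()).collect()
}

fn main() {
    let mut rng = Lcg(42);
    let buf = rand_uniform(-1.0, 1.0, 6, &mut rng);
    assert_eq!(buf.len(), 6);
    assert!(buf.iter().all(|&x| (-1.0..1.0).contains(&x)));
}
```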
23849cb6e6
Merge pull request #124 from LaurentMazare/new_doc
Squeeze/unsqueeze/reshape
2023-07-10 20:43:23 +02:00
fba07d6b6b
Merge pull request #127 from LaurentMazare/tensor_indexing
`i(..)` indexing sugar (partial).
2023-07-10 19:56:34 +02:00
c9d354f5ae
Update candle-core/src/tensor.rs
2023-07-10 19:29:22 +02:00
f29b77ec19
Random initializers. (#128)
* Random initialization.
* CPU rng generation.
2023-07-10 18:26:21 +01:00
ef0375d8bc
`i(..)` indexing sugar (partial).
- Only range, and select (no tensor_select)
- No negative indexing
2023-07-10 17:34:04 +02:00
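The `i(..)` sugar above supports "select" (fixing one index along a dimension) and ranges. What a select does on a contiguous row-major buffer can be sketched in a few lines (illustrative only; candle's real implementation works on layouts and strides without copying):

```rust
// Row-major strides for a shape, e.g. [2, 3] -> [3, 1].
fn strides(shape: &[usize]) -> Vec<usize> {
    let mut s = vec![1; shape.len()];
    for i in (0..shape.len().saturating_sub(1)).rev() {
        s[i] = s[i + 1] * shape[i + 1];
    }
    s
}

/// Fix `dim` to `idx`, returning the selected elements and the new shape.
/// For a contiguous buffer, the coordinate of a flat index along `dim`
/// is (flat / stride[dim]) % shape[dim].
fn select(data: &[f32], shape: &[usize], dim: usize, idx: usize) -> (Vec<f32>, Vec<usize>) {
    let st = strides(shape);
    let mut out = Vec::new();
    for (flat, &v) in data.iter().enumerate() {
        if (flat / st[dim]) % shape[dim] == idx {
            out.push(v);
        }
    }
    let mut out_shape = shape.to_vec();
    out_shape.remove(dim);
    (out, out_shape)
}

fn main() {
    // A 2x3 tensor [[0, 1, 2], [3, 4, 5]]: selecting column 2 keeps [2, 5].
    let data: Vec<f32> = (0..6).map(|v| v as f32).collect();
    let (vals, shape) = select(&data, &[2, 3], 1, 2);
    assert_eq!(vals, vec![2.0, 5.0]);
    assert_eq!(shape, vec![2]);
}
```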
e01d099b71
Squeeze/unsqueeze/reshape
2023-07-10 16:40:25 +02:00
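The shape rules behind squeeze/unsqueeze/reshape are easy to state: squeeze drops a size-1 dimension, unsqueeze inserts one, and a reshape is legal only when the element count is unchanged. A sketch of those semantics on plain shape vectors (not the candle-core API):

```rust
/// Remove dimension `dim` from the shape; only legal when its size is 1.
fn squeeze(shape: &[usize], dim: usize) -> Option<Vec<usize>> {
    if shape.get(dim) != Some(&1) {
        return None;
    }
    let mut out = shape.to_vec();
    out.remove(dim);
    Some(out)
}

/// Insert a size-1 dimension at position `dim` (0..=rank).
fn unsqueeze(shape: &[usize], dim: usize) -> Option<Vec<usize>> {
    if dim > shape.len() {
        return None;
    }
    let mut out = shape.to_vec();
    out.insert(dim, 1);
    Some(out)
}

/// A reshape is valid only when the total element count is preserved.
fn reshape_ok(from: &[usize], to: &[usize]) -> bool {
    from.iter().product::<usize>() == to.iter().product::<usize>()
}

fn main() {
    assert_eq!(squeeze(&[2, 1, 3], 1), Some(vec![2, 3]));
    assert_eq!(squeeze(&[2, 1, 3], 0), None); // size 2, cannot squeeze
    assert_eq!(unsqueeze(&[2, 3], 0), Some(vec![1, 2, 3]));
    assert!(reshape_ok(&[2, 3], &[3, 2]));
    assert!(!reshape_ok(&[2, 3], &[4, 2]));
}
```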
9a667155fd
Removed commented deny
2023-07-10 15:18:23 +02:00
2c8fbe8155
oops.
2023-07-10 15:13:52 +02:00
49f4a77ffd
Put them back.
2023-07-10 15:11:48 +02:00
38ac50eeda
Adding some doc + Extended stack to work with extra final dimensions.
2023-07-10 14:51:10 +02:00
270997a055
Add the elu op. (#113)
2023-07-09 21:56:31 +01:00
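The ELU op added above follows the standard definition: the identity for positive inputs, and `alpha * (e^x - 1)` otherwise, so outputs saturate at `-alpha` for very negative inputs. A scalar sketch (candle applies this element-wise over a tensor; the signature here is illustrative):

```rust
/// Exponential Linear Unit: x for x > 0, alpha * (e^x - 1) otherwise.
fn elu(x: f64, alpha: f64) -> f64 {
    if x > 0.0 {
        x
    } else {
        alpha * (x.exp() - 1.0)
    }
}

fn main() {
    // Positive inputs pass through unchanged.
    assert_eq!(elu(2.0, 1.0), 2.0);
    // Negative inputs are squashed into (-alpha, 0).
    let y = elu(-1.0, 1.0);
    assert!(y < 0.0 && y > -1.0);
}
```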
4afa461b34
Sketch the Falcon model. (#93)
* Sketch the Falcon model.
* Add more substance to the falcon example.
* Falcon (wip).
* Falcon (wip again).
* Falcon inference.
* Get the weights from the api and properly generate the model.
* Use the proper model.
* Fix the file/revision names.
* Fix bias handling.
* Recompute the rot embeddings.
* Fix the input shape.
* Add the release-with-debug profile.
* Silly bugfix.
* More bugfixes.
* Stricter shape checking in matmul.
2023-07-06 19:01:21 +01:00
e2bfbcb79c
Support dim indexes in cat.
2023-07-05 20:39:08 +01:00
2c3d871b2e
Add a simpler way to specify the dim index for some ops.
2023-07-05 20:22:43 +01:00
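One common way to make dim arguments "simpler" for ops like cat or sum is to accept indices that count from the back, resolving -1 to the last dimension. A sketch of that resolution rule (an assumption about the mechanism; the commit itself does not spell it out):

```rust
/// Resolve a possibly-negative dim index against a tensor rank:
/// -1 means the last dimension, -2 the one before it, and so on.
/// Returns None when the index is out of bounds.
fn resolve_dim(dim: isize, rank: usize) -> Option<usize> {
    let r = rank as isize;
    let d = if dim < 0 { dim + r } else { dim };
    if (0..r).contains(&d) {
        Some(d as usize)
    } else {
        None
    }
}

fn main() {
    assert_eq!(resolve_dim(-1, 3), Some(2)); // last dim of a rank-3 tensor
    assert_eq!(resolve_dim(1, 3), Some(1));
    assert_eq!(resolve_dim(3, 3), None); // out of bounds
    assert_eq!(resolve_dim(-4, 3), None);
}
```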
6d1e79d378
Bugfix for to_scalar (use the proper start offset).
2023-07-05 06:42:29 +01:00
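The to_scalar bugfix above is a classic view-offset mistake: a narrowed view shares its parent's storage, so reading element 0 of the underlying buffer is wrong; the read must begin at the view's start offset. Sketched with a toy view struct (not candle's storage/layout types):

```rust
/// A minimal "view into a shared buffer" with a start offset,
/// standing in for a narrowed tensor.
struct View<'a> {
    data: &'a [f32],
    start_offset: usize,
}

impl<'a> View<'a> {
    /// The buggy version effectively read self.data[0];
    /// the fix reads from the view's own start offset.
    fn to_scalar(&self) -> f32 {
        self.data[self.start_offset]
    }
}

fn main() {
    let buf = [10.0, 20.0, 30.0];
    // A view narrowed to the last element must yield 30.0, not 10.0.
    let narrowed = View { data: &buf, start_offset: 2 };
    assert_eq!(narrowed.to_scalar(), 30.0);
}
```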
950b4af49e
Proper conv1d dispatch.
2023-07-04 11:29:28 +01:00
a424d95473
Add more of the conv1d op.
2023-07-04 11:15:45 +01:00
3aac1047fe
Sketch the conv1d op.
2023-07-04 10:52:34 +01:00
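The math behind the conv1d commits above, in its simplest form, is a sliding dot product (cross-correlation with stride 1 and no padding; candle's real kernel additionally handles batching, channels, padding, and stride):

```rust
/// Naive 1D convolution (cross-correlation), stride 1, no padding.
/// Output length is input.len() - kernel.len() + 1.
fn conv1d(input: &[f32], kernel: &[f32]) -> Vec<f32> {
    let n = input.len();
    let k = kernel.len();
    assert!(k <= n, "kernel longer than input");
    (0..=n - k)
        .map(|i| {
            kernel
                .iter()
                .zip(&input[i..])
                .map(|(w, x)| w * x)
                .sum::<f32>()
        })
        .collect()
}

fn main() {
    // A [1, 1] kernel sums adjacent pairs: [1+2, 2+3, 3+4].
    let out = conv1d(&[1.0, 2.0, 3.0, 4.0], &[1.0, 1.0]);
    assert_eq!(out, vec![3.0, 5.0, 7.0]);
}
```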
19cbbc5212
Improve how we check that the dims are in bounds.
2023-06-30 09:11:00 +01:00
b50bd880ce
Only narrow when needed + deactivate the kv cache.
2023-06-29 19:07:52 +01:00
2741b39ad3
Use broadcasted scalars for const tensors.
2023-06-29 11:56:40 +01:00
122e334d0c
Simplify the pattern matching logic in the cuda backend.
2023-06-29 09:21:11 +01:00
3f0d9fbb25
Adapt the cuda bits.
2023-06-28 15:43:03 +01:00
caafef6cc1
Get the cpu tests to run.
2023-06-28 14:32:02 +01:00
14449ff80c
Get the cpu backend to compile.
2023-06-28 14:12:38 +01:00
303b853098
Propagate the layout refactoring.
2023-06-28 13:42:23 +01:00
30b355ccd2
Simplify the narrow implementation.
2023-06-28 13:09:59 +01:00
c1bbbf94f6
Start refactoring the stride.
2023-06-28 12:57:30 +01:00
615196e7be
Add more gradients.
2023-06-28 09:59:52 +01:00
1ce3843cab
Add the relu op.
2023-06-28 09:38:54 +01:00
b0f5f2d22d
Add some display tests + bugfixes.
2023-06-27 21:37:28 +01:00
934655a60d
Add squeeze/unsqueeze/stack.
2023-06-27 19:32:00 +01:00
1d504cc6b3
Rework the debug trait.
2023-06-27 19:10:30 +01:00
684f66326d
Add the get method.
2023-06-27 17:39:58 +01:00
c44e5346f4
Add some helper functions.
2023-06-27 17:37:09 +01:00
d7f729fb8f
Refactor the hierarchy.
2023-06-27 11:57:27 +02:00