064ba17bd7
Remove print.
2023-12-18 11:04:16 +01:00
e8ee253ee0
Missing cast.
2023-12-18 11:01:18 +01:00
8bd3d6b94b
Index add.
2023-12-18 10:46:01 +01:00
6a3ca7da0c
Scatter add.
2023-12-18 10:32:22 +01:00
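The index-add and scatter-add entries here are kernel-side work for existing tensor ops. As a reminder of the op semantics, a minimal CPU-side sketch; the crate name candle_core and the exact signatures are my assumption for this point in the history:

```rust
use candle_core::{DType, Device, Result, Tensor};

fn main() -> Result<()> {
    let dev = Device::Cpu;
    let init = Tensor::zeros((4, 2), DType::F32, &dev)?;
    let src = Tensor::new(&[[1f32, 1.], [2., 2.], [3., 3.]], &dev)?;

    // index_add: one index per `src` row along dim 0; rows that share an
    // index accumulate. Here rows 0 and 2 of `src` both land on row 0.
    let ids = Tensor::new(&[0u32, 2, 0], &dev)?;
    let out = init.index_add(&ids, &src, 0)?; // [[4,4],[0,0],[2,2],[0,0]]
    println!("{out}");

    // scatter_add: a full index tensor, one target row per element.
    let ids = Tensor::new(&[[0u32, 1], [1, 0], [0, 0]], &dev)?;
    let out = init.scatter_add(&ids, &src, 0)?;
    println!("{out}");
    Ok(())
}
```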
96f1a28e39
Add a simple full method. (#1455)
* Add a simple implementation of the full method.
* Add the docstring.
2023-12-17 20:15:57 -05:00
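The full method added in #1455 builds a tensor filled with a single scalar value, analogous to torch.full; a minimal usage sketch, with the signature assumed from the commit title:

```rust
use candle_core::{Device, Result, Tensor};

fn main() -> Result<()> {
    // A 2x3 f32 tensor where every element is 3.5.
    let t = Tensor::full(3.5f32, (2, 3), &Device::Cpu)?;
    println!("{t}");
    Ok(())
}
```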
586b6f6fff
Adding gather op.
2023-12-17 23:34:12 +01:00
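For the gather op being worked on here, gathering along dim 1 gives out[i][j] = t[i][ids[i][j]]; a small sketch of those semantics (signatures assumed):

```rust
use candle_core::{Device, Result, Tensor};

fn main() -> Result<()> {
    let dev = Device::Cpu;
    let t = Tensor::new(&[[1f32, 2.], [3., 4.]], &dev)?;
    // Pick one element per row: index 1 from row 0, index 0 from row 1.
    let ids = Tensor::new(&[[1u32], [0]], &dev)?;
    let picked = t.gather(&ids, 1)?; // [[2.], [3.]]
    println!("{picked}");
    Ok(())
}
```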
e4b0cc59f5
Adding CMP.
2023-12-17 22:32:25 +01:00
0a6e0a8c9a
Implement randn (CPU -> device)
2023-12-17 19:09:08 +01:00
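Per the commit title, the randn work draws normal samples on the CPU and copies them to the device. The user-facing call, as a sketch with the method shape assumed:

```rust
use candle_core::{Device, Result, Tensor};

fn main() -> Result<()> {
    // Standard-normal samples; with the Metal backend at this point the
    // values are generated on the CPU and then transferred to the device.
    let dev = Device::Cpu; // presumably Device::new_metal(0)? on macOS
    let t = Tensor::randn(0f32, 1f32, (2, 3), &dev)?;
    println!("{t}");
    Ok(())
}
```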
972903021c
Finish reduce kernels.
2023-12-17 19:07:00 +01:00
94817dac56
Bump the crate version to 0.3.2. (#1452)
2023-12-17 05:34:53 -06:00
1e86717bf2
Fix a couple typos (#1451)
* Mixtral quantized instruct.
* Fix a couple typos.
2023-12-17 05:20:05 -06:00
c630622a07
Expose AdamW parameters (#1449)
* Expose AdamW parameters
* Use reference
2023-12-16 18:41:56 -06:00
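#1449 makes the AdamW hyper-parameters settable rather than fixed defaults. A sketch of constructing the optimizer with explicit values, assuming the candle-nn ParamsAdamW struct and Optimizer trait as they stand around 0.3.x:

```rust
use candle_core::{DType, Device, Result, Var};
use candle_nn::{AdamW, Optimizer, ParamsAdamW};

fn main() -> Result<()> {
    let w = Var::zeros((2, 2), DType::F32, &Device::Cpu)?;
    // Each field that used to be a hard-coded default can now be chosen.
    let params = ParamsAdamW {
        lr: 1e-4,
        beta1: 0.9,
        beta2: 0.999,
        eps: 1e-8,
        weight_decay: 0.01,
    };
    let _opt = AdamW::new(vec![w], params)?;
    Ok(())
}
```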
c4cfcf1539
Tweak the readme for phi and the default sample length. (#1450)
2023-12-16 18:11:36 -06:00
1782e93de6
Mixtral quantized instruct. (#1447)
2023-12-16 16:16:39 -06:00
cfdf9640a3
Readme tweaks. (#1446)
2023-12-16 06:23:12 -06:00
e12cbfd73b
Update the readme to mention Mixtral. (#1443)
2023-12-15 19:29:03 -06:00
30a958e5dd
Quantized Mixtral model (#1442)
* Add the Mixtral model.
* Add more of the mixtral layers.
* Add the final layers for mixtral.
* Sketch the expert selection.
* Add some expert routing logic.
* Hopefully finish the routing logic for mixtral.
* Add the mixtral example.
* Fix the weight filenames.
* Bugfix.
* Another fix.
* Yet another fix + remove the unused pragma.
* Shape fix.
* Support for quantized mixtral.
* Support mixtral in the quantized example.
* Mlp or moe type.
* Fix the expert field namings.
* Refactor the mlp bit.
* More MoE logic.
* Add the MoE quantized logic.
* Fix the experts length.
2023-12-15 19:16:06 -06:00
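The bullets above sketch the expert-routing logic. As a plain-Rust illustration of top-2 gating (illustrative only, not the code from the PR): pick the two largest router logits for a token and softmax-normalize just those two, which is the same as renormalizing the full softmax over the selected pair.

```rust
/// Hypothetical top-2 gating over one token's router logits.
fn top2_routing(logits: &[f32]) -> [(usize, f32); 2] {
    let mut idx: Vec<usize> = (0..logits.len()).collect();
    idx.sort_by(|&a, &b| logits[b].total_cmp(&logits[a]));
    let (e0, e1) = (idx[0], idx[1]);
    // Softmax restricted to the two selected experts.
    let m = logits[e0].max(logits[e1]);
    let (w0, w1) = ((logits[e0] - m).exp(), (logits[e1] - m).exp());
    [(e0, w0 / (w0 + w1)), (e1, w1 / (w0 + w1))]
}

fn main() {
    // Eight experts, one token's router logits.
    let logits = [0.1f32, 2.0, -1.0, 0.5, 1.8, 0.0, -0.3, 0.2];
    for (expert, weight) in top2_routing(&logits) {
        println!("expert {expert} weight {weight:.3}");
    }
}
```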
614842b311
Add the Mixtral model. (#1437)
* Add the Mixtral model.
* Add more of the mixtral layers.
* Add the final layers for mixtral.
* Sketch the expert selection.
* Add some expert routing logic.
* Hopefully finish the routing logic for mixtral.
* Add the mixtral example.
* Fix the weight filenames.
* Bugfix.
* Another fix.
* Yet another fix + remove the unused pragma.
* Shape fix.
* Add a readme.
2023-12-15 14:19:56 -06:00
79eab519fd
Fix phi example (#1436)
* Fix phi example
* Remove the cuda mention.
---------
Co-authored-by: Laurent <laurent.mazare@gmail.com>
2023-12-15 07:01:10 -06:00
6bc92e63cb
Addressing a lot of comments.
2023-12-15 13:06:04 +01:00
aa04015098
Remove unwrap().
2023-12-15 12:23:28 +01:00
8b5059e951
Remove test file.
2023-12-15 11:55:30 +01:00
26540641c1
Renamed all kernels.
2023-12-15 11:24:47 +01:00
34d83377f6
Better error message on older macOS.
2023-12-15 11:18:54 +01:00
77197379cc
More cleanup.
2023-12-15 11:17:05 +01:00
916a8c5464
Revert candle-transformers.
2023-12-15 11:15:21 +01:00
243e83f2b9
Adding a bunch of docs!
Co-authored-by: Ivar Flakstad <69173633+ivarflakstad@users.noreply.github.com>
2023-12-15 11:03:05 +01:00
cf27868b57
More cleanup.
2023-12-15 01:44:22 +01:00
40c3e1bd5a
Cleanup.
2023-12-15 01:41:14 +01:00
ece4c69a68
Fixing softmax.
2023-12-15 01:35:08 +01:00
4eeaf205d6
Fix softmax for long sequences (missing barrier).
2023-12-14 19:37:03 +01:00
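Both softmax entries concern the Metal kernel, whose reductions run in threadgroup memory (hence the missing barrier mattered for long sequences). What the kernel computes is the numerically stable softmax; a host-side sketch from candle primitives, with method names assumed:

```rust
use candle_core::{Result, Tensor};

/// Numerically stable softmax along `dim`: subtract the max before
/// exponentiating so large logits over long sequences don't overflow.
fn softmax_stable(x: &Tensor, dim: usize) -> Result<Tensor> {
    let max = x.max_keepdim(dim)?;
    let e = x.broadcast_sub(&max)?.exp()?;
    let denom = e.sum_keepdim(dim)?;
    e.broadcast_div(&denom)
}
```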
f419a38e1a
Fix use resource.
2023-12-14 16:52:37 +01:00
361f2ad2af
Working version with merged encoders and fences.
2023-12-14 16:05:33 +01:00
e60f9b5dfc
Speed up ShardedSafeTensors to load tensors with default hints (#1384)
* Speed up ShardedSafeTensors to load tensors with default hints
* Tweaks.
---------
Co-authored-by: Laurent <laurent.mazare@gmail.com>
2023-12-14 08:08:56 -06:00
7be982f6f7
Mention phi-2 in the readme. (#1434)
2023-12-14 08:02:27 -06:00
104e196d46
Phi 2 wasm (#1432)
* add phi 2.0 quantized model wasm
* cols
* spell
* bug
2023-12-14 06:04:17 -06:00
5e33c85c8f
Quantized version for phi-v2. (#1430)
* Quantized version for phi-v2.
* More quantized support.
2023-12-13 21:16:34 -06:00
2b3a018be7
Support for phi-2. (#1429)
* Support for phi-2.
* Use the v2 naming scheme.
2023-12-13 20:59:29 -06:00
931432ed55
Fixing tests + matmul from MFA
2023-12-13 16:58:36 +01:00
0404a3eb5b
Removed MPSMatrix entirely (buggy).
2023-12-13 16:21:48 +01:00
a9d0657432
Better version?
2023-12-13 12:09:20 +01:00
4cb443d00a
Fix the logsumexp test. (#1426)
2023-12-12 10:56:11 -06:00
87dc559817
Lots of updates, including stacking command buffers.
2023-12-12 17:41:56 +01:00
77252ffb82
Add logsumexp function (#1424)
2023-12-12 10:32:17 -06:00
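#1424 adds a logsumexp helper (whose test #1426 above then fixes). The standard max-shift identity it implements is logsumexp(x) = m + log Σ exp(x - m) with m = max(x); a sketch from candle primitives, the crate's own helper being preferable:

```rust
use candle_core::{Result, Tensor};

/// log(sum(exp(x))) along `dim`, shifted by the max so exp cannot overflow.
fn logsumexp(x: &Tensor, dim: usize) -> Result<Tensor> {
    let m = x.max_keepdim(dim)?;
    let s = x.broadcast_sub(&m)?.exp()?.sum_keepdim(dim)?;
    s.log()?.broadcast_add(&m)
}
```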
18eb87f25f
Upsample grad (#1420)
* encode size of upsample in enum
* working convolution method for limited 2d kernels
* add test for sf 3 interpolation
* add higher dimensional tests, fix to work with multichannel input
* Remove commented out line.
---------
Co-authored-by: Laurent <laurent.mazare@gmail.com>
2023-12-10 08:43:24 +01:00
da0af3cb3e
Merge pull request #1408 from jbochi/metal_gelu2
Fix NaN errors for Gelu in Metal
2023-12-09 19:46:36 +01:00
9bd94c1ffa
Speed up bert with approx gelu (#1410)
2023-12-06 17:46:37 +01:00
803ac8405b
Put back affine strided tests
Co-Authored-By: Ivar Flakstad <69173633+ivarflakstad@users.noreply.github.com>
2023-12-06 17:04:15 +01:00
6e25822d4f
Fix gelu for large x
2023-12-06 09:59:44 -05:00
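The gelu entries above (approximate gelu for bert, NaN for large x in Metal) both involve the tanh approximation gelu(x) ≈ 0.5 · x · (1 + tanh(√(2/π) · (x + 0.044715 · x³))). A host-side sketch of that formula; affine(mul, add) computes mul·x + add, and candle also ships a gelu unary op, so this is purely illustrative:

```rust
use candle_core::{Result, Tensor};

/// Tanh approximation of GELU. For large |x| the cubic term can overflow
/// in low precision before tanh saturates to +/-1 (where gelu(x) ~ x),
/// which is the kind of NaN the Metal fix addresses.
fn gelu_tanh(x: &Tensor) -> Result<Tensor> {
    let k = (2f64 / std::f64::consts::PI).sqrt();
    let inner = x.add(&x.powf(3.0)?.affine(0.044715, 0.)?)?;
    let t = inner.affine(k, 0.)?.tanh()?;
    x.affine(0.5, 0.)?.mul(&t.affine(1., 1.)?)
}
```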
236b820e28
Another prelu bugfix. (#1407)
2023-12-06 09:54:41 +01:00