7e49e0af96
Tmp for allocator.
2023-11-16 12:50:41 +01:00
181d2299b2
Tmp.
2023-11-16 11:41:06 +01:00
2801541e5f
new_owned -> new()..to_owned().
2023-11-16 11:07:56 +01:00
4289984d32
Remove some prints.
2023-11-13 14:51:40 +01:00
1471f98f0b
BF16 metal fix.
2023-11-13 14:44:20 +01:00
dd4a40f1c0
Fixes + cache compute_pipeline_state.
2023-11-13 14:33:16 +01:00
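The "cache compute_pipeline_state" commit above avoids recompiling a Metal pipeline for every kernel launch. The actual candle code is not shown here; the sketch below only illustrates the memoization pattern with a plain `HashMap` and a hypothetical `compile` stand-in (names and the `String` pipeline type are assumptions, not candle's API):

```rust
use std::collections::HashMap;

// Hypothetical stand-in for an expensive pipeline-state build;
// in the real backend this would create a Metal compute pipeline.
fn compile(kernel_name: &str) -> String {
    format!("pipeline::{kernel_name}")
}

/// Memoizing cache: compile each kernel's pipeline state once,
/// then hand back the cached copy on every later request.
struct PipelineCache {
    states: HashMap<String, String>,
}

impl PipelineCache {
    fn new() -> Self {
        Self { states: HashMap::new() }
    }

    fn get(&mut self, kernel_name: &str) -> &String {
        // `entry` only calls `compile` on the first request for a key.
        self.states
            .entry(kernel_name.to_string())
            .or_insert_with(|| compile(kernel_name))
    }
}

fn main() {
    let mut cache = PipelineCache::new();
    let first = cache.get("affine_f32").clone();
    let second = cache.get("affine_f32").clone();
    assert_eq!(first, second); // second lookup hits the cache
}
```

The cache is keyed by kernel name here; a real backend might key on more (data type, constants), but the single-compile-per-key idea is the same.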
79845bd93b
Working version for llama2-c.
2023-11-13 12:36:27 +01:00
6071797450
Add erf.
2023-11-11 18:22:16 +01:00
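The "Add erf." commit adds the error function as a unary op. The kernel itself is not shown in the log; as a sketch of the math such an op computes, here is the standard Abramowitz & Stegun 7.1.26 polynomial approximation in plain Rust (whether candle's kernel uses this exact approximation is an assumption):

```rust
/// erf(x) via the Abramowitz & Stegun 7.1.26 polynomial
/// approximation (max absolute error about 1.5e-7).
fn erf(x: f64) -> f64 {
    let sign = if x < 0.0 { -1.0 } else { 1.0 };
    let x = x.abs();
    let t = 1.0 / (1.0 + 0.3275911 * x);
    // Horner evaluation of the degree-5 polynomial in t.
    let poly = ((((1.061405429 * t - 1.453152027) * t + 1.421413741) * t - 0.284496736) * t
        + 0.254829592)
        * t;
    sign * (1.0 - poly * (-x * x).exp())
}

fn main() {
    println!("erf(1) = {:.6}", erf(1.0)); // close to 0.842701
}
```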
b58b247323
Putting back f16 index select.
2023-11-11 17:43:35 +01:00
3900091e75
Make all tests panic instead of failing randomly.
2023-11-11 17:43:35 +01:00
54355ff997
Adding some half kernels.
2023-11-11 17:43:35 +01:00
e02f1912bb
Reusing a single buffer (for now) to speed things up.
2023-11-11 17:43:35 +01:00
a52b71686b
Going back on remote metal-rs.
2023-11-11 17:43:35 +01:00
7adfb70dff
Few fixes.
2023-11-11 17:43:35 +01:00
3ad02147e4
Starting to fix some tests.
2023-11-11 17:43:34 +01:00
4f39695465
Missing new test.
2023-11-11 17:42:53 +01:00
4cf4844c9d
Adding the test scaffolding.
2023-11-11 17:27:19 +01:00
d840838e95
Cleanup: fixed a few ops, removed debugging scaffolding.
2023-11-11 17:18:00 +01:00
61a070fdd1
Debugging rope.
2023-11-11 17:18:00 +01:00
e35669647d
Fixed matmul (display still broken without casting back to CPU first?)
2023-11-11 17:18:00 +01:00
53e8b7ee3e
Tmp state.
2023-11-11 17:18:00 +01:00
cc26cce23c
Fixing the kernels + launches to make them faster.
...
Cool work by @ivarflakstad
Co-authored-by: Ivar Flakstad <69173633+ivarflakstad@users.noreply.github.com>
2023-11-11 17:18:00 +01:00
02c2ec2c71
Adding indexing.
...
Co-authored-by: Ivar Flakstad <69173633+ivarflakstad@users.noreply.github.com>
2023-11-11 17:18:00 +01:00
9a2784b8ab
Refactor to simplify our lives for setting the params in the encoder.
2023-11-11 17:18:00 +01:00
0f652f0e3d
Adding the actual backend.
2023-11-11 17:18:00 +01:00
ddee9dc1dd
Remove tracing.
2023-11-11 17:18:00 +01:00
fc9bb7784a
Metal part 1 - Scaffolding for metal.
2023-11-11 17:18:00 +01:00
f1e678b39c
Mention the Yi-6b/Yi-34b models in the readme. (#1321)
2023-11-11 12:39:11 +01:00
a007f8fdb4
Add the Yi-6b and Yi-34b models. (#1320)
...
* Add the Yi-6b model.
* Add the 34b model.
* Add the yi example.
* Fix the weight file names.
2023-11-11 12:00:48 +01:00
2341aa079e
Fix quantized zephyr chat prompt (#1314) (#1317)
...
* Fix quantized zephyr chat prompt (#1314)
* Avoid using a mutable variable.
---------
Co-authored-by: Laurent <laurent.mazare@gmail.com>
2023-11-11 09:14:12 +01:00
9e666d4229
Add the var method. (#1315)
...
* Add the var method.
* Add a test.
2023-11-10 22:47:57 +01:00
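The "Add the var method" commit above adds a variance reduction to tensors. The actual signature is not shown in the log; the reduction it implements is, in essence, the following (done here over a plain slice, and note the divisor is an assumption — a population variance divides by n, an unbiased one by n − 1):

```rust
/// Population variance of a slice: mean of squared deviations.
/// (An unbiased `var` would divide by `n - 1` instead of `n`;
/// which convention the tensor method uses is not shown in the log.)
fn var(xs: &[f64]) -> f64 {
    let n = xs.len() as f64;
    let mean = xs.iter().sum::<f64>() / n;
    xs.iter().map(|x| (x - mean).powi(2)).sum::<f64>() / n
}

fn main() {
    // mean = 2.5, squared deviations sum to 5.0, so var = 1.25
    println!("{}", var(&[1.0, 2.0, 3.0, 4.0]));
}
```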
1b12142a02
Add min to buckets in relative_position_bucket (#1312)
2023-11-10 11:57:25 +01:00
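T5-style `relative_position_bucket` maps large relative distances to buckets logarithmically, and without a clamp the computed index can run past the bucket table — which is presumably what the `min` in the commit above fixes. A sketch of the log-bucket step (parameter names follow the common T5 formulation and are assumptions, not candle's exact code):

```rust
/// Log-spaced bucketing for distances beyond `max_exact`, as in
/// T5-style relative position biases. The `.min(num_buckets - 1)`
/// clamp is the point of the fix: without it, very distant
/// positions produce an index past the end of the bucket table.
fn log_bucket(relative_position: i64, max_exact: i64, num_buckets: i64, max_distance: i64) -> i64 {
    let rp = relative_position as f64;
    let bucket = max_exact
        + ((rp / max_exact as f64).ln() / (max_distance as f64 / max_exact as f64).ln()
            * (num_buckets - max_exact) as f64) as i64;
    bucket.min(num_buckets - 1)
}

fn main() {
    // A distance of 1000 with 32 buckets would compute index 47
    // without the clamp; with it, the index stays in range.
    println!("{}", log_bucket(1000, 16, 32, 128));
}
```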
d2c3f14773
Fix for flash-attn. (#1310)
...
Co-authored-by: laurent <laurent@par2dc5-ai-prd-cl01dgx02.cm.cluster>
2023-11-10 10:27:27 +01:00
26c4e5bf1d
Metal part 1 - Scaffolding for metal. (#1308)
...
* Metal part 1 - Scaffolding for metal.
* Remove tracing.
2023-11-10 08:35:48 +01:00
18d30005c5
Add support for the UL2 model family (#1300)
...
* Add support for the UL2 model family
* Update docs with UL2
* Create ActivationWithOptionalGating to avoid polluting activations
* Also refactor quantized t5
* Remove useless conversion
* Revert Activation::NewGelu name change
* Remove useless return
* Apply rustfmt and clippy recommendations
* Reuse t5::ActivationWithOptionalGating in quantized version
* (cosmetic change) use a match rather than ifs + avoid early returns.
---------
Co-authored-by: Laurent <laurent.mazare@gmail.com>
2023-11-09 18:55:09 +01:00
6958384327
Add support for the TrOCR model (#1303)
...
* add bce with logit loss
* add bce with logit loss
* remove imports
* fix tiny bug
* add test documentation and refactor function
* fix test cases and formatting
* add trocr model
* fix formatting
* commit the actual model lol
* more formatting
* remove tokenizer config
2023-11-09 18:49:17 +01:00
e6697471bb
Add weight and bias functions to LayerNorm (#1306)
2023-11-09 16:09:01 +01:00
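The commit above exposes a layer norm's learned parameters through accessor methods. As a minimal self-contained sketch of what that shape of API looks like — field and method names are assumptions, and the real struct lives in candle-nn and operates on tensors, not `Vec<f64>`:

```rust
/// Minimal layer norm over a 1-D slice, with `weight`/`bias`
/// accessors of the kind the commit describes.
struct LayerNorm {
    weight: Vec<f64>, // per-feature scale (gamma)
    bias: Vec<f64>,   // per-feature shift (beta)
    eps: f64,
}

impl LayerNorm {
    fn weight(&self) -> &[f64] {
        &self.weight
    }

    fn bias(&self) -> &[f64] {
        &self.bias
    }

    /// Normalize to zero mean / unit variance, then scale and shift.
    fn forward(&self, xs: &[f64]) -> Vec<f64> {
        let n = xs.len() as f64;
        let mean = xs.iter().sum::<f64>() / n;
        let var = xs.iter().map(|x| (x - mean).powi(2)).sum::<f64>() / n;
        let denom = (var + self.eps).sqrt();
        xs.iter()
            .zip(&self.weight)
            .zip(&self.bias)
            .map(|((x, w), b)| (x - mean) / denom * w + b)
            .collect()
    }
}

fn main() {
    let ln = LayerNorm { weight: vec![1.0, 1.0], bias: vec![0.0, 0.0], eps: 1e-12 };
    println!("gamma = {:?}", ln.weight());
    println!("{:?}", ln.forward(&[1.0, 3.0]));
}
```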
73d02f4f57
fix: negative axis (#1296)
...
* fix: negative axis
* Use normalize_axis.
---------
Co-authored-by: Laurent <laurent.mazare@gmail.com>
2023-11-08 23:28:21 +01:00
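The negative-axis fix above routes through a `normalize_axis` helper. The idea — mapping a possibly-negative axis (NumPy-style, where -1 means the last dimension) to a concrete index — can be sketched as follows (the signature is an assumption; candle's helper returns its own error type, not a `String`):

```rust
/// Map a possibly-negative axis to a concrete dimension index:
/// -1 means the last axis, -rank the first. Out-of-range axes
/// are rejected rather than wrapped twice.
fn normalize_axis(axis: i64, rank: usize) -> Result<usize, String> {
    let rank = rank as i64;
    let a = if axis < 0 { axis + rank } else { axis };
    if a < 0 || a >= rank {
        Err(format!("axis {axis} out of range for rank {rank}"))
    } else {
        Ok(a as usize)
    }
}

fn main() {
    // For a rank-3 tensor, axis -1 is dimension 2.
    println!("{:?}", normalize_axis(-1, 3));
}
```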
f772213e84
Fix a bug introduced in the madlad PR (#1298)
2023-11-08 17:55:46 +01:00
2feb0b054f
Add the mel filters for 128 bins. (#1295)
2023-11-08 08:23:53 +01:00
2d28497197
Preliminary support for whisper v3. (#1294)
...
* Preliminary support for whisper v3.
* Add the missing files.
2023-11-08 06:42:52 +01:00
f3a4f3db76
PyO3: Add optional candle.onnx module (#1282)
...
* Start onnx integration
* Merge remote-tracking branch 'upstream/main' into feat/pyo3-onnx
* Implement ONNXModel
* `fmt`
* add `onnx` flag to python ci
* Pin `protoc` to `25.0`
* Setup `protoc` in wheel builds
* Build wheels with `onnx`
* Install `protoc` in manylinux containers
* `apt` -> `yum`
* Download `protoc` via bash script
* Back to `manylinux: auto`
* Disable `onnx` builds for linux
2023-11-08 06:37:50 +01:00
7920b45c8a
Support for timegroupnorm in encodec. (#1291)
2023-11-07 22:39:59 +01:00
d4a45c936a
Quantized model small tweaks (#1290)
...
* Support the shape op in ONNX.
* Share the axis normalization bits.
* Add some limited support for gather.
* Unsqueeze.
* Comparison with broadcasting.
* Add Not + handle i32.
* Tweaks for the quantized model.
2023-11-07 21:21:37 +01:00
c912d24570
Update README: Move T5 to Text to Text section (#1288)
...
I think it makes more sense to have it there, since it's a seq2seq model with cross attention, not an LM. There are also decoder-only T5 models that work as LMs, but that's not the standard.
2023-11-07 16:14:04 +01:00
d5c2a7b64b
Add info about MADLAD-400 in readme files (#1287)
2023-11-07 15:21:59 +01:00
508f811b93
Add support for MADLAD400 (#1285)
...
* Add support for madlad
* Add support for quantized MADLAD
2023-11-07 05:35:37 +01:00
a773a4b22b
[ONNX] Support a couple more ops. (#1284)
...
* Support the shape op in ONNX.
* Share the axis normalization bits.
* Add some limited support for gather.
* Unsqueeze.
* Comparison with broadcasting.
* Add Not + handle i32.
2023-11-06 22:44:58 +01:00
5a363dbc26
Adds check for 7b-zephyr and uses correct template (#1283)
...
* Adds check for 7b-zephyr and uses correct template
* Handle zephyr as mistral.
* Disable the protoc bits of the CI.
---------
Co-authored-by: Laurent <laurent.mazare@gmail.com>
2023-11-06 21:05:39 +01:00
abc4f698c5
Add candle-sampling (#1278)
2023-11-06 12:53:29 +01:00