0b24f7f0a4
Fix for whisper example. rand::distributions is now rand::distr ( #2811 )
2025-03-16 19:14:55 +01:00
936300678d
Add whisper large-v3 turbo to the example. ( #2531 )
2024-10-02 21:07:08 +02:00
0c11e055be
Support distil-large-v3 ( #1898 )
2024-03-21 11:46:49 +01:00
dfab45e1c8
Supports more audio formats ( #1628 )
...
* Supports more audio formats
* Simplify the handling of the different buffer types.
* Check the sample rate.
---------
Co-authored-by: laurent <laurent.mazare@gmail.com>
2024-02-03 14:26:04 +01:00
403680f17d
Quantized GGUF style ( #1523 )
...
* Metal quantized modifications proposal.
- Add a device param, wherever needed.
- Create new QMetal storage thing that implements QuantizedType.
- Update everywhere needed.
Fix Python.
Fixing examples.
Fix: fmt + clippy + stub.
Moving everything around.
Only missing the actual implems.
Fixing everything + adding dequantized kernels.
More work.
Fixing matmul.
Fmt + Clippy
Some clippy fixes.
Working state.
Q2K Metal -> Bugged (also present in GGML).
Q4K CPU -> Bugged (present previously, new test catches it).
Q5K CPU -> Bugged (present previously).
Q8_1 Both -> Never really implemented, it seems
Q8K Metal -> Never implemented in Metal
Fixing Q2K bug (present in ggml).
* Cleanup.
* Fix the rebase.
* Removing the fences speeds everything up and *is* correct this time...
* Cleanup the fence.
* After rebase.
* Bad code removal.
* Rebase after phi2 merge + fix replit default to CPU.
* Making the CI happy.
* More happy tests.
---------
Co-authored-by: Nicolas Patry <nicolas@Nicolass-MacBook-Pro.local>
2024-01-17 10:27:58 +01:00
9ab3f9729f
Use the whisper-v3 tokenizer now that it has been added. ( #1337 )
...
* Use the whisper-v3 tokenizer now that it has been added.
* Use the appropriate nospeech token.
2023-11-16 22:10:31 +00:00
2feb0b054f
Add the mel filters for 128 bins. ( #1295 )
2023-11-08 08:23:53 +01:00
2d28497197
Preliminary support for whisper v3. ( #1294 )
...
* Preliminary support for whisper v3.
* Add the missing files.
2023-11-08 06:42:52 +01:00
e08fbb6543
Add support for distil whisper ( #1245 )
...
* Add support for distil-whisper.
* Add distil-large.
* Rename the large model.
2023-11-02 19:32:35 +01:00
11d3687cc6
Simd128 optimized q8k vecdot. ( #1026 )
2023-10-03 15:29:48 +01:00
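For context on the q8k vecdot commit above, ggml-style quantized dot products work on blocks of int8 quants with a per-block f32 scale; the SIMD work optimizes the integer inner loop. The following is a simplified scalar sketch of that idea, not candle's actual kernel (the real Q8K block layout also carries additional per-sub-block sums and a different block size):

```rust
/// Simplified block layout: 32 int8 quants plus one f32 scale per block.
/// (Illustrative only; the real Q8K format differs.)
struct BlockQ8 {
    scale: f32,
    quants: [i8; 32],
}

/// Block-quantized dot product: accumulate integer products per block,
/// then apply both blocks' scales. SIMD versions vectorize the inner sum.
fn vec_dot(a: &[BlockQ8], b: &[BlockQ8]) -> f32 {
    a.iter()
        .zip(b)
        .map(|(x, y)| {
            let isum: i32 = x
                .quants
                .iter()
                .zip(y.quants.iter())
                .map(|(&p, &q)| p as i32 * q as i32)
                .sum();
            x.scale * y.scale * isum as f32
        })
        .sum()
}

fn main() {
    let a = vec![BlockQ8 { scale: 0.5, quants: [1; 32] }];
    let b = vec![BlockQ8 { scale: 2.0, quants: [3; 32] }];
    // 32 lanes * (1 * 3) = 96, scaled by 0.5 * 2.0 = 1.0
    assert_eq!(vec_dot(&a, &b), 96.0);
    println!("ok");
}
```

Keeping the accumulation in integers until the final per-block scaling is what makes this loop amenable to simd128 (and other SIMD) intrinsics.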
089fc3b584
Improve the quantized whisper setup. ( #1018 )
...
* Improve the quantized whisper setup.
* Fix the config file paths.
* Use the standard matmul where possible.
2023-10-02 17:17:46 +01:00
e04c789230
Add a quantized variant of whisper ( #1017 )
...
* Add the quantized-whisper model.
* Quantized the whisper model.
* Adapt the whisper example to handle quantization.
* Add the quantized flag.
* Load the proper weights.
2023-10-02 14:59:53 +01:00
bb3471ea31
Adapt more examples to the updated safetensor api. ( #947 )
...
* Simplify the safetensor usage.
* Convert more examples.
* Move more examples.
* Adapt stable-diffusion.
2023-09-23 21:26:03 +01:00
d3f05eae8c
Move some models to candle-transformers so that it's easier to re-use. ( #794 )
...
* Move some models to candle-transformers so that they can be shared.
* Also move falcon.
* Move Llama.
* Move whisper (partial).
2023-09-10 09:40:27 +01:00
158ff3c609
Add tracing to segment-anything ( #777 )
...
* Tracing support for segment-anything.
* More tracing.
* Handle the empty slice case.
2023-09-08 15:31:29 +01:00
19042962d5
Whisper fix ( #711 )
...
* Remove unnecessary file.
* Whisper fix.
2023-09-01 20:04:07 +01:00
cc22d4db20
Put the transcribe token before the language one. ( #553 )
2023-08-22 16:46:34 +01:00
cc2d6cf2e0
Improve the timestamps support in whisper ( #539 )
...
* Timestamp support for whisper.
* Properly display the timestamps.
* Bugfix for the timestamp units.
2023-08-21 12:26:59 +01:00
26fd37b348
Use the main branch of the HF repo where possible. ( #498 )
...
* Use the main branch of the HF repo where possible.
* And add the large model.
2023-08-18 08:18:30 +01:00
f056dcab21
Add medium model ( #497 )
2023-08-18 08:08:59 +01:00
3164cd24fa
Replicate the sot-token logic from the Python implementation more accurately. ( #491 )
...
* Replicate the sot-token logic from the Python implementation more accurately.
* Add a flag to control the timestamp mode.
2023-08-17 16:59:36 +01:00
5f30c1e1e0
Add the whisper small model. ( #490 )
2023-08-17 15:48:34 +01:00
c84883ecf2
Add a cuda kernel for upsampling. ( #441 )
...
* Add a cuda kernel for upsampling.
* Update for the latest tokenizers version.
2023-08-14 13:12:17 +01:00
8bd2b22b33
Optimize the logit computations in the whisper example. ( #434 )
2023-08-13 22:00:13 +01:00
9aca398a4f
More accelerate optimizations ( #427 )
...
* Add more tracing to the whisper example.
* Support accelerate in more examples.
* Use accelerate for pointwise functions.
* Use accelerate for binary operations too.
* Bugfix for binary operation: use the rhs before the lhs.
2023-08-13 12:53:34 +01:00
60cd1551ca
Add a KV cache to whisper. ( #426 )
2023-08-12 21:17:08 +01:00
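The KV cache added above is the standard autoregressive-decoding optimization: each attention layer stores the key/value projections of already-decoded tokens so only the new token's projections are computed per step. A minimal std-only sketch of the idea (hypothetical types, not candle's API):

```rust
/// Minimal KV-cache sketch: one cached key and value vector per
/// previously decoded position.
struct KvCache {
    keys: Vec<Vec<f32>>,
    values: Vec<Vec<f32>>,
}

impl KvCache {
    fn new() -> Self {
        Self { keys: Vec::new(), values: Vec::new() }
    }

    /// Append the projections for the latest token and return the full
    /// history, which attention then scores against the current query.
    fn append(&mut self, k: Vec<f32>, v: Vec<f32>) -> (&[Vec<f32>], &[Vec<f32>]) {
        self.keys.push(k);
        self.values.push(v);
        (&self.keys, &self.values)
    }
}

fn main() {
    let mut cache = KvCache::new();
    cache.append(vec![0.1, 0.2], vec![0.3, 0.4]);
    let (ks, vs) = cache.append(vec![0.5, 0.6], vec![0.7, 0.8]);
    assert_eq!(ks.len(), 2); // two cached positions, neither recomputed
    assert_eq!(vs.len(), 2);
    println!("ok");
}
```

The win is that decoding step t costs O(t) attention work instead of re-running the projections over the whole prefix, which is why this commit speeds up the example noticeably.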
a0908d212c
Add a -language argument. ( #425 )
2023-08-12 17:08:40 +01:00
0741ebbd51
More multilingual support for whisper. ( #419 )
...
* More multilingual support for whisper.
* Use the language token appropriately.
2023-08-12 15:32:52 +01:00
0c3f109faa
Basic multilingual support for whisper ( #417 )
...
* Multi-lingual support for whisper.
* Avoid hardcoding the token names.
* More multi-lingual support.
* Remove the todo.
2023-08-12 11:23:04 +01:00
91dbf907d3
Add more whisper variants. ( #413 )
2023-08-11 17:33:55 +01:00
c3a0761e62
Add some tracing to the whisper example. ( #375 )
2023-08-09 19:58:36 +01:00
a3b1699409
Embed the mel filters in the whisper binary. ( #373 )
2023-08-09 18:27:26 +01:00
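Embedding the mel filters in the binary presumably means baking the raw coefficient bytes in at compile time (e.g. via `include_bytes!` on a file such as `melfilters.bytes`; the path here is hypothetical) and decoding them as little-endian f32 at startup, so no download is needed. A runnable sketch of the decoding half, with an inline byte array standing in for the embedded file:

```rust
/// Decode a flat buffer of little-endian f32 values, as one would for
/// mel-filter coefficients embedded with include_bytes!.
fn bytes_to_f32(raw: &[u8]) -> Vec<f32> {
    raw.chunks_exact(4)
        .map(|c| f32::from_le_bytes([c[0], c[1], c[2], c[3]]))
        .collect()
}

fn main() {
    // Stand-in for embedded data: two f32 values, 1.0 and -2.5.
    let raw = [1.0f32.to_le_bytes(), (-2.5f32).to_le_bytes()].concat();
    let filters = bytes_to_f32(&raw);
    assert_eq!(filters, vec![1.0, -2.5]);
    println!("ok");
}
```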
3eb2bc6d07
Softmax numerical stability. ( #267 )
...
* Softmax numerical stability.
* Fix the flash-attn test.
2023-07-28 13:13:01 +01:00
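The numerical-stability fix above is the classic softmax trick: subtract the per-slice maximum before exponentiating so `exp` never overflows for large logits; the result is mathematically unchanged. A std-only sketch of the technique (not candle's actual kernel):

```rust
/// Numerically stable softmax: shifting by the max leaves the result
/// unchanged (the factor exp(-max) cancels in numerator and denominator)
/// but keeps every exp() argument <= 0, so nothing overflows.
fn softmax(logits: &[f32]) -> Vec<f32> {
    let max = logits.iter().cloned().fold(f32::NEG_INFINITY, f32::max);
    let exps: Vec<f32> = logits.iter().map(|&x| (x - max).exp()).collect();
    let sum: f32 = exps.iter().sum();
    exps.iter().map(|e| e / sum).collect()
}

fn main() {
    // Without the shift, exp(1000.0) overflows to infinity and the
    // probabilities come out as NaN.
    let probs = softmax(&[1000.0, 1000.0]);
    assert!((probs[0] - 0.5).abs() < 1e-6);
    assert!(probs.iter().all(|p| p.is_finite()));
    println!("{probs:?}");
}
```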
ca479a873e
Upgrading hf-hub to 0.2.0 (Modified API to not pass the Repo around all the time)
2023-07-27 20:05:02 +02:00
43c7223292
Rename the .r functions to .dims so as to be a bit more explicit. ( #220 )
2023-07-22 10:39:27 +01:00
9515e8ea6c
Merge branch 'main' into remove_wrapper
2023-07-19 18:53:55 +02:00
dfd624dbd3
[Proposal] Remove SafeTensor wrapper (allows finer control for users).
2023-07-19 16:25:44 +02:00
439321745a
Removing candle-hub internal to extract into hf-hub standalone.
2023-07-19 15:04:38 +02:00
66750f9827
Add some 'cuda-if-available' helper function. ( #172 )
2023-07-15 08:25:15 +01:00
4ed56d7861
Removing cuda default.
...
Seems very important for the many users exploring the library, usually on laptops
without GPUs.
Adding more README instructions in a follow-up.
2023-07-14 16:52:15 +02:00
50b0946a2d
Tensor mutability ( #154 )
...
* Working towards tensor mutability.
* Use a ref-cell to provide tensor mutability.
2023-07-13 11:04:40 +01:00
1aa7fbbc33
Move the var-builder in a central place. ( #130 )
2023-07-10 20:49:50 +01:00
c187f347bf
Make it easier to use whisper samples from the repo. ( #112 )
...
* Make it easier to use samples from the repo.
* Use f32 for accumulation in the f16/bf16 kernels.
2023-07-08 18:48:27 +01:00
115629fe08
Creating new sync Api for candle-hub.
...
- `api::Api` -> `api::tokio::Api` (And created new `api::sync::Api`).
- Remove `tokio` from all our examples.
- Using similar codebase for now instead of ureq (for simplicity).
2023-07-06 15:15:25 +02:00
c297a50960
Add mkl support for matrix multiply. ( #86 )
...
* Fix some rebase issues.
* Use mkl instead.
* Use mkl in bert.
* Add the optional mkl feature.
* Conditional compilation based on the mkl feature.
* Add more mkl support.
2023-07-06 11:05:05 +01:00
cd230d26fe
Whisper tweaks ( #85 )
...
* Isolate the decoding bits of the whisper example.
* Decode -> Decoder.
* Add the suppress tokens filter.
* More suppress tokens.
2023-07-06 09:13:20 +01:00
19ab5ea411
Merge pull request #78 from LaurentMazare/whisper_update
...
Adapting whisper for Hub use.
2023-07-06 07:21:58 +01:00
2c3d871b2e
Add a simpler way to specify the dim index for some ops.
2023-07-05 20:22:43 +01:00
653c5049f8
Adding auto download of audio file.
2023-07-05 15:21:53 +00:00
e85573a4bd
Adapting whisper for Hub use.
2023-07-05 14:35:27 +00:00