Commit Graph

84 Commits

SHA1 Message Date
e82fcf1c59 Add more example readmes. (#828)
* Add more readmes.

* Add a readme for dinov2.

* Add some skeleton files for a couple more examples.

* More whisper details.
2023-09-12 17:21:24 +01:00
d3f05eae8c Move some models to candle-transformers so that it's easier to re-use. (#794)
* Move some models to candle-transformers so that they can be shared.

* Also move falcon.

* Move Llama.

* Move whisper (partial).
2023-09-10 09:40:27 +01:00
158ff3c609 Add tracing to segment-anything (#777)
* Tracing support for segment-anything.

* More tracing.

* Handle the empty slice case.
2023-09-08 15:31:29 +01:00
19042962d5 Whisper fix (#711)
* Remove unnecessary file.

* Whisper fix.
2023-09-01 20:04:07 +01:00
9874d843f1 Fix the accelerate build (#678)
* Cosmetic changes.

* Fix the accelerate build for tanh.
2023-08-30 18:31:14 +02:00
a044907ffc Dilated convolutions (#657)
* Add the dilation parameter.

* Restore the basic optimizer example.

* Dilation support in cudnn.

* Use the dilation parameter in the cpu backend.

* More dilation support.

* No support for dilation in transposed convolutions.

* Add dilation to a test.

* Remove a print.

* Helper function.
2023-08-29 16:12:11 +01:00
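The dilated-convolutions commit above threads a `dilation` parameter through the backends. As a rough illustration (plain Rust, not candle's API; the helper name `conv1d_out_len` is made up), dilation spaces out the kernel taps so a kernel of size k covers a span of dilation * (k - 1) + 1 inputs, which is what shows up in the output-length formula:

```rust
/// Output length of a 1D convolution with dilation, using the usual formula:
/// the effective kernel size is dilation * (k - 1) + 1.
fn conv1d_out_len(in_len: usize, k: usize, stride: usize, padding: usize, dilation: usize) -> usize {
    let effective_k = dilation * (k - 1) + 1;
    (in_len + 2 * padding - effective_k) / stride + 1
}

fn main() {
    // A kernel of size 3 with dilation 2 covers the same span as a size-5 kernel.
    assert_eq!(conv1d_out_len(10, 3, 1, 0, 1), 8);
    assert_eq!(conv1d_out_len(10, 3, 1, 0, 2), 6);
}
```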
72ebb12bca Remove some dead-code annotations. (#629)
* Remove some dead-code annotations.

* More dead code removal.

* One more.

* CI fix.
2023-08-27 18:52:33 +01:00
aba1e90797 Add a groups parameter to convolutions. (#566)
* Add a groups parameter to convolutions.

* Avoid some unnecessary groups checks.

* Move the tensor convolution bits.

* Proper handling of groups.

* Bump the crate version.

* And add a changelog.
2023-08-23 12:58:55 +01:00
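For the grouped convolutions added above, the input and output channels are split into `groups` independent convolutions, so both channel counts must be divisible by the group count. A minimal sketch of that check (the `group_channels` helper is illustrative, not candle's API):

```rust
/// Per-group channel counts for a grouped convolution; both channel
/// dimensions must be divisible by the number of groups.
fn group_channels(c_in: usize, c_out: usize, groups: usize) -> Result<(usize, usize), String> {
    if groups == 0 || c_in % groups != 0 || c_out % groups != 0 {
        return Err(format!("invalid groups {groups} for c_in {c_in} and c_out {c_out}"));
    }
    Ok((c_in / groups, c_out / groups))
}

fn main() {
    // groups == c_in gives a depthwise convolution: each input channel is convolved separately.
    assert_eq!(group_channels(8, 8, 8), Ok((1, 1)));
    assert_eq!(group_channels(16, 32, 4), Ok((4, 8)));
}
```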
cc22d4db20 Put the transcribe token before the language one. (#553) 2023-08-22 16:46:34 +01:00
cc2d6cf2e0 Improve the timestamps support in whisper (#539)
* Timestamp support for whisper.

* Properly display the timestamps.

* Bugfix for the timestamp units.
2023-08-21 12:26:59 +01:00
c78ce76501 Add a simple Module trait and implement it for the various nn layers (#500)
* Start adding the module trait.

* Use the module trait.

* Implement module for qmatmul.
2023-08-18 09:38:22 +01:00
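The `Module` trait introduced above gives every layer a single `forward` entry point so layers can be composed generically. A minimal sketch of the shape of such a trait, with a placeholder `Tensor` type rather than candle's real tensors, and not necessarily the exact signature used by candle-nn:

```rust
// Placeholder tensor type for illustration only.
struct Tensor(Vec<f32>);

// A forward-only module trait: every layer exposes the same entry point.
trait Module {
    fn forward(&self, xs: &Tensor) -> Tensor;
}

/// ReLU as a trivial module implementation.
struct Relu;

impl Module for Relu {
    fn forward(&self, xs: &Tensor) -> Tensor {
        Tensor(xs.0.iter().map(|&x| x.max(0.0)).collect())
    }
}

fn main() {
    let out = Relu.forward(&Tensor(vec![-1.0, 2.0]));
    assert_eq!(out.0, vec![0.0, 2.0]);
}
```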
26fd37b348 Use the main branch of the HF repo where possible. (#498)
* Use the main branch of the HF repo where possible.

* And add the large model.
2023-08-18 08:18:30 +01:00
f056dcab21 Add medium model (#497) 2023-08-18 08:08:59 +01:00
3164cd24fa Replicate the sot-token logic from the Python implementation more accurately. (#491)
* Replicate the sot-token logic from the Python implementation more accurately.

* Add a flag to control the timestamp mode.
2023-08-17 16:59:36 +01:00
5f30c1e1e0 Add the whisper small model. (#490) 2023-08-17 15:48:34 +01:00
c84883ecf2 Add a cuda kernel for upsampling. (#441)
* Add a cuda kernel for upsampling.

* Update for the latest tokenizers version.
2023-08-14 13:12:17 +01:00
8bd2b22b33 Optimize the logit computations in the whisper example. (#434) 2023-08-13 22:00:13 +01:00
6d694554b8 Support longer sequences in language detection. (#428) 2023-08-13 13:16:15 +01:00
9aca398a4f More accelerate optimizations (#427)
* Add more tracing to the whisper example.

* Support accelerate in more examples.

* Use accelerate for pointwise functions.

* Use accelerate for binary operations too.

* Bugfix for binary operation: use the rhs before the lhs.
2023-08-13 12:53:34 +01:00
60cd1551ca Add a KV cache to whisper. (#426) 2023-08-12 21:17:08 +01:00
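A KV cache, as added to whisper above, stores the key/value projections from previous decoding steps so each new token only computes its own projections and attends over the cached ones. A toy sketch of the idea (not whisper's actual implementation):

```rust
/// A toy key/value cache: per decoding step we append the new key and value
/// vectors instead of recomputing the projections for the whole prefix.
#[derive(Default)]
struct KvCache {
    keys: Vec<Vec<f32>>,
    values: Vec<Vec<f32>>,
}

impl KvCache {
    /// Append the projections for the newly decoded position and return the
    /// full cached sequences for the attention computation.
    fn append(&mut self, k: Vec<f32>, v: Vec<f32>) -> (&[Vec<f32>], &[Vec<f32>]) {
        self.keys.push(k);
        self.values.push(v);
        (self.keys.as_slice(), self.values.as_slice())
    }
}

fn main() {
    let mut cache = KvCache::default();
    cache.append(vec![0.1, 0.2], vec![0.3, 0.4]);
    let (ks, _vs) = cache.append(vec![0.5, 0.6], vec![0.7, 0.8]);
    assert_eq!(ks.len(), 2); // both decoding steps are now cached
}
```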
a0908d212c Add a -language argument. (#425) 2023-08-12 17:08:40 +01:00
0741ebbd51 More multilingual support for whisper. (#419)
* More multilingual support for whisper.

* Use the language token appropriately.
2023-08-12 15:32:52 +01:00
0c3f109faa Basic multilingual support for whisper (#417)
* Multi-lingual support for whisper.

* Avoid hardcoding the token names.

* More multi-lingual support.

* Remove the todo.
2023-08-12 11:23:04 +01:00
91dbf907d3 Add more whisper variants. (#413) 2023-08-11 17:33:55 +01:00
c3a0761e62 Add some tracing to the whisper example. (#375) 2023-08-09 19:58:36 +01:00
a3b1699409 Embed the mel filters in the whisper binary. (#373) 2023-08-09 18:27:26 +01:00
3eb2bc6d07 Softmax numerical stability. (#267)
* Softmax numerical stability.

* Fix the flash-attn test.
2023-07-28 13:13:01 +01:00
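The numerical-stability fix above is the standard one for softmax: subtracting the row maximum before exponentiating leaves the result unchanged but keeps `exp` from overflowing for large logits. A plain-Rust sketch (not candle's tensor op):

```rust
/// Numerically stable softmax: subtracting the maximum before exponentiating
/// does not change the result but prevents exp() from overflowing.
fn softmax(xs: &[f32]) -> Vec<f32> {
    let max = xs.iter().cloned().fold(f32::NEG_INFINITY, f32::max);
    let exps: Vec<f32> = xs.iter().map(|x| (x - max).exp()).collect();
    let sum: f32 = exps.iter().sum();
    exps.iter().map(|e| e / sum).collect()
}

fn main() {
    // With large logits a naive exp(x) would overflow to infinity; this does not.
    let probs = softmax(&[1000.0, 1001.0, 1002.0]);
    assert!(probs.iter().all(|p| p.is_finite()));
    assert!((probs.iter().sum::<f32>() - 1.0).abs() < 1e-6);
}
```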
ca479a873e Upgrading hf-hub to 0.2.0 (Modified API to not pass the Repo around all the time) 2023-07-27 20:05:02 +02:00
209f06d7c3 Micro-cleanup. (#256) 2023-07-27 07:55:54 +01:00
43c7223292 Rename the .r functions to .dims so as to be a bit more explicit. (#220) 2023-07-22 10:39:27 +01:00
9515e8ea6c Merge branch 'main' into remove_wrapper 2023-07-19 18:53:55 +02:00
dfd624dbd3 [Proposal] Remove SafeTensor wrapper (allows finer control for users). 2023-07-19 16:25:44 +02:00
439321745a Removing candle-hub internal to extract into hf-hub standalone. 2023-07-19 15:04:38 +02:00
66750f9827 Add some 'cuda-if-available' helper function. (#172) 2023-07-15 08:25:15 +01:00
4ed56d7861 Removing cuda default.
This seems important for many users exploring the library, who are often on laptops without GPUs.

More README instructions will be added in a follow-up.
2023-07-14 16:52:15 +02:00
50b0946a2d Tensor mutability (#154)
* Working towards tensor mutability.

* Use a ref-cell to provide tensor mutability.
2023-07-13 11:04:40 +01:00
eae646d322 Use arange in the examples. (#146) 2023-07-12 12:12:34 +01:00
b46c28a2ac VarBuilder path creation (#131)
* Use a struct for the safetensor+routing.

* Group the path and the var-builder together.

* Fix for the empty path case.
2023-07-10 22:37:34 +01:00
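The var-builder/path grouping above lets a model request weights by nested names without manual string plumbing, with the empty-path case handled at the root. A stand-in sketch of the idea (the `PathBuilder` type here is illustrative, not candle-nn's actual `VarBuilder`):

```rust
// Each `pp` (push prefix) call returns a builder scoped to a nested name,
// so a layer can ask for "encoder.layer0.weight" without assembling strings.
struct PathBuilder {
    prefix: Vec<String>,
}

impl PathBuilder {
    fn root() -> Self {
        Self { prefix: Vec::new() }
    }

    fn pp(&self, name: &str) -> Self {
        let mut prefix = self.prefix.clone();
        prefix.push(name.to_string());
        Self { prefix }
    }

    fn full_name(&self, name: &str) -> String {
        // Handle the empty-path case: no leading dot at the root.
        if self.prefix.is_empty() {
            name.to_string()
        } else {
            format!("{}.{}", self.prefix.join("."), name)
        }
    }
}

fn main() {
    let vb = PathBuilder::root().pp("encoder").pp("layer0");
    assert_eq!(vb.full_name("weight"), "encoder.layer0.weight");
}
```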
1aa7fbbc33 Move the var-builder in a central place. (#130) 2023-07-10 20:49:50 +01:00
89a5b602a6 Move the conv1d layer to candle_nn. (#117) 2023-07-10 11:02:06 +01:00
b06e1a7e54 [nn] Move the Embedding and Activation parts. (#116)
* Share the Embedding and Activation parts.

* Tweak some activations.
2023-07-10 10:24:52 +01:00
9ce0f1c010 Sketch the candle-nn crate. (#115)
* Sketch the candle-nn crate.

* Tweak the cuda dependencies.

* More cuda tweaks.
2023-07-10 08:50:09 +01:00
c187f347bf Make it easier to use whisper samples from the repo. (#112)
* Make it easier to use samples from the repo.

* Use f32 for accumulation in the f16/bf16 kernels.
2023-07-08 18:48:27 +01:00
115629fe08 Creating a new sync Api for candle-hub.
- `api::Api` -> `api::tokio::api` (and created a new `api::sync::Api`).
- Remove `tokio` from all our examples.
- Using a similar codebase for now instead of ureq (for simplicity).
2023-07-06 15:15:25 +02:00
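A hedged usage sketch of the blocking hub API described above, roughly along the lines of hf-hub's `api::sync` module; the exact method names, error types, repo id and file name below are assumptions and may differ across versions:

```rust
// Assumes an hf-hub dependency with a sync API; identifiers below are
// illustrative and may not match a given hf-hub release exactly.
use hf_hub::api::sync::Api;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let api = Api::new()?;
    // Fetch a file from a model repo without pulling in a tokio runtime.
    let repo = api.model("openai/whisper-tiny.en".to_string());
    let weights = repo.get("model.safetensors")?;
    println!("downloaded weights to {}", weights.display());
    Ok(())
}
```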
c297a50960 Add mkl support for matrix multiply. (#86)
* Fix some rebase issues.

* Use mkl instead.

* Use mkl in bert.

* Add the optional mkl feature.

* Conditional compilation based on the mkl feature.

* Add more mkl support.
2023-07-06 11:05:05 +01:00
cd230d26fe Whisper tweaks (#85)
* Isolate the decoding bits of the whisper example.

* Decode -> Decoder.

* Add the suppress tokens filter.

* More suppress tokens.
2023-07-06 09:13:20 +01:00
d3418f1cff Add the original whisper names as comment. 2023-07-06 07:57:03 +01:00
19ab5ea411 Merge pull request #78 from LaurentMazare/whisper_update
Adapting whisper for Hub use.
2023-07-06 07:21:58 +01:00
2c3d871b2e Add a simpler way to specify the dim index for some ops. 2023-07-05 20:22:43 +01:00
653c5049f8 Adding auto download of audio file. 2023-07-05 15:21:53 +00:00