Commit Graph

15 Commits

SHA1 Message Date
b34d7f0248 Remove some unused bits. (#1067) 2023-10-09 19:49:57 +01:00
27e70a5093 Whisper quantized wasm (#1028)
* [Whisper] Update to use quantized model

* [whisper] add language detection

* [whisper] change assets location

* [whisper] adapt js example with quantized models

* [whisper] better task parsing

* [whisper] minor fixes
2023-10-04 20:22:57 +01:00
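
The quantized-model change above is about shipping smaller weights to the browser. As a rough illustration only (these helpers are hypothetical, not the crate's quantization API), symmetric 8-bit quantization boils down to storing `i8` values plus one `f32` scale:

```rust
// Illustrative symmetric 8-bit quantization: weights are stored as i8
// plus a single f32 scale, and dequantized back to f32 on use.
fn quantize(xs: &[f32]) -> (Vec<i8>, f32) {
    let max_abs = xs.iter().fold(0f32, |m, &x| m.max(x.abs()));
    let scale = if max_abs == 0.0 { 1.0 } else { max_abs / 127.0 };
    let qs = xs.iter().map(|&x| (x / scale).round() as i8).collect();
    (qs, scale)
}

fn dequantize(qs: &[i8], scale: f32) -> Vec<f32> {
    qs.iter().map(|&q| f32::from(q) * scale).collect()
}

fn main() {
    let (qs, scale) = quantize(&[0.5, -1.0, 0.25]);
    // The round-trip is approximate: 4 bytes per weight become 1.
    println!("{:?}", dequantize(&qs, scale));
}
```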
7edd755756 Pass the buffer ownership directly. (#949) 2023-09-24 06:34:44 +01:00
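
Handing over ownership of a buffer instead of a borrow is a standard Rust move-semantics pattern; a minimal sketch with hypothetical names, assuming the goal is to avoid a defensive clone:

```rust
// Taking the buffer by value moves ownership into the callee; a `&[u8]`
// parameter would force a clone whenever the callee needs an owned copy.
fn load_weights(buffer: Vec<u8>) -> usize {
    buffer.len() // the callee now owns `buffer`; no copy was made
}

fn main() {
    let buffer = vec![0u8; 1024];
    let n = load_weights(buffer); // `buffer` is moved, not cloned
    println!("loaded {n} bytes");
}
```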
94aa234dfd Add the kv-cache to the whisper wasm version. (#689)
* Add the kv-cache to the whisper wasm version.

* Improve the handling of special tokens.
2023-08-31 09:37:44 +01:00
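
A kv-cache keeps the attention keys/values of already-decoded tokens so each step only computes projections for the new token. A self-contained sketch of the idea (not the whisper implementation itself):

```rust
// Minimal kv-cache sketch: each decoding step appends the new token's
// key/value rows so earlier positions are reused rather than recomputed.
#[derive(Default)]
struct KvCache {
    keys: Vec<Vec<f32>>,   // one row per cached position
    values: Vec<Vec<f32>>, // same length as `keys`
}

impl KvCache {
    fn append(&mut self, k: Vec<f32>, v: Vec<f32>) {
        self.keys.push(k);
        self.values.push(v);
    }
    fn len(&self) -> usize {
        self.keys.len()
    }
}

fn main() {
    let mut cache = KvCache::default();
    for step in 0..3 {
        // In a real decoder these come from projecting the new token.
        cache.append(vec![step as f32], vec![step as f32]);
    }
    println!("cached positions: {}", cache.len());
}
```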
1d0bb48fae Improve Whisper WASM UI example (#669)
* wip add module and js worker example

* params

* clean up, send error

* final UI with whisper webworker

* add simple instructions
2023-08-30 20:35:41 +02:00
a044907ffc Dilated convolutions (#657)
* Add the dilation parameter.

* Restore the basic optimizer example.

* Dilation support in cudnn.

* Use the dilation parameter in the cpu backend.

* More dilation support.

* No support for dilation in transposed convolutions.

* Add dilation to a test.

* Remove a print.

* Helper function.
2023-08-29 16:12:11 +01:00
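
Dilation spaces the kernel taps `dilation` elements apart, widening the receptive field without adding weights. The standard size arithmetic, in an illustrative helper (not the crate's code):

```rust
// Effective kernel extent with dilation d is d * (k - 1) + 1; the usual
// 1D convolution output length follows from padding/stride/dilation.
fn conv1d_out_len(len: usize, k: usize, padding: usize, stride: usize, dilation: usize) -> usize {
    let effective_k = dilation * (k - 1) + 1;
    (len + 2 * padding - effective_k) / stride + 1
}

fn main() {
    // A k=3 kernel with dilation=2 covers 5 input positions per output.
    assert_eq!(conv1d_out_len(10, 3, 0, 1, 1), 8);
    assert_eq!(conv1d_out_len(10, 3, 0, 1, 2), 6);
}
```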
72ebb12bca Remove some dead-code annotations. (#629)
* Remove some dead-code annotations.

* More dead code removal.

* One more.

* CI fix.
2023-08-27 18:52:33 +01:00
aba1e90797 Add a groups parameter to convolutions. (#566)
* Add a groups parameter to convolutions.

* Avoid some unnecessary groups checks.

* Move the tensor convolution bits.

* Proper handling of groups.

* Bump the crate version.

* And add a changelog.
2023-08-23 12:58:55 +01:00
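
With `groups = g`, the input and output channels are split into `g` independent slices that are convolved separately, so both channel counts must be divisible by `g`. An illustrative check of that invariant (names are assumptions, not the crate's code):

```rust
// Grouped convolution splits channels into `groups` independent slices;
// group i sees c_in/groups input channels and emits c_out/groups outputs.
fn group_sizes(c_in: usize, c_out: usize, groups: usize) -> Result<(usize, usize), String> {
    if c_in % groups != 0 || c_out % groups != 0 {
        return Err(format!("channels ({c_in}, {c_out}) not divisible by groups {groups}"));
    }
    Ok((c_in / groups, c_out / groups))
}

fn main() {
    // groups == c_in gives a depthwise convolution as a special case.
    assert_eq!(group_sizes(64, 128, 4), Ok((16, 32)));
    assert!(group_sizes(64, 100, 8).is_err());
}
```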
c78ce76501 Add a simple Module trait and implement it for the various nn layers (#500)
* Start adding the module trait.

* Use the module trait.

* Implement module for qmatmul.
2023-08-18 09:38:22 +01:00
c84883ecf2 Add a cuda kernel for upsampling. (#441)
* Add a cuda kernel for upsampling.

* Update for the latest tokenizers version.
2023-08-14 13:12:17 +01:00
3eb2bc6d07 Softmax numerical stability. (#267)
* Softmax numerical stability.

* Fix the flash-attn test.
2023-07-28 13:13:01 +01:00
1735e4831e TP sharding v2 2023-07-27 09:58:14 +02:00
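
TP here stands for tensor parallelism: large weight matrices are sharded across ranks, each rank computing its slice of the matmul. A toy column-sharding sketch (names and layout are assumptions, not the actual sharding code):

```rust
// Column-shard a row-major (rows x cols) matrix across `world_size`
// ranks; rank r gets the contiguous column slice of width cols/ws.
fn shard_columns(w: &[f32], rows: usize, cols: usize, rank: usize, world_size: usize) -> Vec<f32> {
    let shard = cols / world_size; // assumes cols divisible by world_size
    let start = rank * shard;
    let mut out = Vec::with_capacity(rows * shard);
    for r in 0..rows {
        out.extend_from_slice(&w[r * cols + start..r * cols + start + shard]);
    }
    out
}

fn main() {
    // A 2x4 matrix split across 2 ranks: rank 0 gets columns 0..2.
    let w = vec![0., 1., 2., 3., 10., 11., 12., 13.];
    assert_eq!(shard_columns(&w, 2, 4, 0, 2), vec![0., 1., 10., 11.]);
}
```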
209f06d7c3 Micro-cleanup. (#256) 2023-07-27 07:55:54 +01:00
035372248e Simple QOL.
- Add ms/token on llama2.c (15 ms/token on my personal machine)
- Hide `Run` buttons while models are not ready
- Add a dummy `progress` while weights are downloading (I briefly looked
  at putting up a real progress bar, but nothing easy enough came up.)
2023-07-26 15:17:32 +02:00
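
The ms/token figure above is just wall-clock time divided by tokens generated; a sketch of the measurement, with a sleep standing in for a decoding step:

```rust
use std::time::{Duration, Instant};

fn main() {
    let n_tokens = 100u32;
    let start = Instant::now();
    for _ in 0..n_tokens {
        // Stand-in for one decoding step of the model.
        std::thread::sleep(Duration::from_millis(1));
    }
    let ms_per_token = start.elapsed().as_secs_f64() * 1e3 / f64::from(n_tokens);
    println!("{ms_per_token:.2} ms/token");
}
```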
5a26cba733 Re-organize the wasm examples (#231)
* Move the whisper example.

* More renaming.

* Add llama2 as a new wasm example.

* Live generation.

* More of the llama wasm example.

* Formatting.
2023-07-24 12:36:02 +01:00