1e442d4bb9
Fix lints for clippy 1.75. ( #1494 )
2023-12-28 20:26:20 +01:00
1e86717bf2
Fix a couple typos ( #1451 )
* Mixtral quantized instruct.
* Fix a couple typos.
2023-12-17 05:20:05 -06:00
805bf9ffa7
Implement top_p / nucleus sampling ( #819 )
* Implement top_p / nucleus sampling
* Update changelog
* rustfmt
* Add tests
* Fix clippy warning
* Fix another clippy error
2023-09-12 18:10:16 +02:00
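The top_p / nucleus sampling added in #819 can be sketched as follows. This is a minimal illustration of the technique, not candle's actual implementation: it assumes a plain slice of already-normalized probabilities, and the function name `top_p_filter` is hypothetical.

```rust
// Nucleus (top_p) sampling, reduced to its core filtering step:
// keep the smallest set of highest-probability tokens whose
// cumulative probability reaches top_p; sampling then happens
// only among those tokens.
fn top_p_filter(probs: &[f32], top_p: f32) -> Vec<usize> {
    // Sort token indices by descending probability.
    let mut indices: Vec<usize> = (0..probs.len()).collect();
    indices.sort_by(|&a, &b| probs[b].partial_cmp(&probs[a]).unwrap());

    // Accumulate until the nucleus mass reaches top_p.
    let mut cumulative = 0.0_f32;
    let mut kept = Vec::new();
    for &i in &indices {
        kept.push(i);
        cumulative += probs[i];
        if cumulative >= top_p {
            break;
        }
    }
    kept
}

fn main() {
    let probs = [0.5_f32, 0.3, 0.1, 0.05, 0.05];
    // With top_p = 0.7, the nucleus is tokens 0 and 1 (0.5 + 0.3 >= 0.7);
    // the low-probability tail is cut off entirely.
    let kept = top_p_filter(&probs, 0.7);
    assert_eq!(kept, vec![0, 1]);
    println!("kept tokens: {:?}", kept);
}
```

Compared to plain temperature sampling, this truncation removes the long tail of unlikely tokens, which is why it pairs naturally with the temperature display added in #278.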
52414ba5c8
Bugfix for the llama2 wasm example. ( #310 )
* Clean-up the llama2.c wasm example.
* Use a proper tokenizer.
* Add a prompt.
* Bugfix for the llama2 wasm example.
2023-08-02 17:32:36 +01:00
186c308d51
Wasm llama2 tweaks ( #309 )
* Clean-up the llama2.c wasm example.
* Use a proper tokenizer.
2023-08-02 15:49:43 +01:00
4fe8a02f88
Update the repo location. ( #305 )
2023-08-02 11:12:18 +01:00
ba2254556c
Display the temperature being used for text generation. ( #278 )
2023-07-30 09:53:05 +01:00
81bfa46702
Updated.
2023-07-26 15:21:50 +02:00
035372248e
Simple QOL.
- Add ms/token on llama2.c (15ms/token on my personal machine)
- Hide `Run` buttons while models are not ready
- Add dummy `progress` while weights are downloading (I briefly looked
at putting a real progress bar, and nothing easy enough came up.)
2023-07-26 15:17:32 +02:00
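The "15ms/token" figure reported above is the usual throughput metric for generation: elapsed wall-clock time divided by the number of tokens produced. A minimal sketch of how such a number can be measured (the helper name `ms_per_token` is hypothetical, not from the example's code):

```rust
use std::time::Instant;

// Average generation latency in milliseconds per token.
fn ms_per_token(elapsed_ms: f64, tokens: usize) -> f64 {
    elapsed_ms / tokens as f64
}

fn main() {
    let start = Instant::now();
    // ... generation loop would run here, counting emitted tokens ...
    let tokens_generated = 100;
    let elapsed_ms = start.elapsed().as_secs_f64() * 1000.0;
    println!("{:.2} ms/token", ms_per_token(elapsed_ms, tokens_generated));

    // e.g. 1500 ms spent generating 100 tokens -> 15 ms/token.
    assert_eq!(ms_per_token(1500.0, 100), 15.0);
}
```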
97990f4afc
Add number of tokens.
2023-07-26 14:57:20 +02:00
160ba09d30
Polish the llama2 wasm ui. ( #232 )
* Polish the llama2 wasm ui.
* readme update.
2023-07-24 15:28:27 +01:00
5a26cba733
Re-organize the wasm examples ( #231 )
* Move the whisper example.
* More renaming.
* Add llama2 as a new wasm example.
* Live generation.
* More of the llama wasm example.
* Formatting.
2023-07-24 12:36:02 +01:00