Mirror of https://github.com/huggingface/candle.git (synced 2025-06-16 10:38:54 +00:00)
Update the wasm example locations in the readme. (#306)
README.md: 13 changed lines

@@ -1,5 +1,12 @@
# candle
ML framework for Rust
[![Latest version](https://img.shields.io/crates/v/candle-core.svg)](https://crates.io/crates/candle-core)
[![Documentation](https://docs.rs/candle-core/badge.svg)](https://docs.rs/candle-core)
![License](https://img.shields.io/crates/l/candle-core.svg)

Candle is a minimalist ML framework for Rust with a focus on ease of use and
on performance (including GPU support). Try our online demos:
[whisper](https://huggingface.co/spaces/lmz/candle-whisper),
[llama2](https://huggingface.co/spaces/lmz/candle-llama2).
```rust
let a = Tensor::randn(0f32, 1., (2, 3), &Device::Cpu)?;
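// Not part of this commit's diff: the hunk truncates the README snippet here.
// A hedged sketch of how it might continue, assuming the candle-core API shown
// above (Tensor::randn, matmul, Device::Cpu); the (3, 4) shape and the final
// println! are illustrative only.
let b = Tensor::randn(0f32, 1., (3, 4), &Device::Cpu)?;
let c = a.matmul(&b)?;
println!("{c}");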
@@ -34,8 +41,8 @@ In order to use **CUDA** add `--features cuda` to the example command line.
There are also some wasm examples for whisper and
[llama2.c](https://github.com/karpathy/llama2.c). You can either build them with
`trunk` or try them online:
-[whisper](https://laurentmazare.github.io/candle-whisper/index.html),
-[llama2](https://laurentmazare.github.io/candle-llama2/index.html).
+[whisper](https://huggingface.co/spaces/lmz/candle-whisper),
+[llama2](https://huggingface.co/spaces/lmz/candle-llama2).
For llama2, run the following command to retrieve the weight files and start a
test server: