mirror of https://github.com/huggingface/candle.git
Enable the image crate by default in examples (#501)
* Enable the image crate by default so that it's easier to compile the
  stable diffusion example.
* Also update the readme.
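For context, the stable diffusion example needs the `image` crate in order to write its generated samples to disk. A minimal sketch of that pattern, assuming an RGB `u8` pixel buffer and using `anyhow` for errors; the helper name and buffer layout are illustrative, not the example's actual code:

```rust
use image::{ImageBuffer, Rgb};

// Illustrative helper: save a row-major RGB u8 buffer as a PNG.
fn save_rgb_png(pixels: Vec<u8>, width: u32, height: u32, path: &str) -> anyhow::Result<()> {
    // from_raw returns None when the buffer length is not width * height * 3.
    let img: ImageBuffer<Rgb<u8>, Vec<u8>> = ImageBuffer::from_raw(width, height, pixels)
        .ok_or_else(|| anyhow::anyhow!("pixel buffer does not match {width}x{height}"))?;
    // The output format is inferred from the file extension.
    img.save(path)?;
    Ok(())
}
```

Making the crate a default dependency (below) means this kind of code compiles without passing `--features image`.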
@@ -37,10 +37,11 @@ cargo run --example llama --release
 cargo run --example falcon --release
 cargo run --example bert --release
 cargo run --example bigcode --release
-cargo run --example stable-diffusion --release --features image -- --prompt "a rusty robot holding a fire torch"
+cargo run --example stable-diffusion --release -- --prompt "a rusty robot holding a fire torch"
 ```
 
-In order to use **CUDA** add `--features cuda` to the example command line.
+In order to use **CUDA** add `--features cuda` to the example command line. If
+you have cuDNN installed, use `--features cudnn` for even more speedups.
 
 There are also some wasm examples for whisper and
 [llama2.c](https://github.com/karpathy/llama2.c). You can either build them with
@@ -23,7 +23,7 @@ num-traits = { workspace = true }
 intel-mkl-src = { workspace = true, optional = true }
 cudarc = { workspace = true, optional = true }
 half = { workspace = true, optional = true }
-image = { workspace = true, optional = true }
+image = { workspace = true }
 
 [dev-dependencies]
 anyhow = { workspace = true }
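A note on what `optional = true` was doing: for an optional dependency, Cargo derives an implicit `image` feature, and code touching the crate is typically compiled only when that feature is enabled, roughly as in the hypothetical sketch below. Dropping `optional = true` makes the dependency unconditional, so no gate (and no `--features image` flag) is needed.

```rust
// Hypothetical gating pattern required while `image` was optional; after this
// commit the dependency is unconditional and the cfg gate can go away.
#[cfg(feature = "image")]
fn load_image(path: &str) -> image::ImageResult<image::DynamicImage> {
    image::open(path)
}
```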
@@ -55,7 +55,3 @@ nccl = ["cuda", "cudarc/nccl", "dep:half"]
 [[example]]
 name = "llama_multiprocess"
 required-features = ["cuda", "nccl", "flash-attn"]
-
-[[example]]
-name = "stable-diffusion"
-required-features = ["image"]
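Since `image` is now a regular dependency, the implicit `image` feature no longer exists and the `required-features = ["image"]` stanza above is obsolete; removing the `[[example]]` block lets `cargo run --example stable-diffusion` build with no extra flags, matching the updated README.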