Merge pull request #561 from patrickvonplaten/add_installation
Improve installation section and "get started"

Changed: README.md

@@ -10,14 +10,39 @@ and ease of use. Try our online demos:
[LLaMA2](https://huggingface.co/spaces/lmz/candle-llama2),
[yolo](https://huggingface.co/spaces/lmz/candle-yolo).

## Get started

Make sure that you have [`candle-core`](https://github.com/huggingface/candle/tree/main/candle-core) correctly installed as described in [**Installation**](https://huggingface.github.io/candle/guide/installation.html).

Let's see how to run a simple matrix multiplication.
Write the following to your `myapp/src/main.rs` file:

```rust
use candle_core::{Device, Tensor};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let device = Device::Cpu;

    let a = Tensor::randn(0f32, 1., (2, 3), &device)?;
    let b = Tensor::randn(0f32, 1., (3, 4), &device)?;

    let c = a.matmul(&b)?;
    println!("{c}");
    Ok(())
}
```

`cargo run` should display a tensor of shape `Tensor[[2, 4], f32]`.
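
The exact layout of the printed tensor is up to candle's `Display` implementation, but you should see the random values followed by the shape line, roughly like this (the values below are illustrative only, yours will differ):

```
[[ 0.6318, -0.0766,  1.3910,  0.1060],
 [-0.4549,  0.9921,  0.1285, -0.8507]]
Tensor[[2, 4], f32]
```
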
Having installed `candle` with Cuda support, simply define the `device` to be on GPU:

```diff
- let device = Device::Cpu;
+ let device = Device::new_cuda(0)?;
```
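
Put together, here is a minimal sketch of the GPU variant of the program above (it simply applies the diff; it assumes candle was built with the `cuda` feature):

```rust
use candle_core::{Device, Tensor};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Run on the first CUDA GPU instead of the CPU.
    let device = Device::new_cuda(0)?;

    let a = Tensor::randn(0f32, 1., (2, 3), &device)?;
    let b = Tensor::randn(0f32, 1., (3, 4), &device)?;

    let c = a.matmul(&b)?;
    println!("{c}");
    Ok(())
}
```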

For more advanced examples, please have a look at the following section.

## Check out our examples

Check out our [examples](./candle-examples/examples/):

@@ -1,6 +1,43 @@

# Installation

**With Cuda support**:

1. First, make sure that Cuda is correctly installed.
- `nvcc --version` should print information about your Cuda compiler driver.
- `nvidia-smi --query-gpu=compute_cap --format=csv` should print your GPU's compute capability, e.g. something like:

```bash
compute_cap
8.9
```
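
For reference, `nvcc --version` typically prints something like the following (the copyright and release numbers will vary with your install):

```bash
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Cuda compilation tools, release 12.2, V12.2.140
```
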

If any of the above commands errors out, please make sure to update your Cuda version.

2. Create a new app and add [`candle-core`](https://github.com/huggingface/candle/tree/main/candle-core) with Cuda support.

Start by creating a new cargo project:

```bash
cargo new myapp
cd myapp
```

Make sure to add the `candle-core` crate with the cuda feature:

```bash
cargo add --git https://github.com/huggingface/candle.git candle-core --features "cuda"
```
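
This command edits `Cargo.toml` for you; the resulting dependency entry should look roughly like the following (a sketch only, as the exact fields `cargo add` writes can differ by cargo version):

```toml
[dependencies]
candle-core = { git = "https://github.com/huggingface/candle.git", features = ["cuda"] }
```
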

Run `cargo build` to make sure everything can be correctly built.

```bash
|
||||||
|
cargo build
|
||||||
|
```
|
||||||
|
|
||||||
|
**Without Cuda support**:
|
||||||
|
|
||||||
|
Create a new app and add [`candle-core`](https://github.com/huggingface/candle/tree/main/candle-core) as follows:
|
||||||
|
|
||||||
```bash
cargo new myapp
cd myapp
cargo add --git https://github.com/huggingface/candle.git candle-core
```

Finally, run `cargo build` to make sure everything can be correctly built.

```bash
cargo build
```

**With mkl support**

You can also enable the `mkl` feature, which can speed up inference on CPU. See [Using mkl](./advanced/mkl.md).
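
Assuming `candle-core` exposes `mkl` as a cargo feature in the same way as `cuda` above (check the linked page for the exact setup), enabling it would follow the same pattern:

```bash
cargo add --git https://github.com/huggingface/candle.git candle-core --features mkl
```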