From 65e146c72d56a50b5a0b6abe670a19fb0c676604 Mon Sep 17 00:00:00 2001 From: Patrick von Platen Date: Wed, 23 Aug 2023 08:32:59 +0000 Subject: [PATCH 01/13] Add installation section --- README.md | 96 ++++++++++++++++++++++++++++++++++++++++++++++++++++--- 1 file changed, 91 insertions(+), 5 deletions(-) diff --git a/README.md b/README.md index 76e27a40..9e2a80b1 100644 --- a/README.md +++ b/README.md @@ -10,13 +10,99 @@ and ease of use. Try our online demos: [LLaMA2](https://huggingface.co/spaces/lmz/candle-llama2), [yolo](https://huggingface.co/spaces/lmz/candle-yolo). -```rust -let a = Tensor::randn(0f32, 1., (2, 3), &Device::Cpu)?; -let b = Tensor::randn(0f32, 1., (3, 4), &Device::Cpu)?; +## Installation -let c = a.matmul(&b)?; -println!("{c}"); +- *With Cuda support*: + +1. To install candle with Cuda support, first make sure that Cuda is correctly installed. +- `nvcc --version` should print your information about your Cuda compiler driver. +- `nvidia-smi --query-gpu=compute_cap --format=csv` should print your GPUs compute capability, e.g. something +like: ``` +compute_cap +8.9 +``` + +If any of the above commands errors out, please make sure to update your CUDA version. + +2. Create a new app and add [`candle-core`](https://github.com/huggingface/candle/tree/main/candle-core) with Cuda support + +```bash +cargo new myapp +cd myapp +``` + +Next make sure to add the `candle-core` crate with the cuda feature: + +``` +cargo add --git https://github.com/huggingface/candle.git candle-core --features "cuda" +``` + +Finally, run `cargo build` to make sure everything can be correctly built. + +``` +cargo run +``` + +Now you can run the example as shown in the next section! + +- *Without Cuda support*: + +Create a new app and add [`candle-core`](https://github.com/huggingface/candle/tree/main/candle-core) as follows: + +``` +cargo new myapp +cd myapp +cargo add --git https://github.com/huggingface/candle.git candle-core +``` + +Finally, run `cargo build` to make sure everything can be correctly built. + +``` +cargo run +``` + +## Get started + +Having installed `candle-core` as described in [Installation](#Installation), we can now +run a simple matrix multiplication. + +First, let's add the [`anyhow`](https://docs.rs/anyhow/latest/anyhow/) package to our app. + +``` +cd myapp +cargo add anyhow +``` + +Next, write the following to your `myapp/src/main.rs` file: + +```rust +use anyhow::Result; +use candle_core::{Device, Tensor}; + +fn main() -> Result<()> { + let a = Tensor::randn(0f32, 1., (2, 3), &Device::Cpu)?; + let b = Tensor::randn(0f32, 1., (3, 4), &Device::Cpu)?; + + let c = a.matmul(&b)?; + println!("{c}"); + Ok(()) +} +``` + +`cargo run` should display a tensor of shape `Tensor[[2, 4], f32]` + + +Having installed `candle` with Cuda support, you can create the tensors on GPU instead as follows: + +```diff +- let a = Tensor::randn(0f32, 1., (2, 3), &Device::Cpu)?; +- let b = Tensor::randn(0f32, 1., (3, 4), &Device::Cpu)?; ++ let a = Tensor::randn(0f32, 1., (2, 3), &Device::new_cuda(0)?)?; ++ let b = Tensor::randn(0f32, 1., (3, 4), &Device::new_cuda(0)?)?; +``` + +For more advanced examples, please have a look at the following sections. 
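For readers who want the GPU variant in one piece, this is roughly what `myapp/src/main.rs` looks like after applying the diff above (a sketch: it assumes `candle-core` was added with the `cuda` feature and that CUDA device `0` is present):

```rust
use anyhow::Result;
use candle_core::{Device, Tensor};

fn main() -> Result<()> {
    // Same tensors as in the example above, created on the first CUDA device instead of the CPU.
    let a = Tensor::randn(0f32, 1., (2, 3), &Device::new_cuda(0)?)?;
    let b = Tensor::randn(0f32, 1., (3, 4), &Device::new_cuda(0)?)?;

    let c = a.matmul(&b)?;
    println!("{c}");
    Ok(())
}
```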
## Check out our examples From d4968295a0b75f1adafcc19a7cfe15e8acf3d58d Mon Sep 17 00:00:00 2001 From: Patrick von Platen Date: Wed, 23 Aug 2023 08:37:08 +0000 Subject: [PATCH 02/13] improve --- README.md | 21 +++++++++++---------- 1 file changed, 11 insertions(+), 10 deletions(-) diff --git a/README.md b/README.md index 9e2a80b1..93537b13 100644 --- a/README.md +++ b/README.md @@ -12,9 +12,9 @@ and ease of use. Try our online demos: ## Installation -- *With Cuda support*: +- **With Cuda support**: -1. To install candle with Cuda support, first make sure that Cuda is correctly installed. +1. First, make sure that Cuda is correctly installed. - `nvcc --version` should print your information about your Cuda compiler driver. - `nvidia-smi --query-gpu=compute_cap --format=csv` should print your GPUs compute capability, e.g. something like: @@ -23,30 +23,30 @@ compute_cap 8.9 ``` -If any of the above commands errors out, please make sure to update your CUDA version. +If any of the above commands errors out, please make sure to update your Cuda version. 2. Create a new app and add [`candle-core`](https://github.com/huggingface/candle/tree/main/candle-core) with Cuda support +Start by creating a new cargo: + ```bash cargo new myapp cd myapp ``` -Next make sure to add the `candle-core` crate with the cuda feature: +Make sure to add the `candle-core` crate with the cuda feature: ``` cargo add --git https://github.com/huggingface/candle.git candle-core --features "cuda" ``` -Finally, run `cargo build` to make sure everything can be correctly built. +Run `cargo build` to make sure everything can be correctly built. ``` cargo run ``` -Now you can run the example as shown in the next section! - -- *Without Cuda support*: +**Without Cuda support**: Create a new app and add [`candle-core`](https://github.com/huggingface/candle/tree/main/candle-core) as follows: @@ -56,7 +56,7 @@ cd myapp cargo add --git https://github.com/huggingface/candle.git candle-core ``` -Finally, run `cargo build` to make sure everything can be correctly built. +Run `cargo build` to make sure everything can be correctly built. ``` cargo run @@ -67,7 +67,8 @@ cargo run Having installed `candle-core` as described in [Installation](#Installation), we can now run a simple matrix multiplication. -First, let's add the [`anyhow`](https://docs.rs/anyhow/latest/anyhow/) package to our app. +We will need the [`anyhow`](https://docs.rs/anyhow/latest/anyhow/) package for our example, +so let's add it to our app. ``` cd myapp From 34cb9f924fb2eba0ee2a206b47854e5d49d2f22d Mon Sep 17 00:00:00 2001 From: Patrick von Platen Date: Wed, 23 Aug 2023 08:40:23 +0000 Subject: [PATCH 03/13] improve --- README.md | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/README.md b/README.md index 93537b13..2ebdcd45 100644 --- a/README.md +++ b/README.md @@ -99,8 +99,9 @@ Having installed `candle` with Cuda support, you can create the tensors on GPU i ```diff - let a = Tensor::randn(0f32, 1., (2, 3), &Device::Cpu)?; - let b = Tensor::randn(0f32, 1., (3, 4), &Device::Cpu)?; -+ let a = Tensor::randn(0f32, 1., (2, 3), &Device::new_cuda(0)?)?; -+ let b = Tensor::randn(0f32, 1., (3, 4), &Device::new_cuda(0)?)?; ++ let device = Device::new_cuda(0)?; ++ let a = Tensor::randn(0f32, 1., (2, 3), &device)?; ++ let b = Tensor::randn(0f32, 1., (3, 4), &device)?; ``` For more advanced examples, please have a look at the following sections. 
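One nice side effect of hoisting the device into a single binding is that the example can be written against any `Device`, so the CPU/GPU choice is made in exactly one place. A minimal sketch of that idea (the `matmul_demo` helper is purely illustrative):

```rust
use anyhow::Result;
use candle_core::{Device, Tensor};

// Illustrative helper: the same matmul example, written against any device.
fn matmul_demo(device: &Device) -> Result<Tensor> {
    let a = Tensor::randn(0f32, 1., (2, 3), device)?;
    let b = Tensor::randn(0f32, 1., (3, 4), device)?;
    Ok(a.matmul(&b)?)
}

fn main() -> Result<()> {
    // Pick the device once; everything below stays unchanged.
    let device = Device::Cpu; // or `Device::new_cuda(0)?` when built with the "cuda" feature
    let c = matmul_demo(&device)?;
    println!("{c}");
    Ok(())
}
```

With this shape, moving the whole program to the GPU is a one-line change in `main`, which is exactly what the `Device::new_cuda(0)?` diff above boils down to.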
From b558d08b85f07a30cc29da00bcd2dcd2fddfc1e7 Mon Sep 17 00:00:00 2001 From: Patrick von Platen Date: Wed, 23 Aug 2023 08:42:47 +0000 Subject: [PATCH 04/13] improve --- README.md | 13 ++++++------- 1 file changed, 6 insertions(+), 7 deletions(-) diff --git a/README.md b/README.md index 2ebdcd45..c617b520 100644 --- a/README.md +++ b/README.md @@ -82,8 +82,10 @@ use anyhow::Result; use candle_core::{Device, Tensor}; fn main() -> Result<()> { - let a = Tensor::randn(0f32, 1., (2, 3), &Device::Cpu)?; - let b = Tensor::randn(0f32, 1., (3, 4), &Device::Cpu)?; + let device = Device::Cpu; + + let a = Tensor::randn(0f32, 1., (2, 3), &device)?; + let b = Tensor::randn(0f32, 1., (3, 4), &device)?; let c = a.matmul(&b)?; println!("{c}"); @@ -94,14 +96,11 @@ fn main() -> Result<()> { `cargo run` should display a tensor of shape `Tensor[[2, 4], f32]` -Having installed `candle` with Cuda support, you can create the tensors on GPU instead as follows: +Having installed `candle` with Cuda support, simply define the `device` to be on GPU: ```diff -- let a = Tensor::randn(0f32, 1., (2, 3), &Device::Cpu)?; -- let b = Tensor::randn(0f32, 1., (3, 4), &Device::Cpu)?; +- let device = Device::Cpu; + let device = Device::new_cuda(0)?; -+ let a = Tensor::randn(0f32, 1., (2, 3), &device)?; -+ let b = Tensor::randn(0f32, 1., (3, 4), &device)?; ``` For more advanced examples, please have a look at the following sections. From 7c0ca80d3a4c238543cc705400643bf05d474007 Mon Sep 17 00:00:00 2001 From: Patrick von Platen Date: Wed, 23 Aug 2023 08:52:53 +0000 Subject: [PATCH 05/13] move installation to book --- README.md | 62 +++------------------------ candle-book/src/guide/installation.md | 40 +++++++++++++++++ 2 files changed, 45 insertions(+), 57 deletions(-) diff --git a/README.md b/README.md index c617b520..9813a7f9 100644 --- a/README.md +++ b/README.md @@ -10,65 +10,13 @@ and ease of use. Try our online demos: [LLaMA2](https://huggingface.co/spaces/lmz/candle-llama2), [yolo](https://huggingface.co/spaces/lmz/candle-yolo). -## Installation - -- **With Cuda support**: - -1. First, make sure that Cuda is correctly installed. -- `nvcc --version` should print your information about your Cuda compiler driver. -- `nvidia-smi --query-gpu=compute_cap --format=csv` should print your GPUs compute capability, e.g. something -like: -``` -compute_cap -8.9 -``` - -If any of the above commands errors out, please make sure to update your Cuda version. - -2. Create a new app and add [`candle-core`](https://github.com/huggingface/candle/tree/main/candle-core) with Cuda support - -Start by creating a new cargo: - -```bash -cargo new myapp -cd myapp -``` - -Make sure to add the `candle-core` crate with the cuda feature: - -``` -cargo add --git https://github.com/huggingface/candle.git candle-core --features "cuda" -``` - -Run `cargo build` to make sure everything can be correctly built. - -``` -cargo run -``` - -**Without Cuda support**: - -Create a new app and add [`candle-core`](https://github.com/huggingface/candle/tree/main/candle-core) as follows: - -``` -cargo new myapp -cd myapp -cargo add --git https://github.com/huggingface/candle.git candle-core -``` - -Run `cargo build` to make sure everything can be correctly built. - -``` -cargo run -``` - ## Get started -Having installed `candle-core` as described in [Installation](#Installation), we can now -run a simple matrix multiplication. 
+Make sure that you have [`candle-core`](https://github.com/huggingface/candle/tree/main/candle-core) correctly installed as described in [**Installation**](https://huggingface.github.io/candle/guide/installation.html). -We will need the [`anyhow`](https://docs.rs/anyhow/latest/anyhow/) package for our example, -so let's add it to our app. +Let's see how to run a simple matrix multiplication. + +We will need the [`anyhow`](https://docs.rs/anyhow/latest/anyhow/) package for our example, so let's also add it to our app. ``` cd myapp @@ -103,7 +51,7 @@ Having installed `candle` with Cuda support, simply define the `device` to be on + let device = Device::new_cuda(0)?; ``` -For more advanced examples, please have a look at the following sections. +For more advanced examples, please have a look at the following section. ## Check out our examples diff --git a/candle-book/src/guide/installation.md b/candle-book/src/guide/installation.md index d2086e0c..69752391 100644 --- a/candle-book/src/guide/installation.md +++ b/candle-book/src/guide/installation.md @@ -1,5 +1,44 @@ # Installation +- **With Cuda support**: + +1. First, make sure that Cuda is correctly installed. +- `nvcc --version` should print your information about your Cuda compiler driver. +- `nvidia-smi --query-gpu=compute_cap --format=csv` should print your GPUs compute capability, e.g. something +like: +``` +compute_cap +8.9 +``` + +If any of the above commands errors out, please make sure to update your Cuda version. + +2. Create a new app and add [`candle-core`](https://github.com/huggingface/candle/tree/main/candle-core) with Cuda support + +Start by creating a new cargo: + +```bash +cargo new myapp +cd myapp +``` + +Make sure to add the `candle-core` crate with the cuda feature: + +``` +cargo add --git https://github.com/huggingface/candle.git candle-core --features "cuda" +``` + +Run `cargo build` to make sure everything can be correctly built. + +``` +cargo run +``` + +**Without Cuda support**: + +Create a new app and add [`candle-core`](https://github.com/huggingface/candle/tree/main/candle-core) as follows: + + Start by creating a new app: ```bash @@ -20,5 +59,6 @@ You can check everything works properly: cargo build ``` +**With mkl support** You can also see the `mkl` feature which could be interesting to get faster inference on CPU. [Using mkl](./advanced/mkl.md) From 7732bf62387d2370c1d01db39852b610bbbd385d Mon Sep 17 00:00:00 2001 From: Patrick von Platen Date: Wed, 23 Aug 2023 08:54:48 +0000 Subject: [PATCH 06/13] correct --- candle-book/src/guide/installation.md | 17 ++++------------- 1 file changed, 4 insertions(+), 13 deletions(-) diff --git a/candle-book/src/guide/installation.md b/candle-book/src/guide/installation.md index 69752391..31845a36 100644 --- a/candle-book/src/guide/installation.md +++ b/candle-book/src/guide/installation.md @@ -38,25 +38,16 @@ cargo run Create a new app and add [`candle-core`](https://github.com/huggingface/candle/tree/main/candle-core) as follows: - -Start by creating a new app: - -```bash +``` cargo new myapp cd myapp cargo add --git https://github.com/huggingface/candle.git candle-core ``` -At this point, candle will be built **without** CUDA support. -To get CUDA support use the `cuda` feature -```bash -cargo add --git https://github.com/huggingface/candle.git candle-core --features cuda +Finally, run `cargo build` to make sure everything can be correctly built. 
+ ``` - -You can check everything works properly: - -```bash -cargo build +cargo run ``` **With mkl support** From c8211fc4748fc69ef2af60aafd252d6c9592d5c9 Mon Sep 17 00:00:00 2001 From: Patrick von Platen Date: Wed, 23 Aug 2023 09:04:08 +0000 Subject: [PATCH 07/13] fix code snippets --- candle-book/src/guide/installation.md | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-) diff --git a/candle-book/src/guide/installation.md b/candle-book/src/guide/installation.md index 31845a36..58bf7054 100644 --- a/candle-book/src/guide/installation.md +++ b/candle-book/src/guide/installation.md @@ -6,7 +6,8 @@ - `nvcc --version` should print your information about your Cuda compiler driver. - `nvidia-smi --query-gpu=compute_cap --format=csv` should print your GPUs compute capability, e.g. something like: -``` + +```bash compute_cap 8.9 ``` @@ -30,7 +31,7 @@ cargo add --git https://github.com/huggingface/candle.git candle-core --features Run `cargo build` to make sure everything can be correctly built. -``` +```bash cargo run ``` @@ -38,7 +39,7 @@ cargo run Create a new app and add [`candle-core`](https://github.com/huggingface/candle/tree/main/candle-core) as follows: -``` +```bash cargo new myapp cd myapp cargo add --git https://github.com/huggingface/candle.git candle-core From 283f6c048d0fdd39b7bd67422481766882bd4857 Mon Sep 17 00:00:00 2001 From: Patrick von Platen Date: Wed, 23 Aug 2023 09:04:36 +0000 Subject: [PATCH 08/13] fix code snippets --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index 9813a7f9..12249d24 100644 --- a/README.md +++ b/README.md @@ -18,7 +18,7 @@ Let's see how to run a simple matrix multiplication. We will need the [`anyhow`](https://docs.rs/anyhow/latest/anyhow/) package for our example, so let's also add it to our app. -``` +```bash cd myapp cargo add anyhow ``` From 649202024c2ec8b771fd7c554f698dc2f58ba51b Mon Sep 17 00:00:00 2001 From: Patrick von Platen Date: Wed, 23 Aug 2023 09:05:07 +0000 Subject: [PATCH 09/13] fix code snippets --- candle-book/src/guide/installation.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/candle-book/src/guide/installation.md b/candle-book/src/guide/installation.md index 58bf7054..56d73ad3 100644 --- a/candle-book/src/guide/installation.md +++ b/candle-book/src/guide/installation.md @@ -25,7 +25,7 @@ cd myapp Make sure to add the `candle-core` crate with the cuda feature: -``` +```bash cargo add --git https://github.com/huggingface/candle.git candle-core --features "cuda" ``` @@ -47,7 +47,7 @@ cargo add --git https://github.com/huggingface/candle.git candle-core Finally, run `cargo build` to make sure everything can be correctly built. -``` +```bash cargo run ``` From 2c280007e8abad5eb4186ce0223fd7d60fd3fc58 Mon Sep 17 00:00:00 2001 From: Patrick von Platen Date: Wed, 23 Aug 2023 13:26:21 +0200 Subject: [PATCH 10/13] Apply suggestions from code review --- README.md | 14 ++------------ candle-book/src/guide/installation.md | 2 +- 2 files changed, 3 insertions(+), 13 deletions(-) diff --git a/README.md b/README.md index 12249d24..a1415535 100644 --- a/README.md +++ b/README.md @@ -15,21 +15,11 @@ and ease of use. Try our online demos: Make sure that you have [`candle-core`](https://github.com/huggingface/candle/tree/main/candle-core) correctly installed as described in [**Installation**](https://huggingface.github.io/candle/guide/installation.html). Let's see how to run a simple matrix multiplication. 
- -We will need the [`anyhow`](https://docs.rs/anyhow/latest/anyhow/) package for our example, so let's also add it to our app. - -```bash -cd myapp -cargo add anyhow -``` - -Next, write the following to your `myapp/src/main.rs` file: - +Write the following to your `myapp/src/main.rs` file: ```rust -use anyhow::Result; use candle_core::{Device, Tensor}; -fn main() -> Result<()> { +fn main() -> Result<(), Box> { let device = Device::Cpu; let a = Tensor::randn(0f32, 1., (2, 3), &device)?; diff --git a/candle-book/src/guide/installation.md b/candle-book/src/guide/installation.md index 56d73ad3..0b429566 100644 --- a/candle-book/src/guide/installation.md +++ b/candle-book/src/guide/installation.md @@ -45,7 +45,7 @@ cd myapp cargo add --git https://github.com/huggingface/candle.git candle-core ``` -Finally, run `cargo build` to make sure everything can be correctly built. +Finally, run `cargo run` to make sure everything can be correctly built. ```bash cargo run From c5e43ad0ab860066cf32b08892b70c2733a95802 Mon Sep 17 00:00:00 2001 From: Patrick von Platen Date: Wed, 23 Aug 2023 13:27:29 +0200 Subject: [PATCH 11/13] Apply suggestions from code review --- candle-book/src/guide/installation.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/candle-book/src/guide/installation.md b/candle-book/src/guide/installation.md index 0b429566..467b477a 100644 --- a/candle-book/src/guide/installation.md +++ b/candle-book/src/guide/installation.md @@ -32,7 +32,7 @@ cargo add --git https://github.com/huggingface/candle.git candle-core --features Run `cargo build` to make sure everything can be correctly built. ```bash -cargo run +cargo build ``` **Without Cuda support**: @@ -45,10 +45,10 @@ cd myapp cargo add --git https://github.com/huggingface/candle.git candle-core ``` -Finally, run `cargo run` to make sure everything can be correctly built. +Finally, run `cargo build` to make sure everything can be correctly built. ```bash -cargo run +cargo build ``` **With mkl support** From c98d3cfd8b3a3d0b505f3b57a5843b442c32fc59 Mon Sep 17 00:00:00 2001 From: Patrick von Platen Date: Wed, 23 Aug 2023 13:31:54 +0200 Subject: [PATCH 12/13] Update candle-book/src/guide/installation.md --- candle-book/src/guide/installation.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/candle-book/src/guide/installation.md b/candle-book/src/guide/installation.md index 467b477a..ac5d6d3f 100644 --- a/candle-book/src/guide/installation.md +++ b/candle-book/src/guide/installation.md @@ -1,6 +1,6 @@ # Installation -- **With Cuda support**: +**With Cuda support**: 1. First, make sure that Cuda is correctly installed. - `nvcc --version` should print your information about your Cuda compiler driver. From 1f58bdbb1d2128ab2bef37621e218272de7ba4fe Mon Sep 17 00:00:00 2001 From: Patrick von Platen Date: Wed, 23 Aug 2023 13:33:45 +0200 Subject: [PATCH 13/13] Apply suggestions from code review --- README.md | 2 +- candle-book/src/guide/installation.md | 4 ++-- 2 files changed, 3 insertions(+), 3 deletions(-) diff --git a/README.md b/README.md index a1415535..6b95a587 100644 --- a/README.md +++ b/README.md @@ -31,7 +31,7 @@ fn main() -> Result<(), Box> { } ``` -`cargo run` should display a tensor of shape `Tensor[[2, 4], f32]` +`cargo run` should display a tensor of shape `Tensor[[2, 4], f32]`. 
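Putting the review suggestions together, the complete `main.rs` at this stage reads roughly as follows, with the boxed error type written out in full (presumably `Box<dyn std::error::Error>`):

```rust
use candle_core::{Device, Tensor};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // CPU by default; with the "cuda" feature this becomes
    // `let device = Device::new_cuda(0)?;` as shown earlier.
    let device = Device::Cpu;

    let a = Tensor::randn(0f32, 1., (2, 3), &device)?;
    let b = Tensor::randn(0f32, 1., (3, 4), &device)?;

    let c = a.matmul(&b)?;
    println!("{c}");
    Ok(())
}
```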
Having installed `candle` with Cuda support, simply define the `device` to be on GPU: diff --git a/candle-book/src/guide/installation.md b/candle-book/src/guide/installation.md index ac5d6d3f..394cef35 100644 --- a/candle-book/src/guide/installation.md +++ b/candle-book/src/guide/installation.md @@ -3,7 +3,7 @@ **With Cuda support**: 1. First, make sure that Cuda is correctly installed. -- `nvcc --version` should print your information about your Cuda compiler driver. +- `nvcc --version` should print information about your Cuda compiler driver. - `nvidia-smi --query-gpu=compute_cap --format=csv` should print your GPUs compute capability, e.g. something like: @@ -14,7 +14,7 @@ compute_cap If any of the above commands errors out, please make sure to update your Cuda version. -2. Create a new app and add [`candle-core`](https://github.com/huggingface/candle/tree/main/candle-core) with Cuda support +2. Create a new app and add [`candle-core`](https://github.com/huggingface/candle/tree/main/candle-core) with Cuda support. Start by creating a new cargo:
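Creating the app is the same `cargo new myapp` / `cd myapp` sequence shown earlier, followed by `cargo add --git https://github.com/huggingface/candle.git candle-core --features "cuda"`. Once it builds, a quick smoke test along these lines confirms the GPU path works end to end (a minimal sketch; the CPU fallback via `unwrap_or` is an illustration rather than something the guide prescribes):

```rust
use candle_core::{Device, Tensor};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Prefer CUDA device 0; fall back to the CPU so the check still runs
    // on a machine without a working GPU setup.
    let device = Device::new_cuda(0).unwrap_or(Device::Cpu);

    let a = Tensor::randn(0f32, 1., (2, 3), &device)?;
    let b = Tensor::randn(0f32, 1., (3, 4), &device)?;
    println!("{}", a.matmul(&b)?);
    Ok(())
}
```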