From 1f58bdbb1d2128ab2bef37621e218272de7ba4fe Mon Sep 17 00:00:00 2001
From: Patrick von Platen
Date: Wed, 23 Aug 2023 13:33:45 +0200
Subject: [PATCH] Apply suggestions from code review

---
 README.md                             | 2 +-
 candle-book/src/guide/installation.md | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/README.md b/README.md
index a1415535..6b95a587 100644
--- a/README.md
+++ b/README.md
@@ -31,7 +31,7 @@ fn main() -> Result<(), Box> {
 }
 ```
 
-`cargo run` should display a tensor of shape `Tensor[[2, 4], f32]`
+`cargo run` should display a tensor of shape `Tensor[[2, 4], f32]`.
 
 Having installed `candle` with Cuda support, simply define the `device` to be on GPU:
 
diff --git a/candle-book/src/guide/installation.md b/candle-book/src/guide/installation.md
index ac5d6d3f..394cef35 100644
--- a/candle-book/src/guide/installation.md
+++ b/candle-book/src/guide/installation.md
@@ -3,7 +3,7 @@
 **With Cuda support**:
 
 1. First, make sure that Cuda is correctly installed.
-- `nvcc --version` should print your information about your Cuda compiler driver.
+- `nvcc --version` should print information about your Cuda compiler driver.
 - `nvidia-smi --query-gpu=compute_cap --format=csv` should print your GPUs compute capability, e.g. something like:
 
@@ -14,7 +14,7 @@ compute_cap
 
 If any of the above commands errors out, please make sure to update your Cuda version.
 
-2. Create a new app and add [`candle-core`](https://github.com/huggingface/candle/tree/main/candle-core) with Cuda support
+2. Create a new app and add [`candle-core`](https://github.com/huggingface/candle/tree/main/candle-core) with Cuda support.
 
 Start by creating a new cargo: