Starting the book.

This commit is contained in:
Nicolas Patry
2023-07-27 12:41:15 +02:00
parent 75e0448114
commit 6242a1470e
8 changed files with 96 additions and 4 deletions

View File

@@ -24,6 +24,6 @@ jobs:
curl -sSL $url | tar -xz --directory=bin
echo "$(pwd)/bin" >> $GITHUB_PATH
- name: Run tests
run: cd candle-book && mdbook test
run: cd candle-book && cargo build && mdbook test -L ../target/debug/deps/

View File

@@ -48,6 +48,8 @@ trunk serve --release --public-url /candle-llama2/ --port 8081
And then browse to
[http://localhost:8081/candle-llama2](http://localhost:8081/candle-llama2).
<!--- ANCHOR: features --->
## Features
- Simple syntax, looks and feels like PyTorch.
@@ -60,8 +62,11 @@ And then browse to
- Embed user-defined ops/kernels, such as [flash-attention
v2](https://github.com/LaurentMazare/candle/blob/89ba005962495f2bfbda286e185e9c3c7f5300a3/candle-flash-attn/src/lib.rs#L152).
<!--- ANCHOR_END: features --->
## How to use?
<!--- ANCHOR: cheatsheet --->
Cheatsheet:
| | Using PyTorch | Using Candle |
@@ -76,6 +81,8 @@ Cheatsheet:
| Saving | `torch.save({"A": A}, "model.bin")` | `tensor.save_safetensors("A", "model.safetensors")?` |
| Loading | `weights = torch.load("model.bin")` | TODO (see the examples for now) |
<!--- ANCHOR_END: cheatsheet --->
## Structure

View File

@@ -1 +1,6 @@
# Introduction
{{#include ../../README.md:features}}
This book will introduce, step by step, how to use `candle`.

View File

@@ -6,13 +6,13 @@
- [Installation](guide/installation.md)
- [Hello World - MNIST](guide/hello_world.md)
- [PyTorch cheatsheet](guide/hello_world.md)
- [PyTorch cheatsheet](guide/cheatsheet.md)
# Reference Guide
- [Running a model](inference/README.md)
- [Serialization](inference/serialization.md)
- [Using the hub](inference/hub.md)
- [Serialization](inference/serialization.md)
- [Advanced Cuda usage](inference/cuda/README.md)
- [Writing a custom kernel](inference/cuda/writing.md)
- [Porting a custom kernel](inference/cuda/porting.md)
@@ -24,3 +24,4 @@
- [Training](training/README.md)
- [MNIST](training/mnist.md)
- [Fine-tuning](training/finetuning.md)
- [Using MKL](advanced/mkl.md)

View File

@@ -0,0 +1 @@
# Using MKL

View File

@@ -0,0 +1,3 @@
# PyTorch cheatsheet
{{#include ../../../README.md:cheatsheet}}

View File

@@ -1 +1,53 @@
# PyTorch cheatsheet
# Hello world!
We will now create the hello world of the ML world: a model capable of solving the MNIST dataset.
Open `src/main.rs` and fill in with these contents:
```rust
# extern crate candle;
use candle::{DType, Device, Result, Tensor};

struct Model {
    first: Tensor,
    second: Tensor,
}

impl Model {
    fn forward(&self, image: &Tensor) -> Result<Tensor> {
        let x = image.matmul(&self.first)?;
        let x = x.relu()?;
        x.matmul(&self.second)
    }
}

fn main() -> Result<()> {
    // Use Device::new_cuda(0)? to run on the GPU.
    let device = Device::Cpu;

    let first = Tensor::zeros((784, 100), DType::F32, &device)?;
    let second = Tensor::zeros((100, 10), DType::F32, &device)?;
    let model = Model { first, second };

    let dummy_image = Tensor::zeros((1, 784), DType::F32, &device)?;

    let digit = model.forward(&dummy_image)?;
    println!("Digit {digit:?}");
    Ok(())
}
```
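The forward pass above chains two matrix multiplications with a ReLU in between, taking a `(1, 784)` image to `(1, 100)` hidden activations and then to `(1, 10)` logits. As a sanity check of that shape arithmetic, here is a minimal sketch in plain Rust with no candle dependency; the naive `matmul` and `relu` helpers are hypothetical stand-ins for the candle ops, written only for illustration:

```rust
// Multiply an (m, k) matrix by a (k, n) matrix, both stored row-major as flat slices.
fn matmul(a: &[f32], b: &[f32], m: usize, k: usize, n: usize) -> Vec<f32> {
    let mut out = vec![0f32; m * n];
    for i in 0..m {
        for j in 0..n {
            for p in 0..k {
                out[i * n + j] += a[i * k + p] * b[p * n + j];
            }
        }
    }
    out
}

// Element-wise ReLU: max(x, 0).
fn relu(x: &[f32]) -> Vec<f32> {
    x.iter().map(|v| v.max(0.0)).collect()
}

fn main() {
    let image = vec![0f32; 784];        // (1, 784) input
    let first = vec![0f32; 784 * 100];  // (784, 100) weights
    let second = vec![0f32; 100 * 10];  // (100, 10) weights

    let x = relu(&matmul(&image, &first, 1, 784, 100)); // (1, 100)
    let digit = matmul(&x, &second, 1, 100, 10);        // (1, 10)
    println!("output length = {}", digit.len());        // 10 logits, one per digit
}
```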
Everything should now run with:
```bash
cargo run --release
```
Now that we have the dummy code running, we can move on to more advanced topics:
- [For PyTorch users](./guide/cheatsheet.md)
- [Running existing models](./inference/README.md)
- [Training models](./training/README.md)

View File

@@ -1 +1,24 @@
# Installation
Start by creating a new app:
```bash
cargo new myapp
cd myapp
cargo add --git https://github.com/LaurentMazare/candle.git candle
```
At this point, candle will be built **without** CUDA support.
To get CUDA support, enable the `cuda` feature:
```bash
cargo add --git https://github.com/LaurentMazare/candle.git candle --features cuda
```
You can check that everything builds properly:
```bash
cargo build
```
You can also enable the `mkl` feature for faster inference on CPU: see [Using MKL](./advanced/mkl.md).