mirror of
https://github.com/huggingface/candle.git
synced 2025-06-18 11:37:11 +00:00
Starting the book.
This commit is contained in:
3
candle-book/src/guide/cheatsheet.md
Normal file
@@ -0,0 +1,3 @@
# PyTorch cheatsheet

{{#include ../../../README.md:cheatsheet}}
@@ -1 +1,53 @@
# PyTorch cheatsheet
# Hello world!

We will now create the hello world of the ML world: building a model capable of solving the MNIST dataset.

Open `src/main.rs` and fill in with these contents:

```rust
# extern crate candle;
use candle::{DType, Device, Result, Tensor};

struct Model {
    first: Tensor,
    second: Tensor,
}

impl Model {
    fn forward(&self, image: &Tensor) -> Result<Tensor> {
        let x = image.matmul(&self.first)?;
        let x = x.relu()?;
        x.matmul(&self.second)
    }
}

fn main() -> Result<()> {
    // Use Device::new_cuda(0)?; to use the GPU.
    let device = Device::Cpu;

    let first = Tensor::zeros((784, 100), DType::F32, &device)?;
    let second = Tensor::zeros((100, 10), DType::F32, &device)?;
    let model = Model { first, second };

    let dummy_image = Tensor::zeros((1, 784), DType::F32, &device)?;

    let digit = model.forward(&dummy_image)?;
    println!("Digit: {digit:?}");
    Ok(())
}
```
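To make the shapes in this forward pass concrete, here is a plain-Rust sketch of the same computation using nested `Vec`s instead of candle tensors: a `(1, 784)` image times a `(784, 100)` matrix, then ReLU, then times a `(100, 10)` matrix, yielding one logit per digit. This is only an illustration of the underlying math, not how candle implements it.

```rust
// Naive matrix multiply: (m, k) x (k, n) -> (m, n).
fn matmul(a: &[Vec<f32>], b: &[Vec<f32>]) -> Vec<Vec<f32>> {
    let (m, k, n) = (a.len(), b.len(), b[0].len());
    let mut out = vec![vec![0.0; n]; m];
    for i in 0..m {
        for j in 0..n {
            for l in 0..k {
                out[i][j] += a[i][l] * b[l][j];
            }
        }
    }
    out
}

// Element-wise ReLU: clamp negatives to zero.
fn relu(x: &mut [Vec<f32>]) {
    for row in x {
        for v in row {
            *v = v.max(0.0);
        }
    }
}

fn main() {
    let image = vec![vec![0.0f32; 784]; 1]; // (1, 784)
    let first = vec![vec![0.0f32; 100]; 784]; // (784, 100)
    let second = vec![vec![0.0f32; 10]; 100]; // (100, 10)

    let mut hidden = matmul(&image, &first); // (1, 100)
    relu(&mut hidden);
    let logits = matmul(&hidden, &second); // (1, 10): one logit per digit
    println!("logits shape: ({}, {})", logits.len(), logits[0].len());
}
```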

Everything should now run with:

```bash
cargo run --release
```

Now that we have the running dummy code we can get to more advanced topics:

- [For PyTorch users](./guide/cheatsheet.md)
- [Running existing models](./inference/README.md)
- [Training models](./training/README.md)
@@ -1 +1,24 @@
# Installation

Start by creating a new app:

```bash
cargo new myapp
cd myapp
cargo add --git https://github.com/LaurentMazare/candle.git candle
```

At this point, candle will be built **without** CUDA support.
To get CUDA support, use the `cuda` feature:

```bash
cargo add --git https://github.com/LaurentMazare/candle.git candle --features cuda
```

You can check that everything works properly:

```bash
cargo build
```

You can also look at the `mkl` feature, which can provide faster inference on CPU: [Using mkl](./advanced/mkl.md)