Merge pull request #150 from LaurentMazare/add_cheatsheet
Adding cheatsheet + expand on other ML frameworks.
README.md | 37
@@ -27,6 +27,21 @@ let c = a.matmul(&b)?;
## How to use?
Cheatsheet:
|            | Using PyTorch                            | Using Candle                                                       |
|------------|------------------------------------------|--------------------------------------------------------------------|
| Creation   | `torch.Tensor([[1, 2], [3, 4]])`         | `Tensor::new(&[[1f32, 2.], [3., 4.]], &Device::Cpu)?`              |
| Indexing   | `tensor[:, :4]`                          | `tensor.i((.., ..4))?`                                             |
| Operations | `tensor.view((2, 2))`                    | `tensor.reshape((2, 2))?`                                          |
| Operations | `a.matmul(b)`                            | `a.matmul(&b)?`                                                    |
| Arithmetic | `a + b`                                  | `&a + &b`                                                          |
| Device     | `tensor.to(device="cuda")`               | `tensor.to_device(&Device::new_cuda(0)?)?`                         |
| Dtype      | `tensor.to(dtype=torch.float16)`         | `tensor.to_dtype(DType::F16)?`                                     |
| Saving     | `torch.save({"A": A}, "model.bin")`      | `tensor.save_safetensors("A", "model.safetensors")?`               |
| Loading    | `weights = torch.load("model.bin")`      | TODO (see the examples for now)                                    |
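As a minimal sketch, here is how a few of these cheatsheet rows combine into a runnable program. This assumes the crate is imported as `candle` (newer versions expose it as `candle_core`) and that the `IndexOp` trait is in scope for `.i()` indexing:

```rust
use candle::{DType, Device, IndexOp, Tensor};

fn main() -> candle::Result<()> {
    // Creation: 2x2 f32 tensors on the CPU.
    let a = Tensor::new(&[[1f32, 2.], [3., 4.]], &Device::Cpu)?;
    let b = Tensor::new(&[[5f32, 6.], [7., 8.]], &Device::Cpu)?;

    // Indexing: all rows, first column (needs the IndexOp trait in scope).
    let col = a.i((.., ..1))?;

    // Operations: matmul takes its right-hand side by reference.
    let c = a.matmul(&b)?;

    // Arithmetic: element-wise ops work on references and return a Result.
    let d = (&a + &b)?;

    // Dtype conversion.
    let e = c.to_dtype(DType::F16)?;

    // Saving: write a single named tensor to a safetensors file.
    a.save_safetensors("A", "model.safetensors")?;

    println!("{:?} {:?} {:?}", col.shape(), d.shape(), e.shape());
    Ok(())
}
```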
Check out our [examples](./candle-examples/examples/):
- [Whisper](./candle-examples/examples/whisper/)
@@ -38,16 +53,34 @@ Check out our [examples](./candle-examples/examples/):
## FAQ
### Why Candle?
Candle stems from the need to reduce binary size in order to *enable serverless*
deployments, which is possible by making the whole engine much smaller than PyTorch's
very large library volume. This makes it much faster to spin up runtimes on a cluster.
Secondly, it's about simply *removing Python* from production workloads:
Python can add real overhead in more complex workflows, and the [GIL](https://www.backblaze.com/blog/the-python-gil-past-present-and-future/) is a notorious source of headaches.
Rust is cool, and a lot of the HF ecosystem already has Rust crates, such as [safetensors](https://github.com/huggingface/safetensors) and [tokenizers](https://github.com/huggingface/tokenizers).
### Other ML frameworks
- [dfdx](https://github.com/coreylowman/dfdx) is a formidable crate, with shapes being included
  in types. This prevents a lot of headaches by getting the compiler to complain about shape mismatches right off the bat.
  However, we found that some features still require nightly, and writing code can be a bit daunting for non-Rust experts.

  We're leveraging and contributing to other core crates for the runtime, so hopefully both crates can benefit from each
  other.
- [burn](https://github.com/burn-rs/burn) is a general crate that can leverage multiple backends so you can choose the best
  engine for your workload.
- [tch-rs](https://github.com/LaurentMazare/tch-rs.git) provides bindings to the torch library in Rust. Extremely versatile, but it
  brings the entire torch library into the runtime. The main contributor of `tch-rs` is also involved in the development
  of `candle`.
### Missing symbols when compiling with the mkl feature.
If you get some missing symbols when compiling binaries/tests using the mkl feature, this is likely due to a missing linker flag needed to enable the mkl library.
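A minimal sketch of the usual fix, assuming `intel-mkl-src` is already declared as a dependency of your binary crate: reference it at the top of `main.rs` so the linker keeps the MKL symbols.

```rust
// Sketch: force-link the Intel MKL symbols by referencing the
// intel-mkl-src crate (assumed to be declared in Cargo.toml).
extern crate intel_mkl_src;

fn main() {
    // ... your candle code using mkl-accelerated ops ...
}
```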