Expanding a bit the README

Nicolas Patry
2023-07-10 12:51:37 +02:00
parent 9ce0f1c010
commit 868743b8b9
3 changed files with 82 additions and 1 deletions


@@ -1,8 +1,53 @@
# candle
Minimalist ML framework for Rust
```rust
use candle::{DType, Device, Tensor};

fn main() -> Result<(), candle::Error> {
    let a = Tensor::zeros((2, 3), DType::F32, &Device::Cpu)?;
    let b = Tensor::zeros((3, 4), DType::F32, &Device::Cpu)?;
    let c = a.matmul(&b)?;
    Ok(())
}
```
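To make concrete what `a.matmul(&b)` computes for these shapes, here is a dependency-free sketch of the same (2, 3) × (3, 4) product in plain Rust. The `matmul` helper below is a hypothetical illustration of the semantics, not candle's implementation:

```rust
// Naive matrix multiply: a is (m, k), b is (k, n), result c is (m, n),
// with c[i][j] = sum over p of a[i][p] * b[p][j].
fn matmul(a: &[Vec<f32>], b: &[Vec<f32>]) -> Vec<Vec<f32>> {
    let (m, k, n) = (a.len(), b.len(), b[0].len());
    let mut c = vec![vec![0.0f32; n]; m];
    for i in 0..m {
        for p in 0..k {
            for j in 0..n {
                c[i][j] += a[i][p] * b[p][j];
            }
        }
    }
    c
}

fn main() {
    let a = vec![vec![0.0f32; 3]; 2]; // zeros, shape (2, 3)
    let b = vec![vec![0.0f32; 4]; 3]; // zeros, shape (3, 4)
    let c = matmul(&a, &b);
    // Multiplying a (2, 3) matrix by a (3, 4) matrix yields shape (2, 4).
    assert_eq!((c.len(), c[0].len()), (2, 4));
}
```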
## Features
- Simple syntax (looks and feels like PyTorch)
- CPU and CUDA backends (and M1 support)
- Serverless (on CPU): small and fast deployments
- Model training
- Distributed computing (NCCL)
- Models out of the box (Llama, Whisper, Falcon, ...)
- Emphasis on enabling users to use custom ops/kernels
## Structure
- [candle-core](./candle-core): Core ops, devices, and `Tensor` struct definition
- [candle-nn](./candle-nn/): Facilities to build real models
- [candle-examples](./candle-examples/): Real-world like examples on how to use the library in real settings
- [candle-kernels](./candle-kernels/): CUDA custom kernels
## How to use?
Check out our [examples](./candle-examples/examples/):
- [Whisper](./candle-examples/examples/whisper/)
- [Llama](./candle-examples/examples/llama/)
- [Bert](./candle-examples/examples/bert/) (Useful for sentence embeddings)
- [Falcon](./candle-examples/examples/falcon/)
## FAQ
- Why Candle?
Candle stems from the need to reduce binary size in order to *enable serverless*,
which becomes possible by keeping the whole engine much smaller than PyTorch's very large library footprint.
And simply *removing Python* from production workloads.
Python can really add overhead in more complex workflows and the [GIL](https://www.backblaze.com/blog/the-python-gil-past-present-and-future/) is a notorious source of headaches.
Rust is cool, and a lot of the HF ecosystem already has Rust crates, such as [safetensors](https://github.com/huggingface/safetensors) and [tokenizers](https://github.com/huggingface/tokenizers).
### Missing symbols when compiling with the mkl feature.
If you get some missing symbols when compiling binaries/tests using the mkl


@@ -1,3 +1,38 @@
//! ML framework for Rust
//!
//! ```rust
//! use candle::{Tensor, DType, Device};
//! # use candle::Error;
//! # fn main() -> Result<(), Error>{
//!
//! let a = Tensor::zeros((2, 3), DType::F32, &Device::Cpu)?;
//! let b = Tensor::zeros((3, 4), DType::F32, &Device::Cpu)?;
//!
//! let c = a.matmul(&b)?;
//! # Ok(())}
//! ```
//!
//! ## Features
//!
//! - Simple syntax (looks and feels like PyTorch)
//! - CPU and CUDA backends (and M1 support)
//! - Serverless (on CPU): small and fast deployments
//! - Model training
//! - Distributed computing (NCCL)
//! - Models out of the box (Llama, Whisper, Falcon, ...)
//!
//! ## FAQ
//!
//! - Why Candle?
//!
//! Candle stems from the need to reduce binary size in order to *enable serverless*,
//! which becomes possible by keeping the whole engine much smaller than PyTorch's very large library footprint.
//!
//! And simply *removing Python* from production workloads.
//! Python can really add overhead in more complex workflows and the [GIL](https://www.backblaze.com/blog/the-python-gil-past-present-and-future/) is a notorious source of headaches.
//!
//! Rust is cool, and a lot of the HF ecosystem already has Rust crates, such as [safetensors](https://github.com/huggingface/safetensors) and [tokenizers](https://github.com/huggingface/tokenizers).
mod backprop;
mod conv;
mod cpu_backend;


@@ -13,6 +13,7 @@ readme = "README.md"
[lib]
name = "candle"
crate-type = ["cdylib"]
doc = false
[dependencies]
candle = { path = "../candle-core", default-features=false }