## Using ONNX models in Candle
This example demonstrates how to run [ONNX](https://github.com/onnx/onnx)-based LLM models in Candle.
Currently, this example only supports SmolLM-135M.
You can run the example with the following command:
```bash
cargo run --example onnx-llm --features onnx
```
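
For reference, the sketch below shows the general `candle_onnx` flow this kind of example builds on: load the ONNX graph, feed named input tensors, and read the named outputs. The model path and prompt tokens are placeholders, and a real SmolLM run additionally needs the tokenizer plus the extra inputs (attention mask, position ids, KV cache) defined by the model config, so treat this as a minimal sketch rather than the example's actual implementation.

```rust
use std::collections::HashMap;

use candle::{Device, Tensor};

fn main() -> candle::Result<()> {
    // Load the ONNX graph from disk. "model.onnx" is a placeholder path;
    // the real example fetches the SmolLM-135M ONNX weights instead.
    let model = candle_onnx::read_file("model.onnx")?;
    let graph = model.graph.as_ref().expect("no graph in ONNX file");

    // Toy prompt of token ids with a batch dimension. A real run would
    // produce these with the model's tokenizer.
    let token_ids = Tensor::new(&[[1u32, 2, 3, 4]], &Device::Cpu)?;
    let mut inputs = HashMap::new();
    inputs.insert(graph.input[0].name.to_string(), token_ids);

    // Evaluate the graph; outputs are keyed by the names declared in it.
    let outputs = candle_onnx::simple_eval(&model, inputs)?;
    let logits = outputs
        .get(&graph.output[0].name)
        .expect("missing model output");
    println!("output shape: {:?}", logits.shape());
    Ok(())
}
```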