## Using ONNX models in Candle

This example demonstrates how to run [ONNX](https://github.com/onnx/onnx) based LLM models in Candle. It currently implements only SmolLM-135M.

You can run the example with the following command:

```bash
cargo run --example onnx-llm --features onnx
```
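As a rough sketch of what such an example does under the hood, the `candle-onnx` crate can load an ONNX graph with `read_file` and run it with `simple_eval`. Note this is an illustrative sketch, not the example's actual code: the model path and the tensor names `input_ids` and `logits` are assumptions and vary by model.

```rust
use std::collections::HashMap;

use candle_core::{Device, Tensor};

fn main() -> anyhow::Result<()> {
    // Load the ONNX graph from disk (the path here is hypothetical).
    let model = candle_onnx::read_file("model.onnx")?;

    // Build the input map; "input_ids" is an assumed input name.
    let input_ids = Tensor::new(&[[1u32, 2, 3]], &Device::Cpu)?;
    let mut inputs = HashMap::new();
    inputs.insert("input_ids".to_string(), input_ids);

    // Evaluate the graph; "logits" is an assumed output name.
    let outputs = candle_onnx::simple_eval(&model, inputs)?;
    if let Some(logits) = outputs.get("logits") {
        println!("logits shape: {:?}", logits.shape());
    }
    Ok(())
}
```

For an autoregressive LLM, this evaluation would run once per generated token, feeding sampled token ids back into the input.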