diff --git a/candle-examples/examples/chatglm/README.md b/candle-examples/examples/chatglm/README.md
new file mode 100644
index 00000000..a139c1a9
--- /dev/null
+++ b/candle-examples/examples/chatglm/README.md
@@ -0,0 +1,13 @@
+# candle-chatglm
+
+Uses `THUDM/chatglm3-6b` to generate Chinese text. It will usually not generate English text.
+
+## Text Generation
+
+```bash
+cargo run --example chatglm --release -- --prompt "部署门槛较低等众多优秀特 "
+
+> 部署门槛较低等众多优秀特 点,使得其成为了一款备受欢迎的AI助手。
+>
+> 作为一款人工智能助手,ChatGLM3-6B
+```
\ No newline at end of file
diff --git a/candle-examples/examples/chinese_clip/README.md b/candle-examples/examples/chinese_clip/README.md
new file mode 100644
index 00000000..15f63dd0
--- /dev/null
+++ b/candle-examples/examples/chinese_clip/README.md
@@ -0,0 +1,42 @@
+# candle-chinese-clip
+
+Contrastive Language-Image Pre-Training (CLIP) is an architecture trained on
+pairs of images with related texts. This variant is trained on Chinese text instead of English.
+
+## Running on CPU
+
+```bash
+$ cargo run --example chinese_clip --release -- --images "candle-examples/examples/stable-diffusion/assets/stable-diffusion-xl.jpg","candle-examples/examples/yolo-v8/assets/bike.jpg" --cpu --sequences "一场自行车比赛","两只猫的照片","一个机器人拿着蜡烛"
+
+> Results for image: candle-examples/examples/stable-diffusion/assets/stable-diffusion-xl.jpg
+>
+> 2025-03-25T19:22:01.325177Z INFO chinese_clip: Probability: 0.0000% Text: 一场自行车比赛
+> 2025-03-25T19:22:01.325179Z INFO chinese_clip: Probability: 0.0000% Text: 两只猫的照片
+> 2025-03-25T19:22:01.325181Z INFO chinese_clip: Probability: 100.0000% Text: 一个机器人拿着蜡烛
+> 2025-03-25T19:22:01.325183Z INFO chinese_clip:
+>
+> Results for image: candle-examples/examples/yolo-v8/assets/bike.jpg
+>
+> 2025-03-25T19:22:01.325184Z INFO chinese_clip: Probability: 100.0000% Text: 一场自行车比赛
+> 2025-03-25T19:22:01.325186Z INFO chinese_clip: Probability: 0.0000% Text: 两只猫的照片
+> 2025-03-25T19:22:01.325187Z INFO chinese_clip: Probability: 0.0000% Text: 一个机器人拿着蜡烛
+```
+
+## Running on Metal
+
+```bash
+$ cargo run --features metal --example chinese_clip --release -- --images "candle-examples/examples/stable-diffusion/assets/stable-diffusion-xl.jpg","candle-examples/examples/yolo-v8/assets/bike.jpg" --sequences "一场自行车比赛","两只猫的照片","一个机器人拿着蜡烛"
+
+> Results for image: candle-examples/examples/stable-diffusion/assets/stable-diffusion-xl.jpg
+>
+> 2025-03-25T19:22:01.325177Z INFO chinese_clip: Probability: 0.0000% Text: 一场自行车比赛
+> 2025-03-25T19:22:01.325179Z INFO chinese_clip: Probability: 0.0000% Text: 两只猫的照片
+> 2025-03-25T19:22:01.325181Z INFO chinese_clip: Probability: 100.0000% Text: 一个机器人拿着蜡烛
+> 2025-03-25T19:22:01.325183Z INFO chinese_clip:
+>
+> Results for image: candle-examples/examples/yolo-v8/assets/bike.jpg
+>
+> 2025-03-25T19:22:01.325184Z INFO chinese_clip: Probability: 100.0000% Text: 一场自行车比赛
+> 2025-03-25T19:22:01.325186Z INFO chinese_clip: Probability: 0.0000% Text: 两只猫的照片
+> 2025-03-25T19:22:01.325187Z INFO chinese_clip: Probability: 0.0000% Text: 一个机器人拿着蜡烛
+```
diff --git a/candle-examples/examples/convmixer/README.md b/candle-examples/examples/convmixer/README.md
new file mode 100644
index 00000000..3981e3d9
--- /dev/null
+++ b/candle-examples/examples/convmixer/README.md
@@ -0,0 +1,17 @@
+# candle-convmixer
+
+A lightweight CNN architecture that processes image patches similarly to a vision transformer, with separate spatial and channel convolutions.
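+
+The block structure is simple enough to sketch with `candle_nn`'s convolution helpers. The snippet below is a simplified illustration of one ConvMixer block, not the example's actual code (BatchNorm layers are omitted and the type names here are made up): a depthwise convolution mixes spatial locations within each channel, a residual connection is added, and a pointwise 1x1 convolution then mixes channels.
+
+```rust
+use candle::{Result, Tensor};
+use candle_nn::{conv2d, Conv2d, Conv2dConfig, Module, VarBuilder};
+
+struct ConvMixerBlock {
+    depthwise: Conv2d, // spatial mixing: one k x k filter per channel
+    pointwise: Conv2d, // channel mixing: a 1x1 convolution across channels
+}
+
+impl ConvMixerBlock {
+    fn new(dim: usize, kernel: usize, vb: VarBuilder) -> Result<Self> {
+        let dw_cfg = Conv2dConfig {
+            padding: kernel / 2,
+            groups: dim, // groups == channels makes the convolution depthwise
+            ..Default::default()
+        };
+        Ok(Self {
+            depthwise: conv2d(dim, dim, kernel, dw_cfg, vb.pp("dw"))?,
+            pointwise: conv2d(dim, dim, 1, Default::default(), vb.pp("pw"))?,
+        })
+    }
+
+    fn forward(&self, xs: &Tensor) -> Result<Tensor> {
+        // Residual connection around the spatial (depthwise) convolution.
+        let xs = (self.depthwise.forward(xs)?.gelu()? + xs)?;
+        self.pointwise.forward(&xs)?.gelu()
+    }
+}
+```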
+
+ConvMixer comes from [Patches Are All You Need?](https://arxiv.org/pdf/2201.09792); a reference implementation is available at [ConvMixer](https://github.com/locuslab/convmixer).
+
+## Running an example
+
+```bash
+$ cargo run --example convmixer --release -- --image candle-examples/examples/yolo-v8/assets/bike.jpg
+
+> mountain bike, all-terrain bike, off-roader: 61.75%
+> unicycle, monocycle : 5.73%
+> moped : 3.66%
+> bicycle-built-for-two, tandem bicycle, tandem: 3.51%
+> crash helmet : 0.85%
+```
diff --git a/candle-examples/examples/custom-ops/README.md b/candle-examples/examples/custom-ops/README.md
new file mode 100644
index 00000000..46008084
--- /dev/null
+++ b/candle-examples/examples/custom-ops/README.md
@@ -0,0 +1,17 @@
+# candle-custom-ops
+
+This example illustrates how to implement forward and backward passes for custom operations on the CPU and GPU.
+The custom op in this example implements RMS normalization for the CPU and CUDA.
+
+## Running an example
+
+```bash
+$ cargo run --example custom-ops
+
+> [[ 0., 1., 2., 3., 4., 5., 6.],
+> [ 7., 8., 9., 10., 11., 12., 13.]]
+> Tensor[[2, 7], f32]
+> [[0.0000, 0.2773, 0.5547, 0.8320, 1.1094, 1.3867, 1.6641],
+> [0.6864, 0.7845, 0.8825, 0.9806, 1.0786, 1.1767, 1.2748]]
+> Tensor[[2, 7], f32]
+```
\ No newline at end of file
diff --git a/candle-examples/examples/efficientnet/README.md b/candle-examples/examples/efficientnet/README.md
new file mode 100644
index 00000000..9a009b6a
--- /dev/null
+++ b/candle-examples/examples/efficientnet/README.md
@@ -0,0 +1,15 @@
+# candle-efficientnet
+
+Demonstrates a Candle implementation of EfficientNet for image classification based on ImageNet classes.
+
+## Running an example
+
+```bash
+$ cargo run --example efficientnet --release -- --image candle-examples/examples/yolo-v8/assets/bike.jpg --which b1
+
+> bicycle-built-for-two, tandem bicycle, tandem: 45.85%
+> mountain bike, all-terrain bike, off-roader: 30.45%
+> crash helmet : 2.58%
+> unicycle, monocycle : 2.21%
+> tricycle, trike, velocipede: 1.53%
+```
diff --git a/candle-examples/examples/falcon/README.md b/candle-examples/examples/falcon/README.md
index 267c78c2..66e04aad 100644
--- a/candle-examples/examples/falcon/README.md
+++ b/candle-examples/examples/falcon/README.md
@@ -1,3 +1,10 @@
 # candle-falcon
 
 Falcon is a general large language model.
+
+## Running an example
+
+Make sure to include the `--use-f32` flag when running on CPU, as there is no BFloat16 implementation for CPU yet.
+```bash
+cargo run --example falcon --release -- --prompt "Flying monkeys are" --use-f32
+```
\ No newline at end of file
diff --git a/candle-examples/examples/glm4/README.org b/candle-examples/examples/glm4/README.org
index a584f6c7..71cd3058 100644
--- a/candle-examples/examples/glm4/README.org
+++ b/candle-examples/examples/glm4/README.org
@@ -12,7 +12,7 @@ GLM-4-9B is the open-source version of the latest generation of pre-trained mode
 ** Running with ~cpu~
 #+begin_src shell
-  cargo run --example glm4 --release -- --cpu--prompt "Hello world"
+  cargo run --example glm4 --release -- --cpu --prompt "Hello world"
 #+end_src
 ** Output Example
diff --git a/candle-examples/examples/llama/README.md b/candle-examples/examples/llama/README.md
new file mode 100644
index 00000000..2edec7b1
--- /dev/null
+++ b/candle-examples/examples/llama/README.md
@@ -0,0 +1,11 @@
+# candle-llama
+
+Candle implementations of various Llama-based architectures.
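+
+Each run downloads the selected checkpoint and tokenizer from the Hugging Face hub, encodes the prompt, and then samples tokens one at a time. As a rough sketch of that loop (the `forward` closure stands in for the model call, which in the real example also manages a KV cache; the seed and temperature values are arbitrary):
+
+```rust
+use candle::{Device, Result, Tensor};
+use candle_transformers::generation::LogitsProcessor;
+
+// Hypothetical generation loop; `forward` should return the logits for the
+// last position of the input, as a 1-D tensor over the vocabulary.
+fn generate(
+    mut forward: impl FnMut(&Tensor, usize) -> Result<Tensor>,
+    prompt_ids: Vec<u32>,
+    max_new_tokens: usize,
+    device: &Device,
+) -> Result<Vec<u32>> {
+    // Seeded sampler with temperature 0.8 and no top-p filtering.
+    let mut sampler = LogitsProcessor::new(299792458, Some(0.8), None);
+    let mut ids = prompt_ids;
+    for index in 0..max_new_tokens {
+        // With a KV cache, only the newest token is fed back after step 0.
+        let start = if index == 0 { 0 } else { ids.len() - 1 };
+        let input = Tensor::new(&ids[start..], device)?.unsqueeze(0)?;
+        let logits = forward(&input, start)?;
+        ids.push(sampler.sample(&logits)?);
+    }
+    Ok(ids)
+}
+```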
+
+## Running an example
+
+```bash
+$ cargo run --example llama -- --prompt "Machine learning is " --which v32-3b-instruct
+
+> Machine learning is the part of computer science which deals with the development of algorithms and
+```
\ No newline at end of file
diff --git a/candle-examples/examples/mamba/README.md b/candle-examples/examples/mamba/README.md
index 507434a1..2470ab7f 100644
--- a/candle-examples/examples/mamba/README.md
+++ b/candle-examples/examples/mamba/README.md
@@ -12,6 +12,6 @@ would only work for inference.
 ## Running the example
 
 ```bash
-$ cargo run --example mamba-minimal --release -- --prompt "Mamba is the"
+$ cargo run --example mamba --release -- --prompt "Mamba is the"
 ```
 
diff --git a/candle-examples/examples/metavoice/README.md b/candle-examples/examples/metavoice/README.md
index ef53e66f..56b66e3d 100644
--- a/candle-examples/examples/metavoice/README.md
+++ b/candle-examples/examples/metavoice/README.md
@@ -13,6 +13,6 @@ Note that the current candle implementation suffers from some limitations as of
 ## Run an example
 
 ```bash
-cargo run --example metavoice --release -- \\
+cargo run --example metavoice --release -- \
 --prompt "This is a demo of text to speech by MetaVoice-1B, an open-source foundational audio model."
 ```
diff --git a/candle-examples/examples/mnist-training/README.md b/candle-examples/examples/mnist-training/README.md
new file mode 100644
index 00000000..3c571b97
--- /dev/null
+++ b/candle-examples/examples/mnist-training/README.md
@@ -0,0 +1,16 @@
+# candle-mnist-training
+
+Training a two-layer MLP on MNIST in Candle.
+
+## Running an example
+
+```bash
+$ cargo run --example mnist-training --features candle-datasets
+
+> train-images: [60000, 784]
+> train-labels: [60000]
+> test-images: [10000, 784]
+> test-labels: [10000]
+> 1 train loss: 2.30265 test acc: 68.08%
+> 2 train loss: 1.50815 test acc: 60.77%
+```
\ No newline at end of file
diff --git a/candle-examples/examples/moondream/README.md b/candle-examples/examples/moondream/README.md
index e202de7c..c70ce0f5 100644
--- a/candle-examples/examples/moondream/README.md
+++ b/candle-examples/examples/moondream/README.md
@@ -12,7 +12,7 @@ $ wget https://raw.githubusercontent.com/vikhyat/moondream/main/assets/demo-1.jp
 Now you can run Moondream from the `candle-examples` crate:
 
 ```bash
-$ cargo run --example moondream --release -- --prompt "What is the girl eating?" --image "./demo-1.jpg"
+$ cargo run --example moondream --release -- --prompt "Describe the people behind the bikers?" --image "candle-examples/examples/yolo-v8/assets/bike.jpg"
 
 avx: false, neon: true, simd128: false, f16c: false
 temp: 0.00 repeat-penalty: 1.00 repeat-last-n: 64
diff --git a/candle-examples/examples/musicgen/README.md b/candle-examples/examples/musicgen/README.md
new file mode 100644
index 00000000..8db388b1
--- /dev/null
+++ b/candle-examples/examples/musicgen/README.md
@@ -0,0 +1,20 @@
+# candle-musicgen
+
+Candle implementation of MusicGen from [Simple and Controllable Music Generation](https://arxiv.org/pdf/2306.05284).
+
+## Running an example
+
+```bash
+$ cargo run --example musicgen -- --prompt "90s rock song with loud guitars and heavy drums"
+
+> tokens: [2777, 7, 2480, 2324, 28, 8002, 5507, 7, 11, 2437, 5253, 7, 1]
+> Tensor[dims 1, 13; u32]
+> [[[ 0.0902, 0.1256, -0.0585, ..., 0.1057, -0.5141, -0.4675],
+> [ 0.1972, -0.0268, -0.3368, ..., -0.0495, -0.3597, -0.3940],
+> [-0.0855, -0.0007, 0.2225, ..., -0.2804, -0.5360, -0.2436],
+> ...
+> [ 0.0515, 0.0235, -0.3855, ..., -0.4728, -0.6858, -0.2923],
+> [-0.3728, -0.1442, -0.1179, ..., -0.4388, -0.0287, -0.3242],
+> [ 0.0163, 0.0012, -0.0020, ..., 0.0142, 0.0173, -0.0103]]]
+> Tensor[[1, 13, 768], f32]
+```
\ No newline at end of file
diff --git a/candle-examples/examples/quantized-phi/README.md b/candle-examples/examples/quantized-phi/README.md
new file mode 100644
index 00000000..ee463118
--- /dev/null
+++ b/candle-examples/examples/quantized-phi/README.md
@@ -0,0 +1,20 @@
+# candle-quantized-phi
+
+Candle implementation of various quantized Phi models.
+
+## Running an example
+
+```bash
+$ cargo run --example quantized-phi --release -- --prompt "The best thing about coding in rust is "
+
+> - it's memory safe (without you having to worry too much)
+> - the borrow checker is really smart and will catch your mistakes for free, making them show up as compile errors instead of segfaulting in runtime.
+>
+> This alone make me prefer using rust over c++ or go, python/Cython etc.
+>
+> The major downside I can see now:
+> - it's slower than other languages (viz: C++) and most importantly lack of libraries to leverage existing work done by community in that language. There are so many useful machine learning libraries available for c++, go, python etc but none for Rust as far as I am aware of on the first glance.
+> - there aren't a lot of production ready projects which also makes it very hard to start new one (given my background)
+>
+> Another downside:
+```
\ No newline at end of file
diff --git a/candle-examples/examples/quantized-t5/README.md b/candle-examples/examples/quantized-t5/README.md
index c86e746d..d0a68dbd 100644
--- a/candle-examples/examples/quantized-t5/README.md
+++ b/candle-examples/examples/quantized-t5/README.md
@@ -1,5 +1,7 @@
 # candle-quantized-t5
 
+Candle implementation for quantizing and running T5 translation models.
+
 ## Seq2Seq example
 
 This example uses a quantized version of the t5 model.
diff --git a/candle-examples/examples/reinforcement-learning/README.md b/candle-examples/examples/reinforcement-learning/README.md
index 28819067..25825408 100644
--- a/candle-examples/examples/reinforcement-learning/README.md
+++ b/candle-examples/examples/reinforcement-learning/README.md
@@ -2,6 +2,11 @@
 
 Reinforcement Learning examples for candle.
 
+> [!WARNING]
+> uv is not currently compatible with pyo3 as of 2025/3/28.
+
+## System-wide Python
+
 This has been tested with `gymnasium` version `0.29.1`. You can install the Python package with:
 ```bash
diff --git a/candle-examples/examples/resnet/README.md b/candle-examples/examples/resnet/README.md
index df934773..8565a7f3 100644
--- a/candle-examples/examples/resnet/README.md
+++ b/candle-examples/examples/resnet/README.md
@@ -7,7 +7,7 @@ probabilities for the top-5 classes.
 ## Running an example
 
 ```
-$ cargo run --example resnet --release -- --image tiger.jpg
+$ cargo run --example resnet --release -- --image candle-examples/examples/yolo-v8/assets/bike.jpg
 
 loaded image Tensor[dims 3, 224, 224; f32]
 model built
diff --git a/candle-examples/examples/segformer/README.md b/candle-examples/examples/segformer/README.md
index 3ea503ee..f2cc81ca 100644
--- a/candle-examples/examples/segformer/README.md
+++ b/candle-examples/examples/segformer/README.md
@@ -10,9 +10,11 @@ If you want you can use the example images from this [pull request][pr], downloa
 ```bash
 # run the image classification task
-cargo run --example segformer classify
+cargo run --example segformer classify candle-examples/examples/yolo-v8/assets/bike.jpg
+
 # run the segmentation task
-cargo run --example segformer segment
+cargo run --example segformer segment candle-examples/examples/yolo-v8/assets/bike.jpg
+
 ```
 
 Example output for classification:
diff --git a/candle-examples/examples/segment-anything/README.md b/candle-examples/examples/segment-anything/README.md
index da27f6ce..69051792 100644
--- a/candle-examples/examples/segment-anything/README.md
+++ b/candle-examples/examples/segment-anything/README.md
@@ -14,8 +14,8 @@ based on [MobileSAM](https://github.com/ChaoningZhang/MobileSAM).
 ```bash
 cargo run --example segment-anything --release -- \
-  --image candle-examples/examples/yolo-v8/assets/bike.jpg
-  --use-tiny
+  --image candle-examples/examples/yolo-v8/assets/bike.jpg \
+  --use-tiny \
   --point 0.6,0.6 --point 0.6,0.55
 ```
diff --git a/candle-examples/examples/siglip/README.md b/candle-examples/examples/siglip/README.md
index d79ae330..9ef3acb0 100644
--- a/candle-examples/examples/siglip/README.md
+++ b/candle-examples/examples/siglip/README.md
@@ -5,7 +5,7 @@ SigLIP is multi-modal text-vision model that improves over CLIP by using a sigmo
 ### Running an example
 
 ```
-$ cargo run --features cuda -r --example siglip -
+$ cargo run --features cuda -r --example siglip
 
 softmax_image_vec: [2.1912122e-14, 2.3624872e-14, 1.0, 1.0, 2.4787932e-8, 3.2784535e-12]
diff --git a/candle-examples/examples/silero-vad/README.md b/candle-examples/examples/silero-vad/README.md
index 14dd8a82..8d1d61e1 100644
--- a/candle-examples/examples/silero-vad/README.md
+++ b/candle-examples/examples/silero-vad/README.md
@@ -6,7 +6,14 @@ This example uses the models available in the hugging face [onnx-community/siler
 
 ## Running the example
 
+### Using arecord
+
 ```bash
 $ arecord -t raw -f S16_LE -r 16000 -c 1 -d 5 - | cargo run --example silero-vad --release --features onnx -- --sample-rate 16000
 ```
+
+### Using SoX
+
+```bash
+$ rec -t raw -r 48000 -b 16 -c 1 -e signed-integer - trim 0 5 | sox -t raw -r 48000 -b 16 -c 1 -e signed-integer - -t raw -r 16000 -b 16 -c 1 -e signed-integer - | cargo run --example silero-vad --release --features onnx -- --sample-rate 16000
+```
diff --git a/candle-examples/examples/starcoder2/README.md b/candle-examples/examples/starcoder2/README.md
new file mode 100644
index 00000000..ccd7a84e
--- /dev/null
+++ b/candle-examples/examples/starcoder2/README.md
@@ -0,0 +1,15 @@
+# candle-starcoder2
+
+Candle implementation of the StarCoder2 family of code generation models from [StarCoder 2 and The Stack v2: The Next Generation](https://arxiv.org/pdf/2402.19173).
+
+## Running an example
+
+```bash
+$ cargo run --example starcoder2 -- --prompt "write a recursive fibonacci function in python "
+
+> # that returns the nth number in the sequence.
+>
+> def fib(n):
+> if n
+
+```
\ No newline at end of file
diff --git a/candle-examples/examples/stella-en-v5/README.md b/candle-examples/examples/stella-en-v5/README.md
index 3a87b295..61c7e4dd 100644
--- a/candle-examples/examples/stella-en-v5/README.md
+++ b/candle-examples/examples/stella-en-v5/README.md
@@ -10,7 +10,7 @@ Stella_en_1.5B_v5 is used to generate text embeddings
 embeddings for a prompt. The model weights are downloaded
 from the hub on the first run.
 
 ```bash
-$ cargo run --example stella-en-v5 --release -- --query "What are safetensors?"
+$ cargo run --example stella-en-v5 --release -- --query "What are safetensors?" --which 1.5b
 
 > [[ 0.3905, -0.0130, 0.2072, ..., -0.1100, -0.0086, 0.6002]]
 > Tensor[[1, 1024], f32]
diff --git a/candle-examples/examples/t5/README.md b/candle-examples/examples/t5/README.md
index 18c4c832..1e824e31 100644
--- a/candle-examples/examples/t5/README.md
+++ b/candle-examples/examples/t5/README.md
@@ -1,5 +1,7 @@
 # candle-t5
 
+Candle implementations of the T5 family of translation models.
+
 ## Encoder-decoder example:
 
 ```bash
diff --git a/candle-examples/examples/vgg/README.md b/candle-examples/examples/vgg/README.md
index 473038e8..f0a82f9a 100644
--- a/candle-examples/examples/vgg/README.md
+++ b/candle-examples/examples/vgg/README.md
@@ -7,7 +7,7 @@ The VGG models are defined in `candle-transformers/src/models/vgg.rs`. The main
 You can run the example with the following command:
 
 ```bash
-cargo run --example vgg --release -- --image ../yolo-v8/assets/bike.jpg --which vgg13
+cargo run --example vgg --release -- --image candle-examples/examples/yolo-v8/assets/bike.jpg --which vgg13
 ```
 
 In the command above, `--image` specifies the path to the image file and `--which` specifies the VGG model to use (vgg13, vgg16, or vgg19).
diff --git a/candle-examples/examples/vit/README.md b/candle-examples/examples/vit/README.md
index 42e9a6a7..a8e115c8 100644
--- a/candle-examples/examples/vit/README.md
+++ b/candle-examples/examples/vit/README.md
@@ -7,8 +7,8 @@ probabilities for the top-5 classes.
 
 ## Running an example
 
-```
-$ cargo run --example vit --release -- --image tiger.jpg
+```bash
+$ cargo run --example vit --release -- --image candle-examples/examples/yolo-v8/assets/bike.jpg
 
 loaded image Tensor[dims 3, 224, 224; f32]
 model built
diff --git a/candle-examples/examples/whisper-microphone/README.md b/candle-examples/examples/whisper-microphone/README.md
new file mode 100644
index 00000000..825dd52e
--- /dev/null
+++ b/candle-examples/examples/whisper-microphone/README.md
@@ -0,0 +1,15 @@
+# candle-whisper-microphone
+
+Whisper implementation using the microphone as input.
+
+## Running an example
+
+```bash
+$ cargo run --example whisper-microphone --features microphone
+
+> transcribing audio...
+> 480256 160083
+> language_token: None
+> 0.0s -- 30.0s: Hello, hello, I don't know if this is working, but You know, how long did I make this?
+> 480256 160085
+```
\ No newline at end of file
diff --git a/candle-examples/examples/yi/README.md b/candle-examples/examples/yi/README.md
new file mode 100644
index 00000000..51abe9ff
--- /dev/null
+++ b/candle-examples/examples/yi/README.md
@@ -0,0 +1,13 @@
+# candle-yi
+
+Candle implementations of the Yi family of bilingual (English, Chinese) LLMs.
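+
+Because the models are bilingual, their tokenizer covers both English and Chinese in a single vocabulary. A quick way to see this with the `tokenizers` crate that the examples already use (the local `tokenizer.json` path below is illustrative; the example itself fetches the file from the hub):
+
+```rust
+use tokenizers::Tokenizer;
+
+fn main() -> Result<(), Box<dyn std::error::Error>> {
+    // Illustrative path to a tokenizer file downloaded from the hub.
+    let tokenizer = Tokenizer::from_file("tokenizer.json")?;
+    // A mixed English/Chinese prompt encodes into a single id sequence.
+    let encoding = tokenizer.encode("Here is a test sentence: 你好", true)?;
+    println!("{:?}", encoding.get_ids());
+    Ok(())
+}
+```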
+
+## Running an example
+
+```bash
+$ cargo run --example yi -- --prompt "Here is a test sentence"
+
+> python
+> print("Hello World")
+>
+```
diff --git a/candle-examples/examples/yolo-v3/README.md b/candle-examples/examples/yolo-v3/README.md
new file mode 100644
index 00000000..0c25eb72
--- /dev/null
+++ b/candle-examples/examples/yolo-v3/README.md
@@ -0,0 +1,32 @@
+# candle-yolo-v3
+
+Candle implementation of Yolo-V3 for object detection.
+
+## Running an example
+
+```bash
+$ cargo run --example yolo-v3 --release -- candle-examples/examples/yolo-v8/assets/bike.jpg
+
+> generated predictions Tensor[dims 10647, 85; f32]
+> person: Bbox { xmin: 46.362198, ymin: 72.177, xmax: 135.92522, ymax: 339.8356, confidence: 0.99705493, data: () }
+> person: Bbox { xmin: 137.25645, ymin: 67.58148, xmax: 216.90437, ymax: 333.80756, confidence: 0.9898516, data: () }
+> person: Bbox { xmin: 245.7842, ymin: 82.76726, xmax: 316.79053, ymax: 337.21613, confidence: 0.9884322, data: () }
+> person: Bbox { xmin: 207.52783, ymin: 61.815224, xmax: 266.77884, ymax: 307.92606, confidence: 0.9860648, data: () }
+> person: Bbox { xmin: 11.457404, ymin: 60.335564, xmax: 34.39357, ymax: 187.7714, confidence: 0.9545012, data: () }
+> person: Bbox { xmin: 251.88353, ymin: 11.235481, xmax: 286.56607, ymax: 92.54697, confidence: 0.8439807, data: () }
+> person: Bbox { xmin: -0.44309902, ymin: 55.486923, xmax: 13.160354, ymax: 184.09705, confidence: 0.8266243, data: () }
+> person: Bbox { xmin: 317.40826, ymin: 55.39501, xmax: 370.6704, ymax: 153.74887, confidence: 0.7327442, data: () }
+> person: Bbox { xmin: 370.02835, ymin: 66.120224, xmax: 404.22824, ymax: 142.09691, confidence: 0.7265741, data: () }
+> person: Bbox { xmin: 250.36511, ymin: 57.349842, xmax: 280.06335, ymax: 116.29384, confidence: 0.709422, data: () }
+> person: Bbox { xmin: 32.573215, ymin: 66.66239, xmax: 50.49056, ymax: 173.42068, confidence: 0.6998766, data: () }
+> person: Bbox { xmin: 131.72215, ymin: 63.946213, xmax: 166.66151, ymax: 241.52773, confidence: 0.64457536, data: () }
+> person: Bbox { xmin: 407.42416, ymin: 49.106407, xmax: 415.24307, ymax: 84.7134, confidence: 0.5955802, data: () }
+> person: Bbox { xmin: 51.650482, ymin: 64.4985, xmax: 67.40904, ymax: 106.952385, confidence: 0.5196007, data: () }
+> bicycle: Bbox { xmin: 160.10031, ymin: 183.90837, xmax: 200.86832, ymax: 398.609, confidence: 0.9623588, data: () }
+> bicycle: Bbox { xmin: 66.570915, ymin: 192.56966, xmax: 112.06765, ymax: 369.28497, confidence: 0.9174347, data: () }
+> bicycle: Bbox { xmin: 258.2856, ymin: 197.04532, xmax: 298.43106, ymax: 364.8627, confidence: 0.6851388, data: () }
+> bicycle: Bbox { xmin: 214.0034, ymin: 175.76498, xmax: 252.45158, ymax: 356.53818, confidence: 0.67071193, data: () }
+> motorbike: Bbox { xmin: 318.23938, ymin: 95.22487, xmax: 369.9743, ymax: 213.46263, confidence: 0.96691036, data: () }
+> motorbike: Bbox { xmin: 367.46417, ymin: 100.07982, xmax: 394.9981, ymax: 174.6545, confidence: 0.9185384, data: () }
+> writing "candle-examples/examples/yolo-v8/assets/bike.pp.jpg"
+```
\ No newline at end of file