Mirror of https://github.com/huggingface/candle.git, synced 2025-06-16 18:48:51 +00:00

* define structs
* construct ResidualConvUnit
* forward() for ResidualConvUnit
* implement FeatureFusionBlock
* implement Scratch
* implement DPTHead
* add identity module
* implement forward for DPTHead
* add get_intermediate_layers to DinoVisionTransformer
* implement DepthAnythingV2
* some minor tweaks
* fix compile errors
* fix var builder prefixes
* setup initial example
* use fixed patch size of 37 (518 / 14)
* debugged until output
* print min and max values
* add some dynamism to the output location
* scale input image
* extract prep function
* extract output path function
* normalize image with magic mean and std
* add spectral coloring
* squeeze in the right place
* make interpolation optional
* use bail instead of panic
* omit unnecessary Shape call
* remove empty curly braces
* use bail instead of assert
* use vb and pp
* remove closures
* extract config object
* Apply rustfmt.
* Fix some clippy lints.
* More lints.
* Use the array methods.

Co-authored-by: laurent <laurent.mazare@gmail.com>
14 lines
548 B
Markdown
# candle-depth-anything-v2
[Depth Anything V2] is a model for Monocular Depth Estimation (MDE), i.e. estimating depth from a single image, and builds on the [DINOv2](https://github.com/facebookresearch/dinov2) vision transformer.
This example first instantiates the DINOv2 backbone, then builds the DepthAnythingV2 model on top of it and runs inference.
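That flow can be sketched roughly as follows. This is a minimal sketch, not the example's actual code: the module paths, constructor signatures, `VarBuilder` prefixes, and the weight-file name are assumptions and may differ between candle versions.

```rust
use candle_core::{DType, Device, Tensor};
use candle_nn::VarBuilder;
// Assumed module paths within candle-transformers.
use candle_transformers::models::depth_anything_v2::{DepthAnythingV2, DepthAnythingV2Config};
use candle_transformers::models::dinov2;

fn main() -> anyhow::Result<()> {
    let device = Device::cuda_if_available(0)?;
    // Load pretrained weights; the file name is a placeholder.
    let vb = unsafe {
        VarBuilder::from_mmaped_safetensors(
            &["depth_anything_v2_vits.safetensors"],
            DType::F32,
            &device,
        )?
    };
    // 1. Instantiate the DINOv2 backbone (the "pretrained" prefix is an assumption).
    let dinov2 = dinov2::vit_small(vb.pp("pretrained"))?;
    // 2. Build DepthAnythingV2 on top of the backbone.
    let config = DepthAnythingV2Config::vit_small();
    let model = DepthAnythingV2::new(&dinov2, &config, vb)?;
    // 3. Run it on a preprocessed image (normalized, sides a multiple of the 14-pixel patch size).
    let image = Tensor::zeros((1, 3, 518, 518), DType::F32, &device)?;
    let depth = model.forward(&image)?;
    println!("depth map shape: {:?}", depth.shape());
    Ok(())
}
```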
## Running an example with color map and CUDA
```bash
cargo run --features cuda,depth_anything_v2 --package candle-examples --example depth_anything_v2 -- --color-map --image candle-examples/examples/yolo-v8/assets/bike.jpg
```
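On a machine without an NVIDIA GPU, the same example should also run on the CPU by dropping the `cuda` feature. This is an assumption based on candle's usual feature setup; CPU inference will be considerably slower.

```bash
cargo run --features depth_anything_v2 --package candle-examples --example depth_anything_v2 -- --color-map --image candle-examples/examples/yolo-v8/assets/bike.jpg
```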