Mention VGG in the readme. (#1573)

Laurent Mazare
2024-01-12 09:59:29 +01:00
committed by GitHub
parent 6242276c09
commit 8e06bfb4fd
2 changed files with 8 additions and 3 deletions

README.md

@@ -109,6 +109,9 @@ We also provide some command line based examples using state of the art models
 - [DINOv2](./candle-examples/examples/dinov2/): computer vision model trained
   using self-supervision (can be used for imagenet classification, depth
   evaluation, segmentation).
+- [VGG](./candle-examples/examples/vgg/),
+  [RepVGG](./candle-examples/examples/repvgg): computer vision models.
+- [BLIP](./candle-examples/examples/blip/): image to text model, can be used to
 - [BLIP](./candle-examples/examples/blip/): image to text model, can be used to
   generate captions for an image.
 - [Marian-MT](./candle-examples/examples/marian-mt/): neural machine translation
@@ -204,7 +207,7 @@ If you have an addition to this list, please submit a pull request.
 - Image to text.
   - BLIP.
 - Computer Vision Models.
-  - DINOv2, ConvMixer, EfficientNet, ResNet, ViT.
+  - DINOv2, ConvMixer, EfficientNet, ResNet, ViT, VGG, RepVGG.
   - yolo-v3, yolo-v8.
   - Segment-Anything Model (SAM).
 - File formats: load models from safetensors, npz, ggml, or PyTorch files.
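
As an aside on the file-formats bullet above: loading weights from a safetensors file in candle is a single call. A minimal sketch, assuming candle-core's `safetensors::load` helper and a placeholder `model.safetensors` path:

```rust
use candle_core::{safetensors, Device};

fn main() -> candle_core::Result<()> {
    // Placeholder path: point this at any .safetensors checkpoint.
    // Reads every tensor in the file into a HashMap<String, Tensor> on the CPU.
    let tensors = safetensors::load("model.safetensors", &Device::Cpu)?;
    for (name, tensor) in tensors.iter() {
        println!("{name}: {:?}", tensor.dims());
    }
    Ok(())
}
```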

candle-examples/examples/repvgg/README.md

@@ -1,7 +1,9 @@
 # candle-repvgg
 
-A candle implementation of inference using a pre-trained [repvgg](https://arxiv.org/abs/2101.03697).
-This uses a classification head trained on the ImageNet dataset and returns the
-probabilities for the top-5 classes.
+[RepVGG: Making VGG-style ConvNets Great Again](https://arxiv.org/abs/2101.03697).
+
+This candle implementation uses a pre-trained RepVGG network for inference. The
+classification head has been trained on the ImageNet dataset and returns the
+probabilities for the top-5 classes.
 
 ## Running an example
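
The body of this section is not part of the diff's context. As a usage sketch: candle's vision examples are generally launched through cargo, passing the input picture with an `--image` flag; the image path below is illustrative, not taken from this diff:

```bash
cargo run --example repvgg --release -- \
  --image candle-examples/examples/yolo-v8/assets/bike.jpg
```

If the flag matches the example's CLI, this prints the top-5 ImageNet classes and their probabilities for the supplied image.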