diff --git a/README.md b/README.md
index 93cbccc4..c4f27548 100644
--- a/README.md
+++ b/README.md
@@ -109,6 +109,8 @@ We also provide a some command line based examples using state of the art models
 - [DINOv2](./candle-examples/examples/dinov2/): computer vision model trained
   using self-supervision (can be used for imagenet classification, depth
   evaluation, segmentation).
+- [VGG](./candle-examples/examples/vgg/),
+  [RepVGG](./candle-examples/examples/repvgg/): computer vision models.
 - [BLIP](./candle-examples/examples/blip/): image to text model, can be used to
   generate captions for an image.
 - [Marian-MT](./candle-examples/examples/marian-mt/): neural machine translation
@@ -204,7 +206,7 @@ If you have an addition to this list, please submit a pull request.
 - Image to text.
   - BLIP.
 - Computer Vision Models.
-  - DINOv2, ConvMixer, EfficientNet, ResNet, ViT.
+  - DINOv2, ConvMixer, EfficientNet, ResNet, ViT, VGG, RepVGG.
   - yolo-v3, yolo-v8.
   - Segment-Anything Model (SAM).
 - File formats: load models from safetensors, npz, ggml, or PyTorch files.
diff --git a/candle-examples/examples/repvgg/README.md b/candle-examples/examples/repvgg/README.md
index 2cb807c1..d24bcd6d 100644
--- a/candle-examples/examples/repvgg/README.md
+++ b/candle-examples/examples/repvgg/README.md
@@ -1,7 +1,9 @@
 # candle-repvgg
 
-A candle implementation of inference using a pre-trained [repvgg](https://arxiv.org/abs/2101.03697).
-This uses a classification head trained on the ImageNet dataset and returns the
+[RepVGG: Making VGG-style ConvNets Great Again](https://arxiv.org/abs/2101.03697).
+
+This candle implementation uses a pre-trained RepVGG network for inference. The
+classification head has been trained on the ImageNet dataset and returns the
 probabilities for the top-5 classes.
 
 ## Running an example
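
The README text above describes a classification head that returns the probabilities for the top-5 ImageNet classes. As a minimal sketch of that post-processing step (not the example binary itself), the following standalone Rust program turns a logits tensor into top-5 (class, probability) pairs using `candle-core` and `candle-nn`; the random logits are a stand-in for a real forward pass through the pre-trained network:

```rust
use candle_core::{D, Device, Result, Tensor};

/// Softmax the class dimension and keep the five most probable classes.
fn top5(logits: &Tensor) -> Result<Vec<(usize, f32)>> {
    let probs = candle_nn::ops::softmax(logits, D::Minus1)?.to_vec1::<f32>()?;
    let mut indexed: Vec<(usize, f32)> = probs.into_iter().enumerate().collect();
    // Sort descending by probability, then truncate to the top-5.
    indexed.sort_by(|a, b| b.1.total_cmp(&a.1));
    indexed.truncate(5);
    Ok(indexed)
}

fn main() -> Result<()> {
    // Stand-in logits for a 1000-class ImageNet head; a real run would
    // obtain these from the RepVGG (or VGG) model's forward pass.
    let logits = Tensor::randn(0f32, 1f32, 1000, &Device::Cpu)?;
    for (class, p) in top5(&logits)? {
        println!("class {class}: {:.2}%", p * 100.0);
    }
    Ok(())
}
```

The full pipeline (image loading, preprocessing, and the model forward pass) lives in the example itself, which is typically invoked along the lines of `cargo run --example repvgg --release -- --image <path-to-image>`, as documented in the truncated "Running an example" section.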