From 3ef328c53db164af5e30248c95d37e95c397dc1a Mon Sep 17 00:00:00 2001
From: Laurent Mazare
Date: Fri, 22 Sep 2023 21:24:51 +0100
Subject: [PATCH] Mention the new phi model in the readme. (#932)

---
 README.md | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 36b58bd4..c1750845 100644
--- a/README.md
+++ b/README.md
@@ -60,8 +60,8 @@ We also provide a some command line based examples using state of the art models
 
 - [LLaMA and LLaMA-v2](./candle-examples/examples/llama/): general LLM.
 - [Falcon](./candle-examples/examples/falcon/): general LLM.
-- [StarCoder](./candle-examples/examples/bigcode/): LLM specialized to code
-  generation.
+- [Phi-v1.5](./candle-examples/examples/phi/): a 1.3b general LLM with performance on par with LLaMA-v2 7b.
+- [StarCoder](./candle-examples/examples/bigcode/): LLM specialized to code generation.
 - [Quantized LLaMA](./candle-examples/examples/quantized/): quantized version of the
   LLaMA model using the same quantization techniques as [llama.cpp](https://github.com/ggerganov/llama.cpp).
@@ -146,6 +146,7 @@ If you have an addition to this list, please submit a pull request.
  - LLaMA v1 and v2.
  - Falcon.
  - StarCoder.
+ - Phi v1.5.
  - T5.
  - Bert.
  - Whisper (multi-lingual support).