Mirror of https://github.com/huggingface/candle.git (synced 2025-06-16 18:48:51 +00:00)
Mention the new phi model in the readme. (#932)
@@ -60,8 +60,8 @@ We also provide a some command line based examples using state of the art models
 - [LLaMA and LLaMA-v2](./candle-examples/examples/llama/): general LLM.
 - [Falcon](./candle-examples/examples/falcon/): general LLM.
-- [StarCoder](./candle-examples/examples/bigcode/): LLM specialized to code
-  generation.
+- [Phi-v1.5](./candle-examples/examples/phi/): a 1.3b general LLM with performance on par with LLaMA-v2 7b.
+- [StarCoder](./candle-examples/examples/bigcode/): LLM specialized to code generation.
 - [Quantized LLaMA](./candle-examples/examples/quantized/): quantized version of
   the LLaMA model using the same quantization techniques as
   [llama.cpp](https://github.com/ggerganov/llama.cpp).
@@ -146,6 +146,7 @@ If you have an addition to this list, please submit a pull request.
 - LLaMA v1 and v2.
 - Falcon.
 - StarCoder.
+- Phi v1.5.
 - T5.
 - Bert.
 - Whisper (multi-lingual support).
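As a usage note, the Phi-v1.5 example added in this commit would be run like candle's other command-line examples; a minimal sketch, assuming the standard `cargo run --example` layout implied by the `./candle-examples/examples/phi/` path and a `--prompt` flag modeled on candle's other LLM examples (neither verified against this commit):

```shell
# Fetch the repository mirrored above.
git clone https://github.com/huggingface/candle.git
cd candle
# Run the Phi-v1.5 example; --release matters, LLM inference is very slow
# in debug builds. The --prompt flag is an assumption, not confirmed here.
cargo run --example phi --release -- --prompt "def print_prime(n):"
```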