Add to the readmes for stable-lm. (#1047)

Laurent Mazare
2023-10-06 21:26:04 +01:00
committed by GitHub
parent d5f7267087
commit 955e00b2e8
2 changed files with 28 additions and 0 deletions


@@ -62,6 +62,8 @@ We also provide some command line based examples using state-of-the-art models:
- [LLaMA and LLaMA-v2](./candle-examples/examples/llama/): general LLM.
- [Falcon](./candle-examples/examples/falcon/): general LLM.
- [Phi-v1.5](./candle-examples/examples/phi/): a 1.3b general LLM with performance on par with LLaMA-v2 7b.
- [StableLM-3B-4E1T](./candle-examples/examples/stable-lm/): a 3b general LLM
pre-trained on 1T tokens of English and code datasets (a run sketch follows this list).
- [Mistral7b-v0.1](./candle-examples/examples/mistral/): a 7b general LLM with
better performance than all publicly available 13b models as of 2023-09-28.
- [StarCoder](./candle-examples/examples/bigcode/): LLM specialized to code generation.
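
The StableLM-3B-4E1T entry above refers to a command line example in the repository. A minimal sketch of how such an example is typically invoked from the repo root is shown below; the example name `stable-lm` is taken from the directory above, while the `--prompt` flag is an assumption and may differ from the example's actual CLI.

```bash
# Hypothetical invocation of the stable-lm example (flags are assumptions).
cargo run --example stable-lm --release -- \
  --prompt "The most efficient programming language is"
```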
@@ -152,6 +154,7 @@ If you have an addition to this list, please submit a pull request.
- StarCoder.
- Phi v1.5.
- Mistral 7b v0.1.
- StableLM-3B-4E1T.
- T5.
- Bert.
- Whisper (multi-lingual support).