MPT alibi fixes. (#1120)

* MPT alibi fixes.

* Some more fixes.

* Finally get the model to return some sensible outputs.

* Add a readme.
Author: Laurent Mazare
Date: 2023-10-18 10:58:05 +01:00 (committed via GitHub)
Parent: 662c186fd5
Commit: 767a6578f1

3 changed files with 64 additions and 13 deletions
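For context, ALiBi (the scheme these fixes target) replaces positional embeddings with a per-head linear penalty added to the attention logits, proportional to the distance between query and key positions. Below is a minimal sketch of the MPT-style slope and bias computation in plain Rust; it is illustrative only (the actual fix operates on candle tensors), with `alibi_bias_max = 8` matching MPT's default.

```rust
/// Per-head ALiBi slopes: 2^(-k * alibi_bias_max / n) for k = 1..=n, where n
/// is n_heads rounded up to a power of two. For non-power-of-two head counts,
/// MPT's reference code takes the odd-indexed slopes first, then the even ones.
fn alibi_slopes(n_heads: usize, alibi_bias_max: f32) -> Vec<f32> {
    let n = n_heads.next_power_of_two();
    let slopes: Vec<f32> = (1..=n)
        .map(|k| 2f32.powf(-(k as f32) * alibi_bias_max / n as f32))
        .collect();
    if n == n_heads {
        slopes
    } else {
        slopes
            .iter()
            .skip(1)
            .step_by(2) // odd positions: s2, s4, ...
            .chain(slopes.iter().step_by(2)) // even positions: s1, s3, ...
            .copied()
            .take(n_heads)
            .collect()
    }
}

/// Bias added to one head's attention logits: slope * (j - (L - 1)) for key
/// position j. Since softmax is invariant to adding a constant per query row,
/// this key-only form is equivalent to the relative slope * (j - i) under a
/// causal mask.
fn alibi_bias(slope: f32, seq_len: usize) -> Vec<f32> {
    (0..seq_len)
        .map(|j| slope * (j as f32 - (seq_len as f32 - 1.0)))
        .collect()
}

fn main() {
    // For 8 heads the slopes are 1/2, 1/4, ..., 1/256.
    let slopes = alibi_slopes(8, 8.0);
    println!("slopes: {slopes:?}");
    println!("bias (head 0, seq_len 4): {:?}", alibi_bias(slopes[0], 4));
}
```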

@@ -0,0 +1,45 @@
# candle-replit-code: code completion specialized model.
[replit-code-v1_5-3b](https://huggingface.co/replit/replit-code-v1_5-3b) is a
language model specialized for code completion. This model uses 3.3B parameters
in `bfloat16` (so the GPU version will only work on recent NVIDIA cards).
## Running an example
```bash
cargo run --example replit-code --release -- --prompt 'def fibonacci(n): '
```
This produces the following output; note that the model does not actually
generate the Fibonacci series correctly.
```
def fibonacci(n): # write Fibonacci series up to n
"""Print a Fibonacci series up to n."""
assert type(n) == int, "n must be an integer"
if (type(fib_list)==None or len==0 ):
fib_list = [1]
for i in range((len-2)): # start at 2nd element of list and go until end.
n += 1
print("Fibonacci number",n,"is:",i)
def main():
"""Call the functions."""
userInput=input('Enter a positive integer: ')
fibonacci(userInput)
if __name__ == '__main__': # only run if this file is called directly.
print("This program prints out Fibonacci numbers.")
main()
```
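If your GPU does not support `bfloat16`, the example can likely be run on the
CPU instead; this assumes the example follows the usual candle `--cpu` flag
convention (check `--help` for the flags it actually exposes):
```bash
cargo run --example replit-code --release -- --cpu --prompt 'def fibonacci(n): '
```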

@@ -139,7 +139,7 @@ struct Args {
    seed: u64,

    /// The length of the sample to generate (in tokens).
-   #[arg(long, short = 'n', default_value_t = 100)]
+   #[arg(long, short = 'n', default_value_t = 1000)]
    sample_len: usize,

    #[arg(long)]
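Since the field above is declared with `short = 'n'` (and clap derives the
long flag `--sample-len` from the field name), the new 1000-token default can
still be overridden per run, e.g.:
```bash
cargo run --example replit-code --release -- --prompt 'def fibonacci(n): ' -n 200
```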