Implement T5 decoding (#864)

* Load t5 decoder

* Run enc, dec, and lm head, but no cross attn

* Cross-attention over key_value_states

* New arg for decoder input ids

* Add mask, don't forward position biases through decoder

* Update t5 examples

* Clippy + rustfmt
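The bullets above amount to the standard encoder-decoder generation loop: encode the prompt once, then repeatedly run the decoder (with cross-attention over the encoder's hidden states) and the LM head, picking the argmax token until end-of-sequence. A minimal runnable sketch of that greedy loop, with a dummy `decoder_step` closure standing in for the real candle T5 decoder and LM head (the helper name and token ids here are illustrative, not the crate's API):

```rust
// Hypothetical sketch of the greedy decoding loop this commit enables.
// `decoder_step` is a stand-in: a real implementation would run the T5
// decoder with cross-attention over the encoder output and take the
// argmax over the LM head logits.
fn greedy_decode() -> Vec<u32> {
    let eos_token: u32 = 1; // T5 uses id 1 for </s>
    let decoder_start: u32 = 0; // T5 starts decoding from the pad token (id 0)

    // Dummy "model": returns a next-token id given the ids decoded so far.
    let decoder_step = |ids: &[u32]| -> u32 {
        if ids.len() < 4 { ids.len() as u32 + 10 } else { eos_token }
    };

    let mut output_ids = vec![decoder_start];
    for _ in 0..32 {
        // Each step feeds the full decoder prefix; the new decoder input ids
        // argument and mask from this commit serve exactly this role.
        let next = decoder_step(&output_ids);
        output_ids.push(next);
        if next == eos_token {
            break;
        }
    }
    output_ids
}

fn main() {
    println!("{:?}", greedy_decode()); // [0, 11, 12, 13, 1]
}
```

In the real example the loop also detokenizes each new id and tracks elapsed time to report tokens per second, as shown in the README output below.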
Author: Juarez Bochi
Date: 2023-09-15 13:05:12 -07:00
Committed by: GitHub
parent c2007ac88f
commit 3e49f8fce5
3 changed files with 260 additions and 61 deletions


@ -1,17 +1,25 @@
# candle-t5

Generates text or sentence embeddings using a T5 model.

## Encoder-decoder example:
```bash
$ cargo run --example t5 -- --model-id t5-large --prompt 'how tall is obama' --n 1
Loaded and encoded 2.014244792s
[[[-0.3174, -0.1462, 0.0065, ..., -0.0579, -0.0581, 0.1387],
[-0.2905, -0.1945, -0.0685, ..., -0.2457, -0.5137, -0.1760],
[-0.0591, -0.0213, -0.0241, ..., -0.0210, 0.0491, -0.0300],
...
[-0.4333, 0.0027, -0.0609, ..., 0.3069, -0.2252, 0.3306],
[-0.1458, 0.1323, -0.0138, ..., 0.3000, -0.4550, -0.0384],
[ 0.0397, 0.0485, -0.2373, ..., 0.2578, -0.2650, -0.4356]]]
Tensor[[1, 9, 1024], f32]
Took 2.1363425s
```

```bash
$ cargo run --example t5 -- --model-id "t5-small" --prompt "translate to German: A beautiful candle." --decode
...
Running on CPU, to run on GPU, build this example with `--features cuda`
Eine schöne Kerze.
9 tokens generated (2.42 token/s)
```
## Sentence embedding example:
```bash
$ cargo run --example t5 -- --model-id "t5-small" --prompt "A beautiful candle."
...
[[[ 0.0515, -0.0541, -0.0761, ..., -0.0392, 0.1511, -0.0265],
[-0.0974, 0.0998, -0.1659, ..., -0.2450, 0.1738, -0.0164],
[ 0.0624, -0.1024, 0.0430, ..., -0.1388, 0.0564, -0.2962],
[-0.0389, -0.1173, 0.0026, ..., 0.1064, -0.1065, 0.0990],
[ 0.1300, 0.0027, -0.0326, ..., 0.0026, -0.0317, 0.0851]]]
Tensor[[1, 5, 512], f32]
Took 303.766583ms
```