# candle-t5

## Encoder-decoder example

```bash
$ cargo run --example t5 -- --model-id "t5-small" --prompt "translate to German: A beautiful candle." --decode
...
Running on CPU, to run on GPU, build this example with `--features cuda`
 Eine schöne Kerze.
9 tokens generated (2.42 token/s)
```
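
With `--decode`, the example runs the full encoder-decoder stack and greedily samples from the language-model head. The sketch below shows roughly what such a loop looks like on top of `candle_transformers::models::t5`; the hub file names, the `use_cache` flag, and the assumption that `decode` returns only the last position's logits are unverified details, not the example's actual code.

```rust
use anyhow::Result;
use candle_core::{DType, Device, Tensor};
use candle_nn::VarBuilder;
use candle_transformers::models::t5::{Config, T5ForConditionalGeneration};
use hf_hub::api::sync::Api;
use tokenizers::Tokenizer;

fn main() -> Result<()> {
    let device = Device::Cpu;

    // Fetch config, tokenizer and weights from the Hugging Face hub.
    // Assumption: the repo exposes `model.safetensors` (t5-small may require a
    // safetensors conversion revision).
    let repo = Api::new()?.model("t5-small".to_string());
    let mut config: Config =
        serde_json::from_str(&std::fs::read_to_string(repo.get("config.json")?)?)?;
    // Assumption: `Config` exposes a `use_cache` flag; disabling it lets us
    // re-feed the whole decoder prefix each step (simplest, not fastest).
    config.use_cache = false;
    let tokenizer =
        Tokenizer::from_file(repo.get("tokenizer.json")?).map_err(anyhow::Error::msg)?;
    let vb = unsafe {
        VarBuilder::from_mmaped_safetensors(&[repo.get("model.safetensors")?], DType::F32, &device)?
    };
    let mut model = T5ForConditionalGeneration::load(vb, &config)?;

    // Encode the prompt once, then decode greedily until EOS.
    let prompt = "translate to German: A beautiful candle.";
    let tokens = tokenizer.encode(prompt, true).map_err(anyhow::Error::msg)?;
    let input_ids = Tensor::new(tokens.get_ids(), &device)?.unsqueeze(0)?;
    let encoder_output = model.encode(&input_ids)?;

    let mut output_ids: Vec<u32> = vec![0]; // T5 starts decoding from the pad token (id 0).
    for _ in 0..64 {
        let decoder_ids = Tensor::new(output_ids.as_slice(), &device)?.unsqueeze(0)?;
        // Assumption: `decode` returns the logits for the last position, shape (1, vocab).
        let logits = model.decode(&decoder_ids, &encoder_output)?.squeeze(0)?;
        let next = logits.argmax(0)?.to_scalar::<u32>()?;
        if next == 1 {
            break; // T5's EOS token id is 1.
        }
        output_ids.push(next);
    }
    let text = tokenizer
        .decode(&output_ids[1..], true)
        .map_err(anyhow::Error::msg)?;
    println!("{text}");
    Ok(())
}
```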

## Sentence embedding example

```bash
$ cargo run --example t5 -- --model-id "t5-small" --prompt "A beautiful candle."
...
[[[ 0.0515, -0.0541, -0.0761, ..., -0.0392,  0.1511, -0.0265],
  [-0.0974,  0.0998, -0.1659, ..., -0.2450,  0.1738, -0.0164],
  [ 0.0624, -0.1024,  0.0430, ..., -0.1388,  0.0564, -0.2962],
  [-0.0389, -0.1173,  0.0026, ...,  0.1064, -0.1065,  0.0990],
  [ 0.1300,  0.0027, -0.0326, ...,  0.0026, -0.0317,  0.0851]]]
Tensor[[1, 5, 512], f32]
Took 303.766583ms
```
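
Without `--decode`, the example runs only the encoder and prints its last hidden states, which is the `(1, 5, 512)` tensor above. Below is a minimal sketch of that path, assuming the same hub layout as in the previous sketch and the `T5EncoderModel` loader from `candle_transformers`:

```rust
use anyhow::Result;
use candle_core::{DType, Device, Tensor};
use candle_nn::VarBuilder;
use candle_transformers::models::t5::{Config, T5EncoderModel};
use hf_hub::api::sync::Api;
use tokenizers::Tokenizer;

fn main() -> Result<()> {
    let device = Device::Cpu;

    // Same hub loading assumptions as in the decoding sketch above.
    let repo = Api::new()?.model("t5-small".to_string());
    let config: Config =
        serde_json::from_str(&std::fs::read_to_string(repo.get("config.json")?)?)?;
    let tokenizer =
        Tokenizer::from_file(repo.get("tokenizer.json")?).map_err(anyhow::Error::msg)?;
    let vb = unsafe {
        VarBuilder::from_mmaped_safetensors(&[repo.get("model.safetensors")?], DType::F32, &device)?
    };

    // Encoder only: the output has shape (batch, seq_len, d_model), here (1, 5, 512).
    let mut model = T5EncoderModel::load(vb, &config)?;
    let tokens = tokenizer
        .encode("A beautiful candle.", true)
        .map_err(anyhow::Error::msg)?;
    let input_ids = Tensor::new(tokens.get_ids(), &device)?.unsqueeze(0)?;
    let embeddings = model.forward(&input_ids)?;
    println!("{embeddings}");
    Ok(())
}
```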