candle-gemma: 2b and 7b LLMs from Google DeepMind

Gemma is a collection of lightweight open models published by Google DeepMind, available in 2b and 7b variants.

In order to use the example below, you have to accept the license on the Gemma repo on the HuggingFace Hub and set up your access token via the huggingface-cli login command.
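
One way to do this is via the huggingface_hub Python package, which ships the huggingface-cli tool; after logging in, the stored token should be picked up automatically from the standard HuggingFace token location when the example downloads the weights:

$ pip install huggingface_hub
$ huggingface-cli login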

Running the example

$ cargo run --example gemma --release -- --prompt "fn count_primes(max_n: usize)"
fn count_primes(max_n: usize) -> usize {
    let mut primes = vec![true; max_n];
    for i in 2..=max_n {
        if primes[i] {
            for j in i * i..max_n {
                primes[j] = false;
            }
        }
    }
    primes.len()
}
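
The example can also be built with GPU support and pointed at a different checkpoint. As a rough sketch, a run against one of the code-gemma variants could look like the command below; the --which flag and the code-7b value are assumptions here, so check cargo run --example gemma -- --help for the options available in your candle version.

$ cargo run --example gemma --features cuda --release -- \
    --which code-7b \
    --prompt "fn count_primes(max_n: usize)"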