Mirror of https://github.com/huggingface/candle.git, synced 2025-06-17 11:08:52 +00:00

- Uses an `Initializer` trait instead.
- Allows more decoupling between init and load, which are very different ops.
- Allows more decoupling between backends (safetensors, npy, ggml, etc.).

This is a minimum viable change. There are 3 kinds of objects with various relations.

The `Model`: this is `Llama`, `Linear`, `Rms`, ... They contain tensors (and possibly other things) and are mainly used to call `forward`. They should have no ownership of any internals such as RNG state or the actual shapes of the tensors (the tensors already own those).

The `Initializer`: a struct containing the information necessary to generate new random tensors. It should typically own a random generator and generate different kinds of random tensors depending on which kind of `Model` it is initializing. It does not own any information about the `Model` itself. The default init stores a `Vec<Var>` for now, in order to send it to the optimizer.

The `Config`: the information needed to link the `Model` and the `Initializer`. It is another struct, a companion of the initialization implementation. Typical information is the shape of the tensors for a simple `Model`, the `eps` for RMS, or the `use_bias` boolean deciding whether the linear layer should have a bias.

This should remove all need for `VarBuilder` during initialization and allow removing every initialization bit within `VarBuilder`. Modifying `llama2-c` to follow this initialization scheme is deliberately left for a follow-up, to keep the current PR small.
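The example file below only shows the consumer side (`DefaultInit` and `ModelInitializer` from `candle_nn::init`); the actual trait shape is not spelled out in this description. The following is a rough, hypothetical sketch of how the three pieces could fit together; every name and signature except the candle core types is invented for illustration and may differ from the PR.

// Hypothetical sketch only: `SketchInit`, `SketchInitModel` and `LinearConfig`
// are made up; the real trait in this PR is `candle_nn::init::ModelInitializer`
// (see the example file below).
use candle::{DType, Device, Result, Shape, Var};

// The `Initializer`: owns the dtype/device (and, in a real implementation,
// the RNG state) needed to create fresh tensors. It knows nothing about the
// `Model` being initialized.
pub struct SketchInit {
    dtype: DType,
    device: Device,
    // The created variables are kept so they can be handed to an optimizer.
    vars: Vec<Var>,
}

impl SketchInit {
    pub fn new(dtype: DType, device: Device) -> Self {
        Self { dtype, device, vars: vec![] }
    }

    // Create a new variable and remember it. Zeros are a placeholder; a real
    // initializer would sample from a distribution chosen per tensor kind.
    pub fn var<S: Into<Shape>>(&mut self, shape: S) -> Result<Var> {
        let v = Var::zeros(shape, self.dtype, &self.device)?;
        self.vars.push(v.clone());
        Ok(v)
    }

    pub fn vars(&self) -> &[Var] {
        &self.vars
    }
}

// The `Config`: the glue between a `Model` and the `Initializer`. For a linear
// layer this is presumably `((in_dim, out_dim), use_bias)`, matching the
// `((2, 1), true)` tuple used in the example below.
pub type LinearConfig = ((usize, usize), bool);

// The `Model` side: how to build a model from an initializer plus its config.
// The model itself only owns its tensors.
pub trait SketchInitModel: Sized {
    type Config;
    fn init(init: &mut SketchInit, config: Self::Config) -> Result<Self>;
}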
The example file: 36 lines · 1.3 KiB · Rust
use candle::{DType, Device, Result, Tensor};
use candle_nn::init::{DefaultInit, ModelInitializer};
use candle_nn::{AdamW, Linear, Module, Optimizer, ParamsAdamW};

fn gen_data() -> Result<(Tensor, Tensor)> {
    // Generate some sample linear data.
    let w_gen = Tensor::new(&[[3f32, 1.]], &Device::Cpu)?;
    let b_gen = Tensor::new(-2f32, &Device::Cpu)?;
    let gen = Linear::new(w_gen, Some(b_gen));
    let sample_xs = Tensor::new(&[[2f32, 1.], [7., 4.], [-4., 12.], [5., 8.]], &Device::Cpu)?;
    let sample_ys = gen.forward(&sample_xs)?;
    Ok((sample_xs, sample_ys))
}

fn main() -> Result<()> {
    let (sample_xs, sample_ys) = gen_data()?;

    // Use backprop to run a linear regression between samples and get the coefficients back.
    // Previously this went through a `VarMap`/`VarBuilder` pair:
    // let varmap = VarMap::new();
    // let vb = VarBuilder::from_varmap(&varmap, DType::F32, &Device::Cpu);
    let mut initializer = DefaultInit::new(DType::F32, Device::Cpu);
    let model = Linear::init(&mut initializer, ((2, 1), true))?;
    let params = ParamsAdamW {
        lr: 0.1,
        ..Default::default()
    };
    // The initializer keeps track of the created `Var`s so they can be handed
    // over to the optimizer.
    let mut opt = AdamW::new(initializer.vars().to_vec(), params)?;
    for step in 0..10000 {
        let ys = model.forward(&sample_xs)?;
        let loss = ys.sub(&sample_ys)?.sqr()?.sum_all()?;
        opt.backward_step(&loss)?;
        println!("{step} {}", loss.to_vec0::<f32>()?);
    }
    Ok(())
}
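As a usage note, the `use_bias` boolean mentioned in the description is what decides whether the linear layer gets a bias tensor, so a bias-free layer would presumably just flip that flag in the config tuple. A hypothetical variant of the initialization above (same calls as in the example, only the config differs):

use candle::{DType, Device, Result};
use candle_nn::init::{DefaultInit, ModelInitializer};
use candle_nn::Linear;

// Hypothetical variant: same initializer and config layout as in the example
// above, but with `use_bias` set to false so no bias tensor is created.
fn linear_without_bias() -> Result<Linear> {
    let mut initializer = DefaultInit::new(DType::F32, Device::Cpu);
    Linear::init(&mut initializer, ((2, 1), false))
}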