mirror of
https://github.com/huggingface/candle.git
synced 2025-06-18 11:37:11 +00:00
[RFC] Start removal of VarBuilder
- Uses an `Initializer` trait instead.
- Allows more decoupling between init and load, which are very different operations.
- Allows more decoupling between backends (safetensors, npy, ggml, etc.).

This is a minimum viable change. There are 3 kinds of objects with various relations:

The `Model`: this is `Llama`, `Linear`, `Rms`, ... These contain tensors (and possibly other things) and are mainly used to call `forward`. They should not own any internals like RNG state or the actual shapes of the tensors (the tensors already own those).

The `Initializer`: this is a struct containing the information necessary to generate new random tensors. Typically it should own a random generator, and generate different kinds of random tensors depending on what kind of `Model` it is initializing. It does not own any information about the `Model` itself. The default init stores a `Vec<Var>` for now, in order to send it to the optimizer.

The `Config`: this is the information needed to link the `Model` and the `Initializer`. It is a companion struct to the implementation of the initialization. Typical contents are the tensor shapes for a simple `Model`, the `eps` for RMS, or the `use_bias` boolean saying whether the linear layer should have a bias.

This should remove all need for `VarBuilder` during initialization, and allow removing every initialization bit within `VarBuilder`. Modifying `llama2-c` to follow this initialization scheme is deliberately left for a follow-up, to keep the current PR small.
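The relation between the three objects can be sketched in plain Rust with toy stand-ins (none of these types are candle's actual API; `ZeroInit` and the `Vec<f32>` "tensor" are hypothetical placeholders used only to show the ownership split):

```rust
/// The `Config`: data linking a `Model` to its `Initializer`
/// (tensor shapes, `use_bias`, ...). Toy stand-in, not candle's API.
struct LinearConfig {
    in_dim: usize,
    out_dim: usize,
    use_bias: bool,
}

/// The `Initializer`: owns generation state, knows how to create new
/// tensors, and knows nothing about any particular `Model`.
trait Initializer {
    type Tensor;
    fn new_tensor(&mut self, shape: (usize, usize)) -> Self::Tensor;
}

/// The `Model`: owns only its tensors.
struct Linear<T> {
    weight: T,
    bias: Option<T>,
}

impl<T> Linear<T> {
    fn init<I: Initializer<Tensor = T>>(init: &mut I, cfg: &LinearConfig) -> Self {
        let weight = init.new_tensor((cfg.out_dim, cfg.in_dim));
        let bias = cfg.use_bias.then(|| init.new_tensor((cfg.out_dim, 1)));
        Linear { weight, bias }
    }
}

/// A toy initializer producing zero-filled Vec-backed "tensors".
/// Like the default init described above, it collects every created
/// variable so they can later be handed to an optimizer.
struct ZeroInit {
    vars: Vec<Vec<f32>>,
}

impl Initializer for ZeroInit {
    type Tensor = Vec<f32>;
    fn new_tensor(&mut self, shape: (usize, usize)) -> Vec<f32> {
        let t = vec![0.0; shape.0 * shape.1];
        self.vars.push(t.clone());
        t
    }
}

fn main() {
    let cfg = LinearConfig { in_dim: 2, out_dim: 1, use_bias: true };
    let mut init = ZeroInit { vars: Vec::new() };
    let model = Linear::init(&mut init, &cfg);
    // The model owns its tensors; the initializer owns the var list.
    assert_eq!(model.weight.len(), 2);
    assert!(model.bias.is_some());
    assert_eq!(init.vars.len(), 2);
    println!("ok");
}
```

The point of the split is that a loading backend (safetensors, npy, ggml, ...) could provide a different `Initializer` implementation that reads tensors from disk instead of generating them, without the `Model` code changing.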
```diff
@@ -1,5 +1,6 @@
 use candle::{DType, Device, Result, Tensor};
-use candle_nn::{linear, AdamW, Linear, ParamsAdamW, VarBuilder, VarMap};
+use candle_nn::init::{DefaultInit, ModelInitializer};
+use candle_nn::{AdamW, Linear, ParamsAdamW};
 
 fn gen_data() -> Result<(Tensor, Tensor)> {
     // Generate some sample linear data.
@@ -15,14 +16,15 @@ fn main() -> Result<()> {
     let (sample_xs, sample_ys) = gen_data()?;
 
     // Use backprop to run a linear regression between samples and get the coefficients back.
-    let varmap = VarMap::new();
-    let vb = VarBuilder::from_varmap(&varmap, DType::F32, &Device::Cpu);
-    let model = linear(2, 1, vb.pp("linear"))?;
+    // let varmap = VarMap::new();
+    // let vb = VarBuilder::from_varmap(&varmap, DType::F32, &Device::Cpu);
+    let mut initializer = DefaultInit::new(DType::F32, Device::Cpu);
+    let model = Linear::init(&mut initializer, ((2, 1), true))?;
     let params = ParamsAdamW {
         lr: 0.1,
         ..Default::default()
     };
-    let mut opt = AdamW::new(varmap.all_vars(), params)?;
+    let mut opt = AdamW::new(initializer.vars().to_vec(), params)?;
     for step in 0..10000 {
         let ys = model.forward(&sample_xs)?;
         let loss = ys.sub(&sample_ys)?.sqr()?.sum_all()?;
```