- Uses an `Initializer` trait instead of `VarBuilder`.
- Allows more decoupling between init and load, which are very
  different ops.
- Allows more decoupling between backends (safetensors, npy, ggml,
  etc.).
This is a minimum viable change.
There are 3 kinds of objects with various relations between them.
The `Model`:
This is `Llama`, `Linear`, `Rms`...
They contain tensors (and possibly other things) and are mainly used
to call `forward`.
They should have no ownership of any internals like RNG state or the
actual shapes of the tensors (the tensors already own those).
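A minimal sketch of what such a `Model` could look like (hypothetical:
`Linear` here is only illustrative and assumes the in-repo `candle`
crate alias and its `Tensor` API):

```rust
use candle::{Result, Tensor};

// Sketch only: the model owns its tensors and nothing else, no RNG
// state and no shape bookkeeping beyond what the tensors carry.
struct Linear {
    weight: Tensor,
    bias: Option<Tensor>,
}

impl Linear {
    fn forward(&self, x: &Tensor) -> Result<Tensor> {
        let y = x.matmul(&self.weight.t()?)?;
        match &self.bias {
            Some(bias) => y.broadcast_add(bias),
            None => Ok(y),
        }
    }
}
```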
The `Initializer`:
This is a struct containing the information necessary to generate new
random tensors. Typically it should own a random generator and
generate different kinds of random tensors depending on the kind of
`Model` it is initializing.
It does not own any information about the `Model` itself.
The default init stores a `Vec<Var>` for now, in order to send the
variables to the optimizer.
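As a sketch, the default initializer might look something like this
(the field names and the `randn` helper are illustrative, not the
actual API of this PR):

```rust
use candle::{Device, Result, Tensor, Var};

// Sketch of a default initializer: it owns what is needed to create
// new random tensors, plus the `Var`s created so far (to be handed
// to the optimizer later).
struct DefaultInit {
    device: Device,
    vars: Vec<Var>,
}

impl DefaultInit {
    // Draws a fresh random tensor and records the backing `Var`.
    fn randn(&mut self, mean: f32, std: f32, shape: &[usize]) -> Result<Tensor> {
        let var = Var::randn(mean, std, shape, &self.device)?;
        let tensor = var.as_tensor().clone();
        self.vars.push(var);
        Ok(tensor)
    }
}
```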
The `Config`:
This is the information necessary to link the `Model` and the
`Initializer`. It is another struct, a companion of the
initialization implementation.
Typical information is the shape of the tensors for a simple `Model`,
the `eps` for RMS, or the `use_bias` boolean to decide whether the
linear layer should have a bias.
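Tying the three objects together, one plausible shape for the
`Initializer` trait is sketched below. This is an assumption, not the
PR's exact signature; `LinearConfig` and the init constants are made
up, and it reuses the `Linear`/`DefaultInit` sketches above:

```rust
use candle::Result;

// Hypothetical companion config for `Linear`: shapes plus `use_bias`.
struct LinearConfig {
    in_dim: usize,
    out_dim: usize,
    use_bias: bool,
}

// One plausible shape for the trait: the config describes what to
// build, the initializer provides the randomness.
trait Initializer<M> {
    type Config;
    fn init(&mut self, config: &Self::Config) -> Result<M>;
}

impl Initializer<Linear> for DefaultInit {
    type Config = LinearConfig;

    fn init(&mut self, config: &LinearConfig) -> Result<Linear> {
        let weight = self.randn(0., 0.02, &[config.out_dim, config.in_dim])?;
        let bias = if config.use_bias {
            Some(self.randn(0., 0.02, &[config.out_dim])?)
        } else {
            None
        };
        Ok(Linear { weight, bias })
    }
}
```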
This should remove all need for `VarBuilder` during initialization,
and allow removing every initialization bit from `VarBuilder`.
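A usage sketch under the same assumptions as above, showing that no
`VarBuilder` is involved at any point:

```rust
use candle::{DType, Device, Result, Tensor};

fn main() -> Result<()> {
    let mut init = DefaultInit { device: Device::Cpu, vars: Vec::new() };
    let config = LinearConfig { in_dim: 768, out_dim: 768, use_bias: true };
    let linear = init.init(&config)?;
    // `init.vars` now holds the trainable `Var`s for the optimizer.
    let x = Tensor::zeros((1, 768), DType::F32, &Device::Cpu)?;
    let _y = linear.forward(&x)?;
    Ok(())
}
```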
Modifying `llama2-c` to follow this initialization scheme is
deliberately left for a follow-up, to keep the current PR rather
small.