- Uses an `Initializer` trait instead.
- Allows more decoupling between init and load, which are very different
operations.
- Allows more decoupling between backends (safetensors, npy, ggml, etc.).
This is a minimum viable change.
There are three kinds of objects with various relations between them.
The `Model`:
These are types like `Llama`, `Linear`, `Rms`, ...
They contain tensors (and possibly other things) and are mainly used to
call `forward`.
They should not own any internals such as RNG state or the actual
shapes of the tensors (the tensors already own those).
The `Initializer`:
This is a struct containing the information necessary to generate new
random tensors. It typically owns a random generator and produces
different kinds of random tensors depending on the kind of `Model`
being initialized.
It does not own any information about the `Model` itself.
The default init stores the `Vec<Var>` for now, in order to pass it to
the optimizer.
The `Config`:
This is the information necessary to link the `Model` and the
`Initializer`. It is another struct that accompanies the implementation
of the initialization.
Typical information is the shape of the tensors for a simple `Model`,
the `eps` for RMS, or the `use_bias` boolean indicating whether the
linear layer should have a bias.
This should remove all need for `VarBuilder` during initialization, and
allow removing every initialization bit within `VarBuilder`.
Modifying `llama2-c` to follow this initialization scheme is intentionally
left for a follow-up, to keep the current PR small.
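To make the relation between the three kinds of objects concrete, here is a minimal sketch. It uses `Vec<f32>` as a stand-in for real tensors, and the concrete names (`DefaultInit`, `LinearConfig`) and the exact trait signature are illustrative rather than the actual API.

```rust
use rand::Rng;

// Stand-in for a real tensor type, just to keep the sketch self-contained.
type Tensor = Vec<f32>;

// The `Model`: owns its tensors, is used to call `forward`, nothing else.
struct Linear {
    weight: Tensor,
    bias: Option<Tensor>,
}

// The `Config`: companion data linking the `Model` to its initialization.
struct LinearConfig {
    in_dim: usize,
    out_dim: usize,
    use_bias: bool,
}

// The `Initializer`: owns the RNG state and produces fresh random tensors;
// it owns nothing about any particular `Model` instance.
trait Initializer<M> {
    type Config;
    fn init(&mut self, cfg: &Self::Config) -> M;
}

struct DefaultInit {
    rng: rand::rngs::ThreadRng,
}

impl Initializer<Linear> for DefaultInit {
    type Config = LinearConfig;

    fn init(&mut self, cfg: &Self::Config) -> Linear {
        let weight = (0..cfg.in_dim * cfg.out_dim)
            .map(|_| self.rng.gen_range(-0.1f32..0.1f32))
            .collect();
        let bias = cfg.use_bias.then(|| vec![0.0; cfg.out_dim]);
        Linear { weight, bias }
    }
}
```

With this split, creating a fresh `Linear` goes through the initializer, while loading from safetensors/npy/ggml can take a completely separate path, which is the decoupling mentioned above.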
* Start adding a stable-diffusion example.
* Proper computation of the causal mask (a small sketch follows this list).
* Add the chunk operation.
* Work in progress: port the attention module.
* Add some dummy modules for conv2d and group-norm, get the attention module to compile.
* Re-enable the 2d convolution.
* Add the embeddings module.
* Add the resnet module.
* Add the unet blocks.
* Add the unet.
* And add the variational auto-encoder.
* Use the pad function from utils.
* Move the vision datasets to a separate crate.
* Move the batcher bits.
* Update the readme.
* Move the tiny-stories bits.
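As a side note on the causal-mask item above: the computation ensures that position `i` can only attend to positions `j <= i`; masked entries are then typically set to a large negative value before the softmax. A framework-free sketch of the mask itself (the real code would build this as a tensor rather than nested `Vec`s):

```rust
// Build a (t, t) causal mask: entry (i, j) is true when j > i, i.e. the
// position must be masked out so queries cannot attend to the future.
fn causal_mask(t: usize) -> Vec<Vec<bool>> {
    (0..t)
        .map(|i| (0..t).map(|j| j > i).collect())
        .collect()
}
```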
---------
Co-authored-by: Jane Doe <jane.doe@example.org>
* Rework the var-builder to handle initializations.
* Add some helper functions for layer creation.
* Improve the layer initializations.
* Get initialized variables.
* Precompute the rotary embeddings when training llamas.
* Add the `nn::optim` module and some conversion traits.
* Add the `backward_step` function for SGD (sketched after this list).
* Get the SGD optimizer to work and add a test.
* Make the test slightly simpler.
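To illustrate the `backward_step` item above: the optimizer first runs the backward pass to get one gradient per tracked variable, then applies a plain SGD update. A minimal sketch of that update, with `Vec<f32>` standing in for the real variable and gradient tensors and the function name being illustrative:

```rust
// Plain SGD update: for each variable, subtract the learning rate times its
// gradient. In the real optimizer this runs right after the backward pass
// has produced one gradient per tracked variable.
fn sgd_step(vars: &mut [Vec<f32>], grads: &[Vec<f32>], lr: f32) {
    for (var, grad) in vars.iter_mut().zip(grads) {
        for (v, g) in var.iter_mut().zip(grad) {
            *v -= lr * g;
        }
    }
}
```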