* added chatGLM readme
* changed wording in readme
* added readme for chinese-clip
* added readme for convmixer
* added readme for custom ops
* added readme for efficientnet
* added readme for llama
* added readme to mnist-training
* added readme to musicgen
* added readme to quantized-phi
* added readme to starcoder2
* added readme to whisper-microphone
* added readme to yi
* added readme to yolo-v3
* added space to example in glm4 readme
* fixed mamba example readme to run mamba instead of mamba-minimal
* removed slash escape character
* changed moondream image to yolo-v8 example image
* added procedure for making the reinforcement-learning example work with a virtual environment on my machine
* added simple one-line summaries to the example readmes that were missing them
* changed non-existent image to yolo example's bike.jpg
* added backslash to sam command
* removed trailing - from siglip
* added SoX to silero-vad example readme
* replaced procedure for uv on mac with warning that uv isn't currently compatible with pyo3
* added example to falcon readme
* added --which arg to stella-en-v5 readme
* fixed image path in vgg readme
* fixed the image path in the vit readme
* Update README.md
* Update README.md
* Update README.md
---------
Co-authored-by: Laurent Mazare <laurent.mazare@gmail.com>
* Start updating to cudarc 0.14.
* Adapt a couple more things.
* And a couple more fixes.
* More tweaks.
* And a couple more fixes.
* Bump the major version number.
* Proper module system for the cuda kernels.
* Proper ptx loading.
* Launch the sort kernel.
* Custom op.
* Start using the builder pattern (see the launch sketch after this list).
* More builder.
* More builder.
* Get candle-core to compile.
* Get the tests to pass.
* Get candle-nn to work too.
* Support for custom cuda functions.
* cudnn fixes.
* Get flash attn to run.
* Switch the crate versions to be alpha.
* Bump the ug dependency.
* Boilerplate for the quantized cuda support.
* More basic cuda support.
* More cuda quantization (quantize on cpu for now).
* Add the dequantization bit.
* Start adding some dedicated cuda kernels from llama.cpp.
* Move the kernel code.
* Start interfacing with the kernel.
* Tweak the kernel launch params.
* Bugfix for quantized metal.
* Fix some clippy lints.
* Tweak the launch parameters.
* Tweak cuda basics to perform a quantized matmul.
* Perform the dequantization on the cpu + use cublas for matmul (see the dequantization sketch after this list).
* Add the dequantization kernel.
* Test the qmatmul.
* More kernels.
* Matmul-vec kernel.
* Add a couple kernels.
* More dequantization kernels.
* Use cfg to separate benchmark results based on features (see the benchmark sketch after this list)
* Add metal where_cond for f16 and bf16. Add benchmark
* Remove allow pragma
* Avoid some unnecessary returns.
* Improve benchmarks layout
* Updated feature separated benchmarks
---------
Co-authored-by: Laurent <laurent.mazare@gmail.com>
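The cudarc 0.14 migration above swaps the old launch API for a builder. A minimal sketch of the new flow, assuming cudarc 0.14's `CudaContext` / `launch_builder` surface; the `scale` kernel is a toy, not one of candle's kernels:

```rust
use cudarc::driver::{CudaContext, LaunchConfig, PushKernelArg};
use cudarc::nvrtc::compile_ptx;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // One context per device; streams and modules hang off it in 0.14.
    let ctx = CudaContext::new(0)?;
    let stream = ctx.default_stream();

    // Compile a toy kernel with nvrtc, load the module, look up the function.
    let ptx = compile_ptx(
        "extern \"C\" __global__ void scale(float *x, float v, int n) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n) x[i] *= v;
        }",
    )?;
    let module = ctx.load_module(ptx)?;
    let func = module.load_function("scale")?;

    // Device buffer, then the builder: push arguments one by one and launch.
    let mut data = stream.memcpy_stod(&[1.0f32, 2.0, 3.0, 4.0])?;
    let mut builder = stream.launch_builder(&func);
    builder.arg(&mut data);
    builder.arg(&2.0f32);
    builder.arg(&4i32);
    unsafe { builder.launch(LaunchConfig::for_num_elems(4))? };

    println!("{:?}", stream.memcpy_dtov(&data)?); // [2.0, 4.0, 6.0, 8.0]
    Ok(())
}
```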
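The quantized commits stage the cuda path by dequantizing on the host and leaving the matmul to cublas until the dedicated kernels land. A minimal sketch of that staging idea; `BlockQ8` is an illustrative 32-element block, not candle's actual ggml layout:

```rust
/// Illustrative quantization block: one f32 scale per 32 signed bytes.
/// The real ggml formats (q4_0, q8_0, ...) pack bits more tightly.
struct BlockQ8 {
    scale: f32,
    qs: [i8; 32],
}

/// CPU dequantization: expand blocks back into dense f32 values. The dense
/// matrix then goes through a regular cublas matmul; fused cuda kernels can
/// replace this fallback later without changing callers.
fn dequantize(blocks: &[BlockQ8]) -> Vec<f32> {
    let mut out = Vec::with_capacity(blocks.len() * 32);
    for b in blocks {
        out.extend(b.qs.iter().map(|&q| q as f32 * b.scale));
    }
    out
}

fn main() {
    let block = BlockQ8 { scale: 0.5, qs: [2; 32] };
    assert_eq!(dequantize(&[block])[0], 1.0);
}
```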
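For the feature-separated benchmarks, the idea is to key results on the backend selected at compile time. A hedged criterion-style sketch; the benchmark name and the `where_cond` setup are placeholders:

```rust
use criterion::{criterion_group, criterion_main, Criterion};

fn bench_where_cond(c: &mut Criterion) {
    // Name the benchmark after the active backend so cuda, metal and cpu
    // builds land in separate result buckets instead of mixing together.
    let backend = if cfg!(feature = "cuda") {
        "cuda"
    } else if cfg!(feature = "metal") {
        "metal"
    } else {
        "cpu"
    };
    c.bench_function(&format!("where_cond_{backend}"), |b| {
        b.iter(|| {
            // tensor setup and the where_cond call under test would go here
        })
    });
}

criterion_group!(benches, bench_where_cond);
criterion_main!(benches);
```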
* Build example kernels.
* Add some sample custom kernel.
* Get the example kernel to compile.
* Add some cuda code.
* More cuda custom op.
* More cuda custom ops.
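A sketch of where the example kernels plug in, loosely following the `CustomOp1` trait from the candle book (exact signatures may differ between versions); `MulAdd` is a made-up toy op, and the `cuda_fwd` counterpart, omitted here, is where the compiled example kernel would be launched:

```rust
use candle_core::{CpuStorage, CustomOp1, Device, Layout, Result, Shape, Tensor, WithDType};

// Toy elementwise op (x -> 2x + 1); the cuda side would launch the
// compiled example kernel instead of looping on the host.
struct MulAdd;

impl CustomOp1 for MulAdd {
    fn name(&self) -> &'static str {
        "mul-add"
    }

    // CPU fallback: read contiguous f32 data, apply the op, wrap the result.
    fn cpu_fwd(&self, storage: &CpuStorage, layout: &Layout) -> Result<(CpuStorage, Shape)> {
        let slice = storage.as_slice::<f32>()?;
        let src = match layout.contiguous_offsets() {
            Some((o1, o2)) => &slice[o1..o2],
            None => candle_core::bail!("input has to be contiguous"),
        };
        let dst: Vec<f32> = src.iter().map(|&x| 2.0 * x + 1.0).collect();
        Ok((WithDType::to_cpu_storage_owned(dst), layout.shape().clone()))
    }
}

fn main() -> Result<()> {
    let t = Tensor::new(&[1f32, 2., 3.], &Device::Cpu)?;
    let out = t.apply_op1(MulAdd)?; // 2x + 1 applied elementwise
    println!("{out}");
    Ok(())
}
```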