candle-lora
Low rank adaptation (LoRA) for Candle.
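As background for the issues below: LoRA augments a frozen weight `W` (d x k) with trainable low-rank factors `B` (d x r) and `A` (r x k), so the effective forward pass is `W x + (alpha / r) * B (A x)`. This is a minimal plain-Rust sketch of that math, illustrative only; it does not use candle or the crate's actual API.

```rust
// Plain Vec-based matrices, purely to illustrate the LoRA forward pass.
type Mat = Vec<Vec<f32>>;

fn matmul(x: &Mat, y: &Mat) -> Mat {
    let (n, m, p) = (x.len(), y.len(), y[0].len());
    let mut out = vec![vec![0.0; p]; n];
    for i in 0..n {
        for k in 0..m {
            for j in 0..p {
                out[i][j] += x[i][k] * y[k][j];
            }
        }
    }
    out
}

// y = W x + (alpha / r) * B (A x): the frozen path plus the scaled
// low-rank adapter path.
fn lora_forward(w: &Mat, a: &Mat, b: &Mat, alpha: f32, x: &Mat) -> Mat {
    let r = a.len() as f32;
    let base = matmul(w, x);                 // frozen path: W x
    let low_rank = matmul(b, &matmul(a, x)); // adapter path: B (A x)
    base.iter()
        .zip(low_rank.iter())
        .map(|(b_row, l_row)| {
            b_row
                .iter()
                .zip(l_row.iter())
                .map(|(b_v, l_v)| b_v + (alpha / r) * l_v)
                .collect()
        })
        .collect()
}

fn main() {
    // d = 2, k = 2, r = 1 — toy shapes for illustration.
    let w = vec![vec![1.0, 0.0], vec![0.0, 1.0]]; // frozen W
    let a = vec![vec![1.0, 1.0]];                 // A: r x k
    let b = vec![vec![0.5], vec![0.5]];           // B: d x r
    let x = vec![vec![1.0], vec![2.0]];           // input: k x 1
    println!("{:?}", lora_forward(&w, &a, &b, 1.0, &x)); // [[2.5], [3.5]]
}
```

Because only `A` and `B` are trained, a checkpoint needs to store just those two small factors per adapted layer, which is why a saved adapter can be tiny compared with the base model.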
I tried to fine-tune [TinyLlama](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) with this crate. After training, the saved safetensors file contains only two tensors: ``` lora_llama.b0 lora_llama.a0 ``` I expanded the macro in mod llama and...
I'm really excited about fine-tuning LLMs in Rust, but I'm a complete beginner in machine learning, so even though I've spent some time going through the repo it would be...
Hello, the current version of `candle-core` is 0.6.0, but this library still requires an older version. I think this is the same problem as https://github.com/EricLBuehler/candle-lora/issues/15, and it is a matter...
I tried to instantiate a BERT model with the following code:

```rust
use candle_core::DType;
use candle_lora::LoraConfig;
use candle_lora_transformers::bert::{BertModel, Config};
use candle_nn::{VarBuilder, VarMap};

fn main() {
    let config = "config.json";
    let...
```
I can't see any case where the rank can be negative for tensor multiplication, and a rank of 0 would be equivalent to dropping everything that came before. Please correct me if I'm wrong.
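The point above can be sketched quickly (plain Rust, illustrative only, not the crate's validation code): the rank `r` is the shared inner dimension of `A` (r x k) and `B` (d x r), so a negative rank is not even representable as a dimension, and `r = 0` makes every entry of `B * A` an empty sum, i.e. an all-zero update.

```rust
// Build all-ones factors B (d x r) and A (r x k) and return their product,
// the LoRA weight delta B * A (d x k).
fn lora_delta(d: usize, k: usize, r: usize) -> Vec<Vec<f32>> {
    let b = vec![vec![1.0f32; r]; d]; // B: d x r
    let a = vec![vec![1.0f32; k]; r]; // A: r x k
    (0..d)
        .map(|i| {
            (0..k)
                // With r = 0 this inner range is empty, so the sum is 0.0.
                .map(|j| (0..r).map(|t| b[i][t] * a[t][j]).sum::<f32>())
                .collect()
        })
        .collect()
}

fn main() {
    // Rank 0: the adapter contributes nothing at all.
    println!("{:?}", lora_delta(2, 3, 0)); // [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]
    // Rank 2: a genuine (here all-twos) update.
    println!("{:?}", lora_delta(1, 1, 2)); // [[2.0]]
}
```

So requiring `r >= 1` (with an unsigned type ruling out negatives) is enough for a sanity check.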
Adding the macros to the Flux model gives an error that it now expects an `Arc`: ``` mismatched types expected struct `Arc
The `AutoLoraConvert` macro adds the `get_merged_lora_weights` method, which is supposed to merge the A and B tensors into the frozen layer rather than keeping them...
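Mathematically, the merge described above folds the low-rank product into the frozen weight once, `W_merged = W + (alpha / r) * B * A`, so inference afterwards is a single plain matmul with no extra adapter path. This plain-Rust sketch shows that fold; it is illustrative only and not the crate's implementation.

```rust
type Mat = Vec<Vec<f32>>;

fn matmul(x: &Mat, y: &Mat) -> Mat {
    let (n, m, p) = (x.len(), y.len(), y[0].len());
    let mut out = vec![vec![0.0; p]; n];
    for i in 0..n {
        for k in 0..m {
            for j in 0..p {
                out[i][j] += x[i][k] * y[k][j];
            }
        }
    }
    out
}

// W_merged = W + (alpha / r) * B * A: fold the adapter into the frozen
// weight so the A and B tensors no longer need to be kept at inference.
fn merge(w: &Mat, a: &Mat, b: &Mat, alpha: f32) -> Mat {
    let r = a.len() as f32;
    let ba = matmul(b, a); // (d x r) * (r x k) -> d x k
    w.iter()
        .zip(ba.iter())
        .map(|(w_row, d_row)| {
            w_row
                .iter()
                .zip(d_row.iter())
                .map(|(w_v, d_v)| w_v + (alpha / r) * d_v)
                .collect()
        })
        .collect()
}

fn main() {
    let w = vec![vec![1.0, 0.0], vec![0.0, 1.0]]; // frozen W: d x k
    let a = vec![vec![1.0, 1.0]];                 // A: r x k
    let b = vec![vec![0.5], vec![0.5]];           // B: d x r
    println!("{:?}", merge(&w, &a, &b, 1.0)); // [[1.5, 0.5], [0.5, 1.5]]
}
```

Merged weights trade flexibility for speed: you can no longer swap or stack adapters, but the forward pass matches the unadapted model's cost exactly.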
I have updated the candle dependencies to the latest versions. ## Checklist - [x] I have read the [Contributing Guide](https://github.com/EricLBuehler/candle-lora/blob/master/.github/CONTRIBUTING.md) ## Summary by CodeRabbit - **Chores** - Updated dependency versions for improved...