Fixed rslora scaling in lora_manager

Open TheCodeWrangler opened this issue 1 year ago • 1 comments

Addressing issue mentioned in https://github.com/NVIDIA/TensorRT-LLM/issues/1668

When adapter weights were trained with rsLoRA scaling, they must be scaled differently: rsLoRA normalizes by the square root of the rank rather than the rank itself. The code previously always normalized by rank, ignoring the "use_rslora" flag in the Hugging Face adapter_config.json file.

Scaling has also been updated in examples/hf_lora_convert.py
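The difference between the two scaling conventions can be sketched as follows. This is an illustrative snippet, not the actual TensorRT-LLM code; the function name `lora_scaling` is hypothetical, but the formulas match standard LoRA (alpha / rank) and rsLoRA (alpha / sqrt(rank)):

```python
import math

def lora_scaling(alpha: float, rank: int, use_rslora: bool = False) -> float:
    """Return the factor applied to the low-rank update (B @ A).

    Standard LoRA scales by alpha / rank; rsLoRA (rank-stabilized LoRA)
    scales by alpha / sqrt(rank), which keeps the update magnitude
    stable as the rank grows. The "use_rslora" flag comes from the
    adapter's adapter_config.json.
    """
    if use_rslora:
        return alpha / math.sqrt(rank)
    return alpha / rank

# Example with alpha=16, rank=64:
print(lora_scaling(16, 64))        # standard LoRA -> 0.25
print(lora_scaling(16, 64, True))  # rsLoRA -> 2.0
```

Always dividing by rank, as the old code did, silently shrinks rsLoRA-trained adapters by a factor of sqrt(rank).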

TheCodeWrangler avatar May 24 '24 19:05 TheCodeWrangler

Could you share a model trained by rslora?

byshiue avatar May 27 '24 09:05 byshiue

Hi @TheCodeWrangler, thanks for your contribution. We've merged it into the code base and will add you to the contributor list.

nv-guomingz avatar Jun 03 '24 12:06 nv-guomingz