windreamer

Results: 28 comments by windreamer

> The poor performance is relative though, take the RTX 6000 - if you have a 110GB model that you want to use with the 96GB VRAM, offloading that final...

```
File "/home/li_mingze/.local/lib/python3.12/site-packages/transformers/models/auto/processing_auto.py", line 28, in
    from ...processing_utils import ProcessorMixin
File "/home/li_mingze/.local/lib/python3.12/site-packages/transformers/processing_utils.py", line 34, in
    from .audio_utils import load_audio
File "/home/li_mingze/.local/lib/python3.12/site-packages/transformers/audio_utils.py", line 42, in
    import soundfile as sf
File "/home/li_mingze/.local/lib/python3.12/site-packages/soundfile.py",...
```

It seems we couldn't find the corresponding LoRA parameters in your LoRA model. Could you please share your LoRA model or list its parameter names so that we can reproduce this...

May I know how you trained your LoRA model? It seems the parameter names are not what LMDeploy expects. For LoRA models of InternVL, the parameters should be named...
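To check what your adapter's parameter names actually are, you can dump the tensor names from the adapter's `.safetensors` file. A minimal stdlib-only sketch is below: it parses only the safetensors header (an 8-byte little-endian length followed by a JSON table of tensor names), so no extra packages are needed. The file it builds, and the tensor names in it, are hypothetical stand-ins, not the names LMDeploy requires.

```python
import json
import struct
import tempfile

def list_tensor_names(path):
    """Return tensor names from a .safetensors file without any dependencies.

    A safetensors file begins with an 8-byte little-endian header length,
    followed by a JSON header mapping tensor names to dtype/shape/offsets.
    We only read the header, so no tensor data is loaded.
    """
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        header = json.loads(f.read(header_len))
    return [name for name in header if name != "__metadata__"]

# Build a tiny stand-in adapter file (these names are made-up examples):
names = [
    "base_model.model.language_model.layers.0.attention.wqkv.lora_A.weight",
    "base_model.model.language_model.layers.0.attention.wqkv.lora_B.weight",
]
header = {
    n: {"dtype": "F32", "shape": [1], "data_offsets": [4 * i, 4 * (i + 1)]}
    for i, n in enumerate(names)
}
blob = json.dumps(header).encode()
with tempfile.NamedTemporaryFile(suffix=".safetensors", delete=False) as f:
    f.write(struct.pack("<Q", len(blob)) + blob + b"\x00" * 8)
    path = f.name

for name in list_tensor_names(path):
    print(name)
```

In practice you would point `list_tensor_names` at your adapter's weight file (e.g. `adapter_model.safetensors`) and compare the printed names against what LMDeploy expects.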

Actually I am not quite familiar with this topic. You can check whether the following link helps: https://internvl.readthedocs.io/en/latest/tutorials/coco_caption_finetune.html

You may try [XTuner](https://github.com/InternLM/xtuner) for LoRA finetuning of InternVL3. Here is an example script for internlm2-chat-7b: https://github.com/InternLM/xtuner/blob/main/xtuner/configs/custom_dataset/sft/internlm/internlm2_chat_7b_qlora_custom_sft_e1.py You can also read this doc (in Chinese) for more details: https://github.com/InternLM/xtuner/blob/main/docs/zh_cn/legacy/training/custom_sft_dataset.rst Hope...

This may have been fixed by #4029 in release v0.10.2. You can check whether that release fixes your issue.