[Bug] Weight key issue when using LoRA fine-tuning (already fixed)
Checklist
- [X] 1. I have searched related issues but cannot get the expected help.
- [X] 2. The bug has not been fixed in the latest version.
- [X] 3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.
Describe the bug
Issue
Due to the use of PEFT, the key names of the weights saved after LoRA training are inconsistent with the original ones: `language.model` becomes `language.base_model.model`.
Fix
Before saving the weights at the end of training, I call `model.language_model = model.language_model.merge_and_unload()` and everything looks fine. I hope you can add this in a future update~
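To illustrate the mismatch without pulling in PEFT itself, here is a minimal pure-Python sketch of the key renaming. The layer names are hypothetical, and the `base_model.model.` marker is the wrapper segment PEFT inserts when it wraps a submodule; `merge_and_unload()` is the proper fix, while the helper below just shows what the key mapping back to the original names looks like:

```python
def strip_peft_prefix(state_dict, marker="base_model.model."):
    """Remove the PEFT wrapper segment from state-dict keys.

    PEFT wraps the tuned submodule, so a key such as
    'language_model.base_model.model.layers.0.weight' maps back to the
    original 'language_model.layers.0.weight'. Keys without the marker
    (e.g. from submodules that were not LoRA-tuned) pass through unchanged.
    """
    return {k.replace(marker, "", 1): v for k, v in state_dict.items()}


# Hypothetical keys as saved after LoRA training with PEFT:
peft_sd = {
    "language_model.base_model.model.layers.0.weight": 1,
    "language_model.base_model.model.embed.weight": 2,
    "vision_model.encoder.weight": 3,  # untouched submodule keeps its key
}

restored = strip_peft_prefix(peft_sd)
print(sorted(restored))
```

Note that calling `merge_and_unload()` before saving avoids this remapping entirely, because it folds the LoRA deltas into the base weights and returns the unwrapped module.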
Reproduction
Already fixed
Environment
Already fixed
Error traceback
No response
We appreciate you bringing this issue to our attention. We will conduct a thorough investigation and provide an update as soon as possible. Should we identify a bug, we will implement the necessary code changes. Thank you for your continued support.
In addition, `peft==0.4.0` does not have this problem.
Hi, since there hasn't been any recent activity on this issue, I'll be closing it for now. If it's still an active concern, don't hesitate to reopen it. Thanks for your understanding!