Fix lora alpha and metadata handling
What does this PR do?
This PR adds a `--lora_alpha` argument and metadata handling to the `train_dreambooth_lora_sdxl.py` training script.
Part of #11730
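A minimal sketch of how the new flag could be wired into the script's argument parser. The exact help text and defaults here are assumptions, not the script's actual code; the key idea is that `lora_alpha` is exposed separately from `rank`, since the effective LoRA scaling is `lora_alpha / rank`.

```python
import argparse

def parse_args(argv=None):
    # Sketch of the argument parsing; names mirror the diffusers training
    # scripts, but defaults/help strings here are illustrative assumptions.
    parser = argparse.ArgumentParser(description="DreamBooth LoRA SDXL (sketch)")
    parser.add_argument("--rank", type=int, default=4,
                        help="Rank (r) of the LoRA update matrices.")
    parser.add_argument("--lora_alpha", type=int, default=4,
                        help="LoRA alpha; effective scaling is lora_alpha / rank.")
    return parser.parse_args(argv)

args = parse_args(["--rank", "8", "--lora_alpha", "16"])
print(args.rank, args.lora_alpha)  # → 8 16
```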
Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the contributor guideline?
- [X] Did you read our philosophy doc (important for complex PRs)?
- [ ] Was this discussed/approved via a GitHub issue or the forum? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the documentation guidelines, and here are tips on formatting docstrings.
- [ ] Did you write any new necessary tests?
Who can review?
@linoytsaban @sayakpaul
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
thanks! made some small changes
Thank you @linoytsaban for reviewing and suggesting changes. I've implemented the requested updates.
thanks @Tanuj-rai :) I just noticed `lora_alpha` is missing from the `text_encoder` config, after that's merged we'd be good to go
https://github.com/huggingface/diffusers/blob/4f69b28e5521a1d5b62dce97668a046f45a9e32e/examples/dreambooth/train_dreambooth_lora_sdxl.py#L1245-L1249
Thank you @linoytsaban for pointing it out. I have now added `lora_alpha` to the `text_encoder` config.
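The fix above amounts to passing the same `lora_alpha` to the text-encoder LoRA config as to the UNet one. A hedged sketch with plain dicts (the real script uses `peft.LoraConfig`; the helper name and target-module lists below are illustrative assumptions):

```python
# Hypothetical helper: both the UNet and the text encoders should get the
# same lora_alpha; previously it was omitted from the text_encoder config.
def make_lora_config(rank, lora_alpha, target_modules):
    return {
        "r": rank,
        "lora_alpha": lora_alpha,
        "init_lora_weights": "gaussian",
        "target_modules": target_modules,
    }

unet_cfg = make_lora_config(4, 8, ["to_k", "to_q", "to_v", "to_out.0"])
text_cfg = make_lora_config(4, 8, ["q_proj", "k_proj", "v_proj", "out_proj"])
print(unet_cfg["lora_alpha"] == text_cfg["lora_alpha"])  # → True
```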
@bot /style
@sayakpaul yep, but I can take care of it in a separate PR, wdyt?
Okay for me.
@bot /style