[LoRA] make `set_adapters()` robust against silent failures.
What does this PR do?
Currently, if we do

```python
scales = {"text_encoder": 0.0, "text_encoder_2": 0.0, "unet": 0.0}
pipe.set_adapters("optimus", adapter_weights=scales)
```

where `pipe` is an instance of `FluxPipeline`, it doesn't error out, even though it should: Flux doesn't have a UNet, and its `text_encoder_2` component isn't LoRA-loadable:
https://github.com/huggingface/diffusers/blob/31058cdaef63ca660a1a045281d156239fba8192/src/diffusers/loaders/lora_pipeline.py#L1650
Instead, the invalid entries are silently ignored. This PR fixes that behavior.
Thanks to @asomoza for the idea in https://github.com/huggingface/diffusers/pull/9542#issuecomment-2380411627!
@BenjaminBossan thanks!
Apart from that, I'm wondering if we still need the checks starting here:
Good catch. Resolved in 271404336.
@DN6 could you give this a look?
@DN6 a gentle ping.
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the contributing guidelines are likely to be ignored.
@yiyixuxu @DN6 a gentle ping.
@asomoza @yiyixuxu @DN6 could you give this a look?
@DN6 I have changed from raising an error to raising warnings. Additionally, we now remove the invalid components from `adapter_weights`. LMK what you think.
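To illustrate, here is a minimal sketch of the warn-and-filter behavior described above. The helper name `filter_adapter_weights` and the set of LoRA-loadable components are illustrative assumptions, not the actual `diffusers` implementation:

```python
import warnings

# Hypothetical sketch: warn about (and drop) adapter weights for
# components that cannot load LoRA layers, instead of silently
# ignoring them. Not the actual diffusers implementation.
def filter_adapter_weights(adapter_weights, lora_loadable_components):
    """Return adapter_weights restricted to LoRA-loadable components,
    warning once per invalid component."""
    invalid = set(adapter_weights) - set(lora_loadable_components)
    for name in sorted(invalid):
        warnings.warn(
            f"Component '{name}' is not LoRA-loadable for this pipeline; "
            "its adapter weight will be ignored."
        )
    return {k: v for k, v in adapter_weights.items() if k not in invalid}

scales = {"text_encoder": 0.0, "text_encoder_2": 0.0, "unet": 0.0}
# Flux-like pipeline: assume only `transformer` and `text_encoder`
# support LoRA here.
valid = filter_adapter_weights(scales, {"transformer", "text_encoder"})
print(valid)  # {'text_encoder': 0.0}
```

The user still gets feedback about the mistake (via the warnings), while valid adapter weights continue to be applied.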
@hlky could you give this a look?