[wip][lora] feat: add `exclude_modules` to `LoraConfig`.
What does this PR do?

If we try to do:
```python
from diffusers import AutoModel, DiffusionPipeline
import torch

model_id = "Wan-AI/Wan2.1-VACE-14B-diffusers"
vae = AutoModel.from_pretrained(model_id, subfolder="vae", torch_dtype=torch.float32)
pipe = DiffusionPipeline.from_pretrained(model_id, vae=vae, torch_dtype=torch.bfloat16).to("cuda")

pipe.load_lora_weights(
    "vrgamedevgirl84/Wan14BT2VFusioniX",
    weight_name="FusionX_LoRa/Wan2.1_T2V_14B_FusionX_LoRA.safetensors",
)
```
It prints:
```text
Loading adapter weights from state_dict led to missing keys in the model: vace_blocks.0.proj_out.lora_A.default_0.weight, vace_blocks.0.proj_out.lora_B.default_0.weight, vace_blocks.1.proj_out.lora_A.default_0.weight, vace_blocks.1.proj_out.lora_B.default_0.weight, vace_blocks.2.proj_out.lora_A.default_0.weight, vace_blocks.2.proj_out.lora_B.default_0.weight, vace_blocks.3.proj_out.lora_A.default_0.weight, vace_blocks.3.proj_out.lora_B.default_0.weight, vace_blocks.4.proj_out.lora_A.default_0.weight, vace_blocks.4.proj_out.lora_B.default_0.weight, vace_blocks.5.proj_out.lora_A.default_0.weight, vace_blocks.5.proj_out.lora_B.default_0.weight, vace_blocks.6.proj_out.lora_A.default_0.weight, vace_blocks.6.proj_out.lora_B.default_0.weight, vace_blocks.7.proj_out.lora_A.default_0.weight, vace_blocks.7.proj_out.lora_B.default_0.weight.
```
This happens because `LoraConfig` treats the values in `target_modules` as suffixes. In this case, `proj_out` is part of `target_modules`, which is why the `vace_blocks.*` modules also get targeted even though the LoRA state dict has no weights for them (hence the missing-keys warning), as sketched below.
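A minimal sketch of that suffix matching in plain Python, using module names taken from the warning above (the matching rule mirrors how a `target_modules` entry matches any module whose qualified name ends with it):

```python
# Sketch only: "proj_out" matches every module whose qualified name ends with
# ".proj_out", including the VACE blocks the LoRA was never trained on.
target_modules = ["proj_out"]

module_names = [
    "blocks.0.proj_out",       # intended target, covered by the LoRA state dict
    "vace_blocks.0.proj_out",  # also matches "proj_out", but has no LoRA weights
]

for name in module_names:
    matched = any(name == t or name.endswith("." + t) for t in target_modules)
    print(f"{name} -> {'targeted' if matched else 'skipped'}")
```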
`LoraConfig` also lets us specify `exclude_modules`. This PR adds support for passing it when initializing the `LoraConfig`.
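As a rough illustration of what excluding the untrained VACE layers could look like (the exact values diffusers derives are up to this PR; the regex below is only an assumption for this example):

```python
from peft import LoraConfig

# Illustration only: keep targeting proj_out everywhere except the VACE blocks,
# which the LoRA state dict does not cover.
lora_config = LoraConfig(
    target_modules=["proj_out"],
    exclude_modules=r"vace_blocks\.\d+\.proj_out",
)
```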
@apolinario does it work for you?