
Support for control-lora

Open · lavinal712 opened this issue 1 year ago · 88 comments

This PR is a continuation of the discussion in https://github.com/huggingface/diffusers/issues/4679 and https://github.com/huggingface/diffusers/pull/4899, and it addresses the following issues:

  1. Loading SAI's control-lora files and enabling controlled image generation.
  2. Building a control-lora pipeline and model for user convenience.

This code is only an initial version; it contains many makeshift solutions and several known issues. So far, these are my observations:

  1. Long loading time: I suspect this is due to repeatedly loading the model weights.
  2. High GPU memory usage at runtime: compared to a regular ControlNet, control-lora should actually save GPU memory at runtime (this can be observed in sd-webui-controlnet). I believe the relevant parts of the code have not been handled properly.

lavinal712 · Jan 30 '25 03:01

[image (1)]

To reproduce, run

cd src
python -m diffusers.pipelines.control_lora.pipeline_control_lora_sd_xl

lavinal712 · Jan 30 '25 03:01

My solution is based on https://github.com/Mikubill/sd-webui-controlnet/blob/main/scripts/controlnet_lora.py and https://github.com/HighCWu/control-lora-v2/blob/master/models/control_lora.py, but it differs in several ways. Here are my observations and solutions:

  1. The weight format of control-lora differs from that of the lora in the peft library; it comprises two parts: lora weights and fine-tuned parameter weights. The lora weights carry "up" and "down" suffixes. From my observation, we cannot use existing libraries to load these weights (I once worked on reproducing it at https://github.com/lavinal712/control-lora-v3, which trains lora plus specific layers and converts their weight names from the diffusers format to the stable diffusion format, with good results). A sketch of this two-part split follows the list.
  2. The prefix of control-lora's weight names follows the stable diffusion format, which poses some challenges when converting to the diffusers format (I had to use some hacky code to solve this).
  3. My approach is as follows: I converted the linear and conv2d layers into a form with lora applied across all layers, then rebuilt the controlnet from the unet, loading both the lora weights and the fine-tuned parameters from the control-lora checkpoint.
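
For illustration, a minimal sketch of the two-part split described in point 1, assuming the LoRA factors are marked by "up"/"down" in their key names and that every other entry is a fully fine-tuned parameter (the exact key layout of the SAI checkpoints may differ):

from safetensors.torch import load_file

state_dict = load_file("control-lora-canny-rank128.safetensors")

lora_weights, full_weights = {}, {}
for key, tensor in state_dict.items():
    # "up"/"down" pairs are the low-rank LoRA factors; anything else is a
    # fully fine-tuned parameter that has to be loaded as-is.
    if ".up." in key or ".down." in key or key.endswith((".up", ".down")):
        lora_weights[key] = tensor
    else:
        full_weights[key] = tensor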

lavinal712 · Jan 30 '25 03:01

Thanks for starting this!

In order to get this PR ready for reviews, we would need to:

  • Use peft for all things LoRA instead of having to rely on things like LinearWithLoRA.
  • We should be able to run the LoRA conversion on the checkpoint during loading, like it's done for other LoRA checkpoints. Here is an example.
  • Ideally, users should be able to call ControlNetModel.load_lora_adapter() (method reference) on a state dict, and we run the conversion first if needed and then take the rest of the steps.

The higher-level design I am thinking of goes as follows:

controlnet = # initialize ControlNet model.

# load ControlNet-LoRA into `controlnet`
controlnet.load_lora_adapter("stabilityai/control-lora", weight_name="...")

pipeline = # initialize ControlNet pipeline.

...

LMK if this makes sense. Happy to elaborate further.

I have some reservations, because I have observed that the memory required for control-lora is less than that for controlnet, yet running it in this manner requires at least as much memory as controlnet. I want control-lora not only to be a lora but also to be a memory-saving model. Of course, the existing code cannot handle this yet, and it will require future improvements.

lavinal712 · Jan 30 '25 04:01

I want control-lora not only to be a lora but also to be a memory-saving model.

If we do incorporate peft (the way I am suggesting), it will be compatible with all the memory optims we already offer from the library.
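
For context, a sketch of the kind of optimizations meant here, using standard diffusers pipeline switches (the model ids are the ones used elsewhere in this thread):

import torch
from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel

controlnet = ControlNetModel.from_pretrained("xinsir/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, torch_dtype=torch.float16
)
# Standard memory optimizations that compose with a peft-loaded LoRA:
pipe.enable_model_cpu_offload()  # move submodules to the GPU only while they run
pipe.enable_vae_slicing()        # decode latents in slices to lower peak VRAM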

sayakpaul · Jan 30 '25 04:01

If we do incorporate peft (the way I am suggesting), it will be compatible with all the memory optims we already offer from the library.

I once observed while running sd-webui-controlnet that the peak VRAM usage was 5.9GB when using sd1.5 controlnet, and 4.7GB when using sd1.5 control-lora. Clearly, sd-webui-controlnet employs some method to reuse weights rather than simply merging the lora weights on top of controlnet. Can loading controlnet in this manner provide such VRAM optimization?
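
For reference, peak-VRAM numbers like these can be recorded with torch's CUDA memory statistics; a minimal sketch (the generation call itself is elided):

import torch

torch.cuda.reset_peak_memory_stats()
# ... run the controlnet / control-lora pipeline here ...
peak_gib = torch.cuda.max_memory_allocated() / 1024**3
print(f"peak VRAM: {peak_gib:.2f} GiB")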

lavinal712 · Jan 30 '25 04:01

I am quite sure we can achieve those numbers without having to do too much given the recent set of optimizations we have shipped and are going to ship.

Clearly, sd-webui-controlnet employs some method to reuse weights rather than simply merging the lora weights on top of controlnet.

We're not merging the LoRA weights into the base model when initially loading the LoRA checkpoint. That goes against our LoRA design. Users can always merge the LoRA params into the base model params after loading the LoRA params but that is not the default behaviour.
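
A sketch of that opt-in flow at the pipeline level (the LoRA repo id is a placeholder):

import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
)
pipe.load_lora_weights("org/some-sdxl-lora")  # placeholder repo id; LoRA stays unmerged
pipe.fuse_lora()    # explicit opt-in: merge the LoRA params into the base weights
pipe.unfuse_lora()  # and the merge is reversible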

sayakpaul · Jan 30 '25 04:01

Good. With this concern resolved, I believe such a design is reasonable. It is simpler and more user-friendly.

lavinal712 · Jan 30 '25 04:01

Appreciate the understanding. LMK if you would like to take a crack at the suggestions I provided above.

sayakpaul · Jan 30 '25 04:01

I encountered a problem: after running the command python -m diffusers.pipelines.control_lora.control_lora, the following error occurred:

Traceback (most recent call last):
  File "/home/azureuser/miniconda3/envs/diffusers/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/azureuser/miniconda3/envs/diffusers/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/home/azureuser/v-yuqianhong/diffusers/src/diffusers/pipelines/control_lora/control_lora.py", line 19, in <module>
    controlnet.load_lora_weights(lora_id, weight_name=lora_filename, controlnet_config=controlnet.config)
  File "/home/azureuser/v-yuqianhong/diffusers/src/diffusers/loaders/controlnet.py", line 178, in load_lora_weights
    self.load_lora_into_controlnet(
  File "/home/azureuser/v-yuqianhong/diffusers/src/diffusers/loaders/controlnet.py", line 212, in load_lora_into_controlnet
    controlnet.load_lora_adapter(
  File "/home/azureuser/v-yuqianhong/diffusers/src/diffusers/loaders/peft.py", line 293, in load_lora_adapter
    is_model_cpu_offload, is_sequential_cpu_offload = self._optionally_disable_offloading(_pipeline)
  File "/home/azureuser/v-yuqianhong/diffusers/src/diffusers/loaders/peft.py", line 139, in _optionally_disable_offloading
    return _func_optionally_disable_offloading(_pipeline=_pipeline)
  File "/home/azureuser/v-yuqianhong/diffusers/src/diffusers/loaders/lora_base.py", line 435, in _func_optionally_disable_offloading
    if _pipeline is not None and _pipeline.hf_device_map is None:
  File "/home/azureuser/v-yuqianhong/diffusers/src/diffusers/models/modeling_utils.py", line 187, in __getattr__
    return super().__getattr__(name)
  File "/home/azureuser/miniconda3/envs/diffusers/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1931, in __getattr__
    raise AttributeError(
AttributeError: 'ControlNetModel' object has no attribute 'hf_device_map'

You can read the code. Does this method meet your expectations?

lavinal712 · Feb 01 '25 01:02

@sayakpaul Can you help me solve this problem?

lavinal712 · Feb 03 '25 03:02

Can you help me understand why python -m diffusers.pipelines.control_lora.control_lora needs to be run?

sayakpaul · Feb 03 '25 03:02

My design is as follows: the core code is located in src/diffusers/loaders/controlnet.py, and ControlNetLoadersMixin is set as a parent class of ControlNetModel in src/diffusers/models/controlnets/controlnet.py, providing the implementation of load_lora_weights. diffusers.pipelines.control_lora.control_lora is test code whose purpose is to load LoRA into ControlNetModel; it should eventually be cleaned up.

lavinal712 · Feb 03 '25 04:02

load_lora_weights() is implemented at the pipeline level. ControlNetModel subclasses ModelMixin. So, we will instead have to implement the load_lora_adapter() method:

https://github.com/huggingface/diffusers/blob/3e35f56b00d73bc3c2d3bb69615176d0909fab8a/src/diffusers/loaders/peft.py#L141
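To make the distinction concrete, a hedged sketch of the two entry points (the repo id is a placeholder, mirroring the design snippet above):

# Pipeline-level loading, implemented by the pipeline LoRA mixins:
pipeline.load_lora_weights("org/some-lora")

# Model-level loading, available to ModelMixin subclasses via PeftAdapterMixin:
controlnet.load_lora_adapter("org/some-lora", weight_name="...")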

sayakpaul · Feb 04 '25 05:02

I'm having trouble converting the prefix of control-lora into the diffusers format. The prefix of control-lora is in the sd format, while the loaded controlnet is in the diffusers format. I can't find a clean and efficient way to achieve the conversion. Could you provide some guidance? @sayakpaul

lavinal712 · Feb 04 '25 09:02

I'm having trouble converting the prefix of control-lora into the diffusers format. The prefix of control-lora is in the sd format, while the loaded controlnet is in the diffusers format. I can't find a clean and efficient way to achieve the conversion. Could you provide some guidance? @sayakpaul

You could refer to the following function to get a sense of how we do it for other non-diffusers LoRAs: https://github.com/huggingface/diffusers/blob/f63d32233f402bd603da8f3aa385aecb9c3d8809/src/diffusers/loaders/lora_conversion_utils.py#L128
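
For illustration, a conversion of that kind boils down to a key-renaming pass over the state dict; a hedged sketch (the mapping entries are illustrative, not a complete table):

def convert_sd_controlnet_keys(state_dict):
    # Map SD-style prefixes to diffusers-style module paths; a real
    # implementation needs the full prefix table.
    mapping = {
        "input_blocks.0.0": "conv_in",
        "middle_block.1": "mid_block.attentions.0",
    }
    converted = {}
    for key, value in state_dict.items():
        new_key = key
        for sd_prefix, diffusers_prefix in mapping.items():
            if new_key.startswith(sd_prefix):
                new_key = diffusers_prefix + new_key[len(sd_prefix):]
                break
        converted[new_key] = value
    return converted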

Would this help?

sayakpaul · Feb 04 '25 10:02

I tried to load Control-LoRA in the load_lora_adapter() method of the PeftAdapterMixin class. However, by default, the keys for the model weights are in the form lora_A.default_0.weight instead of the expected lora_A.weight. This is caused by adapter_name = get_adapter_name(self). Could you please tell me what the default format of the LoRA model weight keys is and how to resolve this issue? @sayakpaul
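
For reference, the adapter name peft generates is inserted between the lora_A/lora_B marker and .weight; a minimal sketch of renaming incoming keys to match that layout (the function name is hypothetical):

def inject_adapter_name(state_dict, adapter_name="default_0"):
    # e.g. "to_q.lora_A.weight" -> "to_q.lora_A.default_0.weight"
    out = {}
    for key, value in state_dict.items():
        for marker in (".lora_A.", ".lora_B."):
            if marker in key:
                key = key.replace(marker, marker + adapter_name + ".")
                break
        out[key] = value
    return out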

lavinal712 · Feb 07 '25 10:02

I think the easiest might be to have a class for Control LoRA derived from PeftAdapterMixin that overrides the load_lora_adapter() method. We can handle the state dict conversion directly there so that the SD format is first converted into the peft format. WDYT?

sayakpaul · Feb 07 '25 13:02

I think the easiest might be to have a class for Control LoRA derived from PeftAdapterMixin that overrides the load_lora_adapter() method. We can handle the state dict conversion directly there so that the SD format is first converted into the peft format. WDYT?

Is there any example?

lavinal712 · Feb 07 '25 13:02

There is none, but here is how it might look in terms of pseudo-code:

class ControlLoRAMixin(PeftAdapterMixin):
    def load_lora_adapter(...):
        state_dict = # convert the state dict from SD format to peft format.
        ...
        # proceed with the rest of the logic.
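
Slightly expanded, one hedged way such an override could be wired up; convert_sd_to_peft is a hypothetical helper, and the real method signature would carry more arguments:

from huggingface_hub import hf_hub_download
from safetensors.torch import load_file
from diffusers.loaders.peft import PeftAdapterMixin

class ControlLoRAMixin(PeftAdapterMixin):
    def load_lora_adapter(self, repo_id, weight_name=None, **kwargs):
        # Fetch and read the SD-format control-lora checkpoint.
        path = hf_hub_download(repo_id, filename=weight_name)
        state_dict = load_file(path)
        # convert_sd_to_peft is hypothetical: rename SD-format keys into the
        # peft layout before the stock loading logic takes over.
        state_dict = convert_sd_to_peft(state_dict)
        return super().load_lora_adapter(state_dict, **kwargs)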

sayakpaul · Feb 07 '25 14:02

Okay, I will give it a try.

lavinal712 · Feb 07 '25 14:02

It is done.

from diffusers import (
    StableDiffusionXLControlNetPipeline,
    ControlNetModel,
    UNet2DConditionModel,
)
import torch

pipe_id = "stabilityai/stable-diffusion-xl-base-1.0"
controlnet_id = "xinsir/controlnet-canny-sdxl-1.0"
lora_id = "stabilityai/control-lora"
lora_filename = "control-LoRAs-rank128/control-lora-canny-rank128.safetensors"

unet = UNet2DConditionModel.from_pretrained(pipe_id, subfolder="unet", torch_dtype=torch.float16).to("cuda")
controlnet = ControlNetModel.from_unet(unet).to(device="cuda", dtype=torch.float16)
controlnet.load_lora_adapter(lora_id, weight_name=lora_filename, controlnet_config=controlnet.config)

from diffusers import AutoencoderKL
from diffusers.utils import load_image, make_image_grid
from PIL import Image
import numpy as np
import cv2

prompt = "aerial view, a futuristic research complex in a bright foggy jungle, hard lighting"
negative_prompt = "low quality, bad quality, sketches"

image = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/hf-logo.png")

controlnet_conditioning_scale = 1.0  # recommended for good generalization

vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    pipe_id,
    unet=unet,
    controlnet=controlnet,
    vae=vae,
    torch_dtype=torch.float16,
    safety_checker=None,
).to("cuda")

image = np.array(image)
image = cv2.Canny(image, 100, 200)
image = image[:, :, None]
image = np.concatenate([image, image, image], axis=2)
image = Image.fromarray(image)

images = pipe(
    prompt, negative_prompt=negative_prompt, image=image,
    controlnet_conditioning_scale=controlnet_conditioning_scale,
    num_images_per_prompt=4
).images

final_image = [image] + images
grid = make_image_grid(final_image, 1, 5)
grid.save("hf-logo.png")

lavinal712 · Feb 07 '25 18:02

@sayakpaul Please check the implementation in src/diffusers/loaders/peft.py.

lavinal712 · Feb 07 '25 18:02

Thanks for the code snippet. Could you share some results?

sayakpaul · Feb 08 '25 02:02

[image]

lavinal712 · Feb 08 '25 04:02

Thanks for providing the examples. The examples seem a bit worse than the ones originally shared in https://github.com/huggingface/diffusers/pull/10686#issuecomment-2623431598. Or am I missing out on something?

sayakpaul · Feb 08 '25 05:02

Thanks for providing the examples. The examples seem a bit worse than the ones originally shared in #10686 (comment). Or am I missing out on something?

I am not certain. I repeatedly ran experiments to confirm which modules required modification, ensuring that the weight names loaded into the ControlNet matched perfectly.

lavinal712 · Feb 08 '25 05:02

Wait a moment, I have discovered a significant issue. When removing unused code, problems arose in the generated images. I am reverting to last night's version to investigate the cause.

lavinal712 · Feb 08 '25 05:02

Wait a moment, I have discovered a significant issue. When removing unused code, problems arose in the generated images. I am reverting to last night's version to investigate the cause.

Sure, let's try to narrow down what we're missing.

sayakpaul · Feb 08 '25 05:02

@sayakpaul Good, the bug has been fixed. I have now deleted the unnecessary files, and the images generated with the following code are shown below:

from diffusers import (
    StableDiffusionXLControlNetPipeline,
    ControlNetModel,
    UNet2DConditionModel,
)
import torch

pipe_id = "stabilityai/stable-diffusion-xl-base-1.0"
lora_id = "stabilityai/control-lora"
lora_filename = "control-LoRAs-rank128/control-lora-canny-rank128.safetensors"

unet = UNet2DConditionModel.from_pretrained(pipe_id, subfolder="unet", torch_dtype=torch.bfloat16).to("cuda")
controlnet = ControlNetModel.from_unet(unet).to(device="cuda", dtype=torch.bfloat16)
controlnet.load_lora_adapter(lora_id, weight_name=lora_filename, controlnet_config=controlnet.config)

from diffusers import AutoencoderKL
from diffusers.utils import load_image, make_image_grid
from PIL import Image
import numpy as np
import cv2

prompt = "aerial view, a futuristic research complex in a bright foggy jungle, hard lighting"
negative_prompt = "low quality, bad quality, sketches"

image = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/hf-logo.png")

controlnet_conditioning_scale = 1.0  # recommended for good generalization

vae = AutoencoderKL.from_pretrained("stabilityai/sdxl-vae", torch_dtype=torch.bfloat16)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    pipe_id,
    unet=unet,
    controlnet=controlnet,
    vae=vae,
    torch_dtype=torch.bfloat16,
    safety_checker=None,
).to("cuda")

image = np.array(image)
image = cv2.Canny(image, 100, 200)
image = image[:, :, None]
image = np.concatenate([image, image, image], axis=2)
image = Image.fromarray(image)

images = pipe(
        prompt, negative_prompt=negative_prompt, image=image,
        controlnet_conditioning_scale=controlnet_conditioning_scale,
        num_images_per_prompt=4
).images

final_image = [image] + images
grid = make_image_grid(final_image, 1, 5)
grid.save(f"hf-logo.png")

[image (2)]

lavinal712 · Feb 15 '25 14:02

Currently, I am trying to pare the code down to what is strictly necessary. Regarding get_peft_kwargs, I found that it cannot parse the model parameters well, so I had to put in a lot of effort to convert them manually. @sayakpaul
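
For illustration, one hedged way to build the peft config by hand when get_peft_kwargs cannot infer it, assuming the state dict already uses the converted lora_A/lora_B layout and rank 128 per the checkpoint name:

from peft import LoraConfig

def build_lora_config(state_dict, rank=128):
    # Modules carrying LoRA factors, derived from the converted key names.
    target_modules = sorted({k.split(".lora_A.")[0] for k in state_dict if ".lora_A." in k})
    # Assumption: alpha set equal to the rank; rank 128 per the "rank128" checkpoint.
    return LoraConfig(r=rank, lora_alpha=rank, target_modules=target_modules)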

lavinal712 · Feb 17 '25 06:02