fine-tuning OpenCLIP with Hugging Face's PEFT (such as LoRA)
Feature request
fine-tuning OpenCLIP with Hugging Face's PEFT (such as LoRA)
Motivation
fine-tuning OpenCLIP with Hugging Face's PEFT (such as LoRA)
Your contribution
refer to https://github.com/KyanChen/MakeMultiHeadNaive/tree/master for help!
Sorry, could you please provide more details? Are you looking for help with how to achieve that, or are you suggesting that it doesn't work right now?
Right now, Hugging Face's PEFT (such as LoRA) cannot fine-tune the linear layers of a torch.nn.MultiheadAttention-based transformer model (such as OpenCLIP). If I want to use LoRA, I have to replace the torch.nn.MultiheadAttention layer with a self-implemented naive MultiheadAttention layer, as in the repo linked above. Can you help integrate support for this into the official PEFT lib?
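For context, here is a minimal sketch of the failure mode (assumed names; the exact error message depends on the PEFT version): on a PEFT release without MHA support, pointing target_modules at an nn.MultiheadAttention fails because no LoRA layer type exists for it.

import torch.nn as nn
from peft import LoraConfig, get_peft_model

class Block(nn.Module):
    """Minimal stand-in for a transformer block that uses nn.MultiheadAttention."""
    def __init__(self):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim=8, num_heads=2)

config = LoraConfig(target_modules=["attn"])
try:
    get_peft_model(Block(), config)
except ValueError as e:
    print(e)  # e.g. "Target module MultiheadAttention(...) is not supported..."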
I see, thanks for explaining. Indeed, right now, it is impossible as a user to change what type of LoRA layer is being used. We have ideas about exposing a "low level" API that would allow users more fine-grained control, including the possibility to allow using custom layers, as you suggest. I cannot say yet if it will really work out and when it's ready, but I'll let you know.
Thanks for your efforts!
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
I'd like to bump this: being unable to put LoRA weights on anything that uses nn.MultiheadAttention is a real pain, and using a naive implementation is clunky and cumbersome. It seems strange that LoRA-Torch can do it but not peft.
Hey, I created a PR to add MHA: #1324. The implementation was a bit tricky because this layer is not very "friendly" for LoRA-adaptation, but I think I got it working.
For now, this is just a rough draft, so it would be great if you could test it and tell me if it works for your use case. To install from this branch, run:
python -m pip install git+https://github.com/BenjaminBossan/peft.git@feat-add-lora-multihead-attention
So far, I did the following testing:
import torch
from torch import nn
import open_clip
from peft import LoraConfig, get_peft_model
from PIL import Image
import requests

model, preprocess = open_clip.create_model_from_pretrained('hf-hub:laion/CLIP-ViT-g-14-laion2B-s12B-b42K')
tokenizer = open_clip.get_tokenizer('hf-hub:laion/CLIP-ViT-g-14-laion2B-s12B-b42K')

# target the nn.MultiheadAttention modules, which open_clip names 'attn'
# (the config was missing from the original snippet; this is an assumed minimal choice)
config = LoraConfig(target_modules=["attn"])
peft_model = get_peft_model(model, config)
opt = torch.optim.SGD(peft_model.parameters(), 0.1)

# text encoder
text = tokenizer(["a diagram", "a dog", "a cat"])
text_features = peft_model.encode_text(text)
loss = text_features.sum()
loss.backward()
opt.step()

# image encoder
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
image = preprocess(image).unsqueeze(0)
image_features = peft_model.encode_image(image)
image_features.sum().backward()
opt.step()
@ambroser53 I think the linked LoRA-torch library has some bugs. For instance:
import torch
import torch.nn as nn
import loratorch

model_torch = loratorch.Linear(5, 6, r=4, lora_alpha=1)
loratorch.mark_only_lora_as_trainable(model_torch)
print(model_torch.state_dict().keys())
# prints odict_keys(['weight', 'bias', 'w_lora_A', 'w_lora_B'])

optimizer_torch = torch.optim.SGD(model_torch.parameters(), lr=0.1)
for _ in range(3):
    model_torch.train()
    x = torch.rand(2, 5)
    loss2 = model_torch(x).sum()
    optimizer_torch.zero_grad()
    loss2.backward()
    optimizer_torch.step()

print(model_torch.state_dict().keys())
# odict_keys(['bias', 'w_lora_A', 'w_lora_B'])
# note the missing 'weight' key!
As you can see, the weight is dropped from the state_dict, making it impossible to save the model. Same is true for named_parameters(). So if you're using this package, you should be aware of this.
Hey @BenjaminBossan, cheers for the fork, I'll run some tests on Tuesday. I realised that LoRATorch was a bit buggy after I started trying to combine it with peft's LoraLayer, but if there's a way to do it without it, that'd be much better.
@ambroser53 Did you have time to give it a try?
Hi, sorry, I meant to get back to you sooner. The layers are placed on the nn.MultiheadAttention blocks just fine in my model. My use case is very complicated, though, as it's a custom architecture, so I will need to get back to you on how effective it is and whether the OpenCLIP fine-tuning is bottlenecked or underperforming in some way. Once I have these answers I'll report back.
Great, thanks for testing. Do you have an ETA for when these tests finish?
Regarding performance, I would expect a larger overhead than for simple LoRA layers like Linear because of the merging-unmerging roundtrip we have to take, but I'm not sure if it makes a difference in the grand scheme of things.
Should get initial results early next week if there's no disasters.
Out of curiosity, is said overhead computational or memory?
> Should get initial results early next week if there's no disasters.

Thanks!

> Out of curiosity, is said overhead computational or memory?
It should be computational only. However, since we take the same approach here as LoRA-torch, it shouldn't be better or worse than using that.
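To make the tradeoff concrete, here is a minimal sketch (hypothetical code, not the actual PEFT implementation) of a merge-style forward pass: the LoRA delta is materialized and added to the base weight on every call, which costs extra compute but keeps no extra full-size weights around.

import torch
from torch import nn
import torch.nn.functional as F

class MergedLoraLinear(nn.Module):
    """Illustrative only: LoRA applied by merging into the weight each forward."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 8):
        super().__init__()
        self.base = base
        self.scaling = alpha / r
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))

    def forward(self, x):
        # recompute W + scaling * (B @ A) on every call: extra compute,
        # but no second full-size weight is stored
        w = self.base.weight + self.scaling * (self.lora_B @ self.lora_A)
        return F.linear(x, w, self.base.bias)

layer = MergedLoraLinear(nn.Linear(16, 16))
out = layer(torch.randn(2, 16))  # gradients flow to lora_A/lora_B through the merged weight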
I've dug deeper in my testing. Mine is a very specific case where LoRA weights are only placed on specific layers and the model uses mixed quantisation, so the placement needed further tinkering. However, now that I've made sure the layers get exactly where they need to be, there's a logic error that seems to occur only some of the time. Essentially, say you have an nn.MultiheadAttention module called attn; it will have the submodule attn.out_proj, which is an nn.Linear (or at least it should be; there's this weird NonDynamicQuantisableWhatever going on, but let's not get into that). If your LoraConfig's target_modules points to both attn and attn.out_proj, and attn gets turned into a LoraLayer first, then when PEFT tries to find attn.out_proj, it's now under attn.base_layer.out_proj (see the sketch below).
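As a minimal illustration of that nesting effect (hypothetical stand-in code, not the actual PEFT classes): wrapping a module moves its submodules under base_layer, so dotted names recorded beforehand go stale.

from torch import nn

class Wrapper(nn.Module):
    """Stand-in for a LoRA layer that wraps the original module as base_layer."""
    def __init__(self, base):
        super().__init__()
        self.base_layer = base

model = nn.ModuleDict({"attn": nn.MultiheadAttention(8, 2)})
print([n for n, _ in model.named_modules()])  # includes 'attn.out_proj'
model["attn"] = Wrapper(model["attn"])
print([n for n, _ in model.named_modules()])  # now 'attn.base_layer.out_proj'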
It doesn't look like out_proj is taken into account by the merge and unmerge, which seem to deal only with in_proj_weight. The implementation of nn.MultiheadAttention never actually calls the forward of said out_proj; it only passes the weight and bias tensors to the functional attention call. I thought this could be fixed just by forcing the LoraLayer onto attn.out_proj before attn, but I think that would create problems, since nn.MultiheadAttention never calls out_proj.forward and would therefore neglect the LoRA weights entirely.
Could there be a simple fix that just does the same for out_proj.weight as is done for in_proj_weight?
Thanks a lot @ambroser53, your analysis is 100% correct. I pushed a new commit to the PR that now takes into account out_proj.
As is, we now apply LoRA to both in_proj and out_proj. There is currently no way to target only in_proj or only out_proj. That wouldn't be easy to achieve; we would probably have to add a new argument (or even multiple) to LoraConfig, which seems like overkill for this rather niche feature. My reasoning for applying LoRA to both instead of only in_proj is that the consensus recently seems to be converging towards applying LoRA to as many Linear layers as possible. LMK what you think.
I'll be out of office starting next week, so that PR may stall for a while unless one of the other maintainers has time to take over. Still, please try out this new PR and give us feedback if it works for you.
No, that sounds perfect; I don't think having one or the other would make sense. I should be able to give it a go now and give results next week.
> I should be able to give it a go now and give results next week.
Nice. If you can give some early feedback today, I may still have time to react to it :)
This may be a problem with my own complex setup, so it could be out of scope here, but does peft automatically cast parameters to int8 if the underlying model is loaded in int8? I'm asking because part of the model is in int8 but the rest is skipped via int8_quant_skip_modules. Now that out_proj is implemented, calling get_peft_model throws an error within _restore_weights for lora.MultiheadAttention, because registering out_proj as "weight" seems to have it cast to int8 when it was supposed to be skipped and left as float16. Any insights, or is mixed quantisation something wholly unwieldy that I'm unlikely to find a quick fix for?
Hmm, normally the weights should not be automatically cast to int8. If you have some way to reproduce this error, I could investigate.
Looking at this issue in general, I think, however, that this implementation will not work correctly with quantized weights. As is, we merge the LoRA weights into the base weights. When the latter are quantized, this requires special treatment, similar to the bnb layers we have for LoRA; a normal merge would surely fail. So I think we would need a completely separate MHA class for quantized layers.
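A tiny illustration (assumed, heavily simplified; real bnb Linear8bitLt layers also carry quantization state) of why a naive merge can't work on quantized weights:

import torch

w_int8 = torch.randint(-128, 127, (4, 4), dtype=torch.int8)  # stand-in for a quantized weight
delta = torch.randn(4, 4) * 0.01                              # LoRA delta in float
try:
    w_int8 += delta  # in-place add of a float delta into an int8 tensor raises
except RuntimeError as e:
    print(e)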
I'm not exactly sure what it is that you're doing with quantization, but as you've remarked earlier, the out_proj actually uses NonDynamicallyQuantizableLinear, which from my understanding exists to prevent some kind of error with quantization. I wonder if that could be related.
I understand that, but the point is that the MHA isn't quantised at all. The confusing part is that the MHA and the out_proj nn.Linear are being passed to int8_quant_skip_modules. It should be okay for now; I'll train on two cards, since it can't all fit on one. Hopefully I'll have some results soon.
> I understand that, but the point is that the MHA isn't quantised at all.
Ah I see, that is indeed very strange and should not happen.
> The confusing part is that the MHA and the out_proj nn.Linear are being passed to int8_quant_skip_modules.
Can you point me to a reference for int8_quant_skip_modules?
Here's the code for the BitsAndBytesConfig configuration object, where you can specify int8_quant_skip_modules, but there's no further documentation beyond what is in the initialisation comment. It does seem to be working, as prior to calling get_peft_model the correct modules are in the correct datatype.
I'll try to put together a code sample that reproduces it (the code I'm referring to right now is proprietary to a company).
One more potential bug. It seems that when using get_peft_model on a large model with an MHA inside, it puts the internal parameters of the MHA (i.e. in_proj_weight and out_proj.weight) at requires_grad=True. It's actually really hard to force it not to be true, and I don't quite know why. I wonder whether it's because of the nested LoraLayers, or because something that ensures the base weights don't require gradients in other LoraLayers is missing here.
It is very bizarre. The following code is from my script. attn_pool.attn is the (only) MHA:
# explicitly freeze both base weights inside the MHA
model.base_model.model.model.vision_model.attn_pool.attn.base_layer.in_proj_weight.requires_grad = False
model.base_model.model.model.vision_model.attn_pool.attn.base_layer.out_proj.base_layer.weight.requires_grad = False
# ... then list what is still trainable
trainable_params = [name for name, param in model.named_parameters() if param.requires_grad]
print(model.base_model.model.model.vision_model.attn_pool.attn.base_layer.in_proj_weight.requires_grad)
This prints True, and both in_proj_weight and out_proj.weight end up in trainable_params. It's almost like iterating through the module names causes them to be made trainable. This doesn't happen with any other parameters in the wrapped model, only these two that reside in the MHA.
This repo is a self-contained case that reproduces the error when using the MHA peft branch.
This takes priority over the int8 stuff.
Hi @ambroser53 I'm back in office. Thanks a lot for figuring out this bug and providing a reproducer. I could identify the issue and it should now be fixed. When running your example locally, I now get the correct gradients. Please take a look.
> It's almost like iterating through the module names causes them to be made trainable.
This was indeed the case! The reason for this is explained here:
https://github.com/huggingface/peft/pull/1324/files#diff-24a141c266b7b714ae8fcc470f31bc283f7b0f5a671bbf6d5f092741fc374104R899-R903
> Here's the code for the BitsAndBytesConfig configuration object
Sorry, did you mean to include a link here?
Thank you for this. If I get a chance, I'll test it in my use case. The problem is it's been a while, so we started using a different approach; I'm not sure how fast I'll be. Do you think this hacky way will have any additional speed inefficiencies?
The link I meant to put was this one here: https://github.com/huggingface/transformers/blob/v4.37.2/src/transformers/utils/quantization_config.py#L150
> If I get a chance, I'll test it in my use case. The problem is it's been a while, so we started using a different approach; I'm not sure how fast I'll be.
Whenever you have some time, or some code you could share for us to test, it would be great.
> Do you think this hacky way will have any additional speed inefficiencies?
Not 100% sure. I'd say probably not, because _restore_weights is only called when named_modules, modules, or state_dict is called, so not during regular training or inference. But the top priority is to get this running at all (as long as it's not super slow); performance could be improved later if necessary.
> The link I meant to put was this one here:
I see. It appears the option was renamed to llm_int8_skip_modules, which is why I couldn't find it. I skimmed the bnb code base, and from what I can tell, the full module names are matched (so not like target_modules in PEFT), so maybe that's the reason why it didn't work? Otherwise, I'd need code that reproduces the issue to investigate further.
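To illustrate the difference (a hedged sketch; the module path below is hypothetical):

from transformers import BitsAndBytesConfig

# Per the observation above, llm_int8_skip_modules seems to need full module
# names, unlike PEFT's target_modules, which also matches name suffixes.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_skip_modules=["vision_model.attn_pool.attn"],  # hypothetical full path
)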
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.