Loading LoRA weights in pipeline.py has no effect on the models.
Also, is there a way to use a non-Core ML text encoder, or to make this compatible with Compel?
Hello!
- LoRA in Core ML is not implemented in the current version of this library.
- You may fork and add non-Core ML components (e.g. PyTorch) if you desire
- cc: @ZachNagengast regarding Compel
So even if I load the LoRA weights with diffusers before the pipeline is converted to a CoreMLPipeline, it seems to have no effect.
Also, with Compel I tried to pass the pooled prompt embeds into `_encode_prompt`, but I got an error that `device` doesn't exist on the pipeline. If I get rid of the `del pytorch_pipe` it may work.
Also, I was able to get height and width up to 1024x1024. You should add an `if xl` check to set that as the max. Excellent quality so far.
Architecturally, it makes sense to allow arbitrary vector inputs as an alternative to text prompts, so that you can implement your own token-weighting scheme for your app. In that scenario, the main change needed for Compel compatibility would be updating those interfaces. What do you think? In case you didn't notice, we already support text encoder overriding via the NaturalLanguage NLContextualEmbedding API, so I don't think it would be too much of a lift to open it up to arbitrary text encoder inputs.
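For illustration, here's a minimal sketch of what a custom token-weighting scheme over raw embedding inputs could look like once arbitrary vectors are accepted. This is an assumption about the approach, not this library's API; `weight_tokens` and the SDXL-like shapes are hypothetical, and it uses plain numpy:

```python
import numpy as np

def weight_tokens(token_embeddings: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Emphasize or de-emphasize individual tokens in an embedding sequence.

    token_embeddings: (seq_len, hidden_dim) output of any text encoder
    weights:          (seq_len,) per-token emphasis, 1.0 = unchanged
    """
    # Blend each token toward/away from the sequence mean, so weighting
    # shifts emphasis rather than just rescaling vector magnitudes
    # (similar in spirit to how prompt-weighting libraries behave).
    mean = token_embeddings.mean(axis=0, keepdims=True)
    return mean + weights[:, None] * (token_embeddings - mean)

# 77 tokens, 2048-dim hidden states (SDXL-like shape)
emb = np.random.randn(77, 2048).astype(np.float32)
w = np.ones(77, dtype=np.float32)
w[5] = 1.4  # emphasize token 5
weighted = weight_tokens(emb, w)
assert weighted.shape == emb.shape
```

With a weight of 1.0 everywhere the output equals the input, which is the property you'd want from any scheme plugged in at this interface.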
I am getting an error:
```
Error: Expected shape (2, 2048, 1, 77), got (1, 77, 2048) for input: encoder_hidden_states
```
If I just set the pipeline not to load the text encoder, will this fix it?
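For what it's worth, that error looks like a layout mismatch: diffusers-style encoders emit hidden states as (batch, seq_len, hidden_dim), while the converted Core ML U-Net here expects (2 * batch, hidden_dim, 1, seq_len), with the doubled batch covering the negative and positive prompts for classifier-free guidance. A sketch of the conversion, assuming numpy and illustrative variable names (not code from this repo):

```python
import numpy as np

def to_coreml_layout(neg_embeds: np.ndarray, pos_embeds: np.ndarray) -> np.ndarray:
    """Convert (batch, seq, hidden) embeddings to the (2 * batch, hidden, 1, seq)
    layout expected by the Core ML U-Net's encoder_hidden_states input."""
    # Stack negative and positive prompts for classifier-free guidance.
    embeds = np.concatenate([neg_embeds, pos_embeds], axis=0)  # (2, 77, 2048)
    # Move the hidden dim ahead of the sequence dim, then insert a dummy axis.
    return embeds.transpose(0, 2, 1)[:, :, None, :]            # (2, 2048, 1, 77)

neg = np.zeros((1, 77, 2048), dtype=np.float32)
pos = np.ones((1, 77, 2048), dtype=np.float32)
hidden = to_coreml_layout(neg, pos)
assert hidden.shape == (2, 2048, 1, 77)
```

So skipping the text encoder alone probably won't fix it; whatever produces the embeds (Compel or otherwise) would still need its output reshaped like this before it reaches the Core ML model.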