
How to implement `IPAdapterAttnProcessor2_0` with xformers

Open JWargrave opened this issue 1 year ago • 0 comments

I want to fine-tune an IP-Adapter model with xformers, but I could not find an xformers implementation corresponding to `IPAdapterAttnProcessor2_0`, so I would like to write such an attention processor myself. Are the two calls below the only difference between the two processor versions? A rough sketch of what I have in mind follows the snippets.

In `XFormersAttnProcessor`:

```python
hidden_states = xformers.ops.memory_efficient_attention(
    query, key, value, attn_bias=attention_mask, op=self.attention_op, scale=attn.scale
)
```

In `AttnProcessor2_0`:

```python
hidden_states = F.scaled_dot_product_attention(
    query, key, value, attn_mask=attention_mask, dropout_p=0.0, is_causal=False
)
```
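
As far as I can tell, the tensor layout also differs between the two processors: `AttnProcessor2_0` keeps query/key/value 4D as `(batch, heads, seq_len, head_dim)`, while `XFormersAttnProcessor` folds the heads into the batch dimension via `attn.head_to_batch_dim`, giving 3D `(batch * heads, seq_len, head_dim)` tensors. Here is a minimal sanity check of the equivalence I am assuming (it needs a CUDA device with xformers and PyTorch >= 2.0 installed; the shapes are arbitrary):

```python
import torch
import torch.nn.functional as F
import xformers.ops

batch, heads, seq_q, seq_kv, head_dim = 2, 8, 77, 77, 64
device, dtype = "cuda", torch.float16

# AttnProcessor2_0-style layout: (batch, heads, seq, head_dim)
q4 = torch.randn(batch, heads, seq_q, head_dim, device=device, dtype=dtype)
k4 = torch.randn(batch, heads, seq_kv, head_dim, device=device, dtype=dtype)
v4 = torch.randn(batch, heads, seq_kv, head_dim, device=device, dtype=dtype)
out_sdpa = F.scaled_dot_product_attention(q4, k4, v4, dropout_p=0.0, is_causal=False)

# XFormersAttnProcessor-style layout: heads folded into the batch dim
q3 = q4.reshape(batch * heads, seq_q, head_dim).contiguous()
k3 = k4.reshape(batch * heads, seq_kv, head_dim).contiguous()
v3 = v4.reshape(batch * heads, seq_kv, head_dim).contiguous()
out_xf = xformers.ops.memory_efficient_attention(q3, k3, v3, attn_bias=None)

# both use the default 1/sqrt(head_dim) scaling, so the results should agree
print(torch.allclose(out_sdpa.reshape(batch * heads, seq_q, head_dim), out_xf, atol=1e-3, rtol=1e-3))
```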
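
For concreteness, this is roughly what I have in mind: a sketch of a hypothetical `IPAdapterXFormersAttnProcessor` that mirrors `IPAdapterAttnProcessor2_0` but swaps `F.scaled_dot_product_attention` for `xformers.ops.memory_efficient_attention`. The class name is my own and not part of diffusers. The sketch assumes the image-prompt tokens are concatenated at the end of `encoder_hidden_states`, handles only a single IP-Adapter, and leaves out attention-mask preparation, group/spatial norm, the residual connection, and the output rescale factor:

```python
from typing import Callable, Optional

import torch
import torch.nn as nn
import xformers.ops


class IPAdapterXFormersAttnProcessor(nn.Module):
    """Hypothetical xformers variant of IPAdapterAttnProcessor2_0 (a sketch, not the diffusers API)."""

    def __init__(self, hidden_size: int, cross_attention_dim: int, num_tokens: int = 4,
                 scale: float = 1.0, attention_op: Optional[Callable] = None):
        super().__init__()
        self.num_tokens = num_tokens
        self.scale = scale
        self.attention_op = attention_op
        # IP-Adapter's extra key/value projections for the image embeddings
        self.to_k_ip = nn.Linear(cross_attention_dim, hidden_size, bias=False)
        self.to_v_ip = nn.Linear(cross_attention_dim, hidden_size, bias=False)

    def __call__(self, attn, hidden_states, encoder_hidden_states=None, attention_mask=None):
        if encoder_hidden_states is None:
            encoder_hidden_states = hidden_states
            ip_hidden_states = None
        else:
            # assumption: the last `num_tokens` tokens are the image-prompt embeddings
            end = encoder_hidden_states.shape[1] - self.num_tokens
            encoder_hidden_states, ip_hidden_states = (
                encoder_hidden_states[:, :end, :],
                encoder_hidden_states[:, end:, :],
            )

        # xformers expects (batch * heads, seq_len, head_dim), hence head_to_batch_dim
        query = attn.head_to_batch_dim(attn.to_q(hidden_states)).contiguous()
        key = attn.head_to_batch_dim(attn.to_k(encoder_hidden_states)).contiguous()
        value = attn.head_to_batch_dim(attn.to_v(encoder_hidden_states)).contiguous()

        # text cross-attention with memory_efficient_attention in place of SDPA;
        # attention_mask (if any) is assumed to already be in xformers' expected layout
        hidden_states = xformers.ops.memory_efficient_attention(
            query, key, value, attn_bias=attention_mask, op=self.attention_op, scale=attn.scale
        )
        hidden_states = attn.batch_to_head_dim(hidden_states)

        if ip_hidden_states is not None:
            ip_key = attn.head_to_batch_dim(self.to_k_ip(ip_hidden_states)).contiguous()
            ip_value = attn.head_to_batch_dim(self.to_v_ip(ip_hidden_states)).contiguous()
            ip_out = xformers.ops.memory_efficient_attention(
                query, ip_key, ip_value, attn_bias=None, op=self.attention_op, scale=attn.scale
            )
            ip_out = attn.batch_to_head_dim(ip_out)
            # add the image-conditioned attention, weighted by the IP-Adapter scale
            hidden_states = hidden_states + self.scale * ip_out

        hidden_states = attn.to_out[0](hidden_states)  # output linear projection
        hidden_states = attn.to_out[1](hidden_states)  # dropout
        return hidden_states
```

Does this look right, or am I missing other differences (for example, in how the attention mask has to be prepared for xformers)?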

JWargrave · May 16 '24 08:05