latent-diffusion
What is rel_pos in the x_transformer.py file?
Based on how it is defined here, self.rel_pos is always None, so no relative positional bias is ever applied. Is there a specific reason for this?
We used lucidrains' transformer implementation, which supports many different positional encodings. Since we never used relative position encodings, we always set rel_pos to None; this way the forward() method of the individual attention layers needs no adaptation.
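To illustrate the pattern being described, here is a minimal, hypothetical sketch of an attention layer where rel_pos is an optional module that biases the attention scores; with rel_pos=None the branch is simply skipped, so forward() works unchanged (this is an illustration of the idea, not the actual x_transformer.py code):

```python
import torch
import torch.nn as nn

class Attention(nn.Module):
    def __init__(self, dim, heads=8, rel_pos=None):
        super().__init__()
        self.heads = heads
        self.scale = (dim // heads) ** -0.5
        self.to_qkv = nn.Linear(dim, dim * 3, bias=False)
        self.to_out = nn.Linear(dim, dim)
        # rel_pos is an optional callable adding a positional bias to the
        # attention scores; in latent-diffusion's usage it is always None
        self.rel_pos = rel_pos

    def forward(self, x):
        b, n, _ = x.shape
        h = self.heads
        q, k, v = self.to_qkv(x).chunk(3, dim=-1)
        q, k, v = (t.reshape(b, n, h, -1).transpose(1, 2) for t in (q, k, v))
        sim = (q @ k.transpose(-2, -1)) * self.scale
        if self.rel_pos is not None:   # skipped entirely when rel_pos is None
            sim = self.rel_pos(sim)    # e.g. add a learned distance-based bias
        attn = sim.softmax(dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, n, -1)
        return self.to_out(out)

x = torch.randn(2, 16, 64)
out = Attention(64)(x)
print(out.shape)  # torch.Size([2, 16, 64])
```

Because the None check lives inside forward(), a single code path serves both configurations, which is why setting it to None is the cheapest way to disable the feature.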
Please close if this answers your question :)