s-chh
Thank you for the comments. I don't have experience with Rotary Position Embedding. At a glance, it looked like it was designed for 1-D sequential data such as text. I will see...
Rotary Position Embedding is designed for 1-D sequences. I am still figuring out how to make it work for 2-D data, but it may not be straightforward. Any suggestions would help.
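One common way to extend it to a 2-D patch grid (sometimes called axial RoPE) is to split each head's channels in half and apply the usual 1-D rotary rotation to one half using the patch's row index and to the other half using its column index. Below is a minimal sketch of that idea, assuming PyTorch and tensors shaped (batch, heads, tokens, head_dim); the function and variable names are just illustrative, not taken from any existing repo.

```python
import torch

def rope_1d(x, pos, base=10000.0):
    # x: (..., n, d) with d even; pos: (n,) integer positions along one axis.
    d = x.shape[-1]
    half = d // 2
    freqs = 1.0 / (base ** (torch.arange(half, dtype=torch.float32) / half))
    angles = pos[:, None].float() * freqs[None, :]            # (n, d/2)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[..., :half], x[..., half:]
    # Rotate channel pairs (x1, x2) by a position-dependent angle.
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)

def rope_2d(q, h, w):
    # q: (batch, heads, h*w, head_dim); head_dim must be divisible by 4.
    b, nh, n, d = q.shape
    assert n == h * w and d % 4 == 0
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    ys, xs = ys.flatten(), xs.flatten()                        # (h*w,)
    # First half of the channels encodes the row index, second half the column index.
    q_y, q_x = q[..., : d // 2], q[..., d // 2 :]
    return torch.cat([rope_1d(q_y, ys), rope_1d(q_x, xs)], dim=-1)
```

The same rotation would be applied to both queries and keys before attention, so the dot product depends only on the relative row and column offsets between patches.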
Thanks for sharing the link. This motivated me to learn about different positional embeddings and how they can be used with Vision Transformers. I am currently testing the implementation and...
I have created a new repository with different types of positional embeddings for Vision Transformers. Would appreciate your feedback on it. Here is the link: https://github.com/s-chh/Vision-Transformer-ViT-Positional-Embeddings
Feel free to reopen this issue here or raise one on the new repo if you need anything.