comrade bionic
Given `attention_vector = torch.cat([self.conv_ex(Z).unsqueeze(dim=1), self.conv_ex(Z).unsqueeze(dim=1)], dim=1)`, `attention_vector = self.softmax(attention_vector)`, and `self.softmax = nn.Softmax(dim=1)`, it seems that the two concatenated elements of `attention_vector` are identical, so if you apply...
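A quick way to check the concern above (sketched in NumPy rather than PyTorch; `z` is a stand-in for the output of `self.conv_ex(Z)`): softmax along the stacking dimension of two identical branches always yields equal weights of 0.5, so the attention no longer discriminates between the branches.

```python
import numpy as np

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Two identical branches stacked along dim 1, mirroring the torch.cat call.
z = np.random.randn(2, 8)             # stand-in for self.conv_ex(Z)
attention = np.stack([z, z], axis=1)  # shape (2, 2, 8), like unsqueeze + cat
weights = softmax(attention, axis=1)  # softmax over the two branches

# Softmax over identical inputs gives 0.5 for each branch everywhere.
print(np.allclose(weights, 0.5))  # True
```

Using two distinct projections (e.g. two separate conv layers) would restore branch-specific weights.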
Hi, do you have the original video sequence?
Hi murari, I am interested in the MOR-UAV dataset, but I don't know where I can download it. I visited the official site but didn't find a download button for this...
Awesome work! Thank you for sharing. I'd like to ask about comparing two images: one is the ground-truth image and the other is the reconstructed result. Due to certain...
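For comparing a ground-truth image against a reconstruction, a common starting point is PSNR. A minimal sketch (the `psnr` helper and the random test images are illustrative, not from the repository):

```python
import numpy as np

def psnr(gt, rec, data_range=1.0):
    """Peak signal-to-noise ratio between two images (higher = closer)."""
    mse = np.mean((gt.astype(np.float64) - rec.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)

gt = np.random.rand(64, 64)                                    # ground truth
rec = np.clip(gt + np.random.normal(0, 0.05, gt.shape), 0, 1)  # noisy recon
print(f"PSNR: {psnr(gt, rec):.1f} dB")
```

Note that `data_range` must match the image scale (1.0 for floats in [0, 1], 255 for 8-bit images), or the scores will be off by a constant.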
Code blocks do not wrap
In edit mode, code blocks don't wrap, which makes them hard to read: a scrollbar is generated on each long line. I'd like it to behave as in reading mode, with a single scrollbar at the bottom of the code block.
Thank you for sharing the code. I'd like to ask a question about the MHSA module: the original text's description of the output seems to suggest attention weights, but why...
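The distinction the question touches on is that self-attention produces two distinct tensors: the attention weights (a softmax over scores, rows summing to 1) and the module output (the weights applied to the values). A minimal single-head sketch in NumPy (not the repository's MHSA implementation):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, wq, wk, wv):
    """Single-head self-attention; returns both the output and the weights."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])  # scaled dot-product scores
    weights = softmax(scores, axis=-1)       # attention weights, rows sum to 1
    return weights @ v, weights              # output = weighted sum of values

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))                           # 4 tokens, dim 8
wq, wk, wv = (rng.standard_normal((8, 8)) for _ in range(3))
out, attn = self_attention(x, wq, wk, wv)
print(out.shape, attn.shape)  # (4, 8) (4, 4)
```

The weights (`attn`) describe where each token attends; the output (`out`) is what the next layer consumes. Documentation that says "attention" can mean either one, which is likely the source of the confusion.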
The default YAML file does not seem to work.
I would like to ask about the reproducibility issues with SSIM and MS-SSIM. I am considering using MS-SSIM as a loss function to supervise my model's training. However, I have...
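SSIM reproducibility issues usually come down to implementation details: window size, Gaussian vs. uniform windows, padding, and the `data_range` constant all vary between libraries. A simplified single-window SSIM using global statistics (the standard metric uses a sliding Gaussian window, so this is only a sketch of the formula, not a drop-in replacement):

```python
import numpy as np

def ssim_global(x, y, data_range=1.0):
    """Simplified SSIM from global statistics (no sliding window)."""
    c1 = (0.01 * data_range) ** 2  # standard stabilizing constants
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

x = np.random.rand(32, 32)
print(ssim_global(x, x))     # SSIM of an image with itself is 1.0
loss = 1.0 - ssim_global(x, x)  # typical SSIM-as-loss formulation
```

When using MS-SSIM as a training loss, the usual form is `1 - ms_ssim(pred, target)`; mismatched `data_range` between training and evaluation is a common cause of irreproducible scores.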