
XFeat + LightGlue

Open guipotje opened this issue 1 year ago • 14 comments

Hello everyone,

I'm training some LightGlue variations (looking for a neat trade-off between model size and accuracy), and I will update the repo with the model and weights in the next few weeks!

You can follow this issue if you are interested.

Best,

Guilherme

guipotje avatar Jul 10 '24 22:07 guipotje

That's great! Looking forward to your work

cxb1998 avatar Jul 16 '24 07:07 cxb1998

Hey guys, I just released a version of the LightGlue matcher, please check it out in the README.
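
For anyone looking for a quick start, here is a minimal usage sketch based on the README example (the torch.hub entry point, the `detectAndCompute` / `match_lighterglue` method names, and the return signature are assumed from the repo and may differ in your version):

```python
import cv2
import torch

# Load XFeat via torch.hub (entry point assumed from the repo README).
xfeat = torch.hub.load('verlab/accelerated_features', 'XFeat',
                       pretrained=True, top_k=4096)

im0 = cv2.imread('image0.jpg')
im1 = cv2.imread('image1.jpg')

# Detect keypoints + 64-D descriptors for each image (batch of 1, hence [0]).
out0 = xfeat.detectAndCompute(im0, top_k=4096)[0]
out1 = xfeat.detectAndCompute(im1, top_k=4096)[0]

# The LighterGlue matcher expects the original image size (W, H) in the dict.
out0.update({'image_size': (im0.shape[1], im0.shape[0])})
out1.update({'image_size': (im1.shape[1], im1.shape[0])})

# Matched keypoint coordinates in each image (return signature assumed).
mkpts0, mkpts1 = xfeat.match_lighterglue(out0, out1)
```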

guipotje avatar Jul 21 '24 23:07 guipotje

Hello, author. Didn't you say that XFeat's performance is better than SuperPoint's? Why does SuperPoint look stronger in the data this time?

muchluv525 avatar Jul 22 '24 01:07 muchluv525

Hi @muchluv525,

Please note that we trained a smaller and faster version of LightGlue.

Other than that, there are still a few reasons why SuperPoint + LightGlue might be better than XFeat + LG (full size):

1. Descriptor embedding: XFeat extracts much more compact descriptors (64-D vs 256-D);
2. SuperPoint has a much larger backbone.

guipotje avatar Jul 22 '24 01:07 guipotje

And the number of layers is just 6. BTW, would you upload the training code of LightGlue for XFeat?

zhouzq-thu avatar Jul 22 '24 02:07 zhouzq-thu

I see, thank you for your answer. I'm a beginner and look forward to a more detailed explanation of XFeat.

muchluv525 avatar Jul 22 '24 02:07 muchluv525

@guipotje Have you tried training Xfeat and LightGlue end-to-end?

noahzn avatar Aug 02 '24 07:08 noahzn

Hello @noahzn, I haven't tried to train it end-to-end. It might deliver some improvements when backpropagating through the descriptors, as mentioned in SuperGlue's paper (section 5.4). However, it might also generalize less well to different scenes.

guipotje avatar Aug 06 '24 12:08 guipotje

Hello, I encountered an error while converting the model to ONNX using torch.export. Here is the code we used for the conversion: (screenshot, 2024-08-12 14-43-37)

This is the error: (screenshot, 2024-08-12 14-44-29)
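
For reference, a generic export along these lines is one way to get the XFeat backbone into ONNX (a sketch only, not the code from the screenshots: the `.net` attribute, the dummy input shape, the output names, and the opset version are all assumptions and may need adjusting):

```python
import torch

# Hypothetical ONNX export sketch; attribute and output names are assumptions.
xfeat = torch.hub.load('verlab/accelerated_features', 'XFeat', pretrained=True)
model = xfeat.net.eval().cpu()  # assuming the CNN backbone is exposed as `.net`

# B x C x H x W dummy input; resolution here is arbitrary.
dummy = torch.randn(1, 3, 480, 640)

torch.onnx.export(
    model, dummy, 'xfeat.onnx',
    input_names=['image'],
    output_names=['feats', 'keypoints', 'heatmap'],   # assumed output ordering
    dynamic_axes={'image': {2: 'height', 3: 'width'}},  # allow variable H/W
    opset_version=16,
)
```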

zhangchang0127 avatar Aug 12 '24 06:08 zhangchang0127

Hello,

Thank you very much for your great work.

I am curious about the modifications made to the LightGlue network structure to achieve the balance between inference accuracy and speed mentioned in the README. Will this part of the code be made publicly available?

I checked the match_lighterglue.py file, but it does not provide more information on this aspect.

EndlessPeak avatar Aug 15 '24 06:08 EndlessPeak

Hi @guipotje, I might try end-to-end training; do you have any ideas about the implementation? Since XFeat and LightGlue use different homography code for training, I'm wondering if it's possible to keep each network's own random homography code but optimize the two networks together. This would probably require XFeat to return a loss, which we then add to LightGlue's loss and backprop through together. Do you think this is the minimal-effort way to train the two networks end-to-end?
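
Roughly what I have in mind is something like this (a sketch only; `xfeat_with_loss`, `lightglue_loss`, and the loss weighting are placeholders, not the actual repo API):

```python
import torch

# Joint optimization sketch: one optimizer over both networks, losses summed,
# a single backward pass so gradients flow through the shared descriptors.
# All function names and the 0.5 weighting below are placeholders.
optimizer = torch.optim.AdamW(
    list(xfeat.parameters()) + list(lightglue.parameters()), lr=1e-4
)

def train_step(batch):
    optimizer.zero_grad()

    # XFeat forward on both views, each keeping its own supervision/loss.
    feats0, loss_xfeat0 = xfeat_with_loss(batch['image0'], batch)  # placeholder
    feats1, loss_xfeat1 = xfeat_with_loss(batch['image1'], batch)  # placeholder

    # LightGlue consumes XFeat descriptors without detaching them,
    # so its matching loss backpropagates into the XFeat backbone.
    matches = lightglue(feats0, feats1)
    loss_lg = lightglue_loss(matches, batch['gt_matches'])         # placeholder

    loss = loss_lg + 0.5 * (loss_xfeat0 + loss_xfeat1)  # weighting is arbitrary
    loss.backward()
    optimizer.step()
    return loss.item()
```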

noahzn avatar Sep 03 '24 06:09 noahzn

Great work! I notice that the lighter version of your LightGlue uses n_layers=6, while the original LightGlue paper reports metrics for L = 3, 5, 7, 9. I am a little curious about your choice of n_layers.

noahzhy avatar Feb 12 '25 16:02 noahzhy

Hello, thank you very much for your work. However, when I evaluated the notebooks' XFeat + LightGlue on MegaDepth-1500, I found that sometimes there were too few output matches, causing the geometric verification (RANSAC) to fail. Do you have any suggestions? Looking forward to your reply.
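
A guard like the following avoids the failure when matches are scarce (a sketch only, not code from this repo; the MAGSAC threshold and confidence values are arbitrary):

```python
import cv2
import numpy as np

MIN_MATCHES = 8  # a fundamental matrix needs at least 8 correspondences

def verify(mkpts0, mkpts1):
    """Geometric verification that degrades gracefully when matches are scarce."""
    if len(mkpts0) < MIN_MATCHES:
        # Too few matches to fit a model: report zero inliers instead of crashing.
        return None, np.zeros(len(mkpts0), dtype=bool)
    F, mask = cv2.findFundamentalMat(mkpts0, mkpts1, cv2.USAC_MAGSAC, 1.0, 0.999)
    if F is None or mask is None:
        return None, np.zeros(len(mkpts0), dtype=bool)
    return F, mask.ravel().astype(bool)
```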

zw-92 avatar Apr 15 '25 05:04 zw-92

Hi, we find that when we set top_k=4096, some keypoints have scores < 0, so the number of keypoints we get is < 4096 and batching then breaks. How can we get the same number of keypoints for every image?
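
One way to keep a fixed count is to pad each image's detections up to top_k before stacking (a sketch only; the tensor layout and the -1 score sentinel are assumptions, not the repo's API):

```python
import torch

def pad_to_top_k(kpts, scores, descs, top_k=4096):
    """Pad (or truncate) one image's detections to exactly top_k entries.

    Padded slots get zero keypoints/descriptors and a score of -1 so they can
    be masked out later. The field layout here is an assumption.
    """
    n = kpts.shape[0]
    if n >= top_k:
        return kpts[:top_k], scores[:top_k], descs[:top_k]
    pad = top_k - n
    kpts = torch.cat([kpts, kpts.new_zeros(pad, kpts.shape[1])], dim=0)
    scores = torch.cat([scores, scores.new_full((pad,), -1.0)], dim=0)
    descs = torch.cat([descs, descs.new_zeros(pad, descs.shape[1])], dim=0)
    return kpts, scores, descs
```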

July0928 avatar Jul 09 '25 02:07 July0928