LATCH descriptor does not seem to work well on my dataset
@mdaiter I have tried LATCH binary and LATCH unsigned, and it seems neither of them works very well on my image set (fewer features are detected and far fewer matches are found). Are there any suggestions for tuning the parameters?
Also, matching is still not fast even when I use GPU_LATCH. Is there any way to speed it up?
@LingyuMa what's your dataset and what numbers are you getting?
@LingyuMa I also set the ratio parameter to 0.99 when matching: binary descriptors are sensitive to those sorts of changes, and these fluctuations can seriously kill matching ability.
I have also changed that to 0.99
I have attached the image set and matches.bin, can you have a look? https://drive.google.com/file/d/0BwWAt5w3811WdG16Z1ZKRGpVTFk/view?usp=sharing https://drive.google.com/file/d/0BwWAt5w3811WV1U5Wm9hTEdGTk0/view?usp=sharing
@mdaiter What parameters are you using for matching?
@LingyuMa I'm just using -r 0.99 . That's it...hm. How many putatives do you get, and how many geometrics? Are you using -g e or -g f?
@mdaiter Can you run my dataset on your computer (to see what happens)? I am using the fundamental matrix for filtering, so it is -g f. I have attached my matches.f.bin; I'm not sure how to inspect it.
./bin/openMVG_main_exportMatches -i outputLatch/sfm_data.json -d outputLatch -m outputLatch/matches.putative.bin -o matches will give you all of your matcher data back and export it to svgs. Curious to see the numbers.
It gave me a bunch of SVGs showing the matches. Is there a way to show the total number?
@mdaiter For matches.f.bin, all I can say is that it gave me 136 image pairs. The pairs seem reasonable. The file is 49.5 MB, much smaller than SIFT's (209.4 MB); I can also see that matching becomes sparse with LATCH.
The total number of matches between each pair appears just before the end of the SVG filename. The format is x_y_n_.svg, where x is the ID of the first image, y is the ID of the second image, and n is the number of matches between the two images.
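Given that filename format, the per-pair counts can be summed to get a total. A minimal sketch (the helper name and the exact filename pattern, including the trailing underscore, are assumptions based on the description above):

```python
import re

def total_matches(svg_names):
    """Sum the n field of filenames shaped like x_y_n_.svg."""
    total = 0
    for name in svg_names:
        m = re.fullmatch(r"(\d+)_(\d+)_(\d+)_\.svg", name)
        if m:
            total += int(m.group(3))  # n = number of matches for this pair
    return total

print(total_matches(["0_1_120_.svg", "0_2_85_.svg"]))  # -> 205
```

Pointing this at the export directory (e.g. via `os.listdir`) would give the overall putative-match count across all pairs.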
Here are the two output matches svg files screenshots
@mdaiter the second image is LATCH
I know it is hard to see, but the number of image pairs is about five times lower.
@LingyuMa Can you send me the SIFT matches that align with the LATCH matches? It seems as though the SIFT matches compare sets whose LATCH equivalents aren't visible from your screenshots
The problem is that the matched images are not the same for the two descriptors. I'll see what I can do when I come back from lunch.
@mdaiter Can you have a look at these two photos?

The first one is LATCH.
@LingyuMa these seem correct. Maybe @csp256 (original author of the library) could provide some insight, but I believe these are the results you should be receiving back from each image.
But the number of matches seems much lower than with SIFT, which makes the global reconstruction fail. Is there a way to increase the number of matches?
@LingyuMa if you modify these two parameters: https://github.com/mdaiter/cudaLATCH/blob/cf05a8fdf19b83519e68cc0c184e334f83be18e5/params.hpp and here: https://github.com/mdaiter/openMVG/blob/custom/src/openMVG/matching_image_collection/gpu/params.hpp you'll be able to tune matching threshold and total allowed points to detect. Each increment in NUM_SM gives back 512 more key points.
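To illustrate the relationship described above (the constant of 512 keypoints per NUM_SM increment comes from the comment; the function is a hypothetical sketch, not code from either params.hpp):

```python
# Assumed from the description above: each increment of NUM_SM
# raises the keypoint budget by 512.
KEYPOINTS_PER_SM = 512

def max_keypoints(num_sm):
    """Hypothetical helper: keypoint budget implied by NUM_SM."""
    return num_sm * KEYPOINTS_PER_SM

print(max_keypoints(8))  # -> 4096
```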
Also, I have found that matching is still pretty slow compared with openMVG's default matching method + SIFT, which is really weird. Is there any way to accelerate it?
@LingyuMa if you're using the LATCH_UNSIGNED method, I'd use the GPU_LATCH matching method; otherwise, you're technically comparing two different fundamental ways of matching. With SIFT, you'd have to run the BRUTE_FORCE_MATCHER_L2 in order to perform a fair comparison. I have the numbers on my computer, and it's far slower than the BRUTE_FORCE_HAMMING matcher.
@mdaiter The problem is that I am using LATCH_UNSIGNED + GPU_LATCH and compared it with SIFT + ANNL2; the speed does not improve.
Something is definitely up. The number of matches is what I would expect some of the time (10k-ish), but much lower the rest of the time (<1k). I am interpreting this as my code working and something upstream being broken.
I really do not think that the ratio test makes sense in Hamming space: as a first-order improvement, you should impose a hard threshold on the gap between the best and second-best matches. This is done in the GPU matcher.
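To illustrate why an absolute gap is preferable to a ratio in Hamming space (the function name and threshold value are illustrative, not from the library): Hamming distances are small integers, so a ratio can look strong even when the best and second-best candidates are nearly indistinguishable.

```python
def keep_match(best_dist, second_dist, min_gap=8):
    """Accept a match only if the second-best candidate is at least
    min_gap Hamming-distance units worse than the best one."""
    return (second_dist - best_dist) >= min_gap

print(keep_match(3, 4))    # ratio 0.75, but a gap of only 1 -> rejected
print(keep_match(20, 35))  # similar ratio, but a gap of 15 -> kept
```

Both pairs pass a 0.99 ratio test, yet the first is barely distinguishable from its runner-up; the hard gap rejects it.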
If the CPU matcher is slow, you are probably being bitten by the Intel popcount bug. Can you try the GPU matcher?
Agreed with @csp256 . I'm curious: what is your total number of matches putatively? You can check by running the exportMatches command with matches.putative.bin instead of matches.f.bin
@LingyuMa if you're looking for a GPU Brute Force L2 matcher, I just finished one up and should be pushing code either today or tomorrow. It's on the default OpenCV version of the GPU matcher, but I'm implementing a CUDA dynamic parallel solution at the moment and will inform you when it's ready.
@LingyuMa my GPU L2 Brute Force matcher is now finished. Feel free to use it with SIFT, PNNet, LATCH, DeepSiam2Stream, or DeepSiam.
Did you try extracting LATCH descriptors on SIFT keypoints? Since there is a "clean" SIFT integration pending, it could be easy to test: https://github.com/openMVG/openMVG/issues/556 We could also test the LATCH descriptor on an affine detector (we can extract rectified patch regions and compute the descriptor on them). See here for affine patch normalization: https://github.com/openMVG/openMVG/blob/master/src/openMVG_Samples/features_affine_demo/features_affine_demo.cpp (only the rotation invariance is missing: compute the rectified patch rotation, then rotate it).