range_width having no effect in FeatureMatcher?
Hi, I'm trying to stitch a sequence of images that I know are adjacent, and I thought that setting range_width to 1 when matching might improve results, but it doesn't seem to have any effect. Perhaps I've misunderstood how this parameter works.
I took a fresh copy of Stitching Tutorial.ipynb and changed the range_width parameter in the matching section, and I still get a full matrix of confidence values for every pair of images no matter what value I choose. I expected that setting it to 1 would force it to only consider adjacent pairs (e.g. pair 1-2 would have a confidence but pair 1-3 would not). That's what the C++ code seems to be doing, but I admit I haven't dug very deeply into it.
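For context, this is roughly the change I made in the notebook (written from memory, so the exact names may differ slightly from the tutorial cells):

from stitching.feature_matcher import FeatureMatcher

# Expectation: with range_width=1 only index-adjacent pairs (i, i+1) get
# matched, and every other pair comes back with confidence 0.
matcher = FeatureMatcher(matcher_type="homography", range_width=1)
matches = matcher.match_features(features)  # features from the detection cell
print(matcher.get_confidence_matrix(matches))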
Here are my results:

You can see in the final block that, under the hood, the type is being set to cv2.detail.BestOf2NearestRangeMatcher correctly.
Do you have any suggestions, or have I misunderstood how this is supposed to work? If I manually set the confidence of non-adjacent images to zero (roughly the snippet below), I get a better stitching result in later stages, but I was also hoping for a significant performance increase from comparing fewer images. Thanks!
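For reference, by manually zeroing the confidences I mean something along these lines, assuming matches is the list returned by the matching step:

for m in matches:
    if abs(m.src_img_idx - m.dst_img_idx) > 1:
        m.confidence = 0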
Could you share the images you are using?
I'm using the default images that come in the Stitching Tutorial Jupyter Notebook, no other changes
OK, I had a first brief look, but I haven't worked with the RangeMatcher yet and don't quite understand why there are no changes. If you dig deeper, please let me know.
Thanks for looking. I've been doing some more digging, and honestly, I don't think this is an issue with stitching. I finally bit the bullet, set up a build of the C++ source code, and ran stitching_detailed.cpp with the range_width parameter, and BestOf2NearestRangeMatcher still gives me a full list of matches with confidence values between all image pairs, even with the range set to 1. I'll report back if I work anything out.
Rusty as my C++ is, I noticed that BestOf2NearestRangeMatcher::operator () didn't seem to be called at all in my tests.
At least for stitching_detailed.cpp, I believe the issue is simply that the operator() overload isn't declared virtual in matchers.hpp, while stitching_detailed.cpp calls it through a base-class pointer:
Ptr<FeaturesMatcher> matcher;
if (matcher_type == "affine")
    matcher = makePtr<AffineBestOf2NearestMatcher>(false, try_cuda, match_conf);
else if (range_width==-1)
    matcher = makePtr<BestOf2NearestMatcher>(try_cuda, match_conf);
else
    matcher = makePtr<BestOf2NearestRangeMatcher>(range_width, try_cuda, match_conf);
After marking the function as virtual it works correctly and the matrix of confidence values comes back with all zeros for non-index-adjacent images when range_width is 1.
I'm not sure how the Python bindings work under the hood, though, since this is a C++ inheritance issue, and it seems odd because stitching constructs cv2.detail_BestOf2NearestRangeMatcher directly. In C++, avoiding polymorphism in the same way also works:
Ptr<BestOf2NearestRangeMatcher> matcher = makePtr<BestOf2NearestRangeMatcher>(range_width, try_cuda, match_conf);
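For what it's worth, this is roughly how the same symptom shows up from Python through the raw bindings (constructor arguments written from memory, so treat it as a sketch):

import cv2 as cv

# range_width=1 should restrict matching to index-adjacent pairs, but before
# the fix every pair still comes back with a non-zero confidence.
matcher = cv.detail_BestOf2NearestRangeMatcher(1, False, 0.3)
pairwise_matches = matcher.apply2(features)  # features from the detection step
print([m.confidence for m in pairwise_matches])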
But I guess this is just to do with the Python -> C++ layer. I'll post an issue over on the OpenCV GitHub.
Nice, thanks for your effort. Please link the issue if you don't mind!
OK I saw it xD
https://github.com/opencv/opencv/issues/22315
FWIW, if you want to emulate the behaviour of the range-based matcher for now, I've realised you can do so manually by doing the image pairing yourself and calling .apply rather than .apply2, e.g. something like the code below. (If you have any idea how to more nicely create a deep copy of an OpenCV object, please let me know!)
import cv2 as cv
import numpy as np

# features and range_width come from the earlier matching section
matcher = cv.detail_BestOf2NearestMatcher()
matches = []
for i in range(len(features)):
    for j in range(len(features)):
        if 0 < j - i <= range_width:
            # within range and j > i: compute the match directly
            match = matcher.apply(features[i], features[j])
            match.src_img_idx = i
            match.dst_img_idx = j
            matches.append(match)
        elif 0 < i - j <= range_width:
            # within range and i > j: reuse the (j, i) match computed earlier
            match_to_copy = matches[j * len(features) + i]
            match = cv.detail.MatchesInfo()
            # swap src and dst, invert homography
            match.src_img_idx = match_to_copy.dst_img_idx
            match.dst_img_idx = match_to_copy.src_img_idx
            match.H = np.linalg.inv(match_to_copy.H)
            match.inliers_mask = match_to_copy.inliers_mask
            match.num_inliers = match_to_copy.num_inliers
            match.confidence = match_to_copy.confidence
            dmatches = []
            for dmatch_to_copy in match_to_copy.matches:
                dmatch = cv.DMatch()
                dmatch.distance = dmatch_to_copy.distance
                dmatch.imgIdx = dmatch_to_copy.imgIdx
                # swap queryIdx and trainIdx
                dmatch.queryIdx = dmatch_to_copy.trainIdx
                dmatch.trainIdx = dmatch_to_copy.queryIdx
                dmatches.append(dmatch)
            match.matches = dmatches
            matches.append(match)
        else:
            # out of range (or i == j): append an empty MatchesInfo
            matches.append(cv.detail.MatchesInfo())
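As a quick sanity check on the loop above, you can reshape the confidences into a matrix and confirm that everything outside the band is zero:

conf_matrix = np.array([m.confidence for m in matches]).reshape(len(features), len(features))
print(conf_matrix)  # only entries within range_width of the diagonal should be non-zero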
Nice, thank you! Do you need the functionality in stitching asap, or could we wait another week or two to see if the issue gets solved within OpenCV?
Btw, do you think https://github.com/opencv/opencv/issues/20945 is similar? I haven't been able to check if it's working in C++, but if it is, it could be a wrapper issue as well.
Nice, thank you! Do you need the functionality in stitching asap, or could we wait another week or two to see if the issue gets solved within OpenCV?
No not at all, the workaround above is working for me for now, thanks!
Btw, do you think opencv/opencv#20945 is similar? I haven't been able to check if it's working in C++, but if it is, it could be a wrapper issue as well.
I'll take a look at this later and see what happens in the C++ version too.
Btw, do you think opencv/opencv#20945 is similar? I haven't been able to check if it's working in C++, but if it is, it could be a wrapper issue as well.
In short, yes! The output argument masks should be marked with CV_IN_OUT in the header file, but it isn't.
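In practice, that means if you call the finder through the raw bindings, the masks you pass in come back without the seams applied; roughly like this (variable names are just illustrative):

import cv2 as cv

finder = cv.detail_GraphCutSeamFinder('COST_COLOR_GRAD')
# Before the fix, find() computes the seams in C++ but the Python-side masks
# are left unchanged, because masks isn't wrapped as an in/out argument.
finder.find(imgs_float32, corners, seam_masks)  # images, corners and masks from earlier steps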
Your issue has been open for so long that I decided to just get a Python build working from the source code. Marking that argument fixes it, and gc_color and gc_colorgrad now work correctly within stitching if I write e.g.:
seam_finder = SeamFinder()
seam_finder.finder = cv.detail_GraphCutSeamFinder('COST_COLOR_GRAD')
I've made a pull request for both fixes: https://github.com/opencv/opencv/pull/22329
Thank you! This is something that has been bugging me for some time. Unfortunately I have zero C++ experience and am somewhat dependent on the Python wrappers and docs.
No problem, thanks for all your work on the stitching library! 😁
(p.s. if you do run into any more issues with the C++ code or Python bindings feel free to message me, can't promise anything but I can take a look!)
Thanks! BTW, which use case are you using the library for? I'm always interested in what brings people here.
I taught some of the maths behind photo stitching in my old university lecturing job, so I was familiar with the ideas, and now I'm putting together some proofs of concept for an idea at work; it's really just for lining up photos. Your library just makes it much easier than dealing with the raw OpenCV bindings!
By the way, I'll make a pull request on this repo (or maybe more than one) at some point soon with some minor ideas from what I've found playing around with settings.
Your library just makes it much easier than dealing with the raw OpenCV bindings!
Thanks! I'm happy I published it since it seems to help a bunch of other people.
By the way, I'll make a pull request on this repo (or maybe more than one) at some point soon with some minor ideas from what I've found playing around with settings.
🚀