More info
Hey! I am also trying to make a project similar to yours, and I am having difficulty understanding what does what in your code.
So in your launch files "tracker1" and "tracker2" you are positioning your cameras and also linking them to their calibration files. Then you have three scripts that are very similar, so I started looking into "image_track.py". In this file you track the pixels in the range [lower_threshold, upper_threshold], and then, after some processing, you find the centroid of the object and publish it for each camera.
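Just to check my understanding, here is a minimal sketch of what I think that per-camera step does (lower_threshold / upper_threshold are the names from your script; everything else, like the HSV conversion and the find_centroid helper, is my own assumption, not your actual code):

```python
import cv2
import numpy as np

# Assumed per-camera step: threshold a colour range, then take the
# centroid of the pixels that fall inside it.
def find_centroid(bgr_image, lower_threshold, upper_threshold):
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # Keep only the pixels inside [lower_threshold, upper_threshold]
    mask = cv2.inRange(hsv, np.array(lower_threshold), np.array(upper_threshold))
    # Image moments of the binary mask give the centroid of the blob
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None  # nothing in range
    cx = m["m10"] / m["m00"]
    cy = m["m01"] / m["m00"]
    return (cx, cy)  # pixel coordinates, published for each camera
```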
Am I getting it right? What I don't understand is where you are doing the mathematical calculation to get the absolute position of your object in the world.
Ah, I found it in disparity_track, my bad! So image_track is for finding the right colour threshold, and disparity_track is for tracking the object. Now I am just wondering: is the matrix self.Kinv the inverse of the camera matrix you get from the calibration, and what is your mathematical way of getting the disparity ratio (even if you use the trackbar later)?
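To make my question concrete, this is the textbook rectified-stereo relation I would expect to find; the focal length f, baseline B, and my guess that the "disparity ratio" is something like f*B are all my assumptions, not something I read in your code:

```python
import numpy as np

# Assumed pinhole / rectified-stereo relations:
#   disparity d = u_left - u_right
#   depth     Z = f * B / d            (f = focal length in pixels, B = baseline)
#   3D point  [X, Y, Z]^T = Z * Kinv @ [u, v, 1]^T
def triangulate(u_left, v_left, u_right, K, baseline):
    Kinv = np.linalg.inv(K)              # inverse of the calibration matrix
    disparity = u_left - u_right         # pixel disparity between the two views
    f = K[0, 0]                          # focal length in pixels from calibration
    Z = f * baseline / disparity         # depth along the optical axis
    ray = Kinv @ np.array([u_left, v_left, 1.0])
    return Z * ray                       # 3D point in the left-camera frame
```

Is that roughly what disparity_track computes, or do you get the ratio some other way?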