Running in real-time on live data
Hello! I was able to set up and run MonDi on the VOID dataset and get evaluation results similar to those reported.
Evaluation results:
|      | MAE    | RMSE   | iMAE   | iRMSE  |
|------|--------|--------|--------|--------|
| Mean | 30.884 | 87.478 | 15.308 | 38.333 |
| +/-  | 23.885 | 79.675 | 18.747 | 51.171 |
Total time: 13383.36 ms; average time per sample: 16.73 ms
How could I go about running MonDi in real-time on live images? I noticed that the dataset images are preprocessed into triplets. Would this also have to be done for real-time images? Also, my depth stream comes in as a point cloud; would it have to be saved as PNG depth images, as in the dataset?
Thank you!
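For reference, converting a live point-cloud depth stream into sparse PNG depth maps might look roughly like the sketch below. This is only a sketch: it assumes a VOID-style encoding (16-bit PNG storing depth in meters scaled by 256) and known pinhole intrinsics, and none of the names here come from the MonDi code base.

```python
import numpy as np
from PIL import Image

def pointcloud_to_depth_png(points, K, height, width, path):
    """Project an (N, 3) point cloud in the camera frame to a sparse
    depth map and save it as a 16-bit PNG (assumed encoding: meters * 256)."""
    # Keep only points in front of the camera
    points = points[points[:, 2] > 0]

    # Project onto the image plane with pinhole intrinsics K (3x3)
    uvw = (K @ points.T).T
    u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)
    v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)
    z = points[:, 2]

    # Discard projections that fall outside the image
    valid = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    u, v, z = u[valid], v[valid], z[valid]

    # Rasterize; where several points hit the same pixel, keep the nearest
    depth = np.zeros((height, width), dtype=np.float32)
    order = np.argsort(-z)  # far points first, so near points overwrite them
    depth[v[order], u[order]] = z[order]

    # Save as uint16 PNG with the assumed meters-times-256 scaling
    Image.fromarray((depth * 256.0).astype(np.uint16)).save(path)

# Example usage (hypothetical intrinsics and point cloud):
# K = np.array([[fx, 0, cx], [0, fy, cy], [0, 0, 1]])
# pointcloud_to_depth_png(points_xyz, K, 480, 640, 'sparse_depth.png')
```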
@alexklwong Hey Alex, similar question: is it possible to feed it dense depth maps? I'm feeding it a dense depth map from an Azure Kinect and am struggling to see good results, but I'm not sure whether that's because the depth is dense or because I'm setting some other parameter incorrectly.
Hi, it looks like this code base is an older version of the original (in fact, I looked at the training loop and it appears to contain an error). I will ask the students to prepare the new code and add an online inference script.
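Until that script is available, a per-frame inference loop would presumably look something like the sketch below. The model-loading and forward calls are placeholders (the actual MonDi interface may differ), and the frame and depth grabbers are assumed to return an RGB image and a sparse metric depth map registered to it.

```python
import numpy as np

# Placeholder import -- not the actual MonDi module layout.
from mondi import MonDiModel

def run_live(camera, depth_sensor, checkpoint_path):
    # Hypothetical restore/forward interface; consult the released
    # online inference script for the real one.
    model = MonDiModel()
    model.restore(checkpoint_path)
    model.eval()

    while True:
        image = camera.read()               # (H, W, 3) uint8 RGB frame
        sparse_depth = depth_sensor.read()  # (H, W) float32, meters, 0 where empty

        # Validity map marking pixels that carry a depth measurement
        validity_map = (sparse_depth > 0.0).astype(np.float32)

        output_depth = model.forward(image, sparse_depth, validity_map)
        yield output_depth
```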