Question about usage with SLAM
Hi! Thanks for your great work!
I've just read your paper and have some questions about it.
The paper directly uses the poses estimated by the SLAM algorithm and transforms the point cloud sequence into the current frame before calculating the residual images, right? So the poses need to be accurate, or the residual images will be wrong. Am I right?
If what I said above is right, then when a moving object causes large drift in the odometry, your algorithm might not improve the odometry accuracy, since the moving object could not be identified accurately.
Thanks for your reply in advance!!
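The pipeline described above (spherical range projection of each transformed scan, then a normalized range difference) can be sketched roughly as follows. This is not the authors' exact implementation; the sensor parameters (`fov_up`, `fov_down`, a 64x900 image) are assumptions for an HDL-64-style LiDAR:

```python
import numpy as np

def range_projection(points, h=64, w=900, fov_up=3.0, fov_down=-25.0, max_range=50.0):
    """Project a point cloud (N, 3) into a spherical range image (h, w).

    Invalid pixels are set to -1.
    """
    fov_up_rad = np.radians(fov_up)
    fov_down_rad = np.radians(fov_down)
    fov = fov_up_rad - fov_down_rad

    depth = np.linalg.norm(points, axis=1)
    valid = (depth > 0) & (depth < max_range)
    points, depth = points[valid], depth[valid]

    # horizontal angle -> column, vertical angle -> row
    yaw = -np.arctan2(points[:, 1], points[:, 0])
    pitch = np.arcsin(points[:, 2] / depth)
    u = ((yaw / np.pi + 1.0) * 0.5 * w).astype(np.int32) % w
    v = np.clip(((1.0 - (pitch - fov_down_rad) / fov) * h).astype(np.int32), 0, h - 1)

    range_image = np.full((h, w), -1.0, dtype=np.float32)
    # write farthest points first so the closest point per pixel wins
    order = np.argsort(depth)[::-1]
    range_image[v[order], u[order]] = depth[order]
    return range_image

def residual_image(range_cur, range_prev_transformed, eps=1e-6):
    """Normalized absolute range difference; pixels invalid in either image get 0."""
    valid = (range_cur > 0) & (range_prev_transformed > 0)
    res = np.zeros_like(range_cur)
    res[valid] = np.abs(range_cur[valid] - range_prev_transformed[valid]) / (range_cur[valid] + eps)
    return res
```

Here `range_prev_transformed` is the range image of a past scan after transforming its points into the current frame with the estimated relative pose, which is exactly where pose errors leak into the residuals.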
Hey @mysterybc, thanks for following our work. Yes, you are right. If the odometry is not accurate, it will influence the MOS results. We also showed an ablation study on noisy poses in Figure 4. You can see that the MOS performance drops as the pose noise increases, until the residual images become so noisy that they no longer resemble anything seen during training and are effectively ignored.
However, in a real application, the proposed method should be run together with the pose estimation, which means we can estimate the pose and conduct MOS iteratively. This may help both the pose estimation and the MOS, and may avoid large drift in the local pose estimation.
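A minimal sketch of that iterative scheme might look as follows. `estimate_pose` and `segment_moving` are hypothetical placeholders (stubs here) standing in for a real odometry backend and the MOS network; they are not part of the released code:

```python
import numpy as np

def estimate_pose(scan, map_points, moving_mask):
    """Hypothetical odometry step: register only the static points of `scan`
    against `map_points`. A real system would run ICP / surfel registration
    here; this stub just returns the identity transform."""
    static_points = scan[~moving_mask]  # drop points flagged as moving
    return np.eye(4)

def segment_moving(scan, pose, prev_scans):
    """Hypothetical MOS step: label points as moving using residual images
    built with the current pose estimate. This stub flags nothing."""
    return np.zeros(len(scan), dtype=bool)

def iterative_pose_and_mos(scan, map_points, prev_scans, n_iters=3):
    """Alternate between odometry and MOS so each benefits from the other."""
    moving_mask = np.zeros(len(scan), dtype=bool)  # start: nothing is moving
    pose = np.eye(4)
    for _ in range(n_iters):
        pose = estimate_pose(scan, map_points, moving_mask)   # odometry on static points
        moving_mask = segment_moving(scan, pose, prev_scans)  # MOS with refined pose
    return pose, moving_mask
```

The point of the loop is that a better pose yields cleaner residual images (better MOS), and a better moving-object mask removes outliers from registration (better pose); whether 2-3 iterations fit a real-time budget is exactly the runtime question discussed below.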
Thanks! And one more question: if I estimate the pose and conduct MOS iteratively, can it still run in real time? I think iterating 2-3 times will push most LiDAR odometry pipelines over 100 ms, which means they can't reach 10 Hz. I didn't take MOS into account, since I don't know its running time.
Yes, runtime could be a problem. The model with the best MOS performance runs at around 20 Hz. There are also faster models, but their MOS performance is worse. I have never tried the idea before; it's interesting and worth trying.
How do I estimate the poses using SLAM? Can you provide a link? Thanks in advance.
Hey @A1-one, one easy way is to use our SuMa with the cleaned scans. You may compare the results before and after cleaning to see the influence of the moving objects.
Thank you for the reply @Chen-Xieyuanli. I have the point cloud data in the form of .bin files. How can I get the poses corresponding to those .bin files?
You could use any LiDAR odometry/SLAM method to estimate the poses of your scans. SuMa is rather easy to use, and you can find the documentation here. You could also use ICP-like algorithms to easily get the poses.
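For KITTI-style data, loading the .bin scans and a poses.txt file might look like this (a sketch, assuming the standard KITTI odometry formats; note that KITTI poses are given in the left camera frame, so the `Tr` calibration matrix is needed to move them into the LiDAR frame):

```python
import numpy as np

def load_scan(bin_path):
    """KITTI-style .bin: a flat float32 array of (x, y, z, intensity) tuples."""
    return np.fromfile(bin_path, dtype=np.float32).reshape(-1, 4)

def load_poses(pose_path):
    """KITTI-style poses.txt: one row-major 3x4 matrix per line,
    extended here to 4x4 homogeneous transforms."""
    poses = []
    with open(pose_path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            mat = np.array(line.split(), dtype=np.float64).reshape(3, 4)
            poses.append(np.vstack([mat, [0.0, 0.0, 0.0, 1.0]]))
    return poses
```

With poses from SuMa (or any other odometry), `poses[i] @ point_homogeneous` transforms scan `i` into the reference frame before building the residual images.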
hey @Chen-Xieyuanli, I am getting this runtime error while running the visualizer in SuMa:
OpenGL Context Version 3.3 core profile
GLEW initialized.
OpenGL context version: 3.3
OpenGL vendor string : VMware, Inc.
OpenGL renderer string: llvmpipe (LLVM 9.0, 256 bits)
Segmentation fault (core dumped)
I have never met such a problem before. Could you please open an issue in the SuMa repo? You may get a solution there.
I recently got caught up in some projects and forgot to reply to your comment. I'll try it when I finish these projects. Thanks a lot!
Hi, if you haven't solved this problem and still want to use LiDAR SLAM to estimate poses, I suggest trying LOAM, since it's easy to implement and its performance is satisfactory.
Thank you @mysterybc
Does it mean that you first need to estimate the pose with the SLAM system, and then remove the dynamic objects? Thanks in advance.