Jiapeng Xie
> Hello, I would like to ask how you produce the visualization of the tracking results, e.g. something like demo.gif. Thanks!

The visualization code is here: https://github.com/hailanyi/3D-Detection-Tracking-Viewer
Segmenting 128-beam point clouds should work, but directly using the weights trained on SemanticKITTI may cause some performance degradation. For deployment in C++, you need to use libtorch to load the model for inference, and also port the residual-generation Python code to C++.
> How do you generate the residual images during real-time operation?

Sorry, I haven't checked GitHub in a long time. For real-time operation you need to modify the residual-generation method: the provided [file](https://github.com/xieKKKi/MotionBEV/blob/master/utils/generate_residual/utils/auto_gen_polar_sequential_residual_images.py) loads all frames and generates the residuals in one pass. You would have to process each frame individually, generating its residuals from the few frames before and after it. Note that this may introduce a delay of a few frames.
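To make the per-frame idea concrete, here is a minimal sketch of an online residual generator. It keeps a small buffer of previous range images and computes a normalized range difference against the current frame, one frame at a time, instead of loading the whole sequence as the offline script does. The class name, buffer size, and normalization below are illustrative assumptions, not MotionBEV's exact implementation, and a real version would also transform previous scans into the current frame using odometry before differencing.

```python
import numpy as np
from collections import deque

class OnlineResidualGenerator:
    """Hypothetical per-frame residual generator (sketch, not the repo's code)."""

    def __init__(self, num_prev=8):
        # Buffer of the most recent range images; acts as the sliding window
        # that replaces the offline all-frames-at-once loading.
        self.buffer = deque(maxlen=num_prev)

    def step(self, range_image):
        """Feed one new (H, W) range image; return (num_prev, H, W) residuals.

        NOTE: a real pipeline would first re-project each buffered scan into
        the current frame using the relative pose from odometry.
        """
        h, w = range_image.shape
        residuals = np.zeros((self.buffer.maxlen, h, w), dtype=np.float32)
        valid_cur = range_image > 0
        for i, prev in enumerate(self.buffer):
            valid = valid_cur & (prev > 0)  # pixels valid in both frames
            diff = np.zeros_like(range_image)
            # Normalized absolute range difference, as commonly used for
            # residual images in moving-object segmentation.
            diff[valid] = np.abs(range_image[valid] - prev[valid]) / range_image[valid]
            residuals[i] = diff
        self.buffer.append(range_image)
        return residuals
```

Using only past frames keeps latency at zero but loses the "following frames" signal mentioned above; buffering a few future frames as well gives better residuals at the cost of that many frames of delay.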
The program seems to fail here: https://github.com/HKUST-Aerial-Robotics/G3Reg/blob/755d3f36cd1d941e005dbd7c055add92093e39bd/src/utils/opt_utils.cpp#L74 The GTSAM version I installed is the latest, 4.2.
> Hi, I trained on the SemanticKITTI dataset but did not reach the official accuracy. I tried (bs: 20, lr: 0.025) and (bs: 8, lr: 0.025); neither reached the official accuracy...