[Feat/Refactor] MID360-Fisheye Support + Core Module Refactor (Params/Storage/Image) for FAST-LIVO2 ROS2 Humble
Key Improvements
Based on the FAST-LIVO2-ROS2-HUMBLE implementation:
- **Parameter System Refactor**: Implemented safe parameter declaration with override support, removed the global parameter server node, and adopted direct parameter injection to resolve camera parameter loading issues with the open-source rpg_vikit (see the first sketch after this list).
- **New Hardware Configuration**: Added a dedicated launch script `mapping_aviz_metacamedu.launch.py` and parameter file `avia_metacamedu.yaml` for full MID360-Fisheye dataset compatibility.
- **MID360-Specific Enhancements**: Enabled per-point hardware timestamp synchronization (nanosecond precision) by storing relative time offsets in `curvature` for motion compensation, and kept the standard blind-zone filter `x² + y² + z² > blind²` (sketch below).
- **Fisheye Camera Correction**: Implemented real-time fisheye distortion correction via `equidistant_cam->undistortImage` (sketch below).
- **Image Processing Pipeline**: Added direct JPEG decompression, gated by an `enable_image_processing` toggle and guarded by timestamp validation, replacing the failed republish nodes (sketch below).
- **Storage System Upgrade**: Introduced Boost.Filesystem for automated multi-level directory creation, fixing Colmap/PCD data-saving failures (sketch below).
- **Documentation Expansion**: Added MID360-Fisheye configuration guides, documented the Sophus 1.22.10 installation requirement, and improved the Vikit integration documentation.
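A minimal sketch of the declare-if-absent pattern behind the parameter refactor, assuming a plain `rclcpp::Node`; the helper name `declare_and_get` is hypothetical, not taken from this PR:

```cpp
#include <string>
#include <rclcpp/rclcpp.hpp>

// Hypothetical helper: declare a parameter only if it is not already known,
// so values injected via YAML/launch overrides win over the fallback and
// re-declaration never throws.
template <typename T>
T declare_and_get(rclcpp::Node &node, const std::string &name, const T &fallback)
{
  if (!node.has_parameter(name)) {
    node.declare_parameter<T>(name, fallback);
  }
  return node.get_parameter(name).template get_value<T>();
}

int main(int argc, char **argv)
{
  rclcpp::init(argc, argv);
  auto node = std::make_shared<rclcpp::Node>("laser_mapping");

  // Direct injection: read camera intrinsics on this node and hand them to
  // rpg_vikit, instead of letting vikit query a global parameter server.
  const double cam_fx = declare_and_get(*node, "camera.fx", 0.0);
  RCLCPP_INFO(node->get_logger(), "camera.fx = %f", cam_fx);

  rclcpp::shutdown();
  return 0;
}
```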
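A sketch of the MID360 preprocessing step, assuming the `livox_ros_driver2` `CustomMsg` layout (per-point `offset_time` in nanoseconds relative to the frame `timebase`); the function name and the `blind` parameter plumbing are illustrative:

```cpp
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <livox_ros_driver2/msg/custom_msg.hpp>

void process_mid360(const livox_ros_driver2::msg::CustomMsg &msg,
                    pcl::PointCloud<pcl::PointXYZINormal> &out,
                    double blind)
{
  const double blind_sq = blind * blind;
  out.clear();
  out.reserve(msg.point_num);

  for (const auto &p : msg.points) {
    // Standard blind-zone filter: keep only points with x²+y²+z² > blind².
    if (p.x * p.x + p.y * p.y + p.z * p.z <= blind_sq) continue;

    pcl::PointXYZINormal pt;
    pt.x = p.x;
    pt.y = p.y;
    pt.z = p.z;
    pt.intensity = p.reflectivity;
    // Per-point hardware timestamp: offset_time is nanoseconds relative to
    // msg.timebase; store it as milliseconds in `curvature` so the motion
    // compensation stage can undistort every point individually.
    pt.curvature = static_cast<float>(p.offset_time) / 1e6f;
    out.push_back(pt);
  }
}
```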
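The equidistant (Kannala-Brandt) model behind `equidistant_cam->undistortImage` is the same one OpenCV's `cv::fisheye` module implements, so an equivalent standalone sketch looks like this; the intrinsics and distortion values below are placeholders, not values from `avia_metacamedu.yaml`:

```cpp
#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>  // cv::fisheye

cv::Mat undistort_fisheye(const cv::Mat &distorted)
{
  // Placeholder fx, fy, cx, cy and k1..k4; the real values come from the
  // camera calibration YAML.
  const cv::Matx33d K(600.0,   0.0, 640.0,
                        0.0, 600.0, 360.0,
                        0.0,   0.0,   1.0);
  const cv::Vec4d D(-0.05, 0.01, -0.002, 0.0003);

  cv::Mat undistorted;
  // Reusing K as the new camera matrix keeps the output geometry close to
  // the input frame.
  cv::fisheye::undistortImage(distorted, undistorted, K, D, K);
  return undistorted;
}
```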
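A sketch of the direct decompression path, assuming `sensor_msgs/CompressedImage` input; the monotonic-timestamp check stands in for the PR's timestamp validation, and the function name is hypothetical:

```cpp
#include <cstdint>
#include <opencv2/imgcodecs.hpp>
#include <rclcpp/rclcpp.hpp>
#include <sensor_msgs/msg/compressed_image.hpp>

cv::Mat decode_compressed(const sensor_msgs::msg::CompressedImage &msg,
                          int64_t &last_stamp_ns,
                          bool enable_image_processing)
{
  if (!enable_image_processing) return {};

  // Timestamp validation: reject frames that are not strictly newer than the
  // previous one (out-of-order or duplicated bag messages).
  const int64_t t_ns = rclcpp::Time(msg.header.stamp).nanoseconds();
  if (t_ns <= last_stamp_ns) return {};
  last_stamp_ns = t_ns;

  // Decode the JPEG payload in-process, with no republish node in between.
  const cv::Mat raw(1, static_cast<int>(msg.data.size()), CV_8UC1,
                    const_cast<uint8_t *>(msg.data.data()));
  return cv::imdecode(raw, cv::IMREAD_COLOR);
}
```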
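A sketch of the automated directory setup with Boost.Filesystem; the `Colmap/` and `PCD/` subpaths are illustrative, not the exact layout from this PR:

```cpp
#include <iostream>
#include <string>
#include <boost/filesystem.hpp>

namespace fs = boost::filesystem;

bool ensure_output_dirs(const std::string &root)
{
  boost::system::error_code ec;
  for (const char *sub : {"Colmap/images", "Colmap/sparse/0", "PCD"}) {
    // create_directories builds every missing intermediate level, which is
    // what fixes the Colmap/PCD save failures on a fresh output folder.
    fs::create_directories(fs::path(root) / sub, ec);
    if (ec) {
      std::cerr << "mkdir failed for " << sub << ": " << ec.message() << "\n";
      return false;
    }
  }
  return true;
}
```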
Looks awesome!
I was looking at the code and reading the statement: "Integrated MID360 point cloud processing with nanosecond-precision timestamp synchronization" - what does "nanosecond-precision timestamp synchronization" mean exactly?
The term means we leverage the MID360's per-point hardware timestamps, which are reported in nanoseconds. We convert each one into a time offset relative to the ROS message header and store that offset, in milliseconds, in the point's `curvature` field. This gives the fusion pipeline sub-millisecond motion compensation for every individual point.
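For concreteness, a minimal sketch of how such an offset is consumed downstream (the function name is ours, not from the codebase):

```cpp
#include <pcl/point_types.h>

// `curvature` holds the per-point offset in milliseconds, so the absolute
// sample time in seconds is header_time + curvature * 1e-3.
inline double point_time_sec(double header_time_sec,
                             const pcl::PointXYZINormal &pt)
{
  return header_time_sec + static_cast<double>(pt.curvature) * 1e-3;
}
// Example: a point with offset_time = 52,437,000 ns is stored as
// curvature = 52.437 ms and was sampled 0.052437 s after the frame stamp.
```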