Lidar in Computer Vision mode not working
Hi,
I am opening this issue about Lidar in ComputerVision mode. I have tested both the regular Lidar and the GPU Lidar, projecting the lidar points onto the image plane of the depth camera, i.e. applying the depth image's intrinsics to the lidar points to render a lidar-based depth image.
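For reference, this is roughly how I build the intrinsics and project the points (a minimal sketch; `project_ned_points` is just an illustrative helper, and it assumes the points are already expressed in the camera's NED body frame):

```python
import numpy as np

# Pinhole intrinsics derived from the capture settings below
# (1280x720, FOV_Degrees = 32, square pixels assumed).
W, H, FOV_DEG = 1280, 720, 32.0
fx = fy = (W / 2.0) / np.tan(np.radians(FOV_DEG) / 2.0)  # ~2232 px
cx, cy = W / 2.0, H / 2.0

def project_ned_points(pts_ned):
    """Project (N, 3) points, given in the camera's NED body frame
    (x forward, y right, z down, metres), to pixel coordinates."""
    # NED body axes -> optical axes: right = y, down = z, forward = x
    x_opt, y_opt, z_opt = pts_ned[:, 1], pts_ned[:, 2], pts_ned[:, 0]
    keep = z_opt > 0                      # only points in front of the camera
    u = fx * x_opt[keep] / z_opt[keep] + cx
    v = fy * y_opt[keep] / z_opt[keep] + cy
    return u, v, z_opt[keep]              # pixel coords + depth along the optical axis
```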
For Lidar, with the following settings.json:
```json
{
  "SettingsVersion": 1.2,
  "SimMode": "ComputerVision",
  "Vehicles": {
    "drone_test": {
      "VehicleType": "ComputerVision",
      "X": 0,
      "Y": 0,
      "Z": 0,
      "Pitch": 0.0,
      "Roll": 0.0,
      "Yaw": 0.0,
      "Cameras": {
        "0": {
          "X": 0.5,
          "Y": 0,
          "Z": -1,
          "Pitch": 0.0,
          "Roll": 0.0,
          "Yaw": 0.0,
          "CaptureSettings": [
            {
              "ImageType": 0,
              "Width": 1280,
              "Height": 720,
              "FOV_Degrees": 32
            },
            {
              "ImageType": 1,
              "Width": 1280,
              "Height": 720,
              "FOV_Degrees": 32
            }
          ]
        }
      },
      "Sensors": {
        "LidarSensor1": {
          "SensorType": 6,
          "Enabled": true,
          "NumberOfChannels": 133,
          "Range": 1200,
          "RotationsPerSecond": 20,
          "MeasurementsPerCycle": 188,
          "X": 0.5,
          "Y": 0,
          "Z": -1,
          "Roll": 0,
          "Pitch": 0.0,
          "Yaw": 0,
          "VerticalFOVUpper": 11.25,
          "VerticalFOVLower": -11.25,
          "HorizontalFOVStart": -16,
          "HorizontalFOVEnd": 16,
          "DrawDebugPoints": false,
          "IgnoreMarked": true,
          "GenerateNoise": true,
          "DrawSensor": false
        }
      }
    }
  }
}
```
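For completeness, this is roughly how I read the lidar and the matching depth image (a minimal sketch assuming the standard (Cosys-)AirSim Python client; note that the `DepthPlanar` enum name varies between client versions):

```python
import numpy as np
import airsim  # standard (Cosys-)AirSim Python client assumed

client = airsim.VehicleClient()
client.confirmConnection()

# Regular lidar (SensorType 6): point_cloud is a flat [x0, y0, z0, x1, ...] list.
# The frame the points are in depends on the lidar's "DataFrame" setting
# (left unset here), which is worth double-checking.
lidar = client.getLidarData(lidar_name="LidarSensor1", vehicle_name="drone_test")
points = np.array(lidar.point_cloud, dtype=np.float32).reshape(-1, 3)

# Matching depth image (ImageType 1), requested as float metres.
responses = client.simGetImages(
    [airsim.ImageRequest("0", airsim.ImageType.DepthPlanar, True)],
    vehicle_name="drone_test")
depth = airsim.list_to_2d_float_array(
    responses[0].image_data_float, responses[0].width, responses[0].height)
```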
I got the following result (top-left: RGB image with lidar points overlaid, top-right: depth, bottom-left: lidar image, bottom-right: depth image), which is not correct; all the other images I tested with the regular Lidar in ComputerVision mode (with the settings above) show the same issue:
For GPULidar, with the following settings.json:
```json
{
  "SettingsVersion": 1.2,
  "SimMode": "ComputerVision",
  "Vehicles": {
    "drone_test": {
      "VehicleType": "ComputerVision",
      "X": 0,
      "Y": 0,
      "Z": 0,
      "Pitch": 0.0,
      "Roll": 0.0,
      "Yaw": 0.0,
      "Cameras": {
        "0": {
          "X": 0.5,
          "Y": 0,
          "Z": -1,
          "Pitch": 0.0,
          "Roll": 0.0,
          "Yaw": 0.0,
          "CaptureSettings": [
            {
              "ImageType": 0,
              "Width": 1280,
              "Height": 720,
              "FOV_Degrees": 32
            },
            {
              "ImageType": 1,
              "Width": 1280,
              "Height": 720,
              "FOV_Degrees": 32
            }
          ]
        }
      },
      "Sensors": {
        "LidarSensor1": {
          "SensorType": 8,
          "Enabled": true,
          "NumberOfChannels": 133,
          "Range": 1200,
          "RotationsPerSecond": 20,
          "MeasurementsPerCycle": 188,
          "X": 0.5,
          "Y": 0,
          "Z": -1,
          "Roll": 0,
          "Pitch": 0.0,
          "Yaw": 0,
          "VerticalFOVUpper": 11.25,
          "VerticalFOVLower": -11.25,
          "HorizontalFOVStart": -16,
          "HorizontalFOVEnd": 16,
          "DrawDebugPoints": false,
          "IgnoreMarked": true,
          "GenerateNoise": true,
          "DrawSensor": false
        }
      }
    }
  }
}
```
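The GPU lidar data comes through a separate call; I assume Cosys-AirSim's `getGPULidarData` here (check your client version):

```python
# GPU lidar (SensorType 8) uses its own RPC in Cosys-AirSim; the per-point
# layout of point_cloud differs from the regular lidar, so check the docs
# for your version before reshaping.
gpu_lidar = client.getGPULidarData(lidar_name="LidarSensor1",
                                   vehicle_name="drone_test")
```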
I got the following result (top-left: RGB image with lidar points overlaid, top-right: depth, bottom-left: lidar image, bottom-right: depth image), where the points are not even close to a meaningful point cloud:
Why is the FOV of your LiDAR sensor so small, both horizontally and vertically? You will never be able to fully match the camera's FOV that way.
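For reference, with square pixels the camera's vertical FOV follows from its horizontal FOV and the aspect ratio, so you can compare it directly with the lidar's 32° x 22.5° coverage:

```python
import math

# Camera: 1280x720 at FOV_Degrees = 32 (horizontal), square pixels assumed.
h_fov = math.radians(32.0)
v_fov = 2.0 * math.atan(math.tan(h_fov / 2.0) * 720.0 / 1280.0)
print(math.degrees(v_fov))  # ~18.3 degrees vertical
```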