
Camera fusion in Apollo

Open sunjia0909 opened this issue 3 years ago • 10 comments

Hello, I'm confused about the camera fusion in Apollo. Since there are two cameras with different focal lengths in Apollo, how are the results from these two cameras fused? Should the results from all cameras be fused before being fused with the lidar's results? Are there any tutorial docs about this part, or which part of the code should I refer to? Thanks!

sunjia0909 avatar Aug 08 '22 09:08 sunjia0909

How are the results from these two cameras fused?

The fusion_component does this job.

Should the results from all cameras be fused before being fused with the lidar's results?

No. Lidar, camera, and radar are all fused late: each sensor perceives its results separately, and fusion_component then fuses them.

Are there any tutorial docs about this part, or which part of the code should I refer to?

You can find the details in fusion_component.
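To make the late-fusion pipeline concrete, here is a minimal sketch (not the actual Apollo code; all type and function names below are made up for illustration):

```cpp
// Minimal late-fusion sketch (illustrative names only, not Apollo code).
#include <string>
#include <vector>

struct TrackedObject {
  int local_track_id;     // id assigned by the per-sensor tracker
  std::string sensor_id;  // e.g. "front_6mm", "front_12mm", "velodyne128"
  double x = 0, y = 0, z = 0;  // position estimate in a common world frame
};

// Each sensor pipeline runs detection + tracking on its own (stubs here).
std::vector<TrackedObject> PerceiveLidar() { return {}; }
std::vector<TrackedObject> PerceiveCamera(const std::string& /*camera*/) {
  return {};
}
std::vector<TrackedObject> PerceiveRadar() { return {}; }

// Late fusion: only the per-sensor *results* reach the fusion step, never
// the raw sensor data. A real fusion component would associate objects
// across sensors (distance matrix + matching) and merge the matches; here
// we only concatenate to show the data flow.
std::vector<TrackedObject> FuseAll() {
  std::vector<TrackedObject> fused;
  for (const auto& objs : {PerceiveLidar(),
                           PerceiveCamera("front_6mm"),
                           PerceiveCamera("front_12mm"),
                           PerceiveRadar()}) {
    fused.insert(fused.end(), objs.begin(), objs.end());
  }
  return fused;
}
```

In the real fusion_component, the fusion step associates objects across sensors (e.g. with a distance matrix and Hungarian matching) instead of simply concatenating them.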

daohu527 avatar Aug 21 '22 00:08 daohu527

Thanks for your reply. I have another question: what method does Apollo use to track objects within each sensor separately? I can't find the exact code related to this part; could you give me any advice? Thanks!

sunjia0909 avatar Aug 23 '22 02:08 sunjia0909

Using lidar as an example, you can find the tracking algorithm in modules/perception/lidar/lib/tracker. Similarly, the camera tracker is in modules/perception/camera/lib/obstacle/tracker, and so on.

daohu527 avatar Aug 24 '22 14:08 daohu527

Ok, thanks for your reply. I have one more question: how is the fused track initialized at the beginning? And when multiple sensors detect the same object, how are the multiple detections used to update the fused track? Should we rely on one main sensor, or on something else? Thanks a lot!

sunjia0909 avatar Aug 26 '22 11:08 sunjia0909

Strictly speaking, those are several questions. :)

Detailed documentation is currently lacking; we will add it in Q4. For now, I suggest you read the code first and then ask about any specific doubts.

daohu527 avatar Aug 27 '22 09:08 daohu527

Ok, thanks for your suggestion, and I'm looking forward to the documentation. Now I have a question. In the code of IDAssign, a map named sensor_id_2_track_ind is created, and I'm a bit confused about its specific content. As I read the code, the key is the object's local track id and the value is the id (index) of the global fused track; is my understanding right? If so, which local id is used here? For example, when an object is detected by both the camera and the lidar, there are two local ids, one from each sensor's tracker, so how is the key of the map determined? Thanks!
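To make my question concrete, here is how I currently read the structure (a simplified sketch of my own, not the real Apollo code; the helper names are made up):

```cpp
// My simplified reading of the IDAssign bookkeeping (illustrative only).
#include <map>
#include <string>
#include <vector>

struct FusedTrack {
  // For each sensor, the local track id of the most recent object from
  // that sensor matched into this fused track (absent if none yet).
  std::map<std::string, int> last_local_id_per_sensor;
};

// Build the map for one incoming sensor frame: local track id -> index of
// the fused track in the track list. Only ids from *this* sensor are used
// as keys, so camera and lidar local ids never share one map.
std::map<int, int> BuildSensorId2TrackInd(
    const std::vector<FusedTrack>& fusion_tracks,
    const std::string& sensor_id) {
  std::map<int, int> sensor_id_2_track_ind;
  for (int i = 0; i < static_cast<int>(fusion_tracks.size()); ++i) {
    auto it = fusion_tracks[i].last_local_id_per_sensor.find(sensor_id);
    if (it != fusion_tracks[i].last_local_id_per_sensor.end()) {
      sensor_id_2_track_ind[it->second] = i;  // local id -> fused track index
    }
  }
  return sensor_id_2_track_ind;
}
```

Under this reading, the keys are always local ids from the single sensor whose frame is currently being processed, so camera and lidar local ids would never collide in one map. Is that correct?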

sunjia0909 avatar Aug 29 '22 02:08 sunjia0909

By the way, what is the function of the file fusion_camera_detection_component.cc? Is it used for fusing detections from different cameras? Looking forward to your reply, thanks a lot!

sunjia0909 avatar Aug 30 '22 13:08 sunjia0909

Hi, I now want to fuse just two cameras, so I set the main sensors to "front_6mm" and "front_12mm". However, although each camera calls the fusion function, their results are never fused. I found that the problem may be in computing the association matrix: the ComputeCameraCamera function just returns a maximum value, so the Hungarian matching algorithm doesn't work. If I want to fuse two cameras, I need to complete this function, am I right? I hope you can give me some insight. Thanks!
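For example, I imagine the missing piece could be something like a 2D-IoU-based distance (a rough sketch of my own, not Apollo's code; names are illustrative):

```cpp
// Rough sketch of a 2D-IoU-based camera-camera distance (my own idea,
// not Apollo code). As described above, the current ComputeCameraCamera
// effectively returns the maximum distance for every pair, so Hungarian
// matching has nothing to minimize.
#include <algorithm>

struct BBox2D { double xmin, ymin, xmax, ymax; };

double IoU(const BBox2D& a, const BBox2D& b) {
  const double iw = std::min(a.xmax, b.xmax) - std::max(a.xmin, b.xmin);
  const double ih = std::min(a.ymax, b.ymax) - std::max(a.ymin, b.ymin);
  if (iw <= 0.0 || ih <= 0.0) return 0.0;
  const double inter = iw * ih;
  const double area_a = (a.xmax - a.xmin) * (a.ymax - a.ymin);
  const double area_b = (b.xmax - b.xmin) * (b.ymax - b.ymin);
  return inter / (area_a + area_b - inter);
}

// Distance in [0, 1]: 0 for perfect overlap, 1 for none. Both boxes would
// first have to be projected into the same camera's image plane.
double ComputeCameraCameraDistance(const BBox2D& a, const BBox2D& b) {
  return 1.0 - IoU(a, b);
}
```

With a finite distance like this, the Hungarian matcher would have real costs to minimize instead of a uniform maximum.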

sunjia0909 avatar Sep 09 '22 09:09 sunjia0909

@sunjia0909 We are currently upgrading the perception module and will refresh the documentation at the end of September, so I recommend discussing this later

daohu527 avatar Sep 09 '22 10:09 daohu527

Ok, thanks for your reply, and I'm looking forward to your work!

sunjia0909 avatar Sep 19 '22 02:09 sunjia0909

Hi, I found that the beta version has been updated, but the function "ComputeCameraCamera" has not changed. Does this mean this function is not very necessary, or that Apollo doesn't trust pure camera fusion? Can I use the fusion component to fuse data from just two cameras? Another question: in the file "omt_obstacle_tracker", there is a function named "Associate2D", and within it a procedure named "ProjectBox" which seems to project points from one camera to the other. Could you tell me what this procedure does in camera tracking? Thanks!

[screenshot attached: 2022-09-29]
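From my reading, the projection seems to be roughly the following (my own simplified sketch with Eigen, assuming the two front cameras approximately share an optical center; not the actual Apollo code):

```cpp
// My simplified reading of ProjectBox (illustrative only). Assuming the
// two front cameras are nearly co-located, a pixel in camera 1 maps to
// camera 2 via x2 ~ K2 * R * K1^-1 * x1 (pure-rotation homography).
#include <algorithm>
#include <Eigen/Dense>

struct Box { double xmin, ymin, xmax, ymax; };

Eigen::Vector2d ProjectPoint(const Eigen::Matrix3d& K1,
                             const Eigen::Matrix3d& K2,
                             const Eigen::Matrix3d& R_2_1,  // cam1 -> cam2
                             double u, double v) {
  const Eigen::Vector3d p =
      K2 * R_2_1 * K1.inverse() * Eigen::Vector3d(u, v, 1.0);
  return Eigen::Vector2d(p.x() / p.z(), p.y() / p.z());
}

// Project the four corners and take their bounding box in the target image.
Box ProjectBox(const Box& b, const Eigen::Matrix3d& K1,
               const Eigen::Matrix3d& K2, const Eigen::Matrix3d& R_2_1) {
  const Eigen::Vector2d c[4] = {ProjectPoint(K1, K2, R_2_1, b.xmin, b.ymin),
                                ProjectPoint(K1, K2, R_2_1, b.xmax, b.ymin),
                                ProjectPoint(K1, K2, R_2_1, b.xmin, b.ymax),
                                ProjectPoint(K1, K2, R_2_1, b.xmax, b.ymax)};
  Box out{c[0].x(), c[0].y(), c[0].x(), c[0].y()};
  for (const auto& p : c) {
    out.xmin = std::min(out.xmin, p.x());
    out.ymin = std::min(out.ymin, p.y());
    out.xmax = std::max(out.xmax, p.x());
    out.ymax = std::max(out.ymax, p.y());
  }
  return out;
}
```

If that reading is right, ProjectBox would let the tracker compare a box tracked in one camera against detections in the other camera's image during Associate2D.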

sunjia0909 avatar Sep 29 '22 12:09 sunjia0909

The previous camera perception integrated too many functions, and the number of cameras it supports is limited. We are upgrading it; after that, any number of cameras can be integrated. We have modified most of the code, and it is still in the testing stage.

daohu527 avatar Sep 30 '22 00:09 daohu527

Ok, thanks for the information. I also find that when fusing different sensors, you don't assign camera objects in IDAssign but in PostIDAssign; could you tell me the reason for this? Thanks!

sunjia0909 avatar Oct 08 '22 13:10 sunjia0909