
Depth camera adaptation

Open · yyqgood opened this issue 1 year ago

Thank you for your work.

I would like to know:

  1. Can this algorithm be adapted to depth cameras?
  2. If so, what should I pay attention to?

Looking forward to your reply, thank you!

yyqgood · Mar 07 '24

Hi @yyqgood!

  1. Can this algorithm be adapted to depth cameras?

You can try Patchwork++ on point clouds from depth cameras. However, you should check the ground segmentation performance, because point clouds from depth cameras differ from LiDAR point clouds in terms of accuracy, maximum range, and FoV.

  2. If so, what should I pay attention to?

You should turn off the RNR module; it can be disabled with the parameter enable_RNR. RNR removes undesirable reflection noise based on point properties including intensity, but since depth cameras do not provide per-point intensity, you should disable it. Also, since point clouds from depth cameras have a smaller maximum range than LiDAR, you should adjust the parameters of the Concentric Zone Model (e.g., the number of zones/rings/sectors). Set them so that each patch contains enough points to be properly estimated as ground or non-ground.
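
For reference, a minimal sketch of how this could look with the Python binding (pypatchworkpp). The field names used here (enable_RNR, min_range, max_range, sensor_height) and the numeric values are my assumptions about the Parameters object, so please verify them against your installed version; the zone/ring/sector counts of the Concentric Zone Model may not be exposed through the binding and might have to be changed in the C++ parameters instead.

```python
import numpy as np
import pypatchworkpp  # Python binding of Patchwork++

# Assumed parameter names -- check pypatchworkpp.Parameters() in your build.
params = pypatchworkpp.Parameters()
params.verbose = True

# Depth cameras provide no per-point intensity, so RNR should be disabled.
params.enable_RNR = False

# Depth cameras see much shorter ranges than LiDAR, so shrink the
# Concentric Zone Model accordingly (the values below are placeholders).
params.min_range = 0.3        # [m]
params.max_range = 10.0       # [m]
params.sensor_height = 1.0    # [m] camera height above the ground

ppp = pypatchworkpp.patchworkpp(params)

# pointcloud: (N, 3) or (N, 4) numpy array of x, y, z (and intensity).
# If the binding expects an intensity column, append a column of zeros.
pointcloud = np.load("depth_camera_cloud.npy")  # hypothetical file
if pointcloud.shape[1] == 3:
    pointcloud = np.hstack([pointcloud, np.zeros((pointcloud.shape[0], 1))])

ppp.estimateGround(pointcloud)
ground    = ppp.getGround()      # estimated ground points
nonground = ppp.getNonground()   # estimated non-ground points
```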

seungjae24 · Apr 11 '24

How do I adjust the parameters of the Concentric Zone Model in the Python version?

elvistheyo · Oct 24 '24