ad-rss-lib

Question on replacement of RSS sensor with Radar sensor

Open yashwantj opened this issue 5 years ago • 9 comments

Dear,

We need to replace the RSS sensor with a Radar sensor. Is this simple to integrate, or what challenges might we face?

yashwantj avatar Oct 20 '20 01:10 yashwantj

Hi, I'm not sure what you are actually referring to and what you intend to do. Replacing the RSS sensor: are you referring to the CARLA RssSensor implementation here https://carla.readthedocs.io/en/latest/adv_rss/#rsssensor ? And what do you mean by "replacing" at all?

Do you want to integrate an additional Radar sensor into CARLA; is that what you are asking? In that case I can refer you to this: https://carla.readthedocs.io/en/latest/tuto_D_create_sensor/

Bernd

berndgassmann avatar Oct 20 '20 11:10 berndgassmann

Dear Bernd,

Thank you for the reply, and sorry for the inconvenience.

Yes, we want to integrate an additional Radar sensor into CARLA.

The question is,

If we add a Radar sensor, will the CARLA RSS library accept Radar inputs instead of the RSS sensor's?

Regards, Yashwant

yashwantj avatar Oct 21 '20 02:10 yashwantj

I believe the other comment in issue #89 answers this, correct?

berndgassmann avatar Oct 21 '20 06:10 berndgassmann

Hi,

Thanks for the explanation.

We want to keep occluded actors in the scenario, but stop feeding them as input to rss_lib. Later, when such an actor becomes fully visible, it should again be detected by the RSS sensor.

In this case, how can we identify those actors as occluded objects? Do we need to add an additional Radar sensor to detect them in the required FOV and stop feeding them as input to rss_lib?

If yes, where exactly should we make the changes in the code after adding the Radar sensor?

Regards, Yashwant

yashwantj avatar Oct 21 '20 07:10 yashwantj

Hi,

up to now I haven't thought through the detailed architecture of this.

The FOV depends on the sensor suite the vehicle is equipped with. Based on that, the AD-vehicle perception algorithms would be able to detect certain objects and not others. So taking the measurements of the other sensors into account is the right approach, yes. Which sensors are best suited for this, I don't know; I also haven't looked into the capabilities of the currently available sensors. Maybe a combination of the different available sensors is feasible. Maybe one can derive visibility from a simple model, or one has to look into the actual sensor data. In any case, one has to perform the calculation for every object present in the ground truth: would it be visible/detectable by the current sensors? And based on this, leave it out or not. The question is always what level of detail you want to reach. Since you mention Radar: a Radar might be able to see objects which are occluded in the video stream, via reflections on the ground below other vehicles. This depends on the realism of the Radar implementation.

I don't know your actual test setup or what your client looks like, but the decision can be made in the actor_constellation_callback as mentioned in issue #89. The code in PythonAPI/examples/rss is mainly exemplary code (as the folder name suggests), but you could certainly add it directly there and make it configurable (switching the feature on/off) if you like.

berndgassmann avatar Oct 21 '20 09:10 berndgassmann

Maybe the semantic lidar would provide an easy-to-realize solution: https://carla.readthedocs.io/en/latest/ref_sensors/#semantic-lidar-sensor The idea might be: go through the measurement points, create a set of all objects currently hit by the lidar, and store it on each sensor tick. Then, in the actor_constellation_callback, check if the object is within that set to make the decision.

The Radar sensor, at least at first glance, doesn't provide such semantics on the hit objects: https://carla.readthedocs.io/en/latest/ref_sensors/#radar-sensor

So for a first test, I would start with the semantic lidar; maybe that's already sufficient as a starting point.
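To illustrate the idea, here is a minimal sketch of the per-tick bookkeeping, with the CARLA-specific parts stripped out so it stands alone. The class name `VisibilityFilter` and the plain list of actor ids standing in for a `carla.SemanticLidarMeasurement` are my assumptions, not CARLA API:

```python
class VisibilityFilter:
    """Track which actor ids the semantic lidar hit on the latest tick."""

    def __init__(self):
        self._visible_ids = set()

    def on_lidar_tick(self, hit_actor_ids):
        # In real code this would iterate the semantic lidar measurement
        # and collect the id of the actor each detection point hit.
        self._visible_ids = set(hit_actor_ids)

    def should_actor_be_filtered_out(self, actor_id):
        # Any actor the lidar did not hit this tick is treated as occluded.
        return actor_id not in self._visible_ids
```

The actor_constellation_callback would then simply call `should_actor_be_filtered_out()` for the actor currently being queried.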

Please keep us updated on whether it works out.

Regards, Bernd.

berndgassmann avatar Oct 21 '20 10:10 berndgassmann

Hi Bernd,

Thank you so much for reply.

Sure! We will keep you posted on our analysis results.

Regards, Yashwant

yashwantj avatar Oct 21 '20 10:10 yashwantj

Dear,

As you suggested, I checked the 'actor_constellation_callback' function from https://github.com/carla-simulator/carla/blob/dev/PythonAPI/examples/rss/rss_sensor.py

Below is my understanding:

  1. `self.sensor.register_actor_constellation_callback(self._on_actor_constellation_request)`

     - This callback is invoked for each actor present in the scenario/world. For example, if there are 3 vehicles in the scenario including the ego, it is called back 3 times to build the actor_constellation_result.
     - In `_on_actor_constellation_request(self, actor_constellation_data)`, the actor constellation data (one vehicle at a time) is readily available for the 'actor_constellation_result'.
     - How can I access `carla.RssActorConstellationData` so that all vehicle data is available to filter out unwanted actors? In short, how can I get the full actor data list before 'register_actor_constellation_callback' executes, so that I can filter out occluded vehicles?
     - If I only get traffic/actor information one actor at a time in this callback, then a comparison among all actors (to filter out occluded ones based on their overlap situation) would be impossible.

  2. `self.sensor.listen(self._on_rss_response)`

     - This is executed once per frame of data.

Please correct me if my understanding is wrong, and guide me.

Thank you for your understanding.

yashwantj avatar Nov 09 '20 04:11 yashwantj

The first time, the callback is called with no other actor (https://github.com/carla-simulator/carla/blob/dev/PythonAPI/examples/rss/rss_sensor.py#L136, the case with other_actor == None), which queries the default parameters used for the ego. Then it's called once per other actor; correct. You could then insert e.g. the following at https://github.com/carla-simulator/carla/blob/dev/PythonAPI/examples/rss/rss_sensor.py#L139:

```python
if self.should_actor_be_filtered_out(actor_id):
    actor_constellation_result.rss_calculation_mode = ad.rss.map.RssMode.NotRelevant
    print("Filtering out actor {} because of ... ".format(actor_id))
    return actor_constellation_result
```

This way, that actor would not be considered in the subsequent RSS calculation.

And regarding your concern that you cannot calculate the overlaps: at the first call (the one with actor_constellation_data.other_actor == None) you could calculate the overlaps and the desired filtering for this frame for all actors at once, using all the information that CARLA provides.

Insert the analysis e.g. around https://github.com/carla-simulator/carla/blob/dev/PythonAPI/examples/rss/rss_sensor.py#L268:

```python
all_carla_actors = self.world.get_actors()
# ... perform the overlap analysis and store the vehicles to be filtered in the class ...
self.vehicles_to_be_filtered_out.append(...)
```

Later on, the function self.should_actor_be_filtered_out() would just check whether the vehicle is in the list of vehicles to be filtered out.
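The flow described above can be sketched end to end. This is a self-contained illustration, not CARLA code: the dict standing in for the constellation result, the `"NotRelevant"` string standing in for ad.rss.map.RssMode.NotRelevant, and the `analyse_occlusions` helper are all assumptions for the sake of a runnable example:

```python
NOT_RELEVANT = "NotRelevant"  # stand-in for ad.rss.map.RssMode.NotRelevant

class ConstellationFilter:
    """Refresh the filter list once per frame, then filter per-actor calls."""

    def __init__(self):
        self.vehicles_to_be_filtered_out = []

    def analyse_occlusions(self, all_actor_ids, visible_ids):
        # Stand-in for the per-frame overlap analysis, done at the first
        # callback of a frame (other_actor == None in the real sensor).
        self.vehicles_to_be_filtered_out = [
            a for a in all_actor_ids if a not in visible_ids]

    def on_actor_constellation_request(self, other_actor_id, result):
        # First call of a frame carries no other actor: nothing to filter.
        if other_actor_id is None:
            return result
        if other_actor_id in self.vehicles_to_be_filtered_out:
            result["rss_calculation_mode"] = NOT_RELEVANT
        return result
```

A usage sketch: refresh once with the full ground truth and the set of visible ids, then each subsequent per-actor call either marks the actor NotRelevant or leaves the result untouched.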

Regarding your point 2: _on_rss_response is called once per frame (at least when running in sync mode; in async mode, server frames might get skipped within the RssSensor if the computations take too long to keep up). But the response callback is issued AFTER the RSS check and AFTER the frame. You could still do the overlap analysis there and store it one frame ahead for the next frame; that's also possible, but the actor_constellation_callback considers the frame currently being processed, so it is most probably the better choice.

berndgassmann avatar Nov 09 '20 08:11 berndgassmann