
Implementation of the paper "Towards Optimally Decentralized Multi-Robot Collision Avoidance via Deep Reinforcement Learning"

Results: 13 rl-collision-avoidance issues
Sorted by recently updated

Hello, thank you very much for providing this code. I have a question about the simulator: during training, Stage suddenly crashed and threw a "Segmentation fault" error. Have you encountered that?...

How to visualize in circle_test.py?

Sorry to bother you. I want to know how many agents you used in stage_1 when training across three PCs. Also, my rewards are not converging; how many episodes did you use?...

Hello, thank you very much for providing this code. A student and I have been following the training example for Stage1, but when one of the environments reaches the max...

Hello, I would like to ask what the function of cmdpose tests.py is. Can it be implemented? I would appreciate it if you could answer my question.

Env 03, Goal (-07.0, 009.5), Episode 00000, setp 097, Reward 12.6, Reach Goal,
Env 04, Goal (-12.5, 004.0), Episode 00000, setp 052, Reward -33.4, Crashed,
Env 00, Goal (-18.0,...

Hello Professor, recently I have been studying your paper and reproducing your code, and I have some questions as follows: 1. After training stage1 and stage2, I got Figure 4...

Hi, in the main loop the environment subscribes to the /crash topic, but I couldn't find where the crash topic is actually produced, considering the distance to the obstacle...
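(For readers hitting the same question: in this codebase the crash signal comes from the Stage simulator side, not from the Python loop. The sketch below is only a hypothetical illustration, not the repository's actual implementation, of how a crash flag could be derived from laser-scan ranges; the function name `is_crashed` and the 0.2 m robot radius are assumptions.)

```python
# Hypothetical sketch: declare a crash when any laser range reading falls
# inside the robot's own radius. The threshold is an assumed value, not one
# taken from the paper or the repository.
ROBOT_RADIUS = 0.2  # metres (assumed)

def is_crashed(scan_ranges, robot_radius=ROBOT_RADIUS):
    """Return True if the nearest obstacle is closer than the robot radius."""
    return min(scan_ranges) < robot_radius

print(is_crashed([0.5, 0.15, 1.2]))  # nearest reading 0.15 m -> True
print(is_crashed([0.5, 0.4, 1.2]))   # nearest reading 0.40 m -> False
```

In a ROS setup, a node computing this flag would publish it on a topic (such as the /crash topic mentioned above) each control step.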

Hi, I followed all your steps and trained the policy from scratch for stage 1, but I am not able to get a policy as good as yours (it still always crashes)...