Han Yu

6 comments of Han Yu

> There are two "horizon + 1". You can simply change them to "horizon" to make rig.py run. I don't know why. It just works.

I guess they're true, weak true, weak false, and false. The label indicates how successful a grasp is.

Really? That's too high. The result in the paper is only 0.95/0.91. However, I trained as instructed for 1200 epochs and the mean_score had almost plateaued; the result was only 0.89/0.82....

I encountered this problem too. Details:

Exception ignored in: Traceback (most recent call last):
File "/root/mambaforge/envs/robodiff/lib/python3.9/site-packages/gym/vector/vector_env.py", line 139, in __del__
self.close(terminate=True)
File "/root/mambaforge/envs/robodiff/lib/python3.9/site-packages/gym/vector/vector_env.py", line 121, in close
self.close_extras(**kwargs)
File "/opt/data/private/diffusionp/diffusion_policy/diffusion_policy/gym_util/async_vector_env.py",...

Solved. Just request more RAM, e.g. 48GB. This is the same as issue 36. See [EOF Error in "async_vector_env.py" #36](https://github.com/real-stanford/diffusion_policy/issues/36).
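For context, here is a minimal stdlib sketch (not the repo's actual code) of why running out of memory shows up as an EOF-style error in the async vector env: when a worker subprocess dies (e.g. it is OOM-killed) before replying, the parent's pipe read raises `EOFError`.

```python
import multiprocessing as mp

def _worker(conn):
    # Simulate the worker dying (e.g. OOM-killed) before sending a result.
    conn.close()

def demo_eof():
    parent_conn, child_conn = mp.Pipe()
    p = mp.Process(target=_worker, args=(child_conn,))
    p.start()
    child_conn.close()  # drop the parent's copy of the child end
    try:
        parent_conn.recv()  # raises EOFError once the other end is gone
        return False
    except EOFError:
        return True
    finally:
        p.join()

if __name__ == "__main__":
    print(demo_eof())  # → True
```

So the fix above (requesting more memory) removes the root cause; the traceback itself is only the parent noticing a dead worker.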

* It's "test", not "training".
* 10000 and 10001 are seeds of the env.
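As a hedged sketch of why consecutive numbers like 10000 and 10001 appear (an assumption about the convention, not the repo's exact code): vectorized evaluations typically seed each parallel env as `base_seed + env_index`.

```python
def env_seeds(base_seed, n_envs):
    # One seed per parallel env: base_seed, base_seed + 1, ...
    return [base_seed + i for i in range(n_envs)]

print(env_seeds(10000, 2))  # → [10000, 10001]
```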