Bo Liu
Sorry, but I do not see the problem here. Note that when $g_w = 0$, it must be that $g_0 = 0$, since $g_w^\top g_0 \geq (1 - c)...
Thanks for asking; I now understand what you mean. Yes, $\lambda = 0$ should be handled as a separate case, since the form of the optimal $d$ is different. Thanks for catching this.
Have you checked our notebooks: https://github.com/Lifelong-Robot-Learning/LIBERO/blob/master/notebooks/procedural_creation_walkthrough.ipynb and https://github.com/Lifelong-Robot-Learning/LIBERO/blob/master/notebooks/custom_object_example.ipynb?
Hi, could you please provide the full command to reproduce this error?
Can you comment out the following lines (270-271 in `libero/lifelong/main.py`) and try again?

```python
if multiprocessing.get_start_method(allow_none=True) != "spawn":
    multiprocessing.set_start_method("spawn", force=True)
```

Try keeping everything else the same as in HEAD first....
The physics might be slightly different across machines. If you want to replay data, you can directly reset to the sim state instead of replaying action sequences. There...
Please check our https://github.com/Lifelong-Robot-Learning/LIBERO/blob/master/notebooks/quick_walkthrough.ipynb notebook. Basically, you call `set_init_state` once to set the starting state, then simulate the actions and record the observations.
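For reference, here is a minimal replay sketch based on the environment API shown in the LIBERO README and the walkthrough notebook. The suite name `libero_object`, the camera sizes, and the dummy `actions` sequence are illustrative placeholders; substitute the suite and recorded actions from your own demo file:

```python
import os
from libero.libero import benchmark, get_libero_path
from libero.libero.envs import OffScreenRenderEnv

# Pick a task suite and task (placeholders; use the suite your demo came from).
task_suite = benchmark.get_benchmark_dict()["libero_object"]()
task_id = 0
task = task_suite.get_task(task_id)

env = OffScreenRenderEnv(
    bddl_file_name=os.path.join(
        get_libero_path("bddl_files"), task.problem_folder, task.bddl_file
    ),
    camera_heights=128,
    camera_widths=128,
)
env.seed(0)
env.reset()

# Reset directly to a recorded initial sim state instead of replaying from scratch.
init_states = task_suite.get_task_init_states(task_id)
env.set_init_state(init_states[0])

# Placeholder action sequence; replace with the (T, 7) actions from your demo.
actions = [[0.0] * 7 for _ in range(10)]

observations = []
for action in actions:
    obs, reward, done, info = env.step(action)
    observations.append(obs)  # record the observation after each simulated step
env.close()
```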
I think so. I followed the NashMTL implementation and inherited `scale-y=True`. I haven't fully tested how these methods perform when `scale-y=False`.
I don't remember how to visualize it, as it has been a long time. But maybe you can take a look [here](https://github.com/Cranial-XIX/marl-copa/blob/master/multiagent-particle-envs/multiagent/rendering.py). What I remember is that it should be...
Hi, thanks for reaching out. The pddl_parser should come with the [fast downward](https://github.com/aibasel/downward) installation. You can check our README for how to install it. In addition, the code of the pddl_parser...