Environment stepping for faster-than-real-time simulation
Hello!
Is there a specific procedure that needs to be followed in order to achieve faster than real-time simulation stepping?
Following the instructions provided here, https://robosuite.ai/docs/quickstart.html, the simulation appears to run in approximately real time, regardless of whether there is rendering.
For example, the snippet below takes ~3 seconds to complete, with or without rendering.
To reproduce with the latest robosuite version (also tested on robosuite 1.1), run the following snippet both with should_render=False and with should_render=True.
My expected behavior is that running the simulation without rendering should be faster than real time (to allow for more efficient experimentation/evaluation).
import numpy as np
import time

import robosuite as suite

should_render = False

# create environment instance
env = suite.make(
    env_name="PickPlace",  # try with other tasks like "Stack" and "Door"
    robots="Sawyer",       # try with other robots like "Panda" and "Jaco"
    has_renderer=should_render,
    has_offscreen_renderer=False,
    use_camera_obs=False,
    use_object_obs=False,
)

# reset the environment
env.reset()

start_t = time.time()
for i in range(200):
    action = np.random.randn(env.robots[0].dof)  # sample random action
    obs, reward, done, info = env.step(action)   # take action in the environment
    if should_render:
        env.render()  # render on display
print(time.time() - start_t)
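For reference, wall-clock time alone doesn't show how the simulation compares to real time; dividing the simulated duration by the wall time gives a real-time factor. A minimal sketch, assuming robosuite's default control frequency of 20 Hz (so 200 steps correspond to 10 simulated seconds):

```python
import time

n_steps = 200
control_freq = 20  # Hz; assumed default control frequency

start = time.time()
# ... the env.step() loop from the snippet above would go here ...
wall_time = time.time() - start

sim_time = n_steps / control_freq       # simulated seconds (200 / 20 = 10 s)
rtf = sim_time / max(wall_time, 1e-9)   # real-time factor; > 1 means faster than real time
print(f"simulated {sim_time:.1f} s in {wall_time:.2f} s wall time (RTF {rtf:.1f}x)")
```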
I was not able to find any documentation regarding the simulation speed. E.g., the base MuJoCo env does not seem to expose any option for faster-than-real-time simulation. https://robosuite.ai/docs/source/robosuite.environments.html?highlight=make#robosuite.environments.base.MujocoEnv
Thanks!
Upon further inspection, it seems that the simulation for PickPlace is just particularly slow. Are there any suggestions for discovering where the simulation bottleneck is or speeding up the simulation?
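One generic way to locate the bottleneck is to profile the stepping loop with Python's built-in cProfile and inspect which functions dominate cumulative time. A sketch with a stand-in function (replace run_episode with the env.step() loop from the snippet above):

```python
import cProfile
import io
import pstats

def run_episode():
    # Stand-in for the actual loop of env.step(action) calls.
    total = 0.0
    for i in range(200):
        total += i * 0.5
    return total

pr = cProfile.Profile()
pr.enable()
run_episode()
pr.disable()

out = io.StringIO()
# Show the top 10 functions by cumulative time.
pstats.Stats(pr, stream=out).sort_stats("cumulative").print_stats(10)
print(out.getvalue())
```

With the real environment, entries such as controller computations or MuJoCo bindings would show up in the cumulative-time ranking.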
Running into the same issue here. It's surprising because robosuite uses the same MuJoCo version as dm_control, yet dm_control runs over 3x faster with rendering enabled than robosuite does without rendering:
import time

import numpy as np
from dm_control import manipulation

env = manipulation.load('lift_brick_vision')
env.reset()

start_t = time.time()
for i in range(200):
    action = np.random.randn(9)
    time_step = env.step(action)
    assert time_step.observation['front_close'].shape == (1, 84, 84, 3)
print(time.time() - start_t)  # 0.9
Hi folks,
Yuval here from the MuJoCo team. We have a push to speed up simulation of complex scenes over the next few months and are looking for specific examples that could serve as benchmarks.
If someone could create one or a small number of MuJoCo models (XMLs and binary assets only, no Python; please test that they load in the simulate utility) that you think simulate slower than you expect, that would be highly appreciated.
Hi @wuphilipp @danijar, thanks for bringing this to our attention. We'll need to do some investigation to understand the difference in speed between robosuite and dm_control. This week is busy for the team due to deadlines, but we will try to get back to you within a week or two. Thanks!
Hi @yuvaltassa, we really appreciate your efforts in speeding up the simulation! I will share with you some models next week (this week we are all working towards deadlines). Thanks!
Hi @wuphilipp @danijar, we've done some profiling of our code, and we found that our controller logic (computing raw low-level actuation) takes up a significant portion of the compute. We are looking for ways to address this, possibly by rewriting the controller logic in C++ instead of Python. This effort may take some time; stay tuned for the next release of robosuite for significant speed improvements!
@snasiriany what does your controller do?
- We now have native Cartesian control in MuJoCo.
- If you're doing something fancier, I'd recommend writing it as a MuJoCo plugin. I'll write back here once those are fully documented.
@yuvaltassa robosuite offers several controllers (IK, joint position control, operational space control, etc.). Our most commonly used controller is operational space control (OSC). The team will take a closer look at the native Cartesian control, but most likely we will write our own implementation of our OSC controller. Please do keep us in the loop about the plugin!
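For readers unfamiliar with OSC, the core per-step computation is a task-space PD law mapped back to joint torques through the task-space inertia. A minimal NumPy sketch (hypothetical names, not robosuite's actual implementation; J, M, etc. would come from the simulator each step):

```python
import numpy as np

def osc_torques(J, M, q_dot, x_err, kp=150.0, kd=2 * np.sqrt(150.0)):
    """Operational space control sketch.

    J: (6, n) end-effector Jacobian, M: (n, n) joint-space mass matrix,
    q_dot: (n,) joint velocities, x_err: (6,) task-space pose error.
    """
    M_inv = np.linalg.inv(M)
    # Task-space inertia (Lambda); pinv guards against kinematic singularities.
    lam = np.linalg.pinv(J @ M_inv @ J.T)
    x_dot = J @ q_dot
    # PD law in task space, mapped back to joint torques via J^T.
    wrench = lam @ (kp * x_err - kd * x_dot)
    return J.T @ wrench

# Toy example with a random 7-DoF arm configuration.
rng = np.random.default_rng(0)
J = rng.standard_normal((6, 7))
A = rng.standard_normal((7, 7))
M = A @ A.T + 7 * np.eye(7)  # symmetric positive-definite mass matrix
tau = osc_torques(J, M, rng.standard_normal(7), rng.standard_normal(6))
print(tau.shape)  # (7,)
```

Because every call involves a mass-matrix inverse and a pseudo-inverse, it is plausible that this per-step linear algebra in Python accounts for much of the overhead mentioned above.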