Performance gap between viewer and simulate.cc
Hello,
I am experiencing quite a performance gap between the compiled simulate.cc GUI and the dm_control viewer application. I made a quick video of the issue (https://drive.google.com/file/d/1zT55YzBXGRdBOVW2JMWQVyoJtmYLpZf3/view?usp=sharing).
My system is a ThinkPad X1 Yoga Gen 6 (i7-1165G7, 32 GB RAM) running Ubuntu 20.04 LTS. I do not experience this gap on my private machine, a MacBook Air M1; it runs just fine there.
The simulated XML file is the following; a tiny torque is applied to break the stable pose.
<mujoco>
  <option timestep="0.0001" integrator="RK4">
    <flag sensornoise="enable" energy="enable" contact="disable"/>
  </option>
  <asset>
    <material name="dense" rgba=".9 .9 .9 .9"/>
  </asset>
  <worldbody>
    <light diffuse=".5 .5 .5" pos="0 0 3" dir="0 0 -1"/>
    <geom type="plane" size="1 1 0.1" material="dense"/>
    <body pos="0 0 3" euler="0 0 0">
      <joint name="pin" type="hinge" pos="0 0 -.5" axis="0 1 0"/>
      <geom type="cylinder" size=".05 .5" rgba="1 0 0 1" mass="1"/>
      <body pos="0 0.1 1" euler="0 0 0">
        <joint name="pin2" type="hinge" pos="0 0 -.5" axis="0 1 0"/>
        <geom type="cylinder" size=".05 .5" rgba="0 1 0 1" mass="1"/>
      </body>
    </body>
  </worldbody>
  <actuator>
    <motor joint="pin" name="torque" gear="1" ctrllimited="true" ctrlrange="-100 100"/>
    <motor joint="pin2" name="torque2" gear="1" ctrllimited="true" ctrlrange="-100 100"/>
  </actuator>
</mujoco>
Can you please try setting the environment variable MUJOCO_GL=glfw before running the dm_control viewer?
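For reference, if the viewer is launched from a Python script rather than a shell, the variable has to be set before the first MuJoCo/dm_control import, e.g. (the commented-out import is illustrative):

```python
import os

# Select the GLFW rendering backend. This must run before anything
# imports MuJoCo, because the backend is chosen at import time.
os.environ["MUJOCO_GL"] = "glfw"

# from dm_control import viewer  # would now pick up the GLFW backend

print(os.environ["MUJOCO_GL"])  # prints "glfw"
```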
Unfortunately this did not do anything noticeable.
This is a screenshot of CPU and iGPU usage while running the viewer. Not sure if this looks odd. On my M1 Air, GPU usage is around 70% and it runs smoothly.

Does your laptop have both integrated and discrete GPUs? If so, which one is used when you run simulate?
Thank you for your response!
No, my laptop (ThinkPad X1 Yoga Gen 6, i7-1165G7, 32 GB RAM) does not have a discrete GPU.
Also, the intel-gpu-top output above was captured while running the viewer; iGPU usage is 0 when the viewer is not running, so the iGPU is certainly being used.
Tomorrow I can quickly check iGPU usage while running simulate, and whether it utilises the iGPU more than the viewer does.
Yep, it would be helpful if you could also check what GPU utilisation you're seeing when running simulate.
So, this is while running simulate. It looks like the viewer is bottlenecked by single-core CPU utilisation, which is pegged at 100%.
