
How to save human demonstrations with touch sensor readings?

Open gokceay opened this issue 8 months ago • 2 comments

Hi, how can I save human demonstrations with touch (tactile) sensor readings?

Thanks in advance

gokceay avatar Jun 06 '25 16:06 gokceay

The human demonstration output from robosuite contains the initial XML and the full state of the scene at each timestep. In other words, the sensor readings are effectively saved already (they can be recovered from the MuJoCo state), but they need to be extracted, similar to how we extract images and proprioceptive state in robomimic.
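For example, here is a minimal sketch of reading the raw MuJoCo sensor buffer from a live robosuite environment. The environment arguments are just for illustration, and the sensor name `"gripper0_touch"` is hypothetical; inspect your own model for the actual sensor names:

```python
import numpy as np
import robosuite as suite

# Headless environment purely for illustration; swap in your own task/robot.
env = suite.make(
    env_name="Lift",
    robots="Panda",
    has_renderer=False,
    has_offscreen_renderer=False,
    use_camera_obs=False,
)
env.reset()

# Every sensor defined in the MJCF model lives in this flat buffer,
# concatenated in model order (touch sensors included, if the model defines them).
all_sensor_values = np.array(env.sim.data.sensordata)

# To pull out one named sensor, slice by its address and dimension.
# "gripper0_touch" is a hypothetical name; list your model's sensors first.
name = "gripper0_touch"
sid = env.sim.model.sensor_name2id(name)
start = env.sim.model.sensor_adr[sid]
dim = env.sim.model.sensor_dim[sid]
touch = all_sensor_values[start : start + dim]
```

robosuite's `Robot` class also exposes a `get_sensor_measurement(sensor_name)` helper that does the same slicing, so `env.robots[0].get_sensor_measurement(name)` is an alternative if the sensor belongs to the robot.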

Abhiram824 avatar Jun 06 '25 19:06 Abhiram824

From dataset_states_to_obs.py I could not understand how the obs keys are collected. I also checked the observation-related code in robosuite. When I inspect the raw demo.hdf5 and image.hdf5 files I cannot find the obs keys. When I train with the image modality, my obs keys are printed as obs_keys=('agentview_image', 'object', 'robot0_eef_pos', 'robot0_eef_quat', 'robot0_eye_in_hand_image', 'robot0_gripper_qpos'). Shouldn't the touch-related keys appear in this list?

gokceay avatar Jun 08 '25 13:06 gokceay

They collect the keys here by calling env.reset_to(state), which returns the observations of the environment at the specified state. Any extra information at a timestep is obtained by making the relevant API calls to the environment at that timestep, as is done for rewards. To obtain the touch sensor observations, you can read the sim state at each timestep in the same way.
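For instance, here is a rough sketch of that replay pattern, modeled on dataset_states_to_obs.py. The dataset path, demo key, and hdf5 layout assume the standard robosuite demo format; adjust them to your dataset:

```python
import h5py
import numpy as np
import robomimic.utils.env_utils as EnvUtils
import robomimic.utils.file_utils as FileUtils

dataset_path = "demo.hdf5"  # path to your raw demonstration file

# Recreate the environment the demos were collected in.
env_meta = FileUtils.get_env_metadata_from_dataset(dataset_path=dataset_path)
env = EnvUtils.create_env_for_data_processing(
    env_meta=env_meta,
    camera_names=[],
    camera_height=84,
    camera_width=84,
    reward_shaping=False,
)

with h5py.File(dataset_path, "r") as f:
    demo = f["data/demo_0"]
    states = demo["states"][()]
    initial_state = dict(states=states[0], model=demo.attrs["model_file"])

    env.reset()
    env.reset_to(initial_state)

    touch_per_step = []
    for t in range(states.shape[0]):
        env.reset_to({"states": states[t]})
        # After restoring the state, the MuJoCo sensor buffer reflects
        # that timestep; slice out your touch sensor entries as needed.
        touch_per_step.append(np.array(env.env.sim.data.sensordata))

touch_per_step = np.stack(touch_per_step)  # (T, total_sensor_dim)
```

This works because reset_to restores the flattened MuJoCo state and then runs a forward pass, which recomputes the sensor buffer for that timestep. You can then store the extracted readings as an extra obs key alongside the images and proprioceptive state.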

Abhiram824 avatar Jun 16 '25 15:06 Abhiram824

Closing for now; feel free to reopen if you have any follow-up questions.

Abhiram824 avatar Aug 18 '25 15:08 Abhiram824