ASE
Minor bug: agent uses the cuda:0 device no matter what the --rl_device arg is
Problem
- `ase.learning.common_agent.CommonAgent` inherits `rl_games.common.a2c_common.A2CBase`, which stores all tensors on `self.ppo_device`.
- `self.ppo_device` is set by reading the `device` key from `config`. If there is no `device` key, it is set to `cuda:0` by default (see here, and the sketch after this list).
- Tracing back to the `run.py` file, `config` is supplied by `cfg_train["params"]["config"]`. You can print `cfg_train["params"]["config"].keys()` and there is no `device` key.
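A minimal sketch of that fallback, assuming `A2CBase` resolves its device roughly like a plain `dict.get` with a `cuda:0` default (not the library's exact code):

```python
# Toy config standing in for cfg_train["params"]["config"], which has no "device" key
config = {"name": "HumanoidAMPGetup", "learning_rate": 5e-5}

# Assumed to mirror how rl_games' A2CBase picks self.ppo_device: missing key -> cuda:0
ppo_device = config.get("device", "cuda:0")
print(ppo_device)  # prints "cuda:0" no matter what --rl_device was passed on the CLI
```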
How to check
To check this issue, run the original pretraining command with the --rl_device argument set to another CUDA device such as cuda:1; the process still consumes cuda:0 memory.
python ase/run.py --task HumanoidAMPGetup --cfg_env ase/data/cfg/humanoid_ase_sword_shield_getup.yaml --cfg_train ase/data/cfg/train/rlg/ase_humanoid.yaml --motion_file ase/data/motions/reallusion_sword_shield/dataset_reallusion_sword_shield.yaml --headless --rl_device cuda:1
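A quick way to confirm the root cause without launching training (a standalone sketch, assuming the training config is plain YAML and PyYAML is installed):

```python
import yaml

# Load the same training config the command above passes via --cfg_train
with open("ase/data/cfg/train/rlg/ase_humanoid.yaml") as f:
    cfg_train = yaml.safe_load(f)

# Expected: False before the fix, i.e. there is no "device" key for rl_games to pick up
print("device" in cfg_train["params"]["config"])
```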
How to fix
To fix this, simply add `cfg_train["params"]["config"]["device"] = args.rl_device` in the `load_cfg()` function.
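A hedged sketch of where that line would go; the surrounding loading code in `load_cfg()` is assumed, and only the added assignment is the actual proposal:

```python
import yaml

def load_cfg(args):
    # Assumed existing behaviour: load the file passed via --cfg_train into cfg_train
    with open(args.cfg_train) as f:
        cfg_train = yaml.safe_load(f)

    # Proposed fix: forward the CLI device so A2CBase no longer defaults to cuda:0
    cfg_train["params"]["config"]["device"] = args.rl_device

    return cfg_train
```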
Have you managed to get this to run on cuda:1 on a 2-GPU system? Even with your change, adding --rl_device cuda:1 and --sim_device cuda:1 always results in a segmentation fault.
The easiest solution to this is to set the environment variable CUDA_VISIBLE_DEVICES=<GPU_NUM>, e.g.:
export CUDA_VISIBLE_DEVICES=1
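With that exported, the training process only sees the selected card, so cuda:0 inside the process maps to physical GPU 1 and the rl_games default lands on the right device. A small check (assumes PyTorch with CUDA available):

```python
import torch

# Run after `export CUDA_VISIBLE_DEVICES=1`: only one device is visible,
# and index 0 now refers to the physical GPU 1.
print(torch.cuda.device_count())       # expected: 1
print(torch.cuda.get_device_name(0))   # name of the physical GPU 1
```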