Distributed training error
Hi, I allocated 64 GB of RAM and 4 A100 GPUs with the following SBATCH settings:
#SBATCH --time=72:00:00
#SBATCH --mem=64g
#SBATCH --job-name="ifseg"
#SBATCH --partition=gpu
#SBATCH --gres=gpu:a100:4
#SBATCH --cpus-per-task=4
#SBATCH --mail-type=BEGIN,END,ALL
sh run_scripts/IFSeg/coco_unseen.sh
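For completeness, the directives and the command above live in one job script that I submit with sbatch. The sketch below is roughly what it looks like; the conda activation lines are an assumption about my setup (the env name ifseg and the conda root are taken from the paths in the log further down):

#!/bin/bash
# ... the #SBATCH directives listed above go here, one per line ...
source /gpfs/gsfs12/users/me/conda/etc/profile.d/conda.sh   # assumed conda install location
conda activate ifseg                                        # env name from the log paths below
sh run_scripts/IFSeg/coco_unseen.sh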
Here is the distributed training error message. Any input? Thanks.
--Ruida
single-machine distributed training is initialized.
/gpfs/gsfs12/users/me/conda/envs/ifseg/lib/python3.8/site-packages/torch/distributed/launch.py:180: FutureWarning: The module torch.distributed.launch is deprecated
and will be removed in future. Use torchrun.
Note that --use_env is set by default in torchrun.
If your script expects --local_rank argument to be set, please
change it to read from os.environ['LOCAL_RANK'] instead. See
https://pytorch.org/docs/stable/distributed.html#launch-utility for
further instructions
warnings.warn(
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: -11) local_rank: 0 (pid: 2012686) of binary: /gpfs/gsfs12/users/me/conda/envs/ifseg/bin/python3
Traceback (most recent call last):
File "/gpfs/gsfs12/users/me/conda/envs/ifseg/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/gpfs/gsfs12/users/me/conda/envs/ifseg/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/gpfs/gsfs12/users/me/conda/envs/ifseg/lib/python3.8/site-packages/torch/distributed/launch.py", line 195, in
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
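For reference, the FutureWarning near the top just shows that coco_unseen.sh still launches through python -m torch.distributed.launch. My understanding of the torchrun form it suggests is roughly the following (a sketch only; train.py stands in for whatever entry script the IFSeg run script actually calls):

# Current launcher used by the run script (deprecated); passes --local_rank to the entry script:
python -m torch.distributed.launch --nproc_per_node=4 train.py
# torchrun equivalent suggested by the warning; the entry script then reads
# os.environ['LOCAL_RANK'] instead of a --local_rank argument:
torchrun --nproc_per_node=4 train.py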