objectsdf_plus

Error while running the code

Open · danperazzo opened this issue · 1 comment

Hello, thank you for releasing the code. I was trying to run it but came across the following error:

ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: -9) local_rank: 0 (pid: 133) of binary: /opt/conda/bin/python
Traceback (most recent call last):
  File "/opt/conda/bin/torchrun", line 33, in <module>
    sys.exit(load_entry_point('torch==2.0.0', 'console_scripts', 'torchrun')())
  File "/opt/conda/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper
    return f(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/torch/distributed/run.py", line 794, in main
    run(args)
  File "/opt/conda/lib/python3.10/site-packages/torch/distributed/run.py", line 785, in run
    elastic_launch(
  File "/opt/conda/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 134, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/opt/conda/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 250, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError: 
====================================================
training/exp_runner.py FAILED
----------------------------------------------------
Failures:
  <NO_OTHER_FAILURES>
----------------------------------------------------
Root Cause (first observed failure):
[0]:
  time      : 2023-12-23_02:46:51
  host      : 51bc79025499
  rank      : 0 (local_rank: 0)

  exitcode  : -9 (pid: 133)
  error_file: <N/A>
  traceback : Signal 9 (SIGKILL) received by PID 133
====================================================

I do not know what caused this error. I used the following Dockerfile to create the environment:

Dockerfile.txt

Thanks for your attention!

— danperazzo · Dec 23, 2023

Hi

Sorry for the late reply. It might be insufficient shared memory when you start loading the training dataset. You can try adding --shm-size 16G (or an even larger shared-memory size) when you launch the Docker image.
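For reference, a minimal sketch of launching the container with a larger shared-memory allocation; the image name and mount path below are placeholders, not taken from the attached Dockerfile:

docker run --gpus all --shm-size 16g -it -v /path/to/data:/workspace/data objectsdf_plus:latest

Alternatively, passing --ipc=host lets the container share the host's /dev/shm instead of Docker's small default (64 MB), which is another common workaround when PyTorch DataLoader workers are killed with SIGKILL.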

— QianyiWu · Mar 15, 2024