ColossalAI

[BUG]: timed out when using 64 GPUs.

Open · bestbzw opened this issue 3 years ago · 5 comments

🐛 Describe the bug

I am experimenting with Gemini. The code runs fine when using 16 GPUs or fewer on a single machine, but when I use 64 GPUs it fails with a timeout.

This is the error:

Traceback (most recent call last):
  File "train.py", line 209, in <module>
    main()
  File "train.py", line 169, in main
    colossalai.launch_from_torch(config=args.gpc_config)
  File "/data/miniconda3/envs/env-3.8.8/lib/python3.8/site-packages/colossalai/initialize.py", line 220, in launch_from_torch
    launch(config=config,
  File "/data/miniconda3/envs/env-3.8.8/lib/python3.8/site-packages/colossalai/initialize.py", line 100, in launch
    gpc.init_global_dist(rank, world_size, backend, host, port)
  File "/data/miniconda3/envs/env-3.8.8/lib/python3.8/site-packages/colossalai/context/parallel_context.py", line 378, in init_global_dist
    cpu_group = dist.new_group(ranks, backend='gloo') if dist.get_backend() != 'gloo' else None
  File "/data/miniconda3/envs/env-3.8.8/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 2974, in new_group
    pg = _new_process_group_helper(
  File "/data/miniconda3/envs/env-3.8.8/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 703, in _new_process_group_helper
    pg = ProcessGroupGloo(prefix_store, rank, world_size, timeout=timeout)
RuntimeError: Connection timed out

/data/miniconda3/envs/env-3.8.8/lib/python3.8/site-packages/torch/distributed/launch.py:178: FutureWarning: The module torch.distributed.launch is deprecated and will be removed in future. Use torchrun. Note that --use_env is set by default in torchrun. If your script expects --local_rank argument to be set, please change it to read from os.environ['LOCAL_RANK'] instead. See https://pytorch.org/docs/stable/distributed.html#launch-utility for further instructions
  warnings.warn(
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 6452) of binary: /data/miniconda3/envs/env-3.8.8/bin/python
ERROR:torch.distributed.elastic.agent.server.api:Error waiting on exit barrier. Elapsed: 305.97832345962524 seconds
Traceback (most recent call last):
  File "/data/miniconda3/envs/env-3.8.8/lib/python3.8/site-packages/torch/distributed/elastic/agent/server/api.py", line 906, in _exit_barrier
    store_util.barrier(
  File "/data/miniconda3/envs/env-3.8.8/lib/python3.8/site-packages/torch/distributed/elastic/utils/store.py", line 78, in barrier
    synchronize(store, data, rank, world_size, key_prefix, barrier_timeout)
  File "/data/miniconda3/envs/env-3.8.8/lib/python3.8/site-packages/torch/distributed/elastic/utils/store.py", line 64, in synchronize
    agent_data = get_all(store, rank, key_prefix, world_size)
  File "/data/miniconda3/envs/env-3.8.8/lib/python3.8/site-packages/torch/distributed/elastic/utils/store.py", line 34, in get_all
    data = store.get(f"{prefix}{idx}")
RuntimeError: Socket Timeout
Traceback (most recent call last):
  File "/data/miniconda3/envs/env-3.8.8/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/data/miniconda3/envs/env-3.8.8/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/data/miniconda3/envs/env-3.8.8/lib/python3.8/site-packages/torch/distributed/launch.py", line 193, in <module>
    main()
  File "/data/miniconda3/envs/env-3.8.8/lib/python3.8/site-packages/torch/distributed/launch.py", line 189, in main
    launch(args)
  File "/data/miniconda3/envs/env-3.8.8/lib/python3.8/site-packages/torch/distributed/launch.py", line 174, in launch
    run(args)
  File "/data/miniconda3/envs/env-3.8.8/lib/python3.8/site-packages/torch/distributed/run.py", line 752, in run
    elastic_launch(
  File "/data/miniconda3/envs/env-3.8.8/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 131, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/data/miniconda3/envs/env-3.8.8/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 245, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:

============================================================
train.py FAILED
------------------------------------------------------------
Failures:
  <NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time      : 2023-01-19_02:38:07
  host      : ee39c596-a0e2-439b-969a-c3cd3b647981
  rank      : 28 (local_rank: 0)
  exitcode  : 1 (pid: 6452)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================

Environment

No response
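For a cross-node timeout during the gloo CPU group creation like the one above, a useful first check is whether every node can actually reach the rendezvous host and port used by the launcher. The snippet below is only a generic sketch; MASTER_ADDR, MASTER_PORT, and the interface name eth0 are placeholders, not values taken from this report.

    # Generic connectivity check for multi-node PyTorch launches (placeholder values).
    # Run on every worker node; if this cannot connect, dist.new_group() will time out.
    MASTER_ADDR=10.0.0.1      # hypothetical rendezvous host
    MASTER_PORT=29500         # hypothetical rendezvous port
    nc -zv "$MASTER_ADDR" "$MASTER_PORT"

    # If the nodes have several network interfaces, pinning gloo/NCCL to the right one
    # often avoids this kind of timeout (the interface name is machine-specific):
    export GLOO_SOCKET_IFNAME=eth0
    export NCCL_SOCKET_IFNAME=eth0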

bestbzw · Jan 19 '23 02:01

Hi. Are you using example code, or are you trying to run your own project? If it's the latter, could you please provide more code, such as the launch code?

gouchangjiang · Jan 19 '23 02:01

@gouchangjiang I use this script https://github.com/hpcaitech/ColossalAI/blob/main/examples/language/gpt/gemini/run_gemini.sh with my own DataLoader.

The gpc config is:

BATCH_SIZE = 4
WARMUP_STEPS = 1000
TOTAL_STEPS = 2e+8

SEQ_LEN = 1024
HIDDEN_SIZE = 5120
VOCAB_SIZE = 35693
NUM_LAYERS = 40
NUM_ATTENTION_HEADS = 32
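For scale, NUM_LAYERS = 40 and HIDDEN_SIZE = 5120 put this at roughly a 12-13B parameter GPT-style model by the usual 12·L·H² back-of-the-envelope estimate (an approximation, not a figure reported in this thread):

    # Rough parameter-count estimate from the config above (approximation only,
    # excludes embeddings and biases): 12 * NUM_LAYERS * HIDDEN_SIZE^2
    python -c "print(f'{12 * 40 * 5120**2 / 1e9:.1f}B parameters (approx.)')"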

bestbzw · Jan 19 '23 03:01

As far as I know, this script is for running Gemini on a single node; that's what '--standalone' means. Did you modify it to adapt it to a multi-node setup?
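For reference, a multi-node launch would drop '--standalone' and pass explicit node and rendezvous arguments, roughly along these lines (the node count, address, and port below are placeholders, not values from the script):

    # Hypothetical multi-node variant of the torchrun launch; run once on each node.
    NNODES=8            # e.g. 8 machines x 8 GPUs = 64 GPUs
    NODE_RANK=0         # 0 .. NNODES-1, different on every node
    MASTER_ADDR=10.0.0.1
    MASTER_PORT=29500

    torchrun --nproc_per_node=8 \
             --nnodes=$NNODES \
             --node_rank=$NODE_RANK \
             --master_addr=$MASTER_ADDR \
             --master_port=$MASTER_PORT \
             train.py

torchrun then exports RANK, WORLD_SIZE, MASTER_ADDR, and MASTER_PORT into each process's environment, which colossalai.launch_from_torch reads at startup.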

gouchangjiang · Jan 19 '23 03:01

Hi, may I know your start command?

FrankLeeeee · Jan 19 '23 03:01

I met the same issue when running examples/language/gpt/gemini/run_gemini.sh. Have you solved this? @bestbzw

joan126 · Feb 23 '23 02:02

We have updated a lot since then. This issue was closed due to inactivity. If you encounter similar bugs, please open a new issue. Thanks.

binmakeswell · Apr 18 '23 08:04