
[BUG] DeepSpeed Chat: OOM when running "single_node" mode because DeepSpeed assigns the compute job to a small display card instead of the four A100 GPUs

Open feiliya333 opened this issue 2 years ago • 2 comments

**Describe the bug**
I have 5 GPU cards on my machine: GPUs 0, 1, 2, and 4 are A100s, and GPU 3 is a small-capacity display card (4 GB). However, DeepSpeed runs on GPUs 0, 1, 2, and 3, which includes the small-memory display card.

**To Reproduce**
Steps to reproduce the behavior:

  1. Go to 'DeepSpeed/DeepSpeedExamples/applications/DeepSpeed-Chat'
  2. Make a small modification so that 4 GPUs trigger "single_node" mode instead of the default 8:

     ```python
     if args.num_gpus == 1:
         args.script_type = "single_gpu"
     elif args.num_gpus == 4:  # replaced 8 with 4 here
         args.script_type = "single_node"
     elif args.num_gpus == 64:
         args.script_type = "multi_node"
     ```

  3. Observe that the program occupies GPUs 0, 1, 2, and 3, so an OOM error occurs.

**Expected behavior**
The program should occupy GPUs 0, 1, 2, and 4, skipping the small-memory display card (GPU 3).

**ds_report output**

DeepSpeed C++/CUDA extension op report

NOTE: Ops not installed will be just-in-time (JIT) compiled at runtime if needed. Op compatibility means that your system meet the required dependencies to JIT install the op.

JIT compiled ops requires ninja
ninja .................. [OKAY]

op name ................ installed .. compatible

[WARNING] async_io requires the dev libaio .so object and headers but these were not found.
[WARNING] async_io: please install the libaio-dev package with apt
[WARNING] If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found.
async_io ............... [NO] ....... [NO]
cpu_adagrad ............ [NO] ....... [OKAY]
cpu_adam ............... [NO] ....... [OKAY]
fused_adam ............. [NO] ....... [OKAY]
fused_lamb ............. [NO] ....... [OKAY]
quantizer .............. [NO] ....... [OKAY]
random_ltd ............. [NO] ....... [OKAY]
[WARNING] sparse_attn requires a torch version >= 1.5 but detected 2.0
[WARNING] using untested triton version (2.0.0), only 1.0.0 is known to be compatible
sparse_attn ............ [NO] ....... [NO]
spatial_inference ...... [NO] ....... [OKAY]
transformer ............ [NO] ....... [OKAY]
stochastic_transformer . [NO] ....... [OKAY]
transformer_inference .. [NO] ....... [OKAY]
utils .................. [NO] ....... [OKAY]

DeepSpeed general environment info:
torch install path ............... ['miniconda3/envs/rlhf_deepseed/lib/python3.8/site-packages/torch']
torch version .................... 2.0.0+cu117
deepspeed install path ........... ['miniconda3/envs/rlhf_deepseed/lib/python3.8/site-packages/deepspeed']
deepspeed info ................... 0.9.0+0b5252bb, 0b5252bb, master
torch cuda version ............... 11.7
torch hip version ................ None
nvcc version ..................... 11.6
deepspeed wheel compiled w. ...... torch 2.0, cuda 11.7


**Screenshots**
<img width="417" alt="image" src="https://user-images.githubusercontent.com/64009988/232102534-056ab150-53e2-4a12-b190-14d4886d6aa8.png">


**Launcher context**

python train.py \
  --actor-model facebook/opt-1.3b \
  --reward-model facebook/opt-350m \
  --num-gpus 4 \
  --output-dir ./output2 \
  --step 1 2 3


**Docker context**
Are you using a specific docker image that you can share?
I am not using Docker.


feiliya333 avatar Apr 14 '23 16:04 feiliya333

@feiliya333 Please update your DeepSpeedExamples repo with the latest changes. We've replaced --num-gpus with --deployment-type. In your case, you would want to run with the option --deployment-type single_node.

With the latest changes, run the script and in the output you will see which bash script is being executed. Modify the line with deepspeed in that bash script and add --include=localhost:0,1,2,4 (directly after deepspeed). This will restrict which GPUs DeepSpeed runs on.
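
For reference, a minimal sketch of what the updated launch command might look like. It reuses the flags from the launcher context above; whether the updated train.py still accepts those exact flags is an assumption, so check the script's --help output.

```bash
# Hypothetical invocation of the updated DeepSpeed-Chat launcher:
# --deployment-type single_node replaces the old --num-gpus 4.
python train.py \
  --actor-model facebook/opt-1.3b \
  --reward-model facebook/opt-350m \
  --deployment-type single_node \
  --output-dir ./output2 \
  --step 1 2 3
```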

mrwyattii avatar Apr 14 '23 16:04 mrwyattii

The solution provided by the contributor @mrwyattii looks correct. Here's a summary:

  1. Update your DeepSpeedExamples repository with the latest changes. The --num-gpus flag has been replaced with --deployment-type. In your case, you should use --deployment-type single_node.

  2. Run the script with the latest changes, and you will see which bash script is being executed in the output.

  3. Modify the line with deepspeed in the executed bash script and add --include=localhost:0,1,2,4 (directly after deepspeed). This restricts which GPUs DeepSpeed runs on, so it will not use the small-memory display card (GPU 3).

Here's an example of how to modify the bash script:

# Original command
deepspeed train.py --deepspeed_config config.json

# Modified command
deepspeed --include=localhost:0,1,2,4 train.py --deepspeed_config config.json

By following these steps, you should be able to avoid the OOM issue by excluding the small-memory display card GPU-3 from the training.
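
As a quick sanity check (assuming nvidia-smi is available on the machine), you can watch GPU memory usage while training starts to confirm that GPU 3 stays idle:

```bash
# Refresh nvidia-smi every second; GPU 3 (the 4 GB display card)
# should show no training process and near-zero memory usage.
watch -n 1 nvidia-smi
```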

hemangjoshi37a avatar Apr 15 '23 09:04 hemangjoshi37a

The problem has been solved! Thanks so much for the help!

feiliya333 avatar Apr 18 '23 08:04 feiliya333