
[BUG] Generating with a different batch size than a previous generation causes RuntimeError


Describe the bug When using DeepSpeed inference 0.9.0 or later, generating once and then generating again with a different batch size causes a RuntimeError. For example, if the first generation input is ['Hello'] and the second generation input is ['Hello', 'Hello'], the second generation fails with the error below. This error did not occur with DeepSpeed inference 0.8.3 and earlier.

0.9.1
------------------------------------------------------
Free memory : 14.532532 (GigaBytes)
Total memory: 15.554932 (GigaBytes)
Requested memory: 0.073242 (GigaBytes)
Setting maximum total tokens (input + output) to 1024
WorkSpace: 0x14e94e000000
------------------------------------------------------
["Hello, I'm a newbie in the world of web development. I'm a newbie in"]
...
  File "/usr/local/lib/python3.8/dist-packages/deepspeed/ops/transformer/inference/op_binding/softmax_context.py", line 31, in forward
    output = self.softmax_context_func(query_key_value, attn_mask, self.config.rotary_dim, self.config.rotate_half,
RuntimeError: The specified pointer resides on host memory and is not registered with any CUDA device.

To Reproduce

import deepspeed
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('EleutherAI/gpt-neo-125M')
model = AutoModelForCausalLM.from_pretrained('EleutherAI/gpt-neo-125M')
model = deepspeed.init_inference(model,
                                 mp_size=1,
                                 dtype=torch.half,
                                 replace_with_kernel_inject=True)

print(deepspeed.__version__)

# First generation with batch size 1 succeeds.
inputs = tokenizer(['Hello'], return_tensors='pt', add_special_tokens=False)
outputs = model.generate(**inputs.to('cuda'))
print(tokenizer.batch_decode(outputs))

# Second generation with batch size 2 fails with the RuntimeError above on 0.9.0+.
inputs = tokenizer(['Hello', 'Hello'], return_tensors='pt', add_special_tokens=False)
outputs = model.generate(**inputs.to('cuda'))
print(tokenizer.batch_decode(outputs))

Expected behavior The second generation should succeed regardless of the batch size, as it does on 0.8.3:

0.8.3
------------------------------------------------------
Free memory : 14.532532 (GigaBytes)  
Total memory: 15.554932 (GigaBytes)  
Requested memory: 0.105469 (GigaBytes) 
Setting maximum total tokens (input + output) to 1024 
------------------------------------------------------
["Hello, I'm a newbie in the world of web development. I'm a newbie in"]
["Hello, I'm a newbie in the world of web development. I'm a newbie in", "Hello, I'm a newbie in the world of web development. I'm a newbie in"]

ds_report output

--------------------------------------------------
DeepSpeed C++/CUDA extension op report
--------------------------------------------------
NOTE: Ops not installed will be just-in-time (JIT) compiled at
      runtime if needed. Op compatibility means that your system
      meet the required dependencies to JIT install the op.
--------------------------------------------------
JIT compiled ops requires ninja
ninja .................. [OKAY]
--------------------------------------------------
op name ................ installed .. compatible
--------------------------------------------------
 [WARNING]  async_io requires the dev libaio .so object and headers but these were not found.
 [WARNING]  async_io: please install the libaio-dev package with apt
 [WARNING]  If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found.
async_io ............... [NO] ....... [NO]
cpu_adagrad ............ [NO] ....... [OKAY]
cpu_adam ............... [NO] ....... [OKAY]
fused_adam ............. [NO] ....... [OKAY]
fused_lamb ............. [NO] ....... [OKAY]
quantizer .............. [NO] ....... [OKAY]
random_ltd ............. [NO] ....... [OKAY]
 [WARNING]  sparse_attn requires a torch version >= 1.5 and < 2.0 but detected 2.0
 [WARNING]  using untested triton version (2.0.0), only 1.0.0 is known to be compatible
sparse_attn ............ [NO] ....... [NO]
spatial_inference ...... [NO] ....... [OKAY]
transformer ............ [NO] ....... [OKAY]
stochastic_transformer . [NO] ....... [OKAY]
transformer_inference .. [NO] ....... [OKAY]
utils .................. [NO] ....... [OKAY]
--------------------------------------------------
DeepSpeed general environment info:
torch install path ............... ['/usr/local/lib/python3.8/dist-packages/torch']
torch version .................... 2.0.0+cu118
deepspeed install path ........... ['/usr/local/lib/python3.8/dist-packages/deepspeed']
deepspeed info ................... 0.9.1, unknown, unknown
torch cuda version ............... 11.8
torch hip version ................ None
nvcc version ..................... 11.8
deepspeed wheel compiled w. ...... torch 2.0, cuda 11.8


System info (please complete the following information):

  • OS: Ubuntu 20.04
  • GPU count and types: 1x T4
  • Python version: 3.8



twaka · Apr 28 '23 00:04

Same problem with the LLaMA model and DeepSpeed version 0.9.3+0a61d5d6. I would like to add that the problem can be avoided by switching off kernel injection and using an injection policy instead, but that is much slower. A rough sketch of that workaround is below.
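
A minimal sketch of such an injection-policy setup, assuming a Hugging Face LLaMA checkpoint; the checkpoint path is a placeholder, and the module/weight names passed to injection_policy are assumptions based on the Hugging Face LlamaDecoderLayer implementation, so they may need adjusting for the exact model class in use:

import deepspeed
import torch
from transformers import AutoModelForCausalLM
from transformers.models.llama.modeling_llama import LlamaDecoderLayer

# Placeholder path; substitute the actual LLaMA checkpoint being served.
model = AutoModelForCausalLM.from_pretrained('path/to/llama-checkpoint', torch_dtype=torch.half)
model = deepspeed.init_inference(
    model,
    mp_size=1,
    dtype=torch.half,
    # Disable fused kernel injection; modules are replaced via injection_policy instead,
    # which avoids the RuntimeError at the cost of slower generation.
    replace_with_kernel_inject=False,
    # Map the decoder layer class to the names of its attention-output and MLP-output
    # projections (names assumed from transformers' LLaMA implementation).
    injection_policy={LlamaDecoderLayer: ('self_attn.o_proj', 'mlp.down_proj')})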

thies1006 · May 05 '23 07:05

I think this is the same issue as #3178. Hope this can get fixed!

PythonNut · Jul 15 '23 00:07