
Batching Errors

Open · ryanolson opened this issue on May 10, 2023 · 0 comments

I'm seeing batching errors when updating to the latest text-generation-inference container.

Latest container image:

```
ghcr.io/huggingface/text-generation-inference                                        latest                                      7b12068effa3   2 hours ago     9.15GB
```

I cloned the model repo locally, which is the only difference between my setup and the one-line docker command provided in the README.
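
For reference, the clone was roughly the following (the exact remote is my assumption, `bigcode/starcoder`; the host path matches the `-v /raid/data:/data` mount used below):

```
git lfs install
git clone https://huggingface.co/bigcode/starcoder /raid/data/starcoder
```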

Here is my interactive session:

```
deepops@a100:~/Projects/starcoder$ NV_GPU=0 nvidia-docker run  -p 8080:80 -v /raid/data:/data -e HUGGING_FACE_HUB_TOKEN=<removed> -e HF_HUB_ENABLE_HF_TRANSFER=0 -ti --rm --entrypoint bash ghcr.io/huggingface/text-generation-inference:latest
root@fce683d4ae5a:/usr/src# text-generation-launcher --model-id /data/starcoder --max-total-tokens 8192
2023-05-10T17:11:17.297074Z  INFO text_generation_launcher: Args { model_id: "/data/starcoder", revision: None, sharded: None, num_shard: Some(1), quantize: false, max_concurrent_requests: 128, max_best_of: 2, max_stop_sequences: 4, max_input_length: 1000, max_total_tokens: 8192, max_batch_size: None, waiting_served_ratio: 1.2, max_batch_total_tokens: 32000, max_waiting_tokens: 20, port: 80, shard_uds_path: "/tmp/text-generation-server", master_addr: "localhost", master_port: 29500, huggingface_hub_cache: Some("/data"), weights_cache_override: None, disable_custom_kernels: false, json_output: false, otlp_endpoint: None, cors_allow_origin: [], watermark_gamma: None, watermark_delta: None, env: false }
2023-05-10T17:11:17.297178Z  INFO text_generation_launcher: Starting download process.
2023-05-10T17:11:18.827834Z  INFO download: text_generation_launcher: Files are already present on the host. Skipping download.
```

I notice in the startup log that `max_batch_size` is set to `None`.
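
Side note: the Args dump above lists a `max_batch_size` field, so I assume the launcher also accepts a `--max-batch-size` flag (inferred from the field name, not verified). Pinning it to 1 might confirm the failure is batching-specific:

```
text-generation-launcher --model-id /data/starcoder --max-total-tokens 8192 --max-batch-size 1
```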

I have a VS Code session with the HF Code Autocomplete plugin driving requests to the generate endpoint. Batch size 1 works fine, but when my typing outpaces the responses (so requests overlap), I start to see batching errors on the inference server.
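
A script along these lines should reproduce the overlap without the editor in the loop (prompts are arbitrary; the endpoint and JSON shape follow TGI's `/generate` API as I understand it):

```python
# Sketch of a standalone reproduction (my assumption: two overlapping requests
# to TGI's /generate endpoint are enough to exercise the batch-concatenation
# path that fails). Prompts are arbitrary.
import json
import threading
import urllib.request

URL = "http://localhost:8080/generate"  # port mapping from the docker command above

def generate(prompt: str) -> None:
    payload = json.dumps({
        "inputs": prompt,
        "parameters": {"max_new_tokens": 60},
    }).encode()
    req = urllib.request.Request(
        URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.read().decode())

# Start both requests at once so the server has two batches in flight.
threads = [
    threading.Thread(target=generate, args=(p,))
    for p in ("def fibonacci(n):", "def quicksort(arr):")
]
for t in threads:
    t.start()
for t in threads:
    t.join()
```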

Specific error:

```
2023-05-10T17:15:27.396195Z ERROR shard-manager: text_generation_launcher: Method Decode encountered an error.
Traceback (most recent call last):
  File "/opt/conda/bin/text-generation-server", line 8, in <module>
    sys.exit(app())
  File "/opt/conda/lib/python3.9/site-packages/typer/main.py", line 311, in __call__
    return get_command(self)(*args, **kwargs)
  File "/opt/conda/lib/python3.9/site-packages/click/core.py", line 1130, in __call__
    return self.main(*args, **kwargs)
  File "/opt/conda/lib/python3.9/site-packages/typer/core.py", line 778, in main
    return _main(
  File "/opt/conda/lib/python3.9/site-packages/typer/core.py", line 216, in _main
    rv = self.invoke(ctx)
  File "/opt/conda/lib/python3.9/site-packages/click/core.py", line 1657, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/opt/conda/lib/python3.9/site-packages/click/core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/opt/conda/lib/python3.9/site-packages/click/core.py", line 760, in invoke
    return __callback(*args, **kwargs)
  File "/opt/conda/lib/python3.9/site-packages/typer/main.py", line 683, in wrapper
    return callback(**use_params)  # type: ignore
  File "/opt/conda/lib/python3.9/site-packages/text_generation_server/cli.py", line 58, in serve
    server.serve(model_id, revision, sharded, quantize, uds_path)
  File "/opt/conda/lib/python3.9/site-packages/text_generation_server/server.py", line 155, in serve
    asyncio.run(serve_inner(model_id, revision, sharded, quantize))
  File "/opt/conda/lib/python3.9/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/opt/conda/lib/python3.9/asyncio/base_events.py", line 634, in run_until_complete
    self.run_forever()
  File "/opt/conda/lib/python3.9/asyncio/base_events.py", line 601, in run_forever
    self._run_once()
  File "/opt/conda/lib/python3.9/asyncio/base_events.py", line 1905, in _run_once
    handle._run()
  File "/opt/conda/lib/python3.9/asyncio/events.py", line 80, in _run
    self._context.run(self._callback, *self._args)
  File "/opt/conda/lib/python3.9/site-packages/grpc_interceptor/server.py", line 159, in invoke_intercept_method
    return await self.intercept(
> File "/opt/conda/lib/python3.9/site-packages/text_generation_server/interceptor.py", line 20, in intercept
    return await response
  File "/opt/conda/lib/python3.9/site-packages/opentelemetry/instrumentation/grpc/_aio_server.py", line 82, in _unary_interceptor
    raise error
  File "/opt/conda/lib/python3.9/site-packages/opentelemetry/instrumentation/grpc/_aio_server.py", line 73, in _unary_interceptor
    return await behavior(request_or_iterator, context)
  File "/opt/conda/lib/python3.9/site-packages/text_generation_server/server.py", line 86, in Decode
    batch = self.model.batch_type.concatenate(batches)
  File "/opt/conda/lib/python3.9/contextlib.py", line 79, in inner
    return func(*args, **kwds)
  File "/opt/conda/lib/python3.9/site-packages/text_generation_server/models/causal_lm.py", line 351, in concatenate
    _, num_heads, padded_sequence_length, head_dim = first_past_kvs[0][1].shape
IndexError: index 1 is out of bounds for dimension 0 with size 1
 rank=0
2023-05-10T17:15:27.396322Z ERROR batch{batch_size=2}:decode:decode{size=2}:decode{size=2}: text_generation_client: router/client/src/lib.rs:33: Server error: index 1 is out of bounds for dimension 0 with size 1
```
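
My read of the failing line, for what it's worth: `concatenate` in `causal_lm.py` treats each layer's `past_key_values` entry as a (key, value) pair and indexes `entry[1]` for the value tensor. The error suggests that for this model the entry's leading dimension has size 1 — I suspect because StarCoder's multi-query attention fuses key and value into a single tensor per layer. A toy illustration (shapes invented):

```python
# Toy illustration of the failing unpack (shapes invented; requires torch).
import torch

# What concatenate expects: entry[1] is the value tensor of a (key, value)
# pair, with shape (batch, num_heads, padded_sequence_length, head_dim).
kv_pair = torch.randn(2, 1, 16, 8, 64)  # leading dim 2: key + value
_, num_heads, padded_sequence_length, head_dim = kv_pair[1].shape  # works

# What this model's cache appears to return: a tensor whose leading dimension
# is 1, so indexing [1] raises exactly the error in the log above.
fused = torch.randn(1, 16, 24, 64)
fused[1]  # IndexError: index 1 is out of bounds for dimension 0 with size 1
```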
