ActorDiedError Caused by ValueError in VLLM Initialization

Open curryqka opened this issue 9 months ago • 2 comments

Description:
I encountered an ActorDiedError exception when running my Ray-based application. The error occurs during the initialization of the LLM class from the vllm library, specifically when configuring the model. The root cause appears to be a ValueError related to the limit_mm_per_prompt parameter, which is only supported for multimodal models.

Error Stack Trace:

Exception has occurred: ActorDiedError
The actor died because of an error raised in its creation task, ray::_MapWorker.__init__() (pid=2454810, ip=10.96.192.35, actor_id=792fbc7178acf9ef3f06667101000000, repr=MapWorker(MapBatches(LLMPredictor)))
  File "/root/miniconda3/lib/python3.10/site-packages/ray/data/_internal/execution/operators/actor_pool_map_operator.py", line 403, in __init__
    self._map_transformer.init()
  File "/root/miniconda3/lib/python3.10/site-packages/ray/data/_internal/execution/operators/map_transformer.py", line 208, in init
    self._init_fn()
  File "/root/miniconda3/lib/python3.10/site-packages/ray/data/_internal/planner/plan_udf_map_op.py", line 268, in init_fn
    udf_map_fn=op_fn(
  File "/root/miniconda3/lib/python3.10/site-packages/ray/data/_internal/execution/util.py", line 70, in __init__
    super().__init__(*args, **kwargs)
  File "/high_perf_store/mlinfra-vepfs/wangjinghui/drive-bench/inference/llava1.5.py", line 48, in __init__
    self.llm = LLM(
  File "/root/miniconda3/lib/python3.10/site-packages/vllm/entrypoints/llm.py", line 178, in __init__
    self.llm_engine = LLMEngine.from_engine_args(
  File "/root/miniconda3/lib/python3.10/site-packages/vllm/engine/llm_engine.py", line 547, in from_engine_args
    engine_config = engine_args.create_engine_config()
  File "/root/miniconda3/lib/python3.10/site-packages/vllm/engine/arg_utils.py", line 844, in create_engine_config
    model_config = self.create_model_config()
  File "/root/miniconda3/lib/python3.10/site-packages/vllm/engine/arg_utils.py", line 782, in create_model_config
    return ModelConfig(
  File "/root/miniconda3/lib/python3.10/site-packages/vllm/config.py", line 235, in __init__
    self.multimodal_config = self._init_multimodal_config(
  File "/root/miniconda3/lib/python3.10/site-packages/vllm/config.py", line 256, in _init_multimodal_config
    raise ValueError(
ValueError: limit_mm_per_prompt is only supported for multimodal models.
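
Note the shape of the failure: the ValueError is raised inside the actor's `__init__`, so Ray Data reports it as an ActorDiedError rather than surfacing the ValueError directly. Roughly (the `LLMPredictor` class name comes from the trace; everything else here is illustrative, and the `concurrency` keyword assumes a recent Ray 2.x):

```python
import ray
from vllm import LLM

class LLMPredictor:
    def __init__(self):
        # Any exception raised here kills the Ray actor during creation,
        # which Ray Data surfaces as the ActorDiedError above.
        self.llm = LLM(
            model="/path/to/checkpoint",       # placeholder path
            limit_mm_per_prompt={"image": 1},
        )

    def __call__(self, batch):
        # Batch inference would run here; omitted for brevity.
        return batch

ds = ray.data.read_json("/path/to/data.jsonl")  # placeholder input
# Execution is lazy: the actor (and the error) only appears once the
# pipeline actually runs.
ds = ds.map_batches(LLMPredictor, concurrency=1, num_gpus=1)
```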

curryqka · Apr 01 '25

Hi @curryqka, thank you for reaching out. Could you please provide more details about the specific commands you ran and the version of vllm you are using?

Since our inference code is built on top of the vllm framework, you may also find it helpful to search their official repository for similar issues and error reports.

drive-bench · Apr 02 '25

Thanks for your reply! I have solved the problem: it was caused by downloading the wrong type of model checkpoint. We had downloaded a text-only LLaMA checkpoint instead of the multimodal LLaVA one, hence the mismatch.
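
For anyone who hits the same error: pointing vLLM at the actual multimodal LLaVA weights makes it go away. A sketch, assuming the llava-hf/llava-1.5-7b-hf checkpoint from Hugging Face (substitute your own checkpoint path):

```python
from vllm import LLM

# With a genuinely multimodal checkpoint, vLLM builds the multimodal
# config and accepts limit_mm_per_prompt.
llm = LLM(
    model="llava-hf/llava-1.5-7b-hf",  # LLaVA-1.5, not a text-only LLaMA ckpt
    limit_mm_per_prompt={"image": 1},
)
```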

curryqka · Apr 07 '25