MiniCPM-V
[BUG] Error when running with vLLM v0.5.5
是否已有关于该错误的issue或讨论? | Is there an existing issue / discussion for this?
- [X] 我已经搜索过已有的issues和讨论 | I have searched the existing issues / discussions
该问题是否在FAQ中有解答? | Is there an existing answer for this in FAQ?
- [X] 我已经搜索过FAQ | I have searched FAQ
当前行为 | Current Behavior
vllm serve /models/MiniCPM-V-2 --served-model-name "MiniCPM-V-2" --dtype=auto --tensor-parallel-size 4 --trust-remote-code
The error output is as follows:
INFO 08-30 06:09:24 api_server.py:440] vLLM API server version 0.5.5
INFO 08-30 06:09:24 api_server.py:441] args: Namespace(model_tag='/models/MiniCPM-V-2', host=None, port=8000, uvicorn_log_level='info', allow_credentials=False, allowed_origins=['*'], allowed_methods=['*'], allowed_headers=['*'], api_key=None, lora_modules=None, prompt_adapters=None, chat_template=None, response_role='assistant', ssl_keyfile=None, ssl_certfile=None, ssl_ca_certs=None, ssl_cert_reqs=0, root_path=None, middleware=[], return_tokens_as_token_ids=False, disable_frontend_multiprocessing=False, model='/models/MiniCPM-V-2', tokenizer=None, skip_tokenizer_init=False, revision=None, code_revision=None, tokenizer_revision=None, tokenizer_mode='auto', trust_remote_code=True, download_dir=None, load_format='auto', dtype='auto', kv_cache_dtype='auto', quantization_param_path=None, max_model_len=None, guided_decoding_backend='outlines', distributed_executor_backend=None, worker_use_ray=False, pipeline_parallel_size=1, tensor_parallel_size=1, max_parallel_loading_workers=None, ray_workers_use_nsight=False, block_size=16, enable_prefix_caching=False, disable_sliding_window=False, use_v2_block_manager=False, num_lookahead_slots=0, seed=0, swap_space=4, cpu_offload_gb=0, gpu_memory_utilization=0.9, num_gpu_blocks_override=None, max_num_batched_tokens=None, max_num_seqs=256, max_logprobs=20, disable_log_stats=False, quantization=None, rope_scaling=None, rope_theta=None, enforce_eager=False, max_context_len_to_capture=None, max_seq_len_to_capture=8192, disable_custom_all_reduce=False, tokenizer_pool_size=0, tokenizer_pool_type='ray', tokenizer_pool_extra_config=None, limit_mm_per_prompt=None, enable_lora=False, max_loras=1, max_lora_rank=16, lora_extra_vocab_size=256, lora_dtype='auto', long_lora_scaling_factors=None, max_cpu_loras=None, fully_sharded_loras=False, enable_prompt_adapter=False, max_prompt_adapters=1, max_prompt_adapter_token=0, device='auto', num_scheduler_steps=1, scheduler_delay_factor=0.0, enable_chunked_prefill=None, speculative_model=None, speculative_model_quantization=None, num_speculative_tokens=None, speculative_draft_tensor_parallel_size=None, speculative_max_model_len=None, speculative_disable_by_batch_size=None, ngram_prompt_lookup_max=None, ngram_prompt_lookup_min=None, spec_decoding_acceptance_method='rejection_sampler', typical_acceptance_sampler_posterior_threshold=None, typical_acceptance_sampler_posterior_alpha=None, disable_logprobs_during_spec_decoding=None, model_loader_extra_config=None, ignore_patterns=[], preemption_mode=None, served_model_name=['MiniCPM-V-2'], qlora_adapter_name_or_path=None, otlp_traces_endpoint=None, collect_detailed_traces=None, engine_use_ray=False, disable_log_requests=False, max_log_len=None, dispatch_function=<function serve at 0x7f05606c5870>)
INFO 08-30 06:09:24 api_server.py:144] Multiprocessing frontend to use ipc:///tmp/7374289a-2566-4c22-9cec-48af8a537d2e for RPC Path.
INFO 08-30 06:09:24 api_server.py:161] Started engine process with PID 220
INFO 08-30 06:09:33 llm_engine.py:184] Initializing an LLM engine (v0.5.5) with config: model='/models/MiniCPM-V-2', speculative_config=None, tokenizer='/models/MiniCPM-V-2', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, rope_scaling=None, rope_theta=None, tokenizer_revision=None, trust_remote_code=True, dtype=torch.bfloat16, max_seq_len=4096, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=1, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, quantization_param_path=None, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='outlines'), observability_config=ObservabilityConfig(otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=0, served_model_name=MiniCPM-V-2, use_v2_block_manager=False, enable_prefix_caching=False)
WARNING 08-30 06:09:35 tokenizer.py:137] Using a slow tokenizer. This might cause a significant slowdown. Consider using a fast tokenizer instead.
INFO 08-30 06:09:35 model_runner.py:879] Starting to load model /models/MiniCPM-V-2...
/usr/local/lib/python3.10/dist-packages/xformers/ops/fmha/flash.py:211: FutureWarning: `torch.library.impl_abstract` was renamed to `torch.library.register_fake`. Please use that instead; we will remove `torch.library.impl_abstract` in a future version of PyTorch.
@torch.library.impl_abstract("xformers_flash::flash_fwd")
/usr/local/lib/python3.10/dist-packages/xformers/ops/fmha/flash.py:344: FutureWarning: `torch.library.impl_abstract` was renamed to `torch.library.register_fake`. Please use that instead; we will remove `torch.library.impl_abstract` in a future version of PyTorch.
@torch.library.impl_abstract("xformers_flash::flash_bwd")
Loading safetensors checkpoint shards: 0% Completed | 0/2 [00:00<?, ?it/s]
Loading safetensors checkpoint shards: 50% Completed | 1/2 [00:00<00:00, 1.05it/s]
Loading safetensors checkpoint shards: 100% Completed | 2/2 [00:01<00:00, 1.31it/s]
Loading safetensors checkpoint shards: 100% Completed | 2/2 [00:01<00:00, 1.26it/s]
INFO 08-30 06:09:39 model_runner.py:890] Loading model weights took 6.4513 GB
Process SpawnProcess-1:
Traceback (most recent call last):
File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/usr/local/lib/python3.10/dist-packages/vllm/entrypoints/openai/rpc/server.py", line 230, in run_rpc_server
server = AsyncEngineRPCServer(async_engine_args, usage_context, rpc_path)
File "/usr/local/lib/python3.10/dist-packages/vllm/entrypoints/openai/rpc/server.py", line 31, in __init__
self.engine = AsyncLLMEngine.from_engine_args(
File "/usr/local/lib/python3.10/dist-packages/vllm/engine/async_llm_engine.py", line 740, in from_engine_args
engine = cls(
File "/usr/local/lib/python3.10/dist-packages/vllm/engine/async_llm_engine.py", line 636, in __init__
self.engine = self._init_engine(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/vllm/engine/async_llm_engine.py", line 840, in _init_engine
return engine_class(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/vllm/engine/async_llm_engine.py", line 272, in __init__
super().__init__(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/vllm/engine/llm_engine.py", line 284, in __init__
self._initialize_kv_caches()
File "/usr/local/lib/python3.10/dist-packages/vllm/engine/llm_engine.py", line 390, in _initialize_kv_caches
self.model_executor.determine_num_available_blocks())
File "/usr/local/lib/python3.10/dist-packages/vllm/executor/gpu_executor.py", line 113, in determine_num_available_blocks
return self.driver_worker.determine_num_available_blocks()
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/vllm/worker/worker.py", line 222, in determine_num_available_blocks
self.model_runner.profile_run()
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/vllm/worker/model_runner.py", line 1089, in profile_run
model_input = self.prepare_model_input(
File "/usr/local/lib/python3.10/dist-packages/vllm/worker/model_runner.py", line 1345, in prepare_model_input
model_input = self._prepare_model_input_tensors(
File "/usr/local/lib/python3.10/dist-packages/vllm/worker/model_runner.py", line 1002, in _prepare_model_input_tensors
builder.add_seq_group(seq_group_metadata)
File "/usr/local/lib/python3.10/dist-packages/vllm/worker/model_runner.py", line 630, in add_seq_group
per_seq_group_fn(inter_data, seq_group_metadata)
File "/usr/local/lib/python3.10/dist-packages/vllm/worker/model_runner.py", line 602, in _compute_multi_modal_input
mm_kwargs = self.multi_modal_input_mapper(mm_data)
File "/usr/local/lib/python3.10/dist-packages/vllm/multimodal/registry.py", line 125, in map_input
input_dict = plugin.map_input(model_config, data_value)
File "/usr/local/lib/python3.10/dist-packages/vllm/multimodal/base.py", line 269, in map_input
return mapper(InputContext(model_config), data)
File "/usr/local/lib/python3.10/dist-packages/vllm/multimodal/image.py", line 39, in _default_input_mapper
image_processor = self._get_hf_image_processor(model_config)
File "/usr/local/lib/python3.10/dist-packages/vllm/multimodal/image.py", line 26, in _get_hf_image_processor
return cached_get_image_processor(
File "/usr/local/lib/python3.10/dist-packages/vllm/transformers_utils/image_processor.py", line 17, in get_image_processor
processor = AutoImageProcessor.from_pretrained(
File "/usr/local/lib/python3.10/dist-packages/transformers/models/auto/image_processing_auto.py", line 410, in from_pretrained
config_dict, _ = ImageProcessingMixin.get_image_processor_dict(pretrained_model_name_or_path, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/transformers/image_processing_base.py", line 335, in get_image_processor_dict
resolved_image_processor_file = cached_file(
File "/usr/local/lib/python3.10/dist-packages/transformers/utils/hub.py", line 373, in cached_file
raise EnvironmentError(
OSError: /models/MiniCPM-V-2 does not appear to have a file named preprocessor_config.json. Checkout 'https://huggingface.co//models/MiniCPM-V-2/tree/None' for available files.
ERROR 08-30 06:09:44 api_server.py:171] RPCServer process died before responding to readiness probe
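As the traceback shows, vLLM's multimodal input mapper loads the image processor through transformers' AutoImageProcessor.from_pretrained, which looks for a preprocessor_config.json inside the model directory. The failure can be reproduced in isolation with a minimal sketch like the one below, assuming the same local path /models/MiniCPM-V-2 used in the serve command above:

```python
# Minimal sketch reproducing the failure outside vLLM, assuming the
# local path /models/MiniCPM-V-2 from the report above.
from transformers import AutoImageProcessor

# This is the call that fails in the traceback: from_pretrained reads
# preprocessor_config.json from the given model directory, so it raises
# OSError if that file is missing.
processor = AutoImageProcessor.from_pretrained(
    "/models/MiniCPM-V-2",
    trust_remote_code=True,
)
print(processor)
```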
期望行为 | Expected Behavior
No response
复现方法 | Steps To Reproduce
No response
运行环境 | Environment
- OS:
- Python:
- Transformers:
- PyTorch:
- CUDA (`python -c 'import torch; print(torch.version.cuda)'`):
备注 | Anything else?
No response
Doesn't the error message say it's missing this preprocessor_config.json file?
Where does this file come from? It's not in the model repository, is it?
Hi, it is in the model repository.
So after training, I need to merge the files from the original model folder back into my output directory, right?
Yes.
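For anyone hitting the same issue, a minimal sketch of that merge step follows. It assumes the fine-tuning output contains only the weight shards while the original checkout keeps the config files; both paths below are hypothetical placeholders:

```python
# Hedged sketch: copy the non-weight files (preprocessor_config.json,
# tokenizer files, remote modeling code, etc.) from the original
# checkpoint into the fine-tuned output directory.
# BASE_DIR and OUT_DIR are hypothetical paths.
import shutil
from pathlib import Path

BASE_DIR = Path("/models/MiniCPM-V-2-base")  # original HF checkout
OUT_DIR = Path("/models/MiniCPM-V-2")        # fine-tuned output served by vLLM

for f in BASE_DIR.iterdir():
    # Skip the weight shards, which the fine-tuned directory already
    # provides, and never overwrite existing files.
    if (f.is_file()
            and f.suffix not in {".safetensors", ".bin"}
            and not (OUT_DIR / f.name).exists()):
        shutil.copy2(f, OUT_DIR / f.name)
        print(f"copied {f.name}")
```

After copying, re-running the vllm serve command from the top of this issue should get past the preprocessor_config.json lookup.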
