
[vllm] - Audio input duration limit?

Open FengRui1998 opened this issue 9 months ago • 3 comments

Start Date

No response

Implementation PR

```
Traceback (most recent call last):
  File "/public/home/feng_rui/code/ASTAR_Project/test_audio_model.py", line 85, in <module>
    outputs = llm.generate(input_data, sampling_params)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/public/home/feng_rui/miniconda3/envs/qwen_omni/lib/python3.12/site-packages/vllm/utils.py", line 1196, in inner
    return fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^
  File "/public/home/feng_rui/miniconda3/envs/qwen_omni/lib/python3.12/site-packages/vllm/entrypoints/llm.py", line 465, in generate
    self._validate_and_add_requests(
  File "/public/home/feng_rui/miniconda3/envs/qwen_omni/lib/python3.12/site-packages/vllm/entrypoints/llm.py", line 1354, in _validate_and_add_requests
    self._add_request(
  File "/public/home/feng_rui/miniconda3/envs/qwen_omni/lib/python3.12/site-packages/vllm/entrypoints/llm.py", line 1372, in _add_request
    self.llm_engine.add_request(
  File "/public/home/feng_rui/miniconda3/envs/qwen_omni/lib/python3.12/site-packages/vllm/v1/engine/llm_engine.py", line 183, in add_request
    prompt_str, request = self.processor.process_inputs(
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/public/home/feng_rui/miniconda3/envs/qwen_omni/lib/python3.12/site-packages/vllm/v1/engine/processor.py", line 226, in process_inputs
    processed_inputs: ProcessorInputs = self.input_preprocessor.preprocess(
                                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/public/home/feng_rui/miniconda3/envs/qwen_omni/lib/python3.12/site-packages/vllm/inputs/preprocess.py", line 712, in preprocess
    return self._process_decoder_only_prompt(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/public/home/feng_rui/miniconda3/envs/qwen_omni/lib/python3.12/site-packages/vllm/inputs/preprocess.py", line 661, in _process_decoder_only_prompt
    prompt_comps = self._prompt_to_llm_inputs(
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/public/home/feng_rui/miniconda3/envs/qwen_omni/lib/python3.12/site-packages/vllm/inputs/preprocess.py", line 344, in _prompt_to_llm_inputs
    return self._process_multimodal(
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/public/home/feng_rui/miniconda3/envs/qwen_omni/lib/python3.12/site-packages/vllm/inputs/preprocess.py", line 252, in _process_multimodal
    return mm_processor.apply(prompt, mm_data, mm_processor_kwargs,
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/public/home/feng_rui/miniconda3/envs/qwen_omni/lib/python3.12/site-packages/vllm/multimodal/processing.py", line 1665, in apply
    ) = self._cached_apply_hf_processor(
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/public/home/feng_rui/miniconda3/envs/qwen_omni/lib/python3.12/site-packages/vllm/multimodal/processing.py", line 1391, in _cached_apply_hf_processor
    ) = self._apply_hf_processor_main(
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/public/home/feng_rui/miniconda3/envs/qwen_omni/lib/python3.12/site-packages/vllm/multimodal/processing.py", line 1336, in _apply_hf_processor_main
    mm_kwargs = self._apply_hf_processor_mm_only(
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/public/home/feng_rui/miniconda3/envs/qwen_omni/lib/python3.12/site-packages/vllm/multimodal/processing.py", line 1297, in _apply_hf_processor_mm_only
    _, mm_kwargs, _ = self._apply_hf_processor_text_mm(
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/public/home/feng_rui/miniconda3/envs/qwen_omni/lib/python3.12/site-packages/vllm/multimodal/processing.py", line 1230, in _apply_hf_processor_text_mm
    processed_data = self._call_hf_processor(
                     ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/public/home/feng_rui/miniconda3/envs/qwen_omni/lib/python3.12/site-packages/vllm/model_executor/models/minicpmv.py", line 653, in _call_hf_processor
    mm_inputs = self.process_mm_inputs(mm_data, mm_kwargs)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/public/home/feng_rui/miniconda3/envs/qwen_omni/lib/python3.12/site-packages/vllm/model_executor/models/minicpmo.py", line 304, in process_mm_inputs
    **self.process_audios(mm_data, mm_kwargs),
      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/public/home/feng_rui/miniconda3/envs/qwen_omni/lib/python3.12/site-packages/vllm/model_executor/models/minicpmo.py", line 284, in process_audios
    feat[:, :feature_len] for feat, feature_len in zip(
    ~~~~^^^^^^^^^^^^^^^^^
TypeError: only integer tensors of a single element can be converted to an index
```

I found that this error is raised as soon as my audio input exceeds 30 seconds. How can I solve this?

Reference Issues

No response

Summary

Audio input duration limit?

Basic Example

Audio input duration limit?

Drawbacks

Audio input duration limit?

Unresolved questions

Audio input duration limit?

FengRui1998 avatar May 15 '25 11:05 FengRui1998

+1

rixyyy avatar May 26 '25 03:05 rixyyy

Drive-by comment: I think this is related to audio length. I get the same traceback (from vllm serve in v0.9.0.1) when I pass a >30 s audio file; if it's less than 30 seconds, it processes fine.

anrp avatar Jun 09 '25 13:06 anrp
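A quick way to confirm whether a given file crosses that threshold before sending it (not from the thread; a sketch assuming librosa is available):

```python
import librosa

# Load at the file's native sampling rate and measure its length in seconds.
waveform, sr = librosa.load("input.wav", sr=None)
duration_s = librosa.get_duration(y=waveform, sr=sr)

if duration_s > 30:
    print(f"{duration_s:.1f}s exceeds the ~30s limit; split the audio before sending it")
```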

We use the Whisper encoder to process audio inputs, and the Whisper encoder has a context length of 30 s. Please split the raw input audio into chunks shorter than 30 s before feeding them into the model.

bokesyo avatar Jun 11 '25 07:06 bokesyo
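A minimal sketch of the chunking suggested above, assuming librosa is installed and that each chunk is passed to vLLM as a separate audio item via `multi_modal_data`; the exact MiniCPM-o prompt template and placeholder format are not shown here:

```python
import librosa

MAX_SECONDS = 30  # Whisper encoder context window


def split_audio(path: str, sr: int = 16000, max_seconds: int = MAX_SECONDS):
    """Load an audio file and cut the waveform into chunks of at most max_seconds."""
    waveform, sr = librosa.load(path, sr=sr)  # resampled to 16 kHz mono
    step = max_seconds * sr
    return [(waveform[i:i + step], sr) for i in range(0, len(waveform), step)]


# Each (chunk, sampling_rate) tuple then goes into one request, e.g.
#   llm.generate({"prompt": prompt, "multi_modal_data": {"audio": chunk}}, sampling_params)
# where `prompt` contains the model-specific audio placeholder for that chunk.
chunks = split_audio("long_input.wav")
```

Fixed-length slicing is shown only for brevity; splitting on silence or adding a small overlap between chunks avoids cutting words in half.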