iic/SenseVoiceSmall got: TypeError: expected Tensor as element 1 in argument 0, but got str
Notice: To resolve issues more efficiently, please raise the issue following the template.
🐛 Bug
Running the sample code fails with `TypeError: expected Tensor as element 1 in argument 0, but got str`.
To Reproduce
Steps to reproduce the behavior (always include the command you ran):

```python
from funasr import AutoModel
from funasr.utils.postprocess_utils import rich_transcription_postprocess
import os

model_dir = "iic/SenseVoiceSmall"

model = AutoModel(
    model=model_dir,
    vad_model="fsmn-vad",
    vad_kwargs={"max_single_segment_time": 30000},
    device="cuda:0",
)

base_dir = "综艺"
for p in os.listdir(base_dir):
    res = model.generate(
        input=os.path.join(base_dir, p),
        cache={},
        language="auto",  # "zn", "en", "yue", "ja", "ko", "nospeech"
        use_itn=True,
        batch_size_s=60,
        merge_vad=True,
        # merge_length_s=15,
    )
    text = rich_transcription_postprocess(res[0]["text"])
    print(text)
```
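One unverified guess (not confirmed against FunASR internals): if `base_dir` contains any non-audio file (for example a subtitle or text file), its path may reach the VAD stage as a plain string. A defensive filter sketch — the helper name and extension set below are my own assumptions, not FunASR API:

```python
import os

# Hypothetical guard: only hand files with known audio extensions to
# model.generate(); anything else in the folder is skipped.
AUDIO_EXTS = {".wav", ".mp3", ".flac", ".m4a", ".aac", ".ogg"}

def audio_files(base_dir):
    """Yield full paths of likely audio files in base_dir, sorted by name."""
    for name in sorted(os.listdir(base_dir)):
        if os.path.splitext(name)[1].lower() in AUDIO_EXTS:
            yield os.path.join(base_dir, name)
```

The loop would then iterate `for path in audio_files(base_dir):` and pass `input=path` directly.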
Error message:

```
0%|          | 0/1 [00:00<?, ?it/s]
Traceback (most recent call last):
  File ".\fun_asr_sensevoice.py", line 15, in <module>
    res = model.generate(
  File "d:\funasr\funasr\auto\auto_model.py", line 266, in generate
    return self.inference_with_vad(input, input_len=input_len, **cfg)
  File "d:\funasr\funasr\auto\auto_model.py", line 339, in inference_with_vad
    res = self.inference(
  File "d:\funasr\funasr\auto\auto_model.py", line 305, in inference
    res = model.inference(**batch, **kwargs)
  File "d:\funasr\funasr\models\fsmn_vad_streaming\model.py", line 690, in inference
    audio_sample = torch.cat((cache["prev_samples"], audio_sample_list[0]))
TypeError: expected Tensor as element 1 in argument 0, but got str
```
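For context, this is exactly the error `torch.cat` raises when any element of the input tuple is a string instead of a tensor, which suggests the VAD stage received a raw file path rather than decoded samples. A minimal standalone reproduction (no FunASR involved; the file name is a placeholder):

```python
import torch

prev_samples = torch.zeros(0)  # stands in for cache["prev_samples"]
try:
    # Passing a file path (str) where a waveform tensor is expected
    torch.cat((prev_samples, "clip.wav"))
except TypeError as err:
    print(err)  # expected Tensor as element 1 in argument 0, but got str
```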
Expected behavior
The transcribed text of each file in `base_dir` is printed, with no exception raised.
Environment
- OS (e.g., Linux): Windows 11
- FunASR Version (e.g., 1.0.0):
- ModelScope Version (e.g., 1.11.0): latest
- PyTorch Version (e.g., 2.0.0):
- How you installed funasr (`pip`, source): tried both pip and from source; neither worked
- Python version: 3.10
- GPU (e.g., V100M32): NVIDIA RTX 4080
- CUDA/cuDNN version (e.g., cuda11.7): 12.4
- Docker version (e.g., funasr-runtime-sdk-cpu-0.4.1):
- Any other relevant information: