Qile Xu

Results 10 comments of Qile Xu

Already solved it: 1. Run `git clone https://github.com/ValveSoftware/openvr.git --branch v1.14.15` in "xxx\iGibson\igibson\render\" (note: `git clone` takes `--branch` for a tag; there is no `--checkout` option). 2. Rename the cloned folder to "openvr" if it is not already named that. 3. Run `pip install -e .` again.

Great method! It's really helpful to my situation, thx!

If you fine-tuned with PEFT, you can merge the weights like this:

```python
from transformers import AutoModelForVision2Seq
from peft import PeftModel

base_model = AutoModelForVision2Seq.from_pretrained("Qwen/Qwen3-VL-8B-Instruct")
model = PeftModel.from_pretrained(base_model, "YOUR_LORA_WEIGHT_PATH")
merged_model = model.merge_and_unload()
```

I cannot reproduce the error with `transformers==4.57.0`. Everything works fine on my side.

```
{'input_ids': tensor([[151644, 872, 198, ..., 151644, 77091, 198]], device='cuda:0'), 'attention_mask': tensor([[1, 1, 1, ..., 1, 1,...
```

It is recommended to use the video path directly. You can set the `video_url` with a `file://YOUR/VIDEO/PATH` URL, e.g. `file://datasets/videos/video_1.mp4`. You also need to pass `--allowed-local-media-path /` when launching...
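As a minimal sketch of the tip above, a local video path can be converted into a `file://` URL with `pathlib` before it goes into the request payload. The helper name `to_video_url` and the exact payload field layout are assumptions here; they follow the OpenAI-style `video_url` content entries commonly used with vLLM's chat API, not a confirmed schema from this thread.

```python
from pathlib import Path

def to_video_url(path: str) -> dict:
    """Build an OpenAI-style video_url content entry from a local file path.

    resolve() makes the path absolute, which as_uri() requires; the server
    must be started with --allowed-local-media-path covering this location.
    """
    uri = Path(path).resolve().as_uri()
    return {"type": "video_url", "video_url": {"url": uri}}

entry = to_video_url("datasets/videos/video_1.mp4")
print(entry["video_url"]["url"])  # an absolute file:///... URL
```

The absolute URL matters: launching the server with `--allowed-local-media-path /` whitelists the whole filesystem, so in practice you may want to restrict it to the dataset directory instead.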

Did this happen when deploying with vLLM? I ran into it before as well. In my case the generated config was not aligned with the officially provided parameters; once I aligned them, it worked fine.

@dyhuachi vLLM has a bug in the video timestamp computation for Qwen3-VL. https://github.com/vllm-project/vllm/pull/27104 fixes it, but the fix has not made it into the latest release yet. You can apply the change manually or install a nightly build; see https://github.com/QwenLM/Qwen3-VL/issues/1606#issuecomment-3415927977 for details.

@ycsun1972 Hi, I used the same code to test all three models, and tested each under both vLLM and Transformers; in the end the 32B-Instruct model clearly hallucinated more. I tried adding `processor.video_processor.size = {"longest_edge": 256000 * 32 * 32, "shortest_edge": 4 * 32 * 32}`, but that ran out of GPU memory, so maybe this is not the cause? Besides that, I also want to ask: what is the difference between the processing logic of `process_vision_info` and HF's processor? Is it only that the `longest_edge` size differs?

Same issue occurs when downloading scene0006_00.sens, scene0111_02.sens, scene0589_00_clean.segs.json, scene0673_03_vh_clean.ply, scene0736_00.sens, and scene0764_00.sens.