TensorRT-LLM
Support for Qwen2-VL 2B/4B? or Qwen2.5-VL
System Info
Are Qwen2-VL 2B/4B or Qwen2.5-VL supported yet? I am curious to run TensorRT-LLM with these models.
Thank you! ❤️
How would you like to use TensorRT-LLM
Qwen2-VL 2B/4B? or Qwen2.5-VL
Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and checked the documentation and examples for answers to frequently asked questions.
Yes, you can run it with our quickstart example:
python examples/quickstart_multimodal.py --model_dir Qwen/Qwen2-VL-2B-Instruct --modality image
Or via trtllm-serve, e.g.:
trtllm-serve Qwen/Qwen2-VL-7B-Instruct --host localhost --port 8001 --backend pytorch
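Once the server above is running, trtllm-serve exposes an OpenAI-compatible HTTP API, so you can send a standard chat-completions request to it. A minimal client sketch, assuming the conventional `/v1/chat/completions` endpoint and the OpenAI multimodal message format; the host/port match the `--host`/`--port` flags above, and the image URL is a placeholder:

```python
# Hypothetical client for the trtllm-serve instance started above.
# Assumes the OpenAI-compatible /v1/chat/completions endpoint; adjust
# host/port to match your --host/--port flags.
import json
import urllib.request

# OpenAI-style multimodal chat payload: one user turn mixing text and
# an image reference (the URL here is a placeholder).
payload = {
    "model": "Qwen/Qwen2-VL-7B-Instruct",
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/cat.png"},
                },
            ],
        }
    ],
    "max_tokens": 128,
}


def describe_image(host="localhost", port=8001):
    """POST the chat-completions request and return the reply text."""
    req = urllib.request.Request(
        f"http://{host}:{port}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]


# Usage (with the server running):
#     print(describe_image())
```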
Issue has not received an update in over 14 days. Adding stale label.
This issue was closed because it has been 14 days without activity since it has been marked as stale.