[Bug] Unable to deploy Phi-3.5-vision-instruct on Windows
Checklist
- [X] 1. I have searched related issues but cannot get the expected help.
- [X] 2. The bug has not been fixed in the latest version.
- [X] 3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.
Describe the bug
Phi-3.5-vision-instruct cannot be deployed on Windows.
Reproduction
```
(lmdeploy) C:\Users\mi_ap>lmdeploy serve api_server D:\LLM_Project\Baseline_Multimodal_Model\Phi-3.5-vision-instruct --backend turbomind
Add dll path C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.1\bin, please note cuda version should >= 11.3 when compiled with cuda 11
2024-08-27 15:59:09,383 - lmdeploy - WARNING - Try to run with pytorch engine because D:\LLM_Project\Baseline_Multimodal_Model\Phi-3.5-vision-instruct is not explicitly supported by lmdeploy.
D:\miniconda3\envs\lmdeploy\lib\site-packages\transformers\models\auto\image_processing_auto.py:510: FutureWarning: The image_processor_class argument is deprecated and will be removed in v4.42. Please use slow_image_processor_class, or fast_image_processor_class instead
  warnings.warn(
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
2024-08-27 15:59:17,427 - lmdeploy - ERROR - RuntimeError: Failed to find C compiler. Please specify via CC environment variable.
2024-08-27 15:59:17,427 - lmdeploy - ERROR - <Triton> test failed!
Please ensure it has been installed correctly.
```
```
(lmdeploy) C:\Users\mi_ap>lmdeploy -v
0.6.0a0
```
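For reference, the `<Triton> test failed` error can be reproduced outside lmdeploy with a minimal standalone Triton kernel (a hypothetical probe, not lmdeploy code; the kernel name is illustrative). On this setup it should fail at JIT-compile time with the same "Failed to find C compiler" error:

```python
# Minimal standalone probe of the Triton JIT (hypothetical sketch, not
# lmdeploy code). On Windows this fails when the kernel is compiled, with
# "RuntimeError: Failed to find C compiler", matching the log above.
import torch
import triton
import triton.language as tl

@triton.jit
def _copy_kernel(src_ptr, dst_ptr, n, BLOCK: tl.constexpr):
    offs = tl.program_id(0) * BLOCK + tl.arange(0, BLOCK)
    mask = offs < n
    tl.store(dst_ptr + offs, tl.load(src_ptr + offs, mask=mask), mask=mask)

x = torch.arange(8, device="cuda", dtype=torch.float32)
y = torch.empty_like(x)
_copy_kernel[(1,)](x, y, x.numel(), BLOCK=8)  # JIT compilation happens here
assert torch.equal(x, y)
```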
Environment
```
(lmdeploy) C:\Users\mi_ap>lmdeploy check_env
sys.platform: win32
Python: 3.10.14 | packaged by Anaconda, Inc. | (main, May 6 2024, 19:44:50) [MSC v.1916 64 bit (AMD64)]
CUDA available: True
MUSA available: False
numpy_random_seed: 2147483648
GPU 0: NVIDIA GeForce RTX 3090
CUDA_HOME: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.1
NVCC: Cuda compilation tools, release 12.1, V12.1.66
MSVC: Microsoft (R) C/C++ Optimizing Compiler Version 19.37.32825 for x64
GCC: n/a
PyTorch: 2.2.2+cu121
PyTorch compiling details: PyTorch built with:
  - C++ Version: 201703
  - MSVC 192930151
  - Intel(R) Math Kernel Library Version 2020.0.2 Product Build 20200624 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v3.3.2 (Git Hash 2dc95a2ad0841e29db8b22fbccaf3e5da7992b01)
  - OpenMP 2019
  - LAPACK is enabled (usually provided by MKL)
  - CPU capability usage: AVX512
  - CUDA Runtime 12.1
  - NVCC architecture flags: -gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_90,code=sm_90
  - CuDNN 8.8.1 (built against CUDA 12.0)
  - Magma 2.5.4
  - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=12.1, CUDNN_VERSION=8.8.1, CXX_COMPILER=C:/actions-runner/_work/pytorch/pytorch/builder/windows/tmp_bin/sccache-cl.exe, CXX_FLAGS=/DWIN32 /D_WINDOWS /GR /EHsc /Zc:__cplusplus /bigobj /FS /utf-8 -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DLIBKINETO_NOROCTRACER -DUSE_FBGEMM -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE /wd4624 /wd4068 /wd4067 /wd4267 /wd4661 /wd4717 /wd4244 /wd4804 /wd4273, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=2.2.2, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=OFF, USE_NNPACK=OFF, USE_OPENMP=ON, USE_ROCM=OFF, USE_ROCM_KERNEL_ASSERT=OFF,
TorchVision: 0.17.2+cpu
LMDeploy: 0.6.0a0+
transformers: 4.42.3
gradio: Not Found
fastapi: 0.111.0
pydantic: 2.8.2
triton: 2.1.0
```
Error traceback
No response
Deployment with the PyTorch engine fails, and even when I pass `--backend turbomind` it still forces the PyTorch engine.
The log is as follows:

```
2024-08-27 15:59:09,383 - lmdeploy - WARNING - Try to run with pytorch engine because D:\LLM_Project\Baseline_Multimodal_Model\Phi-3.5-vision-instruct is not explicitly supported by lmdeploy.
```
TurboMind does not support Phi-3.5 because its head_dim is not 128, so support was added in the PyTorch engine instead. The PyTorch engine's kernels are developed on top of OpenAI Triton, which unfortunately does not support Windows.
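A quick way to see this (a hedged sketch, assuming the checkpoint's config exposes `hidden_size` and `num_attention_heads` at the top level, as Phi-3.5-mini's does):

```python
# Derive head_dim from the local checkpoint's config.
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained(
    r"D:\LLM_Project\Baseline_Multimodal_Model\Phi-3.5-vision-instruct",
    trust_remote_code=True,  # Phi-3.5-vision ships custom modeling code
)
# For Phi-3.5 this should print 96 (3072 / 32), not the 128 TurboMind expects.
print(cfg.hidden_size // cfg.num_attention_heads)
```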
I installed a Windows build of Triton (triton-2.1.0-cp310-cp310-win_amd64.whl), but the result is still the same. So there is no way around it.
According to Triton's documentation, the only supported OS is Linux: https://github.com/triton-lang/triton?tab=readme-ov-file#compatibility. On PyPI, only Linux release wheels can be found as well: https://pypi.org/project/triton/2.1.0/#files
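This can also be verified programmatically against PyPI's JSON API (a small sketch, not part of lmdeploy; the filename parsing is approximate):

```python
# List the platform tags of triton 2.1.0's published wheels on PyPI.
import json
import urllib.request

with urllib.request.urlopen("https://pypi.org/pypi/triton/2.1.0/json") as resp:
    meta = json.load(resp)

# Wheel filenames end in ...-<python>-<abi>-<platform>.whl
tags = {f["filename"].rsplit("-", 1)[-1].removesuffix(".whl") for f in meta["urls"]}
print(sorted(tags))  # only manylinux* platform tags appear; no win_amd64 build
```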