CyCle1024
@bkryza Thanks for the reply! I'm **not** in a hurry. I found something more detailed... The .clang-uml used to generate the problematic UML is:

```yaml
compilation_database_dir: ./build
output_directory: .
diagrams:
  test:
    type: class...
```
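For reference, a minimal class-diagram `.clang-uml` usually looks something like the sketch below; the glob pattern and namespace are illustrative placeholders, not the values from my actual config:

```yaml
# Hedged sketch of a typical clang-uml class-diagram config;
# the glob and namespace entries are placeholders only.
compilation_database_dir: ./build
output_directory: .
diagrams:
  test:
    type: class
    glob:
      - src/**/*.cpp          # source files to scan (placeholder)
    include:
      namespaces:
        - myproject           # limit the diagram to this namespace (placeholder)
    using_namespace: myproject
```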
@bkryza Thx a lot!
Requires https://github.com/InternLM/lmdeploy/pull/2321 to be merged first.
> I got this error when trying to import `torch_dipu` inside the container:
>
> ```
> ImportError: /deeplink/deeplink.framework/dipu/torch_dipu/libtorch_dipu.so: undefined symbol: aclprofSetStampCallStack
> ```
>
> The `CANN` used in...
> After building the docker image, I ran `lmdeploy serve api_server Qwen2-7B-Instruct --backend pytorch` and got the following error: [error screenshot] But the triton library provides no prebuilt package for aarch64, and building it myself also failed.

The models currently supported on the Ascend platform do not include Qwen2-7B-Instruct, and `api_server` does not yet support a `device_type` argument for selecting the Ascend backend.
> After building the docker image, I ran `lmdeploy serve api_server Qwen2-7B-Instruct --backend pytorch` and got the following error: [error screenshot] But the triton library provides no prebuilt package for aarch64, and building it myself also failed.

@yunfwe The models currently supported are llama2-7b, internlm2-7b, and mixtral-8x7b. You can refer to the following script for static inference; the chat version is still under development:

```python
import deeplink_ext
import lmdeploy
from lmdeploy import PytorchEngineConfig

if __name__ == "__main__":
    backend_config =...
```
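The snippet above is cut off by the comment preview. A minimal sketch of what such a static-inference script could look like is shown below; the `device_type="ascend"`, `tp=1`, model path, and prompt are assumptions for illustration, not the exact values from the original script:

```python
import deeplink_ext  # noqa: F401  # importing registers the DeepLink device backend
import lmdeploy
from lmdeploy import PytorchEngineConfig

if __name__ == "__main__":
    # Placeholder assumptions: tp, device_type, and the local model path.
    backend_config = PytorchEngineConfig(tp=1, device_type="ascend")
    pipe = lmdeploy.pipeline("/models/internlm2-chat-7b",
                             backend_config=backend_config)
    responses = pipe(["Hello, please introduce yourself."])
    print(responses)
```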
> Has anyone run into the error `ValueError: xpu is not available, you should use device="cpu" instead`? I'm using RC1 on a 910B2C.

Could you attach the test script test_deploy.py?
@jiajie-yang Can you comment out this line: https://github.com/InternLM/lmdeploy/blob/231e5bbcc5e1ea5f253002488587eacacd8f5e55/lmdeploy/pytorch/engine/model_agent.py#L630 and then try again?
@BruceYu-Bit Please share how you installed it (if you followed the official docs, was it `pip install git+https://github.com/InternLM/GroupedGEMM.git@main`?) and the conda environment you used. You can run `conda env export -n xtuner > xtuner.yaml` and attach xtuner.yaml; I'll try to reproduce the issue on my side.
> Installing GroupedGEMM following the official docs fails to build. The error is as follows:
>
> ```
> Running setup.py clean for grouped_gemm
> Running command python setup.py clean
> /root/anaconda3/envs/xtuner/lib/python3.10/site-packages/setuptools/dist.py:759: SetuptoolsDeprecationWarning: License classifiers are deprecated. !!
> ********************************************************************************
> Please consider removing the following...
> ```