
[BUG] Is it possible to deploy MiniCPM-V 2.5 on Android based on llama.cpp (using the Qualcomm GPU for inference)?

Open My-captain opened this issue 1 year ago • 1 comment

Is there an existing issue / discussion for this?

  • [X] I have searched the existing issues / discussions

Is there an existing answer for this in the FAQ?

  • [X] I have searched the FAQ

Current Behavior

No response

Expected Behavior

May I ask whether it is possible to deploy MiniCPM-V 2.5 on Android based on llama.cpp, using the Qualcomm GPU for inference?
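For context, running llama.cpp on a Qualcomm Adreno GPU generally means cross-compiling it for Android with a GPU backend enabled. The sketch below is only an illustration under assumptions: the NDK path is a placeholder, and the exact backend flag (here `GGML_OPENCL`) depends on your llama.cpp version, so check the build documentation of your checkout before relying on it.

```shell
# Hypothetical cross-compile of llama.cpp for Android (arm64) with an
# OpenCL backend targeting the Adreno GPU. Flag names and the NDK path
# are assumptions; verify against your llama.cpp version's build docs.
export ANDROID_NDK=/path/to/android-ndk   # placeholder NDK location

cmake -B build-android \
  -DCMAKE_TOOLCHAIN_FILE=$ANDROID_NDK/build/cmake/android.toolchain.cmake \
  -DANDROID_ABI=arm64-v8a \
  -DANDROID_PLATFORM=android-28 \
  -DGGML_OPENCL=ON

cmake --build build-android --config Release -j
```

The resulting binaries would then be pushed to the device (e.g. via `adb push`) together with a GGUF-converted model and run from an Android shell. Note that this covers the GPU path only; NPU (Hexagon) offload is a separate backend question.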

Steps To Reproduce

No response

Environment

- OS:
- Python:
- Transformers:
- PyTorch:
- CUDA (`python -c 'import torch; print(torch.version.cuda)'`):

Anything else?

No response

My-captain avatar Sep 09 '24 11:09 My-captain

I would also like to know about using the Qualcomm NPU for inference. +1

theoctopusride avatar Sep 10 '24 21:09 theoctopusride

First of all, I'm sorry for the late reply; I was busy with other projects. Due to our team's limited manpower, we cannot support the Qualcomm NPU yet.

tc-mb avatar Oct 14 '24 07:10 tc-mb

Got it, thanks for your response.

My-captain avatar Dec 27 '24 02:12 My-captain