guoguo1314
Hello, I have resolved my issue. I created a separate conda environment with nir version 1.0.4 and nirtorch version 2.0.2. Additionally, I created a test folder and inside it a...
Hey, what a coincidence, I'm looking at this exact problem right now. I want to use qwen2:1.5b as my LLM.

**My `ollama list` output:**

```
userland@localhost:/root/codes/Langchain-Chatchat/libs/chatchat-server/chatchat$ ollama list
NAME
qwen2:1.5b
qwen:4b
qwen2:7b
quentinz/bge-large-zh-v1.5:latest
qwen2:0.5b
```

**My model_setting:**

```yaml
DEFAULT_LLM_MODEL: qwen2:1.5b
DEFAULT_EMBEDDING_MODEL: quentinz/bge-large-zh-v1.5:latest
MODEL_PLATFORMS:
  - platform_name: ollama
    platform_type: ollama
    api_base_url: http://127.0.0.1:11434/v1
    api_key: EMPTY
    api_proxy: ''
    api_concurrencies:...
```
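One thing worth ruling out here is a model-tag mismatch between the config and what Ollama actually serves (e.g. `qwen2:1.5` vs `qwen2:1.5b`), since a wrong tag can produce exactly this kind of silent misbehavior. A minimal stdlib sketch of such a check; the helper names are my own and not part of Langchain-Chatchat:

```python
# Hypothetical sanity check: verify that the model names configured in
# model_settings actually appear in the NAME column of `ollama list`.

OLLAMA_LIST_OUTPUT = """\
NAME
qwen2:1.5b
qwen:4b
qwen2:7b
quentinz/bge-large-zh-v1.5:latest
qwen2:0.5b
"""

def available_models(ollama_list_output: str) -> set:
    """Parse the NAME column of `ollama list` output into a set of tags."""
    lines = ollama_list_output.strip().splitlines()
    return {line.split()[0] for line in lines[1:] if line.strip()}

def missing_models(default_llm: str, default_embedding: str, listing: str) -> list:
    """Return the configured models that Ollama does not actually serve."""
    models = available_models(listing)
    return [m for m in (default_llm, default_embedding) if m not in models]

print(missing_models("qwen2:1.5b",
                     "quentinz/bge-large-zh-v1.5:latest",
                     OLLAMA_LIST_OUTPUT))  # [] -> both names are served
```

If the returned list is non-empty, fix the tag in `model_settings` (or `ollama pull` the missing model) before debugging anything else.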
Hey, how did it go, did you solve it?
I'm using qwen2:1.5, the embedding model is quentinz/bge-large-zh-v1.5:latest, with the default knowledge base. Right now it just answers nonsense (there is a "running" indicator in the top-right corner; the answer only appears once it stops).
When I update onnxruntime, the error still exists. Please note that, apart from the model_quantized.onnx file, the tokens.txt, espeak-ng-data, dict, lexicon-us-en.txt, and lexicon-zh.txt files were all copied from the...
1.17.1, installed directly with `pip install onnxruntime==1.17.1`
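Since the error persisted across an upgrade, it is worth confirming that the pinned version is actually the one active in the environment (a stale install in another env is a common culprit). A small stdlib sketch of a version-pin check; in practice you would compare against `onnxruntime.__version__`:

```python
# Hypothetical helper (not part of onnxruntime): compare an installed version
# string against a pinned requirement such as onnxruntime==1.17.1.

def parse_version(v: str) -> tuple:
    """Turn '1.17.1' into (1, 17, 1) so versions compare numerically."""
    return tuple(int(part) for part in v.split("."))

def matches_pin(installed: str, pinned: str) -> bool:
    """True if the installed version is exactly the pinned one."""
    return parse_version(installed) == parse_version(pinned)

print(matches_pin("1.17.1", "1.17.1"))  # True
print(matches_pin("1.16.0", "1.17.1"))  # False
```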
Thanks a lot. I've replaced InternLM-XComposer2.5 with another vision-language (VL) model. Therefore, the `MBZUAI/GeoPixel-7B` specified after `--version` in the command `CUDA_VISIBLE_DEVICES=0 python merge_lora_weights_and_save_hf_model.py --version="MBZUAI/GeoPixel-7B" --weight="output/pytorch_model.bin" --save_path="GeoPixel-7B-finetuned"` is no longer applicable....
I've hit the same problem. My QNN SDK is v2.36, higher than 2.14. How did you solve this?
Qualcomm AI Engine Direct SDK (QNN SDK): 2.31.0.250130
qualcomm_neural_processing_sdk: 2.21.0
Hexagon_SDK: 5.5.5.0
ndk: 25.1

This error still occurs:

```
RE5C4FL1:/data/local/tmp/mllm/bin $ ./demo_qwen_npu -m ../models/qwen-1.5-1.8b-chat-int8.mllm
[INFO] Mon Jul 28 18:51:12 2025 [/home/lyg/Codes/mllm/src/backends/qnn/QNNBackend.cpp:118] Backend: libQnnHtp.so
[ERROR] Mon Jul 28...
```