(vicuna) ps@ps[13:56:20]:/data/chat/langchain-ChatGLM2/langchain-ChatGLM-0.1.13$ python webui.py --model-dir local_models --model moss --no-remote-model
INFO 2023-06-08 13:56:28,038-1d: loading model config
llm device: cuda
embedding device: cuda
dir: /data/chat/langchain-ChatGLM2/langchain-ChatGLM-0.1.13
flagging username: 7ab3fa902a0243dab3564ddb86e42266
===================================BUG REPORT===================================
Welcome to...
### Self Checks

- [X] I have searched for existing issues [search for existing issues](https://github.com/langgenius/dify/issues), including closed ones.
- [X] I confirm that I am using English to submit this...
[Badcase]: Qwen2.5-32B-Instruct-GPTQ-Int4 inference produces garbled text ("!!!!!!!!!!!!!!!!!!")
### Model Series

Qwen2.5

### What are the models used?

Qwen2.5-32B-Instruct-GPTQ-Int4

### What is the scenario where the problem happened?

Running Qwen2.5-32B-Instruct-GPTQ-Int4 inference with vLLM, the model output is garbled text ("!!!!!!!!!!!!!!!!!!").

###...
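For context, here is a minimal sketch of how this GPTQ-Int4 checkpoint might be loaded through vLLM's offline Python API to reproduce the report. The original issue does not include the exact serving command, so the model path, `quantization` flag, prompt, and sampling settings below are assumptions rather than the reporter's configuration.

```python
# Minimal reproduction sketch (assumes vLLM is installed and the GPTQ-Int4
# checkpoint is available locally or on the Hugging Face Hub).
from vllm import LLM, SamplingParams

llm = LLM(
    model="Qwen/Qwen2.5-32B-Instruct-GPTQ-Int4",  # GPTQ-Int4 checkpoint (assumed path)
    quantization="gptq",                          # select the GPTQ quantization backend
    dtype="float16",                              # GPTQ kernels run on fp16 activations
)

params = SamplingParams(temperature=0.7, max_tokens=128)
outputs = llm.generate(["Briefly introduce yourself."], params)

# With a healthy checkpoint this prints a normal reply; a run of "!" tokens
# here would reproduce the garbled output described in the issue.
print(outputs[0].outputs[0].text)
```

If this bare snippet already emits runs of "!", the problem likely sits in the quantized weights or the GPTQ kernel path rather than in any chat frontend on top of vLLM, which is one way to narrow down this kind of badcase.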
### Is there an existing issue / discussion for this?

- [X] I have searched the existing issues / discussions

### Is there an answer to this question in the FAQ? ...