WrenAI
How to connect LM Studio locally? What should the .env file and config file contain? Help appreciated
1. When connecting to LM Studio, does the .env file need to be modified, and if so, how?
2. How should the config file be set up?
My current config is below (many thanks in advance):
type: llm
provider: litellm_llm
timeout: 120
models:
- model: lm_studio/deepseek-coder-v2-lite-instruct-mlx
  # api_key: deepseek-coder-v2-lite-instruct-mlx
  api_base: http://172.24.12.23:1234
  kwargs:
    n: 1
    seed: 0
    max_completion_tokens: 4096
    reasoning_effort: low
---
type: embedder
provider: litellm_embedder
models:
- model: lm_studio/text-embedding-bge-m3
  alias: default
  api_base: http://172.24.12.23:1234
  timeout: 120
LM Studio model: deepseek-coder-v2-lite-instruct-mlx, local address: http://172.24.12.23:1234, deployed via OrbStack. A red-light status keeps showing, as in the screenshot below:
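One quick way to rule out connectivity as the cause of the red light: LM Studio serves an OpenAI-compatible API, so its model list endpoint should respond from wherever the WrenAI containers run. A minimal check, assuming the address from the config above:

```shell
# List the models LM Studio is serving; a JSON response confirms the
# endpoint is reachable from this machine (adjust the address if needed).
curl http://172.24.12.23:1234/v1/models
```

Note that from inside an OrbStack container, `localhost` would not reach the host, so the config must use a host-reachable address like the LAN IP above.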
Many thanks to anyone who can help. Could the author also consider making model integration simpler?
@cyyeh could you reply to this one? Thanks
@Marsedward please use WREN_AI_SERVICE_VERSION=0.19.3 in ~/.wrenai/.env and use this config example: https://github.com/Canner/WrenAI/blob/main/wren-ai-service/docs/config_examples/config.lm_studio.yaml
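Concretely, that means adding (or updating) one line in the env file; a minimal sketch, assuming the rest of `~/.wrenai/.env` is left unchanged:

```shell
# ~/.wrenai/.env — pin the AI service image version as suggested above
WREN_AI_SERVICE_VERSION=0.19.3
```

After editing, restart the WrenAI containers so the new version is picked up.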