Failed to add embedding models in LocalAI to Dify
Self Checks
- [X] This is only for bug report, if you would like to ask a question, please head to Discussions.
- [X] I have searched for existing issues, including closed ones.
- [X] I confirm that I am using English to submit this report (I have read and agree to the Language Policy).
- [X] Please do not modify this template :) and fill in all the required fields.
Dify version
0.6.5
Cloud or Self Hosted
Self Hosted (Docker)
Steps to reproduce
1. I downloaded the models and configured them as described in the doc "Deploying LocalAI". However, when I tried to configure the embedding models in Dify, I ran into the problems below.
2. When I query LocalAI with curl, I get a response from the LocalAI Docker container (I did not run the store.py provided by the LocalAI example, so there is no embedding vector). A Python sketch of an equivalent check is shown below.
3. When I try to add the model in Dify, I get "Internal server error".

So I can add LLM models but not embedding models. I believe this is a bug because steps 2 and 3 behave differently...
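For completeness, here is roughly the check from step 2, rewritten as a short Python sketch instead of raw curl (the host IP, port, and model name are placeholders for my local setup; the endpoint follows LocalAI's OpenAI-compatible API):

```python
# Minimal embedding request against LocalAI's OpenAI-compatible API.
# Host IP, port, and model name are placeholders for my local setup.
import requests

resp = requests.post(
    "http://192.168.1.100:8080/v1/embeddings",
    json={"model": "all-MiniLM-L6-v2", "input": "hello world"},
    timeout=30,
)
resp.raise_for_status()
data = resp.json()
# When the embedding backend is fully set up, "data" holds the vectors;
# in my case the request succeeds but contains no embedding vector.
print(data)
```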
✔️ Expected Behavior
Embedding models from LocalAI can be added to Dify.
❌ Actual Behavior
Embedding models from LocalAI cannot be added to Dify; adding one returns "Internal server error".
To address the "Internal server error" when adding embedding models from LocalAI to Dify, ensure the following:
- Use the host machine's IP address: Dify cannot access LocalAI using `localhost` or `127.0.0.1` when deployed in Docker. Use the host machine's IP address, e.g. `http://192.168.1.100:8080`, for the Server URL in Dify's configuration (see the connectivity sketch after this list).
- Correct model configuration: verify that the model type and name in Dify's settings match your LocalAI setup exactly. For embedding models, the model type should be `text-embedding`, and the model name should be exactly as it is in LocalAI.
- Check the `.env` file: the `THREADS` variable should not exceed the number of CPU cores on your machine, to avoid performance issues with LocalAI.
- Inspect LocalAI logs: use `docker logs -f langchain-chroma-api-1` to check for errors or warnings in LocalAI that might indicate setup issues.
- Network settings: ensure your firewall or network settings allow connections on the port used by LocalAI (8080 by default).
- Additional configuration options: Dify now supports more configuration options for LocalAI models. Ensure all fields are correctly filled out, including `completion_type` and `context_size`, if applicable.
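As a concrete starting point for the first and fifth items, the sketch below is a minimal example (the host IP and port are assumptions, not values from your setup): it checks that the LocalAI port is reachable over TCP, then lists the model names LocalAI actually serves via its OpenAI-compatible model-listing endpoint, which you can compare against the name entered in Dify:

```python
# Pre-flight checks for the list above: is LocalAI reachable on its
# port, and which model names does it serve? Host/port are assumptions.
import socket

import requests

HOST, PORT = "192.168.1.100", 8080

# Firewall / network settings: try a plain TCP connection first.
with socket.create_connection((HOST, PORT), timeout=5):
    print(f"port {PORT} on {HOST} is reachable")

# Model name: LocalAI's OpenAI-compatible API can list served models,
# so the name configured in Dify can be checked against this output.
resp = requests.get(f"http://{HOST}:{PORT}/v1/models", timeout=10)
resp.raise_for_status()
print([m["id"] for m in resp.json().get("data", [])])
```

Since the connectivity issue is between containers, running this from inside the Dify API container (e.g. via `docker exec`) is a more faithful test than running it on the host.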
If these steps do not resolve the issue, please provide specific error messages or logs from Dify when the error occurs for further assistance.
Sources
- local embedding model 500 error
- LocalAI configuration embedded model is invalid
- Encountered an error while using ZhipuAI's text-embedding model in Dify.
- Access to url ‘/console/api/datasets/indexing-estimate’ results in a 500 error
- Deploying a local model with Xinference: selecting Embeddings in the configuration panel reports an error, even though the model conforms to the Xinference embed model spec.
- en/tutorials/model-configuration/localai.md
- api/tests/integration_tests/model_runtime/localai/test_embedding.py
- api/core/model_runtime/model_providers/localai/localai.yaml
- api/core/model_runtime/model_providers/localai/text_embedding/text_embedding.py
- zh_CN/guides/model-configuration/localai.md
It seems that I did not use the right configuration...