
Failed to add embedding models in LocalAI to dify

Open banianzr opened this issue 1 year ago • 1 comment

Self Checks

  • [X] This is only for bug report, if you would like to ask a question, please head to Discussions.
  • [X] I have searched for existing issues, including closed ones.
  • [X] I confirm that I am using English to submit this report (I have read and agree to the Language Policy).
  • [X] Please do not modify this template :) and fill in all the required fields.

Dify version

0.6.5

Cloud or Self Hosted

Self Hosted (Docker)

Steps to reproduce

  1. I downloaded the models and configured them as described in the doc Deploying LocalAI. However, when I tried to configure the embedding models in Dify, I encountered the problems below.
  2. When I use curl, I can get a response from the LocalAI Docker container (I did not run the store.py provided in the LocalAI example, so there is no embedding vector yet); see the sketch after this list.
  3. When I try to add the model in Dify, I get "Internal server error", so I can add LLM models but not embedding models.
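
A minimal sketch of the kind of request used in step 2, assuming LocalAI listens on port 8080 and serves an embedding model named bert-cpp-minilm-v6 (substitute your own host and model name):

```bash
# Query LocalAI's OpenAI-compatible embeddings endpoint.
# Host, port, and model name here are assumptions; adjust to your setup.
curl http://localhost:8080/v1/embeddings \
  -H "Content-Type: application/json" \
  -d '{
        "model": "bert-cpp-minilm-v6",
        "input": "A long time ago in a galaxy far, far away"
      }'
```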

I believe this is a bug because the results of steps 2 and 3 are inconsistent: LocalAI responds to curl directly, but Dify cannot add the same model.

✔️ Expected Behavior

Embedding models served by LocalAI can be added to Dify.

❌ Actual Behavior

Embedding models served by LocalAI could not be added to Dify.

banianzr avatar May 06 '24 13:05 banianzr

To address the "Internal server error" when adding embedding models from LocalAI to Dify, ensure the following:

  1. Use the Host Machine's IP Address: When Dify is deployed in Docker, localhost or 127.0.0.1 refers to the Dify container itself, not the host, so Dify cannot reach LocalAI at those addresses. Use the host machine's IP address, e.g., http://192.168.1.100:8080, for the Server URL in Dify's configuration (see the connectivity check after this list).

  2. Correct Model Configuration: Verify that the model type and name in Dify's settings exactly match your LocalAI setup. For embedding models, the model type should be text-embedding, and the model name must match the name LocalAI serves verbatim (a quick way to list the served names is sketched after this list).

  3. Check the .env File: The THREADS variable should not exceed the number of CPU cores on your machine, or LocalAI's performance will suffer (see the example after this list).

  4. Inspect LocalAI Logs: Use docker logs -f langchain-chroma-api-1 to check for errors or warnings in LocalAI that might indicate setup issues.

  5. Network Settings: Ensure your firewall or network settings allow connections on the port used by LocalAI (default is 8080).

  6. Additional Configuration Options: Dify now supports more configuration options for LocalAI models. Ensure all fields are correctly filled out, including completion_type and context_size, if applicable.
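
For points 1, 2, and 5 above, one quick sanity check is to call LocalAI's OpenAI-compatible /v1/models endpoint from inside the Dify API container: if the call succeeds, the Server URL is reachable through Docker networking, and the response lists the exact model names to enter in Dify. The container name docker-api-1, the host IP 192.168.1.100, and the availability of curl inside the container are assumptions; substitute your own values:

```bash
# Find the actual Dify API container name first.
docker ps --format '{{.Names}}' | grep api

# Call LocalAI from inside the Dify API container
# (container name and host IP below are examples).
docker exec -it docker-api-1 \
  curl http://192.168.1.100:8080/v1/models

# A successful response lists the served models, e.g.:
# {"object":"list","data":[{"id":"bert-cpp-minilm-v6","object":"model"}]}
# Use one of these ids verbatim as the model name in Dify.
```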
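
For point 3, a minimal sketch of the relevant line in LocalAI's .env file, assuming an 8-core machine (the value is an example, not a recommendation):

```bash
# LocalAI .env (excerpt): keep THREADS at or below your CPU core count.
THREADS=8
```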

If these steps do not resolve the issue, please provide specific error messages or logs from Dify when the error occurs for further assistance.




dosubot[bot] avatar May 06 '24 14:05 dosubot[bot]

It seems that I did not use the right configuration...

banianzr avatar May 07 '24 06:05 banianzr