After adding a built-in application from the official website to the workspace, changing its model does not take effect
Self Checks
- [X] This is only for bug report, if you would like to ask a question, please head to Discussions.
- [X] I have searched for existing issues, including closed ones.
- [X] I confirm that I am using English to submit this report (I have read and agree to the Language Policy).
- [X] Please do not modify this template :) and fill in all the required fields.
Dify version
0.6.8
Cloud or Self Hosted
Self Hosted (Docker)
Steps to reproduce
Add one of the built-in applications from the official website (question classification + knowledge base + chatbot) to the workspace, then change its model to a qwen model. Testing shows an error: the knowledge retrieval module still reads the gpt-3.5-turbo model, but there is no place in the interface to change that model.
✔️ Expected Behavior
No response
❌ Actual Behavior
No response
To address the issue where your system still tries to access the gpt-3.5-turbo model instead of the modified qwen model after adding a built-in application to your Dify workspace, follow these steps:
1. **Verify model configuration**: Ensure the `qwen-turbo` model is correctly configured in your workspace settings, including the model type and provider.
2. **Update default model settings**: Use the `DefaultModelApi` endpoint to update the default model settings to `qwen-turbo`, ensuring the model type (`llm`) and provider are correctly set.
3. **Check model credentials**: Ensure the `qwen-turbo` model's credentials are correctly saved and validated using the appropriate API endpoints. This is crucial, as the error suggests an issue with credential initialization.
4. **Review model provider configuration**: Confirm that the model provider for `qwen-turbo` is correctly configured in your workspace, verifying the association between the model and its provider.
5. **Adjust model parameters**: Based on your needs, adjust parameters such as `temperature` and `max_tokens` according to the `qwen-turbo.yaml` configuration.
6. **Ensure a privileged role**: Make sure the user has the necessary permissions for operations such as saving model credentials or deleting models.
By following these steps and ensuring all configurations and permissions are correctly set, you should be able to successfully switch to using the qwen-turbo model in your Dify workspace. For detailed instructions on model configuration and management, refer to the Dify documentation.
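One reason the knowledge retrieval step can keep reading `gpt-3.5-turbo` is that it stores its own model setting, separate from the app's main model. As a starting point for debugging, a small helper like the one below can walk an exported app configuration (parsed into a Python dict) and report every path that still names the old model. The function and key names here are illustrative, not part of Dify's API:

```python
# Hypothetical helper: scan an app config dict for stale model references.
# The config shape in the usage example is illustrative; use your app's
# actual exported configuration.

def find_model_refs(config, target, path=""):
    """Return dotted paths of every value in `config` equal to `target`."""
    hits = []
    if isinstance(config, dict):
        for key, value in config.items():
            child = f"{path}.{key}" if path else key
            hits += find_model_refs(value, target, child)
    elif isinstance(config, list):
        for i, value in enumerate(config):
            hits += find_model_refs(value, target, f"{path}[{i}]")
    elif config == target:
        # Leaf value matches the model name we are hunting for.
        hits.append(path)
    return hits


app_config = {
    "model": {"provider": "tongyi", "name": "qwen-turbo"},
    "dataset_configs": {"reranking_model": {"name": "gpt-3.5-turbo"}},
}
print(find_model_refs(app_config, "gpt-3.5-turbo"))
# -> ['dataset_configs.reranking_model.name']
```

Any path it prints is a setting that must be updated separately from the app's main model selector.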
Sources
- Unexpected problems encountered in the use of the knowledge base
- docker/docker-compose.yaml
- en/tutorials/model-configuration/README.md
- api/controllers/console/workspace/models.py
- api/core/model_runtime/model_providers/tongyi/llm/qwen-turbo.yaml
- api/tests/unit_tests/models/test_account.py
- api/core/model_runtime/model_providers/ollama/llm/llm.py
- api/models/provider.py
You need to set up the system reasoning model in the settings section.
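Setting the system reasoning model can also be attempted programmatically. The sketch below assumes the `DefaultModelApi` endpoint from `api/controllers/console/workspace/models.py` is exposed at `/console/api/workspaces/current/default-model` and accepts a `model_settings` payload; both the path and the body shape are assumptions to verify against your Dify 0.6.8 source:

```python
# Hedged sketch: point the workspace's default LLM at a new provider/model
# via Dify's console API. Endpoint path and payload shape are assumptions.
import json
import urllib.request


def build_default_model_payload(provider: str, model: str) -> dict:
    """Build the (assumed) request body for the DefaultModelApi endpoint."""
    return {
        "model_settings": [
            {"model_type": "llm", "provider": provider, "model": model}
        ]
    }


def set_default_llm(base_url: str, token: str, provider: str, model: str) -> int:
    """POST the new default LLM to the console API; returns the HTTP status."""
    body = json.dumps(build_default_model_payload(provider, model)).encode()
    req = urllib.request.Request(
        f"{base_url}/console/api/workspaces/current/default-model",
        data=body,
        method="POST",
        headers={
            "Content-Type": "application/json",
            # Console session token, obtained after logging in to the console.
            "Authorization": f"Bearer {token}",
        },
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.status
```

For example, `set_default_llm("http://localhost:5001", token, "tongyi", "qwen-turbo")` would switch the default reasoning model to Tongyi's qwen-turbo, assuming the endpoint behaves as described above.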