Add headers and pass model parameter to LLM node
Self Checks
- [X] I have searched for existing issues, including closed ones.
- [X] I confirm that I am using English to submit this report (I have read and agree to the Language Policy).
- [X] [FOR CHINESE USERS] Please be sure to submit issues in English, otherwise they will be closed. Thank you! :)
- [X] Please do not modify this template :) and fill in all the required fields.
1. Is this request related to a challenge you're experiencing? Tell me about your story.
- We have an LLM proxy service that supports adding custom headers for request tracking.
- We allow users to choose the model themselves.
2. Additional context or comments
No response
3. Can you help us with this feature?
- [ ] I am interested in contributing to this feature.
Can you give us some examples for better understanding?
- For example, we can add the following request headers for tracking:
  a. x-ai-trace-id
  b. x-ai-user-id
  c. x-ai-metadata
  d. ...
This is a request example:

```bash
curl https://api.xxxxai.com/v1/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $API_KEY" \
  -H "X-AI-TRACE-ID: $TRACE_ID" \
  -H "X-AI-USER-ID: $USER_ID" \
  -H "X-AI-METADATA: $METADATA" \
  -d '{
    "model": "gpt-3.5-turbo-instruct",
    "prompt": "Say this is a test",
    "max_tokens": 7,
    "temperature": 0
  }'
```
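Purely as an illustrative sketch (not an existing Dify feature), the same request can be expressed through the official `openai` Python client, which accepts extra headers via `default_headers`. The endpoint, header names, and environment variables are the hypothetical ones from the curl example above:

```python
# Sketch only: forwarding the tracking headers from the curl example
# through the official openai Python client (v1.x). The X-AI-* headers
# and the proxy endpoint are hypothetical values from this request,
# not an existing Dify capability.
import os

from openai import OpenAI

client = OpenAI(
    api_key=os.environ["API_KEY"],
    base_url="https://api.xxxxai.com/v1",  # the LLM proxy from the example
    default_headers={
        "X-AI-TRACE-ID": os.environ["TRACE_ID"],
        "X-AI-USER-ID": os.environ["USER_ID"],
        "X-AI-METADATA": os.environ["METADATA"],
    },
)

response = client.completions.create(
    model="gpt-3.5-turbo-instruct",
    prompt="Say this is a test",
    max_tokens=7,
    temperature=0,
)
print(response.choices[0].text)
```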
2. For example, on the official OpenAI website you can choose the model when chatting; likewise, the model parameter should not be fixed on the LLM node. A sketch of what that could look like follows. @crazywoola
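As a minimal sketch of a non-fixed model, the model name arrives as a per-request input rather than a hard-coded node setting. `user_selected_model` and the `complete` helper are hypothetical, not existing Dify LLM node parameters; making something like this possible is what the issue asks for:

```python
from openai import OpenAI

# Sketch: the model is a runtime input instead of a fixed setting.
# `user_selected_model` is hypothetical and does not exist in Dify today.
def complete(client: OpenAI, user_selected_model: str, prompt: str) -> str:
    response = client.completions.create(
        model=user_selected_model,  # chosen by the end user, as on the OpenAI site
        prompt=prompt,
        max_tokens=7,
        temperature=0,
    )
    return response.choices[0].text
```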
It can be done like this; however, changing the model at the API level is not the way we want.
Yes, but why not design it to be more flexible? Can you tell me the reason for this design? Is it possible to support adding headers in the LLM node? @crazywoola