[BUG] Differences between webservice prediction endpoint and Langfuse tracing (LLMChain/Structured Output Parser)
Describe the bug There are differences between the webservice output and the Langfuse trace when using LLMChain with JSON output.
In the webservice response the structured JSON output is nested under a "json" key, whereas in Langfuse the same JSON is traced under a "text" key.
Webservice response:
{
"json": {
"go_back_to_menu": false,
"transfer_to_agent": false
},
"question": "hey",
"chatId": "issue-1",
"chatMessageId": "a057c1b8-82c5-4727-9d99-9c0c20d29995",
"sessionId": "issue-1"
}
On Langfuse (LLM Chain span):
{
text: {
go_back_to_menu: false
transfer_to_agent: false
}
}
Note: I use the Structured Output Parser with autofix enabled.
To Reproduce Flowise 1.8.0, LLM Chain with Structured Output Parser (autofix on):
1. Call the prediction endpoint via curl (or Postman) --> structured output is under the "json" key (a minimal call is sketched below)
2. Check the corresponding trace in Langfuse --> the same output is under the "text" key
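For reference, a minimal reproduction sketch in TypeScript; the base URL and <chatflow-id> are placeholders for a local Flowise instance and the flow described above:

```typescript
// Reproduction sketch. Assumptions: Flowise runs locally on port 3000 and
// <chatflow-id> points at a flow using LLM Chain + Structured Output Parser
// (autofix on). Requires Node 18+ for the global fetch.
async function reproduce(): Promise<void> {
    const response = await fetch('http://localhost:3000/api/v1/prediction/<chatflow-id>', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ question: 'hey' })
    })
    const result = await response.json()
    // The endpoint response nests the structured output under "json"...
    console.log(result.json)
    // ...but the Langfuse trace for the same run nests it under "text".
}

reproduce()
```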
Expected behavior The prediction endpoint response and the Langfuse trace should be coherent, i.e. use the same key for the structured output.
Additional context Basic chain with JSON output.
"text" is the key used to trace the output: https://github.com/FlowiseAI/Flowise/blob/main/packages/components/src/handler.ts#L455
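For illustration only (this is not the actual handler.ts code): one possible way to keep the trace key consistent with the endpoint would be to branch on the output type before tracing. buildTraceOutput below is a hypothetical helper:

```typescript
// Hypothetical sketch of a fix, not the existing Flowise implementation:
// trace objects under "json" (matching the prediction endpoint) and keep
// "text" only for plain string completions.
type TraceOutput = { text: string } | { json: Record<string, unknown> }

function buildTraceOutput(output: unknown): TraceOutput {
    // Structured Output Parser results are objects; plain completions are strings.
    if (typeof output === 'object' && output !== null) {
        return { json: output as Record<string, unknown> }
    }
    return { text: String(output) }
}

// Example, mirroring the values from the traces above:
console.log(buildTraceOutput({ go_back_to_menu: false, transfer_to_agent: false }))
// -> { json: { go_back_to_menu: false, transfer_to_agent: false } }
console.log(buildTraceOutput('hey'))
// -> { text: 'hey' }
```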