Xiaopeng Wang
Is there any ETA on this? It seems there's been no progress for a while...
Do you have gunicorn and flask installed? That's a must to run mlflow locally.
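A quick way to verify both are importable in the active environment (a plain-Python check, nothing promptflow-specific):

```python
# Check that the serving dependencies are present in the current environment.
import importlib

for pkg in ("flask", "gunicorn"):
    try:
        importlib.import_module(pkg)
        print(f"{pkg}: installed")
    except ImportError:
        print(f"{pkg}: missing -- install with `pip install {pkg}`")
```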
> "if there is a node after llm node, stream result will be just final output, e.g. result["answer"] will be string."

You can also return a generator in that node, ...
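A minimal sketch of such a node, assuming a recent promptflow where the `tool` decorator lives in `promptflow.core`; the node name and input are illustrative:

```python
# A downstream node that preserves streaming by itself returning a generator.
# `answer` is the (possibly streaming) output of the upstream llm node.
from promptflow.core import tool


@tool
def passthrough_stream(answer):
    if isinstance(answer, str):
        # Upstream already produced a final string; yield it as one chunk.
        yield answer
    else:
        # Upstream streamed: forward each chunk so the flow output streams too.
        for chunk in answer:
            yield chunk
```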
@kenzabenjelloun could you share the opentelemetry-api and opentelemetry-sdk versions installed in your running env? Also, could you share the promptflow-related package versions in your environment and how you install...
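One quick way to collect those versions for pasting into the issue, using only the standard library (`importlib.metadata` is available on Python 3.8+):

```python
# Print the installed versions of the relevant packages.
from importlib.metadata import PackageNotFoundError, version

for pkg in ("opentelemetry-api", "opentelemetry-sdk", "promptflow"):
    try:
        print(pkg, version(pkg))
    except PackageNotFoundError:
        print(pkg, "not installed")
```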
Hi @adampolak-vertex, instead of sending the trace back as part of the response, is it possible to implement your own trace exporter which directly sends the trace to your...
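A minimal sketch of such an exporter, using the public opentelemetry-sdk `SpanExporter` interface; the endpoint URL and payload shape here are assumptions, not a promptflow API:

```python
# Custom exporter that pushes finished spans to your own backend
# instead of returning trace data in the serving response.
import json
import urllib.request

from opentelemetry.sdk.trace.export import SpanExporter, SpanExportResult


class MyBackendExporter(SpanExporter):
    def __init__(self, endpoint="https://example.com/traces"):  # illustrative URL
        self._endpoint = endpoint

    def export(self, spans):
        # Serialize each finished span and POST the batch as JSON.
        payload = json.dumps([json.loads(s.to_json()) for s in spans]).encode()
        req = urllib.request.Request(
            self._endpoint, data=payload,
            headers={"Content-Type": "application/json"},
        )
        try:
            urllib.request.urlopen(req)
            return SpanExportResult.SUCCESS
        except Exception:
            return SpanExportResult.FAILURE

    def shutdown(self):
        pass
```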
If you want to get token info, you can write your own llm tool; in that tool you can parse the response and return both the output message and token info as...
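A hedged sketch of such a tool, assuming the openai 1.x Python client with the API key taken from the environment; the tool name, model, and return shape are illustrative, not a built-in promptflow contract:

```python
# Custom llm tool that returns both the message and the parsed token usage.
from openai import OpenAI
from promptflow.core import tool


@tool
def chat_with_usage(prompt: str) -> dict:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    # Return both pieces so downstream nodes / callers can see token counts.
    return {
        "message": response.choices[0].message.content,
        "usage": {
            "prompt_tokens": response.usage.prompt_tokens,
            "completion_tokens": response.usage.completion_tokens,
            "total_tokens": response.usage.total_tokens,
        },
    }
```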
@adampolak-vertex what you need is still the trace data; please follow the suggested practice and define your own exporter to collect that info. PF doesn't have plans to provide that...
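For completeness, a sketch of wiring a custom exporter into the process, assuming the `MyBackendExporter` class from the earlier sketch and standard opentelemetry-sdk plumbing:

```python
# Register the custom exporter on the tracer provider so spans emitted
# in this process are exported automatically.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(MyBackendExporter()))
trace.set_tracer_provider(provider)
```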
@pamelafox could you share some details about why you want to build your own web app instead of leveraging the PF serving app, which already has good support for different...
+1 on this feature; hope we can get this ready as soon as possible.