Fenil Faldu
I looked into this and found that the AgentOps listener in CrewAI attempts to create a new session during the `CrewKickoffStartedEvent`, but it requires the AgentOps API key to be...
Proposed Approach for Managing Sentry Integration

1. Controlling Sentry
   - A global variable determines whether Sentry is active.
   - This variable can be changed dynamically at runtime.
2. Sentry Setup...
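A minimal sketch of the toggle described above, with illustrative names (`_sentry_enabled`, `capture_event`) standing in for the real Sentry SDK calls:

```python
# Hypothetical sketch: a module-level flag gates Sentry, and can be
# flipped at runtime before any event is sent.

_sentry_enabled = True  # the global variable controlling Sentry

def set_sentry_enabled(enabled: bool) -> None:
    """Flip Sentry on/off dynamically at runtime."""
    global _sentry_enabled
    _sentry_enabled = enabled

def capture_event(event: dict, events_log: list) -> bool:
    """Send the event only when the global flag allows it."""
    if not _sentry_enabled:
        return False
    events_log.append(event)  # stand-in for the real capture call
    return True

log = []
capture_event({"msg": "boom"}, log)     # captured while enabled
set_sentry_enabled(False)
capture_event({"msg": "ignored"}, log)  # dropped after runtime toggle
print(len(log))  # → 1
```

The key design point is that the flag is consulted on every capture, so disabling Sentry takes effect immediately rather than only at setup time.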
Hi @Dwij1704 @areibman, we expect the autogen spans to be generated automatically in this format, correct? Or is there something I'm missing?
I have read the CLA Document and I hereby sign the CLA
@areibman @the-praxs Good to go?
Our integration tests are failing randomly on push, a different one each time. My suspicion: the `validate_trace_spans()` function (which I believe checks for the presence of LLM spans) is synchronous, while...
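If the race is real, the usual fix is to poll for spans with a deadline instead of asserting once. A hedged sketch (the `fetch_spans` callable is hypothetical, not the actual test helper):

```python
import time

def wait_for_spans(fetch_spans, expected: int, timeout: float = 5.0,
                   interval: float = 0.1):
    """Poll for spans instead of asserting immediately: an exporter
    that flushes asynchronously will race a one-shot synchronous check."""
    deadline = time.monotonic() + timeout
    spans = []
    while time.monotonic() < deadline:
        spans = fetch_spans()
        if len(spans) >= expected:
            return spans
        time.sleep(interval)
    raise TimeoutError(f"only {len(spans)} of {expected} spans arrived")
```

This keeps passing runs fast (it returns as soon as the spans land) while giving slow exporter flushes up to the full timeout before the test fails.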
Hi @axiomofjoy @nate-mar @caroger - This was a missing feature in the autogen-agentchat instrumentation: token metrics were never captured or reported. The instrumentation lacked any token-extraction logic, so Phoenix...
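The shape of the missing logic, sketched under assumptions: the `usage` dict layout is illustrative, and the attribute keys follow the OpenInference `llm.token_count.*` naming (worth double-checking against the semantic conventions):

```python
# Hypothetical sketch of token extraction: pull usage counts off a model
# result and map them to OpenInference-style span attributes.

def extract_token_attributes(usage: dict) -> dict:
    """Map a usage payload onto span attributes; missing fields are skipped."""
    attrs = {}
    if (prompt := usage.get("prompt_tokens")) is not None:
        attrs["llm.token_count.prompt"] = prompt
    if (completion := usage.get("completion_tokens")) is not None:
        attrs["llm.token_count.completion"] = completion
    if prompt is not None and completion is not None:
        attrs["llm.token_count.total"] = prompt + completion
    return attrs
```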
The root cause is that the OpenAI Responses API uses different token field names (`input_tokens`, `output_tokens`) than the Chat Completions API (`prompt_tokens`, `completion_tokens`). Since we use LiteLLM's OpenTelemetry span...
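The mismatch can be bridged by normalizing both shapes to one set of names. The field names are those stated above; the helper itself is an illustrative sketch, not the actual fix:

```python
# Hypothetical normalizer: map the Responses API usage shape onto the
# Chat Completions field names so downstream code sees one schema.

def normalize_usage(usage: dict) -> dict:
    if "input_tokens" in usage:  # Responses API shape
        return {
            "prompt_tokens": usage["input_tokens"],
            "completion_tokens": usage["output_tokens"],
        }
    # Chat Completions shape: already uses the canonical names
    return {
        "prompt_tokens": usage["prompt_tokens"],
        "completion_tokens": usage["completion_tokens"],
    }
```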
Hi @nate-mar @nkim500 This turned out to be a LiteLLM-side issue, not a Phoenix one. For streaming calls, LiteLLM wasn't propagating token usage to the spans consumed by OpenAI-Agents/OpenInference, so Phoenix showed...