
🚀 Feature: add prompt caching tokens to OpenAI instrumentation

Open dinmukhamedm opened this issue 10 months ago • 0 comments

Which component is this feature for?

OpenAI Instrumentation

🔖 Feature description

When OpenAI returns usage information, the response includes prompt cache read tokens under `prompt_tokens_details`. We need to add these to the span attributes.

🎤 Why is this feature needed?

For more precise cost tracking, and for consistency with the Anthropic instrumentation.

✌️ How do you aim to achieve this?

Use `SpanAttributes.LLM_USAGE_CACHE_READ_INPUT_TOKENS` to record the cached-token count on the span.
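A minimal sketch of the idea, not the actual openllmetry implementation: extract the cached-token count from an OpenAI-style usage payload and set it as a span attribute. The helper name and the plain-dict attribute store are hypothetical; the attribute key mirrors what `SpanAttributes.LLM_USAGE_CACHE_READ_INPUT_TOKENS` resolves to.

```python
# Hypothetical helper sketching the proposed change; the real instrumentation
# would set this on an OpenTelemetry span rather than a plain dict.
CACHE_READ_INPUT_TOKENS = "gen_ai.usage.cache_read_input_tokens"


def set_cache_usage_attributes(span_attributes: dict, usage: dict) -> None:
    """Copy the cached-prompt-token count from a usage dict into span attributes."""
    details = usage.get("prompt_tokens_details") or {}
    cached = details.get("cached_tokens")
    if cached is not None:
        span_attributes[CACHE_READ_INPUT_TOKENS] = cached


# Usage payload shaped like an OpenAI chat completion response:
usage = {
    "prompt_tokens": 2048,
    "completion_tokens": 120,
    "total_tokens": 2168,
    "prompt_tokens_details": {"cached_tokens": 1024},
}

attrs = {}
set_cache_usage_attributes(attrs, usage)
print(attrs)  # {'gen_ai.usage.cache_read_input_tokens': 1024}
```

Guarding with `or {}` and the `is not None` check keeps the instrumentation safe for older responses that omit `prompt_tokens_details` entirely.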

🔄️ Additional Information

No response

👀 Have you spent some time to check if this feature request has been raised before?

  • [x] I checked and didn't find a similar issue

Are you willing to submit PR?

Yes, I am willing to submit a PR!

dinmukhamedm · Apr 18 '25 16:04