
feat: display Google Gemini cached token stats

Open thisisryanswift opened this issue 4 weeks ago • 8 comments

Closes https://github.com/anomalyco/opencode/issues/6851

What

One-line fix to read cached token counts from Google's metadata so they show up in session stats.

Why

Google returns cached token counts in a different spot than Anthropic:

  • Anthropic: usage.cachedInputTokens
  • Google: providerMetadata.google.usageMetadata.cachedContentTokenCount

Implicit caching was already working server-side (and saving money); we just weren't displaying it.

The fix

- const cachedInputTokens = input.usage.cachedInputTokens ?? 0
+ const cachedInputTokens = input.usage.cachedInputTokens ?? 
+   (input.metadata?.["google"] as any)?.usageMetadata?.cachedContentTokenCount ?? 0
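
A minimal standalone sketch of that fallback (the type names here are illustrative, not opencode's actual types): prefer the AI SDK's standard `cachedInputTokens` field, and fall back to Google's provider metadata where Gemini reports cached tokens.

```typescript
// Illustrative types, not opencode's real ones.
type Usage = { cachedInputTokens?: number };
type ProviderMetadata = Record<string, any> | undefined;

// Prefer the standard AI SDK field (populated by e.g. Anthropic); fall back to
// Google's provider metadata, where Gemini reports cached token counts.
function getCachedInputTokens(usage: Usage, metadata: ProviderMetadata): number {
  return (
    usage.cachedInputTokens ??
    metadata?.["google"]?.usageMetadata?.cachedContentTokenCount ??
    0
  );
}

console.log(getCachedInputTokens({ cachedInputTokens: 42 }, undefined)); // 42
console.log(
  getCachedInputTokens({}, { google: { usageMetadata: { cachedContentTokenCount: 7 } } })
); // 7
console.log(getCachedInputTokens({}, { google: {} })); // 0
```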

Tested

Verified locally with a couple of Gemini conversations.


Future: Explicit Caching

Google has two caching modes:

  • Implicit (what we're using): Automatic, server-side, probabilistic
  • Explicit: Guaranteed cache hits, requires managing cache objects

For explicit caching, we'd need:

  1. Add @google/generative-ai dependency
  2. Use GoogleAICacheManager to create/update/delete caches with TTL
  3. Pass cache name via providerOptions.google.cachedContent

This is a bigger lift but would give guaranteed savings. See:

thisisryanswift avatar Jan 04 '26 20:01 thisisryanswift

so the Google AI SDK provider doesn't track it?

rekram1-node avatar Jan 04 '26 22:01 rekram1-node

Doesn't seem like it, at least locally. The before/after of this change is that I can now go back to sessions and actually see a cached token amount.

Could be something upstream in Vercel's AI SDK?

Transparently, I mostly vibe coded here, though I did try to be intentional. I ultimately rolled back my initial issue: I thought it wasn't caching at all, but really it just wasn't reporting/recording the caching.

thisisryanswift avatar Jan 04 '26 22:01 thisisryanswift

What provider are you using? Google directly? Any plugins?

rekram1-node avatar Jan 05 '26 01:01 rekram1-node

Google via API key. The only plugin is a notifier I vibe coded. None of the Antigravity or Gemini CLI auth provider plugins.

thisisryanswift avatar Jan 05 '26 02:01 thisisryanswift

Hmm, I tested this several times and couldn't get any difference in token counting, but it could just be happenstance.

rekram1-node avatar Jan 05 '26 06:01 rekram1-node

So I don't think it will change your total token count in the TUI. But this change will correctly identify cached tokens, so it affects the price displayed in the TUI, and the cached token counts are there if someone goes back and analyzes their sessions. Previously, Gemini was showing zero cached tokens (at least it was for me locally) when you went back and looked at old sessions; the implicit/automatic caching was just never recorded.
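
To make the pricing impact concrete, here's a sketch of a cost calculation that bills cached input tokens at a discount. The rates are placeholders (not real Gemini pricing), and it assumes cached tokens are a subset of the total input tokens; the point is only that a cached count stuck at zero overstates the price.

```typescript
// Hypothetical cost model: cached input tokens are assumed to be a subset of
// input tokens, billed at a discounted rate. Rates are per 1M tokens.
interface TokenCounts { input: number; cached: number; output: number }
interface Rates { input: number; cachedInput: number; output: number }

function sessionCost(t: TokenCounts, r: Rates): number {
  const uncached = t.input - t.cached;
  return (uncached * r.input + t.cached * r.cachedInput + t.output * r.output) / 1_000_000;
}

// Placeholder rates, chosen only so the arithmetic is easy to follow.
const rates: Rates = { input: 0.5, cachedInput: 0.125, output: 2 };

// Before the fix: cached reads as 0, so everything bills at the full input rate.
console.log(sessionCost({ input: 100_000, cached: 0, output: 2_000 }, rates));      // ~0.054
// After the fix: 80k of those input tokens are recognized as cached.
console.log(sessionCost({ input: 100_000, cached: 80_000, output: 2_000 }, rates)); // ~0.024
```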

thisisryanswift avatar Jan 05 '26 16:01 thisisryanswift

Do you think this change will apply the fix to the Vertex AI connections, too?

danchurko avatar Jan 06 '26 16:01 danchurko

Do you think this change will apply the fix to the Vertex AI connections, too?

IIRC, caching is generally per model rather than per provider, though there's a mix of both. If you use Anthropic Claude models via Vertex, I think the current system will already cache correctly, but I didn't test this myself. If you use Gemini via Vertex, my assumption is that server-side implicit caching is already working automatically, but it suffers the same fate this fixes: opencode doesn't record it. Not 100% sure.

thisisryanswift avatar Jan 07 '26 00:01 thisisryanswift