
feat(server + client): streaming mutations and queries over HTTP

Open · KATT opened this issue 1 year ago · 6 comments

  • Closes #4477
  • Closes #4911
  • Partially closes #544 (this PR is not SSE but long-running stuff over HTTP at least)

🎯 Changes

  • Replace httpBatchStreamLink with an implementation that also handles async generators and deferred promises
  • Colocate the serializer and deserializer so they're not spread across @trpc/client and @trpc/server
  • It's basically a simplified variant of tupleson that should be easier to fit into the codebase, with fewer edge cases
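The core idea can be illustrated with a minimal sketch. This is not the actual tRPC wire format or API; `encode`/`decode` and the newline-delimited JSON framing are simplifications for illustration. Values yielded by an async generator on the server are flushed to the client one chunk at a time and re-assembled into an async iterable, so the caller can consume results before the response body has finished:

```typescript
// Illustrative sketch only -- not the actual tRPC wire format or API.
// It shows the core idea of the PR: values yielded by an async generator on
// the server are streamed to the client chunk by chunk and re-assembled into
// an async iterable, instead of buffering the whole response body.

// A server-side procedure whose result is an async generator.
async function* serverProcedure(): AsyncGenerator<number> {
  yield 1;
  yield 2;
  yield 3;
}

// "Serializer": each yielded value becomes one newline-delimited JSON chunk.
async function* encode(source: AsyncIterable<unknown>): AsyncGenerator<string> {
  for await (const value of source) {
    yield JSON.stringify(value) + "\n";
  }
}

// "Deserializer": the client re-assembles chunks back into values as they arrive.
async function* decode(chunks: AsyncIterable<string>): AsyncGenerator<unknown> {
  for await (const chunk of chunks) {
    yield JSON.parse(chunk);
  }
}

// The client can act on each value before the stream has ended.
async function consume(): Promise<unknown[]> {
  const received: unknown[] = [];
  for await (const value of decode(encode(serverProcedure()))) {
    received.push(value);
  }
  return received; // [1, 2, 3]
}
```

Colocating `encode` and `decode` in one place is what keeps the framing from being spread across `@trpc/client` and `@trpc/server`.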

Other stuff / notes

  • I untangled some of the hornets' nest that is httpLink / httpBatchLink / httpBatchStreamLink - the functionality was far too generic, so I've made each link simpler and more single-purpose
  • Updated docs
    • https://www-git-05-01-stream-trpc.vercel.app/docs/migrate-from-v10-to-v11#reverse-chronological-changelog
    • https://www-git-05-01-stream-trpc.vercel.app/docs/client/links/httpBatchStreamLink#generators
  • We should probably add support for generators in subscriptions too (done in #5713)
  • Not tested too much yet, especially around cancellation, stream backpressure, etc.

KATT · May 03 '24 11:05

Diagnostics Comparison

Numbers

| Metric | PR | next |
| --- | --- | --- |
| Files | 798 | 798 (➖ 0) |
| Lines of Library | 40,640 | 40,640 (➖ 0) |
| Lines of Definitions | 120,184 | 120,086 (🔺 98) |
| Lines of TypeScript | 4,967 | 4,967 (➖ 0) |
| Lines of JavaScript | 0 | 0 (➖ 0) |
| Lines of JSON | 0 | 0 (➖ 0) |
| Lines of Other | 0 | 0 (➖ 0) |
| Identifiers | 175,981 | 175,837 (🔺 144) |
| Symbols | 109,421 | 109,350 (🔺 71) |
| Types | 89 | 89 (➖ 0) |
| Instantiations | 0 | 0 (➖ 0) |
| Memory used | 174,212 | 177,313 (🔽🟢 -3,101) |
| Assignability cache size | 0 | 0 (➖ 0) |
| Identity cache size | 0 | 0 (➖ 0) |
| Subtype cache size | 0 | 0 (➖ 0) |
| Strict subtype cache size | 0 | 0 (➖ 0) |

Timings and averages

| Metric | PR | next |
| --- | --- | --- |
| max (s) | 4.353 | 4.314 (🔺 0.04) |
| min (s) | 4.353 | 4.314 (🔺 0.04) |
| avg (s) | 4.353 | 4.314 (🔺 0.04) |
| median (s) | 4.353 | 4.314 (🔺 0.04) |
| length | 1 | 1 (➖ 0) |
unstable timings

Unstable

Timings in this section are not reliable

| Metric | PR | next |
| --- | --- | --- |
| I/O Read time | 0.05 | 0.04 (🔺 0.01) |
| Parse time | 0.7 | 0.72 (🔽🟢 -0.02) |
| ResolveTypeReference time | 0.03 | 0.03 (➖ 0) |
| ResolveModule time | 0.11 | 0.1 (🔺 0.01) |
| ResolveLibrary time | 0.01 | 0.02 (🔽🟢 -0.01) |
| Program time | 1.02 | 1.04 (🔽🟢 -0.02) |
| Bind time | 0.42 | 0.41 (🔺 0.01) |
| Total time | 1.43 | 1.45 (🔽🟢 -0.02) |

github-actions[bot] · May 03 '24 11:05

The latest updates on your projects. Learn more about Vercel for Git ↗︎

| Name | Status | Preview | Comments | Updated (UTC) |
| --- | --- | --- | --- | --- |
| next-prisma-starter | ✅ Ready (Inspect) | Visit Preview | | May 19, 2024 4:08pm |
| og-image | ✅ Ready (Inspect) | Visit Preview | 💬 Add feedback | May 19, 2024 4:08pm |
| trpc-sse | ❌ Failed (Inspect) | | | May 19, 2024 4:08pm |
| www | ✅ Ready (Inspect) | Visit Preview | 💬 Add feedback | May 19, 2024 4:08pm |

vercel[bot] · May 03 '24 11:05

Super, super cool!

Some high-level thoughts from running this in prod:

  1. I needed to add some sort of heartbeat that was sent every few seconds and just ignored by the client. This will obviously vary based on the infra you're hosting on, but many load balancers will close the socket if nothing is sent for a few seconds, whereas they won't if you just pass a few bytes.
  2. The socket being closed from the client/server side was annoying. I needed to race against the socket-closed event inside the iterators. There were a ton of edge cases around this which ended up being a pain to debug. Not sure if you've seen them all.
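The heartbeat workaround described in point 1 could be sketched like this. Note that `withHeartbeat`, the `HEARTBEAT` sentinel, and the interval are inventions for this sketch, not tRPC APIs:

```typescript
// Illustrative sketch of the heartbeat idea: wrap a source generator so a
// sentinel value is emitted whenever the source has been quiet for too long,
// keeping intermediaries (load balancers, proxies) from closing an idle socket.
const HEARTBEAT = Symbol("heartbeat");

async function* withHeartbeat<T>(
  source: AsyncIterable<T>,
  intervalMs: number,
): AsyncGenerator<T | typeof HEARTBEAT> {
  const iterator = source[Symbol.asyncIterator]();
  // Hold on to the pending next() so no real value is ever dropped.
  let pending = iterator.next();
  while (true) {
    let timer!: ReturnType<typeof setTimeout>;
    const timeout = new Promise<typeof HEARTBEAT>((resolve) => {
      timer = setTimeout(() => resolve(HEARTBEAT), intervalMs);
    });
    // Race the next real value against the timer.
    const result = await Promise.race([pending, timeout]);
    clearTimeout(timer);
    if (result === HEARTBEAT) {
      yield HEARTBEAT; // ignored by the client, but keeps bytes flowing
      continue; // keep waiting on the same pending next()
    }
    if (result.done) return;
    yield result.value;
    pending = iterator.next();
  }
}
```

A client consuming the stream would simply skip `HEARTBEAT` values; the extra bytes exist only to stop intermediaries from timing out an idle connection.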

iamnafets · May 16 '24 17:05

Another thing I found useful was retaining the query and mutation methods in the client, but having them instead just use the last value. This meant I could re-use the same methods for streaming and non-streaming applications. I instead had mutateGenerator and queryGenerator for when I explicitly needed to use those.

I don't feel strongly about this, but just a thought.
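The "just use the last value" behaviour described above could be sketched with a hypothetical helper. Here `lastValue` is an invented name, and `queryGenerator` stands in for the generator-based method mentioned in the comment; neither is a tRPC client API:

```typescript
// Hypothetical helper, not part of tRPC: drain an async iterable and resolve
// with its final value, so a streaming procedure can also be consumed with
// plain request/response semantics.
async function lastValue<T>(source: AsyncIterable<T>): Promise<T> {
  let last: T | undefined;
  let seen = false;
  for await (const value of source) {
    last = value;
    seen = true;
  }
  if (!seen) throw new Error("stream ended without yielding a value");
  return last as T;
}

// A streaming procedure that reports progress and ends with a final result.
async function* queryGenerator() {
  yield { progress: 0.5 };
  yield { progress: 1, result: "done" };
}

// lastValue(queryGenerator()) resolves with the final yielded object, so the
// same procedure serves both streaming and non-streaming callers.
```

With this shape, `query()` could be built on top of `queryGenerator()` by awaiting `lastValue`, which is the re-use the comment describes.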

iamnafets · May 16 '24 17:05

Hey thanks @iamnafets

> Some high-level thoughts from running this in prod:
>
>   1. I needed to add some sort of heartbeat that was sent every few seconds and just ignored by the client. This will obviously vary based on the infra you're hosting on, but many load balancers will close the socket if nothing is sent for a few seconds, whereas they won't if you just pass a few bytes.

The idea is not to use queries and mutations for "infinite" iterators - we expect them to stream for a finite time and then end. What you're describing is a subscription (I think), which is implemented in #5713

>   2. The socket being closed from the client/server side was annoying. I needed to race against the socket-closed event inside the iterators. There were a ton of edge cases around this which ended up being a pain to debug. Not sure if you've seen them all.

I know. I've seen some, maybe not all 😬

> Another thing I found useful was retaining the query and mutation methods in the client, but having them instead just use the last value. This meant I could re-use the same methods for streaming and non-streaming applications. I instead had mutateGenerator and queryGenerator for when I explicitly needed to use those.
>
> I don't feel strongly about this, but just a thought.

Again, this feels like a subscription and not a query/mutation

KATT · May 17 '24 14:05

> Hey thanks @iamnafets
>
> > Some high-level thoughts from running this in prod:
> >
> >   1. I needed to add some sort of heartbeat that was sent every few seconds and just ignored by the client. This will obviously vary based on the infra you're hosting on, but many load balancers will close the socket if nothing is sent for a few seconds, whereas they won't if you just pass a few bytes.
>
> The idea is not to use queries and mutations for "infinite" iterators - we expect them to stream for a finite time and then end. What you're describing is a subscription (I think), which is implemented in #5713

Yeah, I'm still talking about finite queries here. We saw hangups happening as soon as 15s, which was quite annoying to debug. You can of course just handle this as a user, but it's not exactly a user concern.

iamnafets · May 17 '24 15:05

This pull request has been locked because we are very unlikely to see comments on closed issues. If you think this PR is still necessary, create a new one with the same branch. Thank you.

github-actions[bot] · May 21 '24 18:05