feat(server + client): streaming mutations and queries over HTTP
- Closes #4477
- Closes #4911
- Partially closes #544 (this PR is not SSE, but it at least enables long-running responses over HTTP)
🎯 Changes
- Replace `httpBatchStreamLink` with an implementation that also handles async generators and deferred promises
- Colocate the serializer and deserializer so they're not spread across `@trpc/client` and `@trpc/server` - it's basically a simplified variant of tupleson that should be easier to fit in, with fewer edge cases
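To illustrate the colocated serializer/deserializer idea, here is a minimal, self-contained sketch (all names are hypothetical, not the actual tRPC internals): an async generator is flushed to newline-delimited JSON chunks and re-hydrated lazily on the other side.

```typescript
// Hypothetical sketch of a colocated serialize/deserialize pair:
// stream an async iterable as newline-delimited JSON and rebuild it.
async function serializeToLines(
  iter: AsyncIterable<unknown>,
): Promise<string[]> {
  const lines: string[] = [];
  for await (const value of iter) {
    lines.push(JSON.stringify(value)); // one JSON chunk per line
  }
  return lines;
}

async function* deserializeLines(
  lines: Iterable<string>,
): AsyncGenerator<unknown> {
  for (const line of lines) {
    yield JSON.parse(line); // re-hydrate each chunk lazily
  }
}

async function roundtrip(): Promise<unknown[]> {
  async function* source() {
    yield { n: 1 };
    yield { n: 2 };
  }
  const lines = await serializeToLines(source());
  const out: unknown[] = [];
  for await (const value of deserializeLines(lines)) out.push(value);
  return out;
}
```

Keeping both halves in one place makes it much easier to keep the wire format symmetric than when the logic is split across two packages.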
Other stuff / notes
- I untangled some of the hornets' nest that is `httpBatchStream` / `httpLink` / `httpBatchStreamLink` - the functionality was way too generic and I've made each link a bit more dumb
- Updated docs
- https://www-git-05-01-stream-trpc.vercel.app/docs/migrate-from-v10-to-v11#reverse-chronological-changelog
- https://www-git-05-01-stream-trpc.vercel.app/docs/client/links/httpBatchStreamLink#generators
- We should probably add support for generators in subscriptions too (done in #5713)
- Not tested too much - especially around cancellation, stream backpressure, etc.
Diagnostics Comparison
Numbers
| Metric | PR | next |
|---|---|---|
| Files | 798 | 798 (➖ 0) |
| Lines of Library | 40,640 | 40,640 (➖ 0) |
| Lines of Definitions | 120,184 | 120,086 (🔺 98) |
| Lines of TypeScript | 4,967 | 4,967 (➖ 0) |
| Lines of JavaScript | 0 | 0 (➖ 0) |
| Lines of JSON | 0 | 0 (➖ 0) |
| Lines of Other | 0 | 0 (➖ 0) |
| Identifiers | 175,981 | 175,837 (🔺 144) |
| Symbols | 109,421 | 109,350 (🔺 71) |
| Types | 89 | 89 (➖ 0) |
| Instantiations | 0 | 0 (➖ 0) |
| Memory used | 174,212 | 177,313 (🔽🟢 -3,101) |
| Assignability cache size | 0 | 0 (➖ 0) |
| Identity cache size | 0 | 0 (➖ 0) |
| Subtype cache size | 0 | 0 (➖ 0) |
| Strict subtype cache size | 0 | 0 (➖ 0) |
Timings and averages
| Metric | PR | next |
|---|---|---|
| max (s) | 4.353 | 4.314 (🔺 0.04) |
| min (s) | 4.353 | 4.314 (🔺 0.04) |
| avg (s) | 4.353 | 4.314 (🔺 0.04) |
| median (s) | 4.353 | 4.314 (🔺 0.04) |
| length | 1 | 1 (➖ 0) |
Unstable timings
Timings are not reliable in here
| Metric | PR | next |
|---|---|---|
| I/O Read time | 0.05 | 0.04 (🔺 0.01) |
| Parse time | 0.7 | 0.72 (🔽🟢 -0.02) |
| ResolveTypeReference time | 0.03 | 0.03 (➖ 0) |
| ResolveModule time | 0.11 | 0.1 (🔺 0.01) |
| ResolveLibrary time | 0.01 | 0.02 (🔽🟢 -0.01) |
| Program time | 1.02 | 1.04 (🔽🟢 -0.02) |
| Bind time | 0.42 | 0.41 (🔺 0.01) |
| Total time | 1.43 | 1.45 (🔽🟢 -0.02) |
The latest updates on your projects. Learn more about Vercel for Git ↗︎
| Name | Status | Preview | Comments | Updated (UTC) |
|---|---|---|---|---|
| next-prisma-starter | ✅ Ready (Inspect) | Visit Preview | | May 19, 2024 4:08pm |
| og-image | ✅ Ready (Inspect) | Visit Preview | 💬 Add feedback | May 19, 2024 4:08pm |
| trpc-sse | ❌ Failed (Inspect) | | | May 19, 2024 4:08pm |
| www | ✅ Ready (Inspect) | Visit Preview | 💬 Add feedback | May 19, 2024 4:08pm |
Super, super cool!
Some high-level thoughts from running this in prod:
- I needed to add some sort of heartbeat which was sent every few seconds and just ignored by the client. This will obviously vary based on the infra you're hosting with, but many load balancers will close the socket if nothing is being sent for a few seconds whereas it wouldn't if you just passed a few bytes.
- The socket being closed from the client/server was annoying. I needed to race with the socket closed event inside the iterators. There were a ton of edge cases around this which ended up being a pain to debug. Not sure if you've seen them all.
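The heartbeat idea above can be sketched as a wrapper that races the source iterator against a timer and emits a filler value the client ignores (a hypothetical sketch, not part of this PR):

```typescript
// Hypothetical heartbeat wrapper: if the source doesn't produce a value
// within `intervalMs`, emit a heartbeat marker the client can discard,
// so idle sockets keep seeing bytes and load balancers don't close them.
const HEARTBEAT = Symbol('heartbeat');

async function* withHeartbeat<T>(
  source: AsyncIterator<T>,
  intervalMs: number,
): AsyncGenerator<T | typeof HEARTBEAT> {
  let next = source.next();
  while (true) {
    const timer = new Promise<'timeout'>((resolve) =>
      setTimeout(() => resolve('timeout'), intervalMs),
    );
    const result = await Promise.race([next, timer]);
    if (result === 'timeout') {
      yield HEARTBEAT; // keep the connection warm, then keep waiting
      continue;
    }
    if (result.done) return;
    yield result.value;
    next = source.next();
  }
}
```

The interval would need tuning per infrastructure; some load balancers close idle connections within tens of seconds.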
Another thing I found useful was retaining the query and mutation methods in the client, but having them instead just use the last value. This meant I could re-use the same methods for streaming and non-streaming applications. I instead had mutateGenerator and queryGenerator for when I explicitly needed to use those.
I don't feel strongly about this, but just a thought.
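The "just use the last value" suggestion could be a thin client-side helper along these lines (a hypothetical sketch; `lastValue` is not an API in this PR):

```typescript
// Hypothetical helper for the suggestion above: drain a streamed
// iterable and resolve with its final value, so the same call site
// can serve both streaming and non-streaming procedures.
async function lastValue<T>(iter: AsyncIterable<T>): Promise<T> {
  let last: T | undefined;
  let seen = false;
  for await (const value of iter) {
    last = value;
    seen = true;
  }
  if (!seen) throw new Error('iterable produced no values');
  return last as T;
}
```

With this shape, `query`/`mutate` could stay promise-returning while the generator variants expose the intermediate values.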
Hey thanks @iamnafets
> Some high-level thoughts from running this in prod:
> - I needed to add some sort of heartbeat which was sent every few seconds and just ignored by the client. This will obviously vary based on the infra you're hosting with, but many load balancers will close the socket if nothing is being sent for a few seconds whereas it wouldn't if you just passed a few bytes.
The idea is not to use queries and mutations for "infinite" iterators - we expect them to stream continuously and then end. What you're describing is a subscription (I think) which is implemented in #5713
> - The socket being closed from the client/server was annoying. I needed to race with the socket closed event inside the iterators. There were a ton of edge cases around this which ended up being a pain to debug. Not sure if you've seen them all.
I know. I've seen some, maybe not all 😬
> Another thing I found useful was retaining the `query` and `mutation` methods in the client, but having them instead just use the last value. This meant I could re-use the same methods for streaming and non-streaming applications. I instead had `mutateGenerator` and `queryGenerator` for when I explicitly needed to use those. I don't feel strongly about this, but just a thought.
Again, this feels like a subscription and not a query/mutation
Hey thanks @iamnafets
> Some high-level thoughts from running this in prod:
> - I needed to add some sort of heartbeat which was sent every few seconds and just ignored by the client. This will obviously vary based on the infra you're hosting with, but many load balancers will close the socket if nothing is being sent for a few seconds whereas it wouldn't if you just passed a few bytes.
>
> The idea is not to use queries and mutations for "infinite" iterators - we expect them to stream continuously and then end. What you're describing is a subscription (I think) which is implemented in #5713
Yeah, I'm still talking about finite queries here. We saw hangups happening as soon as 15s, which was quite annoying to debug. You can of course just handle this as a user, but it's not exactly a user concern.
This pull request has been locked because we are very unlikely to see comments on closed issues. If you think this PR is still necessary, create a new one with the same branch. Thank you.