Use a ReadableStream with byte source (formerly called ReadableByteStream) for .body
/cc @yutakahirano
ReadableByteStream has been merged into the ReadableStream interface. It's time to update the ReadableStream section of the Fetch spec (https://fetch.spec.whatwg.org/#readablestream) to adopt the updated ReadableStream and construct a ReadableStream with the byte handling feature (i.e., getReader() returns a ReadableStreamBYOBReader when called with { mode: "byob" }).
As a first step, we don't need to use the full BYOB responding API; we can just keep calling the same enqueue() method.
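For reference, here is a minimal sketch of what the byte-handling feature looks like from script, assuming only the standard Streams API (the chunk contents are made up):

```js
// Sketch only, not Fetch spec text: a readable byte stream is constructed
// with type: "bytes", and getReader({ mode: "byob" }) then returns a
// ReadableStreamBYOBReader.
const stream = new ReadableStream({
  type: "bytes",
  start(controller) {
    // As noted above, the first step can keep using plain enqueue().
    controller.enqueue(new Uint8Array([1, 2, 3]));
    controller.close();
  },
});

const reader = stream.getReader({ mode: "byob" });
const { value, done } = await reader.read(new Uint8Array(1024));
// `value` is a Uint8Array view over the transferred buffer, holding the
// bytes that were available (here, 3 of them).
```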
Currently the fetch spec allows constructing a Response with any ReadableStream, which can thus be enqueued with ArrayBuffers, Uint8Arrays or potatoes.
Restricting the Response body to a ReadableStream with a byte source would remove the possibility to enqueue potatoes, leaving space for ArrayBuffer and ArrayBufferView.
But enqueuing anything other than a Uint8Array would still lead to a TypeError when consuming the body, since "read all bytes" requires Uint8Array objects. Can we remove that restriction? That would make it consistent with the ability to create a Response from any ArrayBuffer/ArrayBufferView.
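To illustrate the behaviour described above (the chunk value and handling are purely illustrative):

```js
// Construction succeeds with an arbitrary chunk, but consuming the body
// fails, because "read all bytes" only accepts Uint8Array chunks.
const response = new Response(
  new ReadableStream({
    start(controller) {
      controller.enqueue("potato"); // not a Uint8Array
      controller.close();
    },
  })
);

response.arrayBuffer().catch((err) => {
  console.log(err instanceof TypeError); // true
});
```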
I thought we wanted to require Uint8Array?
Yeah, a while ago we discussed whether we should perform some auto-conversion from array buffer views or blobs or strings, and decided to settle on being conservative and just allowing Uint8Array. I think the discussion was at https://github.com/yutakahirano/fetch-with-streams/issues/53.
"read all bytes" is not really doing any conversion, it merely appends a Uint8Array to a byte sequence. Any BufferSource would be as easily appendable there.
I do not see what is gained with this restriction. I can see how inconsistent it might be if Response gets restricted to a byte ReadableStream.
https://github.com/yutakahirano/fetch-with-streams/issues/53 seems mostly about upload.
Yeah, though a Response object created in a service worker feeding a document/worker API is very similar to upload. I think we should try to keep them consistent.
@yutakahirano is this fixed?
Not yet.
This might not happen quickly, as we want to run an experiment in Chrome to ensure it is web-compatible to use a byte stream. In theory there is no problem, but AFAIK it's never been tested and it's not something we can afford to break.
It seems we should do this before upload streams though. No reason not to have the best-in-class there.
How will Request.clone() and Response.clone() work? Currently, to clone a body, we tee the body's stream. But teeing always returns two regular streams, i.e. non-byte streams. It looks like we first need to define how to tee a readable byte stream into two new readable byte streams.
This might also affect other parts of the specification. For example, in this suggestion on #1144, we'd also want to clone a Request.
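For context, a rough sketch of the behaviour in question, assuming clone() keeps being specified in terms of teeing the body stream:

```js
// Both Response objects end up backed by the two tee'd branches, and both
// bodies stay independently readable.
const original = new Response(new Uint8Array([1, 2, 3]));
const clone = original.clone();

const [a, b] = await Promise.all([
  original.arrayBuffer(),
  clone.arrayBuffer(),
]);

// If tee() only ever produced plain (non-byte) streams, neither branch could
// hand out a BYOB reader, which is the gap pointed out above.
```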
I don't understand what is proposed initially and why it blocks upload streams.
- Using ReadableStreamBYOBReader is a part of how we handle a response.
- The Uint8Array restriction only comes into play when we upload streams in HTTP-network fetch and when a service worker accepts event.respondWith() with a non-Uint8Array stream.
> The Uint8Array restriction only comes into play when we upload streams in HTTP-network fetch and when a service worker accepts event.respondWith() with a non-Uint8Array stream.
I'm not sure if I understand the comment, but a streaming request body containing a non-Uint8Array chunk should result in an errored stream in the service worker.
@yoichio if ReadableStreamBYOBReader only makes sense for reading (I'm not sure, to be clear, I haven't looked into this in detail), wouldn't it at least impact requests in service worker fetch events?
I understood that ReadableStreamBYOBReader on the response was not a core part of this issue, but that accepting only Uint8Array as a streaming request body was.
> but a streaming request body containing a non-Uint8Array chunk should result in an errored stream in the service worker.
However I still don't understand this behavior. @yutakahirano, what part of the spec declares that?
Sorry, that part is underspecified. We use "create a proxy" in the spec, but it doesn't match our mental model perfectly. The request and response body transferring between the page and the service worker should reject chunks other than Uint8Arrays, and it doesn't have to preserve the chunk boundaries.
@yutakahirano good point, it seems that should be somewhat straightforward to fix with the new "read incrementally" operation.
Should it be done by changing the "Let requestForServiceWorker be a clone of request." part in HTTP fetch to something that also checks the chunk types?
I suppose it's really only observable there, yeah. The network side already does the type check. In retrospect it would have been nicer if we could have done this closer to the source for both Request and Response objects, but so be it I guess. (I don't think we can change that anymore as it would have to change object identity.)
I was thinking about creating a TransformStream whose transformer
- checks each chunk's type, and
- may split / combine chunks.
It may be good to talk with @ricea.
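A rough sketch of that idea, assuming a hypothetical target chunk size (the actual size is discussed just below):

```js
// Sketch only, not spec text. CHUNK_SIZE is a made-up target; the transformer
// rejects non-Uint8Array chunks and re-slices the rest, which is allowed
// because chunk boundaries need not be preserved.
const CHUNK_SIZE = 64 * 1024;

const normalizeBody = new TransformStream({
  transform(chunk, controller) {
    if (!(chunk instanceof Uint8Array)) {
      // Throwing here errors the stream, matching "should result in an
      // errored stream" above.
      throw new TypeError("Body chunks must be Uint8Array objects");
    }
    for (let offset = 0; offset < chunk.byteLength; offset += CHUNK_SIZE) {
      controller.enqueue(chunk.subarray(offset, offset + CHUNK_SIZE));
    }
  },
});

// Usage sketch: bodyStream.pipeThrough(normalizeBody)
```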
> I was thinking about creating a TransformStream whose transformer
> - checks each chunk's type, and
> - may split / combine chunks.
I've thought the same thing. The question I got stuck on in the past is: do we want to define a "reasonable" chunk size to split/combine to?
Personally I think it's fine to leave that implementation-defined until it becomes a problem.
Can we remove this from the list of blockers for upload streaming, following https://github.com/whatwg/fetch/pull/1199#issuecomment-819409174?
I think so, but it would be great if @domenic, @MattiasBuelens, or @ricea could first elaborate on what BYOB stream adoption would mean for Fetch and how feasible it is to get there if we ship things as-is (i.e., without BYOB).
On a spec level, we're trying to tackle https://github.com/whatwg/streams/issues/1121 to let the spec infrastructure create readable byte streams.
The upgrade path from vanilla readable streams to readable byte streams is seamless: before, getReader({ mode: "byob" }) would throw; after the change, it would return a working BYOB reader.
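From the consumer side, that seamlessness could be probed like this (sketch only; the URL is a placeholder):

```js
// Before the upgrade, asking a non-byte stream for a BYOB reader throws a
// TypeError; after it, the same call returns a working BYOB reader.
const response = await fetch("/data.bin"); // placeholder URL
let reader;
try {
  reader = response.body.getReader({ mode: "byob" });
} catch {
  reader = response.body.getReader(); // fall back to a default reader
}
```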
> On a spec level, we're trying to tackle whatwg/streams#1121 to let the spec infrastructure create readable byte streams.
> The upgrade path from vanilla readable streams to readable byte streams is seamless: before, getReader({ mode: "byob" }) would throw; after the change, it would return a working BYOB reader.
Can that be done currently in Chrome?
I am trying to have response.body handled as a byte stream so that I can use getReader({ mode: "byob" }) with the goal of specifying byteLength, which we want to be a steady 64 KB. But I get:
TypeError: Failed to execute 'getReader' on 'ReadableStream': Cannot use a BYOB reader with a non-byte stream
> Can that be done currently in Chrome?
No. First, the specification needs to be updated, that's what this issue is about. Afterwards, Chrome can implement the spec change and make it available to its users.
> I am trying to have response.body handled as a byte stream so that I can use getReader({ mode: "byob" }) with the goal of specifying byteLength, which we want to be a steady 64 KB.
Note that a BYOB reader doesn't guarantee that it'll fill your entire 64 KB view on every read(view). That'll need a new method on ReadableStreamBYOBReader, see https://github.com/whatwg/streams/issues/1143.
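A sketch of how the 64 KB goal can still be met with the current read(view) API, assuming response comes from an earlier fetch(): keep reading into the unfilled remainder of the buffer.

```js
// read(view) may fill only part of the view, so loop until the 64 KiB buffer
// is full or the stream ends. The buffer is transferred on every read, so we
// re-acquire it via value.buffer.
const reader = response.body.getReader({ mode: "byob" });
let buffer = new ArrayBuffer(64 * 1024);
let offset = 0;

while (offset < buffer.byteLength) {
  const { value, done } = await reader.read(
    new Uint8Array(buffer, offset, buffer.byteLength - offset)
  );
  if (value) {
    buffer = value.buffer;
    offset += value.byteLength;
  }
  if (done) break;
}
// `buffer` now holds `offset` bytes of the body.
```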
Thank you @MattiasBuelens for the reply and the helpful suggestions. You just saved my day.
2022 update: Firefox shipped byte streams as of version 102, and it shipped Fetch with byte streams (which was not really intentional but accidental IMO, given that there are still no relevant tests). And thus this has been working in Firefox for months:
new Response(new Uint8Array([1,2])).body.getReader({mode: 'byob'})
> 2022 update: Firefox shipped byte streams as of version 102, and it shipped Fetch with byte streams
That's great news! It implies that it is web-compatible. We are currently evaluating the priority of implementing it in Chrome.
Given https://github.com/whatwg/fetch/issues/267#issuecomment-822559022, is the only thing that's missing here WPT tests, or do we still need a specification change?