h2: Currently not honoring request limits
The following test case passes, but it should fail. The same request over HTTP/1 definitely fails. I'm not sure what the correct test case should look like; my assumption is that stream 1 should be reset and the connection preserved.
varnishtest "h2 req limits"

server s1 {
	rxreq
	expect req.url ~ "(/0123456789){30}"
	expect req.http.foo ~ "bar(, 0123456789){1300}"
	txresp
} -start

varnish v1 -cliok "param.set feature +http2"
varnish v1 -cliok "param.set http_req_size 256b"
varnish v1 -cliok "param.set http_req_hdr_len 40b"
varnish v1 -vcl+backend { } -start

client c1 {
	stream 1 {
		txreq -url "${string,repeat,30,/0123456789}" \
		    -hdr foo {bar${string,repeat,1300,", 0123456789"}}
		rxresp
		expect resp.status == 200
	} -run
} -run
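For what it's worth, here is a sketch of how the corrected expectation might look, under my assumption that the offending stream is reset while the connection survives. The `rxrst`/`rst.err` stream commands follow vtc(7), but REFUSED_STREAM is only a guess at the appropriate error code, and server s1 would need adjusting as well, since the oversized request would no longer reach it:

```
client c1 {
	stream 1 {
		txreq -url "${string,repeat,30,/0123456789}" \
		    -hdr foo {bar${string,repeat,1300,", 0123456789"}}
		# assumption: the stream is reset instead of answered
		rxrst
		# assumption: REFUSED_STREAM is the right error code
		expect rst.err == REFUSED_STREAM
	} -run

	# the session must be preserved: a well-behaved stream still succeeds
	stream 3 {
		txreq
		rxresp
		expect resp.status == 200
	} -run
} -run
```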
Regarding the http_req_size parameter: it is currently not taken into account for h2. Instead, the limit checked is h2_max_frame_size + 9 (the HTTP/2 frame header size), which defaults to 16k (the minimum the protocol allows).
We can't predict the length of headers until we decode them: they may be indexed in the session's HPACK dynamic table or compressed with Huffman coding. When I registered this issue I checked thoroughly that our HPACK decoder does not panic when it runs out of workspace.
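To illustrate why the wire length says little about the decoded length: varnishtest can emit literal header fields with Huffman-coded values directly, so a header whose decoded value far exceeds http_req_hdr_len arrives in a much smaller HEADERS frame. A hedged fragment, assuming vtc(7)'s `-litHdr` stream syntax:

```
stream 1 {
	# the Huffman-coded wire form of the value is shorter than its
	# decoded form, so limits can only be enforced after HPACK decoding
	txreq -litHdr not plain foo huf {bar${string,repeat,1300,", 0123456789"}}
	rxresp
} -run
```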
Ideally, we should successfully decode an HPACK block even when the number of headers, the header lengths, or the overall request size exceed the configured limits. Failing to decode an HPACK block forces us to fail the entire h2 session, so what we should strive for is failing the offending stream and nothing else.
How should this stream fail? I did not research it last year, but I suspect the stream should simply be refused, presumably with an RST_STREAM frame. RFC 7540 hopefully has an answer.