Undocumented capacity limit for HeaderMap

Open jongiddy opened this issue 2 years ago • 3 comments

HeaderMap has a capacity limit of 24576 headers. Adding any more headers triggers a panic when reserving more capacity.

I can't see any way to prevent the panic when handling incoming headers, except to hard-wire a check for this value into calling code.

It would be useful to expose this value as a public constant.

At the very least it should be documented. Currently HeaderMap::reserve says "Panics if the new allocation size overflows usize." This should be "Panics if the new allocation size is greater than 24576."
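Until the limit is exposed, the "hard-wire a check" workaround looks roughly like this. A minimal sketch: the constant `MAX_HEADER_MAP_SIZE` and the helper `can_accept_more_headers` are hypothetical names, and the value 24576 is taken from this issue report, not from any public constant in the `http` crate.

```rust
// Hypothetical guard: `http` does not expose this limit publicly,
// so the value 24576 is hard-wired from the issue report.
const MAX_HEADER_MAP_SIZE: usize = 24576;

/// Returns true if `additional` more headers can be stored without
/// hitting the undocumented capacity limit (and without overflow).
fn can_accept_more_headers(current_len: usize, additional: usize) -> bool {
    current_len
        .checked_add(additional)
        .map_or(false, |total| total <= MAX_HEADER_MAP_SIZE)
}

fn main() {
    // Within the limit: fine to insert.
    assert!(can_accept_more_headers(24_000, 500));
    // One past the limit: caller should reject the request instead of panicking.
    assert!(!can_accept_more_headers(24_576, 1));
    println!("guard checks passed");
}
```

Calling code would run this check on each batch of incoming headers and reject the request (e.g. with a 431 response) rather than let `reserve` panic.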

jongiddy avatar Apr 27 '23 08:04 jongiddy

Good catch, thanks for reporting this! I'd propose a few improvements:

  • Add a section to the struct docs about the max capacity. Since there are a few methods that would allocate more space, seems better to have a single place to refer to the limit.
  • Update reserve to say "panics if the new size is larger than the internal max. See struct docs for more."
  • Optional: Add a try_reserve method. This changes the task from a docs fix to a feature request, so perhaps that should be a separate ticket if anyone wants this method.
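For the optional third bullet, a `try_reserve` could look like the sketch below. This is not the `http` crate's implementation; it is a toy stand-in (`TinyHeaderMap`, `MaxSizeReached` are illustrative names) showing the proposed fallible shape: return `Err` instead of panicking when the requested size exceeds the internal maximum.

```rust
// Illustrative limit, taken from the issue report.
const MAX_SIZE: usize = 24_576;

/// Error type a fallible reserve could return instead of panicking.
#[derive(Debug, PartialEq)]
struct MaxSizeReached;

/// Toy stand-in for HeaderMap: tracks only len and capacity.
struct TinyHeaderMap {
    len: usize,
    capacity: usize,
}

impl TinyHeaderMap {
    fn new() -> Self {
        Self { len: 0, capacity: 0 }
    }

    /// Fallible counterpart to `reserve`: grows capacity for `additional`
    /// more entries, or returns Err if that would exceed the maximum.
    fn try_reserve(&mut self, additional: usize) -> Result<(), MaxSizeReached> {
        let needed = self.len.checked_add(additional).ok_or(MaxSizeReached)?;
        if needed > MAX_SIZE {
            return Err(MaxSizeReached);
        }
        if needed > self.capacity {
            self.capacity = needed;
        }
        Ok(())
    }
}

fn main() {
    let mut map = TinyHeaderMap::new();
    assert_eq!(map.try_reserve(100), Ok(()));
    assert_eq!(map.try_reserve(MAX_SIZE + 1), Err(MaxSizeReached));
    println!("try_reserve sketch ok");
}
```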

seanmonstar avatar Apr 28 '23 14:04 seanmonstar

I believe the correct way to fix this is to introduce a fallible try_append method. Relying on reservation methods and documentation alone is too fragile, and the handling is easy to miss.

It's clearly a vulnerability right now, because all usages like this one will cause a panic
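The proposed fallible append could take roughly this shape. Again a toy sketch, not the crate's code: `TinyHeaderMap` here is a plain list of name/value pairs, and `try_append` refuses the insert once the limit is reached, so callers handling untrusted input get a `Result` instead of a panic.

```rust
// Illustrative limit, taken from the issue report.
const MAX_SIZE: usize = 24_576;

#[derive(Debug, PartialEq)]
struct MaxSizeReached;

/// Toy stand-in for HeaderMap: a flat list of name/value pairs,
/// which (like the real map's `append`) permits duplicate names.
struct TinyHeaderMap {
    entries: Vec<(String, String)>,
}

impl TinyHeaderMap {
    fn new() -> Self {
        Self { entries: Vec::new() }
    }

    /// Fallible counterpart to `append`: returns Err instead of
    /// panicking once the capacity limit would be exceeded.
    fn try_append(&mut self, name: &str, value: &str) -> Result<(), MaxSizeReached> {
        if self.entries.len() >= MAX_SIZE {
            return Err(MaxSizeReached);
        }
        self.entries.push((name.to_string(), value.to_string()));
        Ok(())
    }
}

fn main() {
    let mut map = TinyHeaderMap::new();
    assert_eq!(map.try_append("x-demo", "1"), Ok(()));
    assert_eq!(map.entries.len(), 1);
    println!("try_append sketch ok");
}
```

A server parsing untrusted request headers would match on the `Err` and reply with an error status instead of crashing the worker.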

cc @seanmonstar

DDtKey avatar Jul 18 '23 11:07 DDtKey

Yea, I think the three bullet points I listed above would be a great addition. Want to submit a PR?

seanmonstar avatar Jul 18 '23 13:07 seanmonstar