Undocumented capacity limit for HeaderMap
HeaderMap has a capacity limit of 24576 headers. Adding any more headers triggers a panic when reserving more capacity.
I can't see any way to prevent the panic when handling incoming headers, except to hard-wire a check for this value into calling code.
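For illustration, such a hard-wired guard might look like this. Note that the constant value is taken from the observed limit above; the name `MAX_HEADER_MAP_CAPACITY` is my own and is not exposed by the crate:

```rust
// Hypothetical constant mirroring the observed internal limit;
// the `http` crate does not currently expose this value publicly.
const MAX_HEADER_MAP_CAPACITY: usize = 24_576;

/// Returns true if one more header can be appended without
/// risking the capacity panic.
fn can_append(current_len: usize) -> bool {
    current_len < MAX_HEADER_MAP_CAPACITY
}

fn main() {
    assert!(can_append(0));
    assert!(can_append(24_575));
    assert!(!can_append(24_576));
    println!("guard ok");
}
```

This works, but it silently breaks if the internal limit ever changes, which is why exposing the constant (or a fallible API) would be preferable.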
It would be useful to expose this value as a public constant.
At the very least it should be documented. Currently HeaderMap::reserve says "Panics if the new allocation size overflows usize." This should be "Panics if the new allocation size is greater than 24576."
Good catch, thanks for reporting this! I'd propose a few improvements:
- Add a section to the struct docs about the max capacity. Since there are a few methods that would allocate more space, seems better to have a single place to refer to the limit.
- Update `reserve` to say "panics if the new size is larger than the internal max. See struct docs for more."
- Optional: Add a `try_reserve` method. This changes the task from a docs fix to a feature request, so perhaps that should be a separate ticket if anyone wants this method.
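To sketch what such a `try_reserve` could look like, here is a rough standalone model. The `HeaderMapLike` struct, the `CapacityError` type, and the signature are all speculative, not the crate's actual API:

```rust
// Speculative error type for a fallible reserve.
#[derive(Debug, PartialEq)]
struct CapacityError;

// Hypothetical constant mirroring the observed internal limit.
const MAX_CAPACITY: usize = 24_576;

// Minimal stand-in for HeaderMap, tracking only the entry count.
struct HeaderMapLike {
    len: usize,
}

impl HeaderMapLike {
    /// Returns Err instead of panicking when the requested
    /// capacity would exceed the internal maximum.
    fn try_reserve(&mut self, additional: usize) -> Result<(), CapacityError> {
        match self.len.checked_add(additional) {
            Some(new_len) if new_len <= MAX_CAPACITY => {
                // A real implementation would grow the table here.
                Ok(())
            }
            _ => Err(CapacityError),
        }
    }
}

fn main() {
    let mut map = HeaderMapLike { len: 24_570 };
    assert_eq!(map.try_reserve(6), Ok(()));
    assert_eq!(map.try_reserve(7), Err(CapacityError));
}
```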
I believe the correct way to fix this is to introduce a fallible `try_append` method.
Relying on reservation methods and documentation alone is too fragile, and the handling is easy to miss.
It's clearly a vulnerability right now, because any usage like this one will cause a panic.
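As a rough sketch of the proposed `try_append`, here is a simplified standalone model. The `Headers` type, the `MaxSizeReached` error name, and the method signature are illustrative assumptions, not the crate's existing API:

```rust
// Speculative error type: append refused because the map is
// already at its internal maximum size.
#[derive(Debug, PartialEq)]
struct MaxSizeReached;

// Hypothetical constant mirroring the observed internal limit.
const MAX_CAPACITY: usize = 24_576;

// Minimal stand-in for HeaderMap, backed by a Vec for illustration.
struct Headers(Vec<(String, String)>);

impl Headers {
    /// Appends a header, returning Err instead of panicking
    /// once the internal limit is reached.
    fn try_append(&mut self, name: &str, value: &str) -> Result<(), MaxSizeReached> {
        if self.0.len() >= MAX_CAPACITY {
            return Err(MaxSizeReached);
        }
        self.0.push((name.to_string(), value.to_string()));
        Ok(())
    }
}

fn main() {
    let mut headers = Headers(Vec::new());
    assert_eq!(headers.try_append("x-example", "1"), Ok(()));
    assert_eq!(headers.0.len(), 1);
}
```

With this shape, code handling untrusted incoming headers can propagate the error instead of crashing the process.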
cc @seanmonstar
Yea, I think the three bullet points I listed above would be a great addition. Want to submit a PR?