Split private-malloc into O(nbigallocs) and O(mem) cases
As of commit ab4d7b07, we have a new __private_malloc() implementation which never does mmap(), thanks to a 'large-enough' (1GB) up-front MAP_NORESERVE area created at the same time as the pageindex.
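Roughly, the current arrangement amounts to something like the following. This is a sketch only: the bump-pointer policy, the lack of freeing and the helper names are illustrative simplifications, not the real code.

    /* Sketch: an up-front MAP_NORESERVE arena, created alongside the
     * pageindex, from which private mallocs are carved without ever
     * calling mmap() again. */
    #include <sys/mman.h>
    #include <stddef.h>
    #include <stdlib.h>

    #define PRIVATE_ARENA_SIZE (1ul << 30)   /* the 'large-enough' 1GB area */

    static char *private_arena_base;
    static size_t private_arena_used;

    /* called once, at the same time as the pageindex is set up */
    static void init_private_arena(void)
    {
        private_arena_base = mmap(NULL, PRIVATE_ARENA_SIZE,
            PROT_READ|PROT_WRITE,
            MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0);
        if (private_arena_base == MAP_FAILED) abort();
    }

    /* never does mmap(), so it cannot reenter the mapping machinery */
    void *__private_malloc(size_t size)
    {
        size_t aligned = (size + 15ul) & ~15ul;   /* 16-byte alignment */
        if (private_arena_used + aligned > PRIVATE_ARENA_SIZE) return NULL;
        void *p = private_arena_base + private_arena_used;
        private_arena_used += aligned;
        return p;
    }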
However, a 1GB area is not really sensible. It is simultaneously too large and not large enough. For a really large workload, we might exhaust it (e.g. a 128GB heap with 16-byte alignment would need 1GB of bitmap). Conversely, we only really need the no-mmap guarantee some of the time, to prevent reentrancies that we can't (or refuse to) deal with.
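The sizing claim checks out as back-of-envelope arithmetic (nothing here is taken from the codebase):

    /* 128GB heap, one bitmap bit per 16-byte granule => 1GB of bitmap */
    #include <stdio.h>

    int main(void)
    {
        unsigned long long heap_bytes = 128ull << 30;   /* 128GB */
        unsigned long long granule    = 16;             /* 16-byte alignment */
        unsigned long long bitmap_bytes = heap_bytes / granule / 8;
        printf("bitmap: %llu bytes (%llu GB)\n",
            bitmap_bytes, bitmap_bytes >> 30);          /* prints 1 GB */
        return 0;
    }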
Ideally we would split private mallocs into two cases: those that are O(nbigallocs), i.e. roughly bounded by the number of big allocations, and those that are O(mem), i.e. bounded instead by the amount of memory in use. Bitmaps are in the latter category, metavectors in the former. The MAP_NORESERVE approach is fine for O(nbigallocs), and we could bound the amount of memory needed to much less than 1GB (since we have at most 32k bigallocs). One risk is that this might bring back bad reentrancies, e.g. would a malloc hook ever need to create a bitmap?
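A rough sketch of what the split could look like. The per-call-site kind flag, the per-bigalloc metadata budget and the arena size are assumptions for illustration, not anything decided:

    #include <sys/mman.h>
    #include <stddef.h>

    #define BIGALLOC_MAX 32768                  /* at most 32k bigallocs */
    #define PER_BIGALLOC_METADATA_BOUND 1024    /* assumed bytes of metadata per bigalloc */
    #define SMALL_ARENA_SIZE (BIGALLOC_MAX * PER_BIGALLOC_METADATA_BOUND)  /* 32MB, far below 1GB */

    enum private_malloc_kind { PRIVATE_O_NBIGALLOCS, PRIVATE_O_MEM };

    /* stands in for a much smaller up-front MAP_NORESERVE arena */
    static char small_arena[SMALL_ARENA_SIZE];
    static size_t small_arena_used;

    static void *small_arena_alloc(size_t size)
    {
        size_t aligned = (size + 15ul) & ~15ul;
        if (small_arena_used + aligned > SMALL_ARENA_SIZE) return NULL;
        void *p = &small_arena[small_arena_used];
        small_arena_used += aligned;
        return p;
    }

    void *__private_malloc_kind(size_t size, enum private_malloc_kind kind)
    {
        if (kind == PRIVATE_O_NBIGALLOCS)
        {
            /* metavectors, mapping_sequences, ...: bounded by nbigallocs,
             * served from the small arena, never calls mmap() */
            return small_arena_alloc(size);
        }
        /* bitmaps and other O(mem) metadata: mmap directly, accepting the
         * possible reentrancy into the mapping machinery */
        void *p = mmap(NULL, size, PROT_READ|PROT_WRITE,
            MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
        return (p == MAP_FAILED) ? NULL : p;
    }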
The most obvious reentrancy I see is bitmap malloc -> mmap -> malloc of a mapping_sequence. But this would work fine, because the mapping_sequence is O(nbigallocs). Again this is a stratification property: we always reenter ourselves at a strictly lower stratum -- we have two strata, under which the kernel page allocator is the bedrock.
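One way to make that stratification invariant checkable, as a sketch only: the stratum numbering and the guard are inventions for illustration, not existing machinery.

    #include <assert.h>

    enum stratum { STRATUM_KERNEL = 0, STRATUM_O_NBIGALLOCS = 1, STRATUM_O_MEM = 2 };

    static __thread int current_stratum = 3;   /* "above" all private strata */

    /* any reentrant private allocation must land at a strictly lower stratum */
    static int enter_stratum(enum stratum s)
    {
        int saved = current_stratum;
        assert((int) s < saved);   /* otherwise it's a bad reentrancy */
        current_stratum = (int) s;
        return saved;
    }

    static void exit_stratum(int saved) { current_stratum = saved; }

    /* e.g. bitmap malloc enters stratum 2 (O(mem)); its mmap reenters via the
     * mapping_sequence malloc at stratum 1 (O(nbigallocs)): strictly
     * descending, so the assert holds. */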