Assertion triggered on small allocations with custom chunk size
Hi, we've run into a problem and wanted some guidance.
The Metall manager has this k_chunk_size template parameter, which defaults to 1 << 21.
If we set it to e.g. 1 << 28, Metall fails to allocate small objects (as far as I can tell, everything below 2 * sizeof(void *)). The following assertion is triggered inside Metall:
multilayer_bitset.hpp:481: std::size_t metall::kernel::multilayer_bitset::num_all_blocks(std::size_t) const: Assertion `idx < mlbs::k_num_layers_table.size()' failed.
So I'm basically just here to ask whether this is expected behaviour or a bug (an explanation of what the parameter does exactly would also be appreciated) 😅
Sorry, I forgot to mention: the allocation is done through
auto *x = manager.construct<unsigned char>("test")[sizeof(void *)]();
The error is expected due to a limitation of the current implementation.
A chunk holds multiple slots (allocation space) of the same allocation size. Metall uses a multi-layer bitset to track used/unused slots within a chunk.
The i-th element in k_num_layers_table holds the number of required bitset layers to manage 2^i slots.
The table currently supports up to 2^24 slots.
Since Metall's smallest allocation size is 8 bytes, the maximum chunk size Metall can support is 2^27 (8 × 2^24) bytes.
Is there a situation where you want to use a large chunk size?
We were basically trying to see whether there is any performance benefit to choosing a larger chunk size. Apart from that, and if you think it makes no difference, we don't have any reason to choose a larger chunk size.
Were you able to get a notable performance improvement by using a large chunk size?
Unfortunately, there is no simple relationship between the chunk size and Metall's performance. I always use the default chunk size.
No, we were not able to improve performance by increasing the chunk size.
Is supporting chunk sizes only up to 2^27 still okay for you?