EccoTheFlintstone
I tried the aforementioned code in a loop but didn't see any leak (memory was steady at 1 GB) on a Linux box. What I saw was a heavy memory...
Hum, indeed it seems to be related to rayon: https://stackoverflow.com/questions/59205184/how-can-i-change-the-number-of-threads-rayon-uses

```rust
fn main() {
    rayon::ThreadPoolBuilder::new().num_threads(4).build_global().unwrap();
    std::thread::sleep(std::time::Duration::from_secs(100));
}
```

After some tests, it appears rayon consumes about 65 MB / thread...
Ok, I think I got an explanation: this is due to libc.
https://www.faircom.com/insights/take-control-of-linux-core-file-size
https://ewirch.github.io/2013/11/linux-process-memory-layout.html
For each thread, libc creates a new heap of 64 MB. I tried compiling the example...
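A minimal sketch of one way to reproduce this without rayon at all (this is not necessarily the example referred to above, just an illustration): spawn a few plain `std::thread` threads that each allocate something, then watch the process's memory while they sleep. On glibc, each thread that allocates should get its own arena, which shows up as roughly 64 MB of additional mappings per thread.

```rust
use std::{thread, time::Duration};

fn main() {
    let handles: Vec<_> = (0..4)
        .map(|_| {
            thread::spawn(|| {
                // A small allocation should be enough for glibc to assign
                // this thread its own malloc arena.
                let v = vec![0u8; 1024];
                thread::sleep(Duration::from_secs(100));
                drop(v);
            })
        })
        .collect();

    // Inspect VmRSS / VmSize in /proc/<pid>/status while the threads sleep.
    for h in handles {
        h.join().unwrap();
    }
}
```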
As for the initial problem (memleak), I take back what I said: I DO get an increase in VmRSS and VmData in /proc/<pid>/status over time.
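For reference, here is a small sketch of how that growth can be watched from inside the process itself (field names are the standard ones from /proc/self/status on Linux):

```rust
use std::{fs, thread, time::Duration};

fn main() {
    loop {
        // Print the resident and data-segment sizes every 10 seconds.
        let status = fs::read_to_string("/proc/self/status").expect("read status");
        for line in status.lines() {
            if line.starts_with("VmRSS") || line.starts_with("VmData") {
                println!("{line}");
            }
        }
        thread::sleep(Duration::from_secs(10));
    }
}
```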
> I wonder if it can be changed at runtime? That way we could try to limit this nonsensical memory usage.

You can try using the MALLOC_ARENA_MAX env variable, but it...
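Since the env variable has to be set before the process starts, the closest thing to a runtime knob is glibc's `mallopt`. A minimal sketch, assuming the `libc` crate on a glibc target (where it exposes `mallopt` and `M_ARENA_MAX`), called early in `main` before any threads are spawned:

```rust
fn limit_malloc_arenas(max_arenas: i32) {
    #[cfg(target_env = "gnu")]
    unsafe {
        // Roughly equivalent to launching the process with
        // MALLOC_ARENA_MAX=<n>; must run before threads start allocating.
        libc::mallopt(libc::M_ARENA_MAX, max_arenas);
    }
    #[cfg(not(target_env = "gnu"))]
    let _ = max_arenas; // no-op on non-glibc targets
}

fn main() {
    limit_malloc_arenas(2);
    // ... spawn rayon / std threads afterwards ...
}
```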
I get the same behaviour with jemalloc (maybe I missed something), but it seems to be working with tcmalloc (apart from the memleak...). Compiling statically with musl is also...
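In case it helps reproduce, this is the kind of setup I mean by "with jemalloc": swapping the global allocator via an allocator crate. The `tikv-jemallocator` crate name below is just one common choice, not necessarily what was used here.

```rust
use tikv_jemallocator::Jemalloc;

// All heap allocations now go through jemalloc instead of glibc malloc,
// bypassing the per-thread arena behaviour described above.
#[global_allocator]
static GLOBAL: Jemalloc = Jemalloc;

fn main() {
    let v: Vec<u8> = vec![0; 1 << 20];
    println!("allocated {} bytes via jemalloc", v.len());
}
```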
Regardless, concerning rayon, you should maybe use a custom ThreadPool (not the global rayon one) and add a config option to specify how many threads one wants when using the...
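Something along these lines, as a sketch (the `num_threads` parameter stands in for whatever config option gets added; the workload is just an example):

```rust
use rayon::prelude::*;

fn sum_squares(data: &[u64], num_threads: usize) -> u64 {
    // A dedicated pool sized from configuration, instead of the global one.
    let pool = rayon::ThreadPoolBuilder::new()
        .num_threads(num_threads)
        .build()
        .expect("failed to build rayon pool");

    // Work submitted through `install` runs only on this pool's threads.
    pool.install(|| data.par_iter().map(|x| x * x).sum())
}
```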
Did you find the root cause of the memleak? There are AFAIK 2 problems here:
- huge memory consumption due to libc threads (in rayon)
- potential memleak if...
Any news on this fix?
Hey, any news on this? It would be a great feature indeed.