Memory leak & garbage collection not working
Context
- MapFish print version: 3.28.2
- Java version: 11.0.21
- OS: Windows & Debian
Describe the bug
- Description: When starting Mapfish, it exhibits memory-leak behavior. Initially it consumes around 150 MB of RAM, but this usage gradually increases over time; after 5 minutes, for instance, it reaches 200 MB.
- Production scenario: In a production environment we use Mapfish Print to generate PDFs. When a user triggers PDF generation, Mapfish Print completes the task successfully, but it fails to release all the allocated memory afterward. In addition, thousands of threads are created for the generation, and they are not removed afterwards.
How to reproduce
I can't give the configuration files.
Actual results
I can't submit the logs.
Expected results
I expect Mapfish to release all the RAM after use and not to consume more and more RAM over time.
I don't see that it uses that much space. To go further, it would be good to:
- Watch over a longer period.
- Observe the garbage collector activity (jvm_gc_collection_seconds_count on our monitoring).
- Use a low heap limit (-Xmx) to force garbage collection.
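Outside a monitoring stack, GC activity can also be observed with the JVM's own unified GC logging (available since Java 9, so it applies to the Java 11 runtime here). The heap size and log file name below are only examples, and the jar path is a placeholder, not the actual Mapfish Print artifact name:

```shell
# Start the JVM with a deliberately small heap and GC logging enabled,
# so collections happen often enough to be visible in gc.log.
# Sizes and paths are examples only.
java -Xmx256m \
     -Xlog:gc*:file=gc.log:time,uptime,level,tags \
     -jar <mapfish-print-jar>
```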
- After a longer period the RAM continues to climb, and over time it climbs faster.
- Garbage collector activity: I don't know how to monitor it with IntelliJ IDEA.
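As an alternative to profiling in IDEA, the JDK's own management beans expose per-collector counts and times. This is a plain-JDK sketch, not Mapfish-specific; printing it periodically from inside the process (or polling the same beans over JMX) shows whether collections are actually happening:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcStats {
    public static void main(String[] args) {
        // One bean per collector, e.g. "G1 Young Generation" / "G1 Old Generation"
        // on the default Java 11 collector.
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println(gc.getName()
                + " count=" + gc.getCollectionCount()
                + " timeMs=" + gc.getCollectionTime());
        }
    }
}
```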
- As you can see in the capture below, the first forced GC does the job; then, after some calls, I did nothing and the threads stay alive.
- Here I call the service, then run a garbage collection, and as you can see there is a +50 MB difference.
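The before/after heap comparison can also be done in code rather than in the profiler. A minimal plain-JDK sketch; the 10 MB ballast array is a stand-in for a real print request, and `System.gc()` is only a hint the JVM may ignore:

```java
public class HeapDiff {
    // Used heap in bytes after requesting a collection.
    static long usedHeap() {
        System.gc(); // a hint only; the JVM may ignore it
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }

    public static void main(String[] args) {
        long before = usedHeap();
        // Stand-in for a print job; in the real scenario this would be an
        // HTTP request to the print service.
        byte[] ballast = new byte[10 * 1024 * 1024];
        long after = usedHeap();
        System.out.printf("diff = %.1f MB (ballast %d bytes still referenced)%n",
            (after - before) / (1024.0 * 1024.0), ballast.length);
    }
}
```

A positive diff here is expected while `ballast` is still referenced; the leak question is whether the diff persists once nothing references the job's data anymore.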
Is there something in Mapfish that keeps the data in case there is another print on the same layers?
When Mapfish-print is not in use, it gains about +1000 live objects per second (~50,000 bytes/sec).
I have been running Mapfish-print for 30 minutes now and it is doing this in a loop:
@sbrunner Does Mapfish-print store the data from previous print in case the same layers is called again ?
No, we don't directly store anything between runs. For the thread count I don't know; @sebr72, do you have an idea? For the ramp-up we observe the same thing; it needs more in-depth investigation.
@LouisBoyaval I am working on the main/master branch. I noticed that the threads are indeed not deallocated immediately. They are managed by thread pools, which are responsible for deciding when the threads are no longer required. Most likely it is the same in your version.
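That pool behavior can be reproduced with a plain `ThreadPoolExecutor`; whether Mapfish's pools are configured exactly like this is an assumption, but the JDK mechanics are the same: idle core threads live forever unless `allowCoreThreadTimeOut(true)` is set, so "alive" threads in a heap dump are not necessarily leaked:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolDemo {
    public static void main(String[] args) throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
            4, 4, 500, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<>());
        for (int i = 0; i < 4; i++) pool.execute(() -> {});

        Thread.sleep(1500); // well past keepAliveTime
        // Core threads are retained by default even though they are idle.
        System.out.println("idle, default config: " + pool.getPoolSize());

        pool.allowCoreThreadTimeOut(true); // let idle core threads terminate
        Thread.sleep(1500);
        System.out.println("idle, with timeout:  " + pool.getPoolSize());
        pool.shutdown();
    }
}
```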
@sbrunner and @sebr72 Thanks for your responses! I will investigate the excessive RAM usage.
@LouisBoyaval We believe the memory leaks are fixed starting from 3.31.5. Please confirm when you have a moment.