WebDAV door fails on large directory listing: Requested array size exceeds VM limit
Dear dCache developers,
We use our dCache to back up an OpenStack Swift instance. There are some pretty large buckets in there, with up to 13 million objects. Unfortunately, these map to dCache as files in a single directory, and this breaks our WebDAV door:
Mar 2 14:40:40 pike3 dcache@webdav2880-pike3Domain: java.lang.OutOfMemoryError: Requested array size exceeds VM limit
Mar 2 14:40:40 pike3 dcache@webdav2880-pike3Domain: Dumping heap to /var/log/webdav2880-pike3Domain-oom.hprof ...
Mar 2 14:40:40 pike3 dcache@webdav2880-pike3Domain: Unable to create /var/log/webdav2880-pike3Domain-oom.hprof: File exists
Mar 2 14:40:40 pike3 dcache@webdav2880-pike3Domain: Terminating due to java.lang.OutOfMemoryError: Requested array size exceeds VM limit
Mar 2 14:40:51 pike3 dcache@webdav2880-pike3Domain: OpenJDK 64-Bit Server VM warning: Max heap size too large for Compressed Oops
I found this article that explains it: https://plumbr.io/outofmemoryerror/requested-array-size-exceeds-vm-limit
Unfortunately it seems we can't solve this by assigning more heap space: we tried 180 GB and it still failed. The hard limit appears to be the number of elements in a single array, not the amount of allocated memory.
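For scale: a back-of-the-envelope estimate suggests why no amount of heap helps. If the door buffers the whole listing into one array, 13 million entries cross the JVM's per-array length cap (slightly under Integer.MAX_VALUE, i.e. ~2^31 - 1 elements) long before 180 GB of heap is exhausted. A minimal sketch, where the 200-byte average per entry is an assumption, not a measured value:

```java
public class ArrayLimitEstimate {
    public static void main(String[] args) {
        long entries = 13_000_000L;   // objects in the bucket
        long bytesPerEntry = 200L;    // assumed average size of one listing entry

        long totalBytes = entries * bytesPerEntry;

        // HotSpot caps any single array at slightly under Integer.MAX_VALUE
        // elements, independent of -Xmx. A listing buffered into one
        // byte[]/char[] therefore fails once it crosses this line.
        System.out.println("estimated reply size: " + totalBytes + " bytes");
        System.out.println("exceeds max array length: " + (totalBytes > Integer.MAX_VALUE));
    }
}
```

Under that assumption the buffer needs ~2.6 billion bytes, past the ~2.1 billion cap, which would match what we see: the element-count limit trips regardless of heap size.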
Is there an easy way to fix this, besides splitting the files into subdirectories?
I would understand if you'd say "dCache is not built for directories with 13 million items".
Kind regards, Onno
Hi Onno,
Could you somehow share the file webdav2880-pike3Domain-oom.hprof?
Cheers, Paul
Hi Paul,
I've sent you a share link. The file is ~20GB.
Cheers, Onno