
WebDAV door large dir listing: Requested array size exceeds VM limit

onnozweers opened this issue 3 years ago · 2 comments

Dear dCache developers,

We use our dCache to back up an OpenStack Swift instance. There are some pretty large buckets in there, with up to 13 million objects. Unfortunately, each bucket maps to a single dCache directory containing that many files, and this breaks our WebDAV door:

Mar  2 14:40:40 pike3 dcache@webdav2880-pike3Domain: java.lang.OutOfMemoryError: Requested array size exceeds VM limit
Mar  2 14:40:40 pike3 dcache@webdav2880-pike3Domain: Dumping heap to /var/log/webdav2880-pike3Domain-oom.hprof ...
Mar  2 14:40:40 pike3 dcache@webdav2880-pike3Domain: Unable to create /var/log/webdav2880-pike3Domain-oom.hprof: File exists
Mar  2 14:40:40 pike3 dcache@webdav2880-pike3Domain: Terminating due to java.lang.OutOfMemoryError: Requested array size exceeds VM limit
Mar  2 14:40:51 pike3 dcache@webdav2880-pike3Domain: OpenJDK 64-Bit Server VM warning: Max heap size too large for Compressed Oops

I found this article that explains it: https://plumbr.io/outofmemoryerror/requested-array-size-exceeds-vm-limit

Unfortunately, it seems we can't solve this by assigning more heap space: we tried 180 GB of heap and it still failed. The hard limit appears to be the number of elements in an array, not the amount of allocated memory.
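(For illustration, the cap can be reproduced independently of `-Xmx`. This is a minimal sketch, not dCache code; the exact maximum array length is VM-specific, slightly below `Integer.MAX_VALUE` on HotSpot, so the request below is rejected up front rather than failing for lack of heap.)

```java
public class ArrayLimitDemo {
    public static void main(String[] args) {
        try {
            // HotSpot caps array length slightly below Integer.MAX_VALUE;
            // asking for Integer.MAX_VALUE elements fails regardless of
            // how much heap (-Xmx) the JVM was given.
            int[] entries = new int[Integer.MAX_VALUE];
            System.out.println("allocated " + entries.length);
        } catch (OutOfMemoryError e) {
            // Typically reports "Requested array size exceeds VM limit"
            // (or "Java heap space" on VMs with a smaller cap).
            System.out.println("OutOfMemoryError: " + e.getMessage());
        }
    }
}
```

So if the WebDAV door tries to materialise all 13 million directory entries into one array-backed structure, no heap setting will save it.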

Is there an easy way to fix this, besides putting stuff in subdirectories?

I would understand if you'd say "dCache is not built for directories with 13 million items".

Kind regards, Onno

onnozweers avatar Mar 02 '22 14:03 onnozweers

Hi Onno,

Could you somehow share the file webdav2880-pike3Domain-oom.hprof?

Cheers, Paul

paulmillar avatar Mar 02 '22 15:03 paulmillar

Hi Paul,

I've sent you a share link. The file is ~20GB.

Cheers, Onno

onnozweers avatar Mar 02 '22 16:03 onnozweers