
Memory allocation issue while reading many large files

Open enp opened this issue 2 years ago • 2 comments

Hi,

I was reading https://github.com/yandex-cloud/geesefs?tab=readme-ov-file#memory-limit about the ENOMEM error for cases where read-ahead-large * process count > memory-limit.

This looks like a problematic limitation for production cases where we can't limit the process count.

Maybe it is possible to check whether the read-ahead-large buffer can actually be allocated, and skip the allocation to allow at least slow reading without read-ahead?

enp avatar Feb 02 '24 07:02 enp
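The "check if the buffer fits, otherwise fall back to slow reads" idea could be sketched roughly like this in Go (the language geesefs is written in). This is a minimal illustration under assumed names — `bufferPool`, `tryAlloc`, and `readChunk` are hypothetical and not geesefs internals:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// bufferPool tracks used bytes against a limit (analogous to --memory-limit).
// This is an illustrative sketch, not the actual geesefs implementation.
type bufferPool struct {
	limit int64
	used  int64
}

// tryAlloc reserves size bytes if that stays within the limit; it reports
// failure instead of surfacing ENOMEM, so the caller can fall back.
func (p *bufferPool) tryAlloc(size int64) bool {
	for {
		used := atomic.LoadInt64(&p.used)
		if used+size > p.limit {
			return false
		}
		if atomic.CompareAndSwapInt64(&p.used, used, used+size) {
			return true
		}
	}
}

// free returns reserved bytes to the pool.
func (p *bufferPool) free(size int64) {
	atomic.AddInt64(&p.used, -size)
}

// readChunk picks a read-ahead size: the large buffer if memory allows,
// otherwise a minimal chunk so the read proceeds slowly instead of failing.
func readChunk(p *bufferPool, readAheadLarge, minChunk int64) int64 {
	if p.tryAlloc(readAheadLarge) {
		return readAheadLarge
	}
	if p.tryAlloc(minChunk) {
		return minChunk
	}
	return 0 // still no memory at all
}

func main() {
	pool := &bufferPool{limit: 100 << 20} // 100 MiB limit
	// The first reader gets the full 80 MiB read-ahead buffer...
	fmt.Println(readChunk(pool, 80<<20, 1<<20)) // 83886080
	// ...the second no longer fits and degrades to a 1 MiB chunk.
	fmt.Println(readChunk(pool, 80<<20, 1<<20)) // 1048576
}
```

The point of the fallback is that many concurrent readers degrade to small, slow reads rather than getting a hard ENOMEM.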

Hi, I'm not sure it's easy to implement it this way, but we can think about other ways to solve this problem. For example, we could dynamically increase the memory limit when there are too many readers, or maybe just allow readers to wait for buffers to be unlocked. That is, right now readers may get ENOMEM when, at the moment of reading, the memory cache is full and all data in it is marked as required for in-flight read or write requests. It's probably possible to just make readers wait for that data to be unmarked...

vitalif avatar Feb 06 '24 09:02 vitalif
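The "wait for buffers to be unlocked" alternative can be sketched with a condition variable: when the cache is full and everything is pinned, a new reader blocks until something is released, rather than failing with ENOMEM. Again a hypothetical sketch (`waitingPool` and its methods are invented for illustration, not geesefs code):

```go
package main

import (
	"fmt"
	"sync"
)

// waitingPool blocks allocators instead of returning ENOMEM when the
// memory limit is reached. Illustrative only, not the geesefs implementation.
type waitingPool struct {
	mu    sync.Mutex
	cond  *sync.Cond
	limit int64
	used  int64
}

func newWaitingPool(limit int64) *waitingPool {
	p := &waitingPool{limit: limit}
	p.cond = sync.NewCond(&p.mu)
	return p
}

// alloc sleeps until size bytes fit under the limit, then reserves them.
func (p *waitingPool) alloc(size int64) {
	p.mu.Lock()
	for p.used+size > p.limit {
		p.cond.Wait() // woken when free() releases memory
	}
	p.used += size
	p.mu.Unlock()
}

// free releases bytes (an "unpin") and wakes all blocked readers.
func (p *waitingPool) free(size int64) {
	p.mu.Lock()
	p.used -= size
	p.mu.Unlock()
	p.cond.Broadcast()
}

func main() {
	p := newWaitingPool(10)
	p.alloc(10) // cache is now full and pinned
	done := make(chan struct{})
	go func() {
		p.alloc(5) // blocks instead of failing with ENOMEM
		close(done)
	}()
	p.free(10) // unpin: the waiting reader can proceed
	<-done
	fmt.Println("second reader unblocked")
}
```

The trade-off versus the fallback approach is latency instead of degraded throughput: readers stall under memory pressure but never see a hard error.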

Any of the solutions above really looks better than ENOMEM :)

enp avatar Feb 06 '24 11:02 enp

Try 0.43.0: https://github.com/yandex-cloud/geesefs/releases/tag/v0.43.0

vitalif avatar Mar 07 '25 16:03 vitalif