Observed excessive memory usage when enabling storage.metrics configuration
Hi team,
Recently we observed excessive memory usage after enabling the storage.metrics configuration, and for some of the reported data we are not sure what it really means. I have checked the official docs but still cannot figure it out. Can you please help us with that? Examples:
// storage.backlog
"chunks": {
    "total": 33983,
    "up": 33982,
    "down": 1,
    "busy": 2047,
    "busy_size": "3.9G"
}
// fluentbit_input
"status": {
    "overlimit": true,
    "mem_size": "4.8G",
    "mem_limit": "190.7M"
}
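For context, these numbers are read from Fluent Bit's built-in monitoring endpoint (the storage data shows up under `/api/v1/storage` when `storage.metrics` is on). Roughly, the SERVICE settings we have enabled look like the sketch below; the listen address, port, and path are illustrative defaults, not our exact production values:

```ini
[SERVICE]
    # built-in HTTP monitoring server; storage metrics are exposed under /api/v1/storage
    HTTP_Server      On
    HTTP_Listen      0.0.0.0
    HTTP_Port        2020
    # filesystem buffering location (illustrative path) and the storage metrics toggle
    storage.path     /var/log/flb-storage/
    storage.metrics  On
```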
The ingestion rate is tiny (~33kb/s) and we have already set `Mem_Buf_Limit` to 200MB and `storage.backlog.mem_limit` to 100MB, so we are confused about the following:

- Why do the metrics show 33,983 chunks loaded into memory, with 2,047 of them currently busy and about 3.9G of memory in use, when the docs say `storage.max_chunks_up` defaults to only 128?
- Why can `mem_size` reach 4.8G while `Mem_Buf_Limit` is already set to 200M? What does `mem_size` exactly mean?
- How can we ensure Fluent Bit uses only the allocated 200MB when `storage.type` is set to filesystem? (A configuration sketch follows this list.)
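For the last point, here is a rough sketch of the kind of configuration we are working from; the tail input, tag, and file paths are illustrative examples, not a verbatim copy of our config:

```ini
[SERVICE]
    storage.path               /var/log/flb-storage/
    # cap on chunks kept "up" in memory; the docs list the default as 128
    storage.max_chunks_up      128
    # memory limit for backlog chunks loaded back from disk
    storage.backlog.mem_limit  100M

[INPUT]
    Name           tail
    Path           /var/log/app/*.log
    Tag            app.*
    # buffer this input's chunks on the filesystem instead of only in memory
    storage.type   filesystem
    # in-memory buffer limit for this input
    Mem_Buf_Limit  200M
```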
It would be great if the official docs could be updated with a more detailed explanation of these terms 😊.
Thanks
Are the backlog chunks stored in the files under the configured storage.path? And which version of fluent-bit are you using?
This issue is stale because it has been open 90 days with no activity. Remove stale label or comment or this will be closed in 5 days. Maintainers can add the exempt-stale label.
This issue was closed because it has been stalled for 5 days with no activity.