
Huge memory taken for each field when exporting

LouisClt opened this issue 3 years ago · 14 comments

Hello. Using the Arrow adapter, I became aware that the memory (RAM) footprint when exporting an ORC file is very large per field. For instance, exporting a table with 10000 fields can take up to 30 GB, even if there are only 10 records. Even with 100 fields, it can take 100 MB+. The "issue" seems to come from here: https://github.com/apache/orc/blob/432a7aade9ea8d3cd705d315da21c2c859bce9ef/c%2B%2B/src/ColumnWriter.cc#L59

When we create a writer with "createWriter" (https://github.com/apache/orc/blob/432a7aade9ea8d3cd705d315da21c2c859bce9ef/c%2B%2B/src/Writer.cc#L681-L684), a stream (compressor) is created for each field. Since each one allocates a buffer of 1 * 1024 * 1024 bytes, we get at least 1 MB of additional memory per field.
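For a rough sense of scale, the arithmetic above can be sketched as follows. The helper name and the streams-per-field multiplier are assumptions for illustration only (real writers create a varying number of streams per column depending on type and indexing):

```cpp
#include <cstdint>

// Hypothetical helper (not part of the ORC API): minimum writer memory in MB
// when every stream starts with the hard-coded 1 * 1024 * 1024 byte buffer.
inline uint64_t minimumWriterFootprintMB(uint64_t fields, uint64_t streamsPerField) {
    const uint64_t bufferCapacity = 1 * 1024 * 1024; // 1 MB per stream
    return (bufferCapacity * streamsPerField * fields) >> 20; // bytes -> MB
}
// Assuming ~3 streams per field, 10000 fields already pin ~30000 MB (~30 GB),
// and 100 fields with a single stream each pin 100 MB -- matching the numbers
// reported above.
```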

Is there a reason the BufferedOutputStream initial capacity is that high? I worked around the problem by lowering it to 1 KB (this did not change performance much in my testing, but that may depend on the use case). Could a global (or static) variable be introduced to parameterize this hard-coded value? Thanks

LouisClt avatar Aug 30 '22 15:08 LouisClt

cc @wgtmac , @stiga-huang , @coderex2522

dongjoon-hyun avatar Aug 31 '22 03:08 dongjoon-hyun

@LouisClt To support the zero-copy mechanism, the BufferedOutputStream class has an internal data buffer, whose default capacity is 1 MB. This default capacity should be made configurable, but note that if the buffer capacity is set too small, the buffer may expand and trigger the memcpy function frequently.
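The trade-off can be illustrated with a toy growable-buffer sketch (this is not the actual BufferedOutputStream, just a minimal model of the doubling-plus-memcpy cost):

```cpp
#include <cstddef>
#include <cstring>
#include <vector>

// Toy model (not ORC's BufferedOutputStream): a small initial capacity saves
// idle memory, but each capacity doubling pays a memcpy of everything
// written so far.
struct GrowableBuffer {
    std::vector<char> data;
    size_t size = 0;
    size_t copies = 0; // number of growth-triggered memcpys

    explicit GrowableBuffer(size_t initialCapacity) : data(initialCapacity) {}

    void append(const char* src, size_t len) {
        if (size + len > data.size()) {
            size_t newCap = data.size();
            while (size + len > newCap) newCap *= 2;
            std::vector<char> bigger(newCap);
            std::memcpy(bigger.data(), data.data(), size); // the cost of growing
            data.swap(bigger);
            ++copies;
        }
        std::memcpy(data.data() + size, src, len);
        size += len;
    }
};
```

Starting at 1 KB and writing 1 MB in small chunks triggers ten doublings (1 KB → 1 MB), each copying everything accumulated so far; starting at 1 MB triggers none, which is presumably the rationale for the 1 MB default.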

coderex2522 avatar Aug 31 '22 16:08 coderex2522

We may replace the DataBuffer with a new Buffer implementation that has much smarter memory management, automatically growing and shrinking its size according to actual usage. This management can happen on a per-column basis.
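One way to read this proposal, sketched under assumed names (BlockBufferSketch is illustrative, not the eventual ORC class): keep a list of small fixed-size blocks allocated on demand, so growth never copies existing data, and the buffer can shrink back after a flush:

```cpp
#include <cstddef>
#include <cstring>
#include <memory>
#include <vector>

// Sketch of a block-based buffer: memory is a list of fixed-size blocks
// allocated lazily; growing appends a block (no memcpy of old data), and
// shrinking simply releases blocks.
class BlockBufferSketch {
public:
    explicit BlockBufferSketch(size_t blockSize) : blockSize_(blockSize) {}

    void write(const char* src, size_t len) {
        while (len > 0) {
            if (size_ == blocks_.size() * blockSize_) {
                blocks_.emplace_back(new char[blockSize_]); // grow by one block
            }
            size_t offset = size_ % blockSize_;
            size_t room = blockSize_ - offset;
            size_t n = len < room ? len : room;
            std::memcpy(blocks_.back().get() + offset, src, n);
            size_ += n; src += n; len -= n;
        }
    }

    void clearAndShrink() { // release memory, e.g. after a flush
        blocks_.clear();
        size_ = 0;
    }

    size_t size() const { return size_; }
    size_t capacity() const { return blocks_.size() * blockSize_; }

private:
    size_t blockSize_;
    size_t size_ = 0;
    std::vector<std::unique_ptr<char[]>> blocks_;
};
```

With this shape, a column that writes only a few bytes holds one small block instead of a 1 MB slab, while a busy column grows block by block.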

wgtmac avatar Sep 13 '22 02:09 wgtmac

Thanks everyone for your answers. I understand the possible performance issues linked with lowering the buffer size too much (in my testing it was OK in my case, though). The solution proposed by @wgtmac would be fine for me, and better than going through global variables, if it is feasible.

LouisClt avatar Sep 22 '22 14:09 LouisClt

I have created a JIRA to track the progress: https://issues.apache.org/jira/browse/ORC-1280

wgtmac avatar Sep 27 '22 02:09 wgtmac

@dongjoon-hyun @wgtmac @LouisClt I will follow up on this issue (ORC-1280) and implement much smarter memory management.

coderex2522 avatar Sep 27 '22 02:09 coderex2522

Thank you, @coderex2522 .

dongjoon-hyun avatar Sep 27 '22 23:09 dongjoon-hyun

Hello, it seems there were commits referencing this issue. Is it now fixed?

LouisClt avatar Jan 23 '23 16:01 LouisClt

> Hello, it seems there were commits referencing this issue. Is it now fixed?

@LouisClt Thanks for your follow-up.

We have implemented a block-based buffer called BlockBuffer (by @coderex2522) and used it to replace the output buffer in the CompressionStream. It decreases the memory footprint to some extent.

IMO, the next step is to use it to replace the input buffer of the CompressionStream, which takes compressionBlockSize bytes per stream.

wgtmac avatar Jan 24 '23 11:01 wgtmac

> > Hello, it seems there were commits referencing this issue. Is it now fixed?
>
> @LouisClt Thanks for your follow-up.
>
> We have implemented a block-based buffer called BlockBuffer (by @coderex2522) and used it to replace the output buffer in the CompressionStream. It decreases the memory footprint to some extent.
>
> IMO, the next step is to use it to replace the input buffer of the CompressionStream, which takes compressionBlockSize bytes per stream.

To be precise, the rawInputBuffer of every CompressionStream is fixed at the compression block size, which is 1 MB by default. A writer with many columns will suffer from a large memory footprint, and nothing can currently be done to alleviate it.
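The arithmetic behind this can be sketched as follows (the helper name is hypothetical): the input-side footprint is fixed per stream regardless of how little data is written, so it scales with the number of streams rather than with data volume.

```cpp
#include <cstdint>

// Illustrative arithmetic (hypothetical helper, not ORC API): fixed
// input-buffer footprint in MB for a given stream count and block size.
inline uint64_t fixedInputFootprintMB(uint64_t streams, uint64_t compressionBlockSize) {
    return (streams * compressionBlockSize) >> 20; // bytes -> MB
}
// At the default 1 MB block size, 10000 streams pin ~10000 MB of input
// buffers alone; a 64 KB block size would cut that to ~625 MB.
```

Until a BlockBuffer-style replacement lands, lowering the compression block size in the writer options (the ORC C++ writer exposes a setCompressionBlockSize option, if I read the API correctly) is one way to trade compression efficiency for a smaller fixed footprint.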

I have created a JIRA to track it: https://issues.apache.org/jira/browse/ORC-1365

cc @coderex2522

wgtmac avatar Feb 03 '23 07:02 wgtmac

Thanks for your reply @wgtmac, and for the implementation of the BlockBuffer. I'll wait for the replacement of the rawInputBuffer with the BlockBuffer in every compression stream, then. Do you think it will take long?

LouisClt avatar Feb 06 '23 09:02 LouisClt

Hi, @LouisClt. FYI, according to the Apache ORC release cycle, newly developed features will be delivered via v1.9.0 in September 2023 (provided they are merged to Apache ORC before then).

  • https://github.com/apache/orc/milestones

dongjoon-hyun avatar Feb 06 '23 11:02 dongjoon-hyun

Understood, and thanks for your answer!

LouisClt avatar Feb 06 '23 13:02 LouisClt

I will work on it.

luffy-zh avatar Feb 07 '23 01:02 luffy-zh