
page compaction after encoding

Open broccoliSpicy opened this issue 1 year ago • 3 comments

For encodings like FSST, we store the decoding metadata in PageInfo.encoding; however, after compaction we may end up with many pieces of decoding metadata in a single page.

broccoliSpicy avatar Jul 06 '24 23:07 broccoliSpicy

Another concern: we assume we want our write/read page size to align with the physical disk's or cloud storage's optimal write/read size, but we can't actually tell the output size of an encoding before we run it.

broccoliSpicy avatar Jul 08 '24 23:07 broccoliSpicy

We spoke in person (well, Google Meet) about this issue and here is my understanding:

Now that we have compressive encodings, we need to worry about the difference between "decoded size" and "encoded size". Our current approach is "accumulate at least 8MB of decoded data, encode all of it, write a page" (the 8MB is configurable). If an encoding is very compressive then we might write small pages.

In addition, many encodings have a preferred "chunk size". For example, FSST creates a unique symbol table for each chunk. In FastLanes-style bit-packing/FOR/delta, the authors operate on chunks of 1024 rows (and each chunk may have a unique bit width).

I propose something like this (a rough sketch follows the list):

  • Add a new method to the encoding trait that reports a "preferred chunk size" (may be None if the encoding does not chunk)
  • The primitive page encoder accumulates at least "chunk size" bytes of decoded data, calls the encode routine, and puts the encoded buffers in an accumulation queue.
  • The primitive page encoder then accumulates at least "page size" bytes of compressed/encoded data and writes a page.
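
For concreteness, here is a minimal sketch of that two-level accumulation. This is not the actual lance-encoding API; `SketchEncoder`, `PrimitivePageEncoder`, and `PAGE_SIZE` are hypothetical names used only to illustrate encoding at the preferred chunk size and flushing a page once the *encoded* (not decoded) bytes reach the page size.

```rust
/// Hypothetical addition to the encoding trait: an encoder reports how much
/// decoded data it prefers to see per encode call (None = no chunking).
trait SketchEncoder {
    fn preferred_chunk_size(&self) -> Option<usize>;
    /// Encode one chunk of decoded bytes and return the encoded buffer.
    fn encode_chunk(&mut self, decoded: &[u8]) -> Vec<u8>;
}

/// Hypothetical configurable target page size.
const PAGE_SIZE: usize = 8 * 1024 * 1024;

struct PrimitivePageEncoder<E: SketchEncoder> {
    encoder: E,
    decoded_buf: Vec<u8>,        // stage 1: accumulate decoded data up to chunk size
    encoded_queue: Vec<Vec<u8>>, // stage 2: accumulate encoded chunks up to page size
    encoded_bytes: usize,
}

impl<E: SketchEncoder> PrimitivePageEncoder<E> {
    /// Push decoded data; completed pages (lists of encoded buffers) are
    /// appended to `pages_out`.
    fn push(&mut self, decoded: &[u8], pages_out: &mut Vec<Vec<Vec<u8>>>) {
        self.decoded_buf.extend_from_slice(decoded);

        // Stage 1: whenever we have a full preferred chunk, encode it.
        let chunk_size = self.encoder.preferred_chunk_size().unwrap_or(PAGE_SIZE);
        while self.decoded_buf.len() >= chunk_size {
            let chunk: Vec<u8> = self.decoded_buf.drain(..chunk_size).collect();
            let encoded = self.encoder.encode_chunk(&chunk);
            self.encoded_bytes += encoded.len();
            self.encoded_queue.push(encoded);
        }

        // Stage 2: once the *encoded* size reaches the page size, flush a page.
        if self.encoded_bytes >= PAGE_SIZE {
            pages_out.push(std::mem::take(&mut self.encoded_queue));
            self.encoded_bytes = 0;
        }
    }
}
```

Keying the page flush on encoded bytes rather than decoded bytes is what keeps page sizes close to the storage-friendly target even for very compressive encodings.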

westonpace avatar Jul 12 '24 20:07 westonpace

@westonpace ha, there are actually some comments related to this issue under #2563, though nothing new that we haven't covered in the Google Meet.

broccoliSpicy avatar Jul 15 '24 20:07 broccoliSpicy