datamon
Investigate the role of chunk size streaming.
Currently, the CAFS upload to the backend waits until a full leaf-sized buffer has been read in. Look into sending the data to the backend without waiting for the buffer to be read in completely. The eventual key could be copied over by S3 (unclear whether this would be faster). Alternatively, the chunk size could be tuned.
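For context, a minimal Go sketch of the wait-on-full-buffer behavior described above; `leafSize` and `uploadLeaf` are illustrative stand-ins, not datamon's actual CAFS API:

```go
package main

import (
	"context"
	"fmt"
	"io"
	"strings"
)

// leafSize is the CAFS leaf size; 2 MiB here is illustrative only.
const leafSize = 2 * 1024 * 1024

// uploadLeaf stands in for the real backend call; the signature is hypothetical.
func uploadLeaf(ctx context.Context, buf []byte) error {
	fmt.Printf("uploading leaf of %d bytes\n", len(buf))
	return nil
}

// writeLeaves reads the source into leaf-sized buffers and uploads each one
// only after the buffer is completely filled (or the source is exhausted),
// mirroring the wait-on-full-buffer behavior described above.
func writeLeaves(ctx context.Context, r io.Reader) error {
	buf := make([]byte, leafSize)
	for {
		n, err := io.ReadFull(r, buf)
		if n > 0 {
			if uerr := uploadLeaf(ctx, buf[:n]); uerr != nil {
				return uerr
			}
		}
		if err == io.EOF || err == io.ErrUnexpectedEOF {
			return nil // source exhausted; last (partial) leaf already sent
		}
		if err != nil {
			return err
		}
	}
}

func main() {
	_ = writeLeaves(context.Background(), strings.NewReader("example data"))
}
```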
@kerneltime it looks like the current situation is as follows:
- reads and writes are parallelized in chunks the size of a leaf
- we do not split further down than the leaf size
I assume this is acceptable and that we could close this one, unless there is still something I missed. Please advise.
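For reference, a minimal sketch of what leaf-granularity parallel writes could look like; the worker count, leaf size, and `putLeaf` are all illustrative assumptions, not datamon's actual implementation:

```go
package main

import (
	"fmt"
	"sync"
)

const (
	leafSize    = 2 * 1024 * 1024 // illustrative leaf size
	concurrency = 4               // illustrative degree of parallelism
)

// putLeaf stands in for the real backend write; hypothetical.
func putLeaf(index int, leaf []byte) {
	fmt.Printf("leaf %d: %d bytes\n", index, len(leaf))
}

// uploadParallel fans leaf-sized chunks out to a fixed pool of workers,
// so writes proceed concurrently but never at a granularity finer than a leaf.
func uploadParallel(data []byte) {
	type job struct {
		index int
		leaf  []byte
	}
	jobs := make(chan job)
	var wg sync.WaitGroup
	for w := 0; w < concurrency; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := range jobs {
				putLeaf(j.index, j.leaf)
			}
		}()
	}
	// Slice the input into leaf-sized chunks; the final chunk may be short.
	for i, off := 0, 0; off < len(data); i, off = i+1, off+leafSize {
		end := off + leafSize
		if end > len(data) {
			end = len(data)
		}
		jobs <- job{index: i, leaf: data[off:end]}
	}
	close(jobs)
	wg.Wait()
}

func main() {
	uploadParallel(make([]byte, 5*leafSize+123))
}
```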