Append overlay refactor proposal
This branch proposes to avoid clones in `append` by storing the offset and size of the value at the previous overlay depth. That way, on rollback we can simply truncate the existing value back to its previous size. Avoiding the copy also means that:
- append on a new overlay layer when there is an existing value: create a new `Append` entry that records the previous offsets, and take the memory of the previous overlay value.
- rollback of an append: restore the value by applying the recorded offsets and put it back in the previous overlay value.
- commit of an append: the appended value overwrites the previous value (which is an empty vec, since its memory was taken). The offsets of the committed layer are dropped; if there are offsets at the previous overlay layer, they are kept.
- set (or remove) while append offsets are present: the current appended value is moved back to the previous overlay value with the offsets applied, and the now-empty entry is overwritten (no offsets kept).
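The rules above can be sketched roughly as follows. This is a hypothetical minimal illustration of the truncate-on-rollback idea; `AppendEntry`, `new_layer`, and the other names are illustrative and not the actual types in this branch.

```rust
/// Illustrative append entry: the current buffer plus the byte length the
/// value had at the previous overlay depth, so a rollback is a truncate
/// instead of restoring a cloned copy.
struct AppendEntry {
    data: Vec<u8>,
    /// Length of the value at the previous overlay depth.
    prev_len: usize,
}

impl AppendEntry {
    /// Open a new overlay layer: take the memory of the previous value
    /// (leaving an empty vec behind) and remember its length.
    fn new_layer(prev: &mut Vec<u8>) -> Self {
        let data = std::mem::take(prev);
        let prev_len = data.len();
        AppendEntry { data, prev_len }
    }

    /// Append encoded bytes on the current layer.
    fn append(&mut self, item: &[u8]) {
        self.data.extend_from_slice(item);
    }

    /// Rollback: truncate to the recorded length and hand the buffer back
    /// to the previous layer. No clone was ever made.
    fn rollback(mut self, prev: &mut Vec<u8>) {
        self.data.truncate(self.prev_len);
        *prev = self.data;
    }

    /// Commit: the appended value overwrites the previous (now empty)
    /// value; the committed layer's recorded length is simply dropped.
    fn commit(self, prev: &mut Vec<u8>) {
        *prev = self.data;
    }
}
```

The key point is that both `rollback` and `commit` only move or truncate the single buffer; nothing is ever copied.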
The `modify` mechanism is no longer needed. This branch lacks testing and breaks some existing genericity (a bit of duplicated code), but it is good enough to check the direction.
Generally I am not sure whether this is worth it, or whether we should instead favor different directions (transient blob storage, for instance), as the current append mechanism is a bit tricky (having a variable-length prefix in first position means we sometimes need to insert at the front of a vector).
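To make the "insert in front of a vector" trickiness concrete: a SCALE-encoded `Vec<T>` starts with a `Compact` element count, whose encoded width depends on the value. The sketch below (a stand-alone illustration, not code from this branch; `compact_len` is a hypothetical helper) shows the size boundaries of the compact encoding.

```rust
/// Byte length of a SCALE Compact-encoded u32: 1, 2, or 4 bytes depending
/// on the value, plus a big-integer mode above that.
fn compact_len(n: u32) -> usize {
    match n {
        0..=0x3f => 1,             // single-byte mode (0..=63)
        0x40..=0x3fff => 2,        // two-byte mode
        0x4000..=0x3fff_ffff => 4, // four-byte mode
        _ => 5,                    // big-integer mode
    }
}

fn main() {
    // Appending the 64th element grows the length prefix from 1 to 2
    // bytes, so every byte after the prefix must shift right: an insert
    // at the front of the buffer, not a simple push at the end.
    assert_eq!(compact_len(63), 1);
    assert_eq!(compact_len(64), 2);
    assert_eq!(compact_len(16_383), 2);
    assert_eq!(compact_len(16_384), 4);
}
```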
Please don't review yet (I will change the design a bit to bring it closer to https://github.com/paritytech/polkadot-sdk/issues/30).
So this applies to all appendable storage structs?
Not sure if it is worth it, but could we selectively enable this on a `StorageValue` with an `append_only` attribute?
AFAIK we only need this fast truncate on append-only vectors like `System::Events`; the advantage is that they are never read.
Otherwise we should probably add a test to check that this solution has a low memory footprint on deeply nested reverts (as expected).
> So this applies to all appendable storage structs?
Yes, all storage items containing a value. But as I commented a bit in the code, using the same key with `sp_io::set` and later with `sp_io::append` is really bad, and no sensible runtime should allow it. (if you got a value that is encoded Vec
> Not sure if it is worth it, but could we selectively enable this on a `StorageValue` with an `append_only` attribute?
So yes, in practice it could live in a different key space, but that is basically what transient storage does (though the final value would need to be copied into a standard state value in some finalize step). Alternatively (or as an extension of transient storage), a child trie/state could be dedicated to append-only values.
> AFAIK we only need this fast truncate on append-only vectors like `System::Events`; the advantage is that they are never read.
Yes, but we cannot enforce it. My next change will make this use case better (with https://github.com/paritytech/polkadot-sdk/issues/30, if there are only writes followed by a single read (the root calculation), then we will no longer have the costly resize each time the number of element Compact
I think this is now aligned with https://github.com/paritytech/polkadot-sdk/issues/30, so it should be reviewable (it requires more tests, especially since most tests access the value on every transaction change and thus render the data).
If this works out, then we maybe don't need https://github.com/paritytech/substrate/pull/14120.
> If this works out, then we maybe don't need paritytech/substrate#14120
I really think storing big values is a mistake, so at some point a paged list really makes sense (the ability to produce much smaller proofs at small cost).
> If this works out, then we maybe don't need paritytech/substrate#14120
> I really think storing big values is a mistake, so at some point a paged list really makes sense (the ability to produce much smaller proofs at small cost).
Ah right, so it depends on the use case. For `System::Events`, we can just use your MR, but for parachain cases we may rather want to use the paged list.
For events we should use transient storage (only write the root of an event trie into state, while still indexing the data in an external db), but that probably won't land soon, so yes, short term this is more pragmatic.
I went through a bit of fuzzing and fixed a few issues; I think this is good to review now.
The CI pipeline was cancelled due to the failure of one of the required jobs. Job name: test-linux-stable-int. Logs: https://gitlab.parity.io/parity/mirrors/substrate/-/jobs/2850187