Generate a skewed workload at shard granularity in mako
Mako's -z option can generate a skewed workload based on a Zipf distribution, but the skew is applied at the record level. It would be useful to skew workloads over shards rather than records, so that some shards are busier than others.
A new option, -zs, could divide the record count by the default shard size to get a shard count. At runtime, zipfian_next() would select a shard from that set, and a random record would then be chosen from within the selected shard.
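For illustration, here is a minimal C sketch of that selection logic, not an actual mako patch. It assumes mako's zipfian_generator()/zipfian_next() API from zipf.c (exact signatures may differ across versions); the SHARD_SIZE_ROWS constant and the skewed_shard_* helper names are hypothetical.

```c
/* Sketch of the proposed -zs key selection: Zipf over shards,
 * uniform within a shard. All names below except the two zipfian_*
 * functions are made up for this example. */
#include <stdlib.h>

#define SHARD_SIZE_ROWS 100000 /* assumed "default shard size" in rows */

extern void zipfian_generator(int min, int max); /* assumed, from mako's zipf.c */
extern int zipfian_next(void);                   /* Zipf-distributed index */

static int shard_count;

/* Setup: divide the record count into shard-sized buckets and build
 * the Zipf distribution over buckets rather than individual rows. */
void skewed_shard_init(int total_rows) {
    shard_count = (total_rows + SHARD_SIZE_ROWS - 1) / SHARD_SIZE_ROWS;
    zipfian_generator(0, shard_count - 1);
}

/* Per operation: Zipf-select a shard (hot shards come up more often),
 * then pick a record uniformly at random inside that shard. */
int skewed_shard_next(int total_rows) {
    int shard = zipfian_next();
    long base = (long)shard * SHARD_SIZE_ROWS;
    long span = SHARD_SIZE_ROWS;
    if (base + span > total_rows) /* the last shard may be partial */
        span = total_rows - base;
    return (int)(base + rand() % span);
}
```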
I have written an FDB workload, SkewedReadWriteWorkload (#7087), that does a similar thing.
If mako is not a hard requirement for your use case, it may be helpful.
I will try it, thank you. The reason I use mako is that it lets me create a custom configuration with many storage servers (SS). I think the workloads can only be used in a simulation environment; if so, getting workloads to work outside simulation would be a nice enhancement.
Many workloads can be used outside simulation, e.g., ConsistencyCheck.
This workload can be used outside of simulation tests.
I used it for many DD (data distribution) rebalance tests on a real cluster, and the commented-out settings in the toml file are the ones I used on the real cluster. To use a workload outside simulation, you need to run fdbserver -r multitest as described here.
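For reference, a rough sketch of that setup. The cluster-file path and toml file name are placeholders, and flag spellings vary across FDB versions, so check `fdbserver --help` for your build:

```sh
# Start one or more tester processes against the real cluster.
fdbserver -r test -C /etc/foundationdb/fdb.cluster -p auto:4000

# Then run the orchestrator, pointing it at the workload's test file.
# (--num-testers may be spelled --num_testers on older versions.)
fdbserver -r multitest -C /etc/foundationdb/fdb.cluster -p auto:4500 \
    -f SkewedReadWrite.toml --num-testers 1
```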
This will help, thanks.