Add flags for memory usage & improve the cluster building process
Feature Request
We currently request 1G for fixture.Small and 4G for fixture.Medium. Several 4G TiKV instances quickly eat up the resources of the k8s cluster, while our tests may not actually use that much memory.
On my laptop, starting a test cluster with 1 PD, 1 TiDB, and 3 TiKV fails with insufficient memory, while docker reports only 4G/24G memory in use. Yes, the cluster actually used only 4G of memory, but the resources of the k8s cluster were "exhausted" by the requests.
So:

- Give a much smaller memory resource request, ideally close to the actual usage of a freshly started binary. We can still set a large limit, but there is no reason to waste resources on inflated requests; if OOMing tests are the worry, a better testing scheduler is the right fix. A sketch of what this could look like follows this list.
- Currently, there is no way to modify the recommended cluster configuration of TiPocket. What I want is to use the recommended cluster as a fallback/default config set and then override it based on my own needs. Whoever wrote the test knows best how much memory it will need, don't they? (A possible API for this is sketched at the end.)
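For illustration, a minimal sketch of a small request paired with a generous limit, using the standard k8s resource types (the helper name and the concrete values are assumptions, not measurements):

```go
package fixture

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// smallTiKVResources is a hypothetical helper: request roughly what a
// freshly started binary needs, while keeping a generous limit so the
// pod can still grow under load before hitting OOM.
func smallTiKVResources() corev1.ResourceRequirements {
	return corev1.ResourceRequirements{
		Requests: corev1.ResourceList{
			// illustrative value; ideally measured from a freshly
			// started binary rather than hard-coded
			corev1.ResourceMemory: resource.MustParse("256Mi"),
		},
		Limits: corev1.ResourceList{
			corev1.ResourceMemory: resource.MustParse("4Gi"),
		},
	}
}
```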
Note that if binlog returned cluster.Cluster rather than the concrete struct, I would need a cast (type assertion) back to the concrete struct just to override the configuration.
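A self-contained toy showing why that is awkward (all names here are stand-ins, not TiPocket's real API):

```go
package main

import "fmt"

// Cluster mimics the cluster.Cluster interface from the issue.
type Cluster interface {
	Apply() error
}

// binlogCluster stands in for binlog's concrete struct (hypothetical).
type binlogCluster struct{ TiKVMemory string }

func (b *binlogCluster) Apply() error { return nil }

// newBinlogCluster hides the concrete type behind the interface.
func newBinlogCluster() Cluster {
	return &binlogCluster{TiKVMemory: "4Gi"}
}

func main() {
	c := newBinlogCluster()
	// The caller must type-assert back to the concrete struct just to
	// tweak one field, which is exactly the cast complained about above.
	bc, ok := c.(*binlogCluster)
	if !ok {
		panic("unexpected cluster implementation")
	}
	bc.TiKVMemory = "256Mi"
	fmt.Println(bc.TiKVMemory)
}
```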
The idiomatic Go way is a DefaultConstructor(...) with a variadic argument of type func(*Options), i.e. the functional options pattern. Passing WithXXX() options lets the constructor handle the dirty details while still providing enough customization. RecommendedTiDBCluster could apply this trick.
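A minimal sketch of that pattern (the struct fields, option names, and return type are illustrative assumptions, not TiPocket's actual API):

```go
package cluster

// TiDBClusterSpec is an illustrative stand-in for the concrete
// recommended-cluster struct.
type TiDBClusterSpec struct {
	Namespace    string
	TiKVReplicas int
	TiKVMemory   string
}

// Option mutates the spec; constructors accept any number of them.
type Option func(*TiDBClusterSpec)

// WithTiKVMemory lets a test override the default memory setting.
func WithTiKVMemory(mem string) Option {
	return func(s *TiDBClusterSpec) { s.TiKVMemory = mem }
}

// RecommendedTiDBCluster applies the recommended defaults first, then
// the caller's overrides on top (a sketch of the proposed signature).
func RecommendedTiDBCluster(ns string, opts ...Option) *TiDBClusterSpec {
	spec := &TiDBClusterSpec{
		// recommended defaults act as the fallback config set
		Namespace:    ns,
		TiKVReplicas: 3,
		TiKVMemory:   "1Gi",
	}
	for _, opt := range opts {
		opt(spec)
	}
	return spec
}
```

A test that knows its own footprint could then write something like `RecommendedTiDBCluster("tipocket-binlog", WithTiKVMemory("256Mi"))` and leave everything else at the recommended defaults.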