Allow task autoscaler to scale to nearest divisor of the partition count
Allows the scaler to scale up to the nearest divisor of the partition count. This helps support even distribution of partitions across tasks, lowering total lag across the supervisor. The factors of `partitionCount` are computed and cached on supervisor submit, and re-computed ad hoc if the topic's partition count changes during supervisor execution.
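For illustration only, here is a minimal sketch of computing the sorted divisor list that gets cached; this is not the PR's actual code, and the class and method names are hypothetical:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Hypothetical sketch: enumerate the divisors of the partition count in
// O(sqrt(n)) so that scale-up targets can snap to them.
class PartitionCountDivisors
{
  static List<Integer> divisorsOf(int partitionCount)
  {
    final List<Integer> divisors = new ArrayList<>();
    for (int i = 1; i * i <= partitionCount; i++) {
      if (partitionCount % i == 0) {
        divisors.add(i);
        if (i != partitionCount / i) {
          divisors.add(partitionCount / i);
        }
      }
    }
    Collections.sort(divisors);
    return divisors;  // e.g. partitionCount = 12 -> [1, 2, 3, 4, 6, 12]
  }
}
```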
Scale up:
- First, bump by `scaleOutStep`.
- Then, search the partition count's factors for the smallest factor that is ≥ the new `desiredTaskCount`, and set the task count to that value. (A minimal sketch of this rounding follows the scale-down notes below.)
Scale down:
- Remains the same. I figured it'd be better to be precise in the scale down, and not cause inadvertent lag by either scaling below the new `desiredTaskCount` or preventing scale down in the case where no factor exists between `currentTaskCount` and `desiredTaskCount`.
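As promised above, a minimal sketch of the scale-up rounding, assuming the cached, ascending divisor list from the earlier sketch. The method name, parameters, and the fallback when the desired count exceeds the partition count are assumptions for illustration, not the PR's actual code:

```java
import java.util.List;

// Hypothetical helper: bump by scaleOutStep, then snap up to the next divisor.
class ScaleUpRounding
{
  static int computeScaleUpTarget(int currentTaskCount, int scaleOutStep, List<Integer> sortedDivisors)
  {
    final int desiredTaskCount = currentTaskCount + scaleOutStep;
    for (int divisor : sortedDivisors) {
      if (divisor >= desiredTaskCount) {
        // Smallest divisor of the partition count that is >= desiredTaskCount.
        return divisor;
      }
    }
    // desiredTaskCount exceeds the partition count; cap at the largest divisor
    // (the partition count itself). This cap is an assumption for the sketch.
    return sortedDivisors.get(sortedDivisors.size() - 1);
  }
}
```

For example, with 12 partitions and `scaleOutStep = 2`, scaling up from 3 tasks gives a desired count of 5, which rounds up to the divisor 6.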
I intend to contribute the proportional scaler discussed here in a separate change.
Description
Allows the scaler to scale up to the nearest divisor of the partition count. This helps support even distribution of partitions across tasks, lowering total lag across the supervisor. I opted to go with static tests in `LagBasedAutoScalerTest.java`, since this was more flexible and easier than starting/stopping the scaler and sleeping until the scheduled threads had run.
Release note
Allows the lag-based auto-scaler to scale up to the nearest divisor of the partition count.
Key changed/added classes in this PR
- `indexing-service/src/main/java/org/apache/druid/indexing/seekablestream/supervisor/autoscaler/LagBasedAutoScaler.java`
- `indexing-service/src/main/java/org/apache/druid/indexing/seekablestream/supervisor/autoscaler/LagBasedAutoScalerConfig.java`
- `indexing-service/src/test/java/org/apache/druid/indexing/seekablestream/supervisor/autoscaler/LagBasedAutoScalerTest.java`
This PR has:
- [ ] been self-reviewed.
- [ ] using the concurrency checklist (Remove this item if the PR doesn't have any relation to concurrency.)
- [ ] added documentation for new or modified features or behaviors.
- [ ] a release note entry in the PR description.
- [ ] added Javadocs for most classes and all non-trivial methods. Linked related entities via Javadoc links.
- [ ] added or updated version, license, or notice information in licenses.yaml
- [ ] added comments explaining the "why" and the intent of the code wherever would not be obvious for an unfamiliar reader.
- [ ] added unit tests or modified existing tests to cover new code paths, ensuring the threshold for code coverage is met.
- [ ] added integration tests.
- [ ] been tested in a test Druid cluster.
> I figured it'd be better to be precise in the scale down (and not cause inadvertent lag by either scaling below the new `desiredTaskCount`, or preventing scale down in the case where no factor exists between `currentTaskCount` and `desiredTaskCount`).
This one is tough, but I think we should think through it a little more. Scaling to a non-factor (assuming evenly balanced partitions) would be no better than scaling to the next lowest factor. Some of the indexers will get the same number of partitions as if we'd gone to the lower factor. If lag does increase here, we'd scale back up. If it doesn't, we'd possibly not scale to the lower factor when we could have. Or we may end up scaling down again after the cool-down period, which is more disruptive.
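As a concrete illustration (the numbers here are hypothetical, not from the discussion): with 12 evenly balanced partitions, scaling down to 5 tasks yields a partition distribution of 3, 3, 2, 2, 2, so the busiest tasks carry 3 partitions each, the same as at the next lower factor of 4 tasks (3, 3, 3, 3), while using one extra task slot.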
> Scaling to a non-factor (assuming evenly balanced partitions) would be no better than scaling to the next lowest factor.
Hmm, but at least we avoid the worst case where we accidentally remove more tasks than necessary, which might trigger a second scale-up, etc. That seemed worse than any benefit gained from scaling down further to an even split.
@kfaraz did you have a chance to look at this?
@jtuglu-netflix , have been a little occupied lately, will take a look at this either later today or tomorrow. Thanks for your patience!
Sounds good, thank you!
@kfaraz any thoughts here?
@jtuglu-netflix , I was a little caught up with some other items. I will finish the review of this PR this week.
@jtuglu-netflix We really want to get this PR in for druid 34. The code freeze for that is July 8th. Can this make it please :)
This pull request has been marked as stale due to 60 days of inactivity. It will be closed in 4 weeks if no further activity occurs. If you think that's incorrect or this pull request should instead be reviewed, please simply write any comment. Even if closed, you can still revive the PR at any time or discuss it on the [email protected] list. Thank you for your contributions.
This pull request/issue has been closed due to lack of activity. If you think that is incorrect, or the pull request requires review, you can revive the PR at any time.