
Allow task autoscaler to scale to nearest divisor of the partition count

jtuglu1 opened this pull request 9 months ago • 7 comments

Allows the scaler to scale up to the nearest divisor of the partition count. This helps support even distribution of partitions across tasks, lowering total lag across the supervisor. The factors of the partitionCount are computed and cached on supervisor submit, and re-computed ad hoc if the topic's partition count changes during supervisor execution.
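
For illustration, a minimal sketch of the factor computation described above (class and method names are hypothetical, not the PR's actual code):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

final class PartitionFactors
{
  // Sorted divisors of the partition count: computed once on supervisor
  // submit and re-computed if the topic's partition count changes.
  static List<Integer> factorsOf(int partitionCount)
  {
    final List<Integer> factors = new ArrayList<>();
    // Trial division up to sqrt(n) yields each divisor pair (i, n / i).
    for (int i = 1; i * i <= partitionCount; i++) {
      if (partitionCount % i == 0) {
        factors.add(i);
        if (i != partitionCount / i) {
          factors.add(partitionCount / i);
        }
      }
    }
    Collections.sort(factors);
    return factors;
  }
}
```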

Scale up:

  • First, bump by scaleOutStep.
  • Then, search the cached partition-count factors for the smallest factor ≥ the new desiredTaskCount, and scale to that value (see the sketch after these lists).

Scale down:

  • Remains the same. I figured it'd be better to be precise in the scale down (and not cause inadvertent lag by either scaling below the new desiredTaskCount, or preventing scale down in the case where no factor exists between currentTaskCount and desiredTaskCount).
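
A minimal sketch of the scale-up rule, assuming a non-empty, sorted factor list (hypothetical names; the PR's actual logic lives in LagBasedAutoScaler.java):

```java
import java.util.List;

final class ScaleUpRule
{
  // Bump by scaleOutStep, then round up to the smallest cached factor
  // that is >= the bumped desiredTaskCount.
  static int scaleUpTarget(int currentTaskCount, int scaleOutStep, List<Integer> sortedFactors)
  {
    final int desiredTaskCount = currentTaskCount + scaleOutStep;
    for (int factor : sortedFactors) {
      if (factor >= desiredTaskCount) {
        return factor;
      }
    }
    // No factor at or above the desired count: cap at the largest factor
    // (the partition count itself), since extra tasks would sit idle.
    return sortedFactors.get(sortedFactors.size() - 1);
  }
}
```

For example, with 12 partitions (factors 1, 2, 3, 4, 6, 12), a current count of 3 and scaleOutStep = 2 gives desiredTaskCount = 5, which rounds up to 6.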

I intend to contribute the proportional scaler discussed here in a separate change.

Description

Allows the scaler to scale up to the nearest divisor of the partition count. This helps support even distribution of partitions across tasks, lowering total lag across the supervisor. I opted for static tests in LagBasedAutoScalerTest.java, since this was more flexible and easier than starting/stopping the scaler and sleeping until the scheduled threads could run.

Release note

Allows the lag-based auto-scaler to scale up to the nearest divisor of the partition count.


Key changed/added classes in this PR
  • indexing-service/src/main/java/org/apache/druid/indexing/seekablestream/supervisor/autoscaler/LagBasedAutoScaler.java
  • indexing-service/src/main/java/org/apache/druid/indexing/seekablestream/supervisor/autoscaler/LagBasedAutoScalerConfig.java
  • indexing-service/src/test/java/org/apache/druid/indexing/seekablestream/supervisor/autoscaler/LagBasedAutoScalerTest.java

This PR has:

  • [ ] been self-reviewed.
    • [ ] using the concurrency checklist (Remove this item if the PR doesn't have any relation to concurrency.)
  • [ ] added documentation for new or modified features or behaviors.
  • [ ] a release note entry in the PR description.
  • [ ] added Javadocs for most classes and all non-trivial methods. Linked related entities via Javadoc links.
  • [ ] added or updated version, license, or notice information in licenses.yaml
  • [ ] added comments explaining the "why" and the intent of the code wherever would not be obvious for an unfamiliar reader.
  • [ ] added unit tests or modified existing tests to cover new code paths, ensuring the threshold for code coverage is met.
  • [ ] added integration tests.
  • [ ] been tested in a test Druid cluster.

jtuglu1 avatar May 08 '25 05:05 jtuglu1

> I figured it'd be better to be precise in the scale down (and not cause inadvertent lag by either scaling below the new desiredTaskCount, or preventing scale down in the case where no factor exists between currentTaskCount and desiredTaskCount).

This one is tough, but I think we should think through it a little more. Scaling to a non-factor (assuming evenly balanced partitions) would be no better than scaling to the next-lowest factor: some of the indexers will get the same number of partitions as if we'd gone to the lower factor. If lag does increase here, we'd scale back up. If it doesn't, we'd possibly not scale to the lower factor when we could have, or we may end up scaling down again after the cool-down period, which is more disruptive.
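
Concretely, a minimal sketch assuming partitions are split as evenly as possible across tasks:

```java
final class EvenSplitDemo
{
  public static void main(String[] args)
  {
    final int partitions = 12;
    // With an even split, the busiest task reads ceil(partitions / tasks)
    // partitions. A non-factor count (5) is no better than the factor 4.
    for (int tasks : new int[]{4, 5, 6}) {
      int maxPerTask = (partitions + tasks - 1) / tasks;
      System.out.println(tasks + " tasks -> max " + maxPerTask + " partitions per task");
    }
    // Prints: 4 -> 3, 5 -> 3, 6 -> 2.
  }
}
```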

bsyk avatar May 15 '25 01:05 bsyk


> Scaling to a non-factor (assuming evenly balanced partitions) would be no better than scaling to the next-lowest factor.

Hmm, but at least we remove the worst case where you accidentally remove more tasks than necessary? That might trigger a second scale-up, and so on. That worst case seemed worse than any benefit gained from scaling down further to an even split.

jtuglu1 avatar May 15 '25 01:05 jtuglu1

@kfaraz did you have a chance to look at this?

jtuglu1 avatar May 19 '25 00:05 jtuglu1

@jtuglu-netflix , have been a little occupied lately, will take a look at this either later today or tomorrow. Thanks for your patience!

kfaraz avatar May 19 '25 08:05 kfaraz

> @jtuglu-netflix , have been a little occupied lately, will take a look at this either later today or tomorrow. Thanks for your patience!

Sounds good, thank you!

jtuglu1 avatar May 19 '25 08:05 jtuglu1

@kfaraz any thoughts here?

jtuglu1 avatar May 24 '25 00:05 jtuglu1

@jtuglu-netflix , I was a little caught up with some other items. I will finish the review of this PR this week.

kfaraz avatar Jun 02 '25 06:06 kfaraz

@jtuglu-netflix We really want to get this PR in for druid 34. The code freeze for that is July 8th. Can this make it please :)

cryptoe avatar Jun 30 '25 08:06 cryptoe

This pull request has been marked as stale due to 60 days of inactivity. It will be closed in 4 weeks if no further activity occurs. If you think that's incorrect or this pull request should instead be reviewed, please simply write any comment. Even if closed, you can still revive the PR at any time or discuss it on the [email protected] list. Thank you for your contributions.

github-actions[bot] avatar Nov 09 '25 00:11 github-actions[bot]

This pull request/issue has been closed due to lack of activity. If you think that is incorrect, or the pull request requires review, you can revive the PR at any time.

github-actions[bot] avatar Dec 07 '25 00:12 github-actions[bot]