İsmet BALAT
we checked all ingresses with `kubectl get ingress -nprod -oyaml | grep -A 10 slow_start` and only one ingress contains the "**slow_start**" configuration (_at the same time, this ingress sets weighted...
hi @huangm777 , do you have any update on this issue?
hi @kishorj , why is the `targetgroupbinding-max-exponential-backoff-delay` value **16m40s**? If I set it to **~1m**, will it affect or break anything else?
If there are no failed items, then even with a one-minute delay no retry job runs, right? And if the workqueue gets overwhelmed, can the controller keep up correctly if I increase the pod resources?
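For anyone else wondering where **16m40s** comes from: it is 1000 seconds, the default max delay of client-go's per-item exponential backoff rate limiter. A minimal sketch, assuming the controller flag simply feeds that cap (the item key below is made up):

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/util/workqueue"
)

func main() {
	// Per-item exponential backoff: the delay doubles on each failure of
	// the same item, starting at 5ms and capped at the max delay.
	// 1000s formats as "16m40s", which is where that value comes from.
	rl := workqueue.NewItemExponentialFailureRateLimiter(5*time.Millisecond, 1000*time.Second)
	for i := 0; i < 20; i++ {
		// Each call for the same item counts as one more failure,
		// so the printed delay doubles until it hits the cap.
		fmt.Println(rl.When("tgb/prod/example"))
	}
}
```

With the cap lowered to ~1m, the only change is that repeatedly failing items are retried at most once a minute instead of backing off to 16m40s; items that never fail are unaffected by this setting.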
any update? please solve this problem. Same issue: https://github.com/kubernetes/autoscaler/issues/6679
@databonanza do you have any solution? Does the problem still exist for you?
thanks for the info. Sure, an example issue scenario is below (see the sketch after the list):
1. The UI shows the `weighted random` algorithm and `slow start` is **0**
2. I set `"alb.ingress.kubernetes.io/target-group-attributes" = "slow_start.duration_seconds=60, deregistration_delay.timeout_seconds=35"` on the ingress...
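For context, this is roughly what the controller ends up asking of the ELBv2 API for that annotation; a minimal sketch with the AWS SDK for Go v2 (the target group ARN is a placeholder). Note that AWS documents slow start as unsupported with the weighted random algorithm, which appears to be what this scenario trips on:

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	elbv2 "github.com/aws/aws-sdk-go-v2/service/elasticloadbalancingv2"
	"github.com/aws/aws-sdk-go-v2/service/elasticloadbalancingv2/types"
)

func main() {
	cfg, err := config.LoadDefaultConfig(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	client := elbv2.NewFromConfig(cfg)

	// Placeholder: substitute the ARN of the target group the ingress created.
	targetGroupArn := "arn:aws:elasticloadbalancing:..."

	// Equivalent of the annotation value
	// "slow_start.duration_seconds=60, deregistration_delay.timeout_seconds=35".
	// If the target group uses weighted_random routing, the API rejects
	// a non-zero slow_start here.
	_, err = client.ModifyTargetGroupAttributes(context.TODO(), &elbv2.ModifyTargetGroupAttributesInput{
		TargetGroupArn: aws.String(targetGroupArn),
		Attributes: []types.TargetGroupAttribute{
			{Key: aws.String("slow_start.duration_seconds"), Value: aws.String("60")},
			{Key: aws.String("deregistration_delay.timeout_seconds"), Value: aws.String("35")},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
}
```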
We have been using `--targetgroupbinding-max-exponential-backoff-delay=60s` for a long time and it is working; no other problems so far.
is there any update?
omg, 6h later, pods are still in "Terminating" status and the node is "NotReady". btw, the instance is **m5.large**. And I got new instance stdout logs: ``` [ 8080.945657] xfs filesystem...