Chris
> So the budget you have constructed defines that in the last 15 mins of every hour you allow **1 disruption at a time**.

I have the same question...
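For reference, a budget with that behaviour would look roughly like the sketch below, assuming this is about Karpenter NodePool disruption budgets; the cron window and node count come from the description above, everything else is illustrative and other NodePool fields are omitted.

```yaml
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default            # illustrative name; template and other fields omitted
spec:
  disruption:
    budgets:
      # Active from minute 45 for 15 minutes, i.e. the last 15 mins of every hour.
      # While the window is active, at most 1 node may be disrupted at a time.
      - nodes: "1"
        schedule: "45 * * * *"
        duration: 15m
```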
We are also seeing something similar in a number of our EKS clusters, using `v2.6.2` of the controller. Our configmap and lease update rates are similar to what was reported...
Hi, apologies: by "something similar", I meant "what seems like excessively frequent updates for configmaps and leases from the aws-lb-controller" (i.e. once every other second _feels_ high, and also doesn't...
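For anyone else trying to quantify this: if these are leader-election renewals (which is what frequent Lease updates usually are), each renewal rewrites `renewTime` on the Lease, and that shows up as a steady stream of UPDATEs. A rough sketch of what such a Lease looks like is below; the object name and values are assumptions for illustration, not taken from the controller's source.

```yaml
apiVersion: coordination.k8s.io/v1
kind: Lease
metadata:
  name: aws-load-balancer-controller-leader   # name is an assumption
  namespace: kube-system
spec:
  holderIdentity: aws-load-balancer-controller-7d9fbb6d8-x2k4j   # illustrative
  leaseDurationSeconds: 15
  # Every leader-election renewal rewrites renewTime, which is what appears
  # as a frequent update on the Lease (and on the ConfigMap in older setups).
  renewTime: "2024-08-08T17:10:22.000000Z"
```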
/remove-lifecycle stale
Also seeing this issue after upgrading to `v0.37.3`.
We bumped into this issue as well, as we have a number of cluster policies (e.g. Kyverno) for enforcing various standards around how resources and limits are set across the...
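For context, the policies in question are along the lines of the sketch below: a hypothetical Kyverno `ClusterPolicy` that rejects Pods whose containers don't declare CPU/memory limits. The name and exact pattern are illustrative, not our production policy.

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-resource-limits   # illustrative name
spec:
  validationFailureAction: Enforce
  rules:
    - name: check-container-limits
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "CPU and memory limits are required for all containers."
        pattern:
          spec:
            containers:
              # "?*" requires the field to be present and non-empty.
              - resources:
                  limits:
                    cpu: "?*"
                    memory: "?*"
```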
We also hit this issue with a dual-stack cluster. We deploy all of the pieces via the `kube-prometheus-stack` chart. By default, the alertmanager peers use the singleStack/ipv4 `alertmanager-operated` service to...
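For anyone comparing setups, the relevant part of that Service is the IP-family configuration; by default it ends up single-stack, roughly like the sketch below (selector, namespace, and ports are illustrative). A dual-stack variant would need `ipFamilyPolicy: PreferDualStack` or `RequireDualStack` instead.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: alertmanager-operated
  namespace: monitoring          # namespace is an assumption
spec:
  clusterIP: None                # headless service used for peer discovery
  ipFamilyPolicy: SingleStack    # default: only the cluster's primary family (IPv4 here)
  ipFamilies:
    - IPv4
  selector:
    app.kubernetes.io/name: alertmanager   # illustrative selector
  ports:
    - name: tcp-mesh
      port: 9094
      targetPort: 9094
```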
Yes, the same warning message is present:
```
ts=2024-08-08T17:10:22.765Z caller=main.go:181 level=info msg="Starting Alertmanager" version="(version=0.27.0, branch=HEAD, revision=0aa3c2aad14cff039931923ab16b26b7481783b5)"
ts=2024-08-08T17:10:22.765Z caller=main.go:182 level=info build_context="(go=go1.21.7, platform=linux/amd64, user=root@22cd11f671e9, date=20240228-11:51:20, tags=netgo)"
ts=2024-08-08T17:10:22.777Z caller=cluster.go:179 level=warn component=cluster err="couldn't deduce...
```
> Thanks for confirming. Using `status.podIPs` instead of `status.podIP` fixes the issue "by accident" but it's not a proper resolution for the project.

Totally agree, it's just a janky workaround...
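To spell out why it works "by accident": on a dual-stack pod, `status.podIP` carries only the primary-family address, while `status.podIPs` lists one address per family, so reading the latter presumably surfaces the secondary-family address as well. A made-up example of what the two fields look like on such a pod (addresses are illustrative):

```yaml
# Status of a dual-stack Pod (addresses are illustrative).
status:
  podIP: 10.244.1.23              # primary family only (IPv4 in an IPv4-primary cluster)
  podIPs:
    - ip: 10.244.1.23             # same primary address...
    - ip: fd00:10:244:1::17       # ...plus the secondary-family (IPv6) address
```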