Sebastian Cole
I even think this could work well using `node.kubernetes.io/unschedulable`, allowing someone to cordon a node and exclude it from the nodesReady check.
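For context, cordoning sets `spec.unschedulable` and Kubernetes adds a matching taint automatically, so a nodesReady check could filter on either. A rough sketch of the relevant node fields after `kubectl cordon` (node name is hypothetical):

```
# abridged Node object after `kubectl cordon <node>`;
# the taint is added automatically from spec.unschedulable
apiVersion: v1
kind: Node
metadata:
  name: ip-10-0-1-23.ec2.internal   # hypothetical node name
spec:
  unschedulable: true
  taints:
    - key: node.kubernetes.io/unschedulable
      effect: NoSchedule
```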
Thanks @eytan-avisror, we're actually using the custom resource for upgrades (workflows). The flow looks like this: 1. patch the InstanceGroup, usually the image id; 2. the launch template is updated...
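To illustrate step 1, a minimal sketch of the kind of change involved (field paths, names, and values are assumptions about the InstanceGroup CRD, not taken from this thread):

```
# hypothetical InstanceGroup update to roll a new AMI;
# changing the spec is what drives the launch template update in step 2
apiVersion: instancemgr.keikoproj.io/v1alpha1
kind: InstanceGroup
metadata:
  name: my-instance-group          # hypothetical name
  namespace: instance-manager
spec:
  eks:
    configuration:
      image: ami-0123456789abcdef0 # the new image id
```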
@backjo not sure I'm following your point. The pods are backed by Deployments and StatefulSets, but they can only be restarted at particular times (customer maintenance windows). I'm not sure...
inspired by #320
I've been thinking about this change over the last couple of days, and I think the best and most natural way to implement it would be with #320 + #321, and an...
I'm happy to fix this up after LaunchTemplates get implemented - it should just be a case of adding the new field name and merging the two to preserve backwards compatibility.
What about adding `labels`, merging it with `nodeLabels`, and removing `nodeLabels` in `v1alpha2`?
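A rough sketch of what the deprecation window could look like (the field layout is an assumption, not the actual CRD schema):

```
# hypothetical v1alpha1 spec during the deprecation window:
# both fields are accepted and merged, with `labels` winning on conflicts
spec:
  eks:
    configuration:
      labels:              # new field
        team: payments
      nodeLabels:          # deprecated alias, merged into labels
        node-role: worker
# effective node labels after the merge: team=payments, node-role=worker
```

In `v1alpha2` the `nodeLabels` key would then be dropped from the schema entirely.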
Looks like EC2 instances launched from a launch template carry a tag, e.g. `aws:ec2launchtemplate:version: 2`, which may help with this.
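For reference, AWS applies a pair of reserved tags to such instances (the values here are hypothetical):

```
# reserved tags on an instance launched from a launch template,
# as they would appear in a DescribeInstances response
"aws:ec2launchtemplate:id": lt-0abc123def456789a
"aws:ec2launchtemplate:version": "2"
```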
I manually applied a `RollingUpgrade` with `kubectl apply`, this time setting `drainTimeout: 600`, and it succeeded. Doing the same thing with `drainTimeout: -1` also errored.

```
apiVersion: upgrademgr.keikoproj.io/v1alpha1
kind: RollingUpgrade
metadata:
  annotations:
    ...
```
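For reference, a fuller version of that manifest might look like the following (everything beyond the quoted `apiVersion`, `kind`, and `drainTimeout` fields is an assumption about the RollingUpgrade CRD, including the names and the ASG field):

```
# a sketch of the manifest above; field names besides
# drainTimeout are assumptions about the RollingUpgrade CRD
apiVersion: upgrademgr.keikoproj.io/v1alpha1
kind: RollingUpgrade
metadata:
  name: rollup-nodes             # hypothetical name
  namespace: kube-system
spec:
  asgName: my-node-group-asg     # hypothetical ASG name
  strategy:
    drainTimeout: 600            # the value that succeeded
```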
OK, I think I've figured out the chain of events to replicate the failure. 1. The upgrade-manager readme specifies that `strategy.drainTimeout` has a default value of `-1` (I'm assuming to...
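Based on that reading, the repro appears to be as simple as making the documented default explicit (a sketch; whether validation treats the implicit and explicit values differently is the open question):

```
# setting the documented default explicitly errors,
# while drainTimeout: 600 succeeds (see the manifest above)
spec:
  strategy:
    drainTimeout: -1
```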