AWS Subnet Migration Documentation
Is your feature request related to a problem? Hello! I am looking for some documentation to add clarity around how the controller handles AWS subnet migrations.
For example, let's say:
- I have two subnets named `foo-az-1` and `foo-az-2`.
- Both subnets are tagged `kubernetes.io/role/elb=1`.
- I want to migrate the ALB to new subnets named `bar-az-1` and `bar-az-2`.
Based on the documentation here: https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/deploy/subnet_discovery/, it would seem that I just tag `bar-az-1` and `bar-az-2` with `kubernetes.io/role/elb` set to 1. However, it's not clear whether that would work or what the migration would look like. For example, the documentation says:
> During auto-discovery, the controller considers subnets with at least eight available IP addresses. In the case of multiple qualified tagged subnets in an Availability Zone, the controller chooses the first one in lexicographical order by the subnet IDs.
Is the solution to first ensure that `foo-az-1`, `foo-az-2`, `bar-az-1`, and `bar-az-2` are all tagged with `kubernetes.io/role/elb` set to 1, and then, once that is complete, untag `foo-az-1` and `foo-az-2`?
It's not clear what the controller would do in this case, and I'm also concerned about whether the migration would cause any downtime.
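In CLI terms, the tag-then-untag sequence I have in mind would look something like this (the subnet IDs below are placeholders):

```sh
# Phase 1: tag the new subnets so all four subnets qualify for discovery.
aws ec2 create-tags \
  --resources subnet-0bar1 subnet-0bar2 \
  --tags Key=kubernetes.io/role/elb,Value=1

# Phase 2: once the controller has reconciled, untag the old subnets.
aws ec2 delete-tags \
  --resources subnet-0foo1 subnet-0foo2 \
  --tags Key=kubernetes.io/role/elb
```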
Describe the solution you'd like: Some clarity on how subnet migrations take place.
Describe alternatives you've considered: See above.
Hi, thanks for bringing this up. The proposed solution should work, as the controller uses the modify-subnets API under the hood to change the subnets. Alternatively, you can use the subnet annotation instead of subnet discovery to explicitly specify the subnets the load balancer should use.
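As a minimal sketch of the annotation route for an ALB Ingress (the Ingress name, backend Service, and subnet IDs here are placeholders), pinning the subnets explicitly looks roughly like this:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app  # hypothetical name
  annotations:
    # Pin the ALB to explicit subnets instead of relying on auto-discovery;
    # updating this value prompts the controller to move the load balancer.
    alb.ingress.kubernetes.io/subnets: subnet-0bar1, subnet-0bar2
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app  # hypothetical backend Service
                port:
                  number: 80
```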
> Is the solution to first ensure that `foo-az-1`, `foo-az-2`, `bar-az-1`, and `bar-az-2` are all tagged with `kubernetes.io/role/elb` set to 1, and then, once that is complete, untag `foo-az-1` and `foo-az-2`?
I have not had success with subnet migration via the subnet auto-discovery feature on v2.8.1. Instead, I had to use the subnet annotation that @aravindsagar suggested. Using the annotation, I have not seen downtime when the LB network interfaces are moved across subnets.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
I had to follow @aravindsagar's recommendation. At first I commented out all the subnet annotations, but no migration took place, so instead I kept a subnet annotation and updated its value. I'm currently using v2.6. Based on @j-land's comment, is there any newer version (after v2.8.1) that supports this migration without manual changes?
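For reference, updating the annotation value in place can be done with kubectl, assuming an ALB Ingress (the Ingress name and subnet IDs are placeholders):

```sh
# Overwrite the existing subnets annotation with the new subnet IDs;
# the controller should then reconcile the ALB onto those subnets.
kubectl annotate ingress my-app \
  alb.ingress.kubernetes.io/subnets=subnet-0bar1,subnet-0bar2 \
  --overwrite
```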
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:

> /close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.