Use --endpoint-reconciler-type flag
Is this a BUG REPORT or FEATURE REQUEST?: FEATURE REQUEST
What happened:
Prior to Kubernetes v1.9, the only endpoint reconciler was master-count, which means master endpoints are only cleaned up when the configured count changes. Kubernetes v1.9 introduces a new option, lease.
The 'lease' reconciler stores endpoint leases in the storage API, allowing endpoints of deleted (or removed) API servers to be cleaned up properly.
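As a sketch, the lease reconciler would be enabled by passing the flag to kube-apiserver; the `--endpoint-reconciler-type` flag is the one discussed above, while the other flags shown are placeholders standing in for an existing HA configuration:

```shell
# Hypothetical HA apiserver invocation; only --endpoint-reconciler-type
# is the flag under discussion, the rest is illustrative.
kube-apiserver \
  --endpoint-reconciler-type=lease \
  --etcd-servers=https://etcd-1:2379,https://etcd-2:2379 \
  ...
```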
What you expected to happen:
In HA setups, when a master node goes down the internal kubernetes service is not cleaned up and the broken endpoint is still used. This causes issues when any changes occur during a master outage, and various cluster services will fail when they reach the bad node.
How to reproduce it (as minimally and precisely as possible):
Bring up an HA cluster and take one of the master nodes down. Run kubectl describe svc kubernetes and you will see that both nodes are still listed in the Endpoints: field.
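The reproduction and the expected fix can be checked with standard kubectl commands against the cluster (run on any node with cluster access; no assumptions beyond the source's description):

```shell
# After stopping the apiserver on one master node:
kubectl describe svc kubernetes
# With the default master-count reconciler, the dead master's address
# remains in the Endpoints: field.

# With --endpoint-reconciler-type=lease, the stale address should be
# removed once its lease expires; verify with:
kubectl get endpoints kubernetes -o yaml
```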
Anything else we need to know?: PR Upstream Issue
After doing some testing, the lease appears to be very short: after removing the apiserver, the kubernetes service gets reconciled almost immediately.
Unfortunately, the milestone for this flag to reach beta status was pushed back to v1.11.