2.0: Deleted nodes do not go away in Kubernetes
Summary of Issue: After deleting nodes from the cluster details page, Kubernetes always shows the deleted nodes as NotReady
Steps to Reproduce:
- Spin up a cluster with multiple nodes in DO.
- Once the cluster is ready, manually delete one of the nodes from the cluster details page.
- SSH into the master.
- Run `kubectl get nodes`.
Expected Results: The deleted nodes do not show up in the cluster.
Actual Results: The deleted nodes continue to show up with a status of NotReady.
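For illustration, the output looks something like this (node names, ages, and versions below are made up, not from an actual run):

```
$ kubectl get nodes
NAME                    STATUS     ROLES     AGE       VERSION
test-cluster-master-1   Ready      master    1h        v1.10.x
test-cluster-node-1     Ready      <none>    1h        v1.10.x
test-cluster-node-2     NotReady   <none>    1h        v1.10.x   <- droplet already deleted in DO
```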
Dev Info: (fill out and add links to log files)
- go version: 1.10
- SG latest commit hash or release tag: dddedff38267d34e16763fde1955ca4dec1cc5d7
- Number of Masters and Nodes: Any
- cloud provider: DO
Could this option be relevant?
--min-request-timeout int Default: 1800
An optional field indicating the minimum number of seconds a handler must keep a request open before timing it out. Currently only honored by the watch request handler, which picks a randomized value above this number as the connection timeout, to spread out load.
https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/
Or one of these, from the kube-controller-manager?
--node-monitor-grace-period duration Default: 40s
Amount of time which we allow running Node to be unresponsive before marking it unhealthy. Must be N times more than kubelet's nodeStatusUpdateFrequency, where N means number of retries allowed for kubelet to post node status.
--node-monitor-period duration Default: 5s
The period for syncing NodeStatus in NodeController.
--node-startup-grace-period duration Default: 1m0s
Amount of time which we allow starting Node to be unresponsive before marking it unhealthy.
https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager/
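If I'm reading that right, with the kubelet default nodeStatusUpdateFrequency of 10s, the default 40s node-monitor-grace-period corresponds to N = 4 missed status updates before the node is marked NotReady. As far as I can tell, though, these flags only control how quickly the node gets marked unhealthy; they wouldn't make the Node object itself go away.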
Looks like control is just removing the droplet for this step. To gracefully shut down a node, it should first be removed from the Kubernetes cluster.
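Something along these lines (node name is just a placeholder) would drain and then remove the node before the droplet is destroyed:

```
kubectl drain <node-name> --ignore-daemonsets --delete-local-data
kubectl delete node <node-name>
```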
I think to solve this, SG should send a delete node request to the Kubernetes API after the machine is terminated.
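A rough sketch of what that call could look like with client-go (package, function, and variable names here are mine, and it assumes a client-go version from the 1.10 era, before the Delete methods took a context.Context; SG would already need a Kubernetes client and the node name for the terminated machine):

```go
package provisioner

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteNodeObject removes the Node object for a machine that has already
// been terminated, so it stops lingering in `kubectl get nodes` as NotReady.
// Equivalent to running `kubectl delete node <nodeName>`.
func deleteNodeObject(client kubernetes.Interface, nodeName string) error {
	return client.CoreV1().Nodes().Delete(nodeName, &metav1.DeleteOptions{})
}
```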
Unable to test until the DO remove-node flow is fixed.