Hi,
A little background information:
- rancher version: v2.2.6
- kubernetes version: v1.14.3
- cluster: custom nodes
- cloud_provider: AWS
- air-gap environment: true
- rancher HA: true
- kubernetes nodes: 3 controlplane/etcd nodes, 2 worker nodes
I am trying to drain a worker node so that I can apply an AMI patch without disrupting any services. The drain works fine when I run it directly with kubectl:
kubectl drain <insert_node_name> --ignore-daemonsets
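(For reference, the kubectl equivalent of the grace period and drain timeout options I mention below would be something like the following; the values here are just placeholders:)

kubectl drain <insert_node_name> --ignore-daemonsets --grace-period=120 --timeout=5m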
However, when I try to perform the same drain via the Rancher UI, nothing happens. I checked the Rancher logs and found the following:
2019/10/01 19:11:25 [ERROR] nodeDrain: [ip-XX-XX-XX-XX] in cluster [c-abcd1] : <nil>
2019/10/01 19:11:26 [INFO] Draining node ip-XX-XX-XX-XX in c-abcd1
2019/10/01 19:11:26 [ERROR] nodeDrain: [ip-XX-XX-XX-XX] in cluster [c-abcd1] : <nil>
(the last two lines repeat several times per second, and the error is always just "<nil>")
I tried draining the node with the default options as well as with high grace periods and drain timeouts, and observed the same errors in the container logs each time.
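In case it helps narrow things down: I assume the UI triggers this through the v3 node drain action, so something like the curl below should reproduce it outside the UI. I'm guessing at the exact body field names based on the nodeDrainInput fields used elsewhere in the Rancher config, so treat this as a sketch, not gospel (token, host, and node ID are placeholders):

curl -sk -u "token-xxxxx:<secret>" \
  -X POST \
  -H 'Content-Type: application/json' \
  -d '{"ignoreDaemonSets": true, "deleteLocalData": false, "force": false, "gracePeriod": 120, "timeout": 300}' \
  "https://<rancher_host>/v3/nodes/<node_id>?action=drain"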
Does anyone know why this is happening?
Thanks so much for the help,
Zach