Hey guys,
I don’t know if this is expected behavior, but in our tests for handling shut-down nodes, our workflow is as follows:
Rancher Hosts Overview -> Disable Host “worker-3”
$ kubectl get nodes
NAME       STATUS    AGE       VERSION
worker-1   Ready     1d        v1.7.7-rancher1
worker-2   Ready     1d        v1.7.7-rancher1
worker-3   Ready     1d        v1.7.7-rancher1
If we then evacuate the host, Kubernetes respawns our deployments and services on that same, now inactive, Rancher host, because from Kubernetes’ point of view the node is still “Ready”.
Is this expected behavior?
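If it is, a manual cordon/drain is roughly what we are considering as a workaround before shutting a host down (just a sketch, assuming we only want the scheduler to stop using the host and to move the pods already running there; exact flags may vary by kubectl version):

$ kubectl cordon worker-3                      # mark the node unschedulable so no new pods land on it
$ kubectl drain worker-3 --ignore-daemonsets   # evict the pods currently running there (DaemonSet pods are skipped)
$ kubectl uncordon worker-3                    # once the host is active again, allow scheduling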
Regards, Chris