RKE up cluster - Destroying first node borks cluster

I’m trying to work out whether I’m missing something in the basic setup of RKE clusters, because for now I’m confused.

I have a 3-node RKE cluster built from a cluster.yml config. Eventually we will use this for a Rancher 2.2 HA install, but for now I’m having some fun with it: breaking it and trying to recover.
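
For reference, the cluster.yml is roughly this shape (the addresses match the logs below; the SSH user is a placeholder, and all three nodes carry all three roles):

nodes:
  - address: 10.104.83.19
    user: rancher
    role: [controlplane, worker, etcd]
  - address: 10.104.83.20
    user: rancher
    role: [controlplane, worker, etcd]
  - address: 10.104.83.21
    user: rancher
    role: [controlplane, worker, etcd]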

If I delete all services from node #2 or node #3 (using a cleanup.sh that force-removes every Docker container and deletes the contents of a bunch of directories, e.g. /var/lib/etcd, /etc/kubernetes, /var/lib/rancher, /var/lib/kubelet, et al.), I can still run "kubectl get node" and it shows the afflicted node as "NotReady".
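
The cleanup.sh amounts to something like this (a sketch rather than the exact script; it also clears a few more directories than the ones named above):

#!/bin/sh
# Force-remove every container on the node
docker rm -f $(docker ps -qa)
# Delete the on-disk state RKE/Kubernetes leave behind
rm -rf /var/lib/etcd/* /etc/kubernetes/* /var/lib/rancher/* /var/lib/kubelet/*
# ...plus the other directories mentioned above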

If I then run "rke up --config cluster.yml", it recovers the node, and eventually "kubectl get node" shows the node as Ready.

However, if I do this to node #1, none of the kubectl commands work. Granted, my kubeconfig file points to that node’s IP, since we have no load balancer in front of port 6443. But even if I edit my kubeconfig file to point to node #2 (which is functional), the kubectl commands STILL hang.
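
To be concrete, the only edit is the server line in the kubeconfig RKE generated (kube_config_cluster.yml in my case), changing

server: "https://10.104.83.19:6443"

to

server: "https://10.104.83.20:6443"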

"rke up --config cluster.yml" fails:
INFO[0090] [healthcheck] Start Healthcheck on service [kube-controller-manager] on host [10.104.83.21]
INFO[0090] [healthcheck] service [kube-controller-manager] on host [10.104.83.20] is healthy
INFO[0090] [healthcheck] service [kube-controller-manager] on host [10.104.83.21] is healthy
INFO[0090] [controlplane] Successfully started [rke-log-linker] container on host [10.104.83.20]
INFO[0090] [controlplane] Successfully started [rke-log-linker] container on host [10.104.83.21]
INFO[0090] [remove/rke-log-linker] Successfully removed container on host [10.104.83.21]
INFO[0090] [remove/rke-log-linker] Successfully removed container on host [10.104.83.20]
INFO[0090] [healthcheck] Start Healthcheck on service [kube-scheduler] on host [10.104.83.21]
INFO[0090] [healthcheck] Start Healthcheck on service [kube-scheduler] on host [10.104.83.20]
INFO[0090] [healthcheck] service [kube-scheduler] on host [10.104.83.20] is healthy
INFO[0090] [healthcheck] service [kube-scheduler] on host [10.104.83.21] is healthy
INFO[0091] [controlplane] Successfully started [rke-log-linker] container on host [10.104.83.20]
INFO[0091] [controlplane] Successfully started [rke-log-linker] container on host [10.104.83.21]
INFO[0091] [remove/rke-log-linker] Successfully removed container on host [10.104.83.20]
INFO[0091] [remove/rke-log-linker] Successfully removed container on host [10.104.83.21]
FATA[0141] [controlPlane] Failed to bring up Control Plane: Failed to verify healthcheck: Service [kube-apiserver] is not healthy on host [10.104.83.19]. Response code: [403], response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"kube-apiserver\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
, log: I0430 16:43:39.971173 1 establishing_controller.go:73] Starting EstablishingController

Clearly there’s something special about node #1. The only path forward I’ve found so far is to destroy the services on all nodes and build a new cluster with rke up (and, if this were production, probably restore from an etcd snapshot afterwards). Is this a limitation of RKE, or can I make the cluster survive the destruction of any node better than this?
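
For the production scenario I’d lean on the built-in etcd snapshot commands, roughly (the snapshot name is a placeholder):

# taken beforehand, while the cluster is healthy
rke etcd snapshot-save --config cluster.yml --name pre-disaster
# after the rebuild, to bring the data back
rke etcd snapshot-restore --config cluster.yml --name pre-disaster
rke up --config cluster.yml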

Bump: Can anyone tell me what I’m missing here? I would assume that knocking down 1 of the 3 nodes of the "Local" cluster in a Rancher HA install is fine, but if I happen to knock down the first one, the kube-apiserver on the remaining nodes goes out to lunch.