Redundancy for imported cluster

Installed a single-node Rancher 2.5.5 a while ago.
Imported an 8-node Kubernetes cluster, with 3 nodes that are control-plane/etcd. Works like a charm.

If I kill (by disconnecting the network connection) the node I performed the cluster import on (master01), the entire cluster becomes unavailable in Rancher until that node is back up again. If I kill any other control-plane node (master02, master03), only the node in question becomes unavailable. For Kubernetes itself it doesn't seem to matter which node I kill: as long as two control-plane nodes are up, the cluster works correctly when managed outside Rancher.

I was under the impression that since Rancher correctly recognizes all my control-plane nodes as such, it would simply talk to any remaining control-plane node if one goes down. That does not appear to be the case.

I've tried reading all the docs on this, but I simply don't understand what I'm missing. Either I've missed something very simple, or the only option is to put a load balancer between Rancher and the control-plane nodes.

Did you import your cluster by pointing at only master01 or by pointing at a load balancer or a DNS hostname that points to master01, master02, & master03?

I'm guessing you did the former, and if so, Rancher (and any kubeconfig files from Rancher) will think master01 is the only control-plane node. If you set up the worker nodes the same way, I'm not sure whether anything on them is smart enough to know about master02 & master03; they might all be using only master01 too.
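
For illustration, here's roughly what that difference looks like in a kubeconfig (the hostnames and the cluster name below are made up). Whatever the `server` field points at is the only endpoint that kubeconfig will ever use:

```yaml
# Hypothetical kubeconfig excerpt - only the cluster "server" endpoint matters here.
apiVersion: v1
kind: Config
clusters:
- name: imported-cluster
  cluster:
    certificate-authority-data: <base64 CA bundle>
    # Tied to a single control-plane node: if master01 goes down,
    # anything using this kubeconfig loses the API server.
    # server: https://master01.example.com:6443
    #
    # Pointed at a DNS name or load balancer VIP that fronts
    # master01, master02 and master03: survives the loss of any one node.
    server: https://k8s-api.example.com:6443
```

A quick way to see which endpoint you're actually using is `kubectl config view`, which prints the `server` URL for each cluster in your current kubeconfig.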