Hello,
We have a Rancher installation on v2.4.15, currently hosting one Custom RKE cluster on Kubernetes 1.17.6 running on Docker 19.03.12. The RKE cluster is composed of 3 master nodes and 10 workers.
We made a mistake when we initially stood up the cluster: we didn’t provision a load balancer for Rancher to communicate with the 3 RKE masters seamlessly, so the Rancher server only communicates with one master instead of balancing requests across the three.
We noticed this during a recent outage of one of our master servers: Rancher kept sending kube-api requests to that one server’s IP, ignoring the existence of the other two, and in the UI the cluster was unmanageable. We want to avoid this by forcing Rancher to go through a load balancer, or at the very least by making it aware of all the master nodes.
After diving into the documentation, we have been struggling and unable to find a proper recommendation on how to reconfigure Rancher to communicate with the masters via the load balancer’s IP…
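For reference, what we have in mind is a plain TCP load balancer in front of the kube-apiserver on the three masters. A minimal sketch of the idea, assuming HAProxy and placeholder IPs (not our real addresses):

```
# Hypothetical HAProxy front end: forward TCP 6443 to the three masters.
frontend kube-apiserver
    bind *:6443
    mode tcp
    option tcplog
    default_backend kube-masters

backend kube-masters
    mode tcp
    balance roundrobin
    option tcp-check
    server master-1 10.0.0.11:6443 check  # placeholder IPs
    server master-2 10.0.0.12:6443 check
    server master-3 10.0.0.13:6443 check
```

We assume we would also need the load balancer’s address in the kube-apiserver certificate, which RKE appears to support via `authentication.sans` in the cluster YAML; what we can’t figure out is how to make Rancher itself switch to that address.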
Could someone point us in the right direction on how to achieve this?
Note: we also posted this question on your Slack channel, but decided to post here as well since it might be more appropriate.
I’m interested in this conversation as well, partly because I’ve noticed that when I reboot one of the masters, I can’t get to the GUI until that master is back online.
We just had another problem with one of our masters and decided to redeploy it elsewhere (using a different IP).
As it happens, the offending master is the one Rancher is using to communicate with the cluster, which leaves us with a rather big problem: Rancher is apparently communicating with the cluster only through that master, despite the cluster having 3 masters that are all correctly displayed on the cluster page.
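If it helps to reproduce what we’re seeing, this is how we believe the endpoint Rancher has recorded can be inspected (run against the Rancher local cluster; `c-xxxxx` is a placeholder cluster ID, and we’re assuming `status.apiEndpoint` is the relevant field on the management cluster object):

```
# On the Rancher local cluster: show the API endpoint recorded for the
# downstream cluster (replace c-xxxxx with the actual cluster ID).
kubectl get clusters.management.cattle.io c-xxxxx \
  -o jsonpath='{.status.apiEndpoint}'
```

If that field indeed holds the single master’s IP, it would match the behaviour we’re seeing.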
We now have a catch-22 on our hands, because we really need to ensure that Rancher will use the other masters before we remove this one, at the risk of otherwise rendering the cluster unreachable…
Rancher team, could you please shed some light on how we can reconfigure the cluster to ensure that Rancher communicates with all of the cluster’s masters, or with a given load balancer IP?