We have a Rancher installation on version v2.4.15, currently hosting one Custom RKE cluster on Kubernetes 1.17.6 running on Docker 19.3.12. The RKE cluster is composed of 10 worker and 3 master nodes.
We made a mistake when we initially stood up the cluster: we didn't provision a load balancer for Rancher to communicate with the 3 RKE masters, so the Rancher server talks to only one master instead of balancing requests across all three.
We noticed this during a recent outage of one of our master servers: Rancher kept sending kube-api requests to that one server's IP, ignoring the other two, and the cluster became unmanageable from the UI. We want to avoid this in the future by forcing Rancher to go through a load balancer, or at least by making it aware of all three master nodes.
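For context, what we have in mind is a simple Layer-4 load balancer in front of the three kube-apiservers, roughly like the following sketch (HAProxy syntax; the hostnames and IPs are placeholders, not our actual setup):

```
# Hypothetical HAProxy TCP passthrough for kube-apiserver.
# master1-3 addresses below are placeholders.
frontend kube_api
    bind *:6443
    mode tcp
    option tcplog
    default_backend kube_api_masters

backend kube_api_masters
    mode tcp
    balance roundrobin
    option tcp-check
    server master1 10.0.0.11:6443 check
    server master2 10.0.0.12:6443 check
    server master3 10.0.0.13:6443 check
```

The part we are missing is how to make Rancher itself use an address like this instead of a single master's IP.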
After diving into the documentation, we've been struggling to find a proper recommendation on how to reconfigure Rancher to communicate with the masters via the load balancer's IP.
Could someone point us in the right direction on how to achieve this?
Note: We also posted this question on your Slack channel, but we decided to post it here as well since this seems the more appropriate place.