HA for master nodes

Hi, I am very new to Rancher and Kubernetes. How can I create an HA setup with multiple master nodes using the Rancher GUI?
Please let me know the procedure.


The HA install for Rancher is documented at https://rancher.com/docs/rancher/v2.x/en/installation/ha/; creating production-ready clusters within Rancher is documented at https://rancher.com/docs/rancher/v2.x/en/cluster-provisioning/production/

Just want to confirm: does a Layer 4 (TCP) load balancer work fine on VMs? I read somewhere that it isn't supported, and that only a Layer 7 load balancer works.

Create your cluster using RKE, and in your cluster.yaml declare three nodes that are control plane. It's that easy.



# then declare all your worker nodes
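A minimal sketch of what that RKE cluster config might look like (addresses, usernames, and hostnames here are placeholders, not values from this thread):

```yaml
# cluster.yml (RKE) - three control plane / etcd nodes plus one worker.
# All addresses and the SSH user are example values.
nodes:
  - address: 10.0.0.1
    user: rancher
    role: [controlplane, etcd]
  - address: 10.0.0.2
    user: rancher
    role: [controlplane, etcd]
  - address: 10.0.0.3
    user: rancher
    role: [controlplane, etcd]
  # then declare all your worker nodes
  - address: 10.0.0.4
    user: rancher
    role: [worker]
```

Running `rke up` against a file like this brings the cluster up with the roles as declared.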

An L4 load balancer definitely works and is recommended. We deploy to AWS and use an NLB.

Hello yeti,
Thanks for the immediate reply.
As of now I have created 2 master nodes and 1 worker node.
Now I have to test certain cases like:

  1. If my first master node goes down, will the second master node be able to take the entire load? Moreover, I need to confirm: is it OK to test this with 2 master nodes, or does it require 3 master nodes?

Please read the documentation linked; https://rancher.com/docs/rancher/v2.x/en/cluster-provisioning/production/#count-of-etcd-nodes clearly states that 2 etcd nodes do not give you fault tolerance.

@ [kamlesh] It is generally good practice to always use an odd number of masters, as the control-plane nodes perform leader elections.

Leader election is the mechanism that guarantees that only one instance of the kube-scheduler — or one instance of the kube-controller-manager — is actively making decisions, while all the other instances are inactive, but ready to take leadership if something happens to the active one.
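You can see this in practice: the active kube-controller-manager holds a leader lease that the standby instances watch (on recent Kubernetes versions this is a `Lease` object in the `coordination.k8s.io` API; older versions stored it as an annotation on an `Endpoints` object, which is what the Rancher troubleshooting doc shows):

```
# Show which node currently holds the kube-controller-manager leader lease
kubectl -n kube-system get lease kube-controller-manager -o yaml

# Same idea for the scheduler
kubectl -n kube-system get lease kube-scheduler -o yaml
```

If you stop the current leader, you should see `holderIdentity` switch to one of the other instances after the lease expires.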

I thought so but all the K8s docs say you only need two Masters… Do you have any reference to validate that the Masters perform leader election?

There you go! https://rancher.com/docs/rancher/v2.x/en/troubleshooting/kubernetes-resources/#kubernetes-controller-manager-leader

https://medium.com/michaelbi-22303/deep-dive-into-kubernetes-simple-leader-election-3712a8be3a99 & others. Just google it

Words are getting conflated here. There is nothing we call a “master” in Rancher, nodes have the “control plane” or “etcd” role.

etcd has leader election and a "master" inside of itself. You should always have an odd number of etcd nodes. There is no reason to ever have an even number except temporarily during a failure, or on the way up (or down) to the next odd number; even is strictly worse than odd. And 2 is the absolute worst number to have, because you still have no fault tolerance (if either goes down you have no quorum) but have introduced twice as many hard drives, power supplies, NICs, DIMMs, CPUs, etc. that could fail.
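The quorum arithmetic behind that advice is easy to work through yourself (a quick illustration, not code from etcd):

```python
# etcd commits a write only when a majority (quorum) of members agree.
def quorum(n: int) -> int:
    """Smallest majority of an n-member etcd cluster."""
    return n // 2 + 1

def fault_tolerance(n: int) -> int:
    """How many members can fail while quorum is still reachable."""
    return n - quorum(n)

for n in range(1, 6):
    print(f"{n} members: quorum={quorum(n)}, tolerates {fault_tolerance(n)} failure(s)")
```

Note that 2 members tolerate zero failures (same as 1), and 4 tolerate only one (same as 3): the even member adds hardware that can fail without adding any fault tolerance.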

Control plane nodes talk to etcd, provide the API, and tell worker nodes to do things. More than one provides redundancy in case one fails (and can sometimes horizontally scale load). You do not need an odd number of them. If you have more than one then you need a load balancer or DNS round-robin to distribute requests from users/nodes to the healthy control plane nodes.
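An L4 balancer in front of the control plane can be very simple, e.g. an nginx `stream` block doing plain TCP pass-through (a sketch only; the addresses and the 6443 kube-apiserver port are assumptions about your environment):

```
# nginx.conf fragment - TCP (L4) pass-through to the kube-apiserver
# on each control plane node; addresses are placeholders.
stream {
    upstream kube_apiserver {
        server 10.0.0.1:6443;
        server 10.0.0.2:6443;
        server 10.0.0.3:6443;
    }
    server {
        listen 6443;
        proxy_pass kube_apiserver;
    }
}
```

Because it forwards raw TCP and never terminates TLS, this works the same on VMs as anywhere else.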


I am a little bit confused regarding the number of control plane and etcd nodes required for HA. Currently I have updated my cluster to 3 master nodes (each having the etcd role and the control plane role) and 1 worker node (which has the worker role only).
Is it right to move forward, or are some ground-level changes still required before starting the installation?

Again, there is nothing called a “master”. To survive the failure of any one node, you want:

  • 3 or 5 nodes with the etcd role
  • 2 or more control plane
  • 2 or more worker

A single node can have one or more of those roles (i.e. 3 nodes with all 3 roles satisfies the above). Combining etcd and control plane together is common.

Can we put all the roles (etcd, control plane, and worker) on the same node? Will they work fine?