About the sizing of controlplane and etcd nodes

Hi Support,

First of all, I would like to thank you for your great and impressive work.

We are planning to set up multiple clusters with Rancher 2.
For this, we need to set up etcd and controlplane nodes in each of them. The 1.6 documentation gives hints about the sizing of those nodes (Kubernetes in Rancher), e.g. >= 1 CPU & 1.5 GB RAM for an etcd node.

Are these still valid values for 2.x clusters?

Regards
Chris

All requirements are listed in the 2.x docs here: https://rancher.com/docs/rancher/v2.x/en/installation/requirements/


Hi superseb,

thank you for your reply.
I actually found that page, but hoped I was getting this wrong. That means I have the same requirements (especially in terms of hardware) for

  • the Rancher Server itself,
  • each etcd node,
  • and each node in the control plane.

So if I’m getting this right, each API server in each cluster needs the same compute power as the Rancher server itself?
In comparison, the etcd node in a Rancher 1.6 installation needed far less compute power:
Old: 1 CPU / 1.5 GB RAM vs.
New: 4 CPU / 16 GB RAM

If this is really correct, can someone explain to me why there is such a big difference between 1.6 and 2.1?

I’m also interested in this question.

The documentation says:

Whether you’re configuring Rancher to run in a single-node or high-availability setup, each node running Rancher Server must meet the following requirements.

If I understand this correctly, the etcd or control plane nodes do not run the Rancher server. The documentation is therefore very precise about which type of node the requirements apply to: nodes running Rancher Server.

So the question is, what requirements are needed for the etcd and control plane nodes?

Best regards,
Christian

It all depends on how you assigned roles to your nodes.

Rancher server runs as a deployment in Kubernetes. Deployments run on worker nodes. If you have 3 nodes that all have the controlplane, etcd and worker roles, then Rancher server runs on the same nodes as etcd. If you separated out your roles so that etcd and controlplane are on different nodes than your worker nodes, then Rancher server would NOT run on the same nodes as etcd.
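
To make this concrete, here is a minimal sketch of how the node roles could be laid out in an RKE cluster.yml when etcd/controlplane are separated from the workers. The addresses and the SSH user are just placeholders for illustration, adjust them to your environment:

  nodes:
    # etcd + controlplane only, no workloads are scheduled here
    - address: 10.0.0.1        # placeholder IP
      user: rancher            # placeholder SSH user
      role: [controlplane, etcd]
    - address: 10.0.0.2
      user: rancher
      role: [controlplane, etcd]
    - address: 10.0.0.3
      user: rancher
      role: [controlplane, etcd]
    # workers run the deployments (Rancher server included, if this is the Rancher cluster)
    - address: 10.0.0.10
      user: rancher
      role: [worker]
    - address: 10.0.0.11
      user: rancher
      role: [worker]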

If you separated them out, then you would probably want to follow the Kubernetes hardware configuration guide for etcd sizing.

It should also be noted that, according to Rancher, the Kubernetes cluster that runs Rancher server should ONLY be used for running Rancher, and not for your workloads.
https://rancher.com/docs/rancher/v2.x/en/installation/ha/

Important:
For the best performance, we recommend this Kubernetes cluster to be dedicated only to run Rancher. After the Kubernetes cluster to run Rancher is setup, you can create or import clusters for running your workloads.

So what I believe is the recommended way, and is also what we have done, is to create a 3-node cluster where all nodes run all 3 roles. This cluster runs our Rancher server instance and is sized according to Rancher’s recommendation. We then created other Kubernetes clusters that we imported into Rancher, and we run all of our workloads in those clusters. That way, you can size your workload clusters however you see fit, based on your requirements.
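
For reference, a minimal sketch of the cluster.yml for such a dedicated 3-node Rancher cluster could look like this (again, addresses and user are placeholders):

  nodes:
    # all three roles on every node; this cluster only runs Rancher server
    - address: 10.0.1.1        # placeholder IP
      user: rancher            # placeholder SSH user
      role: [controlplane, etcd, worker]
    - address: 10.0.1.2
      user: rancher
      role: [controlplane, etcd, worker]
    - address: 10.0.1.3
      user: rancher
      role: [controlplane, etcd, worker]

The workload clusters are then created or imported separately and can be sized independently of this one.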