Rookie questions about Rancher 2.X

Hello,

I am planning to install a Rancher 2.X cluster on-premise; you can see the architecture in the file attached here.

Could you please reply to my questions below?

  1. Is the architecture correct and recommended?
  2. Does each worker node have to have a cluster.yml that includes information about all nodes?
  3. If the answer to question 2 is yes, then whenever a node is added or removed, does the cluster.yml on every node have to be updated?

Thank you.

If you are going to have “hundreds of worker nodes” then you may not want to make the 3 HA nodes also be worker nodes, so I would remove that role from those nodes.

I presume you are going to use RKE to create the cluster. If so, then there is one cluster.yml that includes information about all of the nodes.

If you want to add another worker node, then you can run the rke command with the parameter --update-only and it won’t touch the controlplane nodes, and will only update the workers. If you are making a change that affects the controlplane (binds, version, etc) then you run rke up and it will make changes to all of the applicable nodes.
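As a rough sketch of that workflow (assuming cluster.yml is in your current directory and SSH access to the new node is already set up):

# Add the new worker to the nodes: list in cluster.yml, then:
rke up --config cluster.yml --update-only

# For a change that affects the controlplane (version, etcd options, etc.):
rke up --config cluster.yml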

Your single cluster.yml file would look like this:

nodes:
  - address: 10.0.1.1
    user: admin
    role: [controlplane,etcd]
  - address: 10.0.1.2
    user: admin
    role: [controlplane,etcd]
  - address: 10.0.1.3
    user: admin
    role: [controlplane,etcd]
  - address: 10.0.1.X
    user: admin
    role: [worker]

You can see all possible config options at https://rancher.com/docs/rke/v0.1.x/en/example-yamls/

Thank you for your reply.

Yes, you are correct. The three nodes are only for the master role. I have updated the architecture.

I have one more question.

Does RKE need to be installed only on the controlplane nodes, not the worker nodes? Also, does cluster.yml exist only on the controlplane nodes?

RKE is not something you install in the cluster. It is just a binary program that you run on your laptop, or some computer separate from the cluster. The cluster.yml is also located on that same computer, although you should also keep it in a Git repo or source control system.

You should set up ssh keys between your computer and the hosts so that RKE doesn’t need to ask for a password every time. When you run rke up, your computer SSH’es to each host, and runs the docker commands to start the necessary containers. It also will save a kubectl config file in the same directory that you can put into your ~/.kube directory so that you can use kubectl to access the new cluster once it is ready.
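A rough sketch of that flow, reusing the addresses and user from the earlier example (the generated kubeconfig name depends on the name of your cluster.yml; for cluster.yml it is kube_config_cluster.yml):

# One-time: copy your public key to each host listed in cluster.yml
ssh-copy-id admin@10.0.1.1    # repeat for every node

# Provision / reconcile the cluster from your workstation
rke up --config cluster.yml

# Use the generated kubeconfig to reach the new cluster
cp kube_config_cluster.yml ~/.kube/config
kubectl get nodes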

Another thing I should point out. I presume you are using RKE to create a Kubernetes cluster to install Rancher 2.0 into (RKE is not the same thing as Rancher). If so, then the Rancher docs say that the cluster that you install Rancher into should ONLY be used to run Rancher, and not your workloads. So what you should do is create a 3-node cluster using RKE, where all 3 nodes have all roles configured, as in the sketch below. Once that cluster is up, you can install Rancher into that cluster.
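For that Rancher management cluster, the cluster.yml can be as small as this (a sketch only; the addresses and user are placeholders):

# Rancher management cluster - placeholder addresses and user
nodes:
  - address: 10.0.0.1
    user: admin
    role: [controlplane,worker,etcd]
  - address: 10.0.0.2
    user: admin
    role: [controlplane,worker,etcd]
  - address: 10.0.0.3
    user: admin
    role: [controlplane,worker,etcd]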

THEN, you can create the “workload” cluster that you diagrammed, and once that second cluster is running, you do an IMPORT into Rancher, and then Rancher will be able to manage the second cluster.

https://rancher.com/docs/rancher/v2.x/en/installation/ha/
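The install itself is Helm-based; a rough sketch from the linked HA docs looks like this (rancher.example.com is a placeholder hostname, cert-manager is assumed to be installed first, and the exact flags depend on your Helm and Rancher chart versions):

# Add the Rancher chart repo and install into the management cluster
helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
kubectl create namespace cattle-system
helm install rancher rancher-latest/rancher \
  --namespace cattle-system \
  --set hostname=rancher.example.com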

Really thank you.

I will try it. 🙂

@shubbard343
Would you recommend placing all the cluster/workload requirements in one place, like here: https://rancher.com/docs/rke/v0.1.x/en/example-yamls/#full-cluster-yml-example
Then use rke to build the cluster.
Then run rke with the --update-only parameter to add extra worker nodes as needed.

You can use the file below … change your IPs and SSH key path accordingly.

nodes:
  - address: "180.179.114.782"
    port: "22"
    internal_address: "195.134.187.125"
    role:
      - controlplane
      - worker
      - etcd
    user: centos
    docker_socket: /var/run/docker.sock
    ssh_key_path: /root/rke/sandboxKey.pem
  - address: "180.179.114.438"
    port: "22"
    internal_address: "195.134.187.146"
    role:
      - controlplane
      - worker
      - etcd
    user: centos
    docker_socket: /var/run/docker.sock
    ssh_key_path: /root/rke/sandboxKey.pem

services:
  etcd:
    snapshot: true
    retention: "6h"
    creation: "24h"

# Required for external TLS termination with ingress-nginx v0.22+
ingress:
  provider: nginx
  options:
    use-forwarded-headers: "true"