Understanding the steps to install Rancher on a multi-node env


You’re somewhat right, but you’re missing an important part here.

Rancher manages Kubernetes clusters. So first of all you run it somewhere; then you create or attach one or more downstream clusters for it to manage, and do your actual work in those downstream clusters.

Rather than reinvent the wheel on high availability, Rancher handles it by running as a Kubernetes app itself, which is why you install Kubernetes first and then use Helm to install Rancher onto it. On the other hand, this kinda balloons your node count, though not all of the nodes need to be that beefy (if you look at their docs, the system specs for the Kubernetes nodes that run Rancher are pretty small).
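For reference, the Helm part usually boils down to something like this (a rough sketch against Rancher's stable chart repo; the hostname and replica count are placeholder values, and the sketch skips the TLS/cert-manager setup that their docs cover):

```
# add the Rancher chart repo and install into the usual cattle-system namespace
helm repo add rancher-stable https://releases.rancher.com/server-charts/stable
helm repo update
kubectl create namespace cattle-system

# hostname and replicas are example values; Rancher also expects cert-manager
# or your own TLS certificates, which this sketch leaves out
helm install rancher rancher-stable/rancher \
  --namespace cattle-system \
  --set hostname=rancher.example.com \
  --set replicas=3
```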

Right now I’m doing a lab environment experimenting for a production use case that would have a proxy in the DMZ and would want high availability everywhere possible. What I ended up doing with my VMs to get a 5-compute-node cluster required 12 VMs: 3 VMs for the Kubernetes cluster that runs Rancher, 3 downstream Kubernetes control plane nodes, 5 downstream Kubernetes workers, plus the external proxy that would sit in the DMZ for the production use case (which, for laziness, I’ve just pointed round robin at the default nginx ingress controller from RKE2 on the 5 downstream worker nodes).
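That lazy proxy bit amounts to something like the following (a hedged sketch of an nginx.conf fragment doing plain TCP round robin to the ingress controller on the workers; the worker IPs are made up for illustration):

```
# TCP pass-through from the DMZ proxy to the RKE2 nginx ingress controller
# on the five downstream workers (example addresses only)
stream {
    upstream downstream_ingress {
        server 10.0.1.21:443;
        server 10.0.1.22:443;
        server 10.0.1.23:443;
        server 10.0.1.24:443;
        server 10.0.1.25:443;
    }
    server {
        listen 443;
        proxy_pass downstream_ingress;
    }
}
```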

Depending on what instructions you read, it may also suggest you need two additional VMs to load balance the three-node Rancher Kubernetes cluster and the three downstream control plane nodes. I worked around that by creating a hostname in DNS with three A records (though during install I needed to narrow it down to just the first node I installed on, then expand back to all three afterwards), so if you can do this in a lab environment where you control your own DNS, it can make life easier. If you do go the additional load balancer route, I don’t remember exactly which ports you need to forward. I think at least 9345 and 6443, but I don’t recall off the top of my head whether others are needed; my one attempt following those instructions fell through for other reasons, and once I found out DNS would work fine I just did that instead.
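The DNS workaround itself is just a round-robin hostname, something like this (a sketch in BIND-style zone syntax; the hostname and addresses are placeholders, and as mentioned you’d start with only the first node’s record during install and add the other two afterwards):

```
; one hostname with three A records for the Rancher/RKE2 server nodes
rancher.lab.example.com.   300   IN   A   10.0.0.11
rancher.lab.example.com.   300   IN   A   10.0.0.12
rancher.lab.example.com.   300   IN   A   10.0.0.13
```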

Good luck.

Hi, sorry for the looooooong delay! :slight_smile:

Hope I’m understanding this right.

If I’m now at the stage where I have a k8s cluster that looks ready, can I move on to the ‘Rancher’ step?

This is my output right now:

```
[root@k8s-master ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                       READY   STATUS      RESTARTS        AGE
default       command-demo                               0/1     Completed   0               3d23h
kube-system   calico-kube-controllers-555bc4b957-d4rxc   1/1     Running     0               3d23h
kube-system   calico-node-kc5mq                          0/1     Running     0               3d23h
kube-system   calico-node-lrjgc                          0/1     Running     0               3d23h
kube-system   calico-node-tb2pc                          0/1     Running     0               3d23h
kube-system   coredns-6d4b75cb6d-979xx                   1/1     Running     0               3d23h
kube-system   coredns-6d4b75cb6d-cn2gv                   1/1     Running     0               3d23h
kube-system   etcd-k8s-master                            1/1     Running     0               3d23h
kube-system   kube-apiserver-k8s-master                  1/1     Running     0               3d23h
kube-system   kube-controller-manager-k8s-master         1/1     Running     0               3d23h
kube-system   kube-proxy-8dzlg                           1/1     Running     0               3d23h
kube-system   kube-proxy-r2k8n                           1/1     Running     0               3d23h
kube-system   kube-proxy-tkkrr                           1/1     Running     0               3d23h
kube-system   kube-scheduler-k8s-master                  1/1     Running     0               3d23h
kube-system   weave-net-2z47s                            2/2     Running     1 (3d23h ago)   3d23h
kube-system   weave-net-r9l5r                            2/2     Running     1 (3d23h ago)   3d23h
kube-system   weave-net-w2pq6                            2/2     Running     1 (3d23h ago)   3d23h
```

Maybe? It seems a little weird that you have both Weave and Calico installed, but I haven’t looked into CNIs/network plugins all that much. If you can use kubectl and get a list of your pods, then helm may well work now.
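If you want a quick sanity check before moving on (just a sketch, and not a fix for the duplicate CNIs), you could look at which network-plugin DaemonSets are actually deployed and confirm helm can talk to the cluster:

```
# list the DaemonSets in kube-system to see which CNIs are really running
kubectl get daemonsets -n kube-system

# confirm helm is installed and can reach the cluster
helm version
helm list --all-namespaces
```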

On the other hand, if you were going for high availability, you’d need more than one control plane node, and that pod listing looks like you made one control plane and two workers. For something like Rancher, rather than doing it this way, I’d suggest doing what they do with their own base Kubernetes distros, which is to make each node both control plane and worker (IIRC you can do this by removing the control-plane NoSchedule taint so regular workloads get scheduled on those nodes too).
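For what that taint removal looks like in practice, here’s a minimal sketch assuming a kubeadm-style cluster; the node name is a placeholder:

```
# remove the standard control-plane NoSchedule taint so the node also runs
# regular workloads (the trailing "-" means remove the taint)
kubectl taint nodes k8s-master node-role.kubernetes.io/control-plane:NoSchedule-

# older clusters may use the legacy "master" taint key instead
kubectl taint nodes k8s-master node-role.kubernetes.io/master:NoSchedule-
```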

Oh, great to know.
Thanks for the info!