What happens to the clusters if the Rancher instance goes down?

I’m a bit confused about what the Rancher instance does compared to what the k8s master node does.

Is the Rancher node used constantly, or is it really only used for monitoring and/or when managing the clusters (e.g. scaling)?

Rancher is essentially a wrapper around the Kubernetes API. When you create or edit a workload, Rancher translates it into Kubernetes API calls and sends them to Kubernetes. Rancher does not do any of the management (scaling/scheduling/etc) itself. That is all done by Kubernetes.

If you have Rancher managing multiple clusters and your Rancher instance goes down, you only lose the ability to see and update the workloads through Rancher; they will all continue to run while Rancher is unavailable.


Perfect, thank you.

So the Rancher instance doesn’t necessarily have to be HA?

HA here means that the Rancher server instance is managed by Kubernetes. Rancher doesn’t need to be HA, but you will want HA if you plan to use it in production.

For example, if you have a 3-node Kubernetes cluster with Rancher installed, then Rancher runs as a deployment in the Kubernetes cluster. It runs multiple instances of the Rancher server, so if one of the nodes dies, there are still 2 more instances running so you don’t notice any downtime (in terms of accessing the Rancher UI, not your real workloads). In non-HA, if the host you are running Rancher on dies, then you will need to manually recover it.
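If you want to see this layout for yourself, here is a minimal check, assuming kubectl is pointed at the cluster Rancher is installed on (the `cattle-system` namespace and the `rancher` deployment name come from the standard Helm-chart install):

```shell
# Guarded so this is a no-op on machines without kubectl installed.
if command -v kubectl >/dev/null 2>&1; then
  # In an HA install this should show READY 3/3, one replica per node:
  kubectl -n cattle-system get deployment rancher
fi
```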

Running in HA also means that Rancher uses the etcd from Kubernetes, meaning that Rancher’s database is also HA.

I have a question about single-node Rancher.

If Rancher goes down, does that mean using kubectl to manage the k8s cluster fails?

Because in the kube config, the server URL points to the Rancher server URL, not the k8s master URL.

Is there a way to manage the k8s cluster by pointing directly at the master API URL, without Rancher translating the calls?

Yes, presuming that your cluster has the credentials already set up. If you used RKE to create the cluster, it will have created a file, kube_config_cluster.yml, in the same directory you created the cluster from. That config will allow you to access the cluster directly, not through Rancher.
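As a sketch (the server address and file contents below are placeholders, not your real values), the RKE-generated file points straight at the Kubernetes API:

```shell
# Illustrative stand-in for the file RKE writes next to cluster.yml:
cat > kube_config_cluster.yml <<'EOF'
apiVersion: v1
kind: Config
clusters:
- name: local
  cluster:
    server: "https://192.168.11.21:6443"   # direct API endpoint, not Rancher
EOF

# Point kubectl at it to bypass Rancher entirely:
export KUBECONFIG=$PWD/kube_config_cluster.yml
# kubectl get nodes   # works even while Rancher is down
```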

Do you know how we could use kubectl directly against the k8s cluster if the Rancher UI instance is down?
The only file we have is the kubectl config.

The kubectl config file is what tells the kubectl command how to authenticate and which host to talk to. Changing the kubectl config changes how kubectl talks to the cluster (or which cluster, if you have more than one).
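For example, when a config file holds more than one cluster, kubectl’s built-in context subcommands let you inspect and switch between them (the context name below is a placeholder):

```shell
# Guarded so this is a no-op on machines without kubectl installed.
if command -v kubectl >/dev/null 2>&1; then
  kubectl config get-contexts             # list clusters/users in the config
  kubectl config use-context my-cluster   # point kubectl at a different one
fi
```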

You need to have the kubectl config file that points directly to the Kubernetes cluster. If you currently do not have access to the cluster, it may be hard to get this. If you created the cluster with RKE, then use the kube_config_cluster.yml file as your kubectl config to talk directly to the cluster.

The way you can (likely) tell the difference is that the Rancher kubectl file will have a server line that looks like https://rancher.yourdomain.com/k8s/clusters/c-cb2ua, while the direct file will look something like server: "https://192.168.11.21:6443".
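A quick way to check which kind of file you have is to print its server line (the sample file written below is just a stand-in so the command has something to run against):

```shell
# Create a throwaway sample config for illustration only:
cat > sample_kubeconfig.yml <<'EOF'
clusters:
- name: local
  cluster:
    server: "https://192.168.11.21:6443"
EOF

# Print every server line; a Rancher-proxied config would show a
# .../k8s/clusters/c-xxxxx URL here instead of a host:6443 address:
grep 'server:' sample_kubeconfig.yml
```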

If your Kubernetes was provisioned via a cloud provider, then you’ll need to follow the steps from them to get a kube config file for the cluster.

Fast way to get HELM and/or KUBECTL working with Rancher…

To get HELM and KUBECTL working with Rancher…

Install KUBECTL and HELM

  • mkdir ~/.kube
  • cd ~/.kube
  • nano config

Go into Rancher and copy the contents of the “Kubeconfig File”

Paste it into the open “config” file you have in nano

Save the file

  • cd ~

Run kubectl and/or HELM and they will work
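The steps above can be sketched as a single snippet (the kubeconfig body is a placeholder; paste the real contents copied from the Rancher UI):

```shell
mkdir -p ~/.kube

# Paste the contents of Rancher's "Kubeconfig File" between the EOF markers:
cat > ~/.kube/config <<'EOF'
# ...paste Kubeconfig File contents here...
EOF

# The file contains credentials, so keep it readable only by you:
chmod 600 ~/.kube/config

# kubectl and helm both read ~/.kube/config by default:
# kubectl get nodes
# helm ls
```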
