Rancher HA concept questions

Hi,

I want to set up Rancher in HA, and of course the Kubernetes cluster also needs to be HA.

  1. As I understand it, the HA concept says to set up a Kubernetes cluster with RKE and run Rancher as a Deployment / StatefulSet there, so Kubernetes (RKE) takes care of the HA.
    What is the intended paradigm? Do I also use this RKE cluster, where Rancher runs, for my workloads, or do I need to provision a second Kubernetes cluster via Rancher that runs my workloads?

  2. How do I deal with having no external load balancer available?
The first picture here shows that it is recommended to run HA with an external load balancer:
https://rancher.com/docs/rancher/v2.x/en/installation/ha/

In previous installations I used MetalLB as the load balancer. I am unfamiliar with Helm; if I understood correctly, Rancher is installed via Helm in the HA manual provided on the Rancher website.
What is the best practice for using MetalLB as the load balancer and getting rid of the nginx-ingress pods that use the host network? Will that solution survive the next Kubernetes or Rancher update?

Thanks, Andreas

  1. The cluster that RKE creates is only to be used for Rancher Server, to satisfy HA. You should not run any workloads there; its only purpose is to run Rancher. Once that cluster is up, you can access the Rancher GUI and provision or import other clusters, and those clusters will run your workloads.

  2. To access Rancher in HA mode, you will need a load balancer in front of the HA nodes that load-balances the Rancher URL. You could do DNS round robin, but that is not truly HA. You won't want to install MetalLB into the “local” cluster. You could spin up an HAProxy machine for this purpose; it just needs to be external to the HA cluster. Whatever LB you choose, it will need to support WebSockets, or else the GUI will throw a lot of errors.

I’m also thinking: why not MetalLB?
Steps would be:

  1. Disable the nginx ingress controller in the RKE cluster.yml:
    # Required for external TLS termination with
    # ingress-nginx v0.22+
    ingress:
      provider: none
  2. Deploy MetalLB with its manifest (see the config sketch after this list).

  3. Re-deploy ingress-nginx (for example with Helm), keeping the service type LoadBalancer (the default). You can set values to point to your existing certificate and do other tuning (see the values sketch after this list).
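
For step 2, here is a minimal sketch of a Layer 2 MetalLB configuration, assuming an older MetalLB release that is configured through a ConfigMap (newer releases use IPAddressPool/L2Advertisement CRDs instead); the address range is just a placeholder for your network:

    # metallb-config.yaml (ConfigMap-style config, pre-CRD MetalLB releases)
    apiVersion: v1
    kind: ConfigMap
    metadata:
      namespace: metallb-system
      name: config
    data:
      config: |
        address-pools:
        - name: default
          protocol: layer2
          addresses:
          - 192.168.1.240-192.168.1.250   # placeholder range on your LAN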

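For step 3, a rough sketch of Helm values for the ingress-nginx chart; exact keys can differ between chart versions, and the loadBalancerIP and certificate secret name are placeholders you would adapt:

    # values.yaml for the ingress-nginx chart (sketch)
    controller:
      kind: Deployment          # run as a regular Deployment instead of hostNetwork pods
      hostNetwork: false
      service:
        type: LoadBalancer      # MetalLB hands out an address from its pool
        loadBalancerIP: 192.168.1.240     # optional: pin a specific address (placeholder)
      extraArgs:
        # optional: use an existing TLS secret as the default certificate (placeholder name)
        default-ssl-certificate: "cattle-system/tls-rancher-ingress"
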
I proceed this way for my on-premises Rancher-created clusters, except that the ingress is disabled via the GUI/cluster.yaml. Why would it not work with the local cluster?