How are folks approaching HA with k8s clusters in production?

Yes, the API server runs on all 3 nodes. The scheduler and controller-manager are effectively active on only one node each at any given time; the other instances sit idle thanks to leader election.
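If you want to confirm which node currently holds each lock, recent Kubernetes versions expose the leader election state as Lease objects in kube-system (older versions used Endpoints annotations instead), so a quick check looks something like:

# shows which node currently holds the scheduler / controller-manager lock
kubectl -n kube-system get lease kube-scheduler -o jsonpath='{.spec.holderIdentity}'
kubectl -n kube-system get lease kube-controller-manager -o jsonpath='{.spec.holderIdentity}'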

Regarding the load balancer, yes, you would need a load balancer to send traffic to all 3 nodes for any Ingress traffic to the cluster. We use an HAProxy that points to all of the nodes in each cluster on ports 80 and 443, since RKE/Rancher automatically deploys an ingress controller listening on those ports on all of the nodes. See Ingress and Failover for my example of how to point to your cluster nodes. You would create a frontend and backends for your main Rancher cluster, AND a set for each additional cluster you import into Rancher. I use keepalived to manage VIPs that run on the HAProxy nodes.
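Roughly, one of those frontend/backend pairs looks something like the sketch below (the IPs, names, and node addresses are placeholders; it uses TCP passthrough on 443 so the ingress controller on the nodes handles TLS, and you'd add a similar pair for port 80 and for each additional cluster):

# TCP passthrough for HTTPS ingress traffic to the dev1 cluster nodes
frontend dev1_https
    bind 192.168.10.50:443
    mode tcp
    default_backend dev1_nodes_https

backend dev1_nodes_https
    mode tcp
    balance roundrobin
    option tcp-check
    # one line per cluster node running the ingress controller
    server node1 192.168.10.11:443 check
    server node2 192.168.10.12:443 check
    server node3 192.168.10.13:443 check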

For managing the clusters with kubectl, you don't point your kube config directly at the cluster. Once the cluster is imported into Rancher, you can create a cluster-specific kube config file from the Rancher UI, but the server endpoint is your Rancher cluster, not the actual cluster where your workloads run.

For example, the kube config file for one of my clusters has the following:

clusters:
- name: "dev1"
  cluster:
    server: "https://rancher.example.com/k8s/clusters/c-st3pu"

And a second cluster has a kube config like:

clusters:
- name: "cicd"
  cluster:
    server: "https://rancher.example.com/k8s/clusters/c-f1gk8"

The rancher.example.com hostname resolves to the VIP for my main Rancher cluster on the HAProxy/Keepalived hosts, not the actual cluster nodes.
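For completeness, the keepalived side is just a standard VRRP instance for that VIP; something like the sketch below on the primary HAProxy node, with state BACKUP and a lower priority on the peer (the interface, router ID, password, and VIP are placeholders):

# VRRP instance that floats the HAProxy VIP between the load-balancer hosts
vrrp_instance rancher_vip {
    state MASTER          # BACKUP on the peer node
    interface eth0
    virtual_router_id 51
    priority 150          # lower on the peer node
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass changeme
    }
    virtual_ipaddress {
        192.168.10.50/24  # the VIP that rancher.example.com resolves to
    }
}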
