What on-prem load balancing / virtual-ip implementation did you use?

Hi @etlweather,

I have almost the same doubts you describe in your post.

Let me describe my scenario and maybe we can help each other.

I have an OpenStack environment with 3 Kubernetes clusters, all managed by Rancher 2 (2.0.4). The first one, with 3 nodes, is the Rancher cluster; it is intended to run only cattle-system and the Rancher stuff. The second is the dev cluster, with 3 master/etcd nodes and 4 workers, and the third has exactly the same config and size as the second, except for its purpose: it is for QA.

All clusters were created using the RKE tool, which has an option to label the nodes that run the ingress controllers. Since Rancher currently only supports NGINX as the ingress controller, we decided to keep the default option (on my old 1.6 cluster I use Traefik). So at this point we run the NGINX ingress controller on all nodes, and we have an upstream NGINX acting as a reverse proxy to the clusters: UI, kubectl, and workloads all go through the upstream NGINX to the downstream NGINX ingress controllers.
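For reference, the RKE side looks roughly like this in cluster.yml (a minimal sketch, not my actual file; the addresses and the `app: ingress` label are placeholders):

```yaml
nodes:
  - address: 10.0.0.11
    role: [controlplane, etcd]
  - address: 10.0.0.21
    role: [worker]
    labels:
      app: ingress        # label the nodes that should run the ingress controller

ingress:
  provider: nginx         # the only provider Rancher supports today
  node_selector:
    app: ingress          # deploy the controller only on the labeled nodes
```

We left the node_selector matching all workers, which is why the controller ends up on every node.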

My upstream NGINX is itself a small cluster, using keepalived to fail the service over between two nodes behind a floating virtual IP.
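In case it helps, a minimal keepalived.conf sketch of that failover (interface name, router ID, password, and VIP here are placeholders; the second node runs the same config with `state BACKUP` and a lower priority):

```
vrrp_script chk_nginx {
    script "pidof nginx"    # node is only healthy while NGINX is running
    interval 2
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 101
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass secret
    }
    track_script {
        chk_nginx
    }
    virtual_ipaddress {
        10.0.0.100/24       # floating VIP that the DNS records point at
    }
}
```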

I have a wildcard DNS record, *.dev.domain.com, pointing to the upstream NGINX, which has a config to proxy *.dev.domain.com to the dev cluster. The same applies to the *.qa.domain.com DNS record and its NGINX config.
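The relevant part of the upstream NGINX config looks more or less like this (a plain-HTTP sketch with placeholder worker IPs; the *.qa.domain.com block is the same, just pointing at the QA workers):

```
upstream dev_ingress {
    # the 4 dev workers running the NGINX ingress controller
    server 10.0.1.21:80;
    server 10.0.1.22:80;
    server 10.0.1.23:80;
    server 10.0.1.24:80;
}

server {
    listen 80;
    server_name *.dev.domain.com;

    location / {
        proxy_pass http://dev_ingress;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```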

I posted my config here a few days ago:

I’m very excited about Envoy and the projects built on it, but at the moment I can’t figure out how to make it work for me. When Rancher supports Traefik as an ingress controller, maybe we will switch to it.

So these were our decisions. I don’t have a list of trade-offs yet, but I know one will appear as soon as users start using it.

Was this helpful?

Regards,
