Rancher Ingress controller

The Ingress controller creates a new LB container for each Ingress I define.

Not very practical in my opinion, as those containers have to run on different nodes and/or different ports to avoid collisions.

The design I ended up with is one dedicated, host-based Ingress, scaled to two replicas and running on a pair of nodes tagged with a label. This way I only have to expose that one port and those nodes to the internet.
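For reference, a setup like this could be sketched roughly as follows. The label key, names, and image are hypothetical (not from this thread), and the API version shown is the current one; the idea is just a Deployment pinned to labeled edge nodes, binding the host port directly:

```yaml
# Hypothetical sketch: an ingress/LB Deployment pinned to two labeled edge nodes.
# The label key "role: ingress", the names, and the image are assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: edge-ingress
spec:
  replicas: 2
  selector:
    matchLabels:
      app: edge-ingress
  template:
    metadata:
      labels:
        app: edge-ingress
    spec:
      nodeSelector:
        role: ingress                 # only nodes carrying this label run the LB
      containers:
        - name: lb
          image: your-ingress-controller-image   # placeholder
          ports:
            - containerPort: 80
              hostPort: 80            # host-based: binds port 80 on the node itself
```

Only the two labeled nodes then need to be reachable from the internet on that port.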

But now I’m stuck with one giant Ingress definition with all my services listed in a single YAML file. This is rather ugly, as I’d prefer creating small Ingress YAMLs per service, in each service’s git repo.
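A per-service Ingress kept in that service’s own repo could look like the sketch below (host, names, and port are placeholders; on older clusters the API version was `extensions/v1beta1`). Most Kubernetes ingress controllers merge all the Ingress resources they watch into a single routing table, so splitting the one giant file this way doesn’t change the runtime behavior:

```yaml
# Hypothetical per-service Ingress, living in the service's own git repo.
# Host, service name, and port are placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-service
spec:
  rules:
    - host: my-service.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service
                port:
                  number: 80
```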

What is the best practice here?

To be honest I was quite surprised by this behavior, coming from OpenShift, where a dedicated Router component runs on the “infra” nodes and Route YAMLs behave exactly as I want them to.

I could try the k8s Ingress controller, but first I would like to get Rancher’s / others’ take on this.

I am facing a similar problem. Any luck with this?

Hey Paulo, I switched to the nginx ingress controller that Kubernetes suggests: https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-controllers - the GitHub link in that paragraph. I’m happy with it.

I wrote a bit longer about it here: https://laszlo.cloud/Rancher-Kubernetes-routing

Hi Laszlo,

Thanks, I was already using the nginx ingress controller and your post had exactly the missing piece (the allow.http=false setting). I was trying to find a pure k8s solution, and this annotation just works! =o)
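For anyone landing here later: settings like this are applied as annotations in the Ingress metadata. A rough sketch below; the exact annotation key shown is an assumption (the thread only calls it allow.http=false), so check the linked post and your controller’s documentation for the spelling your controller actually honors:

```yaml
# Hypothetical sketch. The annotation key is an assumption; the thread
# refers to the setting only as "allow.http=false".
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-service
  annotations:
    ingress.kubernetes.io/allow-http: "false"   # disable plain-HTTP handling
spec:
  rules:
    - host: my-service.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service
                port:
                  number: 80
```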
