Hi Everyone. I hope you can give me some pointers on how to achieve what I am trying to do…
My system is completely air-gapped (no Internet at all). I have a working 3-node RKE v1.2.9 cluster across 3 physical servers running CentOS 7.8.
I want to implement external load balancing so that external clients outside the cluster can navigate to a URL (example.com on port 80) and be routed to, say, my nginx deployment running on the RKE cluster that hosts this site.
Currently, with my Rancher 1.6 setup, we use keepalived and Traefik: in the docker-compose file I use a label of “traefik.alias: example”, and my DNS appends the domain name. My DNS entry is “example CNAME keepalived-VIP”, where keepalived-VIP points to my keepalived Virtual IP address. When I navigate to example.com from a browser, it works.
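For reference, the relevant bit of the docker-compose file looks roughly like this (the service name and image are placeholders; only the traefik.alias label is the real one from my setup):

```yaml
# Rough sketch of the current Rancher 1.6 / Cattle setup.
version: '2'
services:
  web:
    image: nginx:latest            # placeholder image
    labels:
      traefik.alias: example       # Traefik routes example.<dns-domain> to this service
```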
Any suggestions on how to achieve this with Kubernetes/RKE would be great, as I have been messing with this for a while. I have tried MetalLB with a range of IPs, and although it works and gives me an IP address that I can plug into my DNS, there is no guarantee I will get the same IP address from MetalLB if other services have been shut down in the meantime: if I delete my service and re-create it, I just get the first available IP in the range.
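For reference, my MetalLB setup is roughly the standard ConfigMap-based layer-2 configuration with an address range (the addresses below are made up):

```yaml
# Rough sketch of the MetalLB config I tried (ConfigMap-based configuration).
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250   # made-up range for illustration
```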
I'm not sure if I understand your problem correctly, but I'll try to answer nevertheless.
So if you have a pool of IPs (or only one) through e.g. MetalLB, you need some kind of ingress controller like ingress-nginx. That controller creates a service of type LoadBalancer and grabs an IP address from your pool.
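As a rough sketch (namespace, names and selector labels are assumptions based on a standard ingress-nginx install; the IP is made up), the controller's service could look like this:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  # Optional: pin one address from the MetalLB pool so the IP behind
  # your DNS record never changes, even if the service is re-created.
  loadBalancerIP: 192.168.1.240
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/component: controller
  ports:
  - name: http
    port: 80
    targetPort: http
  - name: https
    port: 443
    targetPort: https
```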
Current state: the ingress-nginx service has a fixed IP from your pool.
Now you create a deployment and a service. The key is not to use a service of type LoadBalancer if you want the same IP (and DNS name) for that service all the time. Instead you create an Ingress resource where you define your host. If you only have one nginx ingress (class), the Ingress for that deployment will default to it, so you don't have to do anything extra.
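A minimal sketch of what that could look like (names, image and host are placeholders; pull the image from your internal registry since the cluster is air-gapped):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-site
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example-site
  template:
    metadata:
      labels:
        app: example-site
    spec:
      containers:
      - name: nginx
        image: nginx:1.21          # placeholder; use your internal registry
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: example-site
spec:
  # Plain ClusterIP: this service does not consume a MetalLB address.
  selector:
    app: example-site
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-site
spec:
  rules:
  - host: example.com              # point this name at the fixed LB IP in your DNS
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-site
            port:
              number: 80
```

With something like this in place, your DNS only needs a single record for example.com pointing at the ingress controller's fixed IP, no matter how many deployments sit behind it.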
You can of course have more than one nginx ingress service of type LoadBalancer with different IPs, but then you need to specify the “class” of the Ingress for your deployment.
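In that case the only change on the Ingress side is the class, e.g. (the class name nginx-external is an assumption; older controller versions use the kubernetes.io/ingress.class annotation instead of the field):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-site
  # On older controllers, use the legacy annotation instead of ingressClassName:
  # annotations:
  #   kubernetes.io/ingress.class: nginx-external
spec:
  ingressClassName: nginx-external   # assumed name of the second controller's class
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-site
            port:
              number: 80
```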
Final state: the nginx ingress controller receives traffic on the fixed IP and decides where to pass the request based on the host name the client tries to access. If there is a deployment with a correctly defined service and an Ingress listening for that host, the controller will direct the traffic to your service.
I also came across this one here, which explains it with code examples.
Hi @avostephan
Thanks very much for your reply; the references you provide will be useful for progressing this further. You might have given me a way through, so thanks - greatly appreciated!