I have a service running a simple webpage which I am hosting behind a Rancher load balancer. The service does not publish its port, so it has to be accessed through the load balancer.
When setting up the load balancer I specify a rule for HTTP, coming in on public port 80, with my domain, routing to local port 80 on the service. This works fine when I visit http://prefix.domain.com.
I then set up a second rule on the load balancer for HTTPS, coming in on port 443, with my domain, routing to local port 80 on the service. When I now navigate to http://prefix.domain.com it still works, but when I go to https://prefix.domain.com I get a timeout. I have the correct SSL certificates hooked up, so I don't know what went wrong. I was under the impression the load balancer would compare the incoming request to the rules from top to bottom in order of precedence.
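For reference, here is roughly what the two rules look like in rancher-compose form (the stack/service name and certificate name below are placeholders for mine):

    version: '2'
    services:
      lb:
        lb_config:
          default_cert: my-cert
          port_rules:
          # plain HTTP straight through to the service
          - source_port: 80
            target_port: 80
            protocol: http
            hostname: prefix.domain.com
            service: mystack/web
          # HTTPS terminated at the LB, then forwarded to port 80
          - source_port: 443
            target_port: 80
            protocol: https
            hostname: prefix.domain.com
            service: mystack/web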
The only workaround I can find is to remove the HTTPS rule and create a second load balancer for that rule.
I specify the domain on the HTTPS rule because I intend to use this load balancer for multiple hostnames, so the correct HTTPS connection needs to be chosen using the domain.
Looks OK to me. Do you see port 443 exposed on the host?
I think it’s hard to check these days, but this should show a suitable entry in the CATTLE_PREROUTING chain of the nat table: sudo iptables --list -t nat -n
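For comparison, a working 443 rule shows up there as a DNAT entry along these lines (the container IP is illustrative, yours will differ):

    Chain CATTLE_PREROUTING (1 references)
    target  prot opt source     destination
    DNAT    tcp  --  0.0.0.0/0  0.0.0.0/0   ADDRTYPE match dst-type LOCAL tcp dpt:443 to:10.42.x.x:443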
I cannot check the entry in CATTLE_PREROUTING right now, but I will later. I would, however, expect the port to be open because, as I mentioned in my post, I set up a second LB on the same host, which would have allowed traffic on 443.
OK, interesting, no issue there then. So, does a wget work locally on the host?
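Something along these lines, run on the host itself, should tell you whether anything answers on 443 at all (the certificate won't match 127.0.0.1, hence --no-check-certificate; the Host header is there so the request matches your hostname rule):

    wget -S -O /dev/null --no-check-certificate \
         --header "Host: prefix.domain.com" https://127.0.0.1/

A timeout here points at the LB itself; a certificate or HTTP error at least means the listener is up.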
It would also be worth getting a shell in the lb container, checking that it is also listening on 443 with ss -ltn, and taking a look at the generated /etc/haproxy/haproxy.cfg file to see if anything is wrong there.
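Roughly (the container name is a placeholder, use whatever name Rancher generated for your LB):

    # get a shell inside the LB container
    docker exec -it r-lb-1 sh
    # expect LISTEN entries on :443 as well as :80
    ss -ltn
    # the 443 frontend's bind line should carry an 'ssl crt ...' option
    grep -B1 -A3 'bind' /etc/haproxy/haproxy.cfg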