Load Balancer not allowing both HTTP and HTTPS rules

I have a service running a simple webpage, which I am hosting behind a Rancher load balancer. The service does not publish its port, so it has to be accessed through the load balancer.

When setting up the load balancer I specify a rule for HTTP: incoming on public port 80, with my domain as the hostname, routing to port 80 on the service. This works fine when I visit http://prefix.domain.com.

I then attempt to set up a second rule on the load balancer for HTTPS: incoming on port 443, same domain, routing to port 80 on the service. When I now navigate to http://prefix.domain.com it still works, but when I go to https://prefix.domain.com I get a timeout. I have the correct SSL certificates hooked up, so I don't know what went wrong. I was under the impression the load balancer compares the incoming request against the rules from top to bottom in order of precedence.

The only workaround I can find is to remove the HTTPS rule and create a second load balancer just for that rule.

Any ideas?

Can you post the compose version of your attempted configuration please?

Also, in your blog piece, I'm unclear on why you check for a specific path when redirecting to HTTPS.

The compose files for the load-balancer stack are as follows:

Docker Compose

version: '2'
services:
  http-lb:
    image: rancher/lb-service-haproxy:v0.6.4
    ports:
    - 80:80/tcp
    - 443:443/tcp
    labels:
      io.rancher.container.agent.role: environmentAdmin
      io.rancher.container.create_agent: 'true'

Rancher Compose

version: '2'
services:
  http-lb:
    scale: 1
    start_on_create: true
    lb_config:
      certs: []
      default_cert: <domain.com>
      port_rules:
      - hostname: blue.<domain.com>
        priority: 2
        protocol: http
        service: Blue/Blue
        source_port: 80
        target_port: 80
      - hostname: blue.<domain.com>
        priority: 3
        protocol: https
        service: Blue/Blue
        source_port: 443
        target_port: 80
    health_check:
      healthy_threshold: 2
      response_timeout: 2000
      port: 42
      unhealthy_threshold: 3
      initializing_timeout: 60000
      interval: 2000
      reinitializing_timeout: 60000

I have a hostname on the HTTPS rule because I intend to use this load balancer for multiple hostnames, so it needs to choose the correct HTTPS backend using the domain.
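
(As a side note, once this works I plan to double-check which certificate the LB presents for a given hostname with an SNI test from the host. A rough sketch, assuming openssl is installed; <host-ip> and blue.<domain.com> are placeholders:)

# request the LB's certificate while sending a specific SNI name
openssl s_client -connect <host-ip>:443 -servername blue.<domain.com> </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -dates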

Thanks

Looks OK to me. Do you see port 443 exposed on the host?

I think it's hard to check these days, but this should show a suitable entry in the CATTLE_PREROUTING chain of the nat table: sudo iptables --list -t nat -n
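
For example, the same thing narrowed down to the relevant chain and port:

# list only the CATTLE_PREROUTING chain and keep the rules mentioning 443
sudo iptables -t nat -L CATTLE_PREROUTING -n | grep ':443'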

Can you try a wget... on the local host?
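
Something along these lines from the host itself, just to see whether anything answers on 443 at all (the certificate won't match localhost, hence the flag):

# print the server response headers, discard the body
wget --no-check-certificate -S -O /dev/null https://localhost/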

I cannot check the entry in CATTLE_PREROUTING right now, but I will later. I would, however, expect the port to be open because, as I mentioned in my blog post, I set up a second LB on the same host, which would have allowed traffic on 443.

If you alter the LB stack to be

Docker Compose

version: '2'
services:
  https-lb:
    image: rancher/lb-service-haproxy:v0.6.4
    ports:
    - 443:443/tcp
    labels:
      io.rancher.container.agent.role: environmentAdmin
      io.rancher.container.create_agent: 'true'
  http-lb:
    image: rancher/lb-service-haproxy:v0.6.4
    ports:
    - 80:80/tcp
    labels:
      io.rancher.container.agent.role: environmentAdmin
      io.rancher.container.create_agent: 'true'

Rancher Compose

version: '2'
services:
  https-lb:
    scale: 1
    start_on_create: true
    lb_config:
      certs: []
      default_cert: <domain.com>
      port_rules:
      - hostname: blue.<domain.com>
        priority: 1
        protocol: https
        service: Blue/Blue
        source_port: 443
        target_port: 2368
    health_check:
      healthy_threshold: 2
      response_timeout: 2000
      port: 42
      unhealthy_threshold: 3
      initializing_timeout: 60000
      interval: 2000
      reinitializing_timeout: 60000
  http-lb:
    scale: 1
    start_on_create: true
    lb_config:
      certs: []
      config: "frontend 80  \nacl lepath path_beg -i /.well-known/acme-challenge \
        \ \nredirect scheme https code 301 if  !lepath !{ ssl_fc }"
      port_rules:
      - hostname: blue.<domain.com>
        priority: 2
        protocol: http
        service: Blue/Blue
        source_port: 80
        target_port: 80
    health_check:
      healthy_threshold: 2
      response_timeout: 2000
      port: 42
      unhealthy_threshold: 3
      initializing_timeout: 60000
      interval: 2000
      reinitializing_timeout: 60000

This works.
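
(For reference, the custom config block on the http-lb injects roughly the following into the generated HAProxy frontend for port 80, which is the "specific path" question from earlier. Unescaped it reads:)

frontend 80
    # let Let's Encrypt HTTP-01 challenges through on plain HTTP
    acl lepath path_beg -i /.well-known/acme-challenge
    # redirect everything else that is not already TLS to https
    redirect scheme https code 301 if !lepath !{ ssl_fc }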

Understood that it works as two separate LBs; it just might help narrow things down if we know whether it at least listens and responds (locally) on 443.

Looking at the CATTLE_PREROUTING chain of the nat table, I see the entries for 443:

...
MARK       tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:443 MARK set 0x1068
DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:443 to:<IP>:443
DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:443 ADDRTYPE match dst-type LOCAL to:<IP>:443
...

OK, interesting, no issue there then. So, does a wget work locally on the host?

It would also be worth getting a shell in the LB container, checking that it is also listening on 443 with ss -ltn, and taking a look at the generated /etc/haproxy/haproxy.cfg file to see if anything is wrong there.
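
For example, something along these lines (the container name is a placeholder and the exact output will vary):

# get a shell in the LB container (<lb-container> is a placeholder name/id)
docker exec -it <lb-container> sh

# inside the container: is HAProxy listening on 443?
ss -ltn | grep ':443'

# and does the generated config actually have a bind on :443 with a cert?
grep -n -B2 -A5 ':443' /etc/haproxy/haproxy.cfg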
