Load Balancer Sporadic 503's with multiple port bindings

I’ve been working on using Rancher to manage our dashboard applications. Part of this has involved exposing multiple Kibana containers that listen on the same internal port, plus one Kibana 3 container exposing port 80. However I’ve hit a problem where it works maybe 80% of the time, but randomly a JS or CSS file will 503, which causes the dashboard to not load correctly.

I have configured the LB to send requests on specific ports (5602, 5603, 5604) to specific containers, so I set up the following docker-compose.yml config:

  image: rancher/load-balancer-service
  ports:
  - 5602:5602
  - 5603:5603
  - 5604:5604
  links:
  - kibana3:kibana3
  - kibana4-logging:kibana4-logging
  - kibana4-metrics:kibana4-metrics
  labels:
    io.rancher.loadbalancer.target.kibana3: 5602=80
    io.rancher.loadbalancer.target.kibana4-logging: 5603=5601
    io.rancher.loadbalancer.target.kibana4-metrics: 5604=5601
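For clarity, my reading of the target labels is that each value is `source=target`: the left side is the port the LB listens on, the right side is the port the named container exposes. A quick sketch of the mapping the labels above declare (the loop is just for illustration, not part of the config):

```shell
# Each io.rancher.loadbalancer.target.<service> label reads "source=target":
# the LB listens on the source port and forwards to the target port
# inside the named service's containers.
for entry in kibana3:5602=80 kibana4-logging:5603=5601 kibana4-metrics:5604=5601; do
  service=${entry%%:*}   # text before the first ':'
  mapping=${entry#*:}    # text after the first ':'
  echo "LB :${mapping%=*} -> ${service}:${mapping#*=}"
done
# prints e.g.: LB :5602 -> kibana3:80
```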

Everything works as expected, but I get sporadic 503s. When I go into the load balancer container and look at haproxy.cfg, I see:

frontend d898fb95-ec51-4c73-bdaa-cc0435d8572a_5603_frontend
        bind *:5603
        mode http

        default_backend d898fb95-ec51-4c73-bdaa-cc0435d8572a_5603_2_backend

backend d898fb95-ec51-4c73-bdaa-cc0435d8572a_5603_2_backend
        mode http
        timeout check 2000
        option httpchk GET /status HTTP/1.1
        server cbc23ed9-a13a-4546-9001-a82220221513 check port 5601 inter 2000 rise 2 fall 3
        server 851bdb7d-1f6b-4f61-b454-1e910d5d1490
        server 215403bb-8cbb-4ff0-b868-6586a8941267

The IPs listed are those of all three Kibana containers. The first server has a health check against it, but the other two do not (Kibana 3 and Kibana 4.1 don’t have a status endpoint). My understanding of the docker-compose config is that each backend should have only the one server for its port, yet all three appear. I assume this is at least partly responsible for the sporadic 503s, and removing the extra servers manually and restarting the haproxy service does seem to solve the problem. I’ve tried configuring this both via the docker-compose.yml file and through the interface, and the problem occurs either way.
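As a sanity check, this is roughly how I counted the `server` entries per backend (a sketch run against a saved copy of the generated config; the sample below is abbreviated from the paste above, and the /tmp path is just for illustration):

```shell
# Abbreviated copy of the generated backend, taken from the paste above.
cat > /tmp/haproxy-sample.cfg <<'EOF'
backend d898fb95-ec51-4c73-bdaa-cc0435d8572a_5603_2_backend
        mode http
        server cbc23ed9-a13a-4546-9001-a82220221513 check port 5601
        server 851bdb7d-1f6b-4f61-b454-1e910d5d1490
        server 215403bb-8cbb-4ff0-b868-6586a8941267
EOF

# Count server lines per backend; with one target per port mapping,
# each backend should contain exactly one server.
awk '/^backend/ {b=$2} /^[[:space:]]+server/ {n[b]++} END {for (k in n) print k, n[k]}' /tmp/haproxy-sample.cfg
# prints: d898fb95-ec51-4c73-bdaa-cc0435d8572a_5603_2_backend 3
```

A count above 1 for any backend is the symptom described here: requests round-robin across containers that can’t serve that port’s assets.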

If particular log files are required I can access them and post as needed.

Am I configuring the load balancer incorrectly, or is this worth raising as a GitHub issue with Rancher?

Please let me know if any further information would help.