Directing traffic to multiple stacks using a Rancher Load Balancer?

Hi, folks. I’ve searched the forums and have found a few threads sort of like what I’m trying to do, but nothing close enough that I’ve been able to find a solution.

Here’s the situation. I have a number of different stacks running in an environment. Each of these stacks has an HTTP “entry point” (e.g. the frontend) that is assigned a random port by Rancher. So for example I have a setup like this:

  • foo_stack/nginx on host1 listening on port 5221
  • bar_stack/nginx on host2 listening on port 6590
  • baz_stack/frozzle on host1 listening on port 2850

Each of these services is accessible directly; e.g. I can visit http://host1:5221 and get to foo_stack, or visit http://host2:6590 and get to bar_stack.

But as I move closer to production for these services, I’d like to set up a standard location for each one. For example, I’d like to visit one well-known hostname and have it route me to foo_stack, and visit another hostname and have it route me to baz_stack, even if an upgrade/redeploy of the stacks causes them to move to a different host or port within the Rancher cluster.

The approach I’m taking (and I could be way off… so please let me know if there’s a better way) is to set up one Rancher host as the “dedicated” front-end system. I do this by assigning it a label of “role=public_frontend” and then creating a CNAME in DNS for each stack’s public hostname, so that all of those names resolve to the “public_frontend” container host. I then created a rancher-compose.yml file that looks like this:

  image: rancher/load-balancer-service
  ports:
    - "80:80"
  external_links:
    - foo_stack/nginx:web1
    - bar_stack/nginx:web2
    - baz_stack/frozzle:web3
  labels:
    io.rancher.scheduler.affinity:host_label: role=public_frontend

I’ve read up and down and I feel like I’m doing it all right with the external links and the labels, but it just doesn’t work. With the configuration above, requests to the first hostname route to foo_stack as expected, but requests to the other two hostnames don’t route anywhere at all; they just generate an haproxy 503 about “no backend server is available to process the request”.
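For what it’s worth, my reading of the Rancher docs is that hostname-based routing is declared with per-target load balancer labels rather than with the external links alone. A sketch of what I mean (the example.com hostnames here are placeholders I made up, and I may well have the label syntax wrong):

```yaml
  labels:
    # hypothetical hostnames; the idea is to route requests whose Host
    # header matches each name to the corresponding service's port 80
    io.rancher.loadbalancer.target.foo_stack/nginx: foo.example.com:80
    io.rancher.loadbalancer.target.bar_stack/nginx: bar.example.com:80
    io.rancher.loadbalancer.target.baz_stack/frozzle: baz.example.com:80
```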

What am I missing here? I have tried every combination of labels that I can think of, and the behavior just doesn’t seem to make any sense.

As an aside, how do I see the access/error logs for the haproxy instance within the Rancher load balancer? And for that matter, is there a way for me to examine the haproxy.cfg that Rancher generated? Having those pieces of data would probably help a lot in my troubleshooting process.

Thanks for your help!

The thread “Adding a "global" Load Balancer vs service per container” looks similar, but I think @jmp already had the cross-stack load balancing working and was just confused about how to get a load balancer to run in its own private stack.

Maybe I should consider using this instead of the Rancher load balancer service?


How to see the haproxy.cfg: the generated config lives inside the load balancer agent container on the host, so one way is to find that container with docker ps and then run docker exec <container_id> cat /etc/haproxy/haproxy.cfg.

Logs aren’t accessible currently, but will be added for our final 1.2.0 release.

Nothing jumps out at me as to what might be wrong, but check out your haproxy.cfg.
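When you do, the thing to check is whether each backend actually has server entries; a backend with no servers is exactly what produces that 503. Roughly, a healthy config has the shape below (a hand-written sketch, not actual Rancher output; the hostname and address are made up):

```
frontend web
    bind *:80
    # route on the Host header
    acl is_web1 hdr(host) -i foo.example.com
    use_backend web1_backend if is_web1

backend web1_backend
    # if this server line is missing, haproxy answers 503
    server web1_1 10.42.0.15:80
```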