We have a couple of applications behind a Rancher load balancer (which sits on a separate server). Whenever we reboot those servers, only the Rancher load balancer containers get stuck in the Initializing state and won't serve anything. It doesn't happen to all load balancer containers, and not every time, but to random ones. Restarting just the load balancer container alone didn't fix the issue; I had to fix it in the following ways:
- Restart the entire affected stack.
- If that doesn't work, stop the Rancher LB first, then restart (stop/start) the target container, then start the Rancher LB.
- If neither works, get into the Rancher LB container through the command line and force-apply the configuration with: `haproxy -f /etc/haproxy/haproxy.cfg`
PS: The Rancher load balancer image is pushed into our private registry, from which we pull it.
Does anyone know of any possible bug in the Rancher load balancer?
If you have any stack config to share, I would like to reproduce this behavior. Can you also tell me what versions you are running (OS/Rancher/Docker)?
I’m pretty sure I had the same issue: `Failed to get lb config`
No solution, sadly, just a workaround.
superseb, we have a single Tomcat container with a load balancer configured as below:

```yaml
- source_port: 8080
```
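For context, a fuller version of that port rule in a Rancher 1.6-style `rancher-compose.yml` might look like the sketch below. The stack/service name `mystack/tomcat` and the target port are placeholders I've assumed, not from the original post:

```yaml
# rancher-compose.yml (sketch; service name and target port are assumed placeholders)
lb:
  scale: 1
  lb_config:
    port_rules:
      - source_port: 8080        # port the LB listens on
        target_port: 8080        # port exposed by the tomcat container
        service: mystack/tomcat  # placeholder service reference
        protocol: http
```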
tzrlk, I feel like it's more related to the order in which the containers in the specific stack are started. Normally, load balancers should start only after the target containers have started, right? When we deploy the catalog for the first time, it starts in the correct order, but on subsequent stop/starts, containers start in any order. So sometimes the LB goes into the Initializing state and sometimes it doesn't. Do we need to add a parameter like `depends_on` to control the order, and will it work for the Rancher load balancer?
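For reference, in standard Docker Compose v2 syntax the ordering hint would look like the sketch below (service and image names are placeholders). Note that `depends_on` only controls start order, not readiness, and whether Rancher's scheduler actually honors it for LB services is exactly the open question here:

```yaml
# docker-compose.yml (sketch; names are placeholders, Rancher behavior unverified)
version: '2'
services:
  tomcat:
    image: tomcat:8
  lb:
    image: rancher/lb-service-haproxy
    ports:
      - "8080:8080"
    depends_on:
      - tomcat   # start tomcat before the LB; does NOT wait for it to be ready
```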