[rancher-compose] How to upgrade Load Balancer service for N targets

Hello,

I’m looking for some advice on the best way to upgrade the advanced routing rules of a Load Balancer service.

I have a public Load Balancer which does advanced routing to N stacks based on the request host header, e.g. host foo -> target stack/service foo, host bar -> target stack/service bar.

Our CI/CD pipeline keeps adding and deleting those N stacks via rancher-compose, which means I have to keep upgrading the Load Balancer service to maintain the links to the stacks.

rancher-compose --upgrade requires the full final compose file, which means every stack deployment/teardown needs to know the existing LB service’s compose configuration in order to add or remove external_links and io.rancher.loadbalancer.target entries.
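
For reference, this is roughly what that full LB compose looks like, so every add/remove means regenerating the whole thing. A minimal sketch, assuming Rancher 1.x conventions; the stack/service names and hostnames are placeholders:

```yml
# docker-compose.yml for the LB service -- illustrative sketch only;
# stack/service names and hostnames are placeholders
lb:
  image: rancher/load-balancer-service
  ports:
    - 80
  external_links:
    # one link per target stack/service -- must be edited on every add/remove
    - foo/foo
    - bar/bar
  labels:
    # host-header routing rules, one per target
    io.rancher.loadbalancer.target.foo/foo: foo.example.com:80
    io.rancher.loadbalancer.target.bar/bar: bar.example.com:80
```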

How can I achieve this? Is there a way to do a partial upgrade, or is there a better way of designing this?

Thanks!

We need to employ the same strategy as well: a ‘master’ or ‘global’ LB whose configuration changes dynamically as services come and go, particularly in non-production environments.

I can think of a few methods, but first I believe we want to see if this is possible with rancher-compose in a simple way, e.g. a minimal compose file containing only the new service link to add/sync to the LB.

The other method I thought of is a container service that discovers services by labels or other metadata and adds them to the loadBalancerService resource (see the sketch below). This kind of approach has been used in the past, e.g. via tags in a cloud provider or management platform.
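
As a rough sketch of that discovery idea, and nothing more than a sketch: the lb.host label and the exact metadata response shape are my assumptions, not an existing tool. A sidecar container could poll rancher-metadata for services that opt in via a label, then feed the result into an LB upgrade:

```python
#!/usr/bin/env python
# Hedged sketch of a label-based discovery loop -- not an existing tool.
# Assumes the rancher-metadata service and a hypothetical "lb.host" label.
import time
import requests

METADATA = "http://rancher-metadata/latest"  # reachable from managed containers

def discover_labeled_services():
    """Return {"stack/service": host} for services carrying the lb.host label."""
    resp = requests.get(METADATA + "/services",
                        headers={"Accept": "application/json"})
    resp.raise_for_status()
    targets = {}
    for svc in resp.json():
        host = (svc.get("labels") or {}).get("lb.host")
        if host:
            targets["%s/%s" % (svc["stack_name"], svc["name"])] = host
    return targets

while True:
    # diff against the LB's current targets and trigger an API upgrade if changed
    print(discover_labeled_services())
    time.sleep(30)
```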

I am keen to hear what the best practice may be. A single master LB for ports 80 and 443 is definitely a very common setup, and I’m sure others are either already applying a solution or pondering one. Thanks!

Not 100% sure if this will help, but if you use the API you can easily do an upgrade without needing a rancher-compose/docker-compose file with all the settings. That is, you can first read the current configuration of the service (I have only done it against a normal service, not an LB, so that’s the area I don’t know), and then perform an upgrade passing that configuration back.
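
In outline, the read-then-upgrade sequence looks like this. It is only a sketch against the Rancher 1.x v1 API as I understand it; the URL, API keys and service id are placeholders:

```python
# Sketch: read a service's config, then re-submit it as an in-service upgrade.
# RANCHER_URL, the API keys and SERVICE_ID are placeholders.
import requests

RANCHER_URL = "http://rancher.example.com:8080/v1"
AUTH = ("ACCESS_KEY", "SECRET_KEY")
SERVICE_ID = "1s198"

# 1. Read the current configuration of the service.
svc = requests.get("%s/services/%s" % (RANCHER_URL, SERVICE_ID), auth=AUTH).json()

# 2. Post the (optionally modified) launchConfig back as an upgrade.
payload = {
    "inServiceStrategy": {
        "launchConfig": svc["launchConfig"],
        "startFirst": False,  # stop the old container first (see port-conflict note below)
    }
}
requests.post("%s/services/%s?action=upgrade" % (RANCHER_URL, SERVICE_ID),
              auth=AUTH, json=payload).raise_for_status()

# 3. Once the service reaches the "upgraded" state, confirm with ?action=finishupgrade.
```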

I packaged that action into a script: https://github.com/etlweather/gaucho

Also of note: if you want a zero-downtime upgrade of your load balancer, you will need to run at least 2 instances. So you will either need another generic LB in front to expose a single point of access (one IP address), or you can use a floating IP address, which is what I did.

Hi @etlweather, that is exactly what we aimed to achieve with the Rancher+Keepalived container… Ideally I’d like to make it a catalog service soon, but there is still work to do to improve it…

I’d like to add multiple VIPs, and some other options that can be configured externally through env variables, for example…

We currently have redundant hardware load balancers; this setup will supersede those. If there is a capacity issue you can always run a dedicated host just for load balancing with keepalived/VRRP, for example… but in most cases I think that may be overkill…

I’d like to test having a set of IPs in DNS (similar to what other solutions propose), but instead of managing the IPs at the DNS level when there is a failure, I’d handle the failover within keepalived & Rancher…

ex:
www.myhost.com A 123.456.789.10
www.myhost.com A 123.456.789.11
www.myhost.com A 123.456.789.12

Rancher keepalived:
Physical Host A - 123.456.789.10 (primary) - 123.456.789.11 (bkp) - 123.456.789.12 (bkp)
Physical Host B - 123.456.789.11 (primary) - 123.456.789.12 (bkp) - 123.456.789.10 (bkp)
Physical Host C - 123.456.789.12 (primary) - 123.456.789.10 (bkp) - 123.456.789.11 (bkp)

Normal case: DNS round-robin will distribute requests “evenly” across all 3 hosts. Each host can then load balance to services/containers inside Rancher on the same or other hosts.
Bad case: Host A is down; Host B will answer 123.456.789.10 traffic along with its primary 123.456.789.11 traffic…
Worst case: Hosts A & B are down; Host C will answer all IPs and balance all traffic…

The “Bad case” could also happen while you do a rolling upgrade of an LB, for example… Just be careful not to run too much load / too close to capacity, as the surviving hosts are prone to receiving an increase in traffic if a host fails…
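
For illustration, a keepalived.conf fragment for Physical Host A under that scheme might look like the following. This is a sketch only: the interface name, router IDs and priorities are assumptions, and the other hosts would rotate the MASTER/BACKUP roles and priorities accordingly:

```
# keepalived.conf sketch for Physical Host A (interface/priorities assumed;
# IPs mirror the placeholder addresses above)
vrrp_instance VIP_10 {           # primary on this host
    state MASTER
    interface eth0
    virtual_router_id 10
    priority 150
    virtual_ipaddress {
        123.456.789.10
    }
}

vrrp_instance VIP_11 {           # first backup on this host
    state BACKUP
    interface eth0
    virtual_router_id 11
    priority 100
    virtual_ipaddress {
        123.456.789.11
    }
}

vrrp_instance VIP_12 {           # second backup on this host
    state BACKUP
    interface eth0
    virtual_router_id 12
    priority 50
    virtual_ipaddress {
        123.456.789.12
    }
}
```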

I think when this enhancement goes through, it should be able to accomplish what you need.

While I’m all for the improvements suggested, I think this thread is going slightly off topic.

Is there no other way to upgrade an LB without the full compose file?

@alysum - did you try my service upgrade script against a load balancer?

I just gave it a try against an LB service and it works. You have to tell it to stop the old container first (the default is to start the new container first, which is a problem if both containers, new and old, end up on the same host, due to port conflicts).

The command is:
./services.py upgrade 1s198 --start_first=False
where 1s198 is the service id.

If you want a no-downtime solution, then you need to be running LB containers on every host (global mode), or at least on every host matching a scheduler rule, and those hosts will need to share an IP (a floating IP), which can be done using the keepalived-style solution @RVN_BR and I mentioned earlier.

I wanted a similar feature out of the Rancher load balancers: automatic host-based HTTP routing for stacks. So I started a dynamic-backend HAProxy load balancer image which uses rancher-metadata to dynamically route HTTP host headers to a stack name, e.g. $stack_name.$domain -> $stack_name/app.
It’s obviously still in development but is working: https://hub.docker.com/r/nodeintegration/rancher-haproxy/
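
The core HAProxy idea behind that kind of routing can be sketched as below. This is an illustrative fragment, not the configuration that image actually generates; the backend names and ports are assumptions:

```
# haproxy.cfg sketch: route on the first label of the Host header, so
# foo.example.com is served by a backend named "foo" ($stack_name).
frontend http-in
    bind *:80
    # lowercase the Host header and keep the part before the first dot
    use_backend %[req.hdr(host),lower,field(1,'.')]
    default_backend no_match

# one backend per stack, regenerated from rancher-metadata as stacks come and go
backend foo
    server app1 foo-app-1:8080 check

backend no_match
    http-request deny deny_status 404   # unknown hosts are refused
```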