Load balancer scheduling not keeping instance on specific node

A few times this week this has happened: I have 4 hosts, and I expect one load balancer instance on each. For an unknown reason, two of the instances migrated so that two hosts each ran two instances (instead of one per host). It wouldn’t be so bad if the scheduler at least kept one of them on a specific host. How can I prevent the LBs from migrating to other hosts, and how can I make sure one of them stays on a specific host?

Here’s the configuration:

docker-compose.yml

version: '2'
services:
  lb01:
    image: rancher/lb-service-haproxy:v0.7.5
    ports:
    - 80:80/tcp
    - 443:443/tcp
    labels:
      io.rancher.container.agent.role: environmentAdmin
      io.rancher.container.create_agent: 'true'
      io.rancher.scheduler.affinity:host_label_soft: name=node02

rancher-compose.yml

version: '2'
services:
  lb01:
    scale: 4
    start_on_create: true
    lb_config:
      certs:
      - certificates1
      default_cert: default1
      port_rules:
      - hostname: example.com
        priority: 1
        protocol: http
        service: prod/example.com
        source_port: 80
        target_port: 80

Does each of the load balancer hosts have the same label “name=node02”? That sounds to me like it is specific to one host.

What I have found works is to set a label of “role=loadbalancer” on the LB hosts. In the Rancher LB config I then set the scheduling label “io.rancher.scheduler.affinity:host_label: role=loadbalancer”. Notice that the label is host_label and not host_label_soft as you have it. A _soft rule only says the scheduler should prefer hosts with the label, whereas host_label says the host must have it.
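
Applied to your compose file, that would look something like this (same service definition as yours, just swapping the _soft rule for the hard one):

docker-compose.yml

version: '2'
services:
  lb01:
    image: rancher/lb-service-haproxy:v0.7.5
    ports:
    - 80:80/tcp
    - 443:443/tcp
    labels:
      io.rancher.container.agent.role: environmentAdmin
      io.rancher.container.create_agent: 'true'
      # hard requirement: only schedule on hosts labeled role=loadbalancer
      # (add the role=loadbalancer label to each of the 4 LB hosts)
      io.rancher.scheduler.affinity:host_label: role=loadbalancer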


Great, the “must have” rule did the trick, as long as all the nodes have the corresponding label.

Thanks!

Hi

I’m still having the same issue, and funnily enough it’s always the same node that loses its load balancer. The current configuration:

docker-compose.yml

version: '2'
services:
  lb03:
    image: rancher/lb-service-haproxy:v0.7.9
    ports:
    - 80:80/tcp
    - 443:443/tcp
    labels:
      io.rancher.scheduler.affinity:host_label: role=loadbalancer
      io.rancher.container.agent.role: environmentAdmin
      io.rancher.container.create_agent: 'true'

rancher-compose.yml

version: '2'
services:
  lb03:
    scale: 4
    start_on_create: true
    lb_config:
      certs:
      - cert
      config: |-
        timeout client 90000
        timeout connect 9000
        timeout server 90000
      default_cert: default-cert
      port_rules:

All 4 hosts have the label role=loadbalancer, but from time to time one LB instance migrates from one host (always the same one) to another. I’d like to track the issue down so I can see why the LB instance keeps moving away from this specific host.
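
In the meantime, a sketch of two scheduling options from the Rancher scheduling docs that I’m looking at (not yet tested on this setup): making the LB a global service, so exactly one instance runs per matching host, or keeping scale: 4 and adding an anti-affinity rule so two instances of the same service never land on the same host:

docker-compose.yml (sketch)

version: '2'
services:
  lb03:
    image: rancher/lb-service-haproxy:v0.7.9
    ports:
    - 80:80/tcp
    - 443:443/tcp
    labels:
      io.rancher.container.agent.role: environmentAdmin
      io.rancher.container.create_agent: 'true'
      # hard requirement: only hosts labeled role=loadbalancer
      io.rancher.scheduler.affinity:host_label: role=loadbalancer
      # option 1 - global service: one instance per matching host
      # (remove scale: 4 from rancher-compose.yml when using this)
      io.rancher.scheduler.global: 'true'
      # option 2 - keep scale: 4 and instead forbid two instances of
      # this service on the same host:
      # io.rancher.scheduler.affinity:container_label_ne: io.rancher.stack_service.name=$${stack_name}/$${service_name}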