A few times this week it has happened: I have 4 hosts and I expect one load balancer instance on each. For an unknown reason two of the instances migrated, so there were two running on two of the hosts (instead of one per host). It wouldn’t be so bad if the scheduler at least kept one of them on a specific host. How can I prevent the LBs from migrating to other hosts, and how can I make sure one of them stays on a specific host?
Does each of the load balancer nodes have the same label “name=node02”? That sounds to me like it is specific to one host.
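If the goal is to keep one instance on that particular host, a hard affinity on a host-unique label like that would pin it there. A sketch of what that could look like in the stack’s docker-compose.yml (the service name and image are illustrative; only the label matters):

```yaml
lb-node02:
  image: rancher/lb-service-haproxy  # illustrative; use whatever your LB service runs
  labels:
    # must be scheduled on the host labeled name=node02
    io.rancher.scheduler.affinity:host_label: name=node02
```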
What I have found works is to set a label of “role=loadbalancer” on the LB hosts. In the Rancher LB config I set the scheduling label “io.rancher.scheduler.affinity:host_label: role=loadbalancer”. Notice that the label is host_label and not host_label_soft as you have it. The _soft variant says the host should have the label, whereas host_label says it must have the label.
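In the stack’s docker-compose.yml that looks roughly like this (the service name, image, and ports are illustrative; the labels are the part that matters):

```yaml
lb:
  image: rancher/lb-service-haproxy  # illustrative
  ports:
    - 80:80
  labels:
    # hard requirement: only schedule on hosts carrying this label
    io.rancher.scheduler.affinity:host_label: role=loadbalancer
    # the soft variant is only a preference; the scheduler may still
    # place the container on a host without the label:
    # io.rancher.scheduler.affinity:host_label_soft: role=loadbalancer
```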
All 4 hosts have the label role=loadbalancer, but from time to time one LB instance migrates from one host (always the same one) to another. I’d like to track this down so I know why the LB instance keeps moving away from that specific host.
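In the meantime, a low-tech way to at least timestamp the moves is to watch the Docker event stream on the host the instance keeps leaving; if the container dies before the scheduler moves it, the exit will show up there:

```sh
# run on the affected host; prints container start/stop/die events with timestamps
docker events \
  --filter 'type=container' \
  --filter 'event=start' \
  --filter 'event=stop' \
  --filter 'event=die'
```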