Single stack instance per host

I want to be able to run a stack on a host with a given label - I know how to do this and it works well.

I also want to be certain that all containers in my stack run on the same server - I thought defining sidekicks on one of the containers would do the trick, but Rancher just seems to hang. When I took the sidekicks line out, it worked, but not how I want it to work.

So I tried something like this:

io.rancher.scheduler.affinity:host_label: type=myhostlabel
io.rancher.scheduler.affinity:container_label_soft: io.rancher.stack_service.name=${stack_name}/${service_name}
I started the first stack, and it worked as expected. Then I ran the same exact command, with the same stack name, and it sort of did nothing - at least I couldn’t tell that it did anything. So I tried again, but named the stack differently, and it loaded the second one on a different host. For some reason, I thought there would be two sets of the application stack, running on different hosts.

Do stacks always run on different hosts if these labels are set? Or did I just get lucky?

Let’s assume your stack is called “Default” and your service “MyApp”. Those labels mean that the container must be scheduled onto a host with the label type=myhostlabel, and that it should (a soft rule) be on a host that already has a container with the matching stack/service label. ${stack_name} and ${service_name} are variables that get replaced with the appropriate info from the service when scheduling it, so that you don’t have to hardcode “Default” and “MyApp” into the label here.
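Spelled out as a compose snippet, the pair of rules would look something like this (service and image names are placeholders, and `io.rancher.stack_service.name` is the container label Rancher itself applies, assumed here):

```yaml
# docker-compose.yml (Rancher v1 compose format; names are illustrative)
myapp:
  image: example/myapp
  labels:
    # hard rule: only consider hosts labeled type=myhostlabel
    io.rancher.scheduler.affinity:host_label: type=myhostlabel
    # soft rule: prefer a host already running a container of this same
    # stack/service; the variables resolve to e.g. Default/MyApp at scheduling time
    io.rancher.scheduler.affinity:container_label_soft: io.rancher.stack_service.name=${stack_name}/${service_name}
```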

So the scheduler first finds all the hosts with type=myhostlabel. Then it looks through them to see if any have a container with the matching label. Since you’re searching for a container of its own service, if this is the first time you’ve started the service there won’t be any. From the remaining list of eligible hosts, the container is scheduled onto the one with the fewest containers on it.

If you then scale up that container, there are now container(s) with a matching label, so the new ones will be scheduled onto the same host. If the service uses a public host port or has other constraints that make that host ineligible, that soft rule will be ignored and the rest of the rules evaluated again, as above.

Similarly, when you use a different stack name there are now no hosts with a container with a label matching NewStackName/MyApp, so the soft constraint is dropped and it can go to any other eligible host.

If you just want all the containers for the services in a stack to run on hosts from the set that have type=myhostlabel, you don’t need to involve the service name… use

io.rancher.scheduler.affinity:host_label: type=myhostlabel

Actually I want all of the containers in the stack to run on the same host. But I only want one instance of the stack per host. So if I run “rancher-compose -p Default -f myapp.yml up” I want all of the containers defined in myapp.yml to run on the same host which has type=myhostlabel. Then if I run the same command again, I want all of the containers defined in myapp.yml to run on a different host with type=myhostlabel.

Would I just need to add the “_ne” and use io.rancher.scheduler.affinity:container_label_soft_ne instead?

io.rancher.scheduler.affinity:host_label: type=myhostlabel

If you definitely don’t want it to run on the same host, then you’d have to put in a hard affinity label.


The _ne rule you describe would make it so that no 2 containers for the stack would go on the same host. The first container would have no other containers with the label to avoid, and then subsequent containers would not be allowed to go on the same host as ones that do have it. So that is the opposite of what you want (I think).
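For concreteness, that anti-affinity rule would read something like this (the `io.rancher.stack.name` container label key is an assumption; the hard variant just drops `_soft`):

```yaml
# hypothetical: avoid hosts that already have a container of this stack,
# which spreads the stack's containers apart rather than keeping them together
labels:
  io.rancher.scheduler.affinity:host_label: type=myhostlabel
  io.rancher.scheduler.affinity:container_label_soft_ne: io.rancher.stack.name=${stack_name}
```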

If you “run it again” with the same -p then that will do nothing because the containers for myapp in the Default stack are already up and running. If you want multiple copies of the app you need different stack names.
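So the pattern would be one invocation per copy, each with its own project name - just the shape of the commands, against a running Rancher server, with made-up stack names:

```
rancher-compose -p customer1 -f myapp.yml up -d
rancher-compose -p customer2 -f myapp.yml up -d
```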

So what I think you want is something like: you have a hosted app with “web” & “db” services defined in a single docker-compose.yml, and you want to spin up one stack with it for each customer, such that all of customer1 goes to (any host with type=myhostlabel) and all of customer2 goes to (another host with type=myhostlabel, but not the same host as any other customer is using).

I don’t think that is currently possible with the scheduler rules in place today, because of the combination of affinity of several services together + anti-affinity to another set of the same services. There’s no way to say ‘hard’ affinity on the stack name, because if you do that the first one can’t ever deploy… but using _soft on that one makes it impossible to do negative affinity to separate the stacks.

What if I run it with a different “-p”? Would that make any difference?

If not, what’s your recommendation?

No… running with the same -p will just do nothing the 2nd time, because it compares the yml file to reality and sees that those services already exist and match the yml.

Using a different stack name will create a new one, but as far as I can tell there isn’t a set of rules that will guarantee the affinity + anti-affinity you want.

Ok, you’ve established that right now Rancher doesn’t support this scenario by simply setting labels. Your “web” & “db” description is a good example of what I want to do, and this doesn’t seem like a totally unique scenario. But what are my options?

There has to be a way to target a specific host for running a stack.

Creating a new environment, with only one host with the target label would work, right?

Can I set some size parameters so that only one of my stacks will fit on a given host?

Could I set a label on the host, to match the stack name? I’d have to do this step first, but at least people could direct their stack to a specific host. For example, in my docker-compose.yml file I could set “io.rancher.scheduler.affinity:host_label: type=${stack_name}”. But I’d have to ensure that label was added to the target host first.
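Sketching that idea, every service in the compose file would carry the same stack-named affinity (service and image names here are placeholders):

```yaml
# docker-compose.yml — the whole stack follows one pre-labeled host.
# Before `up`, the target host must be given the label type=<stack name>,
# by hand or via the API.
web:
  image: example/web
  labels:
    io.rancher.scheduler.affinity:host_label: type=${stack_name}
db:
  image: example/db
  labels:
    io.rancher.scheduler.affinity:host_label: type=${stack_name}
```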

I don’t think this is a really common scenario, but I agree that there should be a way to handle it. The key issue is that a strict “container has the label” rule can’t ever get the first one created when it refers to a label that will itself be created by adding the container. So I think we need a special case or a different modifier to make a rule like that “soft”, but only if nothing matches it.

One host per environment would work (no need for labels and affinity rules then).

There are no size parameters like that. The algorithm is to determine the hosts that match the scheduling rules, then put the container on the host that has the fewest containers. So they naturally spread out, and you would need to rein that in per customer.

Targeting a unique host label would also work, and I think it’s the best solution if it is practical for you to set up those labels. Checking whether any host has the label, and adding it to one if not, could be automated through the API. Using the stack name variable like that as the value is a good idea.