It is kind of sad that I can't quite do this. My use case is mostly about performance. I'll explain it in case you decide to support this use case sometime in the future.
I have a couple of user-facing containers, a, b, and c. I also have some “back-end services” f, g, and h, with no public-facing endpoint. a, b, and c are linked to f, g, and h through links. I have a fleet of about 10 instances, which are all load balanced. All instances run all services.
A user comes in to instance 1 and hits service a; a then needs to talk to f and g. b also has to talk to g.
For performance reasons it would make sense for service a to hit the service f on the same instance rather than going to another instance. Ideally there would be a link type that, let's say, “prefers a container on the same host,” so that it would still fail over to another host if needed.
Round-robin balancing between containers is nice when a single instance acts as a gateway, but when every instance is already behind a load balancer, it's a performance hit without any real benefit.
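To make the idea concrete, here is a rough client-side sketch of the behavior I'd want built in: given a service's endpoint list from discovery and the caller's own host IP, try the same-host endpoint first and only fail over to remote ones if it's down. All names here are illustrative, not any real Rancher API.

```python
def order_endpoints(endpoints, local_ip):
    """Sort endpoints so same-host ones come first.

    endpoints: list of (host_ip, port) tuples, e.g. from service discovery.
    local_ip:  IP of the host this container runs on.
    """
    local = [e for e in endpoints if e[0] == local_ip]
    remote = [e for e in endpoints if e[0] != local_ip]
    return local + remote  # prefer same host, keep the rest as fallback

def call_with_failover(endpoints, local_ip, request_fn):
    """Try each endpoint in preference order until one succeeds."""
    last_err = None
    for host, port in order_endpoints(endpoints, local_ip):
        try:
            return request_fn(host, port)
        except ConnectionError as err:
            last_err = err  # this endpoint is down; try the next one
    raise last_err or ConnectionError("no endpoints available")
```

Baking this into the link/DNS layer would avoid every client having to carry this logic itself.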
As for the answers already provided:
- I am already using host scheduling; that is not the issue. I know how to put a and g on the same box. I just want to make sure a only talks to the g on the same host instead of going to a g on another host.
- Creating a separate “service” for every box (i.e. a1, a2, a3 on hosts 1, 2, 3) is a nightmare and is not auto-scalable. Deployments would be WAY too complicated.
- Sidekicks don't work because only the “master” can talk to the sidekick, but I would need both a and b to talk to g, for instance. Which service would g be a sidekick of?
As a side note, in case you're wondering: I use an AWS ELB to load balance all traffic to a single port on the instances, then I use the Rancher load balancer purely for routing, sending a's traffic to a based on URL. So if load balancers could also prefer the same host when possible, that would be awesome.