Is it possible to link containers on the same host?

I have a couple of containers that I always want on the same host, and I have set scheduling rules to make sure that happens. However, I also have multiple hosts running these containers.

Imagine two servers, each running a service A and a service B.

I want service A to talk only to the B on the same host and NOT to “round-robin” to the container on the other host. Is there a way to link services A and B so they only talk to each other and not to the A and B on the other host?

Hi,
one way to solve your issue is to apply a scheduling label to your containers so they stick to a host that carries a dedicated label.

  • io.rancher.scheduler.affinity:host_label: mykey=myvalue
    This label pins a container to a host carrying that host label.
    It is not dynamic, so if you lose that host, you’ve got a big problem.

  • io.rancher.scheduler.affinity:container_label (a better alternative)

  • io.rancher.scheduler.affinity:container (a better alternative)

The last two alternatives still keep the containers together on the same host, but let Rancher redistribute them if the host fails. A sketch of all three is below.
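To make that concrete, here is a minimal sketch of the three labels in a Rancher docker-compose.yml; the service names, images, and label values other than the mykey=myvalue example above are invented for illustration:

    backend:
      image: example/backend
      labels:
        # pin to any host carrying the host label mykey=myvalue (static):
        io.rancher.scheduler.affinity:host_label: mykey=myvalue
    frontend:
      image: example/frontend
      labels:
        # run next to containers carrying this container label; this is
        # re-evaluated if containers are rescheduled after a host failure:
        io.rancher.scheduler.affinity:container_label: com.example.role=backend
    frontend2:
      image: example/frontend
      labels:
        # run next to a specific named container:
        io.rancher.scheduler.affinity:container: backend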

my 2 cents,

Charles.

There is no way to have service A only talk to the service B on the same host. If this is your case, you’d probably want to create a separate set of services for each host and use the scheduling rules (there are lots of options) to schedule each group of services on a specific host, roughly as sketched below.
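For example (a sketch only; the service names and the “box” host label are invented), the per-host approach would look something like this, with the link alias keeping the application configuration identical on every host:

    a-host1:
      image: example/service-a
      links:
        # alias the per-host B back to "b" so A's config stays the same:
        - b-host1:b
      labels:
        io.rancher.scheduler.affinity:host_label: box=host1
    b-host1:
      image: example/service-b
      labels:
        io.rancher.scheduler.affinity:host_label: box=host1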

Isn’t that a good case for sidekick containers?

Rancher supports the colocation, scheduling, and lock step scaling of a set of services by allowing users to group these services by using the notion of sidekicks
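A sidekick pairing is declared with the io.rancher.sidekicks label on the primary service; a minimal sketch with illustrative names:

    a:
      image: example/service-a
      labels:
        # "b" is colocated, scheduled, and scaled in lock step with "a":
        io.rancher.sidekicks: b
    b:
      image: example/service-b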

It is kind of sad that I can’t quite do this. My use case is mostly about performance. I will explain it in case you decide to support this use case sometime in the future.

I have a few user-facing containers, A, B, and C. I also have some “back-end services” F, G, and H, with no public-facing endpoint. A, B, and C are connected to F, G, and H through links. I have a fleet of about 10 instances, which are all load balanced. All instances run all services.
A user comes in to instance 1 and hits service A; A then needs to talk to F and G. B also has to talk to G.
For performance reasons it would make sense for service A to hit the service F on the same instance rather than going to another instance. Ideally there would be a link type that would, let’s say, “prefer the container on the same host”, so that it could still fail over to another host if needed.

Round-robin balancing between containers is nice when a single instance acts as a gateway, but if every instance is already being load balanced, it is a performance hit without any real benefit.

As far as answers already provided:

  • I am already using host scheduling; that is not the issue. I know how to put A and G on the same box. I just want to make sure A only talks to the G on the same host instead of going to the G on another host.
  • Creating a separate “service” for every box (i.e. A1, A2, A3 on hosts 1, 2, 3) is a nightmare and is not auto-scalable. Deployments would be WAY too complicated.
  • Sidekicks don’t work because only the “master” can talk to the sidekick, but I would need both A and B to talk to G, for instance. Which service would G be a sidekick of?

As a side note, in case you’re wondering, I use an AWS ELB to load balance all traffic to a single port on the instances, then I use the Rancher load balancer simply for routing, sending A traffic to A based on URL. So if load balancers could also prefer the same host when possible, that would be awesome.

Sorry for replying to such an old post, but I realized I have exactly the same problem. I have a group of hosts (in different datacenters; let’s call them H1, H2, H3) and two microservices (A, B) deployed to all of them (one instance of each microservice per host). Traffic is already load balanced thanks to the CloudFlare load balancer, so I don’t want, e.g., service A on H1 talking to service B on H2.

I was able to tweak the Rancher load balancers to work like that thanks to the label io.rancher.lb_service.target=only-local, but I’d like to have something similar for service links. Is it possible in the current version? If not, can somebody suggest a good way to deal with such a deployment scenario?
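For reference, this is roughly what that looks like on the load balancer service (a sketch only; the image and port are placeholders, and only the label itself comes from my working setup):

    lb:
      image: rancher/lb-service-haproxy
      ports:
        - 80:80
      labels:
        # only send traffic to target containers running on the same
        # host as this load balancer instance:
        io.rancher.lb_service.target: only-local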

I’ve created a GitHub issue for this feature: https://github.com/rancher/rancher/issues/11593