Rancher deployment - many environments

I would like to discuss the pros and cons of the following scenario with you experts.

We have the following requirements:

  1. The same service is deployed to multiple (>=30) nodes.
  2. Nodes should not be able to reach one another.
  3. In a few corner cases, two or three nodes need inter-container networking.

So I started deploying into separate environments, which is OK while there are not too many of them, but after a while this will not scale and I think I’ll end up in maintenance hell (Rancher upgrades, infrastructure stacks, etc.).
So another possibility would be adding all hosts to one environment and using a network policy to filter out traffic between containers.

Is it possible to create an environment without any managed networking between nodes (islands within an environment)?
I think having hosts in an environment that cannot communicate with each other would generate a lot of Rancher log errors, etc.

br hw

If you install the Network Policy Manager from the catalog you can configure the environment to block communication between stacks but allow things that are explicitly linked together (or more complicated things from the API).
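For reference, a sketch of what that environment-level policy could look like when set through the API (the field names here are an assumption based on the Rancher 1.6 network policy docs; verify the exact schema against your Rancher version before using it):

```json
{
  "defaultPolicyAction": "deny",
  "policy": [
    { "within": "stack",  "action": "allow" },
    { "within": "linked", "action": "allow" }
  ]
}
```

The idea: everything is denied by default, containers within the same stack may talk, and explicitly linked services may talk across stacks.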

yes I’ve been testing NPM and I think it might work. My only concern would be the VXLAN/IPsec processes on hosts, which would not work since that traffic is not allowed.
I know I can build a template without IPsec/VXLAN networking, but I’ve not seen how Rancher would cope with such a setup.

The policy manager doesn’t allow/block the IPsec communication itself, only the routing of individual flows between 10.42.x.y pairs. You still have one contiguous, mutually reachable layer 3 overlay network.

so if I deploy hosts on isolated networks, not able to communicate with each other, in the same environment…

wouldn’t this result in ever-initializing health checks and other networking issues?

I will start testing out this scenario…just wanted to ask :slight_smile:


Yes, the hosts in an environment need to be able to talk to each other.

Again the idea is an environment where all the containers could talk to each other, but the policy manager allows only the ones you want to actually talk to each other. Managed iptables.
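Conceptually, the “managed iptables” boils down to rules along these lines on each host (a purely illustrative sketch; the chain names, addresses, and rule layout Rancher actually generates differ):

```shell
# Default: drop overlay-to-overlay traffic between containers
iptables -A FORWARD -s 10.42.0.0/16 -d 10.42.0.0/16 -j DROP

# Explicitly allow one linked container pair, inserted ahead of the drop
# (10.42.1.10 and 10.42.2.20 are made-up example overlay addresses)
iptables -I FORWARD -s 10.42.1.10 -d 10.42.2.20 -j ACCEPT
iptables -I FORWARD -s 10.42.2.20 -d 10.42.1.10 -j ACCEPT
```

So the underlying IPsec/VXLAN tunnels between hosts stay up everywhere; only the per-flow container traffic on top of them is filtered.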

If you need literally isolated hosts that cannot communicate with each other at all then you need one environment each.