I’m currently running Rancher behind our firewall, but I’d love to run the Rancher server in an AWS VPC so that I can have Rancher environments both in AWS and locally.
I found the thread "Is Rancher right for my multi-DC use case?", which suggests I need UDP ports 500 and 4500 opened so that network tunnels can be created between hosts (a sketch of opening those ports follows the questions below).
- Will hosts behind the firewall be able to establish network connections to hosts in AWS when those ports are initially only open in one direction?
- If hosts cannot communicate across the firewall, is it still safe to run environments containing only hosts behind the firewall alongside environments containing only AWS instances?
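For reference, here is a minimal sketch of opening those UDP ports in an AWS security group with boto3; the region, group ID, and source CIDR are placeholders for your own setup:

```python
# Open UDP 500 (IKE) and 4500 (NAT-T), the ports Rancher's IPsec overlay uses.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an example

for port in (500, 4500):
    ec2.authorize_security_group_ingress(
        GroupId="sg-0123456789abcdef0",  # placeholder security group ID
        IpPermissions=[{
            "IpProtocol": "udp",
            "FromPort": port,
            "ToPort": port,
            # placeholder: the public egress range of the hosts behind the firewall
            "IpRanges": [{"CidrIp": "203.0.113.0/24"}],
        }],
    )
```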
The host agent connects to the Rancher server over HTTP(S)/WebSockets, and the host always opens the connection, so the server does not need to be able to open a connection to the host directly.
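A quick way to verify that outbound path is a plain HTTPS request from the host to the server; the WebSocket upgrade happens over the same port, so if this succeeds the agent connection generally can too. The URL here is a placeholder:

```python
import requests

RANCHER_URL = "https://rancher.example.com"  # placeholder server URL

try:
    resp = requests.get(RANCHER_URL, timeout=5)
    print(f"Reached {RANCHER_URL}: HTTP {resp.status_code}")
except requests.exceptions.RequestException as exc:
    print(f"Could not reach {RANCHER_URL}: {exc}")
```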
Overlay network communication is IPsec from the host the source container is on to the host the target container is on. IPsec runs over UDP (ports 500 and 4500), and UDP is “connectionless”, so communication has to work in both directions independently.
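You can check that independently of Rancher by running a throwaway UDP listener on one host and firing a probe at it from the other, then swapping roles. A rough sketch with placeholder addresses (real IPsec traffic is of course not plain text like this):

```python
import socket

PORT = 4500  # repeat the test with 500

def listen(port=PORT):
    """Run this on the receiving host first."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", port))
    data, addr = sock.recvfrom(1024)  # blocks until a probe arrives
    print(f"Received {data!r} from {addr}")

def send(target_ip, port=PORT):
    """Run this on the other host; UDP gives no delivery confirmation,
    so the listener's output is the only evidence the probe got through."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(b"probe", (target_ip, port))
```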
The IP address each host is registered with (and shown on the Hosts screen) is what other hosts try to connect to in order to reach containers on that host. So if those addresses are not all mutually reachable, e.g. you have a set of machines in-house on private IPs and another set in AWS, they will only be able to communicate within their own islands. Generally you’d be better off putting each set in a separate Environment in Rancher so that you can’t accidentally schedule containers that try to communicate across the gap.
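If you want to eyeball which IP each host registered with (beyond the Hosts screen), something like the following could list them via the Rancher API. This is only a sketch: the endpoint, the API keypair, and the `agentIpAddress` field name are assumptions to verify against your server’s API browser.

```python
import requests

RANCHER_API = "https://rancher.example.com/v1/hosts"  # placeholder endpoint
AUTH = ("ACCESS_KEY", "SECRET_KEY")                   # placeholder API keypair

hosts = requests.get(RANCHER_API, auth=AUTH, timeout=10).json()["data"]
for host in hosts:
    # field names assumed; check your API browser for the exact schema
    print(host.get("hostname"), host.get("agentIpAddress"))
```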
Thanks. This is really helpful in understanding the networking a bit better. It sounds like what I want to do is sane as long as each environment’s hosts are either all local or all in AWS.