At the moment, our Rancher servers and hosts are in the same AWS VPC and can therefore communicate with each other over a private network (this requires manually setting CATTLE_AGENT_IP on each and every host, which is a major PITA, but at least it can be done). Our longer-term build-out, however, involves other Rancher “environments” in remote VPCs that are local to our multiple working locations (in India, Europe, and East Asia).
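For reference, here's roughly what that per-host registration looks like for us today — a sketch, not our exact command; SERVER_URL and TOKEN are placeholders for the values Rancher gives you on the "Add Host" screen, the agent version tag will vary, and 10.0.1.23 is just an example private address:

```
# Register a host with its private IP pinned explicitly via CATTLE_AGENT_IP.
# SERVER_URL, TOKEN, the agent tag, and 10.0.1.23 are all placeholders.
sudo docker run -d --privileged \
  -e CATTLE_AGENT_IP=10.0.1.23 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  rancher/agent:v1.2.2 https://SERVER_URL/v1/scripts/TOKEN
```

Multiply that by every host in every environment and you can see why we'd like something less manual.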
The Rancher docs indicate that the hosts need to accept inbound SSH communication from the Rancher servers. That's no big deal when they're on the same private network, but I'm confused as to how this would work with the remote environments. Does this mean that every remote host needs a public, internet-accessible IP that can accept connections from the server? That almost certainly won't pass muster with our security team; almost everything we do is in private subnets. Alternatively, does this mean we need a separate Rancher server cluster for each remote environment? That's also not ideal.
I suppose we could set up VPN connections between the various VPCs. I wish AWS made those kinds of peering connections a bit more automated.
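It can at least be scripted. Here's a rough sketch with the AWS CLI, assuming the VPCs can actually be peered (our cross-region setups may need a VPN instead); all the IDs and CIDRs below are placeholders:

```
# Request a peering connection from our VPC to the remote one.
aws ec2 create-vpc-peering-connection \
  --vpc-id vpc-aaaa1111 --peer-vpc-id vpc-bbbb2222

# Accept it from the peer side (pcx-cccc3333 is the ID returned above).
aws ec2 accept-vpc-peering-connection \
  --vpc-peering-connection-id pcx-cccc3333

# Add a route in each VPC's route table pointing the remote CIDR
# at the peering connection (repeat on the other side).
aws ec2 create-route --route-table-id rtb-dddd4444 \
  --destination-cidr-block 10.1.0.0/16 \
  --vpc-peering-connection-id pcx-cccc3333
```

Even scripted, that's still per-pair plumbing we'd have to maintain for every remote environment.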
Thoughts? Suggestions?