Remote Rancher Environments

At the moment, our Rancher servers and hosts are in the same AWS VPC and can therefore communicate with each other over a private network (this requires manually setting CATTLE_AGENT_IP on each and every host, which is a major PITA, but at least it can be done). Our longer-term build-out, however, involves other Rancher “environments” in remote VPCs that are local to our multiple working locations (one each in India, Europe, and east Asia).
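For context, this is roughly the registration command we run on each host today; the agent version, server URL, IP, and token below are placeholders, not our actual values:

```
# Register a host with the Rancher server, pinning the agent to the
# host's private IP (all values are placeholders)
sudo docker run -d --privileged \
  -e CATTLE_AGENT_IP=10.0.1.23 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /var/lib/rancher:/var/lib/rancher \
  rancher/agent:v1.2.11 \
  http://rancher.internal.example.com:8080/v1/scripts/<registration-token>
```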

The Rancher docs indicate that hosts need to accept inbound SSH connections from the Rancher servers. That’s no big deal when they’re on the same private network, but I’m confused about how this would work with the remote environments. Does this mean that every remote host needs a public, internet-accessible IP that can accept connections from the server? That almost certainly won’t pass muster with our security team; almost everything we run lives in private subnets. Alternatively, does it mean we need a separate Rancher server cluster for each remote environment? That’s also not ideal.

I suppose we could set up a VPN connection between the various VPCs. I wish AWS made those kinds of peering connections a bit more automated.
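For reference, the manual flow we’d be scripting looks roughly like this (all IDs and CIDRs are placeholders; cross-region peering additionally needs `--peer-region`):

```
# Request a peering connection from our main VPC to a remote one
aws ec2 create-vpc-peering-connection \
  --vpc-id vpc-aaaa1111 --peer-vpc-id vpc-bbbb2222

# Accept it on the remote side
aws ec2 accept-vpc-peering-connection \
  --vpc-peering-connection-id pcx-cccc3333

# Add a route in each VPC's route table pointing at the peering connection
aws ec2 create-route --route-table-id rtb-dddd4444 \
  --destination-cidr-block 10.1.0.0/16 \
  --vpc-peering-connection-id pcx-cccc3333
```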

Thoughts? Suggestions?

SSH is just for using the Add Host screens to create a VM through docker-machine. After creating the VM with the provider, it SSHes (from the server container where it is running) to the IP the driver returns so that it can install and configure Docker, run the Rancher agent, etc. If you’re not using machine, you do not need to expose SSH.
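To make that concrete, here’s an illustrative invocation of the kind the Add Host screen drives through docker-machine; the region, VPC ID, and host name are placeholders, not anything Rancher requires:

```
# Create an EC2 host with docker-machine; machine SSHes to the new
# instance to install and configure Docker, and Rancher then starts
# the agent container on it
docker-machine create --driver amazonec2 \
  --amazonec2-region ap-south-1 \
  --amazonec2-vpc-id vpc-aaaa1111 \
  --amazonec2-private-address-only \
  remote-host-01
```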

If you do, it just needs to be accessible from the server container(s)’ IP(s), and SSH can be restricted to authorized keys only, with password login disabled (off-hand I’m not sure which provider/driver/OS combinations configure it that way by default, other than RancherOS).
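For example, if the hosts are in AWS, a security-group rule like this (group ID and server IP are placeholders) limits SSH to the server’s address, and password logins can be turned off in sshd_config:

```
# Allow SSH only from the Rancher server's IP
aws ec2 authorize-security-group-ingress \
  --group-id sg-eeee5555 \
  --protocol tcp --port 22 \
  --cidr 203.0.113.10/32

# And on each host, in /etc/ssh/sshd_config:
#   PasswordAuthentication no
#   PubkeyAuthentication yes
```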