Rancher Hybrid AWS and Private - VPN not established

Hi

I have a Rancher environment set up in AWS and it works fine. All the hosts connect using their private IPs, since using the public IPs is not possible in AWS due to SNAT. Now I want to add a host which is in our private data centre. The problem is (I think) that the CATTLE_AGENT_IP which the existing AWS hosts publish is not reachable by this remote host, and therefore the VPN cannot be established.
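
For reference, the AWS hosts were added with a registration command roughly like the one below, each publishing its VPC private IP (placeholder values; the agent version and registration URL will differ per setup):

sudo docker run -e CATTLE_AGENT_IP="<aws-host-private-ip>" --rm --privileged \
-v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/rancher:/var/lib/rancher \
rancher/agent:v1.2.2 https://<rancher-server>/v1/scripts/<registration-token>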

I tried changing the -e CATTLE_AGENT_IP to the AWS hostname, which resolves internally to the private IP and externally to the public one, but this did not work.
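
The split-horizon behaviour itself is easy to confirm with dig (the hostname and addresses below are only examples):

dig +short ec2-54-0-0-1.eu-west-1.compute.amazonaws.com
# from inside the VPC this returns the private IP, e.g. 10.0.1.15
# from outside AWS it returns the public IP, e.g. 54.0.0.1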

So, the question is, is there any way of creating a hybrid AWS + Private Rancher cluster?

Thanks


I presume from your description that you have an existing Rancher server and some hosts in AWS, and that you are now trying to add a Rancher host in your datacenter?

If so, then I believe CATTLE_AGENT_IP should be the IP address your private datacenter host is seen as from the outside world. Your command should be something like:

sudo docker run -e CATTLE_AGENT_IP="<your-dc-external-ip>"  --rm --privileged \
-v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/rancher:/var/lib/rancher \
rancher/agent:v1.2.2 https://<your-aws-public-ip>/v1/scripts/<your-agent-string>
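
Whichever address you use, it also needs to be reachable from the other hosts on UDP 500 and 4500, which are the ports Rancher's IPsec overlay network uses. A quick sanity check from one of the AWS hosts (placeholder IP; UDP scans can only report open|filtered, so treat the result as a hint rather than proof):

sudo nmap -sU -p 500,4500 <your-dc-external-ip>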

Hi @shubbard343. The problem is that in an AWS VPC the Rancher hosts communicate using their private IPs - this is necessary both because hosts within a VPC cannot actually reach each other's public IPs due to the way NAT works on AWS, and because even if they could, the cost would be high.

I sorted it out by running a pfSense AMI on AWS, opening a VPN to the data centre so that my internal host could access the AWS VPC private IPs, and then adding the host to Rancher.
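
For anyone following the same route, once the site-to-site VPN is up the flow looks roughly like this (all addresses below are placeholders for my setup):

# from the datacenter host, confirm an AWS host's private IP is reachable over the VPN
ping -c 3 10.0.1.15

# then register the datacenter host, publishing the IP the AWS hosts can route back to through the tunnel
sudo docker run -e CATTLE_AGENT_IP="192.168.10.20" --rm --privileged \
-v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/rancher:/var/lib/rancher \
rancher/agent:v1.2.2 https://<rancher-server>/v1/scripts/<registration-token>

The key point is that each side publishes an IP the other side can actually route to.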