How to create hosts in a private subnet in an AWS VPC

I’ve got a VPC on AWS with two subnets, set up according to Amazon’s Scenario 2. This means a public subnet, complete with public DNS/IP addresses, and a private one that only has private IP addresses. The public subnet has addresses in the range 10.0.1.0/24 and the private one in 10.0.0.0/24.
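For reference, a Scenario-2 layout like this can be sketched with the AWS CLI; the VPC ID and CIDRs below are illustrative placeholders, not values from my actual setup:

```shell
# Assumes a VPC with CIDR 10.0.0.0/16 already exists; vpc-0abc123 is a placeholder ID.
aws ec2 create-subnet --vpc-id vpc-0abc123 --cidr-block 10.0.1.0/24   # public subnet
aws ec2 create-subnet --vpc-id vpc-0abc123 --cidr-block 10.0.0.0/24   # private subnet
# In Scenario 2, the public subnet's route table sends 0.0.0.0/0 to an
# internet gateway, while the private subnet's route table sends 0.0.0.0/0
# to a NAT device, so private hosts have outbound-only internet access.
```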

I have set up a Rancher server on a host in the public subnet. From the UI I am able to launch and provision hosts in the public subnet without issues. However, when I try to set up a host in the private subnet, the machines launch but hang on the SSH step and then produce an error: Error running provisioning: Something went wrong running an SSH command!

Now, obviously the machine cannot reach private-subnet hosts on any public IPs, so I should mention that I’ve checked the box in the UI that asks Rancher to use private IP addresses. I am using the RancherOS AMI in all cases. I see a similar topic from July that ended with no answer (How to create an AWS VPC isolated environment with rancher 0.30). Is this a supported scenario in Rancher, and if so, what am I doing wrong? If not, are there plans to make this work? It seems like using SSH on the private IP address should be straightforward …
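One way to rule out basic connectivity, assuming you can shell into the Rancher server host (the key path and IP addresses below are placeholders for your own environment):

```shell
# From the Rancher server host in the public subnet, try SSH to a
# private host directly on its private IP (placeholder values).
ssh -i ~/.ssh/mykey.pem -o ConnectTimeout=5 rancher@10.0.0.25 'echo ok'
# If this times out, check that the private subnet's security group
# allows inbound TCP 22 from the public subnet's CIDR (10.0.1.0/24).
```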


You might check that the internal subnet can reach the external subnet. You also might need to set the CATTLE_* environment variables on the host so that everything resolves correctly.

I also tried to get this working, with two subnets in the same VPC, one for private hosts and one for public hosts. On the private hosts I started rancher/agent with CATTLE_AGENT_IP='private_ip_of_host', but this didn’t help. Private and public hosts still cannot communicate with each other.
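For anyone trying the same thing, the agent registration I’m describing looks roughly like this; the agent image tag, server URL, token, and IP are all placeholders that will differ in your environment:

```shell
# Register the host with Rancher using its private IP, so the server
# doesn't try to reach it on a public address it doesn't have.
# All values below are placeholders for your own environment.
sudo docker run -d --privileged \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -e CATTLE_AGENT_IP=10.0.0.25 \
  rancher/agent:v0.8.2 http://10.0.1.10:8080/v1/scripts/REGISTRATION_TOKEN
```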

It would be great if you could enable this use case or provide any further help here.

Thanks