How to create an AWS VPC isolated environment with Rancher 0.30

I have been attempting to create a VPC with multiple subnets and security groups, but I am finding that the interface doesn't allow it.

Setup is as follows:

Subnets/security groups: management, public, private
All of the SGs are allowed to reach every other SG's ports, both inbound and outbound. I have set the API's address to the internal Amazon address.
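For anyone trying to reproduce this, here is a minimal boto3 sketch of that rule set. The region and the three group IDs are placeholders, not my actual values:

```python
import itertools

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

# Placeholder IDs for the three security groups described above.
groups = {
    "management": "sg-11111111",
    "public":     "sg-22222222",
    "private":    "sg-33333333",
}

# Allow every group to reach every other group on all ports and protocols.
for src, dst in itertools.permutations(groups.values(), 2):
    ec2.authorize_security_group_ingress(
        GroupId=dst,
        IpPermissions=[{
            "IpProtocol": "-1",  # -1 = all protocols/ports
            "UserIdGroupPairs": [{"GroupId": src}],
        }],
    )
```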

My assumption was that this would work, since the booted-up boxes would attempt to connect to the private address of the Rancher server, which is in the management SG. Instead, the interface just creates the machine and then sits there; after quite a while it produces the error "Maximum number of retries (60) exceeded".

It appears the reason for this is that rancher-server is attempting to connect to these machines via their public IPs.
Is there a way to force the management traffic to use private addresses instead?
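In case it helps anyone hitting the same hang, a quick sketch of the check I ran from inside the rancher-server container to see which path is actually reachable (both addresses here are made-up placeholders):

```python
import socket


def reachable(host: str, port: int = 22, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


# Hypothetical addresses for one compute node.
print("public :", reachable("54.0.2.10"))     # the path docker-machine uses
print("private:", reachable("172.77.0.10"))   # the path I expected it to use
```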

I can create a machine in one of the public/private subnets/SGs and use the manual steps to add it to Rancher, and this works fine.

The only way I have been able to get Rancher working is by allowing all ports in both directions; even restricting the ports to the required ones doesn't work (there must be a few unlisted ones it needs).

I have tried enabling the public SSH port as well, but this doesn't change anything.

I can connect from the rancher-server container to a node, but I have no idea what the passphrase is, so I can't check connectivity the other way around.

Is this network structure supported, or should I just settle for adding machines manually?

@cf_alister just to clarify: you are attempting to create EC2 instances via our UI, is that correct?

When you do that, we use docker-machine to create the instance. It interacts with the Amazon API to create the host and then SSHes into the host to set up the Docker daemon. It should do this over the public IP/hostname. So, the necessary communication is for the rancher/server node to be able to reach port 22 on the public IP, and for the compute node's security group to allow incoming traffic on port 22. Seems like you already have that, though, right?
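Something like this boto3 sketch would open that SSH path (the group ID, region, and server IP below are placeholders, not values from this thread):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

COMPUTE_SG = "sg-22222222"            # placeholder: the compute node's SG
RANCHER_SERVER_IP = "203.0.113.5/32"  # placeholder: rancher/server's address

# Let docker-machine (running on the rancher/server node) SSH into new hosts.
ec2.authorize_security_group_ingress(
    GroupId=COMPUTE_SG,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 22,
        "ToPort": 22,
        "IpRanges": [{"CidrIp": RANCHER_SERVER_IP}],
    }],
)
```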

We'll try to reproduce this sometime soon. If you have any more details about how you're configuring the EC2 host, that would be helpful.

@cjellick Correct. After playing with this a bit more, I have worked out where I was going wrong, so at least the public side now works as expected.

I was expecting the address range to be one of the visible IP ranges: the subnet's private range (172.77.*) or the Docker container range (10.42.*). Instead, after digging deeper, I found that it is actually the 172.17.* (Rancher?) range.

After allowing communication for these ranges on UDP ports 500 and 4500 (the IPsec ports), and allowing SSH from the UI to the hosts' public IPs, everything started working again.
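For the record, a boto3 sketch of the rules that fixed it for me (the group ID and region are placeholders; the CIDR is the 172.17.* range mentioned above):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

HOST_SG = "sg-22222222"         # placeholder: SG the Rancher hosts live in
OVERLAY_CIDR = "172.17.0.0/16"  # the range the agents turned out to use

# IPsec ports used by the overlay network: IKE (500/udp) and NAT-T (4500/udp).
for port in (500, 4500):
    ec2.authorize_security_group_ingress(
        GroupId=HOST_SG,
        IpPermissions=[{
            "IpProtocol": "udp",
            "FromPort": port,
            "ToPort": port,
            "IpRanges": [{"CidrIp": OVERLAY_CIDR}],
        }],
    )
```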

Maybe the range it expects could be added to the 'Add Host' page? That would have helped me, at least. (I didn't want a blanket catch-all 0.0.0.0/0.)

Given the information you provided, private-only hosts would currently be impossible? So ticking 'private IP' would be a bad idea, at least for now? I do plan to have some hosts be private-only, for DBs, back-end services, etc. I have attached an image that may better explain what I am trying to achieve. All outgoing traffic is currently unrestricted.

Thanks for your help :smile:

Hey there, I was wondering how everything went with your setup? Are you still using Rancher with AWS?