Deploying Rancher into a private subnet of an AWS VPC

We’ve just run a successful POC of Rancher and are now looking to undertake a more robust build.

Our POC ran with the Rancher server and machines sitting in a public subnet, which seems to be the standard Rancher deployment model.

The deployment model we have in mind is a VPC with private and public subnets, which is a standard pattern we use in our other AWS-hosted applications.

It will look something like the attached simplified diagram (Private Rancher).

I did a search and found a couple of similar threads, but the difference is that they put the Rancher Server/UI in the public subnet, whereas ours will be in the private subnet.

Apart from the extra overhead of managing ELBs for the UI itself as well as for any services hosted within Rancher, is there anything else we should be aware of when deploying Rancher into a private subnet?

I was never able to get the private side of things going the way I wanted, so I ended up making it all public and restricting access with security groups. It may be a lot better now with the latest version, but we invested in the public side, so I’ll just leave it alone :slight_smile:

Good luck with it, and do let us know how you get on.

Just providing an update on this: we have deployed as per the simplified diagram above, and everything looks good so far.

Will post further updates as we progress.


Hi @jonathan_k

Now when I deploy a new load balancer it publishes the private IP address. How can I fix that? Any idea?

We have a Rancher master and three Rancher environments running across multiple VPCs and two regions:

Rancher Master (VPC 10.3.0.0) - us-west-1
Development Environment (VPC 10.2.0.0) - us-west-1
Staging Environment (VPC 10.3.0.0) - us-west-1
Production Environment (VPC 10.1.0.0) - us-east-1

We were able to use VPC peering for the VPCs in us-west-1, but if we wanted to put clusters in other VPCs or regions, it required that the Rancher master be publicly available.
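For the us-west-1 VPCs, the peering itself is just a couple of CLI calls plus a route on each side. This is only a sketch with placeholder VPC, peering connection, and route table IDs, using the development (10.2.0.0) and master (10.3.0.0) CIDRs as an example and assuming /16 masks:

# Request and accept a peering connection between the development and master VPCs (IDs are placeholders)
aws ec2 create-vpc-peering-connection --vpc-id vpc-11111111 --peer-vpc-id vpc-22222222 --region us-west-1
aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id pcx-33333333 --region us-west-1

# Add a route on each side pointing the other VPC's CIDR at the peering connection
aws ec2 create-route --route-table-id rtb-44444444 --destination-cidr-block 10.3.0.0/16 --vpc-peering-connection-id pcx-33333333 --region us-west-1
aws ec2 create-route --route-table-id rtb-55555555 --destination-cidr-block 10.2.0.0/16 --vpc-peering-connection-id pcx-33333333 --region us-west-1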

What we ended up doing was keeping the Rancher master in the VPC and putting it behind a public ELB, forwarding port 80 to an nginx container that redirects HTTP to HTTPS (https://github.com/Demandbase/docker-nginx-https-redirect, using the proxy_protocol configuration) and port 443 to port 8080 on the Rancher server.
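If you want to replicate it, the ELB side looks roughly like the sketch below. This is not our exact command: it assumes TLS terminates at the ELB with a certificate you have already uploaded, that the nginx redirect container listens on instance port 8081 (inferred from the backend-port policy commands further down), and the subnet/security group IDs, certificate ARN, and instance ID are placeholders. Note the listeners are TCP/SSL rather than HTTP/HTTPS, because proxy protocol only works on TCP-mode listeners.

# Public classic ELB in the public subnets, with TCP/SSL listeners so proxy_protocol can be enabled later
aws elb create-load-balancer --load-balancer-name rancher-master \
  --listeners "Protocol=TCP,LoadBalancerPort=80,InstanceProtocol=TCP,InstancePort=8081" \
              "Protocol=SSL,LoadBalancerPort=443,InstanceProtocol=TCP,InstancePort=8080,SSLCertificateId=arn:aws:iam::123456789012:server-certificate/rancher-master" \
  --subnets subnet-aaaa1111 subnet-bbbb2222 \
  --security-groups sg-cccc3333 \
  --region us-west-1

# Register the Rancher master instance behind the ELB
aws elb register-instances-with-load-balancer --load-balancer-name rancher-master --instances i-0123456789abcdef0 --region us-west-1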

Access is controlled by security groups that include the public IP address of each NAT gateway in the VPCs.
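Roughly, each rule looks like this; the security group ID and NAT gateway IPs below are placeholders, not our real ones:

# Allow HTTPS and the HTTP redirect only from each environment's NAT gateway public IP (placeholders)
aws ec2 authorize-security-group-ingress --group-id sg-cccc3333 --protocol tcp --port 443 --cidr 203.0.113.10/32 --region us-west-1
aws ec2 authorize-security-group-ingress --group-id sg-cccc3333 --protocol tcp --port 80 --cidr 203.0.113.10/32 --region us-west-1

# The production (us-east-1) NAT gateway IP goes into the same us-west-1 security group
aws ec2 authorize-security-group-ingress --group-id sg-cccc3333 --protocol tcp --port 443 --cidr 198.51.100.20/32 --region us-west-1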

This allowed us to access our Rancher master from anywhere and keep it closed off to only our infrastructure.

The one trick we found was that just forwarding the ELB web ports is not enough, since the Rancher UI uses WebSockets. This requires some AWS ELB CLI configuration for proxy_protocol:

# Create a new policy called rancher-master-proxyprotocol-policy for the rancher-master ELB
aws elb create-load-balancer-policy --load-balancer-name rancher-master --policy-name rancher-master-proxyprotocol-policy --policy-type-name ProxyProtocolPolicyType --policy-attributes AttributeName=ProxyProtocol,AttributeValue=true --region=us-west-1

# Configure the instance backend ports 8080 and 8081 to use the new policy
aws elb set-load-balancer-policies-for-backend-server --load-balancer-name rancher-master --instance-port 8080 --policy-names rancher-master-proxyprotocol-policy --region=us-west-1
aws elb set-load-balancer-policies-for-backend-server --load-balancer-name rancher-master --instance-port 8081 --policy-names rancher-master-proxyprotocol-policy --region=us-west-1

With this in place we created a Route 53 CNAME pointing to the ELB's DNS name and use it as our $RANCHER_URL when standing up new cluster instances. Now we can put Rancher environments anywhere the ELB security group allows access from.
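For completeness, the record and the environment variable look something like this; the hosted zone ID, domain, and ELB DNS name are placeholders:

# CNAME a friendly name to the ELB's DNS name (zone ID, domain, and ELB DNS name are placeholders)
aws route53 change-resource-record-sets --hosted-zone-id Z1EXAMPLE --change-batch '{"Changes":[{"Action":"UPSERT","ResourceRecordSet":{"Name":"rancher.example.com","Type":"CNAME","TTL":300,"ResourceRecords":[{"Value":"rancher-master-1234567890.us-west-1.elb.amazonaws.com"}]}}]}'

# Then use that name when registering new hosts
export RANCHER_URL=https://rancher.example.com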
