Feedback on Rancher 2.0 AWS Architecture


#1

Good morning,

My team and I are building a Kubernetes architecture from scratch on AWS, and we plan on using Rancher 2.0 to help us build our backend. I have an idea of what I want the backend to be, but that depends on the capabilities Rancher 2.0 has out of the box. I would like to briefly describe my architecture and get some feedback on whether this is possible with Rancher 2.0 once it is released in March.

I plan on having 1 VPC with 16 subnets, a mix of private and public. The public ones will mainly be used for NAT Gateways and Bastion Hosts. On a private subnet, I will stand up an Ubuntu Docker EC2 instance in an Auto Scaling Group of 1, to make sure it is always up, and I plan to put it behind an AWS Application Load Balancer (ALB). The ALB will terminate SSL and forward port 443. Once I am logged in to Rancher, I will create more hosts for Rancher Server, etcd, and the K8s instances using the RKE approach. I want to use the ALB as an Ingress Controller for K8s in Rancher, and I would also like to put AWS API Gateway in front of my API(s) deployed on Rancher.
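For the RKE step, I am picturing something like this minimal cluster.yml; the addresses, SSH user, and key path are just placeholders for my instances, not a finished config:

```yaml
# Minimal RKE cluster.yml sketch -- addresses, user, and key path are placeholders
nodes:
  - address: 10.0.1.10          # EC2 instance in a private subnet
    user: ubuntu
    role: [controlplane, etcd]
  - address: 10.0.1.11
    user: ubuntu
    role: [worker]
ssh_key_path: ~/.ssh/id_rsa
```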

I am not sure whether to use Ingress Controllers or NodePorts to expose my services; currently I am using NodePorts in Minikube. Will Rancher 2.0 support the AWS ALB Ingress Controller, and will the UI be able to help me set this up? Does it make sense to front my API(s) on Rancher with AWS API Gateway?
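For context, this is how I understand the two options; the names and ports here are just placeholders:

```yaml
# NodePort: exposes the Service on a fixed high port on every node
apiVersion: v1
kind: Service
metadata:
  name: my-api            # hypothetical service name
spec:
  type: NodePort
  selector:
    app: my-api
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080     # reachable as <node-ip>:30080
---
# Ingress: routes HTTP(S) traffic by host/path via an ingress controller
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-api
spec:
  rules:
    - host: api.example.com   # placeholder hostname
      http:
        paths:
          - path: /
            backend:
              serviceName: my-api
              servicePort: 80
```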

I am definitely new to setting up something this complex, and any advice from more experienced developers and architects would definitely be welcome.

Thanks,

Steven.


#2

IMHO this is a much too complicated setup. Why do you want 16 subnets? 6 should be enough.


#3

It is definitely a good amount of subnets :slight_smile: Only 4 will be used for Rancher; the rest are being used for Big Data applications such as Spark and Flink, and for other parts of the app. Aside from the subnet count, do you see any other part that could be simplified?


#4

Hi,
Databases are business-critical, and data loss or leakage leads to major operational risk in any organization. A single operational or architectural failure can cost significant time and resources, which makes failover systems and procedures necessary to mitigate a loss scenario. Before migrating a database architecture to Kubernetes, it is essential to do a cost-benefit analysis of running a database cluster in containers on AWS versus on bare metal, including the potential pitfalls, by evaluating your disaster recovery requirements for Recovery Time Objective (RTO) and Recovery Point Objective (RPO).