Q: Best practice for configuring Rancher/K8s on AWS

I’m struggling to understand how to properly expose web applications deployed on AWS with Rancher in a Kubernetes cluster. As a starting point, I have a single rancher/server instance backed by an external database (RDS), and a Kubernetes cluster with 4 nodes. Using kubectl, I’ve deployed the ‘my-nginx’ example, and from the Rancher UI I’ve rolled out the Guestbook example from the catalog.
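For reference, the nginx deployment I applied is roughly the following (a minimal sketch; I’m paraphrasing the upstream ‘my-nginx’ example from memory, so the labels and replica count may not match the official manifest exactly):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-nginx
    spec:
      replicas: 2
      selector:
        matchLabels:
          run: my-nginx
      template:
        metadata:
          labels:
            run: my-nginx
        spec:
          containers:
          - name: my-nginx
            image: nginx        # stock nginx image from Docker Hub
            ports:
            - containerPort: 80 # nginx serves on port 80 by default

Applied with a plain `kubectl apply -f my-nginx.yaml`, and the pods come up fine across the nodes.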

After some experimenting, I believe I should create ingress load balancers to handle incoming user traffic, but these run on the nodes in the cluster, and I’m hesitant to expose those nodes directly to the outside world. My assumption was that I would put an AWS ELB (or ELBv2/ALB) in front of the ingress load balancers, and that the instances actually running them would somehow be registered and deregistered with the ELB automatically. Manually adding instances to the ELB seems wrong, given that the cluster is supposed to heal itself when it loses nodes or has to move services around. (The sketch below shows roughly what I was picturing.)
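For example, my understanding is that a Service of type LoadBalancer would get Kubernetes itself to provision the ELB and keep the node registrations up to date, assuming the cluster runs with the AWS cloud provider integration enabled (which I’m not sure Rancher configures for me). A minimal sketch of what I had in mind, with the `my-nginx-lb` name being my own and the selector matching the deployment above:

    apiVersion: v1
    kind: Service
    metadata:
      name: my-nginx-lb
    spec:
      type: LoadBalancer   # asks the cloud provider to provision an ELB
      selector:
        run: my-nginx      # routes to the pods from the deployment above
      ports:
      - port: 80           # port the ELB listens on
        targetPort: 80     # container port on the pods

But I don’t know whether this is the intended pattern with Rancher, or whether I’m supposed to wire up the ELB to the ingress load balancer instances myself.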

Since I’m struggling to figure this out, my thinking is probably going against the grain somewhere, so I’m looking for advice or best practices on how this is deployed in the real world.