Starter questions for 5. Load balancing / RKE HA

Hi,

2 questions after 1 day into rancher/k8s.

Question 1
I’m pretty well versed in Docker and have been running and deploying Docker containers/environments through Git CI/CD, etc. for some time. However, I never really went down the Swarm side of things with Docker. I feel I really should be able to figure this issue out myself, but I don’t think I’m approaching it from the right angle. So, at the risk of asking a stupid question…

I have 3 nodes, each running RKE on Ubuntu 18.04. Everything was, disconcertingly, far easier to set up than I was expecting. Node layout:
Node 1 - Master Node
Node 2 - All roles
Node 3 - Worker Node

So, first quick test: deploy an NGINX container using a NodePort service to ensure all nodes can route to the pod/container. Set it up with one instance, then increase it to 2 to test scaling. All works perfectly. Now I need to allow ingress into my 2 nginx containers. I set up an ingress under the Load Balancing section. I give it a hostname that I plan to add to my on-site DNS server on my domain. I point it at the nginx workload on port 80 and save it all. No problems.
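For context, what Rancher created for me is roughly equivalent to the manifests below (a sketch from memory; the names, selector and nodePort value are just placeholders, and depending on your Kubernetes version the Ingress apiVersion may be `networking.k8s.io/v1beta1` instead):

```yaml
# NodePort service in front of the nginx workload (exposed on every node)
apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport        # placeholder name
spec:
  type: NodePort
  selector:
    app: nginx                # placeholder selector for the nginx workload
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080         # placeholder node port
---
# Ingress routing the chosen hostname to the nginx service on port 80
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  rules:
    - host: nginx.example.lan # the hostname I plan to add to on-site DNS
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-nodeport
                port:
                  number: 80
```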

Now I’m left with the question of which “public” (i.e. real LAN) IP address should be tied to that DNS entry. I could tie the IP of one of the nodes to the DNS entry and it would work, since I’m using a NodePort setup, but what happens when that node dies? If I continue down that road, it means setting up round-robin DNS for every node in the cluster, which obviously doesn’t feel right. I guess my confusion stems from thinking that by adding the load balancer/ingress solution, I would be able to assign a VIP that was accessible and routable on all nodes. That doesn’t appear to be the case, and I’m struggling to find any documentation with a simple walkthrough that answers this specific question.

2nd question
I’d like to start as I mean to go on in terms of scalability, so I’ve spent some time reading up on the differences between the k3s and RKE solutions. I like the idea of k3s using dqlite (experimental at the moment), so ideally I would like to aim for that type of setup. In the interim, however, while learning, I would like to build RKE in an HA setup, and I was looking for confirmation that this is possible and whether it is generally accepted as a viable production setup.

Thanks for any pointers you can give.

1. public address
for hosted setups, use an external load balancer: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/
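a minimal sketch of what that looks like as a Service manifest (name and selector are placeholders; on a hosted/cloud cluster the provider allocates the external IP, on bare metal it will just sit in pending unless something on your network implements it):

```yaml
# Service of type LoadBalancer; on a hosted/cloud cluster the provider
# provisions the external load balancer and fills in the external IP.
apiVersion: v1
kind: Service
metadata:
  name: nginx-lb              # placeholder name
spec:
  type: LoadBalancer
  selector:
    app: nginx                # placeholder selector
  ports:
    - port: 80
      targetPort: 80
```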

for other setups, I only have some hints and guesses (still trying things out myself).

2. ha
To me it sounds like HA with RKE works fine, but I have limited experience. For the number of nodes and roles, check https://rancher.com/docs/rancher/v2.x/en/cluster-provisioning/production/recommended-architecture/
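as a rough sketch (addresses, user and key path are placeholders), a 3-node HA cluster.yml for RKE would look something like this; 3 etcd nodes keep quorum with one node down:

```yaml
# Sketch of an RKE cluster.yml for a 3-node HA cluster, all roles on every
# node. Addresses, user and ssh key path are placeholders.
nodes:
  - address: 192.168.1.11
    user: ubuntu
    role: [controlplane, etcd, worker]
  - address: 192.168.1.12
    user: ubuntu
    role: [controlplane, etcd, worker]
  - address: 192.168.1.13
    user: ubuntu
    role: [controlplane, etcd, worker]
ssh_key_path: ~/.ssh/id_rsa
```

then you just run `rke up` against it.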
(by the way, dqlite seems to be replaced soon)