Confirmation I have understood the ingress process


Could someone confirm that I have understood the setup of the Ingress process please? Here’s how I understand the process using a very simple setup.

  • 3-node cluster: 1 is the primary Rancher server, and the remaining 2 nodes hold all roles.
  • I’ve spun up an nginx pod and exposed port 80 on the container.
  • I then spawn a second instance. I now have 1 pod on and the other on
  • A Service is created automatically (I end up with 2; not sure why yet, but not important for now).
  • Both instances of nginx are available via their randomly assigned ports at http:// and http://
  • Now I create an Ingress and add the hostname as the host for the ingress rule, then point the ingress at my nginx workload.
  • Rancher creates yet another Service for ingress (really don’t understand why it needs so many).
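The two auto-created Services with randomly assigned ports sound like NodePort Services. A minimal sketch of what one looks like — the name and selector label here are assumptions for illustration, not necessarily what Rancher generates:

```yaml
# Sketch of a NodePort Service (names/labels are assumed, not Rancher's actual output)
apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport
spec:
  type: NodePort
  selector:
    workload: nginx      # assumed label on the nginx pods
  ports:
    - port: 80           # Service port inside the cluster
      targetPort: 80     # container port on the pod
      # nodePort omitted: Kubernetes assigns a random port in 30000-32767,
      # which is the "randomly assigned port" you see on each node
```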

At this point, my assumption is that this ingress service is replicated across all worker nodes. All I need to do now is add all my worker node IPs to my DNS under the hostname test-nginx.
Granted, round-robin DNS is not a load balancer, but for a simple PoC it'll do. The L7 ingress will check the Host header for the target hostname and direct traffic to one of the pods in the matching workload.
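For reference, the Ingress described above would look roughly like this as a manifest. The host `test-nginx` comes from the post; the backend Service name is an assumption — it would be whichever Service Rancher generated for the workload:

```yaml
# Sketch of the host-based Ingress described above (backend name is assumed)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-nginx
spec:
  rules:
    - host: test-nginx        # hostname the round-robin DNS records resolve to
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx   # assumed Service name for the nginx workload
                port:
                  number: 80
```

The ingress controller listens on ports 80/443 on every worker node, so pointing DNS A records for `test-nginx` at all worker node IPs works exactly as described: whichever node receives the request, the controller matches the Host header and forwards to a pod in the workload.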

Is that the basic idea? Are these the correct steps? This is how I have it working but I feel I’m missing something.

Side question: In the design above (3-node cluster: 1 is the primary Rancher server, the remaining 2 nodes hold all roles), I wanted to see what would happen if I removed the 3rd node. I was expecting everything to keep going, as I'd be left with 2 etcd nodes and a control plane node, and Kubernetes would spin up an additional pod on the remaining worker node. What actually happened was that the Rancher server had a meltdown and was unresponsive, with the error message:

Error: Get dial tcp i/o timeout

Any pointers as to why it fails would be appreciated as well as confirmation on whether I have the ingress setup process correct.


Which version of Rancher is this? For 2.4+ this doesn’t sound like a valid cluster.

For a non-HA setup you could have 1 Rancher server node. But then you’d need a valid Kubernetes cluster, which has to have an odd number of etcd nodes for etcd to work.
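That's also the likely answer to the side question: etcd needs a majority (quorum) of its members to stay available. With 2 etcd nodes, quorum is still 2, so removing one node loses quorum and the API server (and Rancher with it) stops responding — consistent with the `dial tcp i/o timeout`. A quick sketch of the arithmetic:

```python
# etcd quorum: a cluster of n members needs floor(n/2) + 1 members
# up to accept writes; anything less and the cluster is unavailable.

def quorum(members: int) -> int:
    return members // 2 + 1

def tolerated_failures(members: int) -> int:
    return members - quorum(members)

for n in (1, 2, 3, 5):
    print(f"{n} etcd member(s): quorum={quorum(n)}, "
          f"can lose {tolerated_failures(n)} node(s)")
# 2 members can lose 0 nodes -- which is why removing the 3rd node
# (leaving 1 of 2 etcd members) took the whole cluster down.
```

This is also why odd member counts are recommended: going from 3 to 4 members raises quorum from 2 to 3 without tolerating any additional failures.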