Understanding standard cluster and Ingress behavior (Nginx)

Hey there,

I have set up a cluster with one master node (without the worker role) and two worker nodes, deployed your hello-world app, and set up an Ingress.

Result: for the Ingress, a .xip.io link is created containing the IP address of one of my nodes.
xip.io simply resolves the URL to that IP address, as I have learned from their homepage.

First question: when I curl a node's IP address directly (`curl -s <node-IP>`) instead of using the xip link, I get a 404. Why can I access the app via the xip link but not directly via the node's IP address, when xip.io does nothing but resolve to that IP address? Shouldn't I also be able to curl my second node directly and reach the page, since the nginx controllers are deployed to all nodes?

In general, the app runs fine when accessed via the xip link, until I shut down the node whose IP address is embedded in the link. Obviously I cannot use the xip link anymore, as the node holding that IP address is dead.

Now I want to make my app accessible again while running on just one worker node. What would be the best strategy for that?

Question: when I now add another Ingress, it automatically adopts the IP address of the shut-down node into its xip URL again. Why does it do this? Of course, that URL doesn't work. Changing the IP address in the YAML file to my second node (and thereby changing the URL) doesn't help; the link stays broken.
Likewise, when I create a new Ingress and manually define the hostname as a xip link containing the IP address of my second node, the URL also gets changed back to the IP address of my first node and breaks.
What is happening here behind the curtains?

Side question: is it possible to define two Ingresses for one and the same service?

Many thanks in advance for your support,

It sounds like you may be misunderstanding how an L7 ingress works.

A single ingress point isn’t intended to accept traffic for just one service; rather, it can route traffic for several different services. It uses name-based virtual hosting to accomplish this, per the rules you’ve set up under ‘Load Balancing’ in Rancher.

For example, if you’ve set up a service named ‘blue’ and then a Load Balancing rule that says:
blue.my.domain > blue (service)

Then any traffic that comes into the ingress point for ‘blue.my.domain’ will be routed to the service ‘blue’.

The ingress depends on the destination host in the request (the HTTP Host header) to know where the traffic should go. So it’s not surprising that when you curl the IP address of the ingress point directly, it has no idea which backing service you’re trying to reach and just returns a 404.
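You can see this for yourself by supplying the Host header manually. A sketch, assuming a hello-world rule bound to a xip.io hostname (the IP and hostname below are placeholders for your own cluster):

```shell
# Bare IP: no Host header matches any ingress rule, so nginx answers 404.
curl -s -o /dev/null -w '%{http_code}\n' http://203.0.113.10/

# Same IP, but with the Host header the ingress rule expects:
# nginx now matches the rule and forwards to the backing service.
curl -s -H 'Host: hello.203.0.113.10.xip.io' http://203.0.113.10/
```

This is also why curling your second node should work, as long as you send the right Host header: the nginx controller runs on every node, and it's the header, not the IP you connect to, that selects the service.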

So you need two parts for your traffic to reach the service: a DNS record that resolves to your ingress point (e.g. an A record: blue.my.domain -> ingressIP), and a rule in your ingress that routes that traffic to the appropriate backing service.
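Rancher builds the ingress rule for you from the Load Balancing UI, but under the hood it amounts to something like the following sketch (field names per the Kubernetes `networking.k8s.io/v1` Ingress API; the service name and port are assumptions for the ‘blue’ example above):

```yaml
# Routes requests whose Host header is blue.my.domain to service 'blue'.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: blue-ingress
spec:
  rules:
  - host: blue.my.domain
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: blue
            port:
              number: 80
```

Several such `host` rules can live in one Ingress (or in separate Ingress objects), which is how a single ingress point serves many services.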

Hope this helps.


Yes, you are right :slight_smile:

Many thanks, you absolutely pointed me in the right direction of getting it straight.