Getting ingress to work with default Rancher 2.x setup

I have been trying to get Ingress working in Rancher 2 using only the UI (no kubectl), but I am unable to get it to route traffic.

Here are the steps I’ve taken:

  • Added a cluster with all 3 roles (this is not on the same server as Rancher itself, so there are no port conflicts)
  • Added a simple workload with the nginx image, without binding any ports (I also verified that nginx itself works by binding a random NodePort and hitting that URL, which returns the default “Welcome to nginx” page)
  • Added an Ingress with a .xip.io domain that routes to one of the following (I’ve tried both; roughly equivalent to the manifest sketched after this list)
    • Workload - nginx - port 80
    • Service - nginx - port 80
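
I’m doing all of this through the UI, but as far as I can tell what it creates should be roughly equivalent to this manifest. The names and the xip.io host below are placeholders from my setup, and on this cluster version the Ingress API is still extensions/v1beta1, so treat it as a sketch rather than the exact object Rancher generates:

kubectl apply -f - <<'EOF'
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-test               # hypothetical name
  namespace: default
spec:
  rules:
  - host: nginx.1.2.3.4.xip.io   # placeholder xip.io host pointing at a node IP
    http:
      paths:
      - backend:
          serviceName: nginx     # the service created for the nginx workload
          servicePort: 80
EOF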

Whenever I try to access the xip.io domain (e.g. with curl -vL) I get: Failed to connect to ... port 80: No route to host

Am I missing something?

By default, the NGINX ingress controller listens on ports 80 and 443. If those do not return “404 - default backend” after the cluster is set up, either the controller is not able to start (check the NGINX ingress controller log) or there is a firewall in between, or on the host itself, blocking access to it.
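
To make that concrete, something along these lines should show whether the controller is up (namespace and object names are the defaults on an RKE-provisioned cluster, adjust if yours differ):

# Is the controller pod running on the node(s)?
kubectl -n ingress-nginx get pods -o wide

# Check its log for errors
kubectl -n ingress-nginx logs ds/nginx-ingress-controller --tail=50

# From the node itself, an unmatched request should hit the default backend and return 404
curl -v http://127.0.0.1/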

It seems the NGINX ingress controller is running just fine and there are no TCP ports blocked in the iptables.
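
For completeness, this is roughly how I checked for blocked ports on the host (just a quick sanity check, not exhaustive):

# Look for anything dropping or rejecting traffic
iptables -S | grep -E 'DROP|REJECT'

# Confirm something is listening on 80/443 on the host
ss -tlnp | grep -E ':80 |:443 '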

Here is the NGINX ingress controller log right after startup:

nginx version: nginx/1.13.12
W1022 20:21:56.044643       5 client_config.go:533] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I1022 20:21:56.045135       5 main.go:183] Creating API client for https://10.43.0.1:443
I1022 20:21:56.119876       5 main.go:227] Running in Kubernetes cluster version v1.11 (v1.11.3) - git (clean) commit a4529464e4629c21224b3d52edfe0ea91b072862 - platform linux/amd64
I1022 20:21:56.125415       5 main.go:100] Validated ingress-nginx/default-http-backend as the default backend.
I1022 20:21:56.868740       5 nginx.go:250] Starting NGINX Ingress controller
I1022 20:21:56.918263       5 event.go:218] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"15e8d92e-d4b1-11e8-a2ac-96000009010a", APIVersion:"v1", ResourceVersion:"511", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
I1022 20:21:56.918502       5 event.go:218] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"15e6ea40-d4b1-11e8-a2ac-96000009010a", APIVersion:"v1", ResourceVersion:"509", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
I1022 20:21:56.928582       5 event.go:218] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"nginx-configuration", UID:"15c7b99b-d4b1-11e8-a2ac-96000009010a", APIVersion:"v1", ResourceVersion:"507", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/nginx-configuration
I1022 20:21:58.071116       5 nginx.go:271] Starting NGINX process
I1022 20:21:58.071678       5 leaderelection.go:175] attempting to acquire leader lease  ingress-nginx/ingress-controller-leader-nginx...
W1022 20:21:58.073978       5 controller.go:349] Service "ingress-nginx/default-http-backend" does not have any active Endpoint
I1022 20:21:58.074119       5 controller.go:169] Configuration changes detected, backend reload required.
I1022 20:21:58.119725       5 leaderelection.go:184] successfully acquired lease ingress-nginx/ingress-controller-leader-nginx
I1022 20:21:58.119836       5 status.go:197] new leader elected: nginx-ingress-controller-9tqwx
I1022 20:21:58.444298       5 controller.go:179] Backend successfully reloaded.
W1022 20:22:01.407385       5 controller.go:349] Service "ingress-nginx/default-http-backend" does not have any active Endpoint
W1022 20:22:04.740741       5 controller.go:349] Service "ingress-nginx/default-http-backend" does not have any active Endpoint
W1022 20:22:08.241360       5 controller.go:349] Service "ingress-nginx/default-http-backend" does not have any active Endpoint
W1022 20:22:11.960042       5 controller.go:349] Service "ingress-nginx/default-http-backend" does not have any active Endpoint
I1022 20:22:25.804079       5 controller.go:169] Configuration changes detected, backend reload required.
I1022 20:22:26.242559       5 controller.go:179] Backend successfully reloaded.

That looks fine, yes. If you log in to a host with all 3 roles and run curl http://127.0.0.1, curl http://IP_OF_NODE and curl http://XIP_DOMAIN, what do they return? That will rule out local vs remote access.
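
That is, run these three from a shell on the node itself (IP_OF_NODE and XIP_DOMAIN being your node’s IP and your xip.io hostname):

curl -v http://127.0.0.1/
curl -v http://IP_OF_NODE/
curl -v http://XIP_DOMAIN/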

Interestingly enough, none of those route correctly:

* Rebuilt URL to: http://{IP}/
*   Trying {IP}...
* TCP_NODELAY set
* connect to {IP} port 80 failed: No route to host
* Failed to connect to {IP} port 80: No route to host
* Closing connection 0
curl: (7) Failed to connect to {IP} port 80: No route to host

Aside from this, when I launch a default nginx workload and bind it to a random NodePort, I am able to reach that port through the node’s IP address. So the server is reachable externally, but ports 80 and 443 on the ingress controller are simply not routing somehow.
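
To illustrate the contrast (NODE_IP and the NodePort value here are just placeholders from my test):

curl -v http://NODE_IP:31234/   # random NodePort bound to the nginx workload -> works
curl -v http://NODE_IP/         # port 80 via the ingress controller -> No route to host
curl -v http://NODE_IP:443/     # same story on 443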

Update:
I even tried setting the default iptables policies to ACCEPT

iptables -P INPUT ACCEPT
iptables -P OUTPUT ACCEPT
iptables -P FORWARD ACCEPT

But even then it does not seem to be working

Setting the default chain policies doesn’t help if you have a daemon running that manages your iptables rules. What OS are you on and what firewall is present?
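
For example, depending on the distribution, something like this will tell you whether a firewall daemon is in play:

# Is firewalld or ufw managing the rules?
systemctl status firewalld
systemctl status ufw        # or: ufw status verbose

# And what is actually loaded right now
iptables -S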

Are all 3 commands I gave you returning No route to host?

I was actually able to fix it in the meantime; it ended up being iptables after all. I cleaned up the existing rules, which included remnants from the Rancher 1.6 agent, and used ufw to open the necessary ports. Thanks for the help :slight_smile:
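
For anyone who lands on this later, what I ran was along these lines. The port list is taken from the standard Rancher 2.x node requirements, so treat this as a sketch rather than an exact record of my session:

# Flush the stale rules left over from the Rancher 1.6 agent
# (careful: this removes all current rules)
iptables -F

# Open the ports a Rancher 2.x node needs with ufw
ufw allow 22/tcp
ufw allow 80/tcp
ufw allow 443/tcp
ufw allow 2376/tcp
ufw allow 2379:2380/tcp
ufw allow 6443/tcp
ufw allow 8472/udp
ufw allow 10250/tcp
ufw allow 30000:32767/tcp
ufw enable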

Hi - I am having this problem as well. I don’t have a firewall running; iptables is present, but I doubt it is the cause.

The error I get is connection refused on both 80 and 443. The only way I could get this working was by exposing the pod on port 80 via a NodePort, and then it worked, but obviously that bypasses the ingress.

I am running on an RKE2 cluster: v1.24.6+rke2r1

I have 1 master node and 3 worker nodes, and I get connection refused across all of them. All ingress pods are healthy.
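
For context, this is roughly how I’m checking the ingress pods (assuming the default rke2-ingress-nginx deployment that ships with RKE2; object names may differ between versions):

kubectl -n kube-system get pods -o wide | grep ingress
kubectl -n kube-system logs ds/rke2-ingress-nginx-controller --tail=50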

I’ll create a separate topic