Default ingress not working on RKE2 cluster: v1.24.6+rke2r1

Hi - I have set up an RKE2 cluster and everything is healthy, however I cannot connect to the default ingress that has been deployed. Everything is showing healthy, but I get a connection refused error on ports 80 and 443 on all nodes.

I don’t have a firewall running. iptables is present, but I believe it is configured properly, because if I create a pod that exposes the container's port 80 directly on the node, I can connect to each node on port 80. Obviously, though, that is not going through the ingress, which is what I need to be working.
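For illustration, a test along those lines might look something like the following hostNetwork pod (the actual manifest used isn’t shown here; the name and image are placeholders):

```yaml
# Illustrative only: an nginx pod on the host network, so the container's
# port 80 is reachable directly on port 80 of whichever node it lands on,
# without going through the ingress (or the CNI's hostPort handling).
apiVersion: v1
kind: Pod
metadata:
  name: port80-test          # placeholder name
  namespace: default
spec:
  hostNetwork: true
  containers:
    - name: nginx
      image: nginx:1.23
      ports:
        - containerPort: 80
```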

I am running on an RKE2 cluster: v1.24.6+rke2r1

I have 1 master node and 3 worker nodes.

I just reran this on a fresh install of Ubuntu and got the same problem. I suspected it could be because of Cilium, so I deleted the cluster and reran the install without a CNI set (so it defaults to Canal).

This works straight away with the default ingress.
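For reference, the difference between the two installs comes down to the cni setting in the RKE2 config, roughly like this (assuming the standard /etc/rancher/rke2/config.yaml location):

```yaml
# /etc/rancher/rke2/config.yaml on the server node (sketch)

# Install that shows the problem: Cilium selected explicitly.
cni: cilium

# Install that works straight away: leave the cni key out entirely,
# and RKE2 falls back to its default CNI, Canal.
```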

Is there something additional that needs to be done for the Cilium CNI for the default ingress to work?

If I deploy the RKE2 server with Canal as the default and then upgrade to Cilium, I end up with a working Cilium implementation and a working default ingress. Correction - only until I reboot, at which point the ingress starts failing with connection refused.

However, if I launch the RKE2 server with Cilium to begin with, the default ingress doesn’t work, and I cannot convert to Canal to ‘get it working’.

Hello Pete,

Have you checked that the URL you are using to access the Rancher UI matches the host rule of the Ingress?

Here is how:

  • Edit the running Ingress
kubectl edit ingress -n cattle-system -o yaml
  • Scroll down and replace the values depicted below with yours.
    If you are using a wildcard (or a URL-specific) cert, update step 2 accordingly: either leave the wildcard in place or change it to the URL as in step 1.

[screenshot of the Ingress YAML: step 1 marks the host rule, step 2 marks the TLS/cert entry]
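As a rough stand-in for the screenshot, the relevant parts of the Ingress spec look like this (the hostname and secret name below are placeholders):

```yaml
# Excerpt of the Ingress in cattle-system (placeholder values)
spec:
  rules:
    - host: rancher.example.com        # step 1: must match the URL used to reach the Rancher UI
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: rancher
                port:
                  number: 80
  tls:
    - hosts:
        - rancher.example.com          # step 2: wildcard or URL-specific cert host
      secretName: tls-rancher-ingress  # placeholder secret name
```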

Hope this helps.

Rodrigo

It turns out the default ingress doesn’t work with Cilium out of the box; this bug fix resolves it:
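The fix itself isn’t quoted here, but the usual shape of the workaround is to give Cilium hostPort support, since the bundled rke2-ingress-nginx DaemonSet publishes ports 80 and 443 via hostPort. One common way (a sketch under that assumption, not necessarily the exact change in the linked fix) is a HelmChartConfig that enables portmap CNI chaining for the rke2-cilium chart:

```yaml
# /var/lib/rancher/rke2/server/manifests/rke2-cilium-config.yaml (sketch)
# Enables CNI chaining with the portmap plugin so hostPort - which the
# default rke2-ingress-nginx DaemonSet relies on for ports 80/443 - works
# under Cilium. Place it on the server node; RKE2 picks up manifests in
# this directory and reconciles the packaged chart.
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-cilium
  namespace: kube-system
spec:
  valuesContent: |-
    cni:
      chainingMode: portmap
```

Alternatively, newer Cilium versions can handle hostPort natively when kube-proxy replacement is enabled; check the RKE2 release notes for what actually landed in the bundled chart.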