Rancher ingress basic question

Hi

I have three nodes.

  1. Rancher node (only runs the Rancher container)
  2. Master (etcd and control plane only, no worker)
  3. Worker

I am trying to test ingress. I deployed an httpd (Apache) container and chose NodePort (random). Then on the ingress I entered a hostname (nginx.localdomain), path (/), service name (apache), and set the port to 80.

This is not working. When I hit the URL from the browser, I can't reach port 80. On my DNS server I pointed nginx.localdomain to my master node IP. My understanding is that ingress should route from a master node to a worker node. What am I doing wrong?
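
For reference, I believe the rule I created in the UI corresponds roughly to a Kubernetes Ingress object like the sketch below (the object name and namespace are just placeholders I picked, and on newer clusters the apiVersion would be networking.k8s.io/v1 instead):

# Rough sketch of the ingress rule described above (names/namespace assumed)
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: apache-ingress        # placeholder name
  namespace: default          # assumed namespace of the apache workload
spec:
  rules:
    - host: nginx.localdomain
      http:
        paths:
          - path: /
            backend:
              serviceName: apache   # service created for the httpd workload
              servicePort: 80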

Thanks
Paras.

Hello,
how do you start the Rancher container? You should use, for example, only 8443 for the Rancher management interface.
It works to run Rancher + etcd + control plane on one system and the workers separately on their own systems.

Ralf

The Rancher container is running on a dedicated virtual machine, etcd + control plane on another VM, and the worker on a third VM. I don't think I have issues with the Rancher container. My question is about ingress for an httpd app that I deployed.

I have the same thing happening here too. A slightly different setup, but it's still getting me… :frowning:

If you check the Workloads in your System project, you will notice that the rancher/nginx-ingress-controller containers run only on the worker nodes. There is no ingress on the control plane nodes. Therefore your load balancer or DNS should bring you to the worker nodes on ports 80 and 443 in order to reach the ingress and then your Apache.
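
A quick way to confirm this is to check where the ingress controller pods are scheduled. On an RKE-built cluster they normally live in the ingress-nginx namespace (the namespace and pod names may differ on your setup):

kubectl -n ingress-nginx get pods -o wide
# the NODE column should list only your worker node(s)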

This means that if we are doing a multi-node cluster setup, we need an external load balancer. Is this true?

Yes. There are possible shortcuts, but they come with significant danger, which is why the installation guide calls for a pair of load balancers.

Just so you're aware:

  1. You could add all the worker IPs to a round-robin DNS record, but expect a very painful failover. In theory you can set a low TTL on those records, but it never works flawlessly in practice.
  2. You can run keepalived or similar to add a VIP directly to the worker nodes, but there are even more implications: the IP can flip to another node when a node is under higher load, breaking connections when it flips. Worse, iptables rules on the worker node may interfere with keepalived, and having a second IP on the node might cause trouble for the node setup. (A minimal sketch of such a keepalived config follows this list.)
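
If you did go the keepalived route despite the caveats above, the idea is a single floating VIP shared by the workers. A minimal sketch of /etc/keepalived/keepalived.conf could look like this (the interface name, router ID, and VIP are assumptions you would adjust per node):

vrrp_instance RANCHER_VIP {
    state BACKUP              # let priority decide which node holds the VIP
    interface eth0            # assumed NIC name on the worker
    virtual_router_id 51      # must match on all workers
    priority 100              # give each worker a different priority
    advert_int 1
    virtual_ipaddress {
        192.168.1.200/24      # the floating VIP your DNS would point at
    }
}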

Anyway, an LB it is. An LB is not that difficult. Here is our nginx configuration as an example. Just apt-get install nginx and drop the following into your /etc/nginx/nginx.conf, changing the worker names/IPs:

load_module /usr/lib/nginx/modules/ngx_stream_module.so;
worker_processes 4;
worker_rlimit_nofile 40000;
events {
    worker_connections 8192;
}
http {
    # Redirect plain HTTP to HTTPS
    server {
        listen         80;
        return 301 https://$host$request_uri;
    }
    # Local-only status endpoint for monitoring
    server {
        listen 127.0.0.1:81;
        location /nginx_status { stub_status; }
    }
}
# TCP pass-through on 443; TLS is terminated by the ingress on the workers
stream {
    upstream worker_servers {
        least_conn;
        server worker1:443 max_fails=3 fail_timeout=5s;
        server worker2:443 max_fails=3 fail_timeout=5s;
        server worker3:443 max_fails=3 fail_timeout=5s;
    }
    server {
        listen     443;
        proxy_pass worker_servers;
    }
}
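
After editing the config, it's worth validating and reloading nginx before testing (standard nginx commands; paths assume the distro package):

nginx -t                 # check the configuration syntax
systemctl reload nginx   # apply it without dropping connections

Note that with this setup TLS is passed straight through to the ingress controllers on the workers (the stream block does plain TCP proxying), while port 80 on the LB only issues a redirect to HTTPS.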

Thanks, Anthony, for the details.