Rancher 2.x - Port and IP Confusion

Hello folks,

I recently deployed Rancher 1.x on my test farm and was very pleased. Naturally I wanted to try 2.x, so I installed it.

My setup:

  • 2x CentOS 7, each with the latest Docker CE version
  • One is running my Rancher container (internal IP: 192.168.0.225)
  • The other one is configured as a node (internal IP: 192.168.0.228)

I just played around with it a bit and deployed a container.

My question:
It was no problem to deploy the container; I am just wondering about the port/expose part of it. I thought I could expose my container on port 80 or port 443, but I can't. In Rancher v1 I was able to connect via Node-IP:80 or Node-IP:443 - I simply configured a load balancer and the magic was done.

Is there a “hidden” way I didn't find? I just need some tips to find my way around networking with containers :slight_smile:

Thank you very much
Atomique

PS: Please don't hesitate to ask me for details!


Hi,
I have the same problem, even though my master and node are on the same bare-metal machine.
Maybe it is a port issue?

I also need some hints on whether load balancing works with the RKE ingress controller by default.

Hey,

I'm sorry, I just forgot to follow up on my own post.

I resolved it: just create a host port for that container and don't forget to open that port in your firewall (Rancher server and Rancher node). As for ports 80/443: I found out that these ports are used by the ingress load balancer, which you can't disable. You are forced to use it.
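For anyone who wants to see what that looks like underneath, here is a minimal sketch of a workload with a host port as Kubernetes YAML. The names and the 8080 host port are just placeholders for illustration, not what Rancher generates verbatim:

```yaml
# Hypothetical workload; the name, image and port 8080 are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-web
  template:
    metadata:
      labels:
        app: example-web
    spec:
      containers:
        - name: example-web
          image: nginx:alpine
          ports:
            - containerPort: 80
              hostPort: 8080   # reachable as Node-IP:8080 once the firewall allows it
```

On CentOS 7 the firewall part is e.g. `firewall-cmd --permanent --add-port=8080/tcp` followed by `firewall-cmd --reload` on the node.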

Just create a container with no exposed ports, create an ingress load balancer and give it its own DNS name, e.g. container.domain.tld (internal or external) - don't forget to add it to your DNS :slight_smile:
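As a rough sketch, such an ingress rule looks like this in Kubernetes YAML, assuming the default nginx ingress controller that RKE deploys and a hypothetical service called example-web in front of the container (on older clusters the apiVersion may be extensions/v1beta1 instead):

```yaml
# Hypothetical ingress; host, service name and port are placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-web
  namespace: default
spec:
  rules:
    - host: container.domain.tld     # must resolve to your node IP in DNS
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-web    # placeholder service in front of the container
                port:
                  number: 80
```

The ingress controller listens on 80/443 on the nodes, looks at the Host header and forwards the request to the matching service.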

My solution at this point:

OPNsense with HAProxy forwards 80/443 to my Rancher machine -> the ingress load balancer picks up that traffic on 80/443 and routes it to the container.

I hope this helps - sorry for the late reply.

Have a nice time

Atomique

EDIT: How can I mark this thread as solved?

You’re not actually forced to use it. You can modify the cluster.yaml configuration when you create your cluster to disable it.
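For anyone searching for it later, a minimal sketch of that setting, assuming an RKE or custom cluster where you can edit the cluster YAML before provisioning:

```yaml
# In the cluster YAML (RKE cluster.yml or Rancher's "Edit as YAML"),
# setting the ingress provider to "none" skips deploying the nginx ingress controller.
ingress:
  provider: none
```

With the provider set to none, ports 80/443 on the nodes stay free for your own host ports or an external load balancer.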


Nice! I will give this a try! Thank you for showing me this, and sorry for the wrong information :slight_smile:

Not that it was easy to find - took me a while myself to stumble upon it!