I’m completely new to k8s and Rancher. I have 5 BT3 Pro mini PCs (4 GB RAM + 64 GB storage, Intel Atom x5-Z8350 processor) at home that I’m trying to use to build a cluster with Rancher 2.
The IPs are standard (192.168.0.10[0-4]). Debian 9 and 10 are installed, Docker too; no SELinux, no firewall…
I have installed Rancher on the first node (192.168.0.100) and I can create a cluster with the others. I have no problem creating a cluster, but whatever CNI I choose, I can’t access the workloads behind an ingress.
So, is there a tip or trick?
I’m following the tutorial, deploying an nginx image as a workload and adding an ingress pointing at its port 80. I always get an OpenResty 503 error…
The best I’ve managed was deploying a workload on one node and accessing it directly, but the NodePort was not reachable on every node.
I’m pretty sure I have a problem with the CNI. Which one should I use?
There are a few objects that you need to tie together for this to work.
So you will have a “deployment”, which will be your nginx web server that you mention running on port 80. This is internal to the cluster only; you cannot access it from outside the cluster. To get external access, you create an ingress and point it at a service.
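For reference, a minimal nginx Deployment looks roughly like this (the names and labels here are placeholders for this example, not your actual workload):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-demo            # hypothetical name for this example
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80   # the port the web server listens on
```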
The ingress object will look for a specific host header and when that matches, it will forward traffic to a service (or directly to a workload, but using a service is more flexible and I prefer it).
So if your nginx pod is already up and running, just create a service of type ClusterIP, define the service port (it can be anything), and the target port (the port your web server is listening on).
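Something like this, assuming the pod carries the `app: nginx-demo` label from the example above (adjust the selector to match your own labels):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-demo-svc        # hypothetical name
spec:
  type: ClusterIP
  selector:
    app: nginx-demo           # must match your pod labels
  ports:
    - port: 80                # service port (can be anything)
      targetPort: 80          # the port nginx is listening on
```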
Then create an ingress that looks for some hostname (or Rancher can generate an xip.io one for you) and point it at the service/service port.
Note: you need a DNS A record that resolves your hostname to an ingress node IP, or better yet a DNS record that resolves to an external load balancer that spreads requests across the nodes. For home-lab testing this is not required.
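A sketch of the matching ingress, assuming a hostname of nginx.example.com and the service from the previous example (on older clusters the apiVersion may be `networking.k8s.io/v1beta1` or `extensions/v1beta1` instead):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-demo-ingress    # hypothetical name
spec:
  rules:
    - host: nginx.example.com # or the xip.io name Rancher generates
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-demo-svc
                port:
                  number: 80
```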
Thanks for your response. I have successfully set up a workload and an ingress by recreating the cluster with the flannel CNI and editing the configuration to specify the interface:
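For anyone else hitting this, the relevant part of the cluster YAML (Edit Cluster → Edit as YAML) looks roughly like this; `eth0` is just a placeholder for whatever interface your nodes actually use:

```yaml
network:
  plugin: flannel
  options:
    flannel_iface: eth0   # force flannel onto the interface carrying the 192.168.0.x network
```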