Problems with custom nodes

Hi,

I’m completely new to k8s and Rancher. I have 5 BT3 Pro mini PCs (4 GB RAM, 64 GB storage, Intel Atom x5-Z8350 processor) at home that I’m trying to use to build a cluster with Rancher 2.

The IPs are standard (192.168.0.10[0-4]). Debian 9 and 10 are installed, Docker too; no SELinux, no firewall…
I have installed Rancher on the first node (192.168.0.100) and I can create a cluster with the others. I have no problem creating a cluster, but whatever CNI I choose, I can’t access the workload behind an ingress.

So, is there a tip or trick?

I’m following the tutorial, deploying an nginx image as a workload and adding an ingress pointing to its port 80. I always get an OpenResty 503 error…

The best I’ve managed was to deploy a workload on one node and access it directly, but the NodePort was not reachable on every node.

I’m pretty sure I have a problem with the CNI. Which one should I use?

There are a few objects that you need to tie together for this to work.

So you will have a “deployment”, which will be your nginx web server that you mention running on port 80. This is internal to the cluster only; you cannot access it from outside the cluster. To get external access, you create an ingress and point it at a service. A minimal sketch of such a deployment is below.
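For reference, here is a minimal sketch of that deployment, kept in line with the nginx example; the name web-nginx and the image tag are placeholders, not something from this thread:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web-nginx
  template:
    metadata:
      labels:
        app: web-nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.17
        ports:
        - containerPort: 80   # pod port; reachable inside the cluster only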

The ingress object will look for a specific host header and when that matches, it will forward traffic to a service (or directly to a workload, but using a service is more flexible and I prefer it).

So if your nginx pod is already up and running, just create a service of type ClusterIP and define the service port (can be anything) and the target port (the port your web server is listening on).
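A matching ClusterIP service could look like this sketch; the service port 8080 is an arbitrary choice, and the selector must match the deployment’s pod labels:

apiVersion: v1
kind: Service
metadata:
  name: web-nginx
spec:
  type: ClusterIP
  selector:
    app: web-nginx      # must match the pod labels of the deployment
  ports:
  - port: 8080          # service port; can be anything
    targetPort: 80      # the port the web server listens on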

Then create an ingress that looks for some host name (or it can generate an xip.io one for you) and point it at the service and service port.
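Sketched against the networking.k8s.io/v1beta1 API served by a v1.15 cluster like the one in the config further down; the xip.io host name is a made-up example built from one of the node IPs in this thread:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: web-nginx
spec:
  rules:
  - host: web.192.168.0.101.xip.io   # hypothetical host name
    http:
      paths:
      - path: /
        backend:
          serviceName: web-nginx     # the ClusterIP service above
          servicePort: 8080          # its service port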

Then when you go to http://whatever.host.you.specified, the request will hit your ingress and get routed to the service and pod.

Note - you need a DNS A record that resolves your host name to an ingress node IP, or better yet a DNS record that resolves to an external load balancer that spreads requests across the nodes. For home lab testing, though, this is not required.
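If you use your own made-up host name instead of xip.io, a single line in /etc/hosts on the client machine stands in for the DNS record for home-lab testing (both the name and the node IP here are assumptions):

192.168.0.101   web.home.lab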

Thanks for your response. I have successfully set up a workload and an ingress by recreating the cluster with the flannel CNI and editing the configuration to specify the interface:

# 
# Cluster Config
# 
docker_root_dir: /var/lib/docker
enable_cluster_alerting: false
enable_cluster_monitoring: false
enable_network_policy: false
local_cluster_auth_endpoint:
  enabled: true
name: sandbox
# 
# Rancher Config
# 
rancher_kubernetes_engine_config:
  addon_job_timeout: 30
  authentication:
    strategy: x509
  ignore_docker_version: true

  ingress:
    provider: nginx
  kubernetes_version: v1.15.5-rancher1-2
  monitoring:
    provider: metrics-server
  network:
    flannel_network_provider:
      iface: enp1s0
    options:
      flannel_backend_type: vxlan
    plugin: flannel
  services:
    etcd:
      backup_config:
        enabled: true
        interval_hours: 12
        retention: 6
        safe_timestamp: false
      creation: 12h
      extra_args:
        election-timeout: 5000
        heartbeat-interval: 500
      gid: 0
      retention: 72h
      snapshot: false
      uid: 0
    kube_api:
      always_pull_images: false
      pod_security_policy: false
      service_node_port_range: 30000-32767
  ssh_agent_auth: false
windows_prefered_cluster: false

Can you post the YAML for the deployment, the service, and the ingress objects?