Migration from Minikube & Docker to Rancher & Podman, Configuring a simple local cluster: etcd error

Hello, I’m new here and very new to Kubernetes. My limited experience has been with Minikube. In our current use case, we have created a number of scripts to create a Minikube cluster, verify it’s active, generate manifest files, launch pods, etc. We handle everything on a Linux server and use the Minikube Dashboard to monitor health and operations remotely. Thus far, everything we have done is maintained and accessed via command-line operations on the server. Our client has asked that we move to Podman, and we have also decided to start moving our work over to Rancher. We will start to spin up new servers for the work we will be implementing, but that won’t begin for a while. For now, we want to move what we are currently using over with minimal disruption to the work we have implemented.

What we have chosen: for now, we are looking to use RKE with Kubernetes, and to implement our current work with Docker as a first step. Looking over the different installation methods on Rancher’s documentation site, we want to avoid running Rancher itself as a Docker container and prefer to have it installed directly on our system. For our current setup, we want access to the cluster on the local machine and will work on configuring remote deployment at a later time.

For my initial work, I’ve been referencing the Setting up a High-availability RKE Kubernetes Cluster how-to. I’ve managed to install all the CLI tools, kubectl and RKE, on an Ubuntu 22.04 virtual machine via VirtualBox. However, I have been struggling to understand the configuration file.

I have tried manually creating the configuration file, and I have tried the automated prompt (rke config). In both situations, I’m receiving two warnings and an error.

Currently, I’m using a simple configuration file.

nodes:
  - address:
    user:
    role:
      - etcd
      - controlplane
      - worker
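
For reference, that automated prompt can also generate this skeleton for you; a minimal sketch, assuming the rke binary is on the PATH:

# Prompts for node addresses, users, and roles, then writes cluster.yml
rke config --name cluster.yml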

Can someone point me in the direction of how to get a simple configuration up and running on the local machine, and to a resource for understanding configuration files?

I’ve looked at other resources on the errors, but I’m having issues understanding what is happening and why.

@mascenzi80 Hi and welcome to the Forum :smile:
It took me a while to get RKE up and running with Rancher; free courses and SUSE blogs helped a lot… check out https://www.rancher.academy/

Here is the config I used for my local home lab cluster…

# Cluster Nodes
nodes:
  - address: 192.168.xxx.xx0
    port: "22"
    user: root
    role: 
      - controlplane
      - etcd
      - worker
    docker_socket: /var/run/docker.sock
  - address: 192.168.xxx.xx1
    user: root
    port: "22"
    role:
      - controlplane
      - etcd
      - worker
    docker_socket: /var/run/docker.sock
  - address: 192.168.xxx.xx2
    user: root
    role:
      - controlplane
      - etcd
      - worker
    docker_socket: /var/run/docker.sock

# Name of the K8s Cluster
cluster_name: example-cluster

services:
  kube-api:
    # IP range for any services created on Kubernetes
    # This must match the service_cluster_ip_range in kube-controller
    service_cluster_ip_range: 192.168.0.0/16
    # Expose a different port range for NodePort services
    service_node_port_range: 30000-32767    
    pod_security_policy: false

  kube-controller:
    # CIDR pool used to assign IP addresses to pods in the cluster
    cluster_cidr: 192.167.0.0/16
    # IP range for any services created on Kubernetes
    # This must match the service_cluster_ip_range in kube-api
    service_cluster_ip_range: 192.168.0.0/16
  
  kubelet:
    # Base domain for the cluster
    cluster_domain: cluster.local
    # IP address for the DNS service endpoint
    cluster_dns_server: 192.168.xxx.xxx
    # Fail if swap is on
    fail_swap_on: false

network:
  plugin: calico

# Specify DNS provider (coredns or kube-dns)
dns:
  provider: coredns

# Kubernetes Authorization mode
# Enable RBAC
authorization:
  mode: rbac

# Specify monitoring provider (metrics-server)
monitoring:
  provider: metrics-server

Set up the cluster with;
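
The cluster itself gets built first with rke up, which is what generates kube_config_cluster.yml; a minimal sketch, assuming the config above is saved as cluster.yml:

# Build the cluster; writes kube_config_cluster.yml next to the config
rke up --config cluster.yml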

export KUBECONFIG=$(pwd)/kube_config_cluster.yml

kubectl cluster-info

kubectl config current-context

helm repo add jetstack https://charts.jetstack.io --force-update
helm repo add rancher-latest https://releases.rancher.com/server-charts/latest --force-update
helm repo update

helm install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --version=1.11.0 --set installCRDs=true

kubectl -n cert-manager rollout status deploy/cert-manager
kubectl -n cert-manager rollout status deploy/cert-manager-webhook

kubectl create namespace cattle-system

helm install rancher rancher-latest/rancher --namespace cattle-system --set hostname=<hostname> --set replicas=3 --version=2.7.X --set bootstrapPassword=iamadmin
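
To confirm Rancher comes up, the Rancher install docs include a rollout check along these lines:

# Wait for the Rancher deployment to finish rolling out
kubectl -n cattle-system rollout status deploy/rancher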

The <hostname> resolves via the /etc/hosts file, with the three IP addresses resolving to it like this…

192.168.xxx.xx0	example.com example
192.168.xxx.xx1	example.com example
192.168.xxx.xx2	example.com example

Note: I also provisioned the VMs with Vagrant and libvirt.


@malcolmlewis1 Thank you! One thing I’m going to attempt today is to spin up four VMs and configure a simple cluster the way Rancher typically seems to be used. Your config file is very similar to most of the examples I find.

In your config file, you are using three nodes. Are you using a fourth machine (a host) to operate from? My understanding is that typically you have the user’s machine, and Rancher is used to configure the other machines for the cluster. Services would be any packages you wanted installed on those machines. Since we wouldn’t want to install Kubernetes manually on the three machines, you would simply install Kubernetes on your host machine, and the config file would let it access the nodes and install services, such as kubectl, on them for you. Am I understanding your config file correctly?

@mascenzi80 Yes, a local machine (openSUSE Tumbleweed) that has rke, helm, and kubectl added to my user directory path (I run Rancher Desktop, so that provides helm and kubectl). So aside from setting up the VMs, it’s all done on localhost.

The VMs are accessed over SSH, so I copy my public key over to them as well, keep swap off on them, and add the following sysctl config;

cat /etc/sysctl.d/kubernetes.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1

As well as install Docker and enable the docker service…
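
For the OP’s Ubuntu 22.04 machines that VM prep would be something like the sketch below; the package name is an assumption, since I do this on openSUSE:

# Load the bridge module so the bridge-nf sysctls exist, then apply them
sudo modprobe br_netfilter
sudo sysctl --system

# Install and enable Docker (docker.io is the stock Ubuntu package)
sudo apt-get install -y docker.io
sudo systemctl enable --now docker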


@malcolmlewis1. Ok, that all makes perfect sense to me. However, can I point Rancher to my local machine to create a cluster? I would like to use Rancher in a similar fashion to how I am currently using Minikube.

@mascenzi80 Not sure what you mean; all the commands are run on a machine with helm, kubectl, and rke installed… In this case my localhost, but I have other systems that can grab the kube_config_cluster.yml via scp and then just export KUBECONFIG=$(pwd)/kube_config_cluster.yml and run kubectl commands (with the hosts in the /etc/hosts file).
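
A sketch of that remote flow; the user, host, and file location are placeholders:

# Grab the kubeconfig from the machine where rke up was run
scp <user>@<rke-host>:kube_config_cluster.yml .

# Point kubectl at it for this shell session only
export KUBECONFIG=$(pwd)/kube_config_cluster.yml
kubectl get nodes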

@malcolmlewis1, hopefully this clears up what I’m looking to accomplish. I have three machines. Machine1 is my work computer. This machine should go unchanged; nothing should be installed on it. That will change in the future, but for now it should stay untouched.

Machine2 and Machine3 are Servers running Ubuntu 22.04.

Machine2 is currently running Minikube and is, for now, our core development environment. Four users use this machine, each with their own user account. Each user, via a build script, generates their own cluster using a unique profile for their user. Minikube allows us to run separate clusters for development while keeping all development synchronized via Git. Machine2 is going to be retired soon.

Machine3, also running Ubuntu 22.04, needs to be configured for use with Rancher. We would like to use Rancher in the same way we currently run Minikube: Rancher installed on Machine3 and able to generate clusters on Machine3.

The goal is to not have Rancher running on our individual personal/work computers. Is it possible, even if it’s not ideal, to use Rancher in a similar way to how we are currently using Minikube?

@mascenzi80 Ahh, OK, I was assuming all three machines formed an HA cluster. So you just want a single instance of Rancher running RKE on Machine3 under Ubuntu.

What version of RKE are you going to use, or RKE2?


I’m currently using rke 1.4.9.

@mascenzi80 So just a further clarification then: you want to deploy RKE[2] cluster(s) with Rancher, so does the underlying engine also need to be RKE, or can it use k3s?

@malcolmlewis1

I made some changes to my config file, which seems to have gotten me past the SSH error. Well, sort of.

nodes:
  - address: machine.local
    port: "22"
    internal_address: ""
    role:
      - controlplane
      - etcd
      - worker
    user: <username>
    docker_socket: /var/run/docker.sock
    ssh_key: ""
    ssh_key_path: ~/.ssh/id_ed25519

cluster_name: example-cluster

It still shows that it can’t set up an SSH tunnel, but now it also says something about retrieving Docker info. Previously it didn’t mention Docker at all.
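
One way to sanity-check this outside of rke, reusing the address and key from the config above: confirm that key-based SSH works and that the SSH user can reach the Docker daemon, since rke needs both:

# Should print Docker daemon info without prompting for a password
ssh -i ~/.ssh/id_ed25519 <username>@machine.local docker info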

I meant to include the screenshot of the warning.

@malcolmlewis1, The underlying engine should be RKE, not k3s.

@mascenzi80 What version of Docker? It needs to be less than or equal to 23…

All good here on openSUSE Leap 15.5 in a virtual machine. I added my user to the docker group on the VM;

gpasswd -a <username> docker

Copied my key over with;

ssh-copy-id -i ~/.ssh/id_ed25519 username@rke-vm

Then on the local machine;

rke up ./cluster.yml 
INFO[0000] Running RKE version: v1.4.9                  
INFO[0000] Initiating Kubernetes cluster                
INFO[0000] [dialer] Setup tunnel for host [192.168.xxx.xxx]
....
....
INFO[0118] [addons] Setting up user addons              
INFO[0118] [addons] no user addons defined              
INFO[0118] Finished building Kubernetes cluster successfully 

And check from the local machine;

kubectl get nodes -o wide
NAME             STATUS   ROLES                      AGE     VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION                 CONTAINER-RUNTIME
192.168.xxx.xxx   Ready    controlplane,etcd,worker   4m54s   v1.26.8   192.168.xxx.xxx   <none>        openSUSE Leap 15.5   5.14.21-150500.55.12-default   docker://23.0.6-ce

kubectl get pods -o wide -A

NAMESPACE       NAME                                      READY   STATUS      RESTARTS   AGE     IP                NODE             NOMINATED NODE   READINESS GATES
ingress-nginx   ingress-nginx-admission-create-8fmxx      0/1     Completed   0          4m37s   192.167.180.133   192.168.xxx.xxx   <none>           <none>
ingress-nginx   ingress-nginx-admission-patch-8nxm6       0/1     Completed   1          4m37s   192.167.180.134   192.168.xxx.xxx   <none>           <none>
ingress-nginx   nginx-ingress-controller-n6bss            1/1     Running     0          4m37s   192.167.180.135   192.168.xxx.xxx   <none>           <none>
kube-system     calico-kube-controllers-8b49b64c9-qgdxt   1/1     Running     0          5m8s    192.167.180.129   192.168.xxx.xxx   <none>           <none>
kube-system     calico-node-ns9ll                         1/1     Running     0          5m8s    192.168.xxx.xxx    192.168.xxx.xxx   <none>           <none>
kube-system     coredns-66b64c55d4-tgkrr                  1/1     Running     0          4m58s   192.167.180.131   192.168.xxx.xxx   <none>           <none>
kube-system     coredns-autoscaler-5567d8c485-k98rq       1/1     Running     0          4m58s   192.167.180.130   192.168.xxx.xxx   <none>           <none>
kube-system     metrics-server-7886b5f87c-vj6kk           1/1     Running     0          4m48s   192.167.180.132   192.168.xxx.xxx   <none>           <none>
kube-system     rke-coredns-addon-deploy-job-gcwwp        0/1     Completed   0          5m      192.168.xxx.xxx    192.168.xxx.xxx   <none>           <none>
kube-system     rke-ingress-controller-deploy-job-8hl28   0/1     Completed   0          4m40s   192.168.xxx.xxx    192.168.xxx.xxx   <none>           <none>
kube-system     rke-metrics-addon-deploy-job-8qcjv        0/1     Completed   0          4m50s   192.168.xxx.xxx    192.168.xxx.xxx   <none>           <none>
kube-system     rke-network-plugin-deploy-job-zgmxm       0/1     Completed   0          5m10s   192.168.xxx.xxx    192.168.xxx.xxx   <none>           <none>


@malcolmlewis1 Unbelievable. You found the issue. I didn’t apply the SSH key to the server with ssh-copy-id. I never considered that the key would still need to be applied for the machine to access ITSELF. haha. Ok, it’s building just fine now.

However, I’m not able to use kubectl to view the nodes. When I run kubectl get nodes, I get a connection refused error. I’ll start digging into that.

Thank you so much for your help.


@mascenzi80 You need to export KUBECONFIG=$(pwd)/kube_config_cluster.yml; I prefer this to setting it permanently…


Yes sir, I had just found that. Thank you for the assistance in any case, and for your time and support.

@malcolmlewis1, One last thing, as I’m attempting to do my documentation.

In the VM you created to help me out, what versions of kubectl, RKE, and Docker are you using?

@mascenzi80 As follows;

Localhost;

kubectl version --short
Flag --short has been deprecated, and will be removed in the future. The --short output will become the default.
Client Version: v1.25.6
Kustomize Version: v4.5.7
Server Version: v1.26.8

rke version
INFO[0000] Running RKE version: v1.4.9                  
Server Version: version.Info{Major:"1", Minor:"26", GitVersion:"v1.26.8", GitCommit:"395f0a2fdc940aeb9ab88849e8fa4321decbf6e1", GitTreeState:"clean", BuildDate:"2023-08-24T00:43:07Z", GoVersion:"go1.20.7", Compiler:"gc", Platform:"linux/amd64"}

Virtual machine;

rke-master:~> docker -v
Docker version 23.0.6-ce, build 9dbdbd4b6d76