Problems setting up a local HA Rancher system

Hey there, I'm trying to set up a local HA Rancher system so I can learn how to set one up in our production environment. I have 4 VMs on a local NAT network.
VM 1 has nginx, rke, helm, and kubectl; its IP is 10.0.2.12.
VMs 2, 3, and 4 have rke, helm, and kubectl; their IPs are 10.0.2.7, 10.0.2.8, and 10.0.2.9.
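
For context, VM 1 sits in front of the three nodes as the load balancer. My nginx config is essentially the layer-4 (TCP passthrough) example from the Rancher HA docs with my node IPs dropped in; sketching it here in case it's relevant:

worker_processes 4;
worker_rlimit_nofile 40000;

events {
    worker_connections 8192;
}

stream {
    upstream rancher_servers_http {
        least_conn;
        server 10.0.2.7:80 max_fails=3 fail_timeout=5s;
        server 10.0.2.8:80 max_fails=3 fail_timeout=5s;
        server 10.0.2.9:80 max_fails=3 fail_timeout=5s;
    }
    server {
        listen 80;
        proxy_pass rancher_servers_http;
    }

    upstream rancher_servers_https {
        least_conn;
        server 10.0.2.7:443 max_fails=3 fail_timeout=5s;
        server 10.0.2.8:443 max_fails=3 fail_timeout=5s;
        server 10.0.2.9:443 max_fails=3 fail_timeout=5s;
    }
    server {
        # plain TCP passthrough, so TLS is terminated by the ingress controllers on the nodes
        listen 443;
        proxy_pass rancher_servers_https;
    }
}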

RKE cluster config:

nodes:
  - address: 10.0.2.7
    internal_address: 10.0.2.7
    user: dev
    role: [controlplane,worker,etcd]
  - address: 10.0.2.8
    internal_address: 10.0.2.8
    user: dev
    role: [controlplane,worker,etcd]
  - address: 10.0.2.9
    internal_address: 10.0.2.9
    user: dev
    role: [controlplane,worker,etcd]

services:
  etcd:
    snapshot: true
    creation: 6h
    retention: 24h
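
Those etcd settings take a recurring snapshot every 6h and keep them for 24h. As I understand it, RKE can also take or restore a snapshot on demand against the same config file, something like (the snapshot name here is just an example):

rke etcd snapshot-save --config ./rancher-cluster.yml --name manual-snapshot
rke etcd snapshot-restore --config ./rancher-cluster.yml --name manual-snapshot

The snapshots end up under /opt/rke/etcd-snapshots on each node.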

I am able to set up the Kubernetes cluster, but some odd things are happening…
The tutorial in the Rancher docs says that when you run:

rke up --config ./rancher-cluster.yml
export KUBECONFIG=$(pwd)/kube_config_rancher-cluster.yml
kubectl get nodes

you should see the three nodes. I do, and they all say they are Ready:

NAME       STATUS   ROLES                      AGE   VERSION
10.0.2.7   Ready    controlplane,etcd,worker   95s   v1.15.5
10.0.2.8   Ready    controlplane,etcd,worker   96s   v1.15.5
10.0.2.9   Ready    controlplane,etcd,worker   96s   v1.15.5

But when I do:

kubectl get pods --all-namespaces
NAMESPACE       NAME                                      READY   STATUS      RESTARTS   AGE
ingress-nginx   default-http-backend-5bcc9fd598-9rxg5     1/1     Running     0          20m
ingress-nginx   nginx-ingress-controller-8j4sm            1/1     Running     0          20m
ingress-nginx   nginx-ingress-controller-k9gdc            1/1     Running     0          20m
ingress-nginx   nginx-ingress-controller-xxwlh            1/1     Running     0          20m
kube-system     canal-9xdlr                               2/2     Running     0          20m
kube-system     canal-jmmph                               2/2     Running     0          20m
kube-system     canal-lp2cw                               2/2     Running     0          20m
kube-system     coredns-799dffd9c4-twgch                  1/1     Running     1          20m
kube-system     coredns-autoscaler-84766fbb4-msdpv        1/1     Running     0          20m
kube-system     metrics-server-59c6fd6767-szmxx           1/1     Running     1          20m
kube-system     rke-coredns-addon-deploy-job-xsv7j        0/1     Completed   0          20m
kube-system     rke-ingress-controller-deploy-job-xlknv   0/1     Completed   0          20m
kube-system     rke-metrics-addon-deploy-job-fnprk        0/1     Completed   0          20m
kube-system     rke-network-plugin-deploy-job-hr8xn       0/1     Completed   0          20m

It all looks good, except that I have three nodes and I only see 2/2 on the canal pods, while the tutorial shows 3/3…

That is my first concern. The second, which is the real killer, comes when I go to install cert-manager with helm…

After running the first commands, everything goes well… until I get to the last part.
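
(By "the first commands" I mean the prerequisite steps from the docs, which as far as I remember were roughly these - install the CRDs, create and label the namespace, and add the chart repo:

kubectl apply -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.9/deploy/manifests/00-crds.yaml
kubectl create namespace cert-manager
kubectl label namespace cert-manager certmanager.k8s.io/disable-validation=true
helm repo add jetstack https://charts.jetstack.io
helm repo update

All of those finished without errors.)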

helm install \
  --name cert-manager \
  --namespace cert-manager \
  --version v0.9.1 \
  jetstack/cert-manager

After about two minutes I get:

Error: Could not get apiVersions from Kubernetes: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
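
From what I've read, the group in that error is the aggregated metrics API served by metrics-server, and its state can be checked with plain kubectl (stock commands, nothing from the tutorial):

kubectl get apiservice v1beta1.metrics.k8s.io
kubectl describe apiservice v1beta1.metrics.k8s.io
kubectl logs -n kube-system deploy/metrics-server

If that APIService shows Available=False, helm cannot enumerate the server APIs and bails out with exactly this kind of error.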

What am I doing wrong…?

Note: Please forgive me if I have forgotten to mention something… I wrote this before lunch and I am not running on all cylinders.

I can't help with cert-manager, but for your other question - this all looks fine. The “2/2” you are seeing is not the number of nodes, but the number of ready containers in each pod. Canal runs one pod per node, and each of those pods has 2 containers. You can see this with

kubectl describe daemonsets canal -n kube-system

The “canal” daemonset spec includes 2 containers per pod.
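
If you just want the container names, a jsonpath query works too (stock kubectl):

kubectl get daemonset canal -n kube-system -o jsonpath='{.spec.template.spec.containers[*].name}'

That prints the two containers behind the 2/2 in the READY column.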

Thank you for responding… Now I understand about the pods!

Found out my problem was with my server OS. I was using Debian, but when I moved over to Ubuntu Server I got it working perfectly! Thanks!