Hi,
I did my first install of K3s, with 2 nodes: one for the master and one for the worker. The installation was really easy; I did not expect to have a real Kubernetes cluster running in a few minutes. However, there are some things I am not sure I did correctly, because I expected a different behaviour.
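For context, this is more or less what I ran, following the quick-start from the docs (the <token> placeholder is the value I copied from /var/lib/rancher/k3s/server/node-token on the server):

On the server (172.31.39.16):
curl -sfL https://get.k3s.io | sh -

On the worker:
curl -sfL https://get.k3s.io | K3S_URL=https://172.31.39.16:6443 K3S_TOKEN=<token> sh -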
For example, when I ran get node to check the roles, the worker was not tagged:
k3s kubectl get node
NAME                                          STATUS   ROLES                  AGE   VERSION
ip-172-31-39-16.eu-west-3.compute.internal    Ready    control-plane,master   32m   v1.21.5+k3s2
ip-172-31-39-179.eu-west-3.compute.internal   Ready    <none>                 9s    v1.21.5+k3s2
First doubt: why is the worker node tagged as "<none>" instead of "worker"?
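From what I have read, the ROLES column just mirrors the node-role.kubernetes.io/* labels on each node (I guess k3s kubectl get node --show-labels would confirm whether the label is there), so I suppose I could tag the worker myself with something like:

k3s kubectl label node ip-172-31-39-179.eu-west-3.compute.internal node-role.kubernetes.io/worker=worker

But I expected the agent install to set that label automatically.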
I then tried launching nginx to see where it would run. In fact, I launched 4 replicas of nginx. I got 3 of them running on the server and 1 running on the worker. I cannot understand why, since the server does not have the "worker" role assigned, and the worker is not tagged correctly either.
See the commands below:
k3s kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
k3s kubectl scale --replicas=4 deployment/nginx
deployment.apps/nginx scaled
k3s kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
nginx-6799fc88d8-wsf5m   1/1     Running   0          111s
nginx-6799fc88d8-584hx   1/1     Running   0          36s
nginx-6799fc88d8-smpxc   1/1     Running   0          36s
nginx-6799fc88d8-599v8   1/1     Running   0          36s
k3s kubectl describe pod nginx-6799fc88d8-smpxc
Name:         nginx-6799fc88d8-smpxc
Namespace:    default
Priority:     0
Node:         ip-172-31-39-16.eu-west-3.compute.internal/172.31.39.16

k3s kubectl describe pod nginx-6799fc88d8-599v8
Name:         nginx-6799fc88d8-599v8
Namespace:    default
Priority:     0
Node:         ip-172-31-39-16.eu-west-3.compute.internal/172.31.39.16

k3s kubectl describe pod nginx-6799fc88d8-wsf5m
Name:         nginx-6799fc88d8-wsf5m
Namespace:    default
Priority:     0
Node:         ip-172-31-39-179.eu-west-3.compute.internal/172.31.39.179

k3s kubectl describe pod nginx-6799fc88d8-584hx
Name:         nginx-6799fc88d8-584hx
Namespace:    default
Priority:     0
Node:         ip-172-31-39-179.eu-west-3.compute.internal/172.31.39.179
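(I realise now that I could probably have seen all the placements in one listing with:

k3s kubectl get pods -o wide)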
So, 3 instances are running on the master node (I am not sure why it accepts pods, since it does not have the worker role!) and 1 instance on the worker node (which also should not accept pods, since it is not tagged correctly).
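If it is actually normal for the master to run workloads in K3s, I assume the way to keep application pods off it would be to taint it, with something like:

k3s kubectl taint node ip-172-31-39-16.eu-west-3.compute.internal node-role.kubernetes.io/master=:NoSchedule

But I am not sure if that is the recommended approach.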
Can someone please help me understand what I am doing wrong?
Thanks!