Worker roles missing on new RKE2 cluster on Ubuntu


I’ve installed my first RKE2 cluster on Ubuntu 20.04.3.
I followed the quickstart guide and configured 1 server (control plane) node and 2 agent (worker) nodes.

root@tk8sc1:~# /var/lib/rancher/rke2/bin/kubectl --kubeconfig /etc/rancher/rke2/rke2.yaml get nodes
NAME     STATUS   ROLES                       AGE     VERSION
tk8sc1   Ready    control-plane,etcd,master   2d13h   v1.22.5+rke2r1
tk8sw1   Ready    <none>                      15h     v1.22.5+rke2r1
tk9sw2   Ready    <none>                      15h     v1.22.5+rke2r1

As you can see, the worker roles have not been applied.
I read the troubleshooting page, which says to check whether the kubelet and kube-proxy containers are running. They are not (and the images it mentions aren’t present either), but the page doesn’t say what to do next.
I’m not sure what I’ve missed or what I should do next, and I’d appreciate any help with my next steps.

root@tk8sw1:~# docker ps -a -f=name='kubelet|kube-proxy'


I don’t recall whether worker roles show up or stay empty, and I’m not somewhere I can check my cluster conveniently. Maybe try running some workloads and see if they land on those nodes? If you add -o wide to a kubectl command querying for pods or services or whatnot, most of them show which node each pod is running on.
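Something like this, reusing the kubectl path and kubeconfig from your post — the interesting part is the NODE column it adds:

```shell
# List every pod in the cluster along with the node it is scheduled on.
# -A = all namespaces, -o wide adds NODE and pod IP columns.
/var/lib/rancher/rke2/bin/kubectl --kubeconfig /etc/rancher/rke2/rke2.yaml \
  get pods -A -o wide
```

If pods are being scheduled onto tk8sw1 and tk9sw2, the workers are functioning regardless of what the ROLES column says.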

You have some containers running if you get any response from kubectl (you’d need a minimum of etcd & kube-apiserver – both of which you can see if you change get nodes to get pods -A at the end of your kubectl command), so you may not be using Docker at all. I know RKE2 & K3s don’t use the Docker daemon for running containers, so docker ps will come up blank on those, but I thought RKE (v1) still used the Docker daemon.
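Since your paths and version string (v1.22.5+rke2r1) look like RKE2, the containers would live in containerd rather than Docker, so something like this on the worker node should list them instead — the paths are the RKE2 defaults as best I remember, so treat them as a guess:

```shell
# RKE2 ships its own crictl binary and a crictl.yaml pointing at its
# containerd socket; CRI_CONFIG_FILE tells crictl where that config is.
export CRI_CONFIG_FILE=/var/lib/rancher/rke2/agent/etc/crictl.yaml
# List running containers managed by the kubelet via containerd.
/var/lib/rancher/rke2/bin/crictl ps
```

If that shows containers running on the worker, the node itself is healthy, and the empty ROLES column is just what RKE2 shows for agent nodes by default — it’s only a label, not a functional setting.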