RKE2 node roles from gui vs cli

When I create an RKE2 cluster from the gui, all nodes end up with a role:

rancher3:~ # kubectl get nodes
NAME       STATUS   ROLES                              AGE     VERSION
rancher3   Ready    control-plane,etcd,master,worker   13m     v1.31.2+rke2r1
rancher4   Ready    control-plane,etcd,master,worker   5m1s    v1.31.2+rke2r1
rancher5   Ready    control-plane,etcd,master,worker   5m14s   v1.31.2+rke2r1
rancher6   Ready    control-plane,etcd,master,worker   5m36s   v1.31.2+rke2r1

When I do the same with the RKE2 CLI, all the agent nodes end up with no role.

I have followed this document to set up the server and agents with the CLI.
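For reference, the CLI setup boils down to a config file per node. This is a minimal sketch; the hostname, token value, and server URL are placeholders from my lab, not values from the document:

```yaml
# /etc/rancher/rke2/config.yaml on the first server node
token: my-shared-secret          # placeholder shared secret
tls-san:
  - rancher3.example.lab         # placeholder hostname

# /etc/rancher/rke2/config.yaml on each agent node
server: https://rancher3.example.lab:9345   # 9345 is the RKE2 supervisor port
token: my-shared-secret
```

After writing the config, the server runs `rke2-server.service` and each agent runs `rke2-agent.service`.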

I have gone back and relabeled the nodes, but I have seen conflicting reports about whether this works or should be done.
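From what I understand, the ROLES column in `kubectl get nodes` is purely cosmetic: it is derived from `node-role.kubernetes.io/<role>` labels on the node object. Rancher-provisioned nodes get those labels applied during provisioning, while nodes joined with plain `rke2 agent` never receive one, which is why they show `<none>`. Relabeling by hand looks like this (node name taken from my cluster):

```shell
# Add the worker role label so kubectl shows "worker" under ROLES
kubectl label node rancher7 node-role.kubernetes.io/worker=true

# Remove it again if needed (a trailing "-" deletes a label)
kubectl label node rancher7 node-role.kubernetes.io/worker-
```

As far as I can tell this only changes what is displayed; it does not change how RKE2 schedules or runs anything on the node.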

My intent for a home-lab setup was to use the RKE2 CLI to stand up a cluster and then import it into Rancher. There's a difference between the two deployments that I can't explain, and I'm hoping for some pointers.

Thanks.

Just reproduced it again by adding two new RKE2 nodes to the cluster. The two new nodes end up with no role in kubectl, but the GUI shows them as workers.

Is this expected behavior?


rancher3:~ # kubectl get nodes
NAME       STATUS   ROLES                       AGE    VERSION
rancher3   Ready    control-plane,etcd,master   28d    v1.31.3+rke2r1
rancher4   Ready    control-plane,etcd,worker   11d    v1.31.3+rke2r1
rancher5   Ready    etcd,worker                 28d    v1.31.3+rke2r1
rancher6   Ready    etcd,worker                 27d    v1.31.3+rke2r1
rancher7   Ready    <none>                      172m   v1.31.3+rke2r1
rancher8   Ready    <none>                      34m    v1.31.3+rke2r1
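One way to confirm the label difference is to compare a GUI-provisioned node against a CLI-joined one (a sketch; substitute your own node names):

```shell
# GUI-provisioned node: should list node-role.kubernetes.io/* labels
kubectl get node rancher3 --show-labels | tr ',' '\n' | grep node-role

# CLI-joined agent: expect no node-role labels, hence <none> in ROLES
kubectl get node rancher7 --show-labels | tr ',' '\n' | grep node-role
```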