How to schedule pods on master nodes?

Hi, I have a cluster of 4 nodes running rke2:

$ kubectl get node
NAME                          STATUS   ROLES         AGE    VERSION
kubernetes02                  Ready    etcd,master   109d   v1.19.7+rke2r1
kubernetes03                  Ready    etcd,master   109d   v1.19.7+rke2r1
server2                       Ready    <none>        109d   v1.19.7+rke2r1
server3                       Ready    etcd,master   109d   v1.19.7+rke2r1

RKE2 was deployed manually, not by Rancher itself.

When I deploy pods, they are automatically started on node server2, which is a plain worker. But I also want to allow workloads on server3 and kubernetes03.

How can I tell Kubernetes to use these nodes as hyperconverged nodes (master, etcd, and worker on a single machine)?
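One thing worth checking first is whether the masters carry a scheduling taint at all (node names taken from the `kubectl get node` output above):

```shell
# Show any taints on the nodes that are not receiving workloads
kubectl describe node server3 kubernetes03 | grep -i taints

# Or list the taint keys for all nodes at once
kubectl get nodes -o custom-columns='NAME:.metadata.name,TAINTS:.spec.taints[*].key'
```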

Thanks, Andreas

I would have thought that was the default. I believe you need to use node taints if you want to prevent pods from running on those nodes; by default, all nodes are available for pods to be scheduled on. Please correct me if I’m wrong.
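If the masters do turn out to have a `NoSchedule` taint, here is a sketch of removing it (the exact taint key varies between distributions and versions; `node-role.kubernetes.io/master` is an assumption here, so check the actual key from `kubectl describe node` first):

```shell
# Trailing "-" removes the taint; both node names can be given in one command
kubectl taint nodes server3 kubernetes03 node-role.kubernetes.io/master:NoSchedule-
```

The alternative is to keep the taint and add a matching toleration to the specific workloads you want on the masters.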

Thanks, I don’t know if I looked at it wrong or just had too little workload.
I found that if I increase the number of replicas of a deployment, it really does distribute across the cluster. But pods are only spread out once there is enough workload. If I just create deployments with a single container and replicas=1, each using only a few resources (such as the GitLab Runner itself), I expected something like round-robin across the nodes. That does not happen. But once the runners spawn enough build containers, the workload is distributed.

So it works for me.