Workloads running on control nodes

Hi. I am new here. I hope I am in the correct place. I am using Rancher 2.6 and I just imported an HA RKE2 cluster (for testing: 3 control nodes and 2 worker nodes). I am concerned because I recently noticed that it has started to schedule workloads onto the control nodes. Why would this happen, and how do I ensure that it does NOT happen again? Everything is pretty much default, and as far as I can tell each node type is properly labelled, etc. Any information whatsoever would be greatly appreciated. I can provide a million more details if required.

If I recall correctly, a default RKE2 install via the CLI doesn’t taint the server (master) nodes and assumes you may want to run workloads on them. If you install RKE2 from the Rancher UI and un-check the Worker role, it’ll taint the nodes properly. You can also just apply the taint manually with kubectl now that the cluster is installed, something like the command below.
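For example (node name is just a placeholder, and I’m using the taint the RKE2 docs suggest, so adjust as needed):

```sh
# Replace "server-node-1" with each of your control-plane node names.
# NoExecute also evicts pods already running there that don't tolerate the
# taint; use NoSchedule instead if you only want to block *new* pods.
kubectl taint nodes server-node-1 CriticalAddonsOnly=true:NoExecute

# Verify the taint landed:
kubectl describe node server-node-1 | grep -i taints
```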

High Availability - RKE2 - Rancher's Next Generation Kubernetes Distribution is the page in the RKE2 docs that talks about the suggested taint, though if you do the install through the UI as I mentioned above, it’ll put two different taints on your server nodes (or at least it did for me). They’re probably equivalent in effect.
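For reference, this is roughly how the two approaches compare. The config.yaml entry is the docs’ method (set before the server starts); the role taints listed in the comments are the ones I believe the UI applies, but double-check against your own nodes with `kubectl describe node | grep -i taints`:

```yaml
# /etc/rancher/rke2/config.yaml on each server node -- the taint the
# RKE2 HA docs suggest (picked up when the rke2-server service starts):
node-taint:
  - "CriticalAddonsOnly=true:NoExecute"

# The role taints I believe Rancher's UI applies instead (verify on yours):
#   node-role.kubernetes.io/control-plane=true:NoSchedule
#   node-role.kubernetes.io/etcd=true:NoExecute
```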

Hi. Thanks very much for the response. I found that bit in the documentation now. I guess I am just worried that if I apply the taint, the pods that make my VIP and HA work might fail in the future. So for now we are just using node affinity rules in the manifests to ensure that the pods land on the worker nodes. A good enough workaround for our POC. You know, for now… Thanks again for helping to confirm all this.
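In case it helps anyone else, our workaround looks roughly like this. The label key is an assumption on my part (it presumes the worker nodes are labelled node-role.kubernetes.io/worker=true), so swap in whatever label your workers actually carry, and the image/name are just placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app        # placeholder name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      affinity:
        nodeAffinity:
          # Hard requirement: only schedule onto nodes carrying the worker label.
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: node-role.kubernetes.io/worker
                    operator: In
                    values: ["true"]
      containers:
        - name: app
          image: nginx:1.25   # placeholder image
```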