I have an AWS EKS-based cluster with 2 default nodes (no labels or taints added), managed through Rancher 2.4.4.
I’ve deployed a workload with workload type “Deployment”, and node scheduling is set to automatically pick nodes for each pod. No rules or tolerations are set, the scheduler is “default-scheduler”, and no priorities are configured.
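For reference, this is roughly the spec I believe Rancher generates for this workload (names and image are placeholders, not my actual values):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-workload        # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-workload
  template:
    metadata:
      labels:
        app: my-workload
    spec:
      schedulerName: default-scheduler  # the default; no custom scheduler
      containers:
        - name: my-workload
          image: nginx:1.19             # stand-in image
      # no affinity, nodeSelector, or tolerations set
```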
Whenever I scale the workload up, the new pods are always created on the same node, no matter how many replicas I scale to.
I’m looking for a way to ensure that if a pod of a specific workload is already running on a node, the next pod gets scheduled on the next available node, and so on (i.e., fill up any empty nodes first, round-robin style, etc.).
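If it helps clarify what I’m after: my understanding is that a preferred pod anti-affinity on the hostname topology would express this spreading behavior, though I haven’t found where to set it in the Rancher 2.4.4 UI. A minimal sketch (the `app: my-workload` label is a placeholder and must match the pod labels of the Deployment above):

```yaml
# Would go under spec.template.spec of the Deployment above.
affinity:
  podAntiAffinity:
    # "preferred" rather than "required", so scheduling still succeeds
    # once every node already has a pod of this workload
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          topologyKey: kubernetes.io/hostname   # spread across nodes
          labelSelector:
            matchLabels:
              app: my-workload                  # placeholder label
```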
What I’ve tried so far
I tried the “1 per node” workload type, but it forces me to always have at least one pod running on each node. So if I had 3 nodes and wanted to run the pod on only 2 of them, I couldn’t do that with this workload type (which seems intended, and that’s fine). See the sketch below for what I understand this maps to.
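My understanding is that “1 per node” corresponds to a DaemonSet under the hood, which by design places exactly one pod on every eligible node (again, names and image are placeholders):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: my-workload        # placeholder name
spec:
  selector:
    matchLabels:
      app: my-workload
  template:
    metadata:
      labels:
        app: my-workload
    spec:
      containers:
        - name: my-workload
          image: nginx:1.19   # stand-in image
  # note: no replicas field; a DaemonSet always runs one pod
  # per matching node, which is why I can't cap it at 2 of 3 nodes
```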
I’ve also looked into https://kubernetes.io/docs/reference/scheduling/config/#profiles, which I’m assuming is what the “Scheduler” field in the workload settings refers to, but I haven’t found a way to configure my own scheduler in Rancher.
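For reference, the kind of profiles file those docs describe looks roughly like this; the `apiVersion` varies by Kubernetes release, the second profile name is hypothetical, and since EKS manages the control plane I’m not sure this file can even be supplied there:

```yaml
# Sketch based on the linked docs, not something I've been able to apply
apiVersion: kubescheduler.config.k8s.io/v1beta1   # version depends on the cluster's k8s release
kind: KubeSchedulerConfiguration
profiles:
  - schedulerName: default-scheduler
  - schedulerName: no-scoring-scheduler   # hypothetical extra profile
    plugins:
      preScore:
        disabled:
          - name: '*'
      score:
        disabled:
          - name: '*'
```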
This seems like a pretty common scenario, so I feel like I might be missing something very simple. Any guidance is appreciated. Thanks!