Running a workload on specific nodes

I would like to deploy hosts with specific capabilities: for example, some nodes with more RAM and others with more CPU, for the workloads that will require them. How can I schedule a workload onto specific nodes in my cluster so that the workload is aligned with the capabilities I have provisioned?


It looks like I can just tag nodes using Kubernetes labels. So a second question: if I label nodes like this, can I enforce the use of labelled nodes through an RBAC policy, so that workloads for specific projects or namespaces are deployed only to those nodes?
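For reference, tagging a node with a label is a one-liner with kubectl (the node name and the `workload-type=high-memory` label key/value below are made up for illustration):

```shell
# Label a node (hypothetical node name and label key/value)
kubectl label nodes worker-1 workload-type=high-memory

# Verify which nodes carry the label
kubectl get nodes -l workload-type=high-memory
```

Workloads can then target the label with a simple `nodeSelector` or, for more control, node affinity rules.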


I’m not sure what you’re trying to get at with the RBAC policy, but if you want to map a workload to nodes that have been labeled, you can use Kubernetes node affinity. In the workload UI it’s under Node Scheduling. There you can require that all pods run on nodes with labels matching what you want :wink:
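Outside the Rancher UI, the same "require all of" rule expressed directly in a manifest looks roughly like this sketch (the deployment name, label key/value, and image are all placeholders, not anything from your cluster):

```yaml
# Sketch: Deployment whose pods may only schedule onto nodes labeled
# workload-type=high-memory (hypothetical label key/value).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: memory-hungry-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: memory-hungry-app
  template:
    metadata:
      labels:
        app: memory-hungry-app
    spec:
      affinity:
        nodeAffinity:
          # "required" = hard constraint; pods stay Pending if no node matches
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: workload-type
                operator: In
                values:
                - high-memory
      containers:
      - name: app
        image: nginx  # placeholder image
```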

Hi. Thanks, this is helpful. What I am attempting to do equates to mapping plus access control. With labeling, it seems I can map a workload to where I want it to go. With access control, I am hoping to restrict scheduling on specific nodes to certain namespaces within a project.

I haven’t looked at that, but I think an RBAC ClusterRole can grant access to cluster-scoped resources like nodes. So that might be a place to look.
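If you explore the ClusterRole route, a minimal cluster-scoped rule covering nodes looks something like this (the role name is a placeholder). One caveat worth stating: RBAC governs who can read or modify node objects through the API; it does not by itself control which nodes a namespace's pods are scheduled onto, so it would complement rather than replace label-based scheduling:

```yaml
# Sketch: ClusterRole granting read-only access to the cluster-scoped
# "nodes" resource. RBAC controls API access, not pod placement.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: node-reader  # placeholder name
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get", "list", "watch"]
```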

I appreciate this and will dig a bit deeper. Hoping the API references come soon too. The point of all of this is to ensure I can set up a bunch of nodes with specific resources. Internally, I want workloads to be mapped to the instance type best suited to them, taking the guesswork out of what ought to be scheduled where as projects and namespaces are created. In essence, I'm hoping to enforce some policy that will keep things on the right machines. With the labels, my hope is also to monitor nodes by their label in Prometheus and to scale nodes up or down when certain thresholds are reached.
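On the Prometheus side, one common pattern (assuming kube-state-metrics is installed; the exact metric and label names depend on its version and configuration) is to join node metrics against `kube_node_labels` so you can aggregate by your own label:

```promql
# Sketch: total allocatable memory per workload-type label value, assuming
# kube-state-metrics exposes the node label via kube_node_labels (newer
# versions may require it to be allowlisted via --metric-labels-allowlist).
sum by (label_workload_type) (
  kube_node_status_allocatable{resource="memory", unit="byte"}
  * on (node) group_left(label_workload_type)
  kube_node_labels
)
```

Queries like this can then back alerting or autoscaling thresholds per node group.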

I’ve worked on what sounds like a similar issue in several cases and had quite a bit of luck (one complex case was mapping GPU workloads across various instance types for ML, with autoscaling and allocation). There are usually quite a few ways you can go; some can be very complex, while others are very simple if you buy in on k8s principles :slight_smile: Feel free to hit me up on the Rancher users Slack if you want to go a bit deeper into what you’re trying to do. I’m josmo there as well :slight_smile:

Thanks @josmo. I will look for you on Slack as I get further along. This seems like something Rancher and Kubernetes could solve well. I need to experiment a bit first. Appreciate the insight and help. :slight_smile: