Remove role from node

Hello there.

How is it possible to remove a role from a specific node (etcd / controlplane / worker)? I managed to remove the labels, but this does not remove the components from the nodes.

I also tried redeploying the rancher-agent with the corresponding flags. That didn’t work either.

Is there a way to remove roles without having to delete / purge the node?

I’m grateful for any input.

I realize that this is a very old post, but I have the exact same question, 3 years later.

We are currently running a very small cluster with just 2 nodes, each sharing all 3 roles (etcd, controlplane, and worker). We want to build out the cluster so that it more closely resembles a proper, production-ready cluster.

The current plan is:

  1. Add 3 new nodes with the 2 system roles (etcd and controlplane).
  2. Add 1 new node with the worker role.
  3. Evacuate the system roles (etcd and controlplane) from the original 2 hosts.

The idea is that, once the process is complete, we will have 3 hosts sharing the etcd and controlplane roles and 3 separate hosts with the worker role.

Is there a way to remove the system roles from our existing nodes without completely removing the nodes from the cluster?

My understanding is that removing them from the cluster is the way to do it, but you can drain them and do one at a time as you add the others, to keep from losing anything. You can also re-add them after they’ve been removed and wiped sufficiently clean (I’m not sure exactly what that requires; some of my Rancher uninstall attempts made things worse, and I just went back to an older copy of the VM’s hard drive for simplicity).

Thank you for the reply, @wcoateRR.

I have been looking around for more information, and it seems like it should be possible to remove a role from an RKE node without completely removing the node. Unfortunately, everything I have found so far says that it requires editing cluster.yml, and I cannot, for the life of me, find where that file lives.
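For what it’s worth, when an RKE cluster is managed directly with the `rke` CLI, `cluster.yml` lives in whatever directory `rke up` was originally run from, and the roles are simply a list on each entry under the `nodes` key. A rough sketch of dropping the system roles from an existing node might look like this (the address and user are placeholders):

```yaml
nodes:
  # before: the node carried all three roles
  # - address: 10.0.0.31
  #   user: rancher
  #   role: [etcd, controlplane, worker]
  # after: only the worker role remains
  - address: 10.0.0.31
    user: rancher
    role: [worker]
```

Running `rke up` again would then reconcile the cluster against the edited file. For a cluster provisioned through Rancher, though, this configuration appears to be kept internally by Rancher, which would explain why the file is nowhere to be found on the nodes.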

If I Edit the cluster in Cluster Manager, there is an option to Edit as YAML, but the YAML presented does not have a nodes key that I can see.

I can view a given node’s configuration as YAML and see the key:value pairs that represent the roles, but it seems like this is something that should be controlled at the cluster level and pushed downstream, rather than edited at the node level and then pushed upstream to the cluster.
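As far as I can tell, the role markers visible in a node’s YAML are just ordinary Kubernetes node labels, roughly like the following (exact keys may vary by version):

```yaml
metadata:
  labels:
    node-role.kubernetes.io/controlplane: "true"
    node-role.kubernetes.io/etcd: "true"
    node-role.kubernetes.io/worker: "true"
```

That would match the earlier observation that removing the labels changes only how the node is reported, not which components actually run on it.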

If I can’t find anything more concrete then I may just have to go the route of completely removing and then re-adding the original nodes one at a time.

From what I can tell, Rancher tracks roles for nodes so it can talk to them properly, but after that it mainly just uses the Kubernetes API. So while I can see how, from a tracking standpoint, pushing the change downstream makes more sense, I’m not sure it’s a “push downstream” sort of command; it could just as well need to happen on the node. That’s also where, with how transient Kubernetes nodes are supposed to be, it just makes more sense to me to re-image in between: it’s not a lot of work now, and if something lingering gets left behind, I’d easily put far more time into figuring it out (and maybe end up needing to re-image anyway).

It feels like a cop-out, but it also comes down to using the functions that I know the devs expect users to use, versus bypassing them to do things that would work but require more internal knowledge than people who aren’t the devs tend to have.

Thank you again for the feedback, @wcoateRR.

Unfortunately, we are using Local Persistent Storage for some workloads, so our nodes are not as transient as they could be. We’re also in a situation where a 3rd-party hosting provider is responsible for the nodes at the OS level while we are responsible for everything K8s-related running on them, so re-imaging is more difficult than it would be if we controlled everything.

With that said, it seems like completely removing the node and then re-adding it is the way to go.

Thanks again for your input.