As a DevOps engineer running a relatively large cluster, there comes a point when nodes get named and sorted weirdly.
When upgrading nodes by simply cordoning, draining, and then safely deleting them, everything works as expected.
But if I just want to safely shut down a node that I no longer want participating in the cluster, this is not really possible.
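For context, by cordon and drain I mean roughly the following (a minimal sketch with the official kubernetes Python client; a real drain, like kubectl drain, also handles DaemonSets and PodDisruptionBudget retries, and older client versions name the eviction body V1beta1Eviction):

```python
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

NODE = "worker2"

# Cordon: mark the node unschedulable so nothing new lands on it
core.patch_node(NODE, {"spec": {"unschedulable": True}})

# Drain: evict every pod still scheduled on the node
pods = core.list_pod_for_all_namespaces(field_selector=f"spec.nodeName={NODE}")
for pod in pods.items:
    eviction = client.V1Eviction(
        metadata=client.V1ObjectMeta(
            name=pod.metadata.name, namespace=pod.metadata.namespace))
    core.create_namespaced_pod_eviction(
        name=pod.metadata.name, namespace=pod.metadata.namespace, body=eviction)
```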
Example:
- worker1
- worker2
- worker3
I want to scale down to 2 worker nodes, and I want worker2 to be deleted.
If the node pool scale-down is used, it will just delete worker3.
Of course, I shouldn't care and could just look at the last created node. But that's a pretty hard task if you have
- worker1
- …
- worker10
and Rancher deletes worker1 instead of worker10 because it gets completely confused by the naming.
What would be awesome is something like a node pool scale-down delete priority.
Going back to the worker1 through worker3 example: I could cordon and drain worker2, and the priority could be to delete drained nodes first instead of healthy nodes that might still be running deployments.
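Purely as a sketch of what I mean (this is not an existing Rancher API; pick_node_to_delete and its ranking are hypothetical), the selection could work something like:

```python
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

def pod_count(node_name: str) -> int:
    # Pods currently scheduled on the node
    return len(core.list_pod_for_all_namespaces(
        field_selector=f"spec.nodeName={node_name}").items)

def pick_node_to_delete(node_names):
    # Hypothetical scale-down priority: cordoned (unschedulable) nodes win,
    # then the node running the fewest pods; name order is never considered.
    def priority(name):
        node = core.read_node(name)
        cordoned = bool(node.spec.unschedulable)
        return (0 if cordoned else 1, pod_count(name))
    return min(node_names, key=priority)

# With worker2 cordoned and drained, this would pick worker2, not worker3.
print(pick_node_to_delete(["worker1", "worker2", "worker3"]))
```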
Is this already implemented? I couldn't find it.