I want to change the roles of a node from worker to worker,etcd. (Yes, it is a dev setup.) It is a custom managed cluster, so the node was added with the following command:

sudo docker run -d --privileged --restart=unless-stopped --net=host -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run rancher/rancher-agent:v2.3.5 --server https://rancher.XXX:443 --token XXX --ca-checksum XXX --worker
How can I do this? I assume it is documented somewhere, but I could not find it, sorry.
What worked was simply running the command again with the desired role parameters. But then two agent instances are running. Which one should be stopped? I stopped the new one after a few seconds, once the UI showed the changes. But maybe I should have stopped the old one before starting the new one?
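Roughly, the second run would be the original registration command with the extra role flag added (token and checksum values are placeholders, as above; --etcd and --worker are the standard rancher-agent role flags):

sudo docker run -d --privileged --restart=unless-stopped --net=host -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run rancher/rancher-agent:v2.3.5 --server https://rancher.XXX:443 --token XXX --ca-checksum XXX --etcd --worker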
In the end I recreated my dev cluster anyway, because etcd was in an error state the whole time. The new cluster with 3 etcd instances seems to run much more stably. (Scaling from 1 to 3 was the original idea.)
Hi, how did you solve the problem?
I now have a node used for etcd, and I want to change it to worker only. I don't know how to do this without deleting the node. Thank you.