Possible to remove cluster from 2.4.8 server and import it on a fresh 2.5 server?


Due to a wrong installation method (single node), we would like to migrate our existing Kubernetes cluster to a newer, HA Rancher-managed setup.

Can someone tell me if it’s safe to do the following:

  1. remove (previously imported) cluster from our 2.4.8 single-node rancher installation
  2. register this cluster again with our new 2.5 Rancher installation (itself running on a managed Kubernetes cluster)?
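For reference, step 2 is just the normal imported-cluster flow. A minimal sketch (the registration URL below is a placeholder for whatever the new Rancher UI generates for you under “Import Existing Cluster”):

```shell
# On the new 2.5 Rancher server, create the cluster entry via
# "Import Existing Cluster"; Rancher generates a registration
# manifest URL for it.

# On the cluster itself, apply that manifest. This deploys the
# cattle-cluster-agent, which connects back to the new server.
# <REGISTRATION_URL> is a placeholder for the URL Rancher shows you.
kubectl apply -f <REGISTRATION_URL>

# Watch the agent come up and connect:
kubectl -n cattle-system get pods
```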

We already tried this with our development cluster and it worked fine. The only things we had to do afterwards were:

  • create user/admin accounts again
  • reassign all namespaces to the corresponding rancher projects
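The namespace reassignment can be done in the UI, but for many namespaces it may be quicker via kubectl, since Rancher tracks a namespace’s project through an annotation. A sketch, assuming your Rancher version uses the `field.cattle.io/projectId` annotation (the IDs below are placeholders for the new cluster/project IDs):

```shell
# Point a namespace at a project on the new Rancher installation.
# c-xxxxx:p-xxxxx is a placeholder: the cluster ID and project ID
# as shown in the new Rancher UI/URL bar.
kubectl annotate namespace my-namespace \
  field.cattle.io/projectId=c-xxxxx:p-xxxxx --overwrite
```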

Would be nice to get some more opinions on this, right now it looks more or less safe :smiley:

Also, does someone know what happens if one Kubernetes cluster is registered/imported into two Rancher instances at the same time (e.g. 2.4.8 and 2.5 simultaneously)? I know it’s probably a really bad idea - I just want to get a better understanding if I’m wrong :slight_smile:

Ok, we did it #yolo.

Was no problem, worked fine.

Did you ever get any feedback on the “single K8s cluster and more than one Rancher UI” question? We have a native K8s cluster that is being managed by Rancher and would like to add the same cluster to a second Rancher instance for failover, should the main Rancher instance go poof!

You cannot register one cluster to multiple servers.

Actually you can; we did this last week, since our old Rancher installation (hosted on a managed Kubernetes cluster, which is not recommended) didn’t respond anymore, even with help from DigitalOcean support.

We added another Rancher installation (without removing the old one). Everything works, but the project-namespace relations were broken and we had to assign all namespaces to newly created projects again. Also, our Rancher alerting is somehow still pointing to the old Rancher installation, even though we changed all config files that referenced it.

But yeah, it works more or less :stuck_out_tongue: (BUT ITS NOT RECOMMENDED!)

That doesn’t sound like two servers actively managing it. There are controllers running in the cluster, and others watching it from the server side, that will fight over what the correct state is.

Yeah, we are not using both at the same time (one of the installations is not reachable anymore, either).

Is there any way to remove an old Rancher installation’s components from the cluster, without having access to that installation?
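If the old Rancher server is gone, one option is to strip the Rancher-installed components directly from the cluster with kubectl. A rough sketch of the manual route (resource names are from a typical imported-cluster setup; Rancher also publishes a cleanup script, `rancher/rancher-cleanup` on GitHub, which is more thorough - check it before deleting anything by hand):

```shell
# Remove the agent workloads deployed by the old Rancher server:
kubectl -n cattle-system delete deployment cattle-cluster-agent

# Remove the Rancher system namespace (stuck finalizers may need
# to be cleared manually if this hangs):
kubectl delete namespace cattle-system

# List leftover Rancher RBAC objects for manual review before
# deleting them:
kubectl get clusterroles,clusterrolebindings | grep cattle
```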