Recover access to orphaned cluster

So I lost my Rancher server badly and it’s unrecoverable. I had a cluster set up with that server. The cluster is still running and I can access the workload frontends. I was able to recover kubectl access with the help of a saved kubeconfig, so I thought I’d set up a new Rancher server and import the cluster into that, only to find out I cannot move cluster namespaces into Rancher projects so that they become visible in the UI. Also, it seems the import changed the credentials (at least a new user account was created), so kubectl access doesn’t work any more.

I have SSH access to the nodes and am able to see the containers through the Docker CLI. Is there a way I can get the credentials back to configure kubeconfig again?

I’m in the same boat as you. I lost my control node in a single-node install, installed a new one and can’t figure out how to bring my orphaned cluster in. Did you figure it out @iben12?

Hi,
Unfortunately not. I ended up (after saving as many manifests as possible while I still had kubectl access) destroying the whole cluster and rebuilding it from scratch.
Sorry for you, man.

I just found these posts after losing a Rancher installation. Clusters are still up, but no obvious way to import them into a new Rancher. Still got SSH access to the nodes?

Go to your new Rancher, choose “Add cluster”, “Import existing cluster”. After entering a name and pressing “Create”, it will print the commands for importing the existing cluster.

Now, SSH into the head node and check what containers are running (we use Docker, so Docker example below):

# Login with SSH
ssh root@node1
Last login: Fri Feb  7 16:41:52 2020 from xxx.xxx.xxx.xxx

# Find container running the cattle node agent
docker ps | grep k8s_agent_cattle-node-agent
ff6141bbe174        87468cfad9b5                           "run.sh"                 About a minute ago   Up About a minute                       k8s_agent_cattle-node-agent-vklg9_cattle-system_...

# Start a shell within this container
docker exec -it ff6141bbe174 /bin/sh

# Execute the join commands
sh-4.4# curl --insecure -sfL https://<rancher ip here>/v3/import/<unique token from rancher, see import command>.yaml | kubectl apply -f -
clusterrole.rbac.authorization.k8s.io/proxy-clusterrole-kubeapiserver unchanged
clusterrolebinding.rbac.authorization.k8s.io/proxy-role-binding-kubernetes-master unchanged
namespace/cattle-system unchanged
serviceaccount/cattle unchanged
clusterrolebinding.rbac.authorization.k8s.io/cattle-admin-binding unchanged
secret/cattle-credentials-4ba2560 created
clusterrole.rbac.authorization.k8s.io/cattle-admin unchanged
deployment.apps/cattle-cluster-agent configured
daemonset.apps/cattle-node-agent configured

Too late for OP, but hopefully someone can benefit from this.


I tried the above procedure with no luck. The imported cluster is stuck in the “Waiting for API to be available” state.

Hi,
@superseb wrote a solution: https://gist.github.com/superseb/076f20146e012f1d4e289f5bd1bd4971
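As far as I can tell, the gist recovers the admin kubeconfig from the cluster state stored in the `kube-system` namespace, using the kubelet’s own certificates on a controlplane node. One detail worth noting: the recovered kubeconfig’s `server:` line still points at the dead Rancher install, so it has to be rewritten to hit the controlplane node’s kube-apiserver directly on port 6443. A minimal sketch of just that rewrite step (the node IP, hostname, and file path below are made up for illustration):

```shell
# Hypothetical example: a recovered kubeconfig whose server line still
# points at the lost Rancher server (names/IPs made up for illustration).
NODE_IP=1.2.3.4
cat > /tmp/kubeconfig_admin.yaml <<'EOF'
apiVersion: v1
clusters:
- cluster:
    server: https://old-rancher.example.com/k8s/clusters/c-abcde
EOF

# Point it at the controlplane node's kube-apiserver instead (port 6443).
sed -i "s_server:.*_server: https://${NODE_IP}:6443_" /tmp/kubeconfig_admin.yaml
grep 'server:' /tmp/kubeconfig_admin.yaml
# now reads: server: https://1.2.3.4:6443
```

After that, `kubectl --kubeconfig /tmp/kubeconfig_admin.yaml get nodes` should work against the node directly (assuming the apiserver certificate covers the address you use), and the cluster can be re-imported as described above.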