Hi,
I’ve added a cluster running an old version of Kubernetes to Rancher. When I realized that version isn’t supported, I removed the cluster from Rancher, but in doing so I lost most of my functionality on the cluster. From a management node with a direct connection to the cluster (not through Rancher), kubectl only worked at a very high level, like getting Pods and Nodes; I couldn’t get any logs from Pods, for example. That threw a Permission denied error naming kube-apiserver. Apparently this was related to the service account created/used by Rancher having some kind of exclusivity on the cluster.
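To make the split concrete, it looked roughly like this from the management node (I’m reconstructing from memory rather than pasting exact output, and the pod name is just a placeholder):

```
# Basic reads still worked:
kubectl get nodes
kubectl get pods --all-namespaces

# Anything that goes through the kubelet, like logs, failed with the
# permission error naming kube-apiserver:
kubectl logs <pod-name> -n kube-system

# Checking my own user's RBAC on the logs subresource is a quick way to
# see whether the block is on my account or further down the chain:
kubectl auth can-i get pods/log
```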
Adding the cluster back to Rancher, even though Rancher provides no functionality for it, restored the service account, and everything went back to normal.
My question is: how can a service account that Rancher creates when adding a cluster have such a disastrous effect when it is removed?
Has anyone experienced this issue, or can anyone point to the root cause and to a way of removing a cluster from Rancher without breaking access to said cluster?
This is more of a workaround than an answer, but: if you download the kubeconfig file for the cluster from Rancher, does that give you all the expected access from external kubectl, and does it keep working after the cluster is removed from Rancher?
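For example, assuming you saved the downloaded file as rancher-cluster.yaml (the name is just an example):

```
# Does the downloaded kubeconfig reach the cluster at all?
kubectl --kubeconfig rancher-cluster.yaml get nodes

# What is that identity actually allowed to do?
# (needs a reasonably recent kubectl for --list)
kubectl --kubeconfig rancher-cluster.yaml auth can-i --list
```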
Looking into RBAC in Kubernetes is still on my future todo list, so I can’t be of much help other than the workaround suggestion.
Hi,
Thanks for your reply. No, that doesn’t work; I guess I didn’t explain it clearly. The clusters we use are not built/provisioned by Rancher but built with RKE on a management node. So when I removed the cluster via the web interface, Rancher cleaned up parts of the cluster and in doing so removed access to it beyond a certain level. From my local machine, with the kubeconfig pointing to Rancher, I of course had no access anymore, since the cluster was removed.
From a management node with ‘admin’ access to the cluster, since it was built from there, kubectl throws the error Permission denied for ‘kube-apiserver’.
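I haven’t confirmed this is what Rancher’s cleanup removed, but the error naming kube-apiserver smells like the apiserver-to-kubelet authorization path rather than my own user: when kubelets run with webhook authorization, the apiserver’s client user needs RBAC on the nodes/* subresources, and if that binding disappears, logs and exec break while plain get calls keep working. A sketch of the commonly used role and binding (the user name kube-apiserver is an assumption here; it has to match the CN in the apiserver’s kubelet client certificate):

```
kubectl apply -f - <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:kube-apiserver-to-kubelet
rules:
  # Subresources the apiserver needs when proxying to kubelets
  # (logs, exec, stats, metrics).
  - apiGroups: [""]
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
    verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  # Must match the CN of the apiserver's kubelet client cert;
  # kube-apiserver is an assumption, check your cluster's certs.
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kube-apiserver
EOF
```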
That other thread mentioning that users are stored in the local cluster of the Rancher installation for kubectl access might be a breadcrumb. Though you’d think the downstream cluster would still have to have a local admin somehow.
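If you want to chase that breadcrumb, it should be visible directly on the downstream cluster; something like the checks below would show whether an admin-level binding or any Rancher leftovers survived the removal (cattle-system is the namespace Rancher normally uses for its agent pieces, so adjust if yours differs):

```
# Which subjects still hold cluster-admin?
kubectl get clusterrolebinding cluster-admin -o yaml

# Any Rancher-related bindings or service accounts left behind?
kubectl get clusterrolebindings | grep -i cattle
kubectl get serviceaccounts -n cattle-system
```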