Why is my Kubernetes cluster inaccessible when I quit Rancher?

I set up a Kubernetes cluster in Rancher 2.0 alpha21, and everything seemed to be working as expected. I was running the Rancher server container on my laptop, and the Rancher agent was installed on a fresh installation of Ubuntu on a machine in our server room. When I stopped the Rancher container on my laptop, kubectl could no longer connect to my cluster; it complained about a certificate error.

Is this the expected behavior? Why does a Rancher container need to be running for kubectl to work?

The generated kubeconfig file points to the server container, to support our access control and the parts of RBAC that are enforced by Rancher (e.g. access to namespaces via projects).

Thanks Vincent, for the reply here as well as on the IRC channel. It’s much appreciated!

Do you know of any way to bypass this though? I’d like to be able to use kubectl without needing the Rancher image. Is this possible?

I’ve heard this type of request once or twice before, but I’m not sure exactly why people want it in the first place. Is it just not wanting to run a separate container/machine? We don’t really document it yet, but the server container can be run as a k8s deployment inside the cluster it’s managing.
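Since this isn’t documented yet, the following is only a hypothetical sketch of what an in-cluster Deployment for the server container might look like; the image name/tag, namespace, and port are assumptions, not an official manifest.

```shell
# Sketch: run the Rancher server as a Deployment inside the cluster it manages.
# Image, namespace, and port below are illustrative assumptions.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rancher-server        # illustrative name
  namespace: kube-system      # assumed namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rancher-server
  template:
    metadata:
      labels:
        app: rancher-server
    spec:
      containers:
        - name: rancher
          image: rancher/rancher:latest   # image/tag is an assumption
          ports:
            - containerPort: 443
EOF
```

You would still need to expose it (Service/Ingress) and point your kubeconfig at the new address, but the server would then survive the laptop going away.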

You can create a service account in the cluster and use it directly, but you lose the access control/RBAC integration and the ability to manage groups of namespaces as projects that Rancher adds.
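For emergency admin access along those lines, a minimal sketch might look like this. The account name "emergency-admin" and the API server address are placeholders I’ve made up for illustration; this relies on the token secret that Kubernetes auto-creates for service accounts.

```shell
# Create a cluster-admin service account ("emergency-admin" is illustrative)
kubectl create serviceaccount emergency-admin -n kube-system
kubectl create clusterrolebinding emergency-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:emergency-admin

# Pull the token out of the auto-created secret
SECRET=$(kubectl -n kube-system get sa emergency-admin \
  -o jsonpath='{.secrets[0].name}')
TOKEN=$(kubectl -n kube-system get secret "$SECRET" \
  -o jsonpath='{.data.token}' | base64 -d)

# Build a context that talks straight to the API server, bypassing Rancher.
# Replace <api-server> with your control plane node's address.
kubectl config set-credentials emergency-admin --token="$TOKEN"
kubectl config set-cluster direct \
  --server=https://<api-server>:6443 --insecure-skip-tls-verify=true
kubectl config set-context direct --cluster=direct --user=emergency-admin
kubectl config use-context direct
```

With that context active, kubectl works even when the Rancher server container is down, at the cost of the project/RBAC integration mentioned above.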

So I don’t really see the appeal here, other than emergency admin access. If you just want a bare Kubernetes cluster and none of the additional management, you can run rke directly to create a cluster.
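As a rough sketch of the rke route (node address and SSH user are placeholders):

```shell
# Minimal cluster.yml for a single node carrying all roles
cat > cluster.yml <<'EOF'
nodes:
  - address: 192.168.1.10   # placeholder node IP
    user: ubuntu            # placeholder SSH user
    role: [controlplane, etcd, worker]
EOF

# Provision the cluster; rke writes kube_config_cluster.yml alongside cluster.yml
rke up --config cluster.yml

# Use the generated kubeconfig directly -- no Rancher server involved
kubectl --kubeconfig kube_config_cluster.yml get nodes
```

The kubeconfig rke generates points straight at the API server, so nothing outside the cluster has to stay running.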

Well, the main thing is that it’s running outside of the cluster. If the node running Rancher fails, we’re in trouble, which invalidates one of the main advantages of running a cluster in the first place. Worse, I can’t recover access to the cluster, because when that node fails, so does my kubectl configuration.

Running these services within Kubernetes would solve that problem, though. I didn’t know that was possible.