Kubernetes Dashboard shows a blank page

I’m trying to deploy a Kubernetes cluster, but the K8s dashboard shows nothing but a blank page.

The kubernetes-dashboard container is constantly being restarted by the system:

admin@kubernetes-01:~$ docker ps -a | grep dash
9eaa50938a1b        gcr.io/google_containers/kubernetes-dashboard-amd64:v1.5.0   "/dashboard --port=90"   About a minute ago   Exited (1) About a minute ago                       k8s_kubernetes-dashboard.4c14cf00_kubernetes-dashboard-2492700511-bkpb2_kube-system_9ce5bbae-2f94-11e7-82e8-0239833c796d_dee14982
8bbe69fbd1c6        gcr.io/google_containers/pause-amd64:3.0                     "/pause"                 4 days ago           Up 4 days                                           k8s_POD.d8dbe16c_kubernetes-dashboard-2492700511-bkpb2_kube-system_9ce5bbae-2f94-11e7-82e8-0239833c796d_b2dad5b3
a2542c3fb1a3        gcr.io/google_containers/pause-amd64:3.0                     "/pause"                 4 days ago           Exited (0) 4 days ago                               k8s_POD.d8dbe16c_kubernetes-dashboard-2492700511-bkpb2_kube-system_9ce5bbae-2f94-11e7-82e8-0239833c796d_1c47d646

The logs from kubernetes-dashboard show:

admin@kubernetes-01:~$ docker logs 9eaa50938a1b
Using HTTP port: 9090
Creating API server client for https://10.43.0.1:443
Error while initializing connection to Kubernetes apiserver. This most likely means that the cluster is misconfigured (e.g., it has invalid apiserver certificates or service accounts configuration) or the --apiserver-host param points to a server that does not exist. Reason: the server has asked for the client to provide credentials
Refer to the troubleshooting guide for more information: https://github.com/kubernetes/dashboard/blob/master/docs/user-guide/troubleshooting.md
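
The error mentions invalid apiserver certificates or service account configuration, so I assume the token the dashboard mounts is being rejected. In case it is useful for diagnosis, these are the kubectl checks I can run (assuming kubectl on this host is pointed at the cluster; the deployment name is inferred from the pod name and the secret name is taken from the kubelet log further down):

# Service accounts and token secrets in kube-system
kubectl -n kube-system get serviceaccounts
kubectl -n kube-system get secrets

# Which service account the dashboard deployment actually uses
kubectl -n kube-system describe deployment kubernetes-dashboard

# The token secret the dashboard pod mounts, per the kubelet log
kubectl -n kube-system get secret io-rancher-system-token-1ch6h -o yaml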

The logs from kubelet show:

admin@kubernetes-01:~$ docker logs 7e63ad36ab5d --tail 30
I0530 16:41:44.453406   32660 docker_manager.go:2533] Back-off 5m0s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-2492700511-bkpb2_kube-system(9ce5bbae-2f94-11e7-82e8-0239833c796d)
E0530 16:41:44.456331   32660 pod_workers.go:184] Error syncing pod 9ce5bbae-2f94-11e7-82e8-0239833c796d, skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-2492700511-bkpb2_kube-system(9ce5bbae-2f94-11e7-82e8-0239833c796d)"
I0530 16:41:48.875111   32660 operation_executor.go:917] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/9ce5bbae-2f94-11e7-82e8-0239833c796d-io-rancher-system-token-1ch6h" (spec.Name: "io-rancher-system-token-1ch6h") pod "9ce5bbae-2f94-11e7-82e8-0239833c796d" (UID: "9ce5bbae-2f94-11e7-82e8-0239833c796d").
I0530 16:41:49.157853   32660 docker_manager.go:2519] checking backoff for container "kubernetes-dashboard" in pod "kubernetes-dashboard-2492700511-bkpb2"
I0530 16:41:49.159966   32660 docker_manager.go:2533] Back-off 5m0s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-2492700511-bkpb2_kube-system(9ce5bbae-2f94-11e7-82e8-0239833c796d)
E0530 16:41:49.161420   32660 pod_workers.go:184] Error syncing pod 9ce5bbae-2f94-11e7-82e8-0239833c796d, skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-2492700511-bkpb2_kube-system(9ce5bbae-2f94-11e7-82e8-0239833c796d)"
I0530 16:41:52.919990   32660 operation_executor.go:917] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/9cba5772-2f94-11e7-82e8-0239833c796d-default-token-c2fbd" (spec.Name: "default-token-c2fbd") pod "9cba5772-2f94-11e7-82e8-0239833c796d" (UID: "9cba5772-2f94-11e7-82e8-0239833c796d").
I0530 16:41:54.917302   32660 operation_executor.go:917] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/9c976940-2f94-11e7-82e8-0239833c796d-io-rancher-system-token-1ch6h" (spec.Name: "io-rancher-system-token-1ch6h") pod "9c976940-2f94-11e7-82e8-0239833c796d" (UID: "9c976940-2f94-11e7-82e8-0239833c796d").
I0530 16:41:55.141544   32660 docker_manager.go:2519] checking backoff for container "tiller" in pod "tiller-deploy-3991468440-vd2qt"
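
The same restart loop is visible through kubectl as well, which also surfaces the pod events (pod name taken from the listing above; this again assumes kubectl is configured on the host):

# Restart count and CrashLoopBackOff status
kubectl -n kube-system get pods | grep dashboard

# The Events section at the bottom shows why the container keeps exiting
kubectl -n kube-system describe pod kubernetes-dashboard-2492700511-bkpb2

# Logs from the previous, crashed container instance
kubectl -n kube-system logs kubernetes-dashboard-2492700511-bkpb2 --previous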

I am running Rancher 1.6.0. Any ideas on how to fix this? I don’t know if it is related, but the IP address “10.43.0.1” looks wrong to me: all of the other containers in the kube cluster are in the 10.42.x.x range.
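
One way to check whether 10.43.0.1 is what the cluster itself advertises for the apiserver (assuming kubectl works here); as far as I understand, in Rancher’s Kubernetes setup the service ClusterIP range is separate from the 10.42.x.x container range, so the address is not necessarily wrong:

# The ClusterIP of the kubernetes service is the address the dashboard dials
kubectl get svc kubernetes
# The endpoints behind it should be the real apiserver address(es)
kubectl get endpoints kubernetes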


Has anyone seen this, or does anyone know how to fix it?

Did you find a solution? I am facing the same issue.

I didn’t find a way to fix the problem, so I ended up deleting the environment and creating a new one with new VM hosts. Luckily I was still testing and did not lose any configuration. I think the problem was caused by re-using hosts from a different environment; when I started over with a fresh environment and fresh hosts, everything came up fine.
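
If anyone wants to re-use hosts instead of replacing them, I suspect the old state has to be wiped first. A rough sketch of the cleanup I have in mind; the paths are assumptions based on a typical Rancher/Kubernetes host, and this removes every container and all Kubernetes state on the machine, so double-check before running it:

# Remove all leftover containers and volumes from the old environment
docker rm -f $(docker ps -aq)
docker volume rm $(docker volume ls -q)

# Remove leftover Rancher / Kubernetes state (paths are typical defaults, verify on your host)
sudo rm -rf /var/lib/rancher /var/lib/kubelet /var/lib/cni /etc/kubernetes /opt/cni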