"Failed to connect to proxy" error="x509: certificate has expired or is not yet valid"

I was using the built-in self-signed certificates to launch the Rancher server on my Linux desktop as follows:

docker run -d --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher:v2.2.1

I then connected to https://<rancher_server_ip> (the Rancher UI) to create a cluster and add a node. I ran the customized node registration command given by the Rancher UI on a bare-metal machine as follows:

sudo docker run -d --privileged --restart=unless-stopped --net=host -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run rancher/rancher-agent:v2.2.1 --server https://10.145.101.250 --token z6lrblw7bcvb5j2nc6bp4n52wwqwp8nxqw9g9fzd4t8q6hcj8ckh77 --ca-checksum fc0178816bd4261e3e77556eb323410989690b72d3d11afa6a158235bba3ad75 --etcd --controlplane

On the bare metal machine, I kept getting the following error messages:

time="2019-04-25T19:01:10Z" level=info msg="Connecting to wss://10.145.101.250/v3/connect/register with token z6lrblw7bcvb5j2nc6bp4n52wwqwp8nxqw9g9fzd4t8q6hcj8ckh77"
time="2019-04-25T19:01:10Z" level=info msg="Connecting to proxy" url="wss://10.145.101.250/v3/connect/register"
time="2019-04-25T19:01:10Z" level=error msg="Failed to connect to proxy" error="x509: certificate has expired or is not yet valid"
time="2019-04-25T19:01:10Z" level=error msg="Failed to connect to proxy" error="x509: certificate has expired or is not yet valid"

This lasted for about an hour before the agent finally connected and the Kubernetes node came up.

Any ideas?

Thanks!

The time and/or timezone is wrong on the server and/or the node, so the agent thinks the certificate is not yet valid because it was issued in the "future".
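
For example, on a systemd-based host you can compare the clocks on the Rancher server and the node and turn NTP synchronization back on if they have drifted (this assumes systemd-timesyncd or chrony is available on the machines):

# Show current time, timezone, and whether NTP synchronization is active
timedatectl status

# Enable NTP synchronization if the clock has drifted
sudo timedatectl set-ntp true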

Much appreciated. It works now.

Unfortunately, I got the following errors after I created the Kubernetes cluster with an etcd/controlplane node and a worker node:

E0426 23:06:09.730429 18456 pod_workers.go:190] Error syncing pod 154d1181-6877-11e9-9a2f-ecf4bbc7e58c ("cattle-cluster-agent-b7444dcd8-4z745_cattle-system(154d1181-6877-11e9-9a2f-ecf4bbc7e58c)"), skipping: failed to "StartContainer" for "cluster-register" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=cluster-register pod=cattle-cluster-agent-b7444dcd8-4z745_cattle-system(154d1181-6877-11e9-9a2f-ecf4bbc7e58c)"
E0426 23:06:18.730630 18456 pod_workers.go:190] Error syncing pod 0a75832e-6877-11e9-9a2f-ecf4bbc7e58c ("metrics-server-58bd5dd8d7-2q5gw_kube-system(0a75832e-6877-11e9-9a2f-ecf4bbc7e58c)"), skipping: failed to "StartContainer" for "metrics-server" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=metrics-server pod=metrics-server-58bd5dd8d7-2q5gw_kube-system(0a75832e-6877-11e9-9a2f-ecf4bbc7e58c)"
I0426 23:06:18.908109 18456 kubelet.go:1953] SyncLoop (PLEG): "kube-dns-58bd5b8dd7-sfpgp_kube-system(077ac947-6877-11e9-9a2f-ecf4bbc7e58c)", event: &pleg.PodLifecycleEvent{ID:"077ac947-6877-11e9-9a2f-ecf4bbc7e58c", Type:"ContainerDied", Data:"b7232f01db010ac89151815a1ccf60e6eec561b8a217311eef59a1053792d3fc"}
I0426 23:06:18.908240 18456 kubelet.go:1953] SyncLoop (PLEG): "kube-dns-58bd5b8dd7-sfpgp_kube-system(077ac947-6877-11e9-9a2f-ecf4bbc7e58c)", event: &pleg.PodLifecycleEvent{ID:"077ac947-6877-11e9-9a2f-ecf4bbc7e58c", Type:"ContainerStarted", Data:"8cd163d96e9c2a9f5d3df7605aa3eee219245eddd4a3847b44a559bfa7da0b3a"}
I0426 23:06:18.908277 18456 kubelet.go:1953] SyncLoop (PLEG): "kube-dns-58bd5b8dd7-sfpgp_kube-system(077ac947-6877-11e9-9a2f-ecf4bbc7e58c)", event: &pleg.PodLifecycleEvent{ID:"077ac947-6877-11e9-9a2f-ecf4bbc7e58c", Type:"ContainerStarted", Data:"807bd55c60904fd1c646fb5fe32cddf0b5b2898acff66824f739eff2bc8cbf9b"}
I0426 23:06:21.988108 18456 kubelet.go:1953] SyncLoop (PLEG): "kube-dns-58bd5b8dd7-sfpgp_kube-system(077ac947-6877-11e9-9a2f-ecf4bbc7e58c)", event: &pleg.PodLifecycleEvent{ID:"077ac947-6877-11e9-9a2f-ecf4bbc7e58c", Type:"ContainerDied", Data:"12cf5c02e12b7cdc5f464f50553a3404279295f592570c5a82651683409c21e8"}
I0426 23:06:23.038522 18456 kubelet.go:1953] SyncLoop (PLEG): "kube-dns-58bd5b8dd7-sfpgp_kube-system(077ac947-6877-11e9-9a2f-ecf4bbc7e58c)", event: &pleg.PodLifecycleEvent{ID:"077ac947-6877-11e9-9a2f-ecf4bbc7e58c", Type:"ContainerStarted", Data:"625f76668fef9a1a7fbcea02886ebc9210b8947f265735a9614fc79c72a6c4d9"}
E0426 23:06:23.730627 18456 pod_workers.go:190] Error syncing pod 154d1181-6877-11e9-9a2f-ecf4bbc7e58c ("cattle-cluster-agent-b7444dcd8-4z745_cattle-system(154d1181-6877-11e9-9a2f-ecf4bbc7e58c)"), skipping: failed to "StartContainer" for "cluster-register" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=cluster-register pod=cattle-cluster-agent-b7444dcd8-4z745_cattle-system(154d1181-6877-11e9-9a2f-ecf4bbc7e58c)"
E0426 23:06:29.730687 18456 pod_workers.go:190] Error syncing pod 0a75832e-6877-11e9-9a2f-ecf4bbc7e58c ("metrics-server-58bd5dd8d7-2q5gw_kube-system(0a75832e-6877-11e9-9a2f-ecf4bbc7e58c)"), skipping: failed to "StartContainer" for "metrics-server" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=metrics-server pod=metrics-server-58bd5dd8d7-2q5gw_kube-system(0a75832e-6877-11e9-9a2f-ecf4bbc7e58c)"
I0426 23:06:30.731726 18456 prober.go:111] Readiness probe for "kube-dns-58bd5b8dd7-sfpgp_kube-system(077ac947-6877-11e9-9a2f-ecf4bbc7e58c):kubedns" failed (failure): Get http://10.42.1.3:8081/readiness: dial tcp 10.42.1.3:8081: connect: connection refused
E0426 23:06:37.730699 18456 pod_workers.go:190] Error syncing pod 154d1181-6877-11e9-9a2f-ecf4bbc7e58c ("cattle-cluster-agent-b7444dcd8-4z745_cattle-system(154d1181-6877-11e9-9a2f-ecf4bbc7e58c)"), skipping: failed to "StartContainer" for "cluster-register" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=cluster-register pod=cattle-cluster-agent-b7444dcd8-4z745_cattle-system(154d1181-6877-11e9-9a2f-ecf4bbc7e58c)"
I0426 23:06:40.731761 18456 prober.go:111] Readiness probe for "kube-dns-58bd5b8dd7-sfpgp_kube-system(077ac947-6877-11e9-9a2f-ecf4bbc7e58c):kubedns" failed (failure): Get http://10.42.1.3:8081/readiness: dial tcp 10.42.1.3:8081: connect: connection refused
E0426 23:06:42.730414 18456 pod_workers.go:190] Error syncing pod 0a75832e-6877-11e9-9a2f-ecf4bbc7e58c ("metrics-server-58bd5dd8d7-2q5gw_kube-system(0a75832e-6877-11e9-9a2f-ecf4bbc7e58c)"), skipping: failed to "StartContainer" for "metrics-server" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=metrics-server pod=metrics-server-58bd5dd8d7-2q5gw_kube-system(0a75832e-6877-11e9-9a2f-ecf4bbc7e58c)"

Any ideas?

Thanks!
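
The kubelet log only shows the CrashLoopBackOff back-off messages; the logs and events of the failing pods themselves usually contain the real error. Something like the following (pod names taken from the messages above) should show why cluster-register and metrics-server keep restarting:

kubectl -n cattle-system logs cattle-cluster-agent-b7444dcd8-4z745
kubectl -n cattle-system describe pod cattle-cluster-agent-b7444dcd8-4z745
kubectl -n kube-system logs metrics-server-58bd5dd8d7-2q5gw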

I didn't make my question clear enough. I was asking why the cattle-cluster-agent, metrics-server, and kube-dns pods could not come up into the running state: they were unable to connect to the API server, apparently because of either an unreachable IP/port or an invalid certificate. Thanks!

Here are the logs:

[root@localhost ~]# kubectl logs metrics-server-58bd5dd8d7-xwdrh -n kube-system
I0429 21:21:40.092818 1 serving.go:273] Generated self-signed cert (apiserver.local.config/certificates/apiserver.crt, apiserver.local.config/certificates/apiserver.key)
W0429 21:21:42.279112 1 authentication.go:245] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLE_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
Error: Get https://10.43.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: dial tcp 10.43.0.1:443: connect: no route to host

I was using the flannel network plugin.

Thanks!
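
"no route to host" when reaching the cluster service IP 10.43.0.1:443 often means the pod cannot reach the kube-apiserver through the overlay network, for example because a host firewall is blocking flannel's VXLAN traffic (UDP 8472) between nodes. As a rough check (assuming the default flannel VXLAN backend and firewalld on the nodes):

# Verify the flannel VXLAN interface exists on each node
ip -d link show flannel.1

# See whether firewalld is running and which rules are active (it may be dropping overlay traffic)
sudo firewall-cmd --state
sudo firewall-cmd --list-all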

Any progress with that? I'm now struggling with Rancher 2.4.8 on Ubuntu and TLS termination on an ALB in AWS.