Unable to update expired certs (GUI offline)

Hi all,
I'm very frustrated because after one year the Rancher UI is offline due to expired certs.
I had a Rancher server running from the Docker image v2.3.3. To bring the UI back up, I set the OS date/time back one month. After reading the guide on upgrading Rancher server, I reset the date/time to the current value, upgraded to v2.3.10, and ran:

docker run -d --volumes-from rancher-data --restart=unless-stopped -p 80:80 -p 443:443 --privileged rancher/rancher:v2.3.10
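For reference, the preparatory steps I followed from the upgrade guide were roughly these (assuming the old container is named rancher-server):

docker stop rancher-server
docker create --volumes-from rancher-server --name rancher-data rancher/rancher:v2.3.3
docker pull rancher/rancher:v2.3.10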
But after a few seconds the server fails. Here is the log:
level=info msg="Starting k3s v0.8.0 (f867995f)"
time="2021-01-09T19:05:39.248361500Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --api-audiences=unknown --authorization-mode=Node,RBAC --basic-auth-file=/var/lib/rancher/k3s/server/cred/passwd --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=http://localhost:2379 --insecure-port=0 --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=k3s --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
E0109 19:05:39.252980 26 prometheus.go:138] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
E0109 19:05:39.253444 26 prometheus.go:150] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
E0109 19:05:39.253503 26 prometheus.go:162] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
E0109 19:05:39.253625 26 prometheus.go:174] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
E0109 19:05:39.253660 26 prometheus.go:189] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
E0109 19:05:39.253679 26 prometheus.go:202] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0109 19:05:39.432916 26 genericapiserver.go:315] Skipping API batch/v2alpha1 because it has no resources.
W0109 19:05:39.441205 26 genericapiserver.go:315] Skipping API node.k8s.io/v1alpha1 because it has no resources.
E0109 19:05:39.477473 26 prometheus.go:138] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
E0109 19:05:39.477517 26 prometheus.go:150] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
E0109 19:05:39.477587 26 prometheus.go:162] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
E0109 19:05:39.477788 26 prometheus.go:174] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
E0109 19:05:39.477822 26 prometheus.go:189] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
E0109 19:05:39.477841 26 prometheus.go:202] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
time="2021-01-09T19:05:39.488749833Z" level=info msg="Running kube-scheduler --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --port=10251 --secure-port=0"
time="2021-01-09T19:05:39.502929850Z" level=fatal msg="starting tls server: Get https://localhost:6444/apis/apiextensions.k8s.io/v1beta1/customresourcedefinitions: x509: certificate has expired or is not yet valid"
2021/01/09 19:05:39 [FATAL] k3s exited with: exit status 1

How can I fix this problem? Rancher runs only if I set the OS date/time back. Many thanks.


Have you found a solution to this issue?

This is a known issue with single-node Rancher: serving-cert expired · Issue #32210 · rancher/rancher · GitHub. With newer versions of Rancher, a restart of the container resolves it, but you can use the commands below to fix it manually.

Also, Rancher v2.3.x has been EOL since April 7th, 2021, so I would recommend upgrading.
https://www.rancher.cn/support-maintenance-terms/
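Before running the fix, you can confirm that the serving certificate really has expired by checking its dates inside the container. This is just a diagnostic sketch: the path comes from the kube-apiserver flags in your log, and it assumes the openssl binary is present in the image.

docker exec -it your_rancher_container_id openssl x509 -noout -enddate -in /var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt

Then run the cleanup commands: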

docker exec -it your_rancher_container_id sh -c "rm -rf /var/lib/rancher/k3s/server/tls/dynamic-cert.json"
docker exec -it your_rancher_container_id sh -c "k3s kubectl --insecure-skip-tls-verify delete secrets -n kube-system k3s-serving"
docker exec -it your_rancher_container_id sh -c "k3s kubectl --insecure-skip-tls-verify delete secrets -n cattle-system serving-cert"
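After removing those, restart the container so Rancher regenerates the serving certificates on startup:

docker restart your_rancher_container_id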