Rebooted VM running Rancher server and k3s won't start / no UI

Rancher was running great on my bare-metal VMware setup (VMware cloud provisioner with direct ESXi provisioning across 3 servers, HA setup with 3 etcd pods, using the RancherOS ISOs) until the VM running the rancher/rancher container was rebooted (perhaps uncleanly). I tried a number of things to resolve it and ended up restoring from the last known-good snapshot, but I am still facing the exact same error.

The only thing I can think of that might be a bit abnormal about this setup is that I am running MetalLB so that I could use some load-balancer capabilities.

The actual k8s cluster is still running fine, but it's headless at the moment since I can't get the Rancher UI to run.

Full log of the rancher/rancher container run is below. It just keeps repeating the last line, "Waiting for k3s to start", so I've truncated it.
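For reference, the log below was captured with roughly the following (the container name "rancher" is just an assumption; substitute whatever docker ps shows for your rancher/rancher container):

$ docker logs -f rancher 2>&1 | tee rancher-server.log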

– Full Log (and pastebin if that’s easier to digest - https://pastebin.com/raw/64bh4CxK )

2020/01/01 18:26:44 [INFO] Rancher version v2.3.3 is starting
2020/01/01 18:26:44 [INFO] Listening on /tmp/log.sock
2020/01/01 18:26:44 [INFO] Rancher arguments {ACMEDomains:[] AddLocal:auto Embedded:false KubeConfig: HTTPListenPort:80 HTTPSListenPort:443 K8sMode:auto Debug:false NoCACerts:false ListenConfig:<nil> AuditLogPath:/var/log/auditlog/rancher-api-audit.log AuditLogMaxage:10 AuditLogMaxsize:100 AuditLogMaxbackup:10 AuditLevel:0 Features:}
2020/01/01 18:26:44 [INFO] Running etcd --data-dir=management-state/etcd
2020-01-01 18:26:44.853828 W | pkg/flags: unrecognized environment variable ETCD_URL_arm64=https://github.com/etcd-io/etcd/releases/download/v3.3.14/etcd-v3.3.14-linux-arm64.tar.gz
2020-01-01 18:26:44.853894 W | pkg/flags: unrecognized environment variable ETCD_URL_amd64=https://github.com/etcd-io/etcd/releases/download/v3.3.14/etcd-v3.3.14-linux-amd64.tar.gz
2020-01-01 18:26:44.853909 W | pkg/flags: unrecognized environment variable ETCD_UNSUPPORTED_ARCH=amd64
2020-01-01 18:26:44.854144 W | pkg/flags: unrecognized environment variable ETCD_URL=ETCD_URL_amd64
2020-01-01 18:26:44.854221 I | etcdmain: etcd Version: 3.3.14
2020-01-01 18:26:44.854240 I | etcdmain: Git SHA: 5cf5d88a1
2020-01-01 18:26:44.854370 I | etcdmain: Go Version: go1.12.9
2020-01-01 18:26:44.854395 I | etcdmain: Go OS/Arch: linux/amd64
2020-01-01 18:26:44.854408 I | etcdmain: setting maximum number of CPUs to 4, total number of available CPUs is 4
2020-01-01 18:26:44.855853 N | etcdmain: the server is already initialized as member before, starting as etcd member...
2020-01-01 18:26:44.864264 I | embed: listening for peers on http://localhost:2380
2020-01-01 18:26:44.864598 I | embed: listening for client requests on localhost:2379
2020-01-01 18:26:45.055160 I | etcdserver: recovered store from snapshot at index 3900040
2020-01-01 18:26:45.060058 I | mvcc: restore compact to 3585117
2020-01-01 18:26:45.081355 I | etcdserver: name = default
2020-01-01 18:26:45.081557 I | etcdserver: data dir = management-state/etcd
2020-01-01 18:26:45.081699 I | etcdserver: member dir = management-state/etcd/member
2020-01-01 18:26:45.081858 I | etcdserver: heartbeat = 100ms
2020-01-01 18:26:45.081966 I | etcdserver: election = 1000ms
2020-01-01 18:26:45.082074 I | etcdserver: snapshot count = 100000
2020-01-01 18:26:45.082260 I | etcdserver: advertise client URLs = http://localhost:2379
2020-01-01 18:26:47.871864 I | etcdserver: restarting member 8e9e05c52164694d in cluster cdf818194e3a8c32 at commit index 3938636
2020-01-01 18:26:47.876943 I | raft: 8e9e05c52164694d became follower at term 21
2020-01-01 18:26:47.877015 I | raft: newRaft 8e9e05c52164694d [peers: [8e9e05c52164694d], term: 21, commit: 3938636, applied: 3900040, lastindex: 3938636, lastterm: 21]
2020-01-01 18:26:47.877288 I | etcdserver/api: enabled capabilities for version 3.3
2020-01-01 18:26:47.877324 I | etcdserver/membership: added member 8e9e05c52164694d [http://localhost:2380] to cluster cdf818194e3a8c32 from store
2020-01-01 18:26:47.877341 I | etcdserver/membership: set the cluster version to 3.3 from store
2020-01-01 18:26:47.879370 I | mvcc: restore compact to 3585117
2020-01-01 18:26:47.892236 W | auth: simple token is not cryptographically signed
2020-01-01 18:26:47.893254 I | etcdserver: starting server... [version: 3.3.14, cluster version: 3.3]
2020-01-01 18:26:47.896828 I | etcdserver: 8e9e05c52164694d as single-node; fast-forwarding 9 ticks (election ticks 10)
2020-01-01 18:26:48.677924 I | raft: 8e9e05c52164694d is starting a new election at term 21
2020-01-01 18:26:48.678087 I | raft: 8e9e05c52164694d became candidate at term 22
2020-01-01 18:26:48.678184 I | raft: 8e9e05c52164694d received MsgVoteResp from 8e9e05c52164694d at term 22
2020-01-01 18:26:48.678277 I | raft: 8e9e05c52164694d became leader at term 22
2020-01-01 18:26:48.678548 I | raft: raft.node: 8e9e05c52164694d elected leader 8e9e05c52164694d at term 22
2020-01-01 18:26:48.679953 I | etcdserver: published {Name:default ClientURLs:[http://localhost:2379]} to cluster cdf818194e3a8c32
2020-01-01 18:26:48.679978 I | embed: ready to serve client requests
2020-01-01 18:26:48.681047 N | embed: serving insecure client requests on 127.0.0.1:2379, this is strongly discouraged!
2020/01/01 18:26:48 [INFO] Waiting for k3s to start
2020/01/01 18:26:49 [INFO] Waiting for k3s to start
time="2020-01-01T18:26:50.089465960Z" level=info msg="Starting k3s v0.8.0 (f867995f)"
time="2020-01-01T18:26:50.134790689Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --api-audiences=unknown --authorization-mode=Node,RBAC --basic-auth-file=/var/lib/rancher/k3s/server/cred/passwd --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=http://localhost:2379 --insecure-port=0 --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=k3s --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
E0101 18:26:50.181507      32 prometheus.go:138] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
E0101 18:26:50.189653      32 prometheus.go:150] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
E0101 18:26:50.189874      32 prometheus.go:162] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
E0101 18:26:50.190102      32 prometheus.go:174] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
E0101 18:26:50.190276      32 prometheus.go:189] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
E0101 18:26:50.190476      32 prometheus.go:202] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0101 18:26:50.582728      32 genericapiserver.go:315] Skipping API batch/v2alpha1 because it has no resources.
W0101 18:26:50.606974      32 genericapiserver.go:315] Skipping API node.k8s.io/v1alpha1 because it has no resources.
E0101 18:26:50.679062      32 prometheus.go:138] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
E0101 18:26:50.679136      32 prometheus.go:150] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
E0101 18:26:50.679282      32 prometheus.go:162] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
E0101 18:26:50.679543      32 prometheus.go:174] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
E0101 18:26:50.679615      32 prometheus.go:189] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
E0101 18:26:50.679657      32 prometheus.go:202] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
2020/01/01 18:26:50 [INFO] Waiting for k3s to start
time="2020-01-01T18:26:50.714475271Z" level=info msg="Running kube-scheduler --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --port=10251 --secure-port=0"
time="2020-01-01T18:26:50.723684035Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --port=10252 --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true"
W0101 18:26:50.831891      32 authorization.go:47] Authorization is disabled
W0101 18:26:50.832118      32 authentication.go:55] Authentication is disabled
E0101 18:26:51.050098      32 controller.go:152] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
time="2020-01-01T18:26:51.066613597Z" level=info msg="Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-1.64.0.tgz"
time="2020-01-01T18:26:51.069465440Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/rolebindings.yaml"
E0101 18:26:51.070535      32 prometheus.go:138] failed to register depth metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_depth", help: "(Deprecated) Current depth of workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_depth" is not a valid metric name
E0101 18:26:51.074921      32 prometheus.go:150] failed to register adds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_adds", help: "(Deprecated) Total number of adds handled by workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_adds" is not a valid metric name
E0101 18:26:51.075144      32 prometheus.go:162] failed to register latency metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_queue_latency", help: "(Deprecated) How long an item stays in workqueuek3s.cattle.io/v1, Kind=Addon before being requested.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_queue_latency" is not a valid metric name
E0101 18:26:51.075334      32 prometheus.go:174] failed to register work_duration metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_work_duration", help: "(Deprecated) How long processing an item from workqueuek3s.cattle.io/v1, Kind=Addon takes.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_work_duration" is not a valid metric name
E0101 18:26:51.075519      32 prometheus.go:189] failed to register unfinished_work_seconds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_unfinished_work_seconds", help: "(Deprecated) How many seconds of work k3s.cattle.io/v1, Kind=Addon has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_unfinished_work_seconds" is not a valid metric name
E0101 18:26:51.075661      32 prometheus.go:202] failed to register longest_running_processor_microseconds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for k3s.cattle.io/v1, Kind=Addon been running.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_longest_running_processor_microseconds" is not a valid metric name
E0101 18:26:51.075856      32 prometheus.go:214] failed to register retries metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_retries", help: "(Deprecated) Total number of retries handled by workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_retries" is not a valid metric name
time="2020-01-01T18:26:51.090567275Z" level=info msg="Listening on :6443"
E0101 18:26:51.091006      32 prometheus.go:138] failed to register depth metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_depth", help: "(Deprecated) Current depth of workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_depth" is not a valid metric name
E0101 18:26:51.091257      32 prometheus.go:150] failed to register adds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_adds", help: "(Deprecated) Total number of adds handled by workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_adds" is not a valid metric name
E0101 18:26:51.091483      32 prometheus.go:162] failed to register latency metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_queue_latency", help: "(Deprecated) How long an item stays in workqueuek3s.cattle.io/v1, Kind=ListenerConfig before being requested.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_queue_latency" is not a valid metric name
E0101 18:26:51.091660      32 prometheus.go:174] failed to register work_duration metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_work_duration", help: "(Deprecated) How long processing an item from workqueuek3s.cattle.io/v1, Kind=ListenerConfig takes.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_work_duration" is not a valid metric name
E0101 18:26:51.091869      32 prometheus.go:189] failed to register unfinished_work_seconds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_unfinished_work_seconds", help: "(Deprecated) How many seconds of work k3s.cattle.io/v1, Kind=ListenerConfig has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_unfinished_work_seconds" is not a valid metric name
E0101 18:26:51.092026      32 prometheus.go:202] failed to register longest_running_processor_microseconds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for k3s.cattle.io/v1, Kind=ListenerConfig been running.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_longest_running_processor_microseconds" is not a valid metric name
E0101 18:26:51.092239      32 prometheus.go:214] failed to register retries metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_retries", help: "(Deprecated) Total number of retries handled by workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_retries" is not a valid metric name
time="2020-01-01T18:26:51.593529539Z" level=info msg="Starting k3s.cattle.io/v1, Kind=Addon controller"
2020/01/01 18:26:51 [INFO] Waiting for k3s to start
time="2020-01-01T18:26:51.693750997Z" level=info msg="Starting k3s.cattle.io/v1, Kind=ListenerConfig controller"
time="2020-01-01T18:26:51.694330594Z" level=info msg="Node token is available at /var/lib/rancher/k3s/server/node-token"
time="2020-01-01T18:26:51.694365973Z" level=info msg="To join node to cluster: k3s agent -s https://192.168.0.2:6443 -t ${NODE_TOKEN}"
E0101 18:26:51.708482      32 prometheus.go:138] failed to register depth metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_depth" is not a valid metric name
E0101 18:26:51.708562      32 prometheus.go:150] failed to register adds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_adds" is not a valid metric name
E0101 18:26:51.708787      32 prometheus.go:162] failed to register latency metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Node before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_queue_latency" is not a valid metric name
E0101 18:26:51.708960      32 prometheus.go:174] failed to register work_duration metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Node takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_work_duration" is not a valid metric name
E0101 18:26:51.709093      32 prometheus.go:189] failed to register unfinished_work_seconds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Node has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_unfinished_work_seconds" is not a valid metric name
E0101 18:26:51.709170      32 prometheus.go:202] failed to register longest_running_processor_microseconds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Node been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_longest_running_processor_microseconds" is not a valid metric name
E0101 18:26:51.709326      32 prometheus.go:214] failed to register retries metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_retries" is not a valid metric name
E0101 18:26:51.710418      32 prometheus.go:138] failed to register depth metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_depth", help: "(Deprecated) Current depth of workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_depth" is not a valid metric name
E0101 18:26:51.718242      32 prometheus.go:150] failed to register adds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_adds", help: "(Deprecated) Total number of adds handled by workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_adds" is not a valid metric name
E0101 18:26:51.720511      32 prometheus.go:162] failed to register latency metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_queue_latency", help: "(Deprecated) How long an item stays in workqueuebatch/v1, Kind=Job before being requested.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_queue_latency" is not a valid metric name
E0101 18:26:51.720597      32 prometheus.go:174] failed to register work_duration metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_work_duration", help: "(Deprecated) How long processing an item from workqueuebatch/v1, Kind=Job takes.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_work_duration" is not a valid metric name
E0101 18:26:51.720646      32 prometheus.go:189] failed to register unfinished_work_seconds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_unfinished_work_seconds", help: "(Deprecated) How many seconds of work batch/v1, Kind=Job has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_unfinished_work_seconds" is not a valid metric name
E0101 18:26:51.720690      32 prometheus.go:202] failed to register longest_running_processor_microseconds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for batch/v1, Kind=Job been running.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_longest_running_processor_microseconds" is not a valid metric name
E0101 18:26:51.720754      32 prometheus.go:214] failed to register retries metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_retries", help: "(Deprecated) Total number of retries handled by workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_retries" is not a valid metric name
E0101 18:26:51.720891      32 prometheus.go:138] failed to register depth metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_depth", help: "(Deprecated) Current depth of workqueue: helm.cattle.io/v1, Kind=HelmChart", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_depth" is not a valid metric name
E0101 18:26:51.720947      32 prometheus.go:150] failed to register adds metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_adds", help: "(Deprecated) Total number of adds handled by workqueue: helm.cattle.io/v1, Kind=HelmChart", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_adds" is not a valid metric name
E0101 18:26:51.721007      32 prometheus.go:162] failed to register latency metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_queue_latency", help: "(Deprecated) How long an item stays in workqueuehelm.cattle.io/v1, Kind=HelmChart before being requested.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_queue_latency" is not a valid metric name
E0101 18:26:51.721077      32 prometheus.go:174] failed to register work_duration metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_work_duration", help: "(Deprecated) How long processing an item from workqueuehelm.cattle.io/v1, Kind=HelmChart takes.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_work_duration" is not a valid metric name
E0101 18:26:51.721124      32 prometheus.go:189] failed to register unfinished_work_seconds metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_unfinished_work_seconds", help: "(Deprecated) How many seconds of work helm.cattle.io/v1, Kind=HelmChart has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_unfinished_work_seconds" is not a valid metric name
E0101 18:26:51.721177      32 prometheus.go:202] failed to register longest_running_processor_microseconds metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for helm.cattle.io/v1, Kind=HelmChart been running.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_longest_running_processor_microseconds" is not a valid metric name
E0101 18:26:51.721245      32 prometheus.go:214] failed to register retries metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_retries", help: "(Deprecated) Total number of retries handled by workqueue: helm.cattle.io/v1, Kind=HelmChart", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_retries" is not a valid metric name
E0101 18:26:51.721511      32 prometheus.go:138] failed to register depth metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Service", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_depth" is not a valid metric name
E0101 18:26:51.721554      32 prometheus.go:150] failed to register adds metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Service", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_adds" is not a valid metric name
E0101 18:26:51.721817      32 prometheus.go:162] failed to register latency metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Service before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_queue_latency" is not a valid metric name
E0101 18:26:51.721893      32 prometheus.go:174] failed to register work_duration metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Service takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_work_duration" is not a valid metric name
E0101 18:26:51.721944      32 prometheus.go:189] failed to register unfinished_work_seconds metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Service has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_unfinished_work_seconds" is not a valid metric name
E0101 18:26:51.722013      32 prometheus.go:202] failed to register longest_running_processor_microseconds metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Service been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_longest_running_processor_microseconds" is not a valid metric name
E0101 18:26:51.722078      32 prometheus.go:214] failed to register retries metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Service", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_retries" is not a valid metric name
E0101 18:26:51.722232      32 prometheus.go:138] failed to register depth metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Pod", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_depth" is not a valid metric name
E0101 18:26:51.722275      32 prometheus.go:150] failed to register adds metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Pod", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_adds" is not a valid metric name
E0101 18:26:51.722350      32 prometheus.go:162] failed to register latency metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Pod before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_queue_latency" is not a valid metric name
E0101 18:26:51.722442      32 prometheus.go:174] failed to register work_duration metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Pod takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_work_duration" is not a valid metric name
E0101 18:26:51.722528      32 prometheus.go:189] failed to register unfinished_work_seconds metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Pod has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_unfinished_work_seconds" is not a valid metric name
E0101 18:26:51.722575      32 prometheus.go:202] failed to register longest_running_processor_microseconds metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Pod been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_longest_running_processor_microseconds" is not a valid metric name
E0101 18:26:51.722681      32 prometheus.go:214] failed to register retries metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Pod", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_retries" is not a valid metric name
E0101 18:26:51.722831      32 prometheus.go:138] failed to register depth metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Endpoints", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_depth" is not a valid metric name
E0101 18:26:51.722877      32 prometheus.go:150] failed to register adds metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Endpoints", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_adds" is not a valid metric name
E0101 18:26:51.722942      32 prometheus.go:162] failed to register latency metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Endpoints before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_queue_latency" is not a valid metric name
E0101 18:26:51.723083      32 prometheus.go:174] failed to register work_duration metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Endpoints takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_work_duration" is not a valid metric name
E0101 18:26:51.723142      32 prometheus.go:189] failed to register unfinished_work_seconds metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Endpoints has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_unfinished_work_seconds" is not a valid metric name
E0101 18:26:51.723186      32 prometheus.go:202] failed to register longest_running_processor_microseconds metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Endpoints been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_longest_running_processor_microseconds" is not a valid metric name
E0101 18:26:51.723252      32 prometheus.go:214] failed to register retries metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Endpoints", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_retries" is not a valid metric name
time="2020-01-01T18:26:51.813394996Z" level=info msg="Wrote kubeconfig /etc/rancher/k3s/k3s.yaml"
time="2020-01-01T18:26:51.813430793Z" level=info msg="Run: k3s kubectl"
time="2020-01-01T18:26:51.813448087Z" level=info msg="k3s is up and running"
2020/01/01 18:26:52 [INFO] Waiting for k3s to start
time="2020-01-01T18:26:52.752024020Z" level=info msg="Starting helm.cattle.io/v1, Kind=HelmChart controller"
time="2020-01-01T18:26:53.265146197Z" level=info msg="Starting batch/v1, Kind=Job controller"
2020/01/01 18:26:53 [INFO] Waiting for k3s to start
time="2020-01-01T18:26:53.966518301Z" level=info msg="Starting /v1, Kind=Service controller"
time="2020-01-01T18:26:54.066820661Z" level=info msg="Starting /v1, Kind=Pod controller"
time="2020-01-01T18:26:54.167015607Z" level=info msg="Starting /v1, Kind=Endpoints controller"
time="2020-01-01T18:26:54.267279336Z" level=info msg="Starting /v1, Kind=Node controller"
2020/01/01 18:26:54 [INFO] Waiting for k3s to start
2020/01/01 18:26:55 [INFO] Waiting for k3s to start
2020/01/01 18:26:56 [INFO] Waiting for k3s to start
2020/01/01 18:26:57 [INFO] Waiting for k3s to start
2020/01/01 18:26:58 [INFO] Waiting for k3s to start
2020/01/01 18:26:59 [INFO] Waiting for k3s to start
2020/01/01 18:27:00 [INFO] Waiting for k3s to start
2020/01/01 18:27:01 [INFO] Waiting for k3s to start
2020/01/01 18:27:02 [INFO] Waiting for k3s to start
2020/01/01 18:27:03 [INFO] Waiting for k3s to start
2020/01/01 18:27:04 [INFO] Waiting for k3s to start
2020/01/01 18:27:05 [INFO] Waiting for k3s to start
2020/01/01 18:27:06 [INFO] Waiting for k3s to start
2020/01/01 18:27:07 [INFO] Waiting for k3s to start
2020/01/01 18:27:15 [INFO] Waiting for k3s to start
2020/01/01 18:27:16 [INFO] Waiting for k3s to start
W0101 18:27:17.110549      32 controllermanager.go:445] Skipping "root-ca-cert-publisher"
2020/01/01 18:27:17 [INFO] Waiting for k3s to start
2020/01/01 18:27:18 [INFO] Waiting for k3s to start
2020/01/01 18:27:19 [INFO] Waiting for k3s to start
2020/01/01 18:27:20 [INFO] Waiting for k3s to start
2020/01/01 18:27:21 [INFO] Waiting for k3s to start
2020/01/01 18:27:22 [INFO] Waiting for k3s to start
2020/01/01 18:27:23 [INFO] Waiting for k3s to start
2020/01/01 18:27:24 [INFO] Waiting for k3s to start
2020/01/01 18:27:25 [INFO] Waiting for k3s to start
2020/01/01 18:27:26 [INFO] Waiting for k3s to start
2020/01/01 18:27:27 [INFO] Waiting for k3s to start
E0101 18:27:28.313754      32 prometheus.go:138] failed to register depth metric certificate: duplicate metrics collector registration attempted
E0101 18:27:28.313812      32 prometheus.go:150] failed to register adds metric certificate: duplicate metrics collector registration attempted
E0101 18:27:28.314034      32 prometheus.go:162] failed to register latency metric certificate: duplicate metrics collector registration attempted
E0101 18:27:28.314189      32 prometheus.go:174] failed to register work_duration metric certificate: duplicate metrics collector registration attempted
E0101 18:27:28.314258      32 prometheus.go:189] failed to register unfinished_work_seconds metric certificate: duplicate metrics collector registration attempted
E0101 18:27:28.314304      32 prometheus.go:202] failed to register longest_running_processor_microseconds metric certificate: duplicate metrics collector registration attempted
E0101 18:27:28.314422      32 prometheus.go:214] failed to register retries metric certificate: duplicate metrics collector registration attempted
W0101 18:27:28.641740      32 shared_informer.go:312] resyncPeriod 57221933246104 is smaller than resyncCheckPeriod 63905656943583 and the informer has already started. Changing it to 63905656943583
E0101 18:27:28.643698      32 resource_quota_controller.go:171] initial monitor sync has error: [couldn't start monitor for resource "management.cattle.io/v3, Resource=clustercatalogs": unable to monitor quota for resource "management.cattle.io/v3, Resource=clustercatalogs", couldn't start monitor for resource "management.cattle.io/v3, Resource=projectloggings": unable to monitor quota for resource "management.cattle.io/v3, Resource=projectloggings", couldn't start monitor for resource "management.cattle.io/v3, Resource=monitormetrics": unable to monitor quota for resource "management.cattle.io/v3, Resource=monitormetrics", couldn't start monitor for resource "management.cattle.io/v3, Resource=clusterloggings": unable to monitor quota for resource "management.cattle.io/v3, Resource=clusterloggings", couldn't start monitor for resource "management.cattle.io/v3, Resource=projectalertgroups": unable to monitor quota for resource "management.cattle.io/v3, Resource=projectalertgroups", couldn't start monitor for resource "management.cattle.io/v3, Resource=multiclusterapps": unable to monitor quota for resource "management.cattle.io/v3, Resource=multiclusterapps", couldn't start monitor for resource "helm.cattle.io/v1, Resource=helmcharts": unable to monitor quota for resource "helm.cattle.io/v1, Resource=helmcharts", couldn't start monitor for resource "management.cattle.io/v3, Resource=clusterscans": unable to monitor quota for resource "management.cattle.io/v3, Resource=clusterscans", couldn't start monitor for resource "management.cattle.io/v3, Resource=etcdbackups": unable to monitor quota for resource "management.cattle.io/v3, Resource=etcdbackups", couldn't start monitor for resource "management.cattle.io/v3, Resource=nodes": unable to monitor quota for resource "management.cattle.io/v3, Resource=nodes", couldn't start monitor for resource "project.cattle.io/v3, Resource=apps": unable to monitor quota for resource "project.cattle.io/v3, Resource=apps", couldn't start monitor for resource "k3s.cattle.io/v1, Resource=listenerconfigs": unable to monitor quota for resource "k3s.cattle.io/v1, Resource=listenerconfigs", couldn't start monitor for resource "management.cattle.io/v3, Resource=clustertemplaterevisions": unable to monitor quota for resource "management.cattle.io/v3, Resource=clustertemplaterevisions", couldn't start monitor for resource "management.cattle.io/v3, Resource=rkeaddons": unable to monitor quota for resource "management.cattle.io/v3, Resource=rkeaddons", couldn't start monitor for resource "management.cattle.io/v3, Resource=catalogtemplates": unable to monitor quota for resource "management.cattle.io/v3, Resource=catalogtemplates", couldn't start monitor for resource "management.cattle.io/v3, Resource=notifiers": unable to monitor quota for resource "management.cattle.io/v3, Resource=notifiers", couldn't start monitor for resource "project.cattle.io/v3, Resource=apprevisions": unable to monitor quota for resource "project.cattle.io/v3, Resource=apprevisions", couldn't start monitor for resource "management.cattle.io/v3, Resource=clustermonitorgraphs": unable to monitor quota for resource "management.cattle.io/v3, Resource=clustermonitorgraphs", couldn't start monitor for resource "management.cattle.io/v3, Resource=clusteralertgroups": unable to monitor quota for resource "management.cattle.io/v3, Resource=clusteralertgroups", couldn't start monitor for resource "management.cattle.io/v3, Resource=globaldnsproviders": unable to monitor quota for resource 
"management.cattle.io/v3, Resource=globaldnsproviders", couldn't start monitor for resource "management.cattle.io/v3, Resource=preferences": unable to monitor quota for resource "management.cattle.io/v3, Resource=preferences", couldn't start monitor for resource "management.cattle.io/v3, Resource=nodepools": unable to monitor quota for resource "management.cattle.io/v3, Resource=nodepools", couldn't start monitor for resource "management.cattle.io/v3, Resource=clusteralerts": unable to monitor quota for resource "management.cattle.io/v3, Resource=clusteralerts", couldn't start monitor for resource "management.cattle.io/v3, Resource=clustertemplates": unable to monitor quota for resource "management.cattle.io/v3, Resource=clustertemplates", couldn't start monitor for resource "management.cattle.io/v3, Resource=projectnetworkpolicies": unable to monitor quota for resource "management.cattle.io/v3, Resource=projectnetworkpolicies", couldn't start monitor for resource "project.cattle.io/v3, Resource=pipelineexecutions": unable to monitor quota for resource "project.cattle.io/v3, Resource=pipelineexecutions", couldn't start monitor for resource "project.cattle.io/v3, Resource=sourcecoderepositories": unable to monitor quota for resource "project.cattle.io/v3, Resource=sourcecoderepositories", couldn't start monitor for resource "project.cattle.io/v3, Resource=sourcecodeproviderconfigs": unable to monitor quota for resource "project.cattle.io/v3, Resource=sourcecodeproviderconfigs", couldn't start monitor for resource "project.cattle.io/v3, Resource=pipelines": unable to monitor quota for resource "project.cattle.io/v3, Resource=pipelines", couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies", couldn't start monitor for resource "management.cattle.io/v3, Resource=podsecuritypolicytemplateprojectbindings": unable to monitor quota for resource "management.cattle.io/v3, Resource=podsecuritypolicytemplateprojectbindings", couldn't start monitor for resource "management.cattle.io/v3, Resource=rkek8ssystemimages": unable to monitor quota for resource "management.cattle.io/v3, Resource=rkek8ssystemimages", couldn't start monitor for resource "management.cattle.io/v3, Resource=nodetemplates": unable to monitor quota for resource "management.cattle.io/v3, Resource=nodetemplates", couldn't start monitor for resource "management.cattle.io/v3, Resource=clusterroletemplatebindings": unable to monitor quota for resource "management.cattle.io/v3, Resource=clusterroletemplatebindings", couldn't start monitor for resource "management.cattle.io/v3, Resource=catalogtemplateversions": unable to monitor quota for resource "management.cattle.io/v3, Resource=catalogtemplateversions", couldn't start monitor for resource "management.cattle.io/v3, Resource=globaldnses": unable to monitor quota for resource "management.cattle.io/v3, Resource=globaldnses", couldn't start monitor for resource "management.cattle.io/v3, Resource=projects": unable to monitor quota for resource "management.cattle.io/v3, Resource=projects", couldn't start monitor for resource "management.cattle.io/v3, Resource=projectroletemplatebindings": unable to monitor quota for resource "management.cattle.io/v3, Resource=projectroletemplatebindings", couldn't start monitor for resource "management.cattle.io/v3, Resource=projectalerts": unable to monitor quota for resource "management.cattle.io/v3, Resource=projectalerts", couldn't start 
monitor for resource "management.cattle.io/v3, Resource=multiclusterapprevisions": unable to monitor quota for resource "management.cattle.io/v3, Resource=multiclusterapprevisions", couldn't start monitor for resource "management.cattle.io/v3, Resource=projectmonitorgraphs": unable to monitor quota for resource "management.cattle.io/v3, Resource=projectmonitorgraphs", couldn't start monitor for resource "project.cattle.io/v3, Resource=pipelinesettings": unable to monitor quota for resource "project.cattle.io/v3, Resource=pipelinesettings", couldn't start monitor for resource "management.cattle.io/v3, Resource=clusterregistrationtokens": unable to monitor quota for resource "management.cattle.io/v3, Resource=clusterregistrationtokens", couldn't start monitor for resource "management.cattle.io/v3, Resource=projectalertrules": unable to monitor quota for resource "management.cattle.io/v3, Resource=projectalertrules", couldn't start monitor for resource "management.cattle.io/v3, Resource=rkek8sserviceoptions": unable to monitor quota for resource "management.cattle.io/v3, Resource=rkek8sserviceoptions", couldn't start monitor for resource "k3s.cattle.io/v1, Resource=addons": unable to monitor quota for resource "k3s.cattle.io/v1, Resource=addons", couldn't start monitor for resource "management.cattle.io/v3, Resource=projectcatalogs": unable to monitor quota for resource "management.cattle.io/v3, Resource=projectcatalogs", couldn't start monitor for resource "management.cattle.io/v3, Resource=clusteralertrules": unable to monitor quota for resource "management.cattle.io/v3, Resource=clusteralertrules", couldn't start monitor for resource "project.cattle.io/v3, Resource=sourcecodecredentials": unable to monitor quota for resource "project.cattle.io/v3, Resource=sourcecodecredentials"]
2020/01/01 18:27:28 [INFO] Waiting for k3s to start
2020/01/01 18:27:29 [INFO] Waiting for k3s to start
E0101 18:27:30.451379      32 resource_quota_controller.go:437] failed to sync resource monitors: [couldn't start monitor for resource "management.cattle.io/v3, Resource=projectroletemplatebindings": unable to monitor quota for resource "management.cattle.io/v3, Resource=projectroletemplatebindings", couldn't start monitor for resource "management.cattle.io/v3, Resource=projectmonitorgraphs": unable to monitor quota for resource "management.cattle.io/v3, Resource=projectmonitorgraphs", couldn't start monitor for resource "management.cattle.io/v3, Resource=preferences": unable to monitor quota for resource "management.cattle.io/v3, Resource=preferences", couldn't start monitor for resource "management.cattle.io/v3, Resource=rkek8sserviceoptions": unable to monitor quota for resource "management.cattle.io/v3, Resource=rkek8sserviceoptions", couldn't start monitor for resource "management.cattle.io/v3, Resource=clustercatalogs": unable to monitor quota for resource "management.cattle.io/v3, Resource=clustercatalogs", couldn't start monitor for resource "management.cattle.io/v3, Resource=etcdbackups": unable to monitor quota for resource "management.cattle.io/v3, Resource=etcdbackups", couldn't start monitor for resource "management.cattle.io/v3, Resource=clustertemplates": unable to monitor quota for resource "management.cattle.io/v3, Resource=clustertemplates", couldn't start monitor for resource "management.cattle.io/v3, Resource=clusterregistrationtokens": unable to monitor quota for resource "management.cattle.io/v3, Resource=clusterregistrationtokens", couldn't start monitor for resource "project.cattle.io/v3, Resource=apprevisions": unable to monitor quota for resource "project.cattle.io/v3, Resource=apprevisions", couldn't start monitor for resource "project.cattle.io/v3, Resource=sourcecodecredentials": unable to monitor quota for resource "project.cattle.io/v3, Resource=sourcecodecredentials", couldn't start monitor for resource "helm.cattle.io/v1, Resource=helmcharts": unable to monitor quota for resource "helm.cattle.io/v1, Resource=helmcharts", couldn't start monitor for resource "management.cattle.io/v3, Resource=podsecuritypolicytemplateprojectbindings": unable to monitor quota for resource "management.cattle.io/v3, Resource=podsecuritypolicytemplateprojectbindings", couldn't start monitor for resource "management.cattle.io/v3, Resource=clusteralerts": unable to monitor quota for resource "management.cattle.io/v3, Resource=clusteralerts", couldn't start monitor for resource "management.cattle.io/v3, Resource=rkek8ssystemimages": unable to monitor quota for resource "management.cattle.io/v3, Resource=rkek8ssystemimages", couldn't start monitor for resource "management.cattle.io/v3, Resource=projectloggings": unable to monitor quota for resource "management.cattle.io/v3, Resource=projectloggings", couldn't start monitor for resource "management.cattle.io/v3, Resource=nodetemplates": unable to monitor quota for resource "management.cattle.io/v3, Resource=nodetemplates", couldn't start monitor for resource "project.cattle.io/v3, Resource=pipelinesettings": unable to monitor quota for resource "project.cattle.io/v3, Resource=pipelinesettings", couldn't start monitor for resource "management.cattle.io/v3, Resource=catalogtemplateversions": unable to monitor quota for resource "management.cattle.io/v3, Resource=catalogtemplateversions", couldn't start monitor for resource "management.cattle.io/v3, Resource=clusteralertrules": unable to monitor quota for resource 
"management.cattle.io/v3, Resource=clusteralertrules", couldn't start monitor for resource "management.cattle.io/v3, Resource=globaldnses": unable to monitor quota for resource "management.cattle.io/v3, Resource=globaldnses", couldn't start monitor for resource "project.cattle.io/v3, Resource=sourcecodeproviderconfigs": unable to monitor quota for resource "project.cattle.io/v3, Resource=sourcecodeproviderconfigs", couldn't start monitor for resource "k3s.cattle.io/v1, Resource=addons": unable to monitor quota for resource "k3s.cattle.io/v1, Resource=addons", couldn't start monitor for resource "management.cattle.io/v3, Resource=catalogtemplates": unable to monitor quota for resource "management.cattle.io/v3, Resource=catalogtemplates", couldn't start monitor for resource "management.cattle.io/v3, Resource=notifiers": unable to monitor quota for resource "management.cattle.io/v3, Resource=notifiers", couldn't start monitor for resource "management.cattle.io/v3, Resource=nodepools": unable to monitor quota for resource "management.cattle.io/v3, Resource=nodepools", couldn't start monitor for resource "management.cattle.io/v3, Resource=projectnetworkpolicies": unable to monitor quota for resource "management.cattle.io/v3, Resource=projectnetworkpolicies", couldn't start monitor for resource "management.cattle.io/v3, Resource=projectalertgroups": unable to monitor quota for resource "management.cattle.io/v3, Resource=projectalertgroups", couldn't start monitor for resource "management.cattle.io/v3, Resource=clusterroletemplatebindings": unable to monitor quota for resource "management.cattle.io/v3, Resource=clusterroletemplatebindings", couldn't start monitor for resource "management.cattle.io/v3, Resource=multiclusterapprevisions": unable to monitor quota for resource "management.cattle.io/v3, Resource=multiclusterapprevisions", couldn't start monitor for resource "project.cattle.io/v3, Resource=pipelineexecutions": unable to monitor quota for resource "project.cattle.io/v3, Resource=pipelineexecutions", couldn't start monitor for resource "management.cattle.io/v3, Resource=monitormetrics": unable to monitor quota for resource "management.cattle.io/v3, Resource=monitormetrics", couldn't start monitor for resource "management.cattle.io/v3, Resource=globaldnsproviders": unable to monitor quota for resource "management.cattle.io/v3, Resource=globaldnsproviders", couldn't start monitor for resource "k3s.cattle.io/v1, Resource=listenerconfigs": unable to monitor quota for resource "k3s.cattle.io/v1, Resource=listenerconfigs", couldn't start monitor for resource "management.cattle.io/v3, Resource=clusterloggings": unable to monitor quota for resource "management.cattle.io/v3, Resource=clusterloggings", couldn't start monitor for resource "management.cattle.io/v3, Resource=nodes": unable to monitor quota for resource "management.cattle.io/v3, Resource=nodes", couldn't start monitor for resource "management.cattle.io/v3, Resource=projectalerts": unable to monitor quota for resource "management.cattle.io/v3, Resource=projectalerts", couldn't start monitor for resource "project.cattle.io/v3, Resource=sourcecoderepositories": unable to monitor quota for resource "project.cattle.io/v3, Resource=sourcecoderepositories", couldn't start monitor for resource "management.cattle.io/v3, Resource=multiclusterapps": unable to monitor quota for resource "management.cattle.io/v3, Resource=multiclusterapps", couldn't start monitor for resource "management.cattle.io/v3, Resource=clustertemplaterevisions": unable to 
monitor quota for resource "management.cattle.io/v3, Resource=clustertemplaterevisions", couldn't start monitor for resource "management.cattle.io/v3, Resource=projects": unable to monitor quota for resource "management.cattle.io/v3, Resource=projects", couldn't start monitor for resource "management.cattle.io/v3, Resource=clusterscans": unable to monitor quota for resource "management.cattle.io/v3, Resource=clusterscans", couldn't start monitor for resource "management.cattle.io/v3, Resource=clustermonitorgraphs": unable to monitor quota for resource "management.cattle.io/v3, Resource=clustermonitorgraphs", couldn't start monitor for resource "management.cattle.io/v3, Resource=clusteralertgroups": unable to monitor quota for resource "management.cattle.io/v3, Resource=clusteralertgroups", couldn't start monitor for resource "management.cattle.io/v3, Resource=rkeaddons": unable to monitor quota for resource "management.cattle.io/v3, Resource=rkeaddons", couldn't start monitor for resource "management.cattle.io/v3, Resource=projectalertrules": unable to monitor quota for resource "management.cattle.io/v3, Resource=projectalertrules", couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies", couldn't start monitor for resource "management.cattle.io/v3, Resource=projectcatalogs": unable to monitor quota for resource "management.cattle.io/v3, Resource=projectcatalogs", couldn't start monitor for resource "project.cattle.io/v3, Resource=pipelines": unable to monitor quota for resource "project.cattle.io/v3, Resource=pipelines", couldn't start monitor for resource "project.cattle.io/v3, Resource=apps": unable to monitor quota for resource "project.cattle.io/v3, Resource=apps"]
2020/01/01 18:27:30 [INFO] Waiting for k3s to start
2020/01/01 18:27:31 [INFO] Waiting for k3s to start
2020/01/01 18:27:32 [INFO] Waiting for k3s to start
2020/01/01 18:27:33 [INFO] Waiting for k3s to start
2020/01/01 18:27:34 [INFO] Waiting for k3s to start
2020/01/01 18:27:35 [INFO] Waiting for k3s to start
2020/01/01 18:27:36 [INFO] Waiting for k3s to start
2020-01-01 18:27:37.185967 W | etcdserver: read-only range request "key:\"/registry/configmaps/kube-system/k3s\" " with result "range_response_count:1 size:475" took too long (127.913141ms) to execute
2020-01-01 18:27:37.186255 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/kube-scheduler\" " with result "range_response_count:1 size:543" took too long (301.061494ms) to execute
2020/01/01 18:27:37 [INFO] Waiting for k3s to start
2020/01/01 18:27:38 [INFO] Waiting for k3s to start
2020/01/01 18:27:39 [INFO] Waiting for k3s to start
2020/01/01 18:27:40 [INFO] Waiting for k3s to start
2020/01/01 18:27:41 [INFO] Waiting for k3s to start
2020/01/01 18:27:42 [INFO] Waiting for k3s to start
2020/01/01 18:27:43 [INFO] Waiting for k3s to start

Solved. I have no idea what the actual root cause was here, but since even restoring from a known-good snapshot of the VM didn't help, I think something in the VM's environment must have changed.

The actual resolution was following the single-container backup and restore procedure described here: https://blog.kubernauts.io/enterprise-grade-rancher-deployment-guide-ubuntu-fd261e00994c

Steps pasted below for posterity. Run these on the node hosting the Rancher server Docker container.

# Stop the running Rancher server container
$ docker stop $RANCHER_CONTAINER_NAME
# Create a data container that shares the Rancher server's volumes
$ docker create --volumes-from $RANCHER_CONTAINER_NAME --name rancher-data rancher/rancher:$RANCHER_CONTAINER_TAG
# Archive /var/lib/rancher from the data container into the current directory
$ docker run --volumes-from rancher-data -v $PWD:/backup alpine tar zcvf /backup/rancher-data-backup-$RANCHER_VERSION-$DATE.tar.gz /var/lib/rancher
# Pull the image and start a fresh Rancher server container on top of the same volumes
$ docker pull rancher/rancher:latest
$ docker run -d --volumes-from rancher-data --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher:latest
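The $RANCHER_* variables above are placeholders from the guide; as a rough sketch (values here are assumptions, adjust to your own setup), they could be set beforehand like this:

$ RANCHER_CONTAINER_NAME=$(docker ps --filter ancestor=rancher/rancher --format '{{.Names}}' | head -n 1)
$ RANCHER_CONTAINER_TAG=v2.3.3
$ RANCHER_VERSION=v2.3.3
$ DATE=$(date +%Y-%m-%d)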

And again for posterity, if you need to restore:

# Stop the Rancher server container before overwriting its data
$ docker stop $RANCHER_CONTAINER_NAME
# Wipe /var/lib/rancher and unpack the backup archive in its place
$ docker run --volumes-from $RANCHER_CONTAINER_NAME -v $PWD:/backup alpine sh -c "rm /var/lib/rancher/* -rf && tar zxvf /$BACKUP_PATH/$RANCHER_BACKUP_NAME.tar.gz"
# Start the Rancher server container again
$ docker start $RANCHER_CONTAINER_NAME
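After it starts back up, something like this can confirm the container is healthy and the UI is answering again (the hostname and ports are assumptions based on the standard 80/443 mapping above):

$ docker logs --tail 50 $RANCHER_CONTAINER_NAME
$ curl -k -I https://localhost/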

Hi, I have the same problem. Restoring Rancher from a backup doesn't work for me; only a full restore of the whole cluster does.
I also see the repeated [INFO] Waiting for k3s to start message, and port 443 doesn't respond… maybe you have some ideas? Sorry for my English.