Rancher on a single node keeps restarting

Hi guys, I’m really new to Kubernetes and Rancher, and I wanted to set up a test environment on a VPS I got. For this I wanted to install the standalone Docker Rancher image, but I’m having a lot of problems with it.

The VPS:

  • OS: CentOS 7 (kernel 3.10.0-1160.11.1.el7, per the logs below)
  • CPU: 4 cores
  • RAM: 6 GB

What I did:

  • Installed Docker using the install command Rancher recommends, just to be safe (rough commands below, after the docker run line)
  • Switched Docker from its default storage driver to overlay
  • Enabled ntpd instead of chronyd
  • Started the Rancher server with:

docker run -d --restart=unless-stopped -p 8080:80 -p 8443:443 --privileged rancher/rancher:latest
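
For reference, these are roughly the commands behind the first three steps; the Docker version in the install-script URL is the one I remember picking, so treat this as a sketch rather than an exact transcript:

# Docker via Rancher's install script (version in the URL may differ for you)
curl https://releases.rancher.com/install-docker/19.03.sh | sh

# /etc/docker/daemon.json -- switch the storage driver, then restart Docker
{
  "storage-driver": "overlay"
}
systemctl restart docker
docker info | grep -i "storage driver"   # confirm the change took effect

# replace chronyd with ntpd
systemctl stop chronyd
systemctl disable chronyd
yum install -y ntp
systemctl enable ntpd
systemctl start ntpd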

The problem:

The Rancher server container keeps restarting and produces the following logs:
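
For what it's worth, this is how I'm watching the restart loop; <container-id> stands in for the actual ID shown by docker ps -a:

# the status flips between "Up" and "Restarting"
docker ps -a

# the restart count keeps climbing
docker inspect --format '{{.RestartCount}}' <container-id>

# the log dump below comes from
docker logs <container-id>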

2021/01/17 09:27:23 [INFO] Rancher version v2.5.5 (4bad70073) is starting 2021/01/17 09:27:23 [INFO] Rancher arguments {ACMEDomains:[] AddLocal:true Embedded:false BindHost: HTTPListenPort:80 HTTPSListenPort:443 K8sMode:auto Debug:false Trace:false NoCACerts:false AuditLogPath:/var/log/auditlog/rancher-api-audit.log AuditLogMaxage:10 AuditLogMaxsize:100 AuditLogMaxbackup:10 AuditLevel:0 Agent:false Features:} 2021/01/17 09:27:23 [INFO] Listening on /tmp/log.sock 2021/01/17 09:27:23 [INFO] Running etcd --data-dir=management-state/etcd --heartbeat-interval=500 --election-timeout=5000 2021-01-17 09:27:23.645253 W | pkg/flags: unrecognized environment variable ETCD_URL_arm64=https://github.com/etcd-io/etcd/releases/download/v3.4.3/etcd-v3.4.3-linux-arm64.tar.gz 2021-01-17 09:27:23.645330 W | pkg/flags: unrecognized environment variable ETCD_URL_amd64=https://github.com/etcd-io/etcd/releases/download/v3.4.3/etcd-v3.4.3-linux-amd64.tar.gz 2021-01-17 09:27:23.645336 W | pkg/flags: unrecognized environment variable ETCD_UNSUPPORTED_ARCH=amd64 2021-01-17 09:27:23.645340 W | pkg/flags: unrecognized environment variable ETCD_URL=ETCD_URL_amd64 [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead 2021-01-17 09:27:23.645373 I | etcdmain: etcd Version: 3.4.3 2021-01-17 09:27:23.645379 I | etcdmain: Git SHA: 3cf2f69b5 2021-01-17 09:27:23.645382 I | etcdmain: Go Version: go1.12.12 2021-01-17 09:27:23.645385 I | etcdmain: Go OS/Arch: linux/amd64 2021-01-17 09:27:23.645389 I | etcdmain: setting maximum number of CPUs to 4, total number of available CPUs is 4 [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead 2021-01-17 09:27:23.646086 I | embed: name = default 2021-01-17 09:27:23.646095 I | embed: data dir = management-state/etcd 2021-01-17 09:27:23.646099 I | embed: member dir = management-state/etcd/member 2021-01-17 09:27:23.646103 I | embed: heartbeat = 500ms 2021-01-17 09:27:23.646106 I | embed: election = 5000ms 2021-01-17 09:27:23.646110 I | embed: snapshot count = 100000 2021-01-17 09:27:23.646120 I | embed: advertise client URLs = http://localhost:2379 2021-01-17 09:27:23.668832 I | etcdserver: starting member 8e9e05c52164694d in cluster cdf818194e3a8c32 raft2021/01/17 09:27:23 INFO: 8e9e05c52164694d switched to configuration voters=() raft2021/01/17 09:27:23 INFO: 8e9e05c52164694d became follower at term 0 raft2021/01/17 09:27:23 INFO: newRaft 8e9e05c52164694d [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0] raft2021/01/17 09:27:23 INFO: 8e9e05c52164694d became follower at term 1 raft2021/01/17 09:27:23 INFO: 8e9e05c52164694d switched to configuration voters=(10276657743932975437) 2021-01-17 09:27:23.679024 W | auth: simple token is not cryptographically signed 2021-01-17 09:27:23.684201 I | etcdserver: starting server... 
[version: 3.4.3, cluster version: to_be_decided] 2021-01-17 09:27:23.684390 I | etcdserver: 8e9e05c52164694d as single-node; fast-forwarding 9 ticks (election ticks 10) raft2021/01/17 09:27:23 INFO: 8e9e05c52164694d switched to configuration voters=(10276657743932975437) 2021-01-17 09:27:23.685065 I | etcdserver/membership: added member 8e9e05c52164694d [http://localhost:2380] to cluster cdf818194e3a8c32 2021-01-17 09:27:23.686065 I | embed: listening for peers on 127.0.0.1:2380 raft2021/01/17 09:27:28 INFO: 8e9e05c52164694d is starting a new election at term 1 raft2021/01/17 09:27:28 INFO: 8e9e05c52164694d became candidate at term 2 raft2021/01/17 09:27:28 INFO: 8e9e05c52164694d received MsgVoteResp from 8e9e05c52164694d at term 2 raft2021/01/17 09:27:28 INFO: 8e9e05c52164694d became leader at term 2 raft2021/01/17 09:27:28 INFO: raft.node: 8e9e05c52164694d elected leader 8e9e05c52164694d at term 2 2021-01-17 09:27:28.170811 I | etcdserver: published {Name:default ClientURLs:[http://localhost:2379]} to cluster cdf818194e3a8c32 2021-01-17 09:27:28.170863 I | embed: ready to serve client requests 2021-01-17 09:27:28.170926 I | etcdserver: setting up the initial cluster version to 3.4 2021-01-17 09:27:28.171949 N | embed: serving insecure client requests on 127.0.0.1:2379, this is strongly discouraged! 2021-01-17 09:27:28.172904 N | etcdserver/membership: set the initial cluster version to 3.4 2021-01-17 09:27:28.173251 I | etcdserver/api: enabled capabilities for version 3.4 2021/01/17 09:27:28 [INFO] Waiting for k3s to start time="2021-01-17T09:27:28Z" level=info msg="Preparing data dir /var/lib/rancher/k3s/data/19bb6a9b46ad0013de084cf1e0feb7927ff9e4e06624685ff87f003c208fded1" 2021/01/17 09:27:29 [INFO] Waiting for k3s to start 2021/01/17 09:27:30 [INFO] Waiting for k3s to start time="2021-01-17T09:27:30.596048184Z" level=info msg="Starting k3s v1.18.8+k3s1 (6b595318)" time="2021-01-17T09:27:30.807486594Z" level=info msg="Active TLS secret (ver=) (count 7): map[listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-172.17.0.2:172.17.0.2 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/hash:1042dae54ecd440e586d608fc064d597df082ce324a7538cbdf3d291a8a6e360]" time="2021-01-17T09:27:30.815895279Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=unknown --authorization-mode=Node,RBAC --basic-auth-file=/var/lib/rancher/k3s/server/cred/passwd --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=http://localhost:2379 --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- 
--requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=k3s --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" Flag --basic-auth-file has been deprecated, Basic authentication mode is deprecated and will be removed in a future release. It is not recommended for production environments. I0117 09:27:30.816562 32 server.go:645] external host was not specified, using 172.17.0.2 I0117 09:27:30.816943 32 server.go:162] Version: v1.18.8+k3s1 2021/01/17 09:27:31 [INFO] Waiting for k3s to start 2021/01/17 09:27:32 [INFO] Waiting for k3s to start I0117 09:27:32.205525 32 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook. I0117 09:27:32.205548 32 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota. I0117 09:27:32.206584 32 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook. I0117 09:27:32.206615 32 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota. I0117 09:27:32.228674 32 master.go:270] Using reconciler: lease I0117 09:27:32.260094 32 rest.go:113] the default service ipfamily for this cluster is: IPv4 W0117 09:27:32.549191 32 genericapiserver.go:409] Skipping API batch/v2alpha1 because it has no resources. W0117 09:27:32.559009 32 genericapiserver.go:409] Skipping API discovery.k8s.io/v1alpha1 because it has no resources. W0117 09:27:32.570631 32 genericapiserver.go:409] Skipping API node.k8s.io/v1alpha1 because it has no resources. W0117 09:27:32.605964 32 genericapiserver.go:409] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources. W0117 09:27:32.610429 32 genericapiserver.go:409] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources. W0117 09:27:32.626424 32 genericapiserver.go:409] Skipping API storage.k8s.io/v1alpha1 because it has no resources. W0117 09:27:32.645985 32 genericapiserver.go:409] Skipping API apps/v1beta2 because it has no resources. W0117 09:27:32.646027 32 genericapiserver.go:409] Skipping API apps/v1beta1 because it has no resources. 
I0117 09:27:32.655846 32 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook. I0117 09:27:32.655884 32 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota. 2021/01/17 09:27:33 [INFO] Waiting for k3s to start 2021/01/17 09:27:34 [INFO] Waiting for k3s to start I0117 09:27:34.621029 32 dynamic_cafile_content.go:167] Starting request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt I0117 09:27:34.621041 32 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt I0117 09:27:34.621354 32 dynamic_serving_content.go:130] Starting serving-cert::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key I0117 09:27:34.621808 32 secure_serving.go:178] Serving securely on 127.0.0.1:6444 I0117 09:27:34.622808 32 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller I0117 09:27:34.622829 32 shared_informer.go:223] Waiting for caches to sync for cluster_authentication_trust_controller I0117 09:27:34.624075 32 tlsconfig.go:240] Starting DynamicServingCertificateController I0117 09:27:34.624660 32 controller.go:81] Starting OpenAPI AggregationController I0117 09:27:34.624819 32 crd_finalizer.go:266] Starting CRDFinalizer I0117 09:27:34.625239 32 apiservice_controller.go:94] Starting APIServiceRegistrationController I0117 09:27:34.625252 32 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller I0117 09:27:34.625303 32 available_controller.go:387] Starting AvailableConditionController I0117 09:27:34.625313 32 cache.go:32] Waiting for caches to sync for AvailableConditionController controller I0117 09:27:34.625359 32 autoregister_controller.go:141] Starting autoregister controller I0117 09:27:34.625368 32 cache.go:32] Waiting for caches to sync for autoregister controller I0117 09:27:34.625642 32 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt I0117 09:27:34.625674 32 dynamic_cafile_content.go:167] Starting request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt I0117 09:27:34.658656 32 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController I0117 09:27:34.658682 32 customresource_discovery_controller.go:209] Starting DiscoveryController I0117 09:27:34.658700 32 naming_controller.go:291] Starting NamingConditionController I0117 09:27:34.658718 32 establishing_controller.go:76] Starting EstablishingController I0117 09:27:34.658735 32 nonstructuralschema_controller.go:186] Starting NonStructuralSchemaConditionController I0117 09:27:34.658902 32 crdregistration_controller.go:111] Starting crd-autoregister controller I0117 09:27:34.658908 32 shared_informer.go:223] Waiting for caches to sync for crd-autoregister I0117 09:27:34.658914 32 shared_informer.go:230] Caches are synced for crd-autoregister I0117 09:27:34.658913 32 controller.go:86] Starting OpenAPI controller E0117 09:27:34.665405 32 
controller.go:151] Unable to perform initial Kubernetes service initialization: Service "kubernetes" is invalid: spec.clusterIP: Invalid value: "10.43.0.1": cannot allocate resources of type serviceipallocations at this time E0117 09:27:34.665846 32 controller.go:156] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/172.17.0.2, ResourceVersion: 0, AdditionalErrorMsg: I0117 09:27:34.723045 32 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller I0117 09:27:34.725365 32 cache.go:39] Caches are synced for APIServiceRegistrationController controller I0117 09:27:34.725489 32 cache.go:39] Caches are synced for AvailableConditionController controller I0117 09:27:34.725490 32 cache.go:39] Caches are synced for autoregister controller 2021/01/17 09:27:35 [INFO] Waiting for k3s to start I0117 09:27:35.621118 32 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue). I0117 09:27:35.621463 32 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue). I0117 09:27:35.627098 32 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000 I0117 09:27:35.631130 32 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000 I0117 09:27:35.631160 32 storage_scheduling.go:143] all system priority classes are created successfully or already exist. I0117 09:27:36.025278 32 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io I0117 09:27:36.057503 32 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io 2021/01/17 09:27:36 [INFO] Waiting for k3s to start W0117 09:27:36.188019 32 lease.go:224] Resetting endpoints for master service "kubernetes" to [172.17.0.2] I0117 09:27:36.188906 32 controller.go:606] quota admission added evaluator for: endpoints I0117 09:27:36.192031 32 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io I0117 09:27:36.656200 32 registry.go:150] Registering EvenPodsSpread predicate and priority function I0117 09:27:36.656243 32 registry.go:150] Registering EvenPodsSpread predicate and priority function time="2021-01-17T09:27:36.657023464Z" level=info msg="Running kube-scheduler --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --port=10251 --secure-port=0" time="2021-01-17T09:27:36.659094160Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --port=10252 --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true" time="2021-01-17T09:27:36.664133174Z" level=info msg="Waiting for cloudcontroller rbac role to be created" I0117 09:27:36.665258 32 controllermanager.go:161] Version: v1.18.8+k3s1 I0117 09:27:36.665915 32 deprecated_insecure_serving.go:53] Serving insecurely on [::]:10252 I0117 09:27:36.665968 32 leaderelection.go:242] attempting to acquire leader lease kube-system/kube-controller-manager... 
time="2021-01-17T09:27:36.667206169Z" level=info msg="Creating CRD addons.k3s.cattle.io" I0117 09:27:36.668491 32 registry.go:150] Registering EvenPodsSpread predicate and priority function I0117 09:27:36.668520 32 registry.go:150] Registering EvenPodsSpread predicate and priority function W0117 09:27:36.670306 32 authorization.go:47] Authorization is disabled W0117 09:27:36.670320 32 authentication.go:40] Authentication is disabled I0117 09:27:36.670329 32 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251 time="2021-01-17T09:27:36.672900016Z" level=info msg="Creating CRD helmcharts.helm.cattle.io" time="2021-01-17T09:27:36.689229480Z" level=info msg="Waiting for CRD helmcharts.helm.cattle.io to become available" I0117 09:27:36.690916 32 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io I0117 09:27:36.697825 32 leaderelection.go:252] successfully acquired lease kube-system/kube-controller-manager I0117 09:27:36.698365 32 event.go:278] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"kube-controller-manager", UID:"a711cc37-ac7f-4a40-a720-90839815e881", APIVersion:"v1", ResourceVersion:"160", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' 7b1c39d5cc73_5ba0bbbd-cbf0-4140-9d1e-847df811f371 became leader I0117 09:27:36.698405 32 event.go:278] Event(v1.ObjectReference{Kind:"Lease", Namespace:"kube-system", Name:"kube-controller-manager", UID:"6f1b185e-e08e-4f7d-97a8-9f5b9d228f9e", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"163", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' 7b1c39d5cc73_5ba0bbbd-cbf0-4140-9d1e-847df811f371 became leader I0117 09:27:36.770575 32 leaderelection.go:242] attempting to acquire leader lease kube-system/kube-scheduler... I0117 09:27:36.777892 32 leaderelection.go:252] successfully acquired lease kube-system/kube-scheduler I0117 09:27:37.063538 32 plugins.go:100] No cloud provider specified. 
I0117 09:27:37.066491 32 shared_informer.go:223] Waiting for caches to sync for tokens I0117 09:27:37.071520 32 controller.go:606] quota admission added evaluator for: serviceaccounts I0117 09:27:37.073600 32 controllermanager.go:533] Started "job" I0117 09:27:37.073738 32 job_controller.go:144] Starting job controller I0117 09:27:37.073747 32 shared_informer.go:223] Waiting for caches to sync for job I0117 09:27:37.080862 32 controllermanager.go:533] Started "cronjob" I0117 09:27:37.081005 32 cronjob_controller.go:97] Starting CronJob Manager I0117 09:27:37.087138 32 controllermanager.go:533] Started "pv-protection" W0117 09:27:37.087158 32 controllermanager.go:525] Skipping "root-ca-cert-publisher" I0117 09:27:37.087175 32 pv_protection_controller.go:83] Starting PV protection controller I0117 09:27:37.087190 32 shared_informer.go:223] Waiting for caches to sync for PV protection I0117 09:27:37.097545 32 controllermanager.go:533] Started "deployment" I0117 09:27:37.097680 32 deployment_controller.go:153] Starting deployment controller I0117 09:27:37.097689 32 shared_informer.go:223] Waiting for caches to sync for deployment I0117 09:27:37.105763 32 controllermanager.go:533] Started "statefulset" I0117 09:27:37.105792 32 stateful_set.go:146] Starting stateful set controller I0117 09:27:37.105807 32 shared_informer.go:223] Waiting for caches to sync for stateful set I0117 09:27:37.111574 32 controllermanager.go:533] Started "csrcleaner" I0117 09:27:37.111732 32 cleaner.go:82] Starting CSR cleaner controller E0117 09:27:37.118792 32 core.go:89] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail W0117 09:27:37.118820 32 controllermanager.go:525] Skipping "service" W0117 09:27:37.118840 32 core.go:243] configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes. W0117 09:27:37.118847 32 controllermanager.go:525] Skipping "route" I0117 09:27:37.166721 32 shared_informer.go:230] Caches are synced for tokens 2021/01/17 09:27:37 [INFO] Waiting for k3s to start time="2021-01-17T09:27:37.196661558Z" level=info msg="Done waiting for CRD helmcharts.helm.cattle.io to become available" time="2021-01-17T09:27:37.213533036Z" level=info msg="Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-1.81.0.tgz" time="2021-01-17T09:27:37.214015524Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/rolebindings.yaml" time="2021-01-17T09:27:37.214223744Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml" time="2021-01-17T09:27:37.214378953Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/ccm.yaml" time="2021-01-17T09:27:37.315194737Z" level=info msg="Starting k3s.cattle.io/v1, Kind=Addon controller" time="2021-01-17T09:27:37.315396310Z" level=info msg="Waiting for master node startup: resource name may not be empty" I0117 09:27:37.315494 32 leaderelection.go:242] attempting to acquire leader lease kube-system/k3s... 
time="2021-01-17T09:27:37.315890110Z" level=info msg="Node token is available at /var/lib/rancher/k3s/server/token" time="2021-01-17T09:27:37.315981570Z" level=info msg="To join node to cluster: k3s agent -s https://172.17.0.2:6443 -t ${NODE_TOKEN}" I0117 09:27:37.324492 32 leaderelection.go:252] successfully acquired lease kube-system/k3s 2021-01-17 09:27:37.328058 I | http: TLS handshake error from 127.0.0.1:49800: remote error: tls: bad certificate time="2021-01-17T09:27:37.336551250Z" level=info msg="Wrote kubeconfig /etc/rancher/k3s/k3s.yaml" time="2021-01-17T09:27:37.336576607Z" level=info msg="Run: k3s kubectl" time="2021-01-17T09:27:37.336584282Z" level=info msg="k3s is up and running" time="2021-01-17T09:27:37.336696737Z" level=info msg="module overlay was already loaded" time="2021-01-17T09:27:37.336716042Z" level=info msg="module nf_conntrack was already loaded" time="2021-01-17T09:27:37.336731814Z" level=info msg="module br_netfilter was already loaded" time="2021-01-17T09:27:37.336962715Z" level=warning msg="failed to write value 1 at /proc/sys/net/bridge/bridge-nf-call-iptables: open /proc/sys/net/bridge/bridge-nf-call-iptables: no such file or directory" time="2021-01-17T09:27:37.336999933Z" level=warning msg="failed to write value 1 at /proc/sys/net/bridge/bridge-nf-call-ip6tables: open /proc/sys/net/bridge/bridge-nf-call-ip6tables: no such file or directory" 2021-01-17 09:27:37.338401 I | http: TLS handshake error from 127.0.0.1:49808: remote error: tls: bad certificate 2021-01-17 09:27:37.346903 I | http: TLS handshake error from 127.0.0.1:49814: remote error: tls: bad certificate I0117 09:27:37.361677 32 controller.go:606] quota admission added evaluator for: addons.k3s.cattle.io time="2021-01-17T09:27:37.377569888Z" level=info msg="Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log" time="2021-01-17T09:27:37.377783158Z" level=info msg="Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd" time="2021-01-17T09:27:37.378145884Z" level=info msg="Waiting for containerd startup: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: no such file or directory\"" I0117 09:27:37.420552 32 controller.go:606] quota admission added evaluator for: deployments.apps time="2021-01-17T09:27:37.426333030Z" level=info msg="Starting helm.cattle.io/v1, Kind=HelmChart controller" time="2021-01-17T09:27:37.426410896Z" level=info msg="Starting batch/v1, Kind=Job controller" time="2021-01-17T09:27:37.426490335Z" level=info msg="Starting /v1, Kind=Node controller" time="2021-01-17T09:27:37.426544133Z" level=info msg="Starting /v1, Kind=Service controller" time="2021-01-17T09:27:37.426635017Z" level=info msg="Starting /v1, Kind=Pod controller" time="2021-01-17T09:27:37.426698078Z" level=info msg="Starting /v1, Kind=Endpoints controller" time="2021-01-17T09:27:37.670276301Z" level=info msg="Running cloud-controller-manager --allocate-node-cidrs=true --allow-untagged-cloud=true --bind-address=127.0.0.1 --cloud-provider=k3s --cluster-cidr=10.42.0.0/16 --kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --node-status-update-frequency=1m --secure-port=0" Flag --allow-untagged-cloud has been deprecated, This flag is deprecated and will be removed in a future release. A cluster-id will be required on cloud instances. 
I0117 09:27:37.677207 32 controllermanager.go:120] Version: v1.18.8+k3s1 W0117 09:27:37.677240 32 controllermanager.go:132] detected a cluster without a ClusterID. A ClusterID will be required in the future. Please tag your cluster to avoid any future issues I0117 09:27:37.677306 32 leaderelection.go:242] attempting to acquire leader lease kube-system/cloud-controller-manager... I0117 09:27:37.686973 32 leaderelection.go:252] successfully acquired lease kube-system/cloud-controller-manager I0117 09:27:37.687025 32 event.go:278] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"cloud-controller-manager", UID:"3b30aa4c-df4b-4906-8a2b-077576955043", APIVersion:"v1", ResourceVersion:"215", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' 7b1c39d5cc73_23ece637-8786-4878-9020-7a33aa838e47 became leader I0117 09:27:37.687065 32 event.go:278] Event(v1.ObjectReference{Kind:"Lease", Namespace:"kube-system", Name:"cloud-controller-manager", UID:"63b70b30-b022-4aef-ae29-4a9721a08bc9", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"216", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' 7b1c39d5cc73_23ece637-8786-4878-9020-7a33aa838e47 became leader I0117 09:27:37.692824 32 node_controller.go:110] Sending events to api server. I0117 09:27:37.692876 32 controllermanager.go:247] Started "cloud-node" I0117 09:27:37.694571 32 node_lifecycle_controller.go:78] Sending events to api server I0117 09:27:37.694595 32 controllermanager.go:247] Started "cloud-node-lifecycle" E0117 09:27:37.696492 32 core.go:90] Failed to start service controller: the cloud provider does not support external load balancers W0117 09:27:37.696507 32 controllermanager.go:244] Skipping "service" W0117 09:27:37.696516 32 core.go:108] configure-cloud-routes is set, but cloud provider does not support routes. Will not configure cloud provider routes. 
W0117 09:27:37.696522 32 controllermanager.go:244] Skipping "route" I0117 09:27:37.836336 32 garbagecollector.go:133] Starting garbage collector controller I0117 09:27:37.836361 32 shared_informer.go:223] Waiting for caches to sync for garbage collector I0117 09:27:37.836401 32 graph_builder.go:282] GraphBuilder running I0117 09:27:37.836341 32 controllermanager.go:533] Started "garbagecollector" I0117 09:27:37.846218 32 controllermanager.go:533] Started "replicaset" I0117 09:27:37.846340 32 replica_set.go:181] Starting replicaset controller I0117 09:27:37.846349 32 shared_informer.go:223] Waiting for caches to sync for ReplicaSet I0117 09:27:37.865402 32 node_lifecycle_controller.go:78] Sending events to api server E0117 09:27:37.865462 32 core.go:229] failed to start cloud node lifecycle controller: no cloud provider provided W0117 09:27:37.865492 32 controllermanager.go:525] Skipping "cloud-node-lifecycle" W0117 09:27:37.865502 32 controllermanager.go:525] Skipping "ttl-after-finished" I0117 09:27:37.873071 32 controllermanager.go:533] Started "pvc-protection" I0117 09:27:37.873190 32 pvc_protection_controller.go:101] Starting PVC protection controller I0117 09:27:37.873199 32 shared_informer.go:223] Waiting for caches to sync for PVC protection I0117 09:27:37.897530 32 controllermanager.go:533] Started "endpoint" I0117 09:27:37.897706 32 endpoints_controller.go:182] Starting endpoint controller I0117 09:27:37.897720 32 shared_informer.go:223] Waiting for caches to sync for endpoint I0117 09:27:37.913355 32 controllermanager.go:533] Started "disruption" I0117 09:27:37.913570 32 disruption.go:331] Starting disruption controller I0117 09:27:37.913584 32 shared_informer.go:223] Waiting for caches to sync for disruption time="2021-01-17T09:27:37.915023838Z" level=info msg="Starting /v1, Kind=Secret controller" time="2021-01-17T09:27:37.917872234Z" level=info msg="Active TLS secret k3s-serving (ver=220) (count 7): map[listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-172.17.0.2:172.17.0.2 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/hash:1042dae54ecd440e586d608fc064d597df082ce324a7538cbdf3d291a8a6e360]" I0117 09:27:37.919198 32 controllermanager.go:533] Started "csrapproving" I0117 09:27:37.919311 32 certificate_controller.go:119] Starting certificate controller "csrapproving" I0117 09:27:37.919334 32 shared_informer.go:223] Waiting for caches to sync for certificate-csrapproving W0117 09:27:38.020566 32 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. I0117 09:27:38.021427 32 controllermanager.go:533] Started "attachdetach" I0117 09:27:38.021552 32 attach_detach_controller.go:348] Starting attach detach controller I0117 09:27:38.021567 32 shared_informer.go:223] Waiting for caches to sync for attach detach I0117 09:27:38.069797 32 node_lifecycle_controller.go:384] Sending events to api server. I0117 09:27:38.070041 32 taint_manager.go:163] Sending events to api server. I0117 09:27:38.070106 32 node_lifecycle_controller.go:512] Controller will reconcile labels. 
I0117 09:27:38.070127 32 controllermanager.go:533] Started "nodelifecycle" I0117 09:27:38.070164 32 node_lifecycle_controller.go:546] Starting node controller I0117 09:27:38.070172 32 shared_informer.go:223] Waiting for caches to sync for taint 2021/01/17 09:27:38 [INFO] Creating CRD clusters.management.cattle.io 2021/01/17 09:27:38 [INFO] Creating CRD settings.management.cattle.io 2021/01/17 09:27:38 [INFO] Creating CRD preferences.management.cattle.io 2021/01/17 09:27:38 [INFO] Creating CRD features.management.cattle.io 2021/01/17 09:27:38 [INFO] Creating CRD clusterrepos.catalog.cattle.io 2021/01/17 09:27:38 [INFO] Creating CRD operations.catalog.cattle.io I0117 09:27:38.251035 32 controllermanager.go:533] Started "persistentvolume-expander" I0117 09:27:38.251244 32 expand_controller.go:319] Starting expand controller I0117 09:27:38.251259 32 shared_informer.go:223] Waiting for caches to sync for expand 2021/01/17 09:27:38 [INFO] Creating CRD apps.catalog.cattle.io 2021/01/17 09:27:38 [INFO] Waiting for CRD apps.catalog.cattle.io to become available time="2021-01-17T09:27:38.330715062Z" level=info msg="Waiting for master node local-node startup: nodes \"local-node\" not found" I0117 09:27:38.370836 32 controllermanager.go:533] Started "endpointslice" I0117 09:27:38.370886 32 endpointslice_controller.go:213] Starting endpoint slice controller I0117 09:27:38.370904 32 shared_informer.go:223] Waiting for caches to sync for endpoint_slice I0117 09:27:38.520402 32 controllermanager.go:533] Started "replicationcontroller" I0117 09:27:38.520471 32 replica_set.go:181] Starting replicationcontroller controller I0117 09:27:38.520480 32 shared_informer.go:223] Waiting for caches to sync for ReplicationController I0117 09:27:38.881131 32 request.go:621] Throttling request took 1.043024439s, request: GET:https://127.0.0.1:6444/apis/rbac.authorization.k8s.io/v1?timeout=32s 2021/01/17 09:27:38 [INFO] Done waiting for CRD apps.catalog.cattle.io to become available 2021/01/17 09:27:39 [INFO] Creating CRD authconfigs.management.cattle.io 2021/01/17 09:27:39 [INFO] Creating CRD groupmembers.management.cattle.io 2021/01/17 09:27:39 [INFO] Creating CRD groups.management.cattle.io 2021/01/17 09:27:39 [INFO] Creating CRD tokens.management.cattle.io 2021/01/17 09:27:39 [INFO] Creating CRD userattributes.management.cattle.io 2021/01/17 09:27:39 [INFO] Creating CRD users.management.cattle.io 2021/01/17 09:27:39 [INFO] Waiting for CRD users.management.cattle.io to become available I0117 09:27:39.176161 32 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for endpointslices.discovery.k8s.io I0117 09:27:39.176257 32 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for operations.catalog.cattle.io I0117 09:27:39.176314 32 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for daemonsets.apps I0117 09:27:39.176374 32 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.networking.k8s.io I0117 09:27:39.176420 32 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for deployments.apps W0117 09:27:39.176440 32 shared_informer.go:461] resyncPeriod 85436574215445 is smaller than resyncCheckPeriod 85511746314419 and the informer has already started. 
Changing it to 85511746314419 I0117 09:27:39.176620 32 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for horizontalpodautoscalers.autoscaling I0117 09:27:39.176661 32 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy I0117 09:27:39.176702 32 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for preferences.management.cattle.io I0117 09:27:39.176778 32 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for limitranges I0117 09:27:39.176827 32 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for roles.rbac.authorization.k8s.io I0117 09:27:39.176860 32 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for events.events.k8s.io I0117 09:27:39.176887 32 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for jobs.batch I0117 09:27:39.176914 32 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for networkpolicies.networking.k8s.io I0117 09:27:39.176944 32 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io W0117 09:27:39.176961 32 shared_informer.go:461] resyncPeriod 78142820079773 is smaller than resyncCheckPeriod 85511746314419 and the informer has already started. Changing it to 85511746314419 I0117 09:27:39.177102 32 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for serviceaccounts I0117 09:27:39.177154 32 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for leases.coordination.k8s.io I0117 09:27:39.177185 32 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for helmcharts.helm.cattle.io I0117 09:27:39.177212 32 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for podtemplates I0117 09:27:39.177237 32 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.extensions I0117 09:27:39.177267 32 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for statefulsets.apps I0117 09:27:39.177318 32 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for controllerrevisions.apps I0117 09:27:39.177346 32 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for replicasets.apps I0117 09:27:39.177383 32 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for cronjobs.batch I0117 09:27:39.177422 32 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for apps.catalog.cattle.io I0117 09:27:39.177480 32 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for endpoints I0117 09:27:39.177517 32 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for addons.k3s.cattle.io I0117 09:27:39.177563 32 controllermanager.go:533] Started "resourcequota" I0117 09:27:39.178008 32 resource_quota_controller.go:272] Starting resource quota controller I0117 09:27:39.178026 32 shared_informer.go:223] Waiting for caches to sync for resource quota I0117 09:27:39.178051 32 resource_quota_monitor.go:303] QuotaMonitor running I0117 09:27:39.185680 32 controllermanager.go:533] Started "daemonset" I0117 09:27:39.185808 32 daemon_controller.go:285] Starting daemon sets controller I0117 09:27:39.185817 32 shared_informer.go:223] Waiting for caches to sync for daemon sets 2021-01-17 09:27:41.082626 W | wal: sync duration of 1.89913789s, expected less 
than 1s 2021-01-17 09:27:41.087179 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/daemon-set-controller\" " with result "range_response_count:1 size:210" took too long (1.901074127s) to execute 2021-01-17 09:27:41.087215 W | etcdserver: read-only range request "key:\"/registry/minions/local-node\" " with result "range_response_count:0 size:5" took too long (1.7544897s) to execute 2021-01-17 09:27:41.087255 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/kube-scheduler\" " with result "range_response_count:1 size:568" took too long (214.345005ms) to execute 2021-01-17 09:27:41.087352 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/cloud-controller-manager\" " with result "range_response_count:1 size:588" took too long (1.377918981s) to execute 2021-01-17 09:27:41.087376 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/ttl-controller\" " with result "range_response_count:0 size:5" took too long (1.900426372s) to execute 2021-01-17 09:27:41.087393 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:586" took too long (341.156268ms) to execute 2021-01-17 09:27:41.087474 W | etcdserver: read-only range request "key:\"/registry/apiextensions.k8s.io/customresourcedefinitions/users.management.cattle.io\" " with result "range_response_count:1 size:1720" took too long (1.504855137s) to execute 2021-01-17 09:27:41.087546 W | etcdserver: read-only range request "key:\"/registry/configmaps/kube-system/k3s\" " with result "range_response_count:1 size:501" took too long (1.727487147s) to execute I0117 09:27:41.088218 32 trace.go:116] Trace[842837594]: "Get" url:/api/v1/namespaces/kube-system/serviceaccounts/daemon-set-controller,user-agent:k3s/v1.18.8+k3s1 (linux/amd64) kubernetes/6b59531/tokens-controller,client:127.0.0.1 (started: 2021-01-17 09:27:39.185878404 +0000 UTC m=+8.799731221) (total time: 1.902298151s): Trace[842837594]: [1.902254419s] [1.90224671s] About to write a response I0117 09:27:41.088245 32 trace.go:116] Trace[1963218669]: "Get" url:/api/v1/namespaces/kube-system/configmaps/k3s,user-agent:k3s/v1.18.8+k3s1 (linux/amd64) kubernetes/6b59531,client:127.0.0.1 (started: 2021-01-17 09:27:39.359763304 +0000 UTC m=+8.973616093) (total time: 1.728434979s): Trace[1963218669]: [1.728372112s] [1.728364251s] About to write a response I0117 09:27:41.088632 32 trace.go:116] Trace[1644595984]: "Get" url:/apis/apiextensions.k8s.io/v1beta1/customresourcedefinitions/users.management.cattle.io,user-agent:rancher/v0.0.0 (linux/amd64) kubernetes/$Format,client:127.0.0.1 (started: 2021-01-17 09:27:39.582228148 +0000 UTC m=+9.196080956) (total time: 1.506384615s): Trace[1644595984]: [1.506321499s] [1.506302037s] About to write a response I0117 09:27:41.088824 32 trace.go:116] Trace[2098889693]: "Get" url:/api/v1/namespaces/kube-system/serviceaccounts/ttl-controller,user-agent:k3s/v1.18.8+k3s1 (linux/amd64) kubernetes/6b59531/kube-controller-manager,client:127.0.0.1 (started: 2021-01-17 09:27:39.186544705 +0000 UTC m=+8.800397492) (total time: 1.902261922s): Trace[2098889693]: [1.902261922s] [1.902256682s] END I0117 09:27:41.089108 32 trace.go:116] Trace[2075919364]: "Get" url:/api/v1/namespaces/kube-system/endpoints/cloud-controller-manager,user-agent:k3s/v1.18.8+k3s1 (linux/amd64) kubernetes/6b59531/leader-election,client:127.0.0.1 (started: 2021-01-17 
09:27:39.709068414 +0000 UTC m=+9.322921244) (total time: 1.380010859s): Trace[2075919364]: [1.379988169s] [1.379979621s] About to write a response I0117 09:27:41.089490 32 trace.go:116] Trace[2077113995]: "Get" url:/api/v1/nodes/local-node,user-agent:k3s/v1.18.8+k3s1 (linux/amd64) kubernetes/6b59531,client:127.0.0.1 (started: 2021-01-17 09:27:39.332188347 +0000 UTC m=+8.946041151) (total time: 1.757244807s): Trace[2077113995]: [1.757244807s] [1.757236516s] END time="2021-01-17T09:27:41.090189144Z" level=info msg="Waiting for master node local-node startup: nodes \"local-node\" not found" 2021/01/17 09:27:41 [INFO] Done waiting for CRD users.management.cattle.io to become available I0117 09:27:41.112820 32 controllermanager.go:533] Started "ttl" I0117 09:27:41.112941 32 ttl_controller.go:118] Starting TTL controller I0117 09:27:41.112950 32 shared_informer.go:223] Waiting for caches to sync for TTL I0117 09:27:41.141048 32 controllermanager.go:533] Started "persistentvolume-binder" I0117 09:27:41.141171 32 pv_controller_base.go:295] Starting persistent volume controller I0117 09:27:41.141181 32 shared_informer.go:223] Waiting for caches to sync for persistent volume I0117 09:27:41.168756 32 controllermanager.go:533] Started "clusterrole-aggregation" I0117 09:27:41.168870 32 clusterroleaggregation_controller.go:149] Starting ClusterRoleAggregator I0117 09:27:41.168880 32 shared_informer.go:223] Waiting for caches to sync for ClusterRoleAggregator I0117 09:27:41.200037 32 controllermanager.go:533] Started "namespace" I0117 09:27:41.200197 32 namespace_controller.go:200] Starting namespace controller I0117 09:27:41.200207 32 shared_informer.go:223] Waiting for caches to sync for namespace I0117 09:27:41.223695 32 controllermanager.go:533] Started "serviceaccount" I0117 09:27:41.223878 32 serviceaccounts_controller.go:117] Starting service account controller I0117 09:27:41.223889 32 shared_informer.go:223] Waiting for caches to sync for service account I0117 09:27:41.232781 32 controllermanager.go:533] Started "csrsigning" W0117 09:27:41.232809 32 controllermanager.go:512] "bootstrapsigner" is disabled I0117 09:27:41.232957 32 certificate_controller.go:119] Starting certificate controller "csrsigning" I0117 09:27:41.232969 32 shared_informer.go:223] Waiting for caches to sync for certificate-csrsigning I0117 09:27:41.232998 32 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/rancher/k3s/server/tls/server-ca.crt::/var/lib/rancher/k3s/server/tls/server-ca.key I0117 09:27:41.237678 32 node_ipam_controller.go:94] Sending events to api server. 
time="2021-01-17T09:27:41.426856612Z" level=info msg="Connecting to proxy" url="wss://172.17.0.2:6443/v1-k3s/connect" time="2021-01-17T09:27:41.429550106Z" level=info msg="Handling backend connection request [local-node]" time="2021-01-17T09:27:41.430072245Z" level=warning msg="Disabling CPU quotas due to missing cpu.cfs_period_us" time="2021-01-17T09:27:41.431477598Z" level=info msg="Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=cgroupfs --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --cni-bin-dir=/var/lib/rancher/k3s/data/19bb6a9b46ad0013de084cf1e0feb7927ff9e4e06624685ff87f003c208fded1/bin --cni-conf-dir=/var/lib/rancher/k3s/agent/etc/cni/net.d --container-runtime-endpoint=/run/k3s/containerd/containerd.sock --container-runtime=remote --containerd=/run/k3s/containerd/containerd.sock --cpu-cfs-quota=false --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=local-node --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --kubelet-cgroups=/systemd --node-labels= --read-only-port=0 --resolv-conf=/etc/resolv.conf --runtime-cgroups=/systemd --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key" Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed. I0117 09:27:41.432117 32 server.go:413] Version: v1.18.8+k3s1 time="2021-01-17T09:27:41.433111208Z" level=info msg="Running kube-proxy --cluster-cidr=10.42.0.0/16 --healthz-bind-address=127.0.0.1 --hostname-override=local-node --kubeconfig=/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig --proxy-mode=iptables" W0117 09:27:41.433256 32 server.go:225] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP. W0117 09:27:41.436381 32 proxier.go:625] Failed to read file /lib/modules/3.10.0-1160.11.1.el7.x86_64/modules.builtin with error open /lib/modules/3.10.0-1160.11.1.el7.x86_64/modules.builtin: no such file or directory. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules I0117 09:27:41.442674 32 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/agent/client-ca.crt W0117 09:27:41.445762 32 proxier.go:635] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules W0117 09:27:41.446314 32 proxier.go:635] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules W0117 09:27:41.446792 32 proxier.go:635] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules W0117 09:27:41.447241 32 proxier.go:635] Failed to load kernel module ip_vs_sh with modprobe. 
You can ignore this message when kube-proxy is running inside container without mounting /lib/modules I0117 09:27:41.447463 32 server.go:645] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to / W0117 09:27:41.447743 32 proxier.go:635] Failed to load kernel module nf_conntrack_ipv4 with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules I0117 09:27:41.448038 32 container_manager_linux.go:277] container manager verified user specified cgroup-root exists: [] I0117 09:27:41.448058 32 container_manager_linux.go:282] Creating Container Manager object based on Node Config: {RuntimeCgroupsName:/systemd SystemCgroupsName: KubeletCgroupsName:/systemd ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:false CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none} I0117 09:27:41.448184 32 topology_manager.go:126] [topologymanager] Creating topology manager with none policy I0117 09:27:41.448192 32 container_manager_linux.go:312] [topologymanager] Initializing Topology Manager with none policy I0117 09:27:41.448197 32 container_manager_linux.go:317] Creating device plugin manager: true W0117 09:27:41.448462 32 util_unix.go:103] Using "/run/k3s/containerd/containerd.sock" as endpoint is deprecated, please consider using full url format "unix:///run/k3s/containerd/containerd.sock". W0117 09:27:41.448537 32 util_unix.go:103] Using "/run/k3s/containerd/containerd.sock" as endpoint is deprecated, please consider using full url format "unix:///run/k3s/containerd/containerd.sock". I0117 09:27:41.448613 32 kubelet.go:317] Watching apiserver time="2021-01-17T09:27:41.450055504Z" level=info msg="waiting for node local-node: nodes \"local-node\" not found" I0117 09:27:41.456707 32 kuberuntime_manager.go:211] Container runtime containerd initialized, version: v1.3.3-k3s2, apiVersion: v1alpha2 I0117 09:27:41.457946 32 server.go:1124] Started kubelet I0117 09:27:41.461655 32 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer E0117 09:27:41.463976 32 cri_stats_provider.go:375] Failed to get the info of the filesystem with mountpoint "/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs": unable to find data in memory cache. E0117 09:27:41.464003 32 kubelet.go:1306] Image garbage collection failed once. Stats initialization may not have completed yet: invalid capacity 0 on image filesystem E0117 09:27:41.467193 32 node.go:125] Failed to retrieve node info: nodes "local-node" not found I0117 09:27:41.469786 32 volume_manager.go:265] Starting Kubelet Volume Manager I0117 09:27:41.470474 32 desired_state_of_world_populator.go:139] Desired state populator starts to run I0117 09:27:41.482558 32 server.go:145] Starting to listen on 0.0.0.0:10250 I0117 09:27:41.484097 32 server.go:393] Adding debug handlers to kubelet server. 
E0117 09:27:41.486952 32 controller.go:228] failed to get node "local-node" when trying to set owner ref to the node lease: nodes "local-node" not found I0117 09:27:41.502323 32 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach I0117 09:27:41.515002 32 cpu_manager.go:184] [cpumanager] starting with none policy I0117 09:27:41.515025 32 cpu_manager.go:185] [cpumanager] reconciling every 10s I0117 09:27:41.515088 32 state_mem.go:36] [cpumanager] initializing new in-memory state store I0117 09:27:41.520303 32 policy_none.go:43] [cpumanager] none policy: Start I0117 09:27:41.549568 32 status_manager.go:158] Starting to sync pod status with apiserver I0117 09:27:41.549616 32 kubelet.go:1822] Starting kubelet main sync loop. E0117 09:27:41.549708 32 kubelet.go:1846] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful] W0117 09:27:41.553908 32 manager.go:597] Failed to retrieve checkpoint for "kubelet_internal_checkpoint": checkpoint is not found I0117 09:27:41.555193 32 plugin_manager.go:114] Starting Kubelet Plugin Manager E0117 09:27:41.557058 32 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get node info: node "local-node" not found I0117 09:27:41.584536 32 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach E0117 09:27:41.584745 32 kubelet.go:2268] node "local-node" not found I0117 09:27:41.592156 32 kubelet_node_status.go:70] Attempting to register node local-node I0117 09:27:41.671866 32 reconciler.go:157] Reconciler: start to sync state E0117 09:27:41.684873 32 kubelet.go:2268] node "local-node" not found E0117 09:27:41.785020 32 kubelet.go:2268] node "local-node" not found 2021/01/17 09:27:41 [INFO] Starting API controllers I0117 09:27:41.857521 32 node_controller.go:325] Initializing node local-node with cloud provider I0117 09:27:41.859376 32 kubelet_node_status.go:73] Successfully registered node local-node time="2021-01-17T09:27:41.863126126Z" level=info msg="couldn't find node internal ip label on node local-node" time="2021-01-17T09:27:41.863162622Z" level=info msg="couldn't find node hostname label on node local-node" time="2021-01-17T09:27:41.866256604Z" level=info msg="Updated coredns node hosts entry [172.17.0.2 local-node]" time="2021-01-17T09:27:41.939323810Z" level=info msg="couldn't find node internal ip label on node local-node" time="2021-01-17T09:27:41.939354257Z" level=info msg="couldn't find node hostname label on node local-node" I0117 09:27:41.939415 32 node_controller.go:397] Successfully initialized node local-node with cloud provider I0117 09:27:41.939431 32 node_controller.go:325] Initializing node local-node with cloud provider 2021/01/17 09:27:42 [INFO] Starting management.cattle.io/v3, Kind=UserAttribute controller 2021/01/17 09:27:42 [INFO] Starting /v1, Kind=Secret controller 2021/01/17 09:27:42 [INFO] Starting management.cattle.io/v3, Kind=GroupMember controller 2021/01/17 09:27:42 [INFO] Starting management.cattle.io/v3, Kind=Group controller 2021/01/17 09:27:42 [INFO] Starting management.cattle.io/v3, Kind=User controller 2021/01/17 09:27:42 [INFO] Starting management.cattle.io/v3, Kind=Token controller time="2021-01-17T09:27:42.113764913Z" level=info msg="master role label has been set succesfully on node: local-node" I0117 09:27:42.286098 8 leaderelection.go:243] attempting to acquire leader lease kube-system/cattle-controllers... 
2021/01/17 09:27:42 [INFO] Starting rbac.authorization.k8s.io/v1, Kind=RoleBinding controller 2021/01/17 09:27:42 [INFO] Starting /v1, Kind=ConfigMap controller 2021/01/17 09:27:42 [INFO] Starting management.cattle.io/v3, Kind=Preference controller 2021/01/17 09:27:42 [INFO] Starting rbac.authorization.k8s.io/v1, Kind=Role controller 2021/01/17 09:27:42 [INFO] Starting management.cattle.io/v3, Kind=Cluster controller 2021/01/17 09:27:42 [INFO] Starting management.cattle.io/v3, Kind=Feature controller 2021/01/17 09:27:42 [INFO] Starting apiextensions.k8s.io/v1beta1, Kind=CustomResourceDefinition controller 2021/01/17 09:27:42 [INFO] Starting apiregistration.k8s.io/v1, Kind=APIService controller 2021/01/17 09:27:42 [INFO] Starting rbac.authorization.k8s.io/v1, Kind=ClusterRole controller 2021/01/17 09:27:42 [INFO] Starting catalog.cattle.io/v1, Kind=ClusterRepo controller 2021/01/17 09:27:42 [INFO] Starting management.cattle.io/v3, Kind=Setting controller 2021/01/17 09:27:42 [INFO] Starting /v1, Kind=Secret controller 2021/01/17 09:27:42 [INFO] Starting rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding controller I0117 09:27:42.297422 8 leaderelection.go:253] successfully acquired lease kube-system/cattle-controllers 2021/01/17 09:27:42 [INFO] Creating CRD fleetworkspaces.management.cattle.io 2021/01/17 09:27:42 [INFO] Waiting for CRD fleetworkspaces.management.cattle.io to become available 2021/01/17 09:27:42 [INFO] Steve auth startup complete I0117 09:27:42.619661 32 node.go:136] Successfully retrieved node IP: 172.17.0.2 I0117 09:27:42.619698 32 server_others.go:187] Using iptables Proxier. I0117 09:27:42.620242 32 server.go:583] Version: v1.18.8+k3s1 I0117 09:27:42.620893 32 conntrack.go:52] Setting nf_conntrack_max to 131072 I0117 09:27:42.621028 32 conntrack.go:103] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400 I0117 09:27:42.621117 32 conntrack.go:103] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600 I0117 09:27:42.621357 32 config.go:133] Starting endpoints config controller I0117 09:27:42.621379 32 shared_informer.go:223] Waiting for caches to sync for endpoints config I0117 09:27:42.621397 32 config.go:315] Starting service config controller I0117 09:27:42.621412 32 shared_informer.go:223] Waiting for caches to sync for service config I0117 09:27:42.721507 32 shared_informer.go:230] Caches are synced for service config I0117 09:27:42.721619 32 shared_informer.go:230] Caches are synced for endpoints config 2021/01/17 09:27:42 [INFO] Refreshing all schemas 2021/01/17 09:27:42 [INFO] Done waiting for CRD fleetworkspaces.management.cattle.io to become available 2021/01/17 09:27:42 [INFO] Running in single server mode, will not peer connections 2021/01/17 09:27:42 [INFO] Creating CRD apps.project.cattle.io 2021/01/17 09:27:42 [INFO] Creating CRD catalogs.management.cattle.io 2021/01/17 09:27:42 [INFO] Creating CRD catalogtemplates.management.cattle.io 2021/01/17 09:27:42 [INFO] Creating CRD apprevisions.project.cattle.io 2021/01/17 09:27:42 [INFO] Creating CRD catalogtemplateversions.management.cattle.io 2021/01/17 09:27:42 [INFO] Creating CRD pipelineexecutions.project.cattle.io 2021/01/17 09:27:42 [INFO] Creating CRD clusteralerts.management.cattle.io 2021/01/17 09:27:42 [INFO] Creating CRD pipelinesettings.project.cattle.io 2021/01/17 09:27:42 [INFO] Creating CRD pipelines.project.cattle.io 2021/01/17 09:27:42 [INFO] Creating CRD clusteralertgroups.management.cattle.io 2021/01/17 09:27:42 [INFO] Creating CRD 
sourcecodecredentials.project.cattle.io 2021/01/17 09:27:42 [INFO] Creating CRD clustercatalogs.management.cattle.io 2021/01/17 09:27:42 [INFO] Creating CRD clusterloggings.management.cattle.io 2021/01/17 09:27:42 [INFO] Creating CRD sourcecodeproviderconfigs.project.cattle.io 2021/01/17 09:27:42 [INFO] Creating CRD sourcecoderepositories.project.cattle.io 2021/01/17 09:27:42 [INFO] Refreshing all schemas 2021/01/17 09:27:42 [INFO] Creating CRD clusteralertrules.management.cattle.io 2021/01/17 09:27:42 [INFO] Creating CRD clustermonitorgraphs.management.cattle.io 2021/01/17 09:27:42 [INFO] APIVersion /v1 Kind Binding 2021/01/17 09:27:42 [INFO] APIVersion /v1 Kind ComponentStatus 2021/01/17 09:27:42 [INFO] APIVersion /v1 Kind ConfigMap 2021/01/17 09:27:42 [INFO] APIVersion /v1 Kind Endpoints 2021/01/17 09:27:42 [INFO] APIVersion /v1 Kind Event 2021/01/17 09:27:42 [INFO] APIVersion /v1 Kind LimitRange 2021/01/17 09:27:42 [INFO] APIVersion /v1 Kind Namespace 2021/01/17 09:27:42 [INFO] APIVersion /v1 Kind Node 2021/01/17 09:27:42 [INFO] APIVersion /v1 Kind PersistentVolumeClaim 2021/01/17 09:27:42 [INFO] APIVersion /v1 Kind PersistentVolume 2021/01/17 09:27:42 [INFO] APIVersion /v1 Kind Pod 2021/01/17 09:27:42 [INFO] APIVersion /v1 Kind PodTemplate 2021/01/17 09:27:42 [INFO] APIVersion /v1 Kind ReplicationController 2021/01/17 09:27:42 [INFO] APIVersion /v1 Kind ResourceQuota 2021/01/17 09:27:42 [INFO] APIVersion /v1 Kind Secret 2021/01/17 09:27:42 [INFO] APIVersion /v1 Kind ServiceAccount 2021/01/17 09:27:42 [INFO] APIVersion /v1 Kind Service 2021/01/17 09:27:42 [INFO] APIVersion apiregistration.k8s.io/v1 Kind APIService 2021/01/17 09:27:42 [INFO] APIVersion apiregistration.k8s.io/v1beta1 Kind APIService 2021/01/17 09:27:42 [INFO] APIVersion extensions/v1beta1 Kind Ingress 2021/01/17 09:27:42 [INFO] APIVersion apps/v1 Kind ControllerRevision 2021/01/17 09:27:42 [INFO] APIVersion apps/v1 Kind DaemonSet 2021/01/17 09:27:42 [INFO] APIVersion apps/v1 Kind Deployment 2021/01/17 09:27:42 [INFO] APIVersion apps/v1 Kind ReplicaSet 2021/01/17 09:27:42 [INFO] APIVersion apps/v1 Kind StatefulSet 2021/01/17 09:27:42 [INFO] APIVersion events.k8s.io/v1beta1 Kind Event 2021/01/17 09:27:42 [INFO] APIVersion authentication.k8s.io/v1 Kind TokenReview 2021/01/17 09:27:42 [INFO] APIVersion authentication.k8s.io/v1beta1 Kind TokenReview 2021/01/17 09:27:42 [INFO] APIVersion authorization.k8s.io/v1 Kind LocalSubjectAccessReview 2021/01/17 09:27:42 [INFO] APIVersion authorization.k8s.io/v1 Kind SelfSubjectAccessReview 2021/01/17 09:27:42 [INFO] APIVersion authorization.k8s.io/v1 Kind SelfSubjectRulesReview 2021/01/17 09:27:42 [INFO] APIVersion authorization.k8s.io/v1 Kind SubjectAccessReview 2021/01/17 09:27:42 [INFO] APIVersion authorization.k8s.io/v1beta1 Kind LocalSubjectAccessReview 2021/01/17 09:27:42 [INFO] APIVersion authorization.k8s.io/v1beta1 Kind SelfSubjectAccessReview 2021/01/17 09:27:42 [INFO] APIVersion authorization.k8s.io/v1beta1 Kind SelfSubjectRulesReview 2021/01/17 09:27:42 [INFO] APIVersion authorization.k8s.io/v1beta1 Kind SubjectAccessReview 2021/01/17 09:27:42 [INFO] APIVersion autoscaling/v1 Kind HorizontalPodAutoscaler 2021/01/17 09:27:42 [INFO] APIVersion autoscaling/v2beta1 Kind HorizontalPodAutoscaler 2021/01/17 09:27:42 [INFO] APIVersion autoscaling/v2beta2 Kind HorizontalPodAutoscaler 2021/01/17 09:27:42 [INFO] APIVersion batch/v1 Kind Job 2021/01/17 09:27:42 [INFO] APIVersion batch/v1beta1 Kind CronJob 2021/01/17 09:27:42 [INFO] APIVersion certificates.k8s.io/v1beta1 Kind 
CertificateSigningRequest 2021/01/17 09:27:42 [INFO] APIVersion networking.k8s.io/v1 Kind NetworkPolicy 2021/01/17 09:27:42 [INFO] APIVersion networking.k8s.io/v1beta1 Kind IngressClass 2021/01/17 09:27:42 [INFO] APIVersion networking.k8s.io/v1beta1 Kind Ingress 2021/01/17 09:27:42 [INFO] APIVersion policy/v1beta1 Kind PodDisruptionBudget 2021/01/17 09:27:42 [INFO] APIVersion policy/v1beta1 Kind PodSecurityPolicy 2021/01/17 09:27:42 [INFO] APIVersion rbac.authorization.k8s.io/v1 Kind ClusterRoleBinding 2021/01/17 09:27:42 [INFO] APIVersion rbac.authorization.k8s.io/v1 Kind ClusterRole 2021/01/17 09:27:42 [INFO] APIVersion rbac.authorization.k8s.io/v1 Kind RoleBinding 2021/01/17 09:27:42 [INFO] APIVersion rbac.authorization.k8s.io/v1 Kind Role 2021/01/17 09:27:42 [INFO] APIVersion rbac.authorization.k8s.io/v1beta1 Kind ClusterRoleBinding 2021/01/17 09:27:42 [INFO] APIVersion rbac.authorization.k8s.io/v1beta1 Kind ClusterRole 2021/01/17 09:27:42 [INFO] APIVersion rbac.authorization.k8s.io/v1beta1 Kind RoleBinding 2021/01/17 09:27:42 [INFO] APIVersion rbac.authorization.k8s.io/v1beta1 Kind Role 2021/01/17 09:27:42 [INFO] APIVersion storage.k8s.io/v1 Kind CSIDriver 2021/01/17 09:27:42 [INFO] APIVersion storage.k8s.io/v1 Kind CSINode 2021/01/17 09:27:42 [INFO] APIVersion storage.k8s.io/v1 Kind StorageClass 2021/01/17 09:27:42 [INFO] APIVersion storage.k8s.io/v1 Kind VolumeAttachment 2021/01/17 09:27:42 [INFO] APIVersion storage.k8s.io/v1beta1 Kind CSIDriver 2021/01/17 09:27:42 [INFO] APIVersion storage.k8s.io/v1beta1 Kind CSINode 2021/01/17 09:27:42 [INFO] APIVersion storage.k8s.io/v1beta1 Kind StorageClass 2021/01/17 09:27:42 [INFO] APIVersion storage.k8s.io/v1beta1 Kind VolumeAttachment 2021/01/17 09:27:42 [INFO] APIVersion admissionregistration.k8s.io/v1 Kind MutatingWebhookConfiguration 2021/01/17 09:27:42 [INFO] APIVersion admissionregistration.k8s.io/v1 Kind ValidatingWebhookConfiguration 2021/01/17 09:27:42 [INFO] APIVersion admissionregistration.k8s.io/v1beta1 Kind MutatingWebhookConfiguration 2021/01/17 09:27:42 [INFO] APIVersion admissionregistration.k8s.io/v1beta1 Kind ValidatingWebhookConfiguration 2021/01/17 09:27:42 [INFO] APIVersion apiextensions.k8s.io/v1 Kind CustomResourceDefinition 2021/01/17 09:27:42 [INFO] APIVersion apiextensions.k8s.io/v1beta1 Kind CustomResourceDefinition 2021/01/17 09:27:42 [INFO] APIVersion scheduling.k8s.io/v1 Kind PriorityClass 2021/01/17 09:27:42 [INFO] APIVersion scheduling.k8s.io/v1beta1 Kind PriorityClass 2021/01/17 09:27:42 [INFO] APIVersion coordination.k8s.io/v1 Kind Lease 2021/01/17 09:27:42 [INFO] APIVersion coordination.k8s.io/v1beta1 Kind Lease 2021/01/17 09:27:42 [INFO] APIVersion node.k8s.io/v1beta1 Kind RuntimeClass 2021/01/17 09:27:42 [INFO] APIVersion discovery.k8s.io/v1beta1 Kind EndpointSlice 2021/01/17 09:27:42 [INFO] APIVersion catalog.cattle.io/v1 Kind Operation 2021/01/17 09:27:42 [INFO] APIVersion catalog.cattle.io/v1 Kind App 2021/01/17 09:27:42 [INFO] APIVersion catalog.cattle.io/v1 Kind ClusterRepo 2021/01/17 09:27:42 [INFO] APIVersion helm.cattle.io/v1 Kind HelmChart 2021/01/17 09:27:42 [INFO] APIVersion k3s.cattle.io/v1 Kind Addon 2021/01/17 09:27:42 [INFO] APIVersion management.cattle.io/v3 Kind Group 2021/01/17 09:27:42 [INFO] APIVersion management.cattle.io/v3 Kind Catalog 2021/01/17 09:27:42 [INFO] APIVersion management.cattle.io/v3 Kind CatalogTemplateVersion 2021/01/17 09:27:42 [INFO] APIVersion management.cattle.io/v3 Kind CatalogTemplate 2021/01/17 09:27:42 [INFO] APIVersion management.cattle.io/v3 Kind Preference 
2021/01/17 09:27:42 [INFO] APIVersion management.cattle.io/v3 Kind Token 2021/01/17 09:27:42 [INFO] APIVersion management.cattle.io/v3 Kind Cluster 2021/01/17 09:27:42 [INFO] APIVersion management.cattle.io/v3 Kind AuthConfig 2021/01/17 09:27:42 [INFO] APIVersion management.cattle.io/v3 Kind UserAttribute 2021/01/17 09:27:42 [INFO] APIVersion management.cattle.io/v3 Kind User 2021/01/17 09:27:42 [INFO] APIVersion management.cattle.io/v3 Kind Feature 2021/01/17 09:27:42 [INFO] APIVersion management.cattle.io/v3 Kind Setting 2021/01/17 09:27:42 [INFO] APIVersion management.cattle.io/v3 Kind GroupMember 2021/01/17 09:27:42 [INFO] APIVersion management.cattle.io/v3 Kind FleetWorkspace 2021/01/17 09:27:42 [INFO] APIVersion project.cattle.io/v3 Kind AppRevision 2021/01/17 09:27:42 [INFO] APIVersion project.cattle.io/v3 Kind PipelineExecution 2021/01/17 09:27:42 [INFO] APIVersion project.cattle.io/v3 Kind App 2021/01/17 09:27:42 [INFO] APIVersion project.cattle.io/v3 Kind PipelineSetting 2021/01/17 09:27:42 [INFO] Creating CRD clusterregistrationtokens.management.cattle.io 2021/01/17 09:27:42 [INFO] Waiting for CRD pipelines.project.cattle.io to become available 2021/01/17 09:27:43 [INFO] Creating CRD clusterroletemplatebindings.management.cattle.io 2021/01/17 09:27:43 [INFO] Creating CRD clusterscans.management.cattle.io 2021/01/17 09:27:43 [INFO] Creating CRD composeconfigs.management.cattle.io 2021/01/17 09:27:43 [INFO] Creating CRD dynamicschemas.management.cattle.io 2021/01/17 09:27:43 [INFO] Creating CRD etcdbackups.management.cattle.io 2021/01/17 09:27:43 [INFO] Creating CRD globalrolebindings.management.cattle.io 2021/01/17 09:27:43 [INFO] Creating CRD globalroles.management.cattle.io 2021/01/17 09:27:43 [INFO] Creating CRD kontainerdrivers.management.cattle.io 2021/01/17 09:27:43 [INFO] Creating CRD multiclusterapps.management.cattle.io 2021/01/17 09:27:43 [INFO] Creating CRD multiclusterapprevisions.management.cattle.io 2021/01/17 09:27:43 [INFO] Creating CRD monitormetrics.management.cattle.io 2021/01/17 09:27:43 [INFO] Creating CRD nodedrivers.management.cattle.io 2021/01/17 09:27:43 [INFO] Creating CRD nodepools.management.cattle.io 2021/01/17 09:27:43 [INFO] Creating CRD nodetemplates.management.cattle.io 2021/01/17 09:27:43 [INFO] Creating CRD nodes.management.cattle.io 2021/01/17 09:27:43 [INFO] Creating CRD notifiers.management.cattle.io 2021/01/17 09:27:43 [INFO] Creating CRD podsecuritypolicytemplateprojectbindings.management.cattle.io 2021/01/17 09:27:43 [INFO] Creating CRD podsecuritypolicytemplates.management.cattle.io 2021/01/17 09:27:43 [INFO] Creating CRD projectalerts.management.cattle.io 2021/01/17 09:27:43 [INFO] Creating CRD projectalertgroups.management.cattle.io 2021/01/17 09:27:43 [INFO] Creating CRD projectcatalogs.management.cattle.io 2021/01/17 09:27:43 [INFO] Creating CRD projectloggings.management.cattle.io 2021/01/17 09:27:43 [INFO] Creating CRD projectalertrules.management.cattle.io 2021/01/17 09:27:43 [INFO] Creating CRD projectmonitorgraphs.management.cattle.io 2021/01/17 09:27:43 [INFO] Creating CRD projectnetworkpolicies.management.cattle.io 2021/01/17 09:27:43 [INFO] Creating CRD projectroletemplatebindings.management.cattle.io 2021/01/17 09:27:43 [INFO] Creating CRD projects.management.cattle.io 2021/01/17 09:27:43 [INFO] Creating CRD rkek8ssystemimages.management.cattle.io 2021/01/17 09:27:43 [INFO] Creating CRD rkek8sserviceoptions.management.cattle.io 2021/01/17 09:27:43 [INFO] Creating CRD rkeaddons.management.cattle.io 2021/01/17 09:27:43 [INFO] 
Creating CRD roletemplates.management.cattle.io 2021/01/17 09:27:43 [INFO] Creating CRD cisconfigs.management.cattle.io 2021/01/17 09:27:43 [INFO] Creating CRD cisbenchmarkversions.management.cattle.io 2021/01/17 09:27:43 [INFO] Creating CRD samltokens.management.cattle.io 2021/01/17 09:27:43 [INFO] Creating CRD templates.management.cattle.io 2021/01/17 09:27:43 [INFO] Creating CRD templateversions.management.cattle.io 2021/01/17 09:27:43 [INFO] Creating CRD templatecontents.management.cattle.io time="2021-01-17T09:27:43.508793917Z" level=info msg="waiting for node local-node CIDR not assigned yet" 2021/01/17 09:27:43 [INFO] Done waiting for CRD pipelines.project.cattle.io to become available 2021/01/17 09:27:43 [INFO] Waiting for CRD sourcecodecredentials.project.cattle.io to become available 2021/01/17 09:27:43 [INFO] Creating CRD globaldnses.management.cattle.io 2021/01/17 09:27:43 [INFO] Creating CRD globaldnsproviders.management.cattle.io 2021/01/17 09:27:43 [INFO] Creating CRD clustertemplates.management.cattle.io 2021/01/17 09:27:43 [INFO] Creating CRD clustertemplaterevisions.management.cattle.io 2021/01/17 09:27:43 [INFO] Waiting for CRD clustertemplaterevisions.management.cattle.io to become available 2021/01/17 09:27:43 [INFO] Watching metadata for batch/v1, Kind=Job 2021/01/17 09:27:43 [INFO] Watching metadata for k3s.cattle.io/v1, Kind=Addon 2021/01/17 09:27:43 [INFO] Watching metadata for apiregistration.k8s.io/v1, Kind=APIService 2021/01/17 09:27:43 [INFO] Watching metadata for storage.k8s.io/v1, Kind=VolumeAttachment 2021/01/17 09:27:43 [INFO] Watching metadata for /v1, Kind=Service 2021/01/17 09:27:43 [INFO] Watching metadata for management.cattle.io/v3, Kind=Catalog 2021/01/17 09:27:43 [INFO] Watching metadata for /v1, Kind=ResourceQuota 2021/01/17 09:27:43 [INFO] Watching metadata for admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration 2021/01/17 09:27:43 [INFO] Watching metadata for project.cattle.io/v3, Kind=AppRevision 2021/01/17 09:27:43 [INFO] Watching metadata for /v1, Kind=Pod 2021/01/17 09:27:43 [INFO] Watching metadata for /v1, Kind=ServiceAccount 2021/01/17 09:27:43 [INFO] Watching metadata for scheduling.k8s.io/v1, Kind=PriorityClass 2021/01/17 09:27:43 [INFO] Watching metadata for /v1, Kind=Secret 2021/01/17 09:27:43 [INFO] Watching metadata for /v1, Kind=Event 2021/01/17 09:27:43 [INFO] Watching metadata for apps/v1, Kind=ControllerRevision 2021/01/17 09:27:43 [INFO] Watching metadata for discovery.k8s.io/v1beta1, Kind=EndpointSlice 2021/01/17 09:27:43 [INFO] Watching metadata for management.cattle.io/v3, Kind=CatalogTemplate 2021/01/17 09:27:43 [INFO] Watching metadata for storage.k8s.io/v1, Kind=CSINode 2021/01/17 09:27:43 [INFO] Watching metadata for coordination.k8s.io/v1, Kind=Lease 2021/01/17 09:27:43 [INFO] Watching metadata for apps/v1, Kind=DaemonSet 2021/01/17 09:27:43 [INFO] Watching metadata for admissionregistration.k8s.io/v1, Kind=MutatingWebhookConfiguration 2021/01/17 09:27:43 [INFO] Watching metadata for /v1, Kind=ReplicationController 2021/01/17 09:27:43 [INFO] Watching metadata for certificates.k8s.io/v1beta1, Kind=CertificateSigningRequest 2021/01/17 09:27:43 [INFO] Watching metadata for batch/v1beta1, Kind=CronJob 2021/01/17 09:27:43 [INFO] Watching metadata for catalog.cattle.io/v1, Kind=Operation 2021/01/17 09:27:43 [INFO] Watching metadata for management.cattle.io/v3, Kind=Feature 2021/01/17 09:27:43 [INFO] Watching metadata for project.cattle.io/v3, Kind=App 2021/01/17 09:27:43 [INFO] Watching metadata for 
management.cattle.io/v3, Kind=CatalogTemplateVersion 2021/01/17 09:27:43 [INFO] Watching metadata for management.cattle.io/v3, Kind=GroupMember 2021/01/17 09:27:43 [INFO] Watching metadata for /v1, Kind=LimitRange 2021/01/17 09:27:43 [INFO] Watching metadata for management.cattle.io/v3, Kind=Cluster 2021/01/17 09:27:43 [INFO] Watching metadata for management.cattle.io/v3, Kind=User 2021/01/17 09:27:43 [INFO] Watching metadata for management.cattle.io/v3, Kind=Token 2021/01/17 09:27:43 [INFO] Watching metadata for /v1, Kind=PodTemplate 2021/01/17 09:27:43 [INFO] Watching metadata for project.cattle.io/v3, Kind=PipelineSetting 2021/01/17 09:27:43 [INFO] Watching metadata for apiextensions.k8s.io/v1, Kind=CustomResourceDefinition 2021/01/17 09:27:43 [INFO] Watching metadata for management.cattle.io/v3, Kind=UserAttribute 2021/01/17 09:27:43 [INFO] Watching metadata for management.cattle.io/v3, Kind=FleetWorkspace 2021/01/17 09:27:43 [INFO] Watching metadata for /v1, Kind=Namespace 2021/01/17 09:27:43 [INFO] Watching metadata for management.cattle.io/v3, Kind=AuthConfig 2021/01/17 09:27:43 [INFO] Watching metadata for /v1, Kind=PersistentVolumeClaim 2021/01/17 09:27:43 [INFO] Watching metadata for management.cattle.io/v3, Kind=Setting 2021/01/17 09:27:43 [INFO] Watching metadata for autoscaling/v1, Kind=HorizontalPodAutoscaler 2021/01/17 09:27:43 [INFO] Watching metadata for /v1, Kind=PersistentVolume 2021/01/17 09:27:43 [INFO] Watching metadata for rbac.authorization.k8s.io/v1, Kind=RoleBinding 2021/01/17 09:27:43 [INFO] Watching metadata for storage.k8s.io/v1, Kind=CSIDriver 2021/01/17 09:27:43 [INFO] Watching metadata for /v1, Kind=ConfigMap 2021/01/17 09:27:43 [INFO] Watching metadata for /v1, Kind=Node 2021/01/17 09:27:43 [INFO] Watching metadata for policy/v1beta1, Kind=PodDisruptionBudget 2021/01/17 09:27:43 [INFO] Watching metadata for policy/v1beta1, Kind=PodSecurityPolicy 2021/01/17 09:27:43 [INFO] Watching metadata for apps/v1, Kind=StatefulSet 2021/01/17 09:27:43 [INFO] Watching metadata for storage.k8s.io/v1, Kind=StorageClass 2021/01/17 09:27:43 [INFO] Watching metadata for node.k8s.io/v1beta1, Kind=RuntimeClass 2021/01/17 09:27:43 [INFO] Watching metadata for rbac.authorization.k8s.io/v1, Kind=Role 2021/01/17 09:27:43 [INFO] Watching metadata for management.cattle.io/v3, Kind=Preference 2021/01/17 09:27:43 [INFO] Watching metadata for networking.k8s.io/v1beta1, Kind=Ingress 2021/01/17 09:27:43 [INFO] Watching metadata for apps/v1, Kind=ReplicaSet 2021/01/17 09:27:43 [INFO] Watching metadata for networking.k8s.io/v1beta1, Kind=IngressClass 2021/01/17 09:27:43 [INFO] Watching metadata for rbac.authorization.k8s.io/v1, Kind=ClusterRole 2021/01/17 09:27:43 [INFO] Watching metadata for catalog.cattle.io/v1, Kind=ClusterRepo 2021/01/17 09:27:43 [INFO] Watching metadata for networking.k8s.io/v1, Kind=NetworkPolicy 2021/01/17 09:27:43 [INFO] Watching metadata for events.k8s.io/v1beta1, Kind=Event 2021/01/17 09:27:43 [INFO] Watching metadata for catalog.cattle.io/v1, Kind=App 2021/01/17 09:27:43 [INFO] Watching metadata for project.cattle.io/v3, Kind=PipelineExecution 2021/01/17 09:27:43 [INFO] Watching metadata for management.cattle.io/v3, Kind=Group 2021/01/17 09:27:43 [INFO] Watching metadata for /v1, Kind=Endpoints 2021/01/17 09:27:43 [INFO] Watching metadata for apps/v1, Kind=Deployment 2021/01/17 09:27:43 [INFO] Watching metadata for helm.cattle.io/v1, Kind=HelmChart 2021/01/17 09:27:43 [INFO] Watching metadata for rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding 2021/01/17 
09:27:43 [INFO] Refreshing all schemas 2021/01/17 09:27:44 [INFO] Done waiting for CRD sourcecodecredentials.project.cattle.io to become available 2021/01/17 09:27:44 [INFO] Waiting for CRD sourcecodeproviderconfigs.project.cattle.io to become available 2021/01/17 09:27:44 [INFO] Refreshing all schemas 2021/01/17 09:27:44 [INFO] Done waiting for CRD clustertemplaterevisions.management.cattle.io to become available 2021/01/17 09:27:44 [INFO] Waiting for CRD globaldnsproviders.management.cattle.io to become available 2021/01/17 09:27:44 [INFO] APIVersion /v1 Kind Binding 2021/01/17 09:27:44 [INFO] APIVersion /v1 Kind ComponentStatus 2021/01/17 09:27:44 [INFO] APIVersion /v1 Kind ConfigMap 2021/01/17 09:27:44 [INFO] APIVersion /v1 Kind Endpoints 2021/01/17 09:27:44 [INFO] APIVersion /v1 Kind Event 2021/01/17 09:27:44 [INFO] APIVersion /v1 Kind LimitRange 2021/01/17 09:27:44 [INFO] APIVersion /v1 Kind Namespace 2021/01/17 09:27:44 [INFO] APIVersion /v1 Kind Node 2021/01/17 09:27:44 [INFO] APIVersion /v1 Kind PersistentVolumeClaim 2021/01/17 09:27:44 [INFO] APIVersion /v1 Kind PersistentVolume 2021/01/17 09:27:44 [INFO] APIVersion /v1 Kind Pod 2021/01/17 09:27:44 [INFO] APIVersion /v1 Kind PodTemplate 2021/01/17 09:27:44 [INFO] APIVersion /v1 Kind ReplicationController 2021/01/17 09:27:44 [INFO] APIVersion /v1 Kind ResourceQuota 2021/01/17 09:27:44 [INFO] APIVersion /v1 Kind Secret 2021/01/17 09:27:44 [INFO] APIVersion /v1 Kind ServiceAccount 2021/01/17 09:27:44 [INFO] APIVersion /v1 Kind Service 2021/01/17 09:27:44 [INFO] APIVersion apiregistration.k8s.io/v1 Kind APIService 2021/01/17 09:27:44 [INFO] APIVersion apiregistration.k8s.io/v1beta1 Kind APIService 2021/01/17 09:27:44 [INFO] APIVersion extensions/v1beta1 Kind Ingress 2021/01/17 09:27:44 [INFO] APIVersion apps/v1 Kind ControllerRevision 2021/01/17 09:27:44 [INFO] APIVersion apps/v1 Kind DaemonSet 2021/01/17 09:27:44 [INFO] APIVersion apps/v1 Kind Deployment 2021/01/17 09:27:44 [INFO] APIVersion apps/v1 Kind ReplicaSet 2021/01/17 09:27:44 [INFO] APIVersion apps/v1 Kind StatefulSet 2021/01/17 09:27:44 [INFO] APIVersion events.k8s.io/v1beta1 Kind Event 2021/01/17 09:27:44 [INFO] APIVersion authentication.k8s.io/v1 Kind TokenReview 2021/01/17 09:27:44 [INFO] APIVersion authentication.k8s.io/v1beta1 Kind TokenReview 2021/01/17 09:27:44 [INFO] APIVersion authorization.k8s.io/v1 Kind LocalSubjectAccessReview 2021/01/17 09:27:44 [INFO] APIVersion authorization.k8s.io/v1 Kind SelfSubjectAccessReview 2021/01/17 09:27:44 [INFO] APIVersion authorization.k8s.io/v1 Kind SelfSubjectRulesReview 2021/01/17 09:27:44 [INFO] APIVersion authorization.k8s.io/v1 Kind SubjectAccessReview 2021/01/17 09:27:44 [INFO] APIVersion authorization.k8s.io/v1beta1 Kind LocalSubjectAccessReview 2021/01/17 09:27:44 [INFO] APIVersion authorization.k8s.io/v1beta1 Kind SelfSubjectAccessReview 2021/01/17 09:27:44 [INFO] APIVersion authorization.k8s.io/v1beta1 Kind SelfSubjectRulesReview 2021/01/17 09:27:44 [INFO] APIVersion authorization.k8s.io/v1beta1 Kind SubjectAccessReview 2021/01/17 09:27:44 [INFO] APIVersion autoscaling/v1 Kind HorizontalPodAutoscaler 2021/01/17 09:27:44 [INFO] APIVersion autoscaling/v2beta1 Kind HorizontalPodAutoscaler 2021/01/17 09:27:44 [INFO] APIVersion autoscaling/v2beta2 Kind HorizontalPodAutoscaler 2021/01/17 09:27:44 [INFO] APIVersion batch/v1 Kind Job 2021/01/17 09:27:44 [INFO] APIVersion batch/v1beta1 Kind CronJob 2021/01/17 09:27:44 [INFO] APIVersion certificates.k8s.io/v1beta1 Kind CertificateSigningRequest 2021/01/17 09:27:44 [INFO] 
APIVersion networking.k8s.io/v1 Kind NetworkPolicy 2021/01/17 09:27:44 [INFO] APIVersion networking.k8s.io/v1beta1 Kind IngressClass 2021/01/17 09:27:44 [INFO] APIVersion networking.k8s.io/v1beta1 Kind Ingress 2021/01/17 09:27:44 [INFO] APIVersion policy/v1beta1 Kind PodDisruptionBudget 2021/01/17 09:27:44 [INFO] APIVersion policy/v1beta1 Kind PodSecurityPolicy 2021/01/17 09:27:44 [INFO] APIVersion rbac.authorization.k8s.io/v1 Kind ClusterRoleBinding 2021/01/17 09:27:44 [INFO] APIVersion rbac.authorization.k8s.io/v1 Kind ClusterRole 2021/01/17 09:27:44 [INFO] APIVersion rbac.authorization.k8s.io/v1 Kind RoleBinding 2021/01/17 09:27:44 [INFO] APIVersion rbac.authorization.k8s.io/v1 Kind Role 2021/01/17 09:27:44 [INFO] APIVersion rbac.authorization.k8s.io/v1beta1 Kind ClusterRoleBinding 2021/01/17 09:27:44 [INFO] APIVersion rbac.authorization.k8s.io/v1beta1 Kind ClusterRole 2021/01/17 09:27:44 [INFO] APIVersion rbac.authorization.k8s.io/v1beta1 Kind RoleBinding 2021/01/17 09:27:44 [INFO] APIVersion rbac.authorization.k8s.io/v1beta1 Kind Role 2021/01/17 09:27:44 [INFO] APIVersion storage.k8s.io/v1 Kind CSIDriver 2021/01/17 09:27:44 [INFO] APIVersion storage.k8s.io/v1 Kind CSINode 2021/01/17 09:27:44 [INFO] APIVersion storage.k8s.io/v1 Kind StorageClass 2021/01/17 09:27:44 [INFO] APIVersion storage.k8s.io/v1 Kind VolumeAttachment 2021/01/17 09:27:44 [INFO] APIVersion storage.k8s.io/v1beta1 Kind CSIDriver 2021/01/17 09:27:44 [INFO] APIVersion storage.k8s.io/v1beta1 Kind CSINode 2021/01/17 09:27:44 [INFO] APIVersion storage.k8s.io/v1beta1 Kind StorageClass 2021/01/17 09:27:44 [INFO] APIVersion storage.k8s.io/v1beta1 Kind VolumeAttachment 2021/01/17 09:27:44 [INFO] APIVersion admissionregistration.k8s.io/v1 Kind MutatingWebhookConfiguration 2021/01/17 09:27:44 [INFO] APIVersion admissionregistration.k8s.io/v1 Kind ValidatingWebhookConfiguration 2021/01/17 09:27:44 [INFO] APIVersion admissionregistration.k8s.io/v1beta1 Kind MutatingWebhookConfiguration 2021/01/17 09:27:44 [INFO] APIVersion admissionregistration.k8s.io/v1beta1 Kind ValidatingWebhookConfiguration 2021/01/17 09:27:44 [INFO] APIVersion apiextensions.k8s.io/v1 Kind CustomResourceDefinition 2021/01/17 09:27:44 [INFO] APIVersion apiextensions.k8s.io/v1beta1 Kind CustomResourceDefinition 2021/01/17 09:27:44 [INFO] APIVersion scheduling.k8s.io/v1 Kind PriorityClass 2021/01/17 09:27:44 [INFO] APIVersion scheduling.k8s.io/v1beta1 Kind PriorityClass 2021/01/17 09:27:44 [INFO] APIVersion coordination.k8s.io/v1 Kind Lease 2021/01/17 09:27:44 [INFO] APIVersion coordination.k8s.io/v1beta1 Kind Lease 2021/01/17 09:27:44 [INFO] APIVersion node.k8s.io/v1beta1 Kind RuntimeClass 2021/01/17 09:27:44 [INFO] APIVersion discovery.k8s.io/v1beta1 Kind EndpointSlice 2021/01/17 09:27:44 [INFO] APIVersion catalog.cattle.io/v1 Kind Operation 2021/01/17 09:27:44 [INFO] APIVersion catalog.cattle.io/v1 Kind App 2021/01/17 09:27:44 [INFO] APIVersion catalog.cattle.io/v1 Kind ClusterRepo 2021/01/17 09:27:44 [INFO] APIVersion helm.cattle.io/v1 Kind HelmChart 2021/01/17 09:27:44 [INFO] APIVersion k3s.cattle.io/v1 Kind Addon 2021/01/17 09:27:44 [INFO] APIVersion management.cattle.io/v3 Kind MultiClusterAppRevision 2021/01/17 09:27:44 [INFO] APIVersion management.cattle.io/v3 Kind NodePool 2021/01/17 09:27:44 [INFO] APIVersion management.cattle.io/v3 Kind Notifier 2021/01/17 09:27:44 [INFO] APIVersion management.cattle.io/v3 Kind ProjectMonitorGraph 2021/01/17 09:27:44 [INFO] APIVersion management.cattle.io/v3 Kind RkeAddon 2021/01/17 09:27:44 [INFO] APIVersion 
management.cattle.io/v3 Kind Preference 2021/01/17 09:27:44 [INFO] APIVersion management.cattle.io/v3 Kind ProjectRoleTemplateBinding 2021/01/17 09:27:44 [INFO] APIVersion management.cattle.io/v3 Kind Cluster 2021/01/17 09:27:44 [INFO] APIVersion management.cattle.io/v3 Kind ClusterAlert 2021/01/17 09:27:44 [INFO] APIVersion management.cattle.io/v3 Kind MonitorMetric 2021/01/17 09:27:44 [INFO] APIVersion management.cattle.io/v3 Kind Template 2021/01/17 09:27:44 [INFO] APIVersion management.cattle.io/v3 Kind GlobalRoleBinding 2021/01/17 09:27:44 [INFO] APIVersion management.cattle.io/v3 Kind PodSecurityPolicyTemplate 2021/01/17 09:27:44 [INFO] APIVersion management.cattle.io/v3 Kind TemplateVersion 2021/01/17 09:27:44 [INFO] APIVersion management.cattle.io/v3 Kind ClusterAlertGroup 2021/01/17 09:27:44 [INFO] APIVersion management.cattle.io/v3 Kind ClusterMonitorGraph 2021/01/17 09:27:44 [INFO] APIVersion management.cattle.io/v3 Kind Project 2021/01/17 09:27:44 [INFO] APIVersion management.cattle.io/v3 Kind NodeDriver 2021/01/17 09:27:44 [INFO] APIVersion management.cattle.io/v3 Kind NodeTemplate 2021/01/17 09:27:44 [INFO] APIVersion management.cattle.io/v3 Kind ProjectAlert 2021/01/17 09:27:44 [INFO] APIVersion management.cattle.io/v3 Kind GroupMember 2021/01/17 09:27:44 [INFO] APIVersion management.cattle.io/v3 Kind FleetWorkspace 2021/01/17 09:27:44 [INFO] APIVersion management.cattle.io/v3 Kind ClusterRegistrationToken 2021/01/17 09:27:44 [INFO] APIVersion management.cattle.io/v3 Kind TemplateContent 2021/01/17 09:27:44 [INFO] APIVersion management.cattle.io/v3 Kind Catalog 2021/01/17 09:27:44 [INFO] APIVersion management.cattle.io/v3 Kind ProjectAlertGroup 2021/01/17 09:27:44 [INFO] APIVersion management.cattle.io/v3 Kind RkeK8sSystemImage 2021/01/17 09:27:44 [INFO] APIVersion management.cattle.io/v3 Kind CatalogTemplate 2021/01/17 09:27:44 [INFO] APIVersion management.cattle.io/v3 Kind ClusterRoleTemplateBinding 2021/01/17 09:27:44 [INFO] APIVersion management.cattle.io/v3 Kind ComposeConfig 2021/01/17 09:27:44 [INFO] APIVersion management.cattle.io/v3 Kind EtcdBackup 2021/01/17 09:27:44 [INFO] APIVersion management.cattle.io/v3 Kind RkeK8sServiceOption 2021/01/17 09:27:44 [INFO] APIVersion management.cattle.io/v3 Kind SamlToken 2021/01/17 09:27:44 [INFO] APIVersion management.cattle.io/v3 Kind GlobalDns 2021/01/17 09:27:44 [INFO] APIVersion management.cattle.io/v3 Kind Token 2021/01/17 09:27:44 [INFO] APIVersion management.cattle.io/v3 Kind ClusterCatalog 2021/01/17 09:27:44 [INFO] APIVersion management.cattle.io/v3 Kind ClusterScan 2021/01/17 09:27:44 [INFO] APIVersion management.cattle.io/v3 Kind DynamicSchema 2021/01/17 09:27:44 [INFO] APIVersion management.cattle.io/v3 Kind ProjectCatalog 2021/01/17 09:27:44 [INFO] APIVersion management.cattle.io/v3 Kind AuthConfig 2021/01/17 09:27:44 [INFO] APIVersion management.cattle.io/v3 Kind UserAttribute 2021/01/17 09:27:44 [INFO] APIVersion management.cattle.io/v3 Kind User 2021/01/17 09:27:44 [INFO] APIVersion management.cattle.io/v3 Kind ClusterAlertRule 2021/01/17 09:27:44 [INFO] APIVersion management.cattle.io/v3 Kind Node 2021/01/17 09:27:44 [INFO] APIVersion management.cattle.io/v3 Kind ProjectAlertRule 2021/01/17 09:27:44 [INFO] APIVersion management.cattle.io/v3 Kind GlobalDnsProvider 2021/01/17 09:27:44 [INFO] APIVersion management.cattle.io/v3 Kind ClusterLogging 2021/01/17 09:27:44 [INFO] APIVersion management.cattle.io/v3 Kind RoleTemplate 2021/01/17 09:27:44 [INFO] APIVersion management.cattle.io/v3 Kind Feature 2021/01/17 
09:27:44 [INFO] APIVersion management.cattle.io/v3 Kind MultiClusterApp 2021/01/17 09:27:44 [INFO] APIVersion management.cattle.io/v3 Kind PodSecurityPolicyTemplateProjectBinding 2021/01/17 09:27:44 [INFO] APIVersion management.cattle.io/v3 Kind ProjectNetworkPolicy 2021/01/17 09:27:44 [INFO] APIVersion management.cattle.io/v3 Kind Setting 2021/01/17 09:27:44 [INFO] APIVersion management.cattle.io/v3 Kind GlobalRole 2021/01/17 09:27:44 [INFO] APIVersion management.cattle.io/v3 Kind CisBenchmarkVersion 2021/01/17 09:27:44 [INFO] APIVersion management.cattle.io/v3 Kind ClusterTemplateRevision 2021/01/17 09:27:44 [INFO] APIVersion management.cattle.io/v3 Kind Group 2021/01/17 09:27:44 [INFO] APIVersion management.cattle.io/v3 Kind CatalogTemplateVersion 2021/01/17 09:27:44 [INFO] APIVersion management.cattle.io/v3 Kind KontainerDriver 2021/01/17 09:27:44 [INFO] APIVersion management.cattle.io/v3 Kind ProjectLogging 2021/01/17 09:27:44 [INFO] APIVersion management.cattle.io/v3 Kind CisConfig 2021/01/17 09:27:44 [INFO] APIVersion management.cattle.io/v3 Kind ClusterTemplate 2021/01/17 09:27:44 [INFO] APIVersion project.cattle.io/v3 Kind App 2021/01/17 09:27:44 [INFO] APIVersion project.cattle.io/v3 Kind PipelineSetting 2021/01/17 09:27:44 [INFO] APIVersion project.cattle.io/v3 Kind SourceCodeProviderConfig 2021/01/17 09:27:44 [INFO] APIVersion project.cattle.io/v3 Kind SourceCodeRepository 2021/01/17 09:27:44 [INFO] APIVersion project.cattle.io/v3 Kind SourceCodeCredential 2021/01/17 09:27:44 [INFO] APIVersion project.cattle.io/v3 Kind Pipeline 2021/01/17 09:27:44 [INFO] APIVersion project.cattle.io/v3 Kind AppRevision 2021/01/17 09:27:44 [INFO] APIVersion project.cattle.io/v3 Kind PipelineExecution 2021/01/17 09:27:44 [INFO] Done waiting for CRD sourcecodeproviderconfigs.project.cattle.io to become available 2021/01/17 09:27:44 [INFO] Waiting for CRD sourcecoderepositories.project.cattle.io to become available 2021/01/17 09:27:44 [INFO] Done waiting for CRD globaldnsproviders.management.cattle.io to become available 2021/01/17 09:27:44 [INFO] Waiting for CRD projectroletemplatebindings.management.cattle.io to become available 2021/01/17 09:27:45 [INFO] Done waiting for CRD sourcecoderepositories.project.cattle.io to become available 2021/01/17 09:27:45 [INFO] Done waiting for CRD projectroletemplatebindings.management.cattle.io to become available 2021/01/17 09:27:45 [INFO] Waiting for CRD samltokens.management.cattle.io to become available time="2021-01-17T09:27:45.513409541Z" level=info msg="waiting for node local-node CIDR not assigned yet" 2021/01/17 09:27:45 [INFO] Done waiting for CRD samltokens.management.cattle.io to become available 2021/01/17 09:27:45 [INFO] Waiting for CRD clustertemplates.management.cattle.io to become available 2021/01/17 09:27:45 [INFO] Watching metadata for management.cattle.io/v3, Kind=ClusterAlertGroup 2021/01/17 09:27:45 [INFO] Watching metadata for management.cattle.io/v3, Kind=SamlToken 2021/01/17 09:27:45 [INFO] Watching metadata for management.cattle.io/v3, Kind=ClusterTemplateRevision 2021/01/17 09:27:45 [INFO] Watching metadata for management.cattle.io/v3, Kind=GlobalRole 2021/01/17 09:27:45 [INFO] Watching metadata for management.cattle.io/v3, Kind=RkeK8sServiceOption 2021/01/17 09:27:45 [INFO] Watching metadata for management.cattle.io/v3, Kind=ProjectAlert 2021/01/17 09:27:45 [INFO] Watching metadata for management.cattle.io/v3, Kind=ComposeConfig 2021/01/17 09:27:45 [INFO] Watching metadata for management.cattle.io/v3, Kind=MonitorMetric 2021/01/17 
09:27:45 [INFO] Watching metadata for management.cattle.io/v3, Kind=ClusterRegistrationToken 2021/01/17 09:27:45 [INFO] Watching metadata for management.cattle.io/v3, Kind=RoleTemplate 2021/01/17 09:27:45 [INFO] Watching metadata for management.cattle.io/v3, Kind=GlobalDns 2021/01/17 09:27:45 [INFO] Watching metadata for management.cattle.io/v3, Kind=MultiClusterAppRevision 2021/01/17 09:27:45 [INFO] Watching metadata for management.cattle.io/v3, Kind=NodeTemplate 2021/01/17 09:27:45 [INFO] Watching metadata for management.cattle.io/v3, Kind=RkeK8sSystemImage 2021/01/17 09:27:45 [INFO] Watching metadata for management.cattle.io/v3, Kind=ClusterMonitorGraph 2021/01/17 09:27:45 [INFO] Watching metadata for project.cattle.io/v3, Kind=SourceCodeRepository 2021/01/17 09:27:45 [INFO] Watching metadata for management.cattle.io/v3, Kind=Template 2021/01/17 09:27:45 [INFO] Watching metadata for management.cattle.io/v3, Kind=ProjectCatalog 2021/01/17 09:27:45 [INFO] Watching metadata for management.cattle.io/v3, Kind=PodSecurityPolicyTemplateProjectBinding 2021/01/17 09:27:45 [INFO] Watching metadata for management.cattle.io/v3, Kind=ClusterScan 2021/01/17 09:27:45 [INFO] Watching metadata for management.cattle.io/v3, Kind=Notifier 2021/01/17 09:27:45 [INFO] Watching metadata for management.cattle.io/v3, Kind=Project 2021/01/17 09:27:45 [INFO] Watching metadata for management.cattle.io/v3, Kind=CisBenchmarkVersion 2021/01/17 09:27:45 [INFO] Watching metadata for management.cattle.io/v3, Kind=ClusterTemplate 2021/01/17 09:27:45 [INFO] Watching metadata for project.cattle.io/v3, Kind=SourceCodeProviderConfig 2021/01/17 09:27:45 [INFO] Watching metadata for management.cattle.io/v3, Kind=ClusterRoleTemplateBinding 2021/01/17 09:27:45 [INFO] Watching metadata for management.cattle.io/v3, Kind=ClusterCatalog 2021/01/17 09:27:45 [INFO] Watching metadata for management.cattle.io/v3, Kind=ClusterLogging 2021/01/17 09:27:45 [INFO] Watching metadata for project.cattle.io/v3, Kind=SourceCodeCredential 2021/01/17 09:27:45 [INFO] Watching metadata for management.cattle.io/v3, Kind=NodeDriver 2021/01/17 09:27:45 [INFO] Watching metadata for project.cattle.io/v3, Kind=Pipeline 2021/01/17 09:27:45 [INFO] Watching metadata for management.cattle.io/v3, Kind=NodePool 2021/01/17 09:27:45 [INFO] Watching metadata for management.cattle.io/v3, Kind=Node 2021/01/17 09:27:45 [INFO] Watching metadata for management.cattle.io/v3, Kind=RkeAddon 2021/01/17 09:27:45 [INFO] Watching metadata for management.cattle.io/v3, Kind=ClusterAlert 2021/01/17 09:27:45 [INFO] Watching metadata for management.cattle.io/v3, Kind=ProjectAlertGroup 2021/01/17 09:27:45 [INFO] Watching metadata for management.cattle.io/v3, Kind=TemplateContent 2021/01/17 09:27:45 [INFO] Watching metadata for management.cattle.io/v3, Kind=GlobalDnsProvider 2021/01/17 09:27:45 [INFO] Watching metadata for management.cattle.io/v3, Kind=MultiClusterApp 2021/01/17 09:27:45 [INFO] Watching metadata for management.cattle.io/v3, Kind=EtcdBackup 2021/01/17 09:27:45 [INFO] Watching metadata for management.cattle.io/v3, Kind=ProjectMonitorGraph 2021/01/17 09:27:45 [INFO] Watching metadata for management.cattle.io/v3, Kind=PodSecurityPolicyTemplate 2021/01/17 09:27:45 [INFO] Watching metadata for management.cattle.io/v3, Kind=DynamicSchema 2021/01/17 09:27:45 [INFO] Watching metadata for management.cattle.io/v3, Kind=TemplateVersion 2021/01/17 09:27:45 [INFO] Watching metadata for management.cattle.io/v3, Kind=KontainerDriver 2021/01/17 09:27:45 [INFO] Watching metadata for 
management.cattle.io/v3, Kind=ProjectRoleTemplateBinding 2021/01/17 09:27:45 [INFO] Watching metadata for management.cattle.io/v3, Kind=ClusterAlertRule 2021/01/17 09:27:45 [INFO] Watching metadata for management.cattle.io/v3, Kind=ProjectNetworkPolicy 2021/01/17 09:27:45 [INFO] Watching metadata for management.cattle.io/v3, Kind=GlobalRoleBinding 2021/01/17 09:27:45 [INFO] Watching metadata for management.cattle.io/v3, Kind=ProjectLogging 2021/01/17 09:27:45 [INFO] Watching metadata for management.cattle.io/v3, Kind=CisConfig 2021/01/17 09:27:45 [INFO] Watching metadata for management.cattle.io/v3, Kind=ProjectAlertRule 2021/01/17 09:27:46 [INFO] Done waiting for CRD clustertemplates.management.cattle.io to become available 2021/01/17 09:27:46 [INFO] Waiting for CRD cisbenchmarkversions.management.cattle.io to become available 2021/01/17 09:27:46 [INFO] Done waiting for CRD cisbenchmarkversions.management.cattle.io to become available 2021/01/17 09:27:46 [INFO] Waiting for CRD roletemplates.management.cattle.io to become available 2021/01/17 09:27:47 [INFO] Done waiting for CRD roletemplates.management.cattle.io to become available 2021/01/17 09:27:47 [INFO] Waiting for CRD projectnetworkpolicies.management.cattle.io to become available time="2021-01-17T09:27:47.528900319Z" level=info msg="waiting for node local-node CIDR not assigned yet" 2021/01/17 09:27:47 [INFO] Done waiting for CRD projectnetworkpolicies.management.cattle.io to become available 2021/01/17 09:27:47 [INFO] Waiting for CRD projects.management.cattle.io to become available 2021/01/17 09:27:48 [INFO] Done waiting for CRD projects.management.cattle.io to become available 2021/01/17 09:27:48 [INFO] Waiting for CRD templates.management.cattle.io to become available 2021/01/17 09:27:48 [INFO] Done waiting for CRD templates.management.cattle.io to become available 2021/01/17 09:27:48 [INFO] Waiting for CRD cisconfigs.management.cattle.io to become available 2021/01/17 09:27:49 [INFO] Done waiting for CRD cisconfigs.management.cattle.io to become available 2021/01/17 09:27:49 [INFO] Waiting for CRD rkek8sserviceoptions.management.cattle.io to become available time="2021-01-17T09:27:49.531550022Z" level=info msg="waiting for node local-node CIDR not assigned yet" 2021/01/17 09:27:49 [INFO] Done waiting for CRD rkek8sserviceoptions.management.cattle.io to become available 2021/01/17 09:27:49 [INFO] Waiting for CRD rkeaddons.management.cattle.io to become available 2021/01/17 09:27:50 [INFO] Done waiting for CRD rkeaddons.management.cattle.io to become available 2021/01/17 09:27:50 [INFO] Waiting for CRD templateversions.management.cattle.io to become available 2021/01/17 09:27:50 [INFO] Done waiting for CRD templateversions.management.cattle.io to become available 2021/01/17 09:27:50 [INFO] Waiting for CRD templatecontents.management.cattle.io to become available 2021/01/17 09:27:51 [INFO] Done waiting for CRD templatecontents.management.cattle.io to become available 2021/01/17 09:27:51 [INFO] Waiting for CRD globaldnses.management.cattle.io to become available I0117 09:27:51.247411 32 range_allocator.go:82] Sending events to api server. I0117 09:27:51.247587 32 range_allocator.go:110] No Service CIDR provided. Skipping filtering out service addresses. I0117 09:27:51.247617 32 range_allocator.go:116] No Secondary Service CIDR provided. Skipping filtering out secondary service addresses. 
I0117 09:27:51.247655 32 controllermanager.go:533] Started "nodeipam" I0117 09:27:51.247737 32 node_ipam_controller.go:162] Starting ipam controller I0117 09:27:51.247757 32 shared_informer.go:223] Waiting for caches to sync for node I0117 09:27:51.255553 32 controllermanager.go:533] Started "podgc" I0117 09:27:51.255581 32 gc_controller.go:89] Starting GC controller I0117 09:27:51.255595 32 shared_informer.go:223] Waiting for caches to sync for GC I0117 09:27:51.279601 32 controllermanager.go:533] Started "horizontalpodautoscaling" W0117 09:27:51.279622 32 controllermanager.go:512] "tokencleaner" is disabled I0117 09:27:51.279628 32 horizontal.go:169] Starting HPA controller I0117 09:27:51.279643 32 shared_informer.go:223] Waiting for caches to sync for HPA I0117 09:27:51.280692 32 shared_informer.go:223] Waiting for caches to sync for garbage collector I0117 09:27:51.287143 32 shared_informer.go:223] Waiting for caches to sync for resource quota W0117 09:27:51.313626 32 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="local-node" does not exist I0117 09:27:51.320868 32 shared_informer.go:230] Caches are synced for certificate-csrapproving I0117 09:27:51.326767 32 shared_informer.go:230] Caches are synced for service account I0117 09:27:51.333125 32 shared_informer.go:230] Caches are synced for certificate-csrsigning I0117 09:27:51.349676 32 shared_informer.go:230] Caches are synced for node I0117 09:27:51.349715 32 range_allocator.go:172] Starting range CIDR allocator I0117 09:27:51.349721 32 shared_informer.go:223] Waiting for caches to sync for cidrallocator I0117 09:27:51.349727 32 shared_informer.go:230] Caches are synced for cidrallocator I0117 09:27:51.354382 32 shared_informer.go:230] Caches are synced for expand I0117 09:27:51.370312 32 shared_informer.go:230] Caches are synced for ClusterRoleAggregator I0117 09:27:51.371421 32 range_allocator.go:373] Set node local-node PodCIDR to [10.42.0.0/24] I0117 09:27:51.387647 32 shared_informer.go:230] Caches are synced for PV protection I0117 09:27:51.403197 32 shared_informer.go:230] Caches are synced for namespace E0117 09:27:51.414131 32 clusterroleaggregation_controller.go:181] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again I0117 09:27:51.414371 32 shared_informer.go:230] Caches are synced for TTL E0117 09:27:51.420078 32 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again E0117 09:27:51.420355 32 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again I0117 09:27:51.424970 32 kuberuntime_manager.go:978] updating runtime config through cri with podcidr 10.42.0.0/24 I0117 09:27:51.425567 32 kubelet_network.go:77] Setting Pod CIDR: -> 10.42.0.0/24 I0117 09:27:51.533979 32 flannel.go:92] Determining IP address of default interface I0117 09:27:51.534285 32 flannel.go:105] Using interface with name eth0 and address 172.17.0.2 I0117 09:27:51.539780 32 kube.go:117] Waiting 10m0s for node controller to sync I0117 09:27:51.539810 32 
kube.go:300] Starting kube subnet manager time="2021-01-17T09:27:51.548080718Z" level=info msg="labels have been set successfully on node: local-node" I0117 09:27:51.566142 32 network_policy_controller.go:149] Starting network policy controller 2021/01/17 09:27:51 [INFO] Done waiting for CRD globaldnses.management.cattle.io to become available 2021/01/17 09:27:51 [INFO] Waiting for CRD rkek8ssystemimages.management.cattle.io to become available I0117 09:27:51.758632 32 shared_informer.go:230] Caches are synced for GC I0117 09:27:51.770606 32 shared_informer.go:230] Caches are synced for taint I0117 09:27:51.771421 32 node_lifecycle_controller.go:1433] Initializing eviction metric for zone: W0117 09:27:51.771539 32 node_lifecycle_controller.go:1048] Missing timestamp for Node local-node. Assuming now as a timestamp. I0117 09:27:51.771586 32 node_lifecycle_controller.go:1249] Controller detected that zone is now in state Normal. I0117 09:27:51.772178 32 shared_informer.go:230] Caches are synced for endpoint_slice I0117 09:27:51.772275 32 taint_manager.go:187] Starting NoExecuteTaintManager I0117 09:27:51.773971 32 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"local-node", UID:"a8d67fb4-7246-4ea6-8621-40d88bb6b6df", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node local-node event: Registered Node local-node in Controller I0117 09:27:51.775442 32 shared_informer.go:230] Caches are synced for PVC protection I0117 09:27:51.790784 32 shared_informer.go:230] Caches are synced for daemon sets I0117 09:27:51.790865 32 shared_informer.go:230] Caches are synced for HPA I0117 09:27:51.791396 32 shared_informer.go:230] Caches are synced for job I0117 09:27:51.800503 32 shared_informer.go:230] Caches are synced for endpoint I0117 09:27:51.802605 32 shared_informer.go:230] Caches are synced for deployment I0117 09:27:51.812431 32 shared_informer.go:230] Caches are synced for stateful set I0117 09:27:51.817702 32 shared_informer.go:230] Caches are synced for disruption I0117 09:27:51.817727 32 disruption.go:339] Sending events to api server. I0117 09:27:51.822686 32 shared_informer.go:230] Caches are synced for ReplicationController I0117 09:27:51.823058 32 shared_informer.go:230] Caches are synced for attach detach I0117 09:27:51.844404 32 shared_informer.go:230] Caches are synced for persistent volume I0117 09:27:51.846986 32 controller.go:606] quota admission added evaluator for: replicasets.apps I0117 09:27:51.849182 32 shared_informer.go:230] Caches are synced for ReplicaSet I0117 09:27:51.851721 32 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"1d045030-8202-40e5-8a20-ab98ea96b220", APIVersion:"apps/v1", ResourceVersion:"207", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-7944c66d8d to 1 I0117 09:27:51.893027 32 shared_informer.go:230] Caches are synced for resource quota I0117 09:27:51.932487 32 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-7944c66d8d", UID:"636a0c85-f1e0-4f14-92f2-7e5e4966a02b", APIVersion:"apps/v1", ResourceVersion:"687", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-7944c66d8d-5n8kk I0117 09:27:51.936893 32 shared_informer.go:230] Caches are synced for garbage collector I0117 09:27:51.936909 32 garbagecollector.go:142] Garbage collector: all resource monitors have synced. 
Proceeding to collect garbage I0117 09:27:51.945502 32 topology_manager.go:233] [topologymanager] Topology Admit Handler I0117 09:27:51.949750 32 controller.go:606] quota admission added evaluator for: events.events.k8s.io I0117 09:27:51.979143 32 shared_informer.go:230] Caches are synced for resource quota I0117 09:27:51.981183 32 shared_informer.go:230] Caches are synced for garbage collector I0117 09:27:52.032678 32 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-vvbsh" (UniqueName: "kubernetes.io/secret/b9937baf-8d7c-49b8-a4f3-3772ac01b16c-coredns-token-vvbsh") pod "coredns-7944c66d8d-5n8kk" (UID: "b9937baf-8d7c-49b8-a4f3-3772ac01b16c") I0117 09:27:52.032713 32 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b9937baf-8d7c-49b8-a4f3-3772ac01b16c-config-volume") pod "coredns-7944c66d8d-5n8kk" (UID: "b9937baf-8d7c-49b8-a4f3-3772ac01b16c") 2021/01/17 09:27:52 [INFO] Done waiting for CRD rkek8ssystemimages.management.cattle.io to become available 2021/01/17 09:27:52 [INFO] Waiting for CRD projectmonitorgraphs.management.cattle.io to become available E0117 09:27:52.286841 32 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "7aa10af07248b68ac9dea26d4a29299bf8612413bd82bb3b69a8f489e89f75e1": open /run/flannel/subnet.env: no such file or directory E0117 09:27:52.286902 32 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "coredns-7944c66d8d-5n8kk_kube-system(b9937baf-8d7c-49b8-a4f3-3772ac01b16c)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "7aa10af07248b68ac9dea26d4a29299bf8612413bd82bb3b69a8f489e89f75e1": open /run/flannel/subnet.env: no such file or directory E0117 09:27:52.286919 32 kuberuntime_manager.go:727] createPodSandbox for pod "coredns-7944c66d8d-5n8kk_kube-system(b9937baf-8d7c-49b8-a4f3-3772ac01b16c)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "7aa10af07248b68ac9dea26d4a29299bf8612413bd82bb3b69a8f489e89f75e1": open /run/flannel/subnet.env: no such file or directory E0117 09:27:52.286973 32 pod_workers.go:191] Error syncing pod b9937baf-8d7c-49b8-a4f3-3772ac01b16c ("coredns-7944c66d8d-5n8kk_kube-system(b9937baf-8d7c-49b8-a4f3-3772ac01b16c)"), skipping: failed to "CreatePodSandbox" for "coredns-7944c66d8d-5n8kk_kube-system(b9937baf-8d7c-49b8-a4f3-3772ac01b16c)" with CreatePodSandboxError: "CreatePodSandbox for pod \"coredns-7944c66d8d-5n8kk_kube-system(b9937baf-8d7c-49b8-a4f3-3772ac01b16c)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"7aa10af07248b68ac9dea26d4a29299bf8612413bd82bb3b69a8f489e89f75e1\": open /run/flannel/subnet.env: no such file or directory" I0117 09:27:52.540074 32 kube.go:124] Node controller sync successful I0117 09:27:52.540155 32 vxlan.go:121] VXLAN config: VNI=1 Port=0 GBP=false Learning=false DirectRouting=false I0117 09:27:52.559911 32 flannel.go:78] Wrote subnet file to /run/flannel/subnet.env I0117 09:27:52.559926 32 flannel.go:82] Running backend. I0117 09:27:52.559932 32 vxlan_network.go:60] watching for new subnet leases I0117 09:27:52.561972 32 iptables.go:145] Some iptables rules are missing; deleting and recreating rules I0117 09:27:52.561985 32 iptables.go:167] Deleting iptables rule: -s 10.42.0.0/16 -d 10.42.0.0/16 -j RETURN I0117 09:27:52.568987 32 iptables.go:167] Deleting iptables rule: -s 10.42.0.0/16 ! 
-d 224.0.0.0/4 -j MASQUERADE --random-fully I0117 09:27:52.572197 32 iptables.go:145] Some iptables rules are missing; deleting and recreating rules I0117 09:27:52.572220 32 iptables.go:167] Deleting iptables rule: -s 10.42.0.0/16 -j ACCEPT I0117 09:27:52.572822 32 iptables.go:167] Deleting iptables rule: ! -s 10.42.0.0/16 -d 10.42.0.0/24 -j RETURN I0117 09:27:52.576082 32 iptables.go:167] Deleting iptables rule: ! -s 10.42.0.0/16 -d 10.42.0.0/16 -j MASQUERADE --random-fully I0117 09:27:52.581262 32 iptables.go:155] Adding iptables rule: -s 10.42.0.0/16 -d 10.42.0.0/16 -j RETURN I0117 09:27:52.583309 32 iptables.go:167] Deleting iptables rule: -d 10.42.0.0/16 -j ACCEPT I0117 09:27:52.586701 32 iptables.go:155] Adding iptables rule: -s 10.42.0.0/16 -j ACCEPT I0117 09:27:52.586836 32 iptables.go:155] Adding iptables rule: -s 10.42.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE --random-fully I0117 09:27:52.592552 32 iptables.go:155] Adding iptables rule: -d 10.42.0.0/16 -j ACCEPT I0117 09:27:52.600577 32 iptables.go:155] Adding iptables rule: ! -s 10.42.0.0/16 -d 10.42.0.0/24 -j RETURN I0117 09:27:52.609645 32 iptables.go:155] Adding iptables rule: ! -s 10.42.0.0/16 -d 10.42.0.0/16 -j MASQUERADE --random-fully 2021/01/17 09:27:52 [INFO] Done waiting for CRD projectmonitorgraphs.management.cattle.io to become available 2021/01/17 09:27:52 [INFO] Waiting for CRD projectloggings.management.cattle.io to become available 2021/01/17 09:27:53 [INFO] Done waiting for CRD projectloggings.management.cattle.io to become available 2021/01/17 09:27:53 [INFO] Waiting for CRD projectalertrules.management.cattle.io to become available 2021/01/17 09:27:53 [INFO] Done waiting for CRD projectalertrules.management.cattle.io to become available 2021-01-17 09:27:59.530590 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/kube-scheduler\" " with result "range_response_count:1 size:568" took too long (142.146037ms) to execute 2021-01-17 09:27:59.784946 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/cloud-controller-manager\" " with result "range_response_count:1 size:588" took too long (221.437363ms) to execute 2021-01-17 09:28:04.245442 W | etcdserver: request "header:<ID:7587851856377224217 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/local-node\" mod_revision:682 > success:<request_put:<key:\"/registry/leases/kube-node-lease/local-node\" value_size:538 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/local-node\" > >>" with result "size:16" took too long (2.393444207s) to execute 2021-01-17 09:28:04.245600 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/kube-scheduler\" " with result "range_response_count:1 size:568" took too long (2.663158493s) to execute I0117 09:28:04.245817 32 trace.go:116] Trace[1368955362]: "GuaranteedUpdate etcd3" type:*coordination.Lease (started: 2021-01-17 09:28:01.495620386 +0000 UTC m=+31.109473208) (total time: 2.750156266s): Trace[1368955362]: [2.750116961s] [2.749663619s] Transaction committed I0117 09:28:04.245954 32 trace.go:116] Trace[1011213204]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/local-node,user-agent:k3s/v1.18.8+k3s1 (linux/amd64) kubernetes/6b59531,client:127.0.0.1 (started: 2021-01-17 09:28:01.495456436 +0000 UTC m=+31.109309221) (total time: 2.750474181s): Trace[1011213204]: [2.75041124s] [2.750283736s] Object stored in database I0117 09:28:04.246136 32 trace.go:116] Trace[777238912]: 
"Get" url:/api/v1/namespaces/kube-system/endpoints/kube-scheduler,user-agent:k3s/v1.18.8+k3s1 (linux/amd64) kubernetes/6b59531/leader-election,client:127.0.0.1 (started: 2021-01-17 09:28:01.582164635 +0000 UTC m=+31.196017442) (total time: 2.663947485s): Trace[777238912]: [2.663903764s] [2.663884301s] About to write a response 2021-01-17 09:28:04.246783 W | wal: sync duration of 2.394986744s, expected less than 1s 2021-01-17 09:28:04.246927 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:586" took too long (2.458920881s) to execute 2021-01-17 09:28:04.246954 W | etcdserver: read-only range request "key:\"/registry/configmaps/kube-system/k3s\" " with result "range_response_count:1 size:501" took too long (1.013737404s) to execute 2021-01-17 09:28:04.247013 W | etcdserver: read-only range request "key:\"/registry/configmaps/kube-system/cattle-controllers\" " with result "range_response_count:1 size:535" took too long (1.87935666s) to execute 2021-01-17 09:28:04.247079 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/cloud-controller-manager\" " with result "range_response_count:1 size:588" took too long (2.408634928s) to execute I0117 09:28:04.247224 32 trace.go:116] Trace[439781376]: "Get" url:/api/v1/namespaces/kube-system/endpoints/kube-controller-manager,user-agent:k3s/v1.18.8+k3s1 (linux/amd64) kubernetes/6b59531/leader-election,client:127.0.0.1 (started: 2021-01-17 09:28:01.787624128 +0000 UTC m=+31.401476931) (total time: 2.459565675s): Trace[439781376]: [2.459538989s] [2.459512285s] About to write a response I0117 09:28:04.247635 32 trace.go:116] Trace[105367488]: "Get" url:/api/v1/namespaces/kube-system/configmaps/cattle-controllers,user-agent:rancher/v0.0.0 (linux/amd64) kubernetes/$Format,client:127.0.0.1 (started: 2021-01-17 09:28:02.367327123 +0000 UTC m=+31.981179923) (total time: 1.880287978s): Trace[105367488]: [1.880231577s] [1.880206896s] About to write a response I0117 09:28:04.247837 32 trace.go:116] Trace[452295188]: "Get" url:/api/v1/namespaces/kube-system/configmaps/k3s,user-agent:k3s/v1.18.8+k3s1 (linux/amd64) kubernetes/6b59531,client:127.0.0.1 (started: 2021-01-17 09:28:03.232884655 +0000 UTC m=+32.846737462) (total time: 1.014935193s): Trace[452295188]: [1.014901526s] [1.014893108s] About to write a response I0117 09:28:04.248043 32 trace.go:116] Trace[924872350]: "Get" url:/api/v1/namespaces/kube-system/endpoints/cloud-controller-manager,user-agent:k3s/v1.18.8+k3s1 (linux/amd64) kubernetes/6b59531/leader-election,client:127.0.0.1 (started: 2021-01-17 09:28:01.838174078 +0000 UTC m=+31.452026880) (total time: 2.409844068s): Trace[924872350]: [2.409814345s] [2.409805598s] About to write a response 2021-01-17 09:28:09.992317 W | wal: sync duration of 5.742399712s, expected less than 1s E0117 09:28:11.582015 32 leaderelection.go:320] error retrieving resource lock kube-system/kube-scheduler: Get https://127.0.0.1:6444/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler?timeout=10s: context deadline exceeded I0117 09:28:11.582065 32 leaderelection.go:277] failed to renew lease kube-system/kube-scheduler: timed out waiting for the condition F0117 09:28:11.582078 32 server.go:244] leaderelection lost 2021-01-17 09:28:11.925942 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/cloud-controller-manager\" " with result "error:context canceled" took too long (7.675687774s) to execute 
WARNING: 2021/01/17 09:28:11 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing" 2021-01-17 09:28:11.926031 W | etcdserver: read-only range request "key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" limit:500 " with result "error:context canceled" took too long (4.822845139s) to execute WARNING: 2021/01/17 09:28:11 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing" 2021-01-17 09:28:11.926490 W | etcdserver: read-only range request "key:\"/registry/configmaps/kube-system/k3s\" " with result "error:context canceled" took too long (5.674207376s) to execute 2021-01-17 09:28:11.926522 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "error:context canceled" took too long (5.7453937s) to execute WARNING: 2021/01/17 09:28:11 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing" WARNING: 2021/01/17 09:28:11 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing" WARNING: 2021/01/17 09:28:11 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing" WARNING: 2021/01/17 09:28:11 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing" 2021/01/17 09:28:11 [FATAL] k3s exited with: exit status 255

Sometimes it stays up longer, but then it crashes again. If I try to create a cluster and run the command given by Rancher, the rancher agent gets stuck at "Pre-pulling Kubernetes images" and then k3s exits with the same error. I read some other reports of this problem, and it looks like it may be disk-related.
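For anyone reproducing this, the crash loop itself is easy to confirm from Docker before digging into the logs (a sketch; rancher here is a hypothetical container name, substitute your actual container ID):

# restart count, last exit code, and whether the kernel OOM-killed it
docker inspect -f '{{.RestartCount}} {{.State.ExitCode}} {{.State.OOMKilled}}' rancher
# follow the logs across restarts
docker logs --tail 100 -f rancher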

I just ran fio to check whether etcd can work on this machine:

fsync/fdatasync/sync_file_range:
  sync (usec): min=952, max=768535, avg=2732.20, stdev=7767.24
  sync percentiles (usec):
   |  1.00th=[ 1045],  5.00th=[ 1123], 10.00th=[ 1156], 20.00th=[ 1237],
   | 30.00th=[ 1319], 40.00th=[ 1483], 50.00th=[ 3326], 60.00th=[ 3458],
   | 70.00th=[ 3556], 80.00th=[ 3687], 90.00th=[ 3884], 95.00th=[ 4113],
   | 99.00th=[ 5604], 99.50th=[ 6718], 99.90th=[10159], 99.95th=[15008],
   | 99.99th=[29492]
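For anyone wanting to run the same check: a fio invocation along these lines produces the fsync report above. The flags follow the usual etcd disk-benchmark guidance (22 MiB in 2300-byte writes, fdatasync after each, mimicking etcd's WAL); I didn't post my exact command, so treat this as a sketch. As a rough yardstick, etcd wants the 99th-percentile fdatasync latency under about 10 ms.

# writes 22 MiB in 2300-byte chunks, calling fdatasync after every write
fio --rw=write --ioengine=sync --fdatasync=1 --directory=test-data --size=22m --bs=2300 --name=etcd-disk-check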

Thank you

Wow, how disheartening to see you didn't even get a single response since January. I'm hitting a similar issue; did you ever determine the cause? Thanks.

All the timing warnings at the bottom basically mean the disk is too slow for etcd; it's probably a spinning-platter HDD instead of an SSD, or a severely oversubscribed shared VPS node.
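One quick way to check what the VPS is actually backed by is whether the kernel reports the device as rotational, though hypervisors don't always report this truthfully:

# ROTA=1 means a spinning platter, ROTA=0 an SSD
lsblk -d -o NAME,ROTA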

Hi!

Same problem on a laptop with an Intel i7 processor, 16 GB of RAM, and a 250 GB Samsung 960 EVO NVMe SSD.

Last lines of the log:

2021/04/21 00:10:25 [INFO] Rancher version v2.5.7 (c824d91cd) is starting
2021/04/21 00:10:25 [INFO] Rancher arguments {ACMEDomains:[] AddLocal:true Embedded:false BindHost: HTTPListenPort:80 HTTPSListenPort:443 K8sMode:auto Debug:false Trace:false NoCACerts:false AuditLogPath:/var/log/auditlog/rancher-api-audit.log AuditLogMaxage:10 AuditLogMaxsize:100 AuditLogMaxbackup:10 AuditLevel:0 Agent:false Features:}
2021/04/21 00:10:25 [INFO] Listening on /tmp/log.sock
2021/04/21 00:10:25 [INFO] Running etcd --data-dir=management-state/etcd --heartbeat-interval=500 --election-timeout=5000
2021-04-21 00:10:25.895125 W | pkg/flags: unrecognized environment variable ETCD_URL_arm64=https://github.com/etcd-io/etcd/releases/download/v3.4.3/etcd-v3.4.3-linux-arm64.tar.gz
2021-04-21 00:10:25.895144 W | pkg/flags: unrecognized environment variable ETCD_URL_amd64=https://github.com/etcd-io/etcd/releases/download/v3.4.3/etcd-v3.4.3-linux-amd64.tar.gz
2021-04-21 00:10:25.895147 W | pkg/flags: unrecognized environment variable ETCD_UNSUPPORTED_ARCH=amd64
2021-04-21 00:10:25.895149 W | pkg/flags: unrecognized environment variable ETCD_URL=ETCD_URL_amd64
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2021-04-21 00:10:25.895169 I | etcdmain: etcd Version: 3.4.3
2021-04-21 00:10:25.895173 I | etcdmain: Git SHA: 3cf2f69b5
2021-04-21 00:10:25.895175 I | etcdmain: Go Version: go1.12.12
2021-04-21 00:10:25.895177 I | etcdmain: Go OS/Arch: linux/amd64
2021-04-21 00:10:25.895180 I | etcdmain: setting maximum number of CPUs to 8, total number of available CPUs is 8
2021-04-21 00:10:25.895221 N | etcdmain: the server is already initialized as member before, starting as etcd member...
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2021-04-21 00:10:25.895525 I | embed: name = default
2021-04-21 00:10:25.895531 I | embed: data dir = management-state/etcd
2021-04-21 00:10:25.895534 I | embed: member dir = management-state/etcd/member
2021-04-21 00:10:25.895536 I | embed: heartbeat = 500ms
2021-04-21 00:10:25.895538 I | embed: election = 5000ms
2021-04-21 00:10:25.895543 I | embed: snapshot count = 100000
2021-04-21 00:10:25.895548 I | embed: advertise client URLs = http://localhost:2379
2021-04-21 00:10:25.895552 I | embed: initial advertise peer URLs = http://localhost:2380
2021-04-21 00:10:25.895557 I | embed: initial cluster =
2021-04-21 00:10:25.898367 I | etcdserver: restarting member 8e9e05c52164694d in cluster cdf818194e3a8c32 at commit index 1124
raft2021/04/21 00:10:25 INFO: 8e9e05c52164694d switched to configuration voters=()
raft2021/04/21 00:10:25 INFO: 8e9e05c52164694d became follower at term 100
raft2021/04/21 00:10:25 INFO: newRaft 8e9e05c52164694d [peers: [], term: 100, commit: 1124, applied: 0, lastindex: 1124, lastterm: 100]
2021-04-21 00:10:25.904005 W | auth: simple token is not cryptographically signed
2021-04-21 00:10:25.906104 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
raft2021/04/21 00:10:25 INFO: 8e9e05c52164694d switched to configuration voters=(10276657743932975437)
2021-04-21 00:10:25.906540 I | etcdserver/membership: added member 8e9e05c52164694d [http://localhost:2380] to cluster cdf818194e3a8c32
2021-04-21 00:10:25.906608 N | etcdserver/membership: set the initial cluster version to 3.4
2021-04-21 00:10:25.906644 I | etcdserver/api: enabled capabilities for version 3.4
2021-04-21 00:10:25.908403 I | embed: listening for peers on 127.0.0.1:2380
raft2021/04/21 00:10:34 INFO: 8e9e05c52164694d is starting a new election at term 100
raft2021/04/21 00:10:34 INFO: 8e9e05c52164694d became candidate at term 101
raft2021/04/21 00:10:34 INFO: 8e9e05c52164694d received MsgVoteResp from 8e9e05c52164694d at term 101
raft2021/04/21 00:10:34 INFO: 8e9e05c52164694d became leader at term 101
raft2021/04/21 00:10:34 INFO: raft.node: 8e9e05c52164694d elected leader 8e9e05c52164694d at term 101
2021-04-21 00:10:34.406694 I | etcdserver: published {Name:default ClientURLs:[http://localhost:2379]} to cluster cdf818194e3a8c32
2021-04-21 00:10:34.406716 I | embed: ready to serve client requests
2021-04-21 00:10:34.407211 N | embed: serving insecure client requests on 127.0.0.1:2379, this is strongly discouraged!
2021/04/21 00:10:34 [INFO] Waiting for server to become available: Get "https://127.0.0.1:6443/version?timeout=15m0s": dial tcp 127.0.0.1:6443: connect: connection refused
time="2021-04-21T00:10:34.546759503Z" level=info msg="Starting k3s v1.18.8+k3s1 (6b595318)"
time="2021-04-21T00:10:34.547005534Z" level=info msg="Cluster bootstrap already complete"
time="2021-04-21T00:10:34.556996692Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=unknown --authorization-mode=Node,RBAC --basic-auth-file=/var/lib/rancher/k3s/server/cred/passwd --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=http://localhost:2379 --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=k3s --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
Flag --basic-auth-file has been deprecated, Basic authentication mode is deprecated and will be removed in a future release. It is not recommended for production environments.
I0421 00:10:34.557500      40 server.go:645] external host was not specified, using 172.17.0.2
I0421 00:10:34.557723      40 server.go:162] Version: v1.18.8+k3s1
I0421 00:10:34.560482      40 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0421 00:10:34.560490      40 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I0421 00:10:34.561205      40 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0421 00:10:34.561212      40 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I0421 00:10:34.576140      40 master.go:270] Using reconciler: lease
I0421 00:10:34.588728      40 rest.go:113] the default service ipfamily for this cluster is: IPv4
W0421 00:10:34.808383      40 genericapiserver.go:409] Skipping API batch/v2alpha1 because it has no resources.
W0421 00:10:34.815795      40 genericapiserver.go:409] Skipping API discovery.k8s.io/v1alpha1 because it has no resources.
W0421 00:10:34.824179      40 genericapiserver.go:409] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W0421 00:10:34.837126      40 genericapiserver.go:409] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0421 00:10:34.839878      40 genericapiserver.go:409] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0421 00:10:34.850455      40 genericapiserver.go:409] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0421 00:10:34.863961      40 genericapiserver.go:409] Skipping API apps/v1beta2 because it has no resources.
W0421 00:10:34.863978      40 genericapiserver.go:409] Skipping API apps/v1beta1 because it has no resources.
I0421 00:10:34.870998      40 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0421 00:10:34.871032      40 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I0421 00:10:36.072031      40 dynamic_cafile_content.go:167] Starting request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt
I0421 00:10:36.072038      40 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt
I0421 00:10:36.072197      40 dynamic_serving_content.go:130] Starting serving-cert::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key
I0421 00:10:36.072594      40 secure_serving.go:178] Serving securely on 127.0.0.1:6444
I0421 00:10:36.072654      40 autoregister_controller.go:141] Starting autoregister controller
I0421 00:10:36.072659      40 cache.go:32] Waiting for caches to sync for autoregister controller
I0421 00:10:36.072675      40 tlsconfig.go:240] Starting DynamicServingCertificateController
I0421 00:10:36.072714      40 available_controller.go:387] Starting AvailableConditionController
I0421 00:10:36.072729      40 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0421 00:10:36.072845      40 naming_controller.go:291] Starting NamingConditionController
I0421 00:10:36.072921      40 crd_finalizer.go:266] Starting CRDFinalizer
I0421 00:10:36.072939      40 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0421 00:10:36.072974      40 controller.go:86] Starting OpenAPI controller
I0421 00:10:36.072847      40 nonstructuralschema_controller.go:186] Starting NonStructuralSchemaConditionController
I0421 00:10:36.072859      40 establishing_controller.go:76] Starting EstablishingController
I0421 00:10:36.072873      40 crdregistration_controller.go:111] Starting crd-autoregister controller
I0421 00:10:36.073081      40 shared_informer.go:223] Waiting for caches to sync for crd-autoregister
I0421 00:10:36.072994      40 customresource_discovery_controller.go:209] Starting DiscoveryController
I0421 00:10:36.073134      40 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0421 00:10:36.073143      40 shared_informer.go:223] Waiting for caches to sync for cluster_authentication_trust_controller
I0421 00:10:36.073236      40 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt
I0421 00:10:36.073263      40 dynamic_cafile_content.go:167] Starting request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt
I0421 00:10:36.074839      40 controller.go:81] Starting OpenAPI AggregationController
I0421 00:10:36.072685      40 apiservice_controller.go:94] Starting APIServiceRegistrationController
I0421 00:10:36.074864      40 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
E0421 00:10:36.093553      40 controller.go:156] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
I0421 00:10:36.172833      40 cache.go:39] Caches are synced for autoregister controller
I0421 00:10:36.172945      40 cache.go:39] Caches are synced for AvailableConditionController controller
I0421 00:10:36.173128      40 shared_informer.go:230] Caches are synced for crd-autoregister
I0421 00:10:36.173286      40 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller
I0421 00:10:36.174921      40 cache.go:39] Caches are synced for APIServiceRegistrationController controller
2021/04/21 00:10:36 [INFO] Waiting for server to become available: the server is currently unable to handle the request
I0421 00:10:37.072466      40 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0421 00:10:37.072523      40 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0421 00:10:37.075674      40 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
I0421 00:10:38.082221      40 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0421 00:10:38.082266      40 registry.go:150] Registering EvenPodsSpread predicate and priority function
time="2021-04-21T00:10:38.082894727Z" level=info msg="Running kube-scheduler --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --port=10251 --secure-port=0"
time="2021-04-21T00:10:38.083927293Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --port=10252 --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true"
I0421 00:10:38.089520      40 controllermanager.go:161] Version: v1.18.8+k3s1
time="2021-04-21T00:10:38.089906478Z" level=info msg="Running cloud-controller-manager --allocate-node-cidrs=true --allow-untagged-cloud=true --bind-address=127.0.0.1 --cloud-provider=k3s --cluster-cidr=10.42.0.0/16 --kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --node-status-update-frequency=1m --secure-port=0"
Flag --allow-untagged-cloud has been deprecated, This flag is deprecated and will be removed in a future release. A cluster-id will be required on cloud instances.
I0421 00:10:38.090173      40 deprecated_insecure_serving.go:53] Serving insecurely on [::]:10252
I0421 00:10:38.090203      40 leaderelection.go:242] attempting to acquire leader lease  kube-system/kube-controller-manager...
I0421 00:10:38.093056      40 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0421 00:10:38.093070      40 registry.go:150] Registering EvenPodsSpread predicate and priority function
W0421 00:10:38.094257      40 authorization.go:47] Authorization is disabled
W0421 00:10:38.094267      40 authentication.go:40] Authentication is disabled
I0421 00:10:38.094274      40 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I0421 00:10:38.095433      40 controllermanager.go:120] Version: v1.18.8+k3s1
W0421 00:10:38.095454      40 controllermanager.go:132] detected a cluster without a ClusterID.  A ClusterID will be required in the future.  Please tag your cluster to avoid any future issues
I0421 00:10:38.095481      40 leaderelection.go:242] attempting to acquire leader lease  kube-system/cloud-controller-manager...
time="2021-04-21T00:10:38.102596376Z" level=info msg="Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-1.81.0.tgz"
time="2021-04-21T00:10:38.102814519Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml"
time="2021-04-21T00:10:38.102919555Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/ccm.yaml"
time="2021-04-21T00:10:38.103020719Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/rolebindings.yaml"
I0421 00:10:38.194835      40 leaderelection.go:242] attempting to acquire leader lease  kube-system/kube-scheduler...
I0421 00:10:38.203553      40 leaderelection.go:242] attempting to acquire leader lease  kube-system/k3s...
time="2021-04-21T00:10:38.203526992Z" level=info msg="Starting k3s.cattle.io/v1, Kind=Addon controller"
time="2021-04-21T00:10:38.203604655Z" level=info msg="Waiting for master node  startup: resource name may not be empty"
time="2021-04-21T00:10:38.204207810Z" level=info msg="Node token is available at /var/lib/rancher/k3s/server/token"
time="2021-04-21T00:10:38.204279643Z" level=info msg="To join node to cluster: k3s agent -s https://172.17.0.2:6443 -t ${NODE_TOKEN}"
I0421 00:10:38.214572      40 leaderelection.go:252] successfully acquired lease kube-system/k3s
2021-04-21 00:10:38.226329 I | http: TLS handshake error from 127.0.0.1:53056: remote error: tls: bad certificate
time="2021-04-21T00:10:38.233024180Z" level=info msg="Wrote kubeconfig /etc/rancher/k3s/k3s.yaml"
time="2021-04-21T00:10:38.233041025Z" level=info msg="Run: k3s kubectl"
time="2021-04-21T00:10:38.233047651Z" level=info msg="k3s is up and running"
time="2021-04-21T00:10:38.233096121Z" level=warning msg="Failed to find cpuset cgroup, you may need to add \"cgroup_enable=cpuset\" to your linux cmdline (/boot/cmdline.txt on a Raspberry Pi)"
time="2021-04-21T00:10:38.233106080Z" level=error msg="Failed to find memory cgroup, you may need to add \"cgroup_memory=1 cgroup_enable=memory\" to your linux cmdline (/boot/cmdline.txt on a Raspberry Pi)"
time="2021-04-21T00:10:38.233115708Z" level=fatal msg="failed to find memory cgroup, you may need to add \"cgroup_memory=1 cgroup_enable=memory\" to your linux cmdline (/boot/cmdline.txt on a Raspberry Pi)"
2021/04/21 00:10:38 [FATAL] k3s exited with: exit status 1

I'm following the instructions at Rancher Docs: Manual Quick Start.

I'm also new to Kubernetes and Rancher and trying to learn.

Info from neofetch:

                   -`                    peracchi@nitro
                  .o+`                   --------------
                 `ooo/                   OS: Arch Linux x86_64
                `+oooo:                  Host: Nitro AN515-51 V1.22
               `+oooooo:                 Kernel: 5.11.15-arch1-2
               -+oooooo+:                Uptime: 26 mins
             `/:-:++oooo+:               Packages: 191 (pacman)
            `/++++/+++++++:              Shell: bash 5.1.4
           `/++++++++++++++:             Resolution: 1920x1080
          `/+++ooooooooooooo/`           Terminal: /dev/pts/0
         ./ooosssso++osssssso+`          CPU: Intel i7-7700HQ (8) @ 3.800GHz
        .oossssso-````/ossssss+`         GPU: NVIDIA GeForce GTX 1050 Ti Mobile
       -osssssso.      :ssssssso.        GPU: Intel HD Graphics 630
      :osssssss/        osssso+++.       Memory: 250MiB / 15889MiB
     /ossssssss/        +ssssooo/-
   `/ossssso+/:-        -:/+osssso+-
  `+sso+:-`                 `.-/+oso:
 `++:.                           `-/+/
 .`                                 `/

It's a bug; I found this: [BUG] Cluster fails to start on cgroup v2 · Issue #493 · rancher/k3d · GitHub

A temporary fix is to disable cgroup v2 and revert to cgroup v1.

On Arch Linux, set the following kernel parameter:

systemd.unified_cgroup_hierarchy=0
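Here's a sketch of applying that on Arch, assuming GRUB is the bootloader (other bootloaders keep kernel parameters elsewhere), plus a way to confirm which cgroup version is active before and after the reboot:

# cgroup2fs means cgroup v2 is mounted; tmpfs means the v1 hybrid layout
stat -fc %T /sys/fs/cgroup/
# append the parameter to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, e.g.:
#   GRUB_CMDLINE_LINUX_DEFAULT="loglevel=3 quiet systemd.unified_cgroup_hierarchy=0"
# then regenerate the config and reboot:
grub-mkconfig -o /boot/grub/grub.cfg
reboot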

Hi, I'm also running into this problem. How did you solve it? Thank you!

Same issue here, on a virtual server with SSD storage. Both on “stable” and on “latest”.