K3s on Raspberry Pi 4, Ubuntu 21.10 64-bit server

Installing with the standard link, I had a working cluster on 20.04, but when reflashing the drives I accidentally chose 21.10 Server in the Raspberry Pi Imager.
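
(For reference, by "the standard link" I mean the usual quick-start one-liner from the k3s docs, run as-is on the server node:

curl -sfL https://get.k3s.io | sh -

I did nothing special beyond that on either install.)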

Now the servers will not start… Can anyone see what is broken in the 21.10 installation?

Oct 17 17:51:04 rpi-20 systemd[1]: k3s.service: Scheduled restart job, restart counter is at 4.
Oct 17 17:51:04 rpi-20 systemd[1]: k3s.service: Consumed 32.645s CPU time.
Oct 17 17:51:05 rpi-20 k3s[3417]: time="2021-10-17T17:51:05.180442669Z" level=info msg="Starting k3s v1.21.5+k3s2 (724ef700)"
Oct 17 17:51:05 rpi-20 k3s[3417]: time="2021-10-17T17:51:05.181169031Z" level=info msg="Cluster bootstrap already complete"
Oct 17 17:51:05 rpi-20 k3s[3417]: time="2021-10-17T17:51:05.245957049Z" level=info msg="Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s"
Oct 17 17:51:05 rpi-20 k3s[3417]: time="2021-10-17T17:51:05.246157287Z" level=info msg="Configuring database table schema and indexes, this may take a moment..."
Oct 17 17:51:05 rpi-20 k3s[3417]: time="2021-10-17T17:51:05.246692226Z" level=info msg="Database tables and indexes are up to date"
Oct 17 17:51:05 rpi-20 k3s[3417]: time="2021-10-17T17:51:05.257582049Z" level=info msg="Kine listening on unix://kine.sock"
Oct 17 17:51:05 rpi-20 k3s[3417]: time="2021-10-17T17:51:05.258214819Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=unix://kine.sock --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
Oct 17 17:51:05 rpi-20 k3s[3417]: Flag --insecure-port has been deprecated, This flag has no effect now and will be removed in v1.24.
Oct 17 17:51:05 rpi-20 k3s[3417]: I1017 17:51:05.262387    3417 server.go:656] external host was not specified, using 192.168.1.51
Oct 17 17:51:05 rpi-20 k3s[3417]: I1017 17:51:05.263234    3417 server.go:195] Version: v1.21.5+k3s2
Oct 17 17:51:05 rpi-20 k3s[3417]: I1017 17:51:05.274952    3417 shared_informer.go:240] Waiting for caches to sync for node_authorizer
Oct 17 17:51:05 rpi-20 k3s[3417]: I1017 17:51:05.278744    3417 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
Oct 17 17:51:05 rpi-20 k3s[3417]: I1017 17:51:05.278841    3417 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
Oct 17 17:51:05 rpi-20 k3s[3417]: I1017 17:51:05.284132    3417 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
Oct 17 17:51:05 rpi-20 k3s[3417]: I1017 17:51:05.284233    3417 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
Oct 17 17:51:05 rpi-20 k3s[3417]: I1017 17:51:05.382979    3417 instance.go:283] Using reconciler: lease
Oct 17 17:51:05 rpi-20 k3s[3417]: I1017 17:51:05.480225    3417 rest.go:130] the default service ipfamily for this cluster is: IPv4
Oct 17 17:51:06 rpi-20 k3s[3417]: W1017 17:51:06.608784    3417 genericapiserver.go:425] Skipping API node.k8s.io/v1alpha1 because it has no resources.
Oct 17 17:51:06 rpi-20 k3s[3417]: W1017 17:51:06.646706    3417 genericapiserver.go:425] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
Oct 17 17:51:06 rpi-20 k3s[3417]: W1017 17:51:06.659763    3417 genericapiserver.go:425] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
Oct 17 17:51:06 rpi-20 k3s[3417]: W1017 17:51:06.683156    3417 genericapiserver.go:425] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
Oct 17 17:51:06 rpi-20 k3s[3417]: W1017 17:51:06.692395    3417 genericapiserver.go:425] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
Oct 17 17:51:06 rpi-20 k3s[3417]: W1017 17:51:06.715054    3417 genericapiserver.go:425] Skipping API apps/v1beta2 because it has no resources.
Oct 17 17:51:06 rpi-20 k3s[3417]: W1017 17:51:06.715157    3417 genericapiserver.go:425] Skipping API apps/v1beta1 because it has no resources.
Oct 17 17:51:06 rpi-20 k3s[3417]: I1017 17:51:06.746309    3417 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
Oct 17 17:51:06 rpi-20 k3s[3417]: I1017 17:51:06.746398    3417 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
Oct 17 17:51:06 rpi-20 k3s[3417]: time="2021-10-17T17:51:06.788342944Z" level=info msg="Waiting for API server to become available"
Oct 17 17:51:06 rpi-20 k3s[3417]: time="2021-10-17T17:51:06.788415258Z" level=info msg="Running kube-scheduler --address=127.0.0.1 --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --port=10251 --profiling=false --secure-port=0"
Oct 17 17:51:06 rpi-20 k3s[3417]: time="2021-10-17T17:51:06.790191609Z" level=info msg="Running kube-controller-manager --address=127.0.0.1 --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --port=10252 --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true"
Oct 17 17:51:06 rpi-20 k3s[3417]: time="2021-10-17T17:51:06.792108255Z" level=info msg="Running cloud-controller-manager --allocate-node-cidrs=true --bind-address=127.0.0.1 --cloud-provider=k3s --cluster-cidr=10.42.0.0/16 --configure-cloud-routes=false --kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --leader-elect=false --node-status-update-frequency=1m0s --port=0 --profiling=false"
Oct 17 17:51:06 rpi-20 k3s[3417]: time="2021-10-17T17:51:06.795799102Z" level=info msg="Node token is available at /var/lib/rancher/k3s/server/token"
Oct 17 17:51:06 rpi-20 k3s[3417]: time="2021-10-17T17:51:06.795939008Z" level=info msg="To join node to cluster: k3s agent -s https://192.168.1.51:6443 -t ${NODE_TOKEN}"
Oct 17 17:51:06 rpi-20 k3s[3417]: time="2021-10-17T17:51:06.799860409Z" level=info msg="Wrote kubeconfig /etc/rancher/k3s/k3s.yaml"
Oct 17 17:51:06 rpi-20 k3s[3417]: time="2021-10-17T17:51:06.800022907Z" level=info msg="Run: k3s kubectl"
Oct 17 17:51:06 rpi-20 k3s[3417]: time="2021-10-17T17:51:06.904058735Z" level=info msg="Cluster-Http-Server 2021/10/17 17:51:06 http: TLS handshake error from 127.0.0.1:57402: remote error: tls: bad certificate"
Oct 17 17:51:06 rpi-20 k3s[3417]: time="2021-10-17T17:51:06.924805615Z" level=info msg="Cluster-Http-Server 2021/10/17 17:51:06 http: TLS handshake error from 127.0.0.1:57408: remote error: tls: bad certificate"
Oct 17 17:51:06 rpi-20 k3s[3417]: time="2021-10-17T17:51:06.973448890Z" level=info msg="certificate CN=rpi-20 signed by CN=k3s-server-ca@1634492959: notBefore=2021-10-17 17:49:19 +0000 UTC notAfter=2022-10-17 17:51:06 +0000 UTC"
Oct 17 17:51:06 rpi-20 k3s[3417]: time="2021-10-17T17:51:06.986018233Z" level=info msg="certificate CN=system:node:rpi-20,O=system:nodes signed by CN=k3s-client-ca@1634492958: notBefore=2021-10-17 17:49:18 +0000 UTC notAfter=2022-10-17 17:51:06 +0000 UTC"
Oct 17 17:51:07 rpi-20 systemd[2213]: var-lib-rancher-k3s-agent-containerd-multiple\x2dlowerdir\x2dcheck626558590-merged.mount: Deactivated successfully.
Oct 17 17:51:07 rpi-20 k3s[3417]: time="2021-10-17T17:51:07.374616405Z" level=info msg="Module overlay was already loaded"
Oct 17 17:51:07 rpi-20 k3s[3417]: time="2021-10-17T17:51:07.374735404Z" level=info msg="Module nf_conntrack was already loaded"
Oct 17 17:51:07 rpi-20 k3s[3417]: time="2021-10-17T17:51:07.374788718Z" level=info msg="Module br_netfilter was already loaded"
Oct 17 17:51:07 rpi-20 k3s[3417]: time="2021-10-17T17:51:07.374837662Z" level=info msg="Module iptable_nat was already loaded"
Oct 17 17:51:07 rpi-20 k3s[3417]: W1017 17:51:07.374966    3417 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Oct 17 17:51:07 rpi-20 k3s[3417]: time="2021-10-17T17:51:07.382097045Z" level=info msg="Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log"
Oct 17 17:51:07 rpi-20 k3s[3417]: time="2021-10-17T17:51:07.382530818Z" level=info msg="Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd"
Oct 17 17:51:08 rpi-20 k3s[3417]: time="2021-10-17T17:51:08.386229205Z" level=info msg="Containerd is now running"
Oct 17 17:51:08 rpi-20 k3s[3417]: time="2021-10-17T17:51:08.433527241Z" level=info msg="Connecting to proxy" url="wss://127.0.0.1:6443/v1-k3s/connect"
Oct 17 17:51:08 rpi-20 k3s[3417]: time="2021-10-17T17:51:08.443438003Z" level=info msg="Handling backend connection request [rpi-20]"
Oct 17 17:51:08 rpi-20 k3s[3417]: time="2021-10-17T17:51:08.446037919Z" level=info msg="Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=cgroupfs --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --cni-bin-dir=/var/lib/rancher/k3s/data/5b1d238ed00ea9bbae6aa96d867c1264a96ad48b42e55d56570d54d05c8d8a7a/bin --cni-conf-dir=/var/lib/rancher/k3s/agent/etc/cni/net.d --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --container-runtime=remote --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=rpi-20 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --node-labels= --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"
Oct 17 17:51:08 rpi-20 k3s[3417]: time="2021-10-17T17:51:08.447848695Z" level=info msg="Running kube-proxy --cluster-cidr=10.42.0.0/16 --conntrack-max-per-core=0 --conntrack-tcp-timeout-close-wait=0s --conntrack-tcp-timeout-established=0s --healthz-bind-address=127.0.0.1 --hostname-override=rpi-20 --kubeconfig=/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig --proxy-mode=iptables"
Oct 17 17:51:08 rpi-20 k3s[3417]: Flag --cloud-provider has been deprecated, will be removed in 1.23, in favor of removing cloud provider code from Kubelet.
Oct 17 17:51:08 rpi-20 k3s[3417]: Flag --cni-bin-dir has been deprecated, will be removed along with dockershim.
Oct 17 17:51:08 rpi-20 k3s[3417]: Flag --cni-conf-dir has been deprecated, will be removed along with dockershim.
Oct 17 17:51:08 rpi-20 k3s[3417]: Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.
Oct 17 17:51:08 rpi-20 k3s[3417]: W1017 17:51:08.448664    3417 server.go:220] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
Oct 17 17:51:08 rpi-20 k3s[3417]: I1017 17:51:08.502224    3417 server.go:436] "Kubelet version" kubeletVersion="v1.21.5+k3s2"
Oct 17 17:51:08 rpi-20 k3s[3417]: E1017 17:51:08.551378    3417 node.go:161] Failed to retrieve node info: nodes "rpi-20" is forbidden: User "system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope
Oct 17 17:51:08 rpi-20 k3s[3417]: I1017 17:51:08.602235    3417 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/agent/client-ca.crt
Oct 17 17:51:08 rpi-20 k3s[3417]: W1017 17:51:08.602389    3417 manager.go:159] Cannot detect current cgroup on cgroup v2
Oct 17 17:51:09 rpi-20 k3s[3417]: E1017 17:51:09.708226    3417 node.go:161] Failed to retrieve node info: nodes "rpi-20" is forbidden: User "system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope
Oct 17 17:51:12 rpi-20 k3s[3417]: E1017 17:51:12.005769    3417 node.go:161] Failed to retrieve node info: nodes "rpi-20" is forbidden: User "system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope
Oct 17 17:51:13 rpi-20 k3s[3417]: time="2021-10-17T17:51:13.437260653Z" level=warning msg="Unable to watch for tunnel endpoints: unknown (get endpoints)"
Oct 17 17:51:13 rpi-20 k3s[3417]: W1017 17:51:13.623467    3417 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Oct 17 17:51:13 rpi-20 k3s[3417]: I1017 17:51:13.630478    3417 server.go:660] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Oct 17 17:51:13 rpi-20 k3s[3417]: I1017 17:51:13.631563    3417 container_manager_linux.go:291] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 17 17:51:13 rpi-20 k3s[3417]: I1017 17:51:13.631782    3417 container_manager_linux.go:296] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none Rootless:false}
Oct 17 17:51:13 rpi-20 k3s[3417]: I1017 17:51:13.631910    3417 topology_manager.go:120] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
Oct 17 17:51:13 rpi-20 k3s[3417]: I1017 17:51:13.631960    3417 container_manager_linux.go:327] "Initializing Topology Manager" policy="none" scope="container"
Oct 17 17:51:13 rpi-20 k3s[3417]: I1017 17:51:13.631996    3417 container_manager_linux.go:332] "Creating device plugin manager" devicePluginEnabled=true
Oct 17 17:51:13 rpi-20 k3s[3417]: I1017 17:51:13.632695    3417 kubelet.go:404] "Attempting to sync node with API server"
Oct 17 17:51:13 rpi-20 k3s[3417]: I1017 17:51:13.632770    3417 kubelet.go:272] "Adding static pod path" path="/var/lib/rancher/k3s/agent/pod-manifests"
Oct 17 17:51:13 rpi-20 k3s[3417]: I1017 17:51:13.632867    3417 kubelet.go:283] "Adding apiserver pod source"
Oct 17 17:51:13 rpi-20 k3s[3417]: I1017 17:51:13.632930    3417 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 17 17:51:13 rpi-20 k3s[3417]: I1017 17:51:13.639001    3417 kuberuntime_manager.go:222] "Container runtime initialized" containerRuntime="containerd" version="v1.4.11-k3s1" apiVersion="v1alpha2"
Oct 17 17:51:13 rpi-20 k3s[3417]: I1017 17:51:13.640878    3417 server.go:1191] "Started kubelet"
Oct 17 17:51:13 rpi-20 k3s[3417]: I1017 17:51:13.651071    3417 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 17 17:51:13 rpi-20 k3s[3417]: I1017 17:51:13.653241    3417 server.go:149] "Starting to listen" address="0.0.0.0" port=10250
Oct 17 17:51:13 rpi-20 k3s[3417]: I1017 17:51:13.655749    3417 server.go:409] "Adding debug handlers to kubelet server"
Oct 17 17:51:13 rpi-20 k3s[3417]: I1017 17:51:13.663765    3417 volume_manager.go:271] "Starting Kubelet Volume Manager"
Oct 17 17:51:13 rpi-20 k3s[3417]: E1017 17:51:13.653247    3417 cri_stats_provider.go:369] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs"
Oct 17 17:51:13 rpi-20 k3s[3417]: E1017 17:51:13.664985    3417 kubelet.go:1306] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Oct 17 17:51:13 rpi-20 k3s[3417]: I1017 17:51:13.665164    3417 desired_state_of_world_populator.go:141] "Desired state populator starts to run"
Oct 17 17:51:13 rpi-20 k3s[3417]: I1017 17:51:13.765144    3417 kuberuntime_manager.go:1044] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
Oct 17 17:51:13 rpi-20 k3s[3417]: I1017 17:51:13.773345    3417 kubelet_network.go:76] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
Oct 17 17:51:13 rpi-20 k3s[3417]: I1017 17:51:13.774889    3417 kubelet_node_status.go:71] "Attempting to register node" node="rpi-20"
Oct 17 17:51:13 rpi-20 k3s[3417]: I1017 17:51:13.832140    3417 kubelet_network_linux.go:56] "Initialized protocol iptables rules." protocol=IPv4
Oct 17 17:51:14 rpi-20 k3s[3417]: I1017 17:51:14.026804    3417 kubelet_network_linux.go:56] "Initialized protocol iptables rules." protocol=IPv6
Oct 17 17:51:14 rpi-20 k3s[3417]: I1017 17:51:14.026974    3417 status_manager.go:157] "Starting to sync pod status with apiserver"
Oct 17 17:51:14 rpi-20 k3s[3417]: I1017 17:51:14.027039    3417 kubelet.go:1846] "Starting kubelet main sync loop"
Oct 17 17:51:14 rpi-20 k3s[3417]: E1017 17:51:14.027251    3417 kubelet.go:1870] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 17 17:51:14 rpi-20 k3s[3417]: E1017 17:51:14.128097    3417 kubelet.go:1870] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Oct 17 17:51:14 rpi-20 k3s[3417]: E1017 17:51:14.328779    3417 kubelet.go:1870] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Oct 17 17:51:14 rpi-20 k3s[3417]: I1017 17:51:14.634573    3417 apiserver.go:52] "Watching apiserver"
Oct 17 17:51:14 rpi-20 k3s[3417]: E1017 17:51:14.729653    3417 kubelet.go:1870] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Oct 17 17:51:14 rpi-20 k3s[3417]: time="2021-10-17T17:51:14.995307710Z" level=info msg="Proxy done" err="context canceled" url="wss://127.0.0.1:6443/v1-k3s/connect"
Oct 17 17:51:14 rpi-20 k3s[3417]: time="2021-10-17T17:51:14.996946878Z" level=fatal msg="server stopped: http: Server closed"
Oct 17 17:51:15 rpi-20 systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE
Oct 17 17:51:15 rpi-20 systemd[1]: k3s.service: Failed with result 'exit-code'.
Oct 17 17:51:15 rpi-20 systemd[1]: k3s.service: Consumed 18.203s CPU time.

Not a very active forum, this…

But today someone posted the answer elsewhere:

https://gitmemory.cn/repo/k3s-io/k3s/issues/4234

On Ubuntu 21.10, k3s on a Raspberry Pi needs an extra kernel modules package:

sudo apt install linux-modules-extra-raspi
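
If the linked issue has the cause right, the stock Ubuntu 21.10 Raspberry Pi kernel no longer ships the vxlan module that flannel's default backend relies on, and that package restores it. A rough way to verify after installing (standard commands, not taken from the issue; a reboot may be needed first so the modules match the running kernel):

sudo modprobe vxlan && lsmod | grep vxlan   # module should now load
sudo systemctl restart k3s                  # service should stay up this time
journalctl -u k3s -f                        # watch that the fatal "server stopped" error is gone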