k3s server needs 15-25% CPU

I installed the k3s agent and the k3s server on my Ubuntu 20.04 laptop.

The k3s server process constantly uses 15-25% CPU, although I have not started a single container so far.

Is there a way to debug this, or is this CPU usage normal?
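What I looked at so far (a sketch; plain procps tools, nothing k3s-specific):

```shell
# Top 5 processes sorted by CPU usage (GNU/procps ps options).
# On my machine the k3s processes showed up near the top of this list.
ps -eo pid,pcpu,comm --sort=-pcpu | head -n 5
```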

Regards,
Thomas

Relevant errors from the syslog, filtered with:

grep -P ': E|systemd.*k3' /var/log/syslog

May 10 22:41:07 yoga15 k3s[16470]: time="2021-05-10T22:41:07.072960107+02:00" level=info msg="Cluster-Http-Server 2021/05/10 22:41:07 http: TLS handshake error from 192.168.178.79:42928: EOF"
May 10 22:41:07 yoga15 k3s[16470]: E0510 22:41:07.171260   16470 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:k3s-controller" cannot list resource "namespaces" in API group "" at the cluster scope
May 10 22:41:07 yoga15 k3s[16470]: E0510 22:41:07.183886   16470 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:k3s-controller" cannot list resource "pods" in API group "" at the cluster scope
May 10 22:41:07 yoga15 k3s[16470]: E0510 22:41:07.188153   16470 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:k3s-controller" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
May 10 22:41:07 yoga15 k3s[16470]: E0510 22:41:07.279532   16470 node.go:161] Failed to retrieve node info: nodes "yoga15" is forbidden: User "system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope
May 10 22:41:08 yoga15 k3s[2492]: E0510 22:41:08.052750    2492 reflector.go:138] object-"kube-system"/"traefik-token-bvqlt": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "traefik-token-bvqlt" is forbidden: User "system:node:yoga15" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'yoga15' and this object
May 10 22:41:08 yoga15 k3s[16470]: E0510 22:41:08.064801   16470 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:k3s-controller" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
May 10 22:41:08 yoga15 k3s[16470]: E0510 22:41:08.479773   16470 node.go:161] Failed to retrieve node info: nodes "yoga15" is forbidden: User "system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope
May 10 22:41:08 yoga15 k3s[16470]: E0510 22:41:08.479889   16470 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:k3s-controller" cannot list resource "pods" in API group "" at the cluster scope
May 10 22:41:08 yoga15 k3s[16470]: E0510 22:41:08.559775   16470 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:k3s-controller" cannot list resource "namespaces" in API group "" at the cluster scope
May 10 22:41:10 yoga15 k3s[2492]: E0510 22:41:10.056618    2492 controller.go:187] failed to update lease, error: Operation cannot be fulfilled on leases.coordination.k8s.io "yoga15": the object has been modified; please apply your changes to the latest version and try again
May 10 22:41:10 yoga15 k3s[16470]: E0510 22:41:10.072637   16470 controller.go:156] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
May 10 22:41:10 yoga15 k3s[16470]: E0510 22:41:10.848547   16470 server.go:594] healthz server failed: failed to start proxier healthz on 127.0.0.1:10256: listen tcp 127.0.0.1:10256: bind: address already in use
May 10 22:41:10 yoga15 k3s[16470]: E0510 22:41:10.849252   16470 server.go:634] starting metrics server failed: listen tcp 127.0.0.1:10249: bind: address already in use
May 10 22:41:11 yoga15 k3s[16470]: E0510 22:41:11.203505   16470 proxier.go:1288] can't open "nodePort for kube-system/traefik:http" (:30675/tcp), skipping this nodePort: listen tcp4 :30675: bind: address already in use
May 10 22:41:11 yoga15 k3s[16470]: E0510 22:41:11.203645   16470 proxier.go:1288] can't open "nodePort for kube-system/traefik:https" (:31046/tcp), skipping this nodePort: listen tcp4 :31046: bind: address already in use
May 10 22:41:11 yoga15 k3s[16470]: E0510 22:41:11.780672   16470 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
May 10 22:41:12 yoga15 k3s[16470]: I0510 22:41:12.336918   16470 container_manager_linux.go:292] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none Rootless:false}
May 10 22:41:12 yoga15 k3s[16470]: E0510 22:41:12.355164   16470 server.go:795] Starting healthz server failed: listen tcp 127.0.0.1:10248: bind: address already in use
May 10 22:41:13 yoga15 k3s[16470]: E0510 22:41:13.040504   16470 kubelet.go:1296] Image garbage collection failed once. Stats initialization may not have completed yet: invalid capacity 0 on image filesystem
May 10 22:41:13 yoga15 systemd[1]: k3s.service: Main process exited, code=exited, status=255/EXCEPTION
May 10 22:41:13 yoga15 systemd[1]: k3s.service: Failed with result 'exit-code'.
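The "bind: address already in use" lines turned out to be the real hint: two k3s processes (PIDs 2492 and 16470 above) were competing for the same kubelet/kube-proxy ports. A sketch of how to pull the offending PID out of such a line (GNU grep with -P; the sample line is copied from the excerpt above, on the real machine the input would be /var/log/syslog):

```shell
# Extract the PID of the k3s process that failed to bind a port.
# \K drops the "k3s[" prefix from the match; the lookahead requires
# "]" followed by the bind error somewhere later on the same line.
line='May 10 22:41:10 yoga15 k3s[16470]: E0510 22:41:10.848547   16470 server.go:594] healthz server failed: failed to start proxier healthz on 127.0.0.1:10256: listen tcp 127.0.0.1:10256: bind: address already in use'
printf '%s\n' "$line" | grep -oP 'k3s\[\K[0-9]+(?=\].*address already in use)'
# → 16470
```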

containerd is restarting over and over; this repeats every 12 seconds in /var/lib/rancher/k3s/agent/containerd/containerd.log:

time="2021-05-10T22:24:36.086113108+02:00" level=info msg="starting containerd" revision= version=v1.4.4-k3s1
time="2021-05-10T22:24:36.113752906+02:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.con>
time="2021-05-10T22:24:36.113938255+02:00" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." typ>
time="2021-05-10T22:24:36.113974751+02:00" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.container>
time="2021-05-10T22:24:36.114018391+02:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.fuse-overlayfs\"..." type=i>
time="2021-05-10T22:24:36.114053197+02:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.contai>
time="2021-05-10T22:24:36.114109126+02:00" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.>
time="2021-05-10T22:24:36.114133076+02:00" level=info msg="metadata content store policy set" policy=shared
time="2021-05-10T22:24:47.440084127+02:00" level=info msg="starting containerd" revision= version=v1.4.4-k3s1
time="2021-05-10T22:24:47.487804747+02:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.con>
time="2021-05-10T22:24:47.488012612+02:00" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." typ>
time="2021-05-10T22:24:47.488065320+02:00" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.container>
time="2021-05-10T22:24:47.488097865+02:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.fuse-overlayfs\"..." type=i>
time="2021-05-10T22:24:47.488139689+02:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.contai>
time="2021-05-10T22:24:47.488179686+02:00" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.>
time="2021-05-10T22:24:47.488205830+02:00" level=info msg="metadata content store policy set" policy=shared
time="2021-05-10T22:24:59.222263419+02:00" level=info msg="starting containerd" revision= version=v1.4.4-k3s1
time="2021-05-10T22:24:59.249985610+02:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.con>
time="2021-05-10T22:24:59.250164038+02:00" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." typ>
time="2021-05-10T22:24:59.250196374+02:00" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.container>
time="2021-05-10T22:24:59.250216348+02:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.fuse-overlayfs\"..." type=i>
time="2021-05-10T22:24:59.250237959+02:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.contai>
time="2021-05-10T22:24:59.250259320+02:00" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.>
time="2021-05-10T22:24:59.250276477+02:00" level=info msg="metadata content store policy set" policy=shared

My fault: you can't run k3s server and k3s agent as two separate services on one machine. The server already includes an embedded agent, which is why the log is full of "address already in use" errors: both processes try to bind the same kubelet and kube-proxy ports. If you want multiple nodes on a single machine, you need k3d (or minikube or kind).

But for playing around with K8s, one server is enough. Separate k3s agents are optional.
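For completeness, the fix (a sketch; assumes the standard install scripts, which create k3s.service for the server and k3s-agent.service for the agent):

```shell
# Keep only the server; its embedded agent replaces the standalone one.
sudo systemctl disable --now k3s-agent   # stop and disable the separate agent
sudo systemctl restart k3s               # restart the server cleanly
```

Alternatively, the agent install script drops an uninstall helper (k3s-agent-uninstall.sh, if I remember correctly) that removes the agent service entirely.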