Single node 6.2.0 docker installation - Failed to create cgroup; cannot enter cgroup2

Hi everyone,

I’ve read in a few places that k3s doesn’t support cgroup v2.

My kernel and Docker are set to cgroup v1, which I can confirm in `docker info`:

 Cgroup Driver: cgroupfs
 Cgroup Version: 1

My k3s logs still show cgroup errors, and k3s keeps restarting after hitting them. Is there anything else I need to set? I’m not sure why it still thinks I’m on cgroup v2 when Docker is running cgroup v1.
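One thing worth checking: the kubelet detects cgroup v2 by looking at the filesystem type actually mounted at /sys/fs/cgroup, not at Docker's reported setting, so the hierarchy inside the k3s container can differ from what `docker info` shows on the host. A minimal check (standard Linux paths, not from the post) you can run both on the host and inside the container:

```shell
# Print the filesystem type mounted at /sys/fs/cgroup; this is what the
# kubelet's cgroup-version detection is based on.
fstype=$(stat -fc %T /sys/fs/cgroup/)
echo "$fstype"
# "tmpfs"     -> cgroup v1 (or hybrid) hierarchy
# "cgroup2fs" -> unified cgroup v2 hierarchy
```

If the container reports `cgroup2fs` while the host reports `tmpfs`, the mismatch is in how the container's cgroup filesystem is mounted, not in the Docker daemon configuration.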

E1216 06:11:02.997536      76 node_container_manager_linux.go:57] "Failed to create cgroup" err="cannot enter cgroupv2 \"/sys/fs/cgroup/kubepods\" with domain controllers -- it is in an invalid state" cgroupName=[kubepods]
E1216 06:11:02.997570      76 kubelet.go:1384] "Failed to start ContainerManager" err="cannot enter cgroupv2 \"/sys/fs/cgroup/kubepods\" with domain controllers -- it is in an invalid state"
time="2021-12-16T06:11:20.952587036Z" level=info msg="Starting k3s v1.21.7+k3s1 (ac705709)"
time="2021-12-16T06:11:20.953206502Z" level=info msg="Managed etcd cluster bootstrap already complete and initialized"
{"level":"info","ts":"2021-12-16T06:11:21.130Z","caller":"embed/etcd.go:117","msg":"configuring peer listeners","listen-peer-urls":["http://localhost:2380"]}
{"level":"info","ts":"2021-12-16T06:11:21.131Z","caller":"embed/etcd.go:127","msg":"configuring client listeners","listen-client-urls":["http://127.0.0.1:2399"]}

Thanks!!

Well, it looks like upgrading is just rough. I believe I had 5.8 previously.

I ended up having to delete the container's volume and start 6.2 fresh (with the cgroup version at 1).
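The log output (embedded k3s and etcd, importing an LKE cluster) looks like a Rancher-style single-node docker install; assuming that, the fresh start might look roughly like the sketch below. Every name here, including the container name, volume name, and image tag, is a guess rather than something from my setup, so substitute the real ones from `docker ps -a` and `docker volume ls`.

```shell
# Hedged sketch: names and image tag are assumptions, not from the post.
# Removing the volume wipes all server state -- take a backup first.
docker stop rancher && docker rm rancher      # drop the old container
docker volume rm rancher-data                 # delete its data volume
docker run -d --name rancher --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  -v rancher-data:/var/lib/rancher \
  --privileged rancher/rancher:v2.6.2         # start the new version fresh
```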

I was using Linode LKE, and I was able to import that cluster back in afterward.