K8s after disk full event (RKE 1.3.15)

First of all, I’ve been playing around with a small single-node k8s setup and my knowledge of the subject is very limited. Recently the disk hosting the overlay2 directory became full, and most pods across all namespaces ended up in an Evicted or Error state.

After realising that the full disk was the cause, I figured I could free some space by running “docker system prune”. That freed enough space for the pods to start up, but all newly starting pods remained stuck in the Pending state. I then noticed that the kube-scheduler pod in the kube-system namespace was also Pending, while the rest of the kube-system pods were still running normally.
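For reference, these are roughly the commands I’ve been using to inspect the pods (output omitted; `<pod-name>` and `<namespace>` are placeholders):

```shell
# List pods in every namespace with their status (Evicted/Error/Pending)
kubectl get pods --all-namespaces -o wide

# Show why a Pending pod isn't being scheduled
# (the Events section at the bottom usually says)
kubectl describe pod <pod-name> -n <namespace>
```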

I was thinking that maybe the pods are stuck in Pending because the scheduler is not running, so I tried “docker stop” and “docker start” to restart it:

$ docker start scheduler
Error response from daemon: No such container: scheduler
Error: failed to start containers: scheduler
$ docker start kube-scheduler
Error response from daemon: No such container: kube-scheduler
Error: failed to start containers: kube-scheduler
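In case it matters, I also tried listing all containers, including stopped ones, to find the scheduler’s actual container name (I’m assuming RKE runs the control-plane components as plain Docker containers on the node):

```shell
# List every container, running or exited, with name and status;
# filter for anything scheduler-related
docker ps -a --format '{{.Names}}\t{{.Status}}' | grep -i scheduler
```

This returned nothing, which seems to confirm the container is gone entirely rather than just stopped.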

My assumption is that “docker system prune” removed the scheduler container, since it was not running at the time I issued the prune, and that’s why Docker is now complaining about it being missing.
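With hindsight (and assuming my theory about the prune is right), a narrower cleanup that leaves stopped containers alone would probably have been safer:

```shell
# Remove only dangling images, not stopped containers
docker image prune

# Remove build cache, if any
docker builder prune
```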

My understanding is that when I initially ran “rke up” it pulled all the system container images and set everything up. Is there a way to redo that for kube-scheduler only, or am I now at the point where tearing down the cluster is the only way forward?
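What I’m hoping, but haven’t dared to try yet, is that re-running RKE against the same cluster config is idempotent and would recreate just the missing container:

```shell
# Re-run RKE with the original cluster config from the same directory
# (my assumption: rke up reconciles the node and recreates any
# missing control-plane containers rather than rebuilding everything)
rke up --config cluster.yml
```

Is that assumption correct, or does this carry a risk of wiping existing state?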