Cannot restore etcd snapshot

Hi,

We have an on-premises Rancher cluster with 1 node for etcd and controlplane and 2 worker nodes. Since this is only meant to give us a feel for the kinds of problems we might run into (before we use it for anything serious), we provisioned fairly weak nodes with 2 CPUs, 4 GiB of memory and 40 GiB of disk each.

The cluster (and the Rancher installation, which runs as a single Docker container on a different machine) is already a few months old, and we have been able to try different things with it, including installing and removing Helm charts, and even restoring etcd from a snapshot once.

With monitoring enabled and Longhorn as our persistent storage system, I eventually started a few Deployments and StatefulSets, which was obviously too much for the small nodes: the system became unstable, and the Rancher UI reported that etcd is unhealthy. I thought it would be best to restore etcd from an earlier snapshot again, but this time it no longer worked:

This cluster is currently Updating; areas that interact directly with it will not be available until the API is ready.
[Failed to create [kube-cleaner] container on host [192.168.85.20]: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]

Even rebooting all machines did not change anything. The worker machines kept starting the containers that had been running before and became very unresponsive, while Rancher kept trying to bring etcd up on the control node, without success. I cannot find any answers on the internet for this behaviour. How should I proceed?
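
Since the error complains that it cannot reach the Docker daemon on the etcd/controlplane node, is checking the daemon directly on that node the right next step? I was thinking of something along these lines (assuming a systemd-based host; the service-related commands are guarded so they are skipped where the tools are not installed):

```shell
# Is the Docker daemon socket even there?
if [ -S /var/run/docker.sock ]; then
  echo "docker.sock present"
else
  echo "docker.sock missing"
fi

# Is the docker service active, and what do its recent logs say?
command -v systemctl >/dev/null 2>&1 && systemctl is-active docker || true
command -v journalctl >/dev/null 2>&1 && journalctl -u docker --no-pager -n 50 || true

# If the daemon answers at all, list the containers it is managing.
command -v docker >/dev/null 2>&1 && docker ps -a || true
```

Or is the daemon failing to start for another reason (e.g. a full 40 GiB disk from all the containers and snapshots), so that I should look at disk usage first?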