K3s.service stops working

I am observing strange behaviour in my k3s test cluster of 6 virtual machines (3 masters, 3 workers). Sometimes, when I perform operations with kubectl (e.g. deleting a pod), k3s.service stops working on my master nodes.

I am seeing etcd messages like

"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk"

or

"apply request took too long","took":"100.464864ms","expected-duration":"100ms"

What countermeasures could I take here?

@elronzo Hi and welcome to the Forum :smile:
It sounds like your setup has disk I/O issues that need to be resolved. Maybe check the systems in the cluster with top and iostat in the first instance. Are these physical nodes, or VMs?
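Those etcd messages usually point at slow fsync on the disk backing the etcd data directory; etcd's own docs suggest keeping WAL fsync latency well under ~10 ms at p99. The canonical way to measure this is `fio` with `--fdatasync=1`, but if fio isn't installed, a rough Python sketch like the following (my own illustration, not an official etcd tool) gives a first impression by timing small writes followed by `fdatasync`, similar in spirit to etcd's WAL write pattern:

```python
import os
import tempfile
import time

def fsync_latency_ms(samples=100, block_size=8 * 1024, directory="."):
    """Time small sequential writes each followed by fdatasync,
    roughly mimicking etcd's WAL write pattern. Run it with
    `directory` on the same filesystem as the etcd data dir."""
    latencies = []
    buf = os.urandom(block_size)
    with tempfile.NamedTemporaryFile(dir=directory) as f:
        for _ in range(samples):
            f.write(buf)
            start = time.perf_counter()
            os.fdatasync(f.fileno())  # force the write to stable storage
            latencies.append((time.perf_counter() - start) * 1000.0)
    latencies.sort()
    return {
        "p50": latencies[len(latencies) // 2],
        "p99": latencies[min(len(latencies) - 1, int(len(latencies) * 0.99))],
        "max": latencies[-1],
    }

if __name__ == "__main__":
    stats = fsync_latency_ms()
    print(f"fdatasync latency (ms): p50={stats['p50']:.2f} "
          f"p99={stats['p99']:.2f} max={stats['max']:.2f}")
```

If p99 lands anywhere near or above 10 ms, that would be consistent with the "leader is overloaded likely from slow disk" warning, and the usual fixes apply: faster storage for the masters (ideally SSD-backed), separating the etcd data dir from other I/O-heavy workloads, or reducing contention on the hypervisor hosting the VMs.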