I am observing strange behaviour in my k3s test cluster, which consists of 6 virtual machines (3 masters, 3 workers). Sometimes, when I perform operations with kubectl (e.g. deleting a pod), k3s.service stops working on my master nodes.
I am seeing etcd messages like
"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk"
or
"apply request took too long","took":"100.464864ms","expected-duration":"100ms"
What countermeasures could I take here?