etcd abnormal logs causing leader election

I have an etcd cluster and it keeps throwing abnormal errors.

Our Kubernetes environment is as below:
kubernetes version: v1.20.9
rke version: v1.2.11
docker version: v20.10.12
etcdctl version: 3.4.15
API version: 3.4
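
For reference, the etcdctl and API versions above are what etcdctl version prints. Endpoint status on the etcd nodes can be pulled with roughly the command below; the cert file names and paths are assumptions based on RKE's usual layout under /etc/kubernetes/ssl, and <node-ip> is just a placeholder:

ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/ssl/kube-ca.pem \
  --cert=/etc/kubernetes/ssl/kube-etcd-<node-ip>.pem \
  --key=/etc/kubernetes/ssl/kube-etcd-<node-ip>-key.pem \
  endpoint status -w table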

The error logs are as below:

2023-10-22 16:00:06.602248 W | etcdserver: read-only range request "key:"/registry/minions/" " with result "range_response_count:1 size:6784" took too long (5.616212618s) to execute
2023-10-22 16:00:06.602293 W | etcdserver: read-only range request "key:"/registry/ingress/" range_end:"/registry/ingress0" count_only:true " with result "range_response_count:0 size:9" took too long (6.168605716s) to execute
2023-10-22 16:00:06.602349 W | etcdserver: read-only range request "key:"/registry/services/endpoints/" range_end:"/registry/services/endpoints0" count_only:true " with result "range_response_count:0 size:9" took too long (6.009593928s) to execute
2023-10-22 16:00:06.616903 W | wal: sync duration of 5.577689397s, expected less than 1s
2023-10-22 16:00:06.617082 W | etcdserver: failed to send out heartbeat on time (exceeded the 500ms timeout for 4.584809717s, to 813f0539f39d81a7)
2023-10-22 16:00:06.617092 W | etcdserver: server is likely overloaded
2023-10-22 16:00:06.617096 W | etcdserver: failed to send out heartbeat on time (exceeded the 500ms timeout for 4.584825437s, to 1036734fdd37f1aa)
2023-10-22 16:00:06.617099 W | etcdserver: server is likely overloaded
2023-10-22 16:00:11.904338 W | etcdserver: read-only range request "key:"/registry/leases/kube-system/kube-scheduler" " with result "error:context canceled" took too long (9.997295066s) to execute
2023-10-22 16:00:27.729688 W | etcdserver: server is likely overloaded
WARNING: 2023/10/22 16:00:27 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
raft2023/10/22 16:00:27 INFO: 8543d04aa73eed57 [logterm: 47292, index: 81211444, vote: 8543d04aa73eed57] ignored MsgVote from 813f0539f39d81a7 [logterm: 47292, index: 81211437] at term 47292: lease is not expired (remaining ticks: 3)
2023-10-22 16:00:27.866093 I | embed: rejected connection from "10.88.18.104:44778" (error "EOF", ServerName "")
2023-10-22 16:00:27.882253 I | embed: rejected connection from "10.88.18.104:44794" (error "EOF", ServerName "")
2023-10-22 16:00:27.884784 W | etcdserver: failed to revoke 71aa8b1e854f7514 ("etcdserver: request timed out")
2023-10-22 16:00:27.885728 W | etcdserver: failed to revoke 6d578a7800047fe1 ("etcdserver: request timed out")
2023-10-22 16:00:27.886259 I | embed: rejected connection from "maskedip:33984" (error "EOF", ServerName "")
2023-10-22 16:00:27.886809 I | embed: rejected connection from "maskedip:44796" (error "EOF", ServerName "")
2023-10-22 16:00:27.887162 I | embed: rejected connection from ":33996" (error "EOF", ServerName "")
raft2023/10/22 16:00:27 INFO: found conflict at index 81211438 [existing term: 47292, conflicting term: 47294]
raft2023/10/22 16:00:27 INFO: replace the unstable entries from index 81211438
2023-10-22 16:00:27.979185 W | rafthttp: closed an existing TCP streaming connection with peer 1036734fdd37f1aa (stream Message writer)

I'm not sure if it is related, but our kube-controller-manager and kube-scheduler restart themselves periodically, which causes some applications to lose their connections.
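
For what it is worth, the /registry/leases/kube-system/kube-scheduler key in the log above is the lease kube-scheduler uses for leader election, so slow etcd reads on that key could plausibly be connected to the restarts. The current lease holders can be listed with:

kubectl -n kube-system get lease

The "wal: sync duration of 5.577689397s, expected less than 1s" warning also points at slow fsync on the etcd data disk. The related latency histograms can be read from etcd's metrics endpoint with something like the following (again, the cert names and paths are assumptions based on RKE's defaults and <node-ip> is a placeholder):

curl -s --cacert /etc/kubernetes/ssl/kube-ca.pem \
  --cert /etc/kubernetes/ssl/kube-etcd-<node-ip>.pem \
  --key /etc/kubernetes/ssl/kube-etcd-<node-ip>-key.pem \
  https://127.0.0.1:2379/metrics \
  | grep -E 'etcd_disk_wal_fsync_duration_seconds|etcd_disk_backend_commit_duration_seconds'

The etcd FAQ suggests the 99th percentile of the wal fsync duration should stay below roughly 10ms on a healthy disk.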