@vincent Do you have any insight on how to change the following values for the cluster?
- kubelet: node-status-update-frequency
- controller-manager: node-monitor-period
- controller-manager: node-monitor-grace-period
- controller-manager: pod-eviction-timeout
I tried creating a new cluster and editing the YAML file before starting it. I added:
```yaml
services:
  kubelet:
    extra_args:
      node-status-update-frequency: "5s"
  kube-controller:
    extra_args:
      node-monitor-period: "2s"
      node-monitor-grace-period: "16s"
      pod-eviction-timeout: "30s"
```
After the cluster was created, these settings had disappeared from the cluster's YAML config, all except node-status-update-frequency. As a result, the node was not marked unhealthy any faster when I killed it, and its pods were re-created only about 5 minutes after it was detected as unhealthy (which matches the default pod-eviction-timeout of 5m, so the custom value does not seem to have been applied).
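To see what actually got applied, the only sanity check I could think of (assuming RKE's usual container names, kube-controller-manager and kubelet) is to inspect the running containers on the nodes and grep their arguments, along these lines:

```sh
# On a control-plane node: list the flags the controller-manager container
# is actually running with (container name assumed to be
# "kube-controller-manager", RKE's usual naming).
docker inspect kube-controller-manager --format '{{ .Args }}' \
  | tr ' ' '\n' | grep -E 'node-monitor|pod-eviction'

# On any node: check the kubelet container the same way.
docker inspect kubelet --format '{{ .Args }}' \
  | tr ' ' '\n' | grep 'node-status-update-frequency'
```

Is there a supported way to get all four of these values picked up, or do they have to be set somewhere other than the services section?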