Kubelet stopped posting node status

Hello

I have a two-node test Rancher install. The first node runs etcd and a worker; the second node is just a worker. It was working normally, but whenever I restart the nodes, the node status says "Unavailable: Kubelet stopped posting node status" and I have to remove and recreate the cluster. I am running rancher:latest.

What should I check to start troubleshooting?

Thanks in advance
Paras.

Are you keeping the same IPs between reboots? Does the master come back up before the slave?

It was not the IP change. I think I found the fix. I ran this command:

mount --make-rshared /

and then service docker restart

The cluster nodes are now in the Ready state. I am not sure why this was needed.
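
For anyone landing here later, a minimal sketch of how to check whether this is your problem before rebuilding anything. The awk one-liner is my own way of reading /proc/self/mountinfo; the fix commands are the ones above and need root:

```shell
#!/bin/sh
# The kubelet needs the root filesystem mounted with "shared"
# propagation so container volume mounts are visible across mount
# namespaces. /proc/self/mountinfo lists "shared:N" among the
# optional fields (field 7 onward) for shared mounts; inspect the
# entry whose mount point (field 5) is "/":
prop=$(awk '$5 == "/" { print (($7 ~ /shared:/) ? "shared" : "private"); exit }' /proc/self/mountinfo)
echo "root mount propagation: $prop"

# If it prints "private", apply the fix from this post (as root):
#   mount --make-rshared /
#   service docker restart
```

If the node already reports "shared", this fix is not what you need and the kubelet is failing for some other reason.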

Thanks
Paras.

Hi Paras,
I have run into the same issue; the only difference is that I have 3 nodes, one each for etcd, control plane, and worker.
Where should I run the command you mentioned above? Can you please give the steps so that I can move forward? I am a newbie. …

I recreated the cluster and it worked …

On a Linux node, add the following kernel parameters to /etc/sysctl.d/50-kubelet.conf:

kernel.panic=10
kernel.panic_on_oops=1
vm.overcommit_memory=1

Then run “sysctl --system” to apply them (the -p flag is not needed; --system already reads everything under /etc/sysctl.d/). The kubelet should then recover by itself.
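
As a sketch of those two steps in one place. I write the file to the current directory here as a dry run; on a real node the file goes in /etc/sysctl.d/ and both steps need root:

```shell
# Write the three kernel parameters the kubelet expects into a
# sysctl drop-in file. Written to ./50-kubelet.conf here for
# illustration; copy it to /etc/sysctl.d/ on the node.
printf '%s\n' \
  'kernel.panic=10' \
  'kernel.panic_on_oops=1' \
  'vm.overcommit_memory=1' > 50-kubelet.conf

cat 50-kubelet.conf

# On the node itself (as root):
#   cp 50-kubelet.conf /etc/sysctl.d/
#   sysctl --system
```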