Kubelet stopped posting node status


I have a two-node test Rancher install. The first node runs the etcd and worker roles; the second node is just a worker. It was working normally, but whenever I restart the nodes their status changes to "Unavailable: Kubelet stopped posting node status", and I have had to remove and recreate the cluster to fix it. I am running rancher:latest.

What should I check to start troubleshooting?

Thanks in advance
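
For anyone landing here with the same symptom, a few generic first checks usually narrow it down. This is only a sketch: the container name kubelet assumes an RKE-provisioned node (where the kubelet runs as a Docker container), and <node-name> is a placeholder for the affected node.

```shell
# Cheat sheet of first checks for "Kubelet stopped posting node status".
# Printed rather than executed, since they must run on the affected node.
checks='kubectl get nodes -o wide            # which node is NotReady, and the IP it reports
kubectl describe node <node-name>    # Conditions and Events for that node
docker ps -a --filter name=kubelet   # is the kubelet container running at all?
docker logs --tail 100 kubelet       # kubelet error messages after the reboot'
printf '%s\n' "$checks"
```

The kubelet logs in particular will usually say outright why it cannot post status (certificate problems, an IP change, or mount propagation errors).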

Are you keeping the same IPs between reboots? Does the master come back up before the slave?

It was not an IP change. I think I found the fix. I ran this command:

mount --make-rshared /

followed by:

service docker restart

The cluster nodes are now in the Ready state. I'm not sure why this is needed.
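
For context on why this helps: the kubelet needs the root filesystem to use shared mount propagation so that volume mounts created inside containers propagate back to the host, and on some distributions / defaults to private after boot. A minimal read-only sketch to check the current propagation before applying the fix (it only prints the commands from this thread rather than running them, since they need root):

```shell
# Inspect the optional fields of the "/" entry in /proc/self/mountinfo.
# Field 5 is the mount point; optional fields (e.g. "shared:1") sit
# between field 6 and the "-" separator.
root_propagation=$(awk '$5 == "/" {
    p = "private"
    for (i = 7; $i != "-"; i++)
        if ($i ~ /^shared:/) p = "shared"
    print p
    exit
}' /proc/self/mountinfo)

echo "root mount propagation: $root_propagation"

if [ "$root_propagation" != "shared" ]; then
    # The fix reported in this thread; run these as root on the node:
    echo "run: mount --make-rshared / && service docker restart"
fi
```

Note that mount --make-rshared / does not survive a reboot by itself, which would explain the nodes going Unavailable again after every restart; the change has to be reapplied (or made persistent) on boot.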


Hi Paras,
I am running into the same issue; the only difference is that I have 3 nodes, one each for the etcd, control plane, and worker roles.
Where should I run the command you mentioned above? Can you please give the steps so that I can move on with this? I am a newbie. …

Recreating the cluster worked for me …