Rancher Cluster unavailable after reboot

Hey,
I installed the single-node variant of Rancher on a RancherOS base and have run into some trouble since I rebooted RancherOS.
The cluster I created yesterday is now in an “unavailable” state.
Because it is a single-node installation, I'm not sure what the problem is here. The host is shown as active, but the Rancher agent is not running. I'm not sure whether the agent is required, so I tried adding the node again, restarted the kube-apiserver and etcd containers, and later rebooted the whole host, all without success.
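
For reference, the steps I tried looked roughly like this (the container names are just what shows up on my host, so treat them as placeholders):

```sh
# See which containers exist and which ones have exited
docker ps -a

# Restart the Kubernetes control-plane containers (names are placeholders taken from `docker ps -a`)
docker restart kube-apiserver etcd

# Reboot the whole host afterwards
sudo reboot
```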

During my Rancher installation I configured the ports as shown here:

Any idea what I can try to make this work again?
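
For context, the ports I mean are the usual ones for a single-node Rancher server install; this is a sketch of that kind of command (the standard form, not necessarily my exact invocation):

```sh
# Typical single-node Rancher server install, publishing the HTTP/HTTPS ports on the host
# (sketch only; my actual flags and image tag may have differed)
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  rancher/rancher
```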


Can you please have a look at the logs from Rancher?
Please also check the Docker logs of kube-apiserver.
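
For example, something along these lines (the names are placeholders; substitute whatever `docker ps -a` shows on your node):

```sh
# Find the Rancher server and kube-apiserver containers
docker ps -a | grep -E 'rancher/rancher|kube-apiserver'

# Tail the most recent log lines of each (replace the names with the ones from the previous command)
docker logs --tail 200 <rancher-server-container>
docker logs --tail 200 kube-apiserver
```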

Do you mean the Docker logs from the Rancher server / kube-apiserver containers?