Container/pod persistence when rebooting host

Hi All,

I am moving to Rancher 2.0 from 1.x (I have been using 1.x since just before its GA release).

My situation: after a reboot of a host/node in the cluster (the host being a RancherOS v1.4.0 host), the containers/pods seem to get re-created instead of just being started up again.
When I run docker ps -a I can see the old containers from before the reboot, including the Kubernetes containers, but new ones have been created instead.

Is this the expected behaviour with Rancher 2.x? I may be missing something, as I am new to Kubernetes; I had been using Cattle with 1.x.

With Rancher 1.x all the containers would remain after a reboot, and in turn so would the volumes attached to those containers holding their data/config.

Any insight or help would be greatly appreciated.

Thanks in advance,
Craig


I’m new to Kubernetes as well, but I have noticed that persistent volume claims seem to persist unless deleted.

I can delete the containers and reboot the instances, and if the volume from the claim still exists it will reattach to the containers when they boot up (see the sketch below).

Look at the Elasticsearch chart in the k8s incubator and the rabbitmq-ha chart as examples.
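
For a concrete picture, here is a minimal sketch of a PersistentVolumeClaim and a pod that mounts it (the names, image and storage size are just placeholders, not anything from the charts above). As long as the PVC exists, the data survives the pod being deleted or the node rebooting:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc              # placeholder name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi            # placeholder size
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod              # placeholder name
spec:
  containers:
    - name: app
      image: nginx            # example image
      volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-pvc   # the same claim is re-attached after a reboot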

Thanks @Greg_Keys. After reading more into it, it seems I should be using a StatefulSet. I’ll give it a try.
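
In case it helps anyone else landing here, this is the rough shape of what I’m going to try: a minimal StatefulSet sketch with a volumeClaimTemplate (the names, image and storage size are placeholders, and a matching headless service is assumed to exist):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web                   # placeholder name
spec:
  serviceName: web            # assumed headless service
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: app
          image: nginx        # example image
          volumeMounts:
            - name: www
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:       # one PVC per replica, kept across pod restarts and host reboots
    - metadata:
        name: www
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 1Gi      # placeholder size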