Hi there,
I’m very new to the whole container world, with much more of an ops background than a dev background. I’m in charge of evaluating container environments for my company, and so far I really like Rancher.
As an ops person, I started building my infrastructure without knowing what would run on it, which might explain why some of my questions sound irrelevant; thanks in advance for your patience.
As we (like many companies, I think) have an existing VMware infrastructure, I want to put our ‘v0 infra’ on VMware before maybe going to bare metal (or something else).
So far, I was able to build a Rancher Kubernetes cluster (using RKE). This cluster runs on three VMs (CentOS 7 based, with Rancher 2.3.5 / Kubernetes 1.17.0 / Docker 19.3.5).
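In case it helps, the upstream cluster was built from a `cluster.yml` roughly like the sketch below (a minimal sketch: the addresses, SSH user and version string are placeholders, not my real values), followed by `rke up` and the Helm install of Rancher on top.

```yaml
# cluster.yml — minimal RKE sketch of the upstream cluster
# (addresses, user and kubernetes_version are placeholders)
nodes:
  - address: 10.0.0.11
    user: centos
    role: [controlplane, etcd, worker]
  - address: 10.0.0.12
    user: centos
    role: [controlplane, etcd, worker]
  - address: 10.0.0.13
    user: centos
    role: [controlplane, etcd, worker]
kubernetes_version: v1.17.0-rancher1-2   # pick one from `rke config --list-version --all`
```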
From this cluster, I was able to create a second ‘auto-deployed cluster’, currently five VMs provisioned with RancherOS and with vSphere configured as the cloud provider. I can now deploy containers with persistent storage on a VMware datastore.
This persistent storage is in fact a vmdk file (i.e. a VMware ‘disk’), so it makes sense that Rancher/Kubernetes also reconfigures one of the VMs of the cluster (attaching the disk to it) so the container can actually access this persistent storage.
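For reference, the volumes are provisioned through a StorageClass; below is a minimal sketch assuming the in-tree vSphere provisioner (the class name, datastore name and size are made up):

```yaml
# StorageClass backed by the in-tree vSphere provisioner
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: vsphere-thin          # hypothetical name
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin
  datastore: MyDatastore      # hypothetical datastore name
---
# A claim against that class, mounted by the workload
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: vsphere-thin
  resources:
    requests:
      storage: 5Gi
```

As far as I understand, these volumes are ReadWriteOnce: the vmdk is attached to exactly one VM at a time, which is probably related to my first question below.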
(Long explanation, forgive me…)
But here are the questions:
- The persistent storage ends up a little too tightly bound to one of my VMs: if that VM goes down, Rancher/Kubernetes doesn’t try to restart the container on another VM (which loses a big part of the interest).
- I did another experiment, removing the VM that hosted the container from the Rancher console. This worked a little too well: Rancher not only destroyed the VM but, as the persistent storage was still attached to it, the storage was removed as well, and when Rancher/Kubernetes then tried to reschedule the container on another node, “something was missing” (obviously).
Are these two behaviours expected, or have I misconfigured something?
If they are expected, do you really use vSphere storage, and do you have some use cases to share?
Many thanks for having read all of this!
Pascal