I’m currently working on a project to set up an Elasticsearch cluster hosted in our datacenter. I have a batch of hardware on the way in and have done some initial planning. After posting my configuration to the Elasticsearch forums, it was suggested that, because of the extra resources I have on the hardware, I run ES in Docker containers. I have figured out how to get all of that working just fine in a test environment with a couple of VMs. Since I’m getting 11 servers in to accomplish all of this, I decided to look into orchestrating it to make things a lot easier. I have a configuration working with ephemeral pods that do not persist data, but now I’m trying to figure out how to get persistence working.
What I would like to do is set up 3 directories on each of my 4 fast and 4 slow data storage hosts, then run a DaemonSet that puts 3 pods on each host, with each pod picking up one of the available PVs not in use by another pod. I have figured out how to create PVs that live on each host, but then I have to configure a separate workload for each of these volumes. On my test system this isn’t that bad, but when I have 4 fast storage hosts and 4 slow storage hosts each running 3 containers, the number of workloads I need to configure and manage starts getting to be a bit much, and just managing the Docker containers independently may be a better solution. Is there a way to set up pools of local volumes that a pod running on the host can grab when it comes up and start working with?
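For context, here is roughly what one of the hand-created volumes looks like on my test system: a `local` StorageClass with no provisioner plus one PV per directory per host. All the names, sizes, paths, and hostnames below are just placeholders from my sandbox, not my real config:

```yaml
# StorageClass with no dynamic provisioner: PVs are created by hand,
# and WaitForFirstConsumer delays binding until a pod is scheduled,
# so the PV on the right host gets chosen.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: es-fast-local            # placeholder name
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
# One PV per directory per host (3 per host in my case).
apiVersion: v1
kind: PersistentVolume
metadata:
  name: es-fast-host1-vol1       # placeholder name
spec:
  capacity:
    storage: 500Gi               # placeholder size
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: es-fast-local
  local:
    path: /mnt/es/vol1           # placeholder path on the host
  nodeAffinity:                  # local PVs must pin to their node
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - fast-host1     # placeholder hostname
```

Right now I end up pairing each PV like this with its own dedicated workload, which is the part that doesn’t scale.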
I’m new to working with Kubernetes and Rancher, so I may just be completely misunderstanding how Storage Classes, Persistent Volumes, and Persistent Volume Claims work together; any pointers in the right direction are appreciated.
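From what I understand so far, the piece that’s supposed to tie these together is the claim: a PVC only names a class and a size, and Kubernetes picks any matching unbound PV, which sounds like the “pool” behavior I’m after. A sketch of what I mean (again, names are placeholders matching my test class above):

```yaml
# A claim against the class rather than a specific PV; any unbound
# PV of class es-fast-local that satisfies the request can bind.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: es-data                  # placeholder name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: es-fast-local
  resources:
    requests:
      storage: 500Gi             # placeholder size
```

If that mental model is right, my remaining question is how to get each of the 3 pods per host to use a distinct claim without hand-writing a workload per volume.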