Local Storage Help

Hi all,

I’m currently working on a project to set up an Elasticsearch cluster hosted in our datacenter. I have a bunch of hardware on the way and did some initial planning. After posting my configuration to the Elasticsearch forums, it was suggested that, given the extra resources on the hardware, I run ES in Docker containers. I have all of that working just fine in a test environment with a couple of VMs. Since I’m getting 11 servers in to accomplish this, I decided to look into orchestration to make everything easier to manage. I have a configuration working with ephemeral pods that do not persist data, but now I’m trying to figure out how to get persistence working.

What I would like to do is set up 3 directories on each of my 4 fast and 4 slow data storage hosts, then use a DaemonSet to run 3 pods on each host, with each pod picking up one of the available PVs not already in use by another pod. I have figured out how to create PVs that live on each host, but then I have to configure a separate workload for each one of these volumes. On my test system that isn’t too bad, but with 4 fast storage hosts and 4 slow storage hosts, each running 3 containers, the number of workloads I need to configure and manage starts getting to be a bit much, and just managing the Docker containers independently might be the better solution. Is there a way to set up pools of local volumes that a pod running on the host can grab when it comes up and start working with?
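For reference, this is roughly what I mean by a PV that lives on one host, taken from my test setup (the path, hostname, size, and names are just placeholders for my environment):

```yaml
# One local PV pinned to a single host via node affinity.
# Path, hostname, capacity, and storage class name are placeholders.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: es-data-host1-0
spec:
  capacity:
    storage: 500Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-fast
  local:
    path: /mnt/es-data-0
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - es-data-host1
```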

I’m new to working with Kubernetes and Rancher, so I may just be completely misunderstanding how Storage Classes, Persistent Volumes, and Persistent Volume Claims work together. Any pointers in the right direction are appreciated.

I have made a little progress, I think. I created a StorageClass with kubernetes.io/no-provisioner as the provisioner and volumeBindingMode set to WaitForFirstConsumer. Then I created 3 PVs using the local volume plugin and set the node affinity so that each one is pinned to the hostname I want. I then created a workload with a persistent volume claim named “data” and gave it a mount point. This works for the first pod that I spin up, but once I scale to more than one pod, they all try to bind to the same persistent volume. The first pod works fine, but the additional ones do not, since ES locks the data directory. I was trying to get the second pod to grab another PV from the StorageClass and the third to do the same. Is there something I’m missing here?
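If it helps, here is roughly what I have at this point, condensed from my test setup (the names like local-fast and the size are just what I used there):

```yaml
# StorageClass for the pool of pre-created local PVs.
# No dynamic provisioning; binding waits until a pod is scheduled.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-fast
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
# The "data" claim the workload mounts at its data directory.
# As far as I can tell, every replica of the workload shares this one
# claim, which seems to be why all the pods end up on the same PV.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-fast
  resources:
    requests:
      storage: 500Gi
```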