To Storage or not to Storage

I know this has been asked several times or mentioned in several posts… but I am curious to know what people are using for shared storage on Rancher hosts in production now - and what you are planning to use when Rancher 1.2 comes out in a few weeks with the new storage features. Especially if you are not using AWS.

Is GlusterFS really so bad that it should not be used? It seems to be one of the most advanced solutions out there with regard to distribution and replication. I can see some of you are using it in big setups and seem quite happy with GlusterFS.

Is an NFS setup with synced/replicated storage, combined with something like Convoy, the way to go?

What other options are there that people have good experience with and would recommend?

Any thoughts you are willing to share are welcome.

Regards

Thomas


No-one has any thoughts to share here?

We are in Azure and using the Azure Files service for persistent storage. I posted our strategy here:

We tried using the GlusterFS/Convoy solution that was in the public catalog, but we found it was really sensitive to underlying system issues, and when it went down it would cause a cascade of stack/service failures. It just wasn’t ready for prime time.

In general in our rancher environments we avoid persistent storage for our applications. We run a couple of services like Redis and RabbitMQ which use the storage for persistence, but really do their work out of memory, so they don’t require really fast storage.


So far, AWS EFS with Convoy-EFS has worked really well.
We aren’t doing heavy database workloads off of it, so I can’t speak to how it holds up under high load, though.

The only time we feel it is when doing backups/restores, where we're moving large amounts of data quickly… it's just slow.

We really want to use AWS EFS, but it's only available if your whole stack is on Amazon… Persistent storage is our main problem these days. I haven't found any solution that fits our needs. :disappointed_relieved:

Have you checked out portworx.com? They have a nice (though paid) solution with no vendor lock-in.

Looks good! I will give it a shot tomorrow.

Thanks!

I have also recently been directed to:
https://www.quobyte.com/

This looks very interesting. They do not have official Rancher support yet - but I understand that they are keen to work with people who are looking for this.

I expect we will be giving this a try in the next couple of weeks.

Due to the lack of mature technology around container storage, we decided to avoid persistent storage as part of the container platform. To also increase portability and stay cloud-IaaS agnostic, we use as many managed cloud services as we can, like CloudAMQP for RabbitMQ-as-a-Service, Elasticsearch Cloud for ES, instaclustr.com for Cassandra, elephantsql.com for Postgres, etc.

This way we can utilize different cloud vendors for the container platform, and maintain consistent configs for dependent resources.

Hey everyone, I think I found a good way here. I'm still testing it; it uses a Ceph server.

The steps to follow right now are:

  1. Install a Ceph server: http://docs.ceph.com/docs/master/start/

  2. Configure the Docker volume driver to use RBD volumes; there's a tutorial here: http://ceph.com/geen-categorie/getting-started-with-the-docker-rbd-volume-plugin/ (see the sketch after this list)

  3. There is also an RBD driver for Rancher in this repository: https://github.com/niusmallnan/rancher-rbd-catalog

I’ll check what works best for production, and confirm here.

Are you still using Ceph? Any feedback and updates appreciated! Thanks!

@jama707

I did use it, but now I’m back to square one. I think because of the server I was using, it turned out to be very slow; maybe with SSDs it will work well.

What I’m using now in production is to save the data on the host itself, with an automated backup mechanism provided by my data center to back up the data just in case.

Please let me know if you reach any other solution.

Thanks