How to implement the 'Add Rancher EFS Stack' catalog service?

Hi, apologies for what may be an obvious question, I’m new to Rancher and still finding my way about. I’ve also posted this in ‘Convoy’ as it is storage (NFS) related, hope it’s the right place.

I’m just wondering if anyone might be able to give me a steer on how to use the catalog certified ‘Add Rancher EFS Stack’ (Docker volume and Flex Vol plugin for Amazon EFS)? The service installs, but I can’t work out how to create EFS volumes and mount them on hosts.
For example, I have an existing AWS EFS volume, would it connect to that? And if so how? Or, does it create a new vol? (I’m thinking not?).

What I would like to be able to achieve is to mount /opt/go-server/work (for GoCD server) to an EFS mnt.

Any pointers would be most appreciated, no doubt I’m probably missing something obvious due to my limited comprehension of the service.

+1 here

Before the 1.2 upgrade I used convoy-efs flawlessly. After upgrading I started a new environment and can’t figure out how to mount my EFS. For a certified stack I can’t find any documentation, nothing on Docker Hub either. Please, staff, help!

+1

I am also not able to get the “storage-efs” service to work.
Is there any documentation anywhere, or the sources, for the “storage-efs” container image?

I’ve documented my struggles with the new rancher-efs service here: Convoy EFS to Rancher EFS

It isn’t that it doesn’t work for me; it actually works quite well and even, in some sense, more logically than convoy-efs did; it’s that the behavior of rancher-efs is so different from convoy-efs that you can’t really migrate from the older one to the new one and nothing exists that works like the old one. At the moment, we’re stumped on how to move some of our existing applications into Rancher 1.2.

Many thanks for the link Matt, I think in this thread we are less far along and are still looking for a basic How-To on implementation.
That said, some of the comments from the other thread re the single zone mount are interesting. I’ve implemented dynamic EFS mounts for AWS OpsWorks instances using a custom Chef recipe & instance metadata, which you refer to, and it would be good if Rancher-EFS worked in that way, i.e. using the mount target that corresponds to each instance’s zone.
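For anyone curious what the Chef-recipe approach looks like, here’s a rough sketch of how a host can derive its own AZ-specific EFS mount-target DNS name; all the values below are placeholders, not from this thread, and on a real EC2 instance the AZ would come from the instance metadata service:

```shell
# Sketch only: build the AZ-specific EFS mount-target DNS name for the
# host's own availability zone. On a real EC2 instance you would fetch
# the AZ from instance metadata instead of hard-coding it:
#   AZ=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone)
AZ="us-east-1a"        # placeholder; normally read from instance metadata
FS_ID="fs-12345678"    # placeholder EFS file-system ID
REGION="${AZ%?}"       # drop the trailing zone letter to get the region
echo "${AZ}.${FS_ID}.efs.${REGION}.amazonaws.com"
```

This prints `us-east-1a.fs-12345678.efs.us-east-1.amazonaws.com`, which is the per-AZ DNS name format AWS documents for EFS mount targets, so each host mounts the target in its own zone.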

+1 on Rancher-EFS using the correct AZ matching the host that it’s running on. AFAIK there’s no extra cost to supporting multiple AZs so it doesn’t make sense to me for Rancher not to set it up that way and utilize that.

Amazon just made a change to their EFS service that simplified things for me immensely. I now have been able to get EFS working with both “rancher-efs” and, as of today, “rancher-nfs” as well, so perhaps I can help out here as well.

The question to ask yourself before deciding which service to use (rancher-efs or rancher-nfs) is: “Do you want a new EFS file system for every storage volume you create in the Rancher UI, or do you want a new sub-directory on a single EFS file system for each storage volume you create in the Rancher UI?” For the former, use rancher-efs; for the latter, rancher-nfs.

I’d be happy to share the details of my EFS setup in AWS and in Rancher. As anyone who uses AWS knows, there are sometimes lots of little details to get right so that things can talk to each other (security groups, network routes, private/public subnets, etc).

If you could run through the steps of implementing Rancher EFS on AWS, it would be much appreciated.

I went into a bit of detail for the Rancher NFS on-top-of AWS EFS approach in a different thread: Convoy EFS to Rancher EFS

As for the normal Rancher EFS on-top-of AWS EFS, well that’s pretty straightforward. You just go to Catalog->Library in the Rancher 1.2 UI and select the Rancher EFS service. Then enter your AWS secret and access keys. Then every persistent storage volume you create at Infrastructure->Storage will create a new EFS file system in AWS. The Rancher docs show how to use those storage volumes in your stacks and services.
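To tie this back to the GoCD example from the top of the thread, a compose-v2-style sketch might look like the following. This is illustrative only, not something from this thread: the service and volume names are made up, and the image name is just the public GoCD server image. The idea is that a named volume using the `rancher-efs` driver becomes its own EFS file system the first time a container uses it:

```yaml
version: '2'
services:
  go-server:
    image: gocd/gocd-server
    volumes:
      # mount the EFS-backed volume at the GoCD work directory
      - go-work:/opt/go-server/work
volumes:
  go-work:
    driver: rancher-efs   # each such volume maps to a separate EFS file system
</imports>
```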

As mentioned above, when using the Rancher EFS -> AWS EFS approach, AWS does start you out with a limit of 10 EFS file systems per account, so you may hit that pretty quickly depending on your usage scenario.

Many thanks for the update Matt, it does sound quite straightforward however I’m sure I must be missing something really obvious…

I’ve added the EFS catalog item (the AWS keys used have admin rights), then I created a volume as you describe at Infrastructure > Storage, however nothing gets created in EFS. And even if it were, I’m wondering how Rancher would know which region to create the volume in, which security groups to use, etc.?

All I seem to end up with is the following :neutral_face:

You’re not doing anything wrong. It’s not exactly intuitive, but the EFS file system won’t really be created in AWS until you actually use it in a container.

We are getting closer with this but are still unable to get any volume mounted. What we have identified is that using Amazon Linux is not supported. We have tried with CentOS today but the process gets stuck in the initializing state.

We are using rancher-nfs, specifying the EFS DNS name as advised.
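For anyone stuck at the same point: the DNS name rancher-nfs expects as its NFS server is the regional EFS endpoint. A quick sketch, with a placeholder file-system ID and region:

```shell
# Sketch with placeholder values: build the regional EFS DNS name that
# the rancher-nfs stack is pointed at as its NFS server.
FS_ID="fs-12345678"    # placeholder EFS file-system ID
REGION="eu-west-1"     # placeholder AWS region
echo "${FS_ID}.efs.${REGION}.amazonaws.com"

# To check the export is actually reachable from a host (the EFS security
# group must allow NFS on port 2049 from the host), something like:
#   sudo mkdir -p /mnt/efs-test
#   sudo mount -t nfs4 -o nfsvers=4.1 fs-12345678.efs.eu-west-1.amazonaws.com:/ /mnt/efs-test
```

If that manual mount hangs, it’s usually the security group or a missing mount target in the host’s subnet, which would also explain a stack stuck in the initializing state.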

Hi all, if it helps anybody we have created a detailed How-To for setting up AWS EFS using Rancher-NFS.

https://skeltonthatcher.com/blog/container-clustering-rancher-server-part-3-aws-efs-mounts-using-rancher-nfs/