When you say “it” here, what are you referring to? I would suggest first logging in to the host nodes where you’re trying to set up the volume and doing an nslookup of the DNS name from there. If that doesn’t work, then naturally the docker container trying to mount the share won’t be able to resolve it either. Once you figure out why the host can’t resolve the name and fix that, I’m betting the rancher service will start working as well.
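As a quick sketch of that check, EFS mount targets get a DNS name of the form `<fs-id>.efs.<region>.amazonaws.com`; the file system ID and region below are placeholders, so substitute your own:

```shell
# Placeholders: fs-12345678 and us-east-1 are hypothetical values.
EFS_ID="fs-12345678"
AWS_REGION="us-east-1"
EFS_DNS="${EFS_ID}.efs.${AWS_REGION}.amazonaws.com"
echo "$EFS_DNS"
# On each host node, verify resolution with:
#   nslookup "$EFS_DNS"
# If that fails, check that the VPC has DNS resolution/DNS hostnames
# enabled and that a mount target exists in the host's AZ.
```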
Just to be clear, for the purposes of my own testing, I:
- Created the EFS file system in AWS and made sure it had mount targets in all of the AZs where I was deploying rancher host nodes
- Went to the “Library” for my environment and started the Rancher NFS stack as described in my previous post
- Went to the Infrastructure->Storage menu item and created a persistent volume for the rancher-nfs service
- Went to Infrastructure->Hosts and selected one of my hosts, then clicked “Add Container”; I then set up a no-frills ubuntu container (the default) and attached the volume I just created
- Once the container was up, I went to it in the Rancher UI and selected “Execute Shell” and went to the volume location and created a few files
- Started a second container similar to the first, attached to the same volume, and opened another shell to verify that the files were there
- Finally, just to be absolutely sure, I manually mounted the EFS file system on a completely separate EC2 instance and verified that the files I created were where I expected them to be
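For that last step, this is roughly the manual mount I mean; the file system DNS name is a placeholder, and the block just prints the command you would run as root on the separate instance rather than executing it:

```shell
# Hypothetical EFS DNS name; substitute your file system ID and region.
EFS_DNS="fs-12345678.efs.us-east-1.amazonaws.com"
MOUNT_POINT="/mnt/efs"
# Print the NFSv4.1 mount command to run (as root) on the EC2 instance:
echo "mount -t nfs4 -o nfsvers=4.1,hard,timeo=600,retrans=2 ${EFS_DNS}:/ ${MOUNT_POINT}"
# After mounting, an ls of ${MOUNT_POINT} should show the files
# created from inside the Rancher containers.
```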