Not able to mount NFS Share

Hi,

Summary:
I’m trying to mount an NFS share in Rancher for use as persistent storage in a container. I get the following error:
mount.nfs: rpc.statd is not running but is required for remote locking.
mount.nfs: Either use '-o nolock' to keep locks local, or start statd

Full Explanation:
I’m using Rancher 2.3.2. I’m trying to mount an NFS share that is hosted on another server (192.168.1.31) and use it as a persistent volume in my Kubernetes cluster.

The persistent storage is just an NFS share configured with the path (/mnt/user/test) and the server IP above.

I have confirmed that I can mount this NFS share from another Linux computer on the network, and I can ping the NFS host from my Kubernetes nodes (the underlying OS is RancherOS and I can ping it from there). There is also no username or password required to access the share for now.
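
For reference, the manual test from the other machine was just a plain NFS mount, something along these lines (the local mount point is arbitrary):

sudo mkdir -p /mnt/test
sudo mount -t nfs 192.168.1.31:/mnt/user/test /mnt/test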

I have a test Docker container into which I’m trying to mount this share as persistent storage, but the mount fails.

In particular, this line:
mount.nfs: rpc.statd is not running but is required for remote locking.
mount.nfs: Either use '-o nolock' to keep locks local, or start statd

Any ideas on how to fix this? Thanks.

EDIT:
Adding nolock as a mount option when creating the persistent volume fixed the issue, but I’m not sure if this is how it should be fixed or if the rpcbind and nfslock services should be running instead.
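
In case it helps anyone, this is roughly what that option translates to in the PersistentVolume manifest (the name and capacity below are just placeholders, not my actual values):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-test            # placeholder name
spec:
  capacity:
    storage: 10Gi           # placeholder size
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.1.31
    path: /mnt/user/test
  mountOptions:
    - nolock                # keeps locks local, avoids the rpc.statd error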

On your cluster nodes you need nfs-common installed (which it already is) and also rpcbind. You might also need to enable and start rpcbind:

systemctl start rpcbind
systemctl enable rpcbind
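
If you want to confirm they are actually up on a node, rpcinfo is a quick check (rpc.statd registers itself as "status"):

rpcinfo -p localhost
# expect a "portmapper" entry on port 111, and a "status" entry once rpc.statd is running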

OK, so stupid question: since my cluster nodes run RancherOS and it doesn’t seem to have apt or yum, what’s the proper way to install rpcbind? Would it have to run in a system-docker container?

It also doesn’t have systemctl.
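
I was wondering about something along these lines in a privileged host-network system-docker container (completely untested, the image and commands are just a guess):

sudo system-docker run -d --name nfs-lock --net host --privileged \
  alpine sh -c 'apk add --no-cache rpcbind nfs-utils && rpcbind && exec rpc.statd -F'

but I don’t know if that’s the supported way to do it on RancherOS.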

We gave up on RancherOS and went to PhotonOS for reasons like this. It was also very troublesome to get our corporate CA chain trusted.

I don’t have an answer, but it feels like RancherOS might not meet your use case.

I have a few nodes running RancherOS as the underlying OS, with NFS configured as persistent storage for my apps, and I am not having any problems like that; everything is working as expected.

How are you trying to mount these volumes? How is your persistent volume configured?
Could you provide us with more information on how you set up your environment?

My setup is as follows:

  • 1 physical machine with Proxmox 6.0-1 installed
  • 3 VMs with 4 cores and 8 GB RAM each
  • Each VM was booted from “rancheros-proxmoxve.iso”
  • RancherOS 1.5.4, Linux kernel 4.14.138
  • I copied over my cloud-config file containing my SSH keys and installed the OS to disk with the standard “ros install” command
  • As per the docs, I did the most basic Rancher install on node 1:
docker run -d --restart=unless-stopped \
-p 80:80 -p 443:443 \
rancher/rancher:latest
  • I then deployed Kubernetes onto all 3 nodes using the command given on the Rancher cluster screen.
  • All of the above works successfully.

The end result is 3 VMs running RancherOS; node 1 has Rancher and Kubernetes, and nodes 2 and 3 are just Kubernetes nodes.

The NFS share is on another physical machine on the same network.

The error I got with NFS is described above, and the mount worked only after I used the nolock option. From what I understand, this is problematic if you want to allow ReadWriteMany so that multiple nodes can access the same NFS share: you don’t want the same file being written to by multiple processes at the same time, so file locking is essential. The proper way to do this is to have rpcbind running.
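
For completeness, once locking works and nolock can be dropped, the claim side for multi-node access would look roughly like this (names and sizes are placeholders, matching the example PV above):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-test-claim       # placeholder name
spec:
  accessModes:
    - ReadWriteMany          # multiple nodes can mount the same share
  storageClassName: ""       # bind to a pre-created NFS PV rather than a dynamic one
  volumeName: nfs-test       # placeholder, matches the PV name above
  resources:
    requests:
      storage: 10Gi          # placeholder size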

Let me know if you need anything else.

Thank you, this fixed my issue!

Hi,

Sorry for reopening the post.
I am trying to use an external NFS server.
I am using the following deployment:

I get this error:
MountVolume.SetUp failed for volume "nfs-client-root" : mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t nfs XXXX:/mnt/nfs_local/dev_pv /var/lib/kubelet/pods/a71c842f-0f59-4518-b86c-d3e2c8a53c85/volumes/kubernetes.io~nfs/nfs-client-root
Output: /usr/sbin/start-statd: line 23: systemctl: command not found
mount.nfs: rpc.statd is not running but is required for remote locking.
mount.nfs: Either use '-o nolock' to keep locks local, or start statd.

From this post: where do I need to install nfs-common, on the control plane and etcd nodes or on the workers?
And do I need to open firewall rules on those nodes or on the workers? (I use UFW.)

How did you add the nolock option? To which file?