Summary:
I’m trying to mount an NFS share in Rancher for use as persistent storage in a container. I get the following error:

mount.nfs: rpc.statd is not running but is required for remote locking.
mount.nfs: Either use '-o nolock' to keep locks local, or start statd
Full Explanation:
I’m using Rancher 2.3.2. I’m trying to mount an NFS share that is hosted on another server (192.168.1.31) and use it as a persistent volume in my Kubernetes cluster.
The persistent storage is just an NFS share with the path /mnt/user/test and the server IP above.
I have confirmed that I can mount this NFS share from another Linux computer on the network, and I can ping the NFS host from my Kubernetes nodes (the underlying OS is RancherOS and I can ping it from there). There is also no username or password required to access the share for now.
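For reference, the persistent volume described above should be roughly equivalent to a stock Kubernetes NFS PV definition like this (a sketch; the name and size are placeholders, only the server and path come from above):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-test            # placeholder name
spec:
  capacity:
    storage: 10Gi           # placeholder size
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.1.31    # NFS host on the LAN
    path: /mnt/user/test    # exported share
```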
I have a test Docker container into which I’m trying to mount this share as persistent storage, but I get an error. In particular, this line:

mount.nfs: rpc.statd is not running but is required for remote locking.
mount.nfs: Either use '-o nolock' to keep locks local, or start statd
Any ideas on how to fix this? Thanks.
EDIT:
Adding nolock as a mount option when creating the persistent volume fixed the issue, but I’m not sure whether this is how it should be fixed or whether the rpcbind and nfslock services should be running instead.
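For anyone hitting the same error, the nolock change corresponds roughly to a mount option on the PV spec, something like this (again a sketch; the name and size are placeholders):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-test            # placeholder name
spec:
  capacity:
    storage: 10Gi           # placeholder size
  accessModes:
    - ReadWriteMany
  mountOptions:
    - nolock                # keep locks local so rpc.statd isn't needed on the node
  nfs:
    server: 192.168.1.31
    path: /mnt/user/test
```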
OK, possibly a stupid question: since my cluster nodes run RancherOS and it doesn’t seem to have apt or yum, what’s the proper way to install rpcbind? Would it have to run in a System Docker container?
I have a few nodes running RancherOS as the underlying OS, with NFS configured as persistent storage for my apps, and I am not seeing any problem like that; everything is working as expected.
How are you trying to mount these volumes? How is your persistent volume configured?
Could you provide us with more information on how you set up your environment?
I then deployed Kubernetes onto all 3 nodes using the command given on the Rancher cluster screen.
All of the above works successfully.
The end result is 3 VMs with RancherOS: node 1 has Rancher and Kubernetes, and nodes 2 and 3 are just Kubernetes nodes.
The NFS share is on another physical machine on the same network.
The error I got with NFS is described above, and the mount worked only after I used the nolock option. From what I understand, this is problematic if you want to allow ReadWriteMany so that multiple nodes can access the same NFS share, because you don’t want the same file being written to by multiple processes at the same time, so file locking is essential. The proper way to do this is to have rpcbind running.
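To be clear about what ReadWriteMany means here, the claim a workload would use looks something like this (a sketch; the name and size are placeholders, and leaving storageClassName empty to bind to a pre-created PV is an assumption about the setup):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-test-claim      # placeholder name
spec:
  accessModes:
    - ReadWriteMany         # several nodes can mount the same share read-write
  storageClassName: ""      # bind to a pre-created NFS PV instead of a StorageClass
  resources:
    requests:
      storage: 10Gi         # placeholder size
```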
Sorry for reopening this post.
I am trying to use an external NFS server.
I am using the following deployment:
I get this error:
MountVolume.SetUp failed for volume "nfs-client-root" : mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t nfs XXXX:/mnt/nfs_local/dev_pv /var/lib/kubelet/pods/a71c842f-0f59-4518-b86c-d3e2c8a53c85/volumes/kubernetes.io~nfs/nfs-client-root
Output: /usr/sbin/start-statd: line 23: systemctl: command not found
mount.nfs: rpc.statd is not running but is required for remote locking.
mount.nfs: Either use '-o nolock' to keep locks local, or start statd.
From this post, where do I need to install nfs-common: on the control plane and etcd nodes, or on the workers?
And do I need to open firewall rules on those nodes or on the workers? (I use UFW.)