NFS server - no support in current kernel

I wanted to run cpuguy83/nfs-server but the daemon can’t start:

[rancher@RancherOS2_SRV ~]$ docker logs -f dc5fc55c0ec5cf076eeddf33512daf125a9782b7ea7721516ba0c60bb34b220e

Not starting NFS kernel daemon: no support in current kernel.
Setting up watches.
Watches established.

How can I modify RancherOS so it works?

Regards
Guido

I’ve run this image without issue in the past. IIRC the issue was caused by a recent update. Unfortunately ‘latest’ is the only version available on Docker Hub.

Luckily for me I had an old version stored somewhere that I could use, but I didn’t like the idea of relying on some old image, so I created my own. I’ve not documented it yet, but perhaps try this: https://hub.docker.com/r/itsthenetwork/nfs-server-ubuntu/.

Update: I just remembered I actually had this issue on Boot2Docker, not RancherOS. Coincidentally, I need to get this working thanks to a new request so I’ll post an update on what I find.

OK, I got an Alpine image working, see here: https://hub.docker.com/r/itsthenetwork/nfs-server-alpine/ but I also ran the first two commands listed here: http://docs.rancher.com/os/configuration/kernel-modules-kernel-headers/ so I don’t know if one will work independently of the other.

I tried your container… but there seems to be a small problem.

Starting Confd population of files…
confd 0.12.0-dev
2016-10-26T09:31:04Z RancherOS_STG /usr/bin/confd[11]: INFO Backend set to env
2016-10-26T09:31:04Z RancherOS_STG /usr/bin/confd[11]: INFO Starting confd
2016-10-26T09:31:04Z RancherOS_STG /usr/bin/confd[11]: INFO Backend nodes set to
2016-10-26T09:31:04Z RancherOS_STG /usr/bin/confd[11]: INFO /etc/exports has md5sum 4f1bb7b2412ce5952ecb5ec22d8ed99d should be 43c6557e46ab874a474a7bfc191f7d62
2016-10-26T09:31:04Z RancherOS_STG /usr/bin/confd[11]: INFO Target config /etc/exports out of sync
2016-10-26T09:31:04Z RancherOS_STG /usr/bin/confd[11]: INFO Target config /etc/exports has been updated
Displaying /etc/exports contents…
/nfsshare *(rw,fsid=0,async,no_subtree_check,no_auth_nlm,insecure)
Starting rpcbind… -w
Displaying rpcbind status…
program version netid address service owner
100000 4 tcp6 ::.0.111 - superuser
100000 3 tcp6 ::.0.111 - superuser
100000 4 udp6 ::.0.111 - superuser
100000 3 udp6 ::.0.111 - superuser
100000 4 tcp 0.0.0.0.0.111 - superuser
100000 3 tcp 0.0.0.0.0.111 - superuser
100000 2 tcp 0.0.0.0.0.111 - superuser
100000 4 udp 0.0.0.0.0.111 - superuser
100000 3 udp 0.0.0.0.0.111 - superuser
100000 2 udp 0.0.0.0.0.111 - superuser
100000 4 local /var/run/rpcbind.sock - superuser
100000 3 local /var/run/rpcbind.sock - superuser
Starting NFS in the background…
rpc.nfsd: knfsd is currently up
exporting *:/nfsshare

But when I try to mount, I get permission errors:

sudo mount 192.168.60.220:/nfsshare test/
mount.nfs: access denied by server while mounting 192.168.60.220:/nfsshare

Yes, I just found that too.

I’ve fixed it such that NFS v4 works now and have tested extensively (note there’s no portmapper required with v4). I’ve been unable to get v3 to work reliably; sometimes I can mount, sometimes I can’t and there’s no discernible logic that I can identify - hence, I’ve disabled it.
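For anyone following along, here’s a sketch of the client-side v4 mount against this export (IP and share name taken from the posts above). Because the export uses fsid=0, /nfsshare becomes the NFSv4 pseudo-root, so a v4 client mounts the server’s root path rather than /nfsshare. The actual mount commands are left as comments since they need root and a live server:

```shell
# With fsid=0 in /etc/exports, /nfsshare is the NFSv4 pseudo-root, so a
# v4 client mounts the server's root path (needs root and a live server):
#   sudo mount -t nfs -o nfsvers=4 192.168.60.220:/ /mnt/share
# A v3-style path mount will be refused once v3 is disabled:
#   sudo mount 192.168.60.220:/nfsshare /mnt/share   # access denied
echo "NFSv4 clients mount 192.168.60.220:/"
```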

About to test a fresh build without kernel-headers installed shortly, will report back.

@sjiveson, could you share the Dockerfile of https://hub.docker.com/r/itsthenetwork/nfs-server-alpine/?

I want to build an image with s3fs and nfs-server in Alpine, because this image doesn’t work. The error is the same as posted by @Guido_Steiner with the cpuguy83/nfs-server image, and I can’t get it running even after executing the commands you mentioned from the Rancher docs.

$ sudo ros service enable kernel-headers
$ sudo ros service up -d kernel-headers

They don’t work for me. Still no permissions.

Here is the log from the container:

27.10.2016 10:03:41 Displaying /etc/exports contents…
27.10.2016 10:03:41 /nfsshare *(rw,fsid=0,async,no_subtree_check,no_auth_nlm,insecure)
27.10.2016 10:03:41 rpc.nfsd: Unable to access /proc/fs/nfsd errno 2 (No such file or directory).
27.10.2016 10:03:41 Please try, as root, 'mount -t nfsd nfsd /proc/fs/nfsd' and then restart rpc.nfsd to correct the problem
27.10.2016 10:03:41 exporting *:/nfsshare
27.10.2016 10:03:41 Starting NFS in the background…

Are you running it with privileged? This is driving me insane; it was working flawlessly for me yesterday but isn’t this morning.

My container starts just fine, I just can’t connect.

Oops :fearful:

Now it works:

nfs:
  environment:
    SHARED_DIRECTORY: /nfsshare
  labels:
    io.rancher.container.pull_image: always
  tty: true
  image: itsthenetwork/nfs-server-alpine
  privileged: true
  volumes:
    - /home/rancher/nfsshare:/nfsshare
  stdin_open: true
  net: host

Can you mount from a client?

From RancherOS, yes… but convoy-nfs still makes trouble. I did get convoy-nfs running with a Debian NFS server.

[rancher@RancherOS2_SRV ~]$ sudo mount -o nfsvers=4 192.168.60.201:/ share/
[rancher@RancherOS2_SRV ~]$ cd share
[rancher@RancherOS2_SRV share]$ ls
[rancher@RancherOS2_SRV share]$

27.10.2016 11:23:09 time="2016-10-27T09:23:09Z" level=error msg="mkdir /var/lib/rancher/convoy/convoy-nfs-3b2124eb-8593-482d-8829-dd965b52bb79/mnt/config: permission denied"
27.10.2016 11:23:09 {
27.10.2016 11:23:09   "Error": "mkdir /var/lib/rancher/convoy/convoy-nfs-3b2124eb-8593-482d-8829-dd965b52bb79/mnt/config: permission denied"
27.10.2016 11:23:09 }
27.10.2016 11:23:09 time="2016-10-27T09:23:09Z" level=info msg="convoy exited with error: exit status 1"
27.10.2016 11:23:09 time="2016-10-27T09:23:09Z" level=info msg=Exiting.

Looks like permissions are wrong

I changed the nfsshare permissions inside the container and now it seems to work with convoy-nfs:

[rancher@RancherOS2_SRV ~]$ sudo mount -o nfsvers=4 192.168.60.201:/ share/
[rancher@RancherOS2_SRV ~]$ cd share/
[rancher@RancherOS2_SRV share]$ ls
config gaga
[rancher@RancherOS2_SRV share]$
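For reference, here’s a sketch of the kind of permission change meant here. The exact mode used wasn’t posted, so world-writable (like the Dockerfile later in this thread uses for its /nfs directory) is an assumption; it’s demonstrated on a throwaway directory, whereas inside the container the target would be /nfsshare:

```shell
# Sketch (assumption: world-writable mode); inside the container the
# target would be /nfsshare, here a throwaway directory is used:
mkdir -p /tmp/nfsshare-demo
chmod 777 /tmp/nfsshare-demo
stat -c '%a' /tmp/nfsshare-demo   # prints 777
```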

Unfortunately I know nothing about Convoy.

Any chance you could post a copy of the /usr/bin/nfsd.sh from your working image please? I’d like to compare it to what I have.

@bruno.galindro

It currently looks like this. See the following post for what the entrypoint script looks like. My latest version doesn’t bother with rpcbind and works fine.

FROM alpine:latest

RUN apk add -U -v nfs-utils bash iproute2 && \
    rm -rf /var/cache/apk/* /tmp/* && \
    rm -f /sbin/halt /sbin/poweroff /sbin/reboot && \
    mkdir -p /var/lib/nfs/rpc_pipefs && \
    mkdir -p /var/lib/nfs/v4recovery && \
    mkdir -p /nfs && chmod -R 777 /nfs && \
    echo "rpc_pipefs    /var/lib/nfs/rpc_pipefs rpc_pipefs      defaults        0       0" >> /etc/fstab && \
    echo "nfsd  /proc/fs/nfsd   nfsd    defaults        0       0" >> /etc/fstab

COPY confd-binary /usr/bin/confd
COPY confd/confd.toml /etc/confd/confd.toml
COPY confd/toml/* /etc/confd/conf.d/
COPY confd/tmpl/* /etc/confd/templates/

COPY nfsd.sh /usr/bin/nfsd.sh
COPY .bashrc /root/.bashrc

RUN chmod +x /usr/bin/nfsd.sh /usr/bin/confd

ENTRYPOINT ["/usr/bin/nfsd.sh"]

cat /usr/bin/nfsd.sh

#!/bin/bash

# Make sure we react to these signals by running stop() when we see them - for clean shutdown
# And then exiting
trap "stop; exit 0;" SIGTERM SIGINT

stop()
{
  # We're here because we've seen SIGTERM, likely via a Docker stop command or similar
  # Let's shutdown cleanly
  echo "SIGTERM caught, terminating NFS process(es)..."
  /usr/sbin/exportfs -ua
  pid1=$(pidof rpc.nfsd)
  pid2=$(pidof rpc.mountd)
  kill -TERM $pid1 $pid2 > /dev/null 2>&1
  echo "Terminated."
  exit
}

if [ -z "$SHARED_DIRECTORY" ]; then
  echo "The SHARED_DIRECTORY environment variable is null, exiting..."
  exit 1
fi

# This loop runs until we've started up successfully
while true; do

  # Check if NFS is running by recording its PID (if it's not running $pid will be null):
  pid=$(pidof rpc.mountd)

  # If $pid is null, do this to start or restart NFS:
  while [ -z "$pid" ]; do
    echo "Starting Confd population of files..."
    /usr/bin/confd -version
    /usr/bin/confd -onetime
    echo "Displaying /etc/exports contents..."
    cat /etc/exports

    # Only required if v3 will be used
    echo "Starting rpcbind..."
    /sbin/rpcbind -w
    echo "Displaying rpcbind status..."
    /sbin/rpcinfo

    # Only required if v3 will be used
    # /usr/sbin/rpc.idmapd
    # /usr/sbin/rpc.gssd -v
    # /usr/sbin/rpc.statd

    /usr/sbin/rpc.nfsd --debug 8
    /usr/sbin/exportfs -rv
    echo "Starting NFS in the background..."
    /usr/sbin/rpc.mountd --debug all --no-udp --exports-file /etc/exports

    # Check if NFS is now running by recording its PID (if it's not running $pid will be null):
    pid=$(pidof rpc.mountd)

    # If $pid is null, startup failed; log the fact and sleep for 2s
    # We'll then automatically loop through and try again
    if [ -z "$pid" ]; then
      echo "Startup of NFS failed, sleeping for 2s, then retrying..."
      sleep 2
    fi

  done

  # Break this outer loop once we've started up successfully
  # Otherwise, we'll silently restart and Docker won't know
  break

done

while true; do

  # Check if NFS is STILL running by recording its PID (if it's not running $pid will be null):
  pid=$(pidof rpc.mountd)
  # If it is not, let's kill our PID1 process (this script) by breaking out of this while loop:
  # This ensures Docker observes the failure and handles it as necessary
  if [ -z "$pid" ]; then
    echo "NFS has failed, exiting, so Docker can restart the container..."
    break
  fi

  # If it is, give the CPU a rest
  sleep 1

done

sleep 1
exit 1

Got it pretty much working with convoy-nfs. The only issue I still have is that the mariadb container wants to chown inside the NFS mount. I added no_root_squash to /etc/exports, but it looks like your script replaces the file on restart.

This is pretty cool and is my solution for multi-host.

Yes :slight_smile: I changed your exports.tmpl to this:

{{getenv "SHARED_DIRECTORY"}} *(rw,fsid=0,async,no_subtree_check,no_auth_nlm,insecure,no_root_squash)
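To sanity-check what that template renders to, plain shell expansion can stand in for confd’s getenv (a sketch only; in the image the real rendering is done by confd -onetime):

```shell
# Stand-in for confd's {{getenv "SHARED_DIRECTORY"}} using shell expansion:
SHARED_DIRECTORY=/nfsshare
echo "${SHARED_DIRECTORY} *(rw,fsid=0,async,no_subtree_check,no_auth_nlm,insecure,no_root_squash)"
# prints: /nfsshare *(rw,fsid=0,async,no_subtree_check,no_auth_nlm,insecure,no_root_squash)
```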

and now I can start this:

wordpress:
  image: wordpress
  links:
    - db:mysql
  ports:
    - ${public_port}:80
db:
  image: mariadb
  environment:
    MYSQL_ROOT_PASSWORD: example
  volumes:
    - wordpress:/var/lib/mysql
  stdin_open: true
  volume_driver: convoy-nfs

and it is working :slight_smile:
