NFS server - no support in current kernel

Brilliant. A reboot of my client fixed my problem too.

I’m just building and testing again and have added no_root_squash. I’ll let you know when I push it up to Docker Hub. Done.

@sjiveson thank you very much for your assistance.

I can see that you don’t declare EXPOSE in the Dockerfile because you use the host network. However, I need to use this image on the Rancher network, so that a client container can be linked to the server container.

So I think I need to make the following change to nfsd.sh:

--- /nfsd-original.sh	Thu Oct 27 15:06:39 2016
+++ /nfsd.sh	Thu Oct 27 16:23:22 2016
@@ -52,7 +52,7 @@
     /usr/sbin/rpc.nfsd --debug 8 
     /usr/sbin/exportfs -rv 
     echo "Starting NFS in the background..."
-    /usr/sbin/rpc.mountd --debug all --no-udp --exports-file /etc/exports
+    /usr/sbin/rpc.mountd --debug all -p 32767 --no-udp --exports-file /etc/exports
 
     # Check if NFS is now running by recording it's PID (if it's not running $pid will be null):
     pid=$(pidof rpc.mountd)

And add this to the Dockerfile (2049 is the fixed NFSv4 port, and 32767 is the port mountd is now pinned to):

EXPOSE 2049/tcp 32767/tcp

Am I right?
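For what it’s worth, once the container is running you can verify the pin took effect. A quick check, assuming the container is named nfsd (a hypothetical name) and that rpcinfo is present in the image:

# mountd should now be registered on 32767 rather than a random
# port handed out by rpcbind; nfs itself stays on 2049
docker exec nfsd sh -c "rpcinfo -p | grep -E 'mountd|nfs'"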

After a few hours, I was finally able to build my final image (Alpine + s3fs + NFS server + convoy-nfs).

baseimage is a private base image that has confd and s6 installed (with s6, the scripts in /etc/cont-init.d run once at startup, and each /etc/services.d/*/run script is supervised). It is based on Alpine 3.4.

This is the content of my repo:

.
β”œβ”€β”€ Dockerfile
β”œβ”€β”€ Makefile
β”œβ”€β”€ root
β”‚   └── etc
β”‚       β”œβ”€β”€ confd
β”‚       β”‚   β”œβ”€β”€ conf.d
β”‚       β”‚   β”‚   └── exports.toml
β”‚       β”‚   β”œβ”€β”€ confd.toml
β”‚       β”‚   └── templates
β”‚       β”‚       └── exports.tmpl
β”‚       β”œβ”€β”€ cont-init.d
β”‚       β”‚   └── 10-nfs
β”‚       └── services.d
β”‚           β”œβ”€β”€ nfs
β”‚           β”‚   └── run
β”‚           └── s3fs
β”‚               └── run
└── src
    └── s3fs-fuse
        β”œβ”€β”€ AUTHORS
        β”œβ”€β”€ autogen.sh
        β”œβ”€β”€ ChangeLog
        β”œβ”€β”€ configure.ac
        β”œβ”€β”€ COPYING
        β”œβ”€β”€ doc
        β”‚   β”œβ”€β”€ Makefile.am
        β”‚   └── man
        β”‚       └── s3fs.1
        β”œβ”€β”€ INSTALL
        β”œβ”€β”€ Makefile.am
        β”œβ”€β”€ NEWS
        β”œβ”€β”€ README
        β”œβ”€β”€ src
        β”‚   β”œβ”€β”€ cache.cpp
        β”‚   β”œβ”€β”€ cache.h
        β”‚   β”œβ”€β”€ common_auth.cpp
        β”‚   β”œβ”€β”€ common.h
        β”‚   β”œβ”€β”€ curl.cpp
        β”‚   β”œβ”€β”€ curl.h
        β”‚   β”œβ”€β”€ fdcache.cpp
        β”‚   β”œβ”€β”€ fdcache.h
        β”‚   β”œβ”€β”€ gnutls_auth.cpp
        β”‚   β”œβ”€β”€ Makefile.am
        β”‚   β”œβ”€β”€ nss_auth.cpp
        β”‚   β”œβ”€β”€ openssl_auth.cpp
        β”‚   β”œβ”€β”€ s3fs_auth.h
        β”‚   β”œβ”€β”€ s3fs.cpp
        β”‚   β”œβ”€β”€ s3fs.h
        β”‚   β”œβ”€β”€ s3fs_util.cpp
        β”‚   β”œβ”€β”€ s3fs_util.h
        β”‚   β”œβ”€β”€ string_util.cpp
        β”‚   β”œβ”€β”€ string_util.h
        β”‚   β”œβ”€β”€ test_string_util.cpp
        β”‚   └── test_util.h
        └── test
            β”œβ”€β”€ integration-test-common.sh
            β”œβ”€β”€ integration-test-main.sh
            β”œβ”€β”€ Makefile.am
            β”œβ”€β”€ mergedir.sh
            β”œβ”€β”€ passwd-s3fs
            β”œβ”€β”€ rename_before_close.c
            β”œβ”€β”€ require-root.sh
            β”œβ”€β”€ s3proxy.conf
            β”œβ”€β”€ sample_ahbe.conf
            β”œβ”€β”€ sample_delcache.sh
            └── small-integration-test.sh

15 directories, 51 files
src:

This is a clone of https://github.com/s3fs-fuse/s3fs-fuse

Dockerfile:
FROM baseimage

#------------------------------
# ENVIRONMENT VARIABLES
#------------------------------

ENV SHARED_DIRECTORY=/s3fs \
    AWSACCESSKEYID=myaccesskey \
    AWSSECRETACCESSKEY=mysecretkey \
    BUCKET=mybucket

#------------------------------
# INSTRUCTIONS
#------------------------------

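# Runtime dependencies for NFS and s3fs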
RUN apk add --update \
    curl \
    fuse \
    iproute2 \
    libstdc++ \
    libxml2 \
    nfs-utils

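# Build-time dependencies for s3fs, removed later with 'apk del .builddeps'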
RUN apk add --virtual .builddeps \
    autoconf \
    automake \
    build-base \
    curl-dev \
    fuse-dev \
    libxml2-dev \
    openssl-dev

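# Clear the apk cache, remove the halt/poweroff/reboot binaries, and prepare
# the directories and fstab entries the NFS daemons expect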
RUN rm -rf /var/cache/apk/* /tmp/* && \
    rm -f /sbin/halt /sbin/poweroff /sbin/reboot && \
    mkdir -p /var/lib/nfs/rpc_pipefs && \
    mkdir -p /var/lib/nfs/v4recovery && \
    mkdir -p /nfs && chmod -R 777 /nfs && \
    echo "rpc_pipefs    /var/lib/nfs/rpc_pipefs rpc_pipefs      defaults        0       0" >> /etc/fstab && \
    echo "nfsd  /proc/fs/nfsd   nfsd    defaults        0       0" >> /etc/fstab

COPY src/s3fs-fuse /src/s3fs-fuse

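# Build and install s3fs from the vendored source, then drop the build
# dependencies and the source tree in the same layer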
RUN  cd /src/s3fs-fuse && \
     ./autogen.sh && \
     ./configure --with-openssl && \
     make && \
     make install && \
     apk del .builddeps && \
     rm -rf /src/s3fs-fuse && \
     mkdir $SHARED_DIRECTORY

COPY root /

EXPOSE 2049/tcp 32767/tcp
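One aside on the apk steps above, offered as a sketch rather than a fix: each RUN produces its own layer, so the standalone rm -rf /var/cache/apk/* can’t shrink the layers the two apk add steps already created. apk’s --no-cache flag (available on Alpine 3.4) avoids populating the cache in the first place, e.g. for the runtime packages:

# hypothetical variant of the first RUN; no cache is written, so no cleanup is needed
RUN apk add --no-cache \
    curl \
    fuse \
    iproute2 \
    libstdc++ \
    libxml2 \
    nfs-utils
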
root/etc/cont-init.d/10-nfs:
#!/usr/bin/with-contenv sh
echo "Starting Confd population of files..."
/usr/local/bin/confd -version
/usr/local/bin/confd -onetime
echo ""
echo "Displaying /etc/exports contents..."
cat /etc/exports
echo ""
root/etc/services.d/nfs/run:
#!/usr/bin/with-contenv sh
echo "Starting NFS"
/sbin/rpcbind
/usr/sbin/rpc.nfsd --debug 8 
/usr/sbin/exportfs -rv 
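# -F keeps rpc.mountd in the foreground so s6 can supervise it;
# -p 32767 pins the port so the EXPOSE in the Dockerfile stays accurate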
/usr/sbin/rpc.mountd --debug all -p 32767 --no-udp --exports-file /etc/exports -F
root/etc/services.d/s3fs/run:
#!/usr/bin/with-contenv sh

OPTS=""
[ -n "$USE_CACHE" ] && OPTS="$OPTS -o use_cache=$USE_CACHE"
[ -n "$DEFAULT_ACL" ] && OPTS="$OPTS -o default_acl=$DEFAULT_ACL"
[ -n "$RETRIES" ] && OPTS="$OPTS -o retries=$RETRIES"
[ -n "$USE_RRS" ] && OPTS="$OPTS -o use_rrs=$USE_RRS"
[ -n "$USE_SSE" ] && OPTS="$OPTS -o use_sse=$USE_SSE"
[ -n "$CONNECT_TIMEOUT" ] && OPTS="$OPTS -o connect_timeout=$CONNECT_TIMEOUT"
[ -n "$READWRITE_TIMEOUT" ] && OPTS="$OPTS -o readwrite_timeout=$READWRITE_TIMEOUT"
[ -n "$PARALLEL_COUNT" ] && OPTS="$OPTS -o parallel_count=$PARALLEL_COUNT"
[ -n "$URL" ] && OPTS="$OPTS -o url=$URL"

/usr/local/bin/s3fs -f $BUCKET $SHARED_DIRECTORY $OPTS
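To make the option expansion concrete: with, say, USE_CACHE=/tmp/s3fs-cache and RETRIES=5 set (hypothetical values) and the default ENV from the Dockerfile, the service ends up running the following, where -f keeps s3fs in the foreground for s6:

/usr/local/bin/s3fs -f mybucket /s3fs -o use_cache=/tmp/s3fs-cache -o retries=5
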
docker-compose.yml:
s3nfs:
  image: sb-s3nfs
  environment:
    URL: http://s3-sa-east-1.amazonaws.com
  cap_add:
    - SYS_ADMIN
  tty: true
  privileged: true
  stdin_open: true

convoy-nfs:
  labels:
    io.rancher.container.create_agent: 'true'
    io.rancher.scheduler.global: 'true'
  privileged: true
  pid: host
  volumes:
    - /lib/modules:/lib/modules:ro
    - /proc:/host/proc
    - /var/run:/host/var/run
    - /run:/host/run
    - /etc/docker/plugins:/etc/docker/plugins
  image: rancher/convoy-agent:v0.9.0
  command: volume-agent-nfs
  links:
    - s3nfs:s3nfs

convoy-nfs-storagepool:
  labels:
    io.rancher.container.create_agent: 'true'
  image: rancher/convoy-agent:v0.9.0
  volumes:
    - /var/run:/host/var/run
    - /run:/host/run
  command: storagepool-agent
  links:
    - s3nfs:s3nfs
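A note on the flags in the s3nfs service, as I understand them: s3fs needs /dev/fuse and mount privileges inside the container, and the NFS daemons mount nfsd and rpc_pipefs, hence privileged: true (which makes the cap_add redundant). I haven’t verified it, but a narrower variant might be worth trying:

s3nfs:
  cap_add:
    - SYS_ADMIN
  devices:
    - /dev/fuse
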
rancher-compose.yml:
s3nfs:
  scale: 1

convoy-nfs:
  metadata:
    nfs_server: "s3nfs"
    mount_dir: "/"
    mount_opts: "proto=tcp,nfsvers=4"
  health_check:
    request_line: GET /healthcheck HTTP/1.0
    port: 10241
    interval: 2000
    response_timeout: 2000
    unhealthy_threshold: 3
    healthy_threshold: 2

convoy-nfs-storagepool:
  metadata:
    nfs_server: "s3nfs"
    mount_dir: "/"
  scale: 1
  health_check:
    request_line: GET /healthcheck HTTP/1.0
    port: 10241
    interval: 2000
    response_timeout: 2000
    unhealthy_threshold: 3
    healthy_threshold: 2
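As a quick sanity check from any client container on the same network, the mount_opts above translate to the following manual mount (the mount point is hypothetical):

mount -t nfs -o proto=tcp,nfsvers=4 s3nfs:/ /mnt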

Hey @bruno.galindro, I’m pretty sure there’s no need for the EXPOSE directive when using Rancher; the ports will be reachable from the other containers regardless, since EXPOSE is essentially documentation and doesn’t publish anything by itself.

Impressive image!

Thanks @sjiveson

If you have time, could you please check whether issue 6436, which I created on the Rancher GitHub, also occurs for you? It is related to the image I’ve created.

Sorry, but my usage is far simpler, so it’s not something I’m ever likely to come across.