Brilliant. A reboot of my client fixed my problem too.
I'm just building and testing again and have added no_root_squash. I'll let you know when I push it up to the hub. Done.
@sjiveson thank you very much for the assistance.
I can see that you don't declare EXPOSE in the Dockerfile because you use the host network. But I need to use this image on the Rancher network, so that a client container can be linked to the server container.
So, I think I need to make the following change to nfsd.sh:
--- /nfsd-original.sh Thu Oct 27 15:06:39 2016
+++ /nfsd.sh Thu Oct 27 16:23:22 2016
@@ -52,7 +52,7 @@
/usr/sbin/rpc.nfsd --debug 8
/usr/sbin/exportfs -rv
echo "Starting NFS in the background..."
- /usr/sbin/rpc.mountd --debug all --no-udp --exports-file /etc/exports
+ /usr/sbin/rpc.mountd --debug all -p 32767 --no-udp --exports-file /etc/exports
# Check if NFS is now running by recording its PID (if it's not running $pid will be null):
pid=$(pidof rpc.mountd)
And add this to Dockerfile:
EXPOSE 2049/tcp 32767/tcp
Am I right?
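For what it's worth, a rough client-side check once the port is pinned might look like the commands below. The hostname `nfs-server` is illustrative, not from the repo; note that an NFSv4 mount only needs TCP 2049, while the fixed mountd port matters for NFSv3-era tooling:

```shell
# Illustrative client-side commands; the hostname "nfs-server" is hypothetical.
# An NFSv4 mount only needs TCP 2049 open on the server:
mount -t nfs4 -o proto=tcp nfs-server:/ /mnt
# NFSv3-style tools talk to rpc.mountd, which "-p 32767" pins to a known port:
showmount -e nfs-server
```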
After a few hours, I was finally able to build my final image (alpine + s3fs + nfsserver + convoy-nfs).
baseimage is a private base image that has confd and s6 installed. It is based on Alpine 3.4.
This is the content of my repo:
.
├── Dockerfile
├── Makefile
├── root
│   └── etc
│       ├── confd
│       │   ├── conf.d
│       │   │   └── exports.toml
│       │   ├── confd.toml
│       │   └── templates
│       │       └── exports.tmpl
│       ├── cont-init.d
│       │   └── 10-nfs
│       └── services.d
│           ├── nfs
│           │   └── run
│           └── s3fs
│               └── run
└── src
    └── s3fs-fuse
        ├── AUTHORS
        ├── autogen.sh
        ├── ChangeLog
        ├── configure.ac
        ├── COPYING
        ├── doc
        │   ├── Makefile.am
        │   └── man
        │       └── s3fs.1
        ├── INSTALL
        ├── Makefile.am
        ├── NEWS
        ├── README
        ├── src
        │   ├── cache.cpp
        │   ├── cache.h
        │   ├── common_auth.cpp
        │   ├── common.h
        │   ├── curl.cpp
        │   ├── curl.h
        │   ├── fdcache.cpp
        │   ├── fdcache.h
        │   ├── gnutls_auth.cpp
        │   ├── Makefile.am
        │   ├── nss_auth.cpp
        │   ├── openssl_auth.cpp
        │   ├── s3fs_auth.h
        │   ├── s3fs.cpp
        │   ├── s3fs.h
        │   ├── s3fs_util.cpp
        │   ├── s3fs_util.h
        │   ├── string_util.cpp
        │   ├── string_util.h
        │   ├── test_string_util.cpp
        │   └── test_util.h
        └── test
            ├── integration-test-common.sh
            ├── integration-test-main.sh
            ├── Makefile.am
            ├── mergedir.sh
            ├── passwd-s3fs
            ├── rename_before_close.c
            ├── require-root.sh
            ├── s3proxy.conf
            ├── sample_ahbe.conf
            ├── sample_delcache.sh
            └── small-integration-test.sh

15 directories, 51 files
The src/s3fs-fuse directory is a clone of https://github.com/s3fs-fuse/s3fs-fuse
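The vendored copy can be recreated from upstream with a plain clone into the path shown in the tree above:

```shell
# Recreate the src/s3fs-fuse directory from the upstream repository
git clone https://github.com/s3fs-fuse/s3fs-fuse.git src/s3fs-fuse
```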
Dockerfile:
FROM baseimage

#------------------------------
# ENVIRONMENT VARIABLES
#------------------------------
ENV SHARED_DIRECTORY=/s3fs \
    AWSACCESSKEYID=myaccesskey \
    AWSSECRETACCESSKEY=mysecretkey \
    BUCKET=mybucket

#------------------------------
# INSTRUCTIONS
#------------------------------
RUN apk add --update \
    curl \
    fuse \
    iproute2 \
    libstdc++ \
    libxml2 \
    nfs-utils

RUN apk add --virtual .builddeps \
    autoconf \
    automake \
    build-base \
    curl-dev \
    fuse-dev \
    libxml2-dev \
    openssl-dev

RUN rm -rf /var/cache/apk/* /tmp/* && \
    rm -f /sbin/halt /sbin/poweroff /sbin/reboot && \
    mkdir -p /var/lib/nfs/rpc_pipefs && \
    mkdir -p /var/lib/nfs/v4recovery && \
    mkdir -p /nfs && chmod -R 777 /nfs && \
    echo "rpc_pipefs /var/lib/nfs/rpc_pipefs rpc_pipefs defaults 0 0" >> /etc/fstab && \
    echo "nfsd /proc/fs/nfsd nfsd defaults 0 0" >> /etc/fstab

COPY src/s3fs-fuse /src/s3fs-fuse

RUN cd /src/s3fs-fuse && \
    ./autogen.sh && \
    ./configure --with-openssl && \
    make && \
    make install && \
    apk del .builddeps && \
    rm -rf /src/s3fs-fuse && \
    mkdir $SHARED_DIRECTORY

COPY root /

EXPOSE 2049/tcp 32767/tcp
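A minimal local build-and-run sketch, assuming the `sb-s3nfs` tag used in the compose file further down. FUSE plus the in-kernel NFS server need elevated privileges, which is why the compose entry sets `privileged: true`; real credentials and an existing bucket are required for s3fs to actually mount:

```shell
# Hypothetical local smoke test of the image (values are placeholders)
docker build -t sb-s3nfs .
docker run -d --privileged \
  -e AWSACCESSKEYID=myaccesskey \
  -e AWSSECRETACCESSKEY=mysecretkey \
  -e BUCKET=mybucket \
  -p 2049:2049 -p 32767:32767 \
  sb-s3nfs
```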
root/etc/cont-init.d/10-nfs:
#!/usr/bin/with-contenv sh
echo "Starting Confd population of files..."
/usr/local/bin/confd -version
/usr/local/bin/confd -onetime
echo ""
echo "Displaying /etc/exports contents..."
cat /etc/exports
echo ""
root/etc/services.d/nfs/run:
#!/usr/bin/with-contenv sh
echo "Starting NFS"
/sbin/rpcbind
/usr/sbin/rpc.nfsd --debug 8
/usr/sbin/exportfs -rv
/usr/sbin/rpc.mountd --debug all -p 32767 --no-udp --exports-file /etc/exports -F
root/etc/services.d/s3fs/run:
#!/usr/bin/with-contenv sh
OPTS=""
[ -n "$USE_CACHE" ] && OPTS="$OPTS -o use_cache=$USE_CACHE"
[ -n "$DEFAULT_ACL" ] && OPTS="$OPTS -o default_acl=$DEFAULT_ACL"
[ -n "$RETRIES" ] && OPTS="$OPTS -o retries=$RETRIES"
[ -n "$USE_RRS" ] && OPTS="$OPTS -o use_rrs=$USE_RRS"
[ -n "$USE_SSE" ] && OPTS="$OPTS -o use_sse=$USE_SSE"
[ -n "$CONNECT_TIMEOUT" ] && OPTS="$OPTS -o connect_timeout=$CONNECT_TIMEOUT"
[ -n "$READWRITE_TIMEOUT" ] && OPTS="$OPTS -o readwrite_timeout=$READWRITE_TIMEOUT"
[ -n "$PARALLEL_COUNT" ] && OPTS="$OPTS -o parallel_count=$PARALLEL_COUNT"
[ -n "$URL" ] && OPTS="$OPTS -o url=$URL"
/usr/local/bin/s3fs -f $BUCKET $SHARED_DIRECTORY $OPTS
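The script builds the s3fs option string incrementally; a standalone sketch of that pattern, with made-up example values, behaves like this:

```shell
# Standalone sketch of the option-building pattern used in the run script above:
# each variable that is set appends one "-o key=value" pair; unset ones add nothing.
unset URL                   # make sure URL is not inherited from the environment
OPTS=""
USE_CACHE=/tmp/s3fs-cache   # example values standing in for container env vars
RETRIES=5
[ -n "$USE_CACHE" ] && OPTS="$OPTS -o use_cache=$USE_CACHE"
[ -n "$RETRIES" ] && OPTS="$OPTS -o retries=$RETRIES"
[ -n "$URL" ] && OPTS="$OPTS -o url=$URL"   # no-op here: URL is unset
echo "$OPTS"   # the accumulated string keeps a leading space, which is harmless
```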
docker-compose.yml:
s3nfs:
  image: sb-s3nfs
  environment:
    URL: http://s3-sa-east-1.amazonaws.com
  cap_add:
    - SYS_ADMIN
  tty: true
  privileged: true
  stdin_open: true
convoy-nfs:
  labels:
    io.rancher.container.create_agent: 'true'
    io.rancher.scheduler.global: 'true'
  privileged: true
  pid: host
  volumes:
    - /lib/modules:/lib/modules:ro
    - /proc:/host/proc
    - /var/run:/host/var/run
    - /run:/host/run
    - /etc/docker/plugins:/etc/docker/plugins
  image: rancher/convoy-agent:v0.9.0
  command: volume-agent-nfs
  links:
    - s3nfs:s3nfs
convoy-nfs-storagepool:
  labels:
    io.rancher.container.create_agent: 'true'
  image: rancher/convoy-agent:v0.9.0
  volumes:
    - /var/run:/host/var/run
    - /run:/host/run
  command: storagepool-agent
  links:
    - s3nfs:s3nfs
rancher-compose.yml:
s3nfs:
  scale: 1
convoy-nfs:
  metadata:
    nfs_server: "s3nfs"
    mount_dir: "/"
    mount_opts: "proto=tcp,nfsvers=4"
  health_check:
    request_line: GET /healthcheck HTTP/1.0
    port: 10241
    interval: 2000
    response_timeout: 2000
    unhealthy_threshold: 3
    healthy_threshold: 2
convoy-nfs-storagepool:
  metadata:
    nfs_server: "s3nfs"
    mount_dir: "/"
  scale: 1
  health_check:
    request_line: GET /healthcheck HTTP/1.0
    port: 10241
    interval: 2000
    response_timeout: 2000
    unhealthy_threshold: 3
    healthy_threshold: 2
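Once the stack is up, application services would request volumes through the convoy-nfs driver. A hypothetical docker-compose entry for a consumer (the service and volume names are illustrative, not from the repo):

```yaml
# Hypothetical consumer service; "myvol" would be created on the NFS share
# by the convoy-nfs plugin the agents above register.
myapp:
  image: alpine
  volume_driver: convoy-nfs
  volumes:
    - myvol:/data
  command: ls /data
```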
Hey @bruno.galindro, I'm pretty sure there's no need for the EXPOSE directive when using Rancher; the ports will be reachable from the other containers regardless.
Impressive image!
Thanks @sjiveson
If you have time, could you please check whether issue 6436, which I created on the Rancher GitHub repo, occurs for you too? It is related to the image I've created.
Sorry, but my usage is far simpler, so it's not something I'm ever likely to come across.