NFS: the real reason why my service is not updating?

Hi folks
I am testing my NFS setup to include a volume hosted through the NFS server running on my local host. My setup is very small and limited to a single machine: Rancher and the NFS service both run on this machine. For security I have restricted access to ports 111 and 2049 as follows:

-A INPUT ! -s -p tcp -m tcp --dport 111 -j DROP
-A INPUT -s -p tcp -m tcp --dport 111 -j ACCEPT
-A INPUT ! -s -p udp -m udp --dport 111 -j DROP
-A INPUT -s -p udp -m udp --dport 111 -j ACCEPT
-A INPUT ! -s -p tcp -m tcp --dport 2049 -j DROP
-A INPUT -s -p tcp -m tcp --dport 2049 -j ACCEPT
-A INPUT ! -s -p udp -m udp --dport 2049 -j DROP
-A INPUT -s -p udp -m udp --dport 2049 -j ACCEPT
-A INPUT -s -p tcp -m tcp --dport 111 -j ACCEPT
-A INPUT -s -p udp -m udp --dport 111 -j ACCEPT
-A INPUT -s -p tcp -m tcp --dport 2049 -j ACCEPT
-A INPUT -s -p udp -m udp --dport 2049 -j ACCEPT

Rancher is running on the machine with the IP ending in 90.104.
If I run sudo mount /media/tmpnfs, it fails:

mount.nfs: Connection timed out

As you can see, I get a timeout.
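A quick way to tell whether the timeout is the firewall or the NFS server itself is to probe the two ports the rules above cover. This is just a sketch; SERVER is a placeholder for the NFS server's address (in my case the local host):

```shell
rpcinfo -p SERVER          # lists registered RPC services; needs port 111 (portmapper)
showmount -e SERVER        # lists the server's exports; confirms mountd is reachable
rpcinfo -t SERVER nfs 4    # checks NFSv4 over TCP on port 2049
```

If rpcinfo itself times out, the packets are being dropped before they ever reach the NFS server.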
My docker-compose.yml looks like this:

version: '2'
volumes:
  TestVolume:
    external: true
    driver: rancher-nfs
services:
  ghost:
    image: ghost
    stdin_open: true
    tty: true
    volumes:
      - TestVolume:/tmp/test
    ports:
      - 2368:2368/tcp
    labels:
      io.rancher.container.pull_image: always

And if I check the volumes:
$ rancher volumes -a | grep Test
1v1063 TestVolume inactive rancher-nfs

The volume is inactive, as it is supposed to be. At the moment my stack is stuck, because I think the new container is timing out while waiting for the NFS volume to be mounted. But this error never shows; instead I have to go to the NFS service to see it:

24/11/2017 23:57:33+ mount_nfs /nfs /tmp/hy8y6 ,nfsvers=4
24/11/2017 23:57:33+ local
24/11/2017 23:57:33+ local exportDir=/nfs
24/11/2017 23:57:33+ local mountDir=/tmp/hy8y6
24/11/2017 23:57:33+ local opts=,nfsvers=4
24/11/2017 23:57:33+ local error
24/11/2017 23:57:33++ ismounted /tmp/hy8y6
24/11/2017 23:57:33++ local mountPoint=/tmp/hy8y6
24/11/2017 23:57:33+++ findmnt -n /tmp/hy8y6
24/11/2017 23:57:33+++ cut '-d ' -f1
24/11/2017 23:57:33++ local mountP=
24/11/2017 23:57:33++ '[' '' == /tmp/hy8y6 ']'
24/11/2017 23:57:33++ echo 0
24/11/2017 23:57:33+ '[' 0 == 0 ']'
24/11/2017 23:57:33+ mkdir -p /tmp/hy8y6
24/11/2017 23:57:33+ local cmd=mount
24/11/2017 23:57:33+ '[' '!' -z ,nfsvers=4 ']'
24/11/2017 23:57:33+ cmd='mount -o ,nfsvers=4'
24/11/2017 23:57:33+ cmd='mount -o ,nfsvers=4 /tmp/hy8y6'
24/11/2017 23:57:33++ mount -o ,nfsvers=4 /tmp/hy8y6
25/11/2017 00:01:56+ error='mount.nfs: Connection timed out'
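For reference, the first part of the trace above is a small helper in the driver's mount script. Here is a hedged reconstruction of that ismounted function, with the names taken from the trace (the rest of the script is not shown; the trace uses `==`, a bash extension, but `=` behaves the same here):

```shell
# Reconstruction of the ismounted helper seen in the xtrace output (a sketch).
# It prints 1 if the given path is currently a mount point, 0 otherwise.
ismounted() {
    local mountPoint=$1
    local mountP
    # findmnt prints the mount target in the first column; output is empty
    # if the path is not a mount point
    mountP=$(findmnt -n "$mountPoint" | cut -d' ' -f1)
    if [ "$mountP" = "$mountPoint" ]; then
        echo 1
    else
        echo 0
    fi
}
```

In the trace, findmnt returned nothing for /tmp/hy8y6, so the helper echoed 0 and the script went on to mkdir and mount, which is where the timeout happened.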

I would expect to see the error on the NFS stack (i.e. my stack currently shows as healthy with no problems, but if it cannot connect it should fail).
I would also like to see the error in the application stack where the ghost image is running, but at the moment it is not obvious from the UI or even the CLI.

I assume this is an NFS troubleshooting task, but if you have any suggestions please let me know. So far, trying to mount from the host has not worked, and I assume the docker range I have added is not enough?

The mount is initiated from the host, so the source IP should match and be allowed. In these cases you can enable iptables logging and tail that log while trying to mount, to see what is being dropped (if anything). If that all looks good, you can check the NFS server.
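A minimal sketch of such a logging rule, inserted ahead of the DROP rules so matching packets are logged before being evaluated (the prefix string is just an example; the limit match keeps the kernel log from flooding):

```shell
# Log packets to the NFS port before the existing rules see them (rate-limited)
iptables -I INPUT 1 -p tcp --dport 2049 -m limit --limit 5/min -j LOG --log-prefix "NFS-IN: "
iptables -I INPUT 1 -p udp --dport 111  -m limit --limit 5/min -j LOG --log-prefix "RPC-IN: "

# Then watch the kernel log while attempting the mount
tail -f /var/log/kern.log | grep -E "NFS-IN|RPC-IN"
```

Note that a LOG rule at position 1 logs all traffic to those ports, not only dropped packets, so you compare what you see logged against what the later rules would accept.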

If you can share config details of the nfs server, I can reproduce.

OK, so based on what you are saying, I should be able to just allow my host IP and DROP all other packets? Don't I need to explicitly allow connectivity from all clients? How do I work out what this IP range is?

I assume I need to whitelist the IPs of the Docker network to allow NFS access on the same machine? NFS in my case (for the moment) is running on the same local server that is also running the Rancher host.

My NFS config (/etc/exports) is:
/nfs *(rw,sync,no_root_squash,no_subtree_check)
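After editing the exports file, a quick server-side sanity check looks something like this (a sketch):

```shell
exportfs -ra            # re-read /etc/exports and re-export everything
exportfs -v             # show the active exports with their effective options
showmount -e localhost  # confirm the export is visible via mountd
```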

BTW, I tried enabling iptables logging and the machine just started dropping all packets: 90 minutes of downtime, so I am a little reluctant to try that again.
I have also removed the firewall rules for the moment, so NFS is now working, but I cannot leave it like this.

Just to let you know: I eventually checked a container that had a working NFS mount, and I found that the NFS mount inside the container has the same client and server IP. So the IP is not from the container; I guess Rancher does some magic to mount the volume as if it were on the actual host where the container is running. For the moment I have fixed it by adding iptables rules that allow traffic only from that same IP address. It works for my small setup.
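For anyone hitting the same thing, a sketch of what that ended up looking like; HOST_IP is a placeholder for the host's own address, which I have not spelled out here:

```shell
# Allow NFS/portmapper traffic only when it originates from the host itself
iptables -A INPUT -s "$HOST_IP" -p tcp --dport 111  -j ACCEPT
iptables -A INPUT -s "$HOST_IP" -p udp --dport 111  -j ACCEPT
iptables -A INPUT -s "$HOST_IP" -p tcp --dport 2049 -j ACCEPT
iptables -A INPUT -s "$HOST_IP" -p udp --dport 2049 -j ACCEPT

# Everything else to these ports is dropped
iptables -A INPUT -p tcp -m multiport --dports 111,2049 -j DROP
iptables -A INPUT -p udp -m multiport --dports 111,2049 -j DROP
```

This only works because, in this setup, the NFS server and the Rancher host are the same machine; with external clients you would need additional ACCEPT rules for their addresses.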