nfs4 mount error on SLES 12

Hi,
We have a problem using mount on SLES 12 (kernel: Linux sles12 3.12.28-4-default).

In /etc/fstab

xxx.xxx.xxx.xxx:/directory /backups nfs defaults 0 0

mount -v /backups

mount.nfs: timeout set for Fri Nov 14 12:57:21 2014
mount.nfs: trying text-based options 'vers=4,addr=xxx.xxx.xxx.xxx,clientaddr=xxx.xxx.xxx.xxx'
mount.nfs: mount(2): Invalid argument
mount.nfs: an incorrect mount option was specified

If we do it manually, the error is the same:

mount -t nfs4 -v xxx.xxx.xxx.xxx:/directory /backups

mount.nfs: timeout set for Fri Nov 14 12:59:35 2014
mount.nfs: trying text-based options 'vers=4,addr=xxx.xxx.xxx.xxx,clientaddr=xxx.xxx.xxx.xxx'
mount.nfs: mount(2): Invalid argument
mount.nfs: an incorrect mount option was specified

Using NFSv3, the mount works correctly:

mount -t nfs -o vers=3 xxx.xxx.xxx.xxx:/directory /backups

This same configuration on SLES 11 SP3 works fine with NFSv3, NFSv4, and NFSv4.1.

The NFS server is a NetApp.

Any idea?

Regards
Nacho

Nachoperez,

I just installed SLES 12, and in my attempts to use NFS I’ve not had any problem doing an NFSv4 mount, either manually or based on /etc/fstab. I don’t have a NetApp as my NFS server, but your error suggests that mount.nfs doesn’t like your syntax; it doesn’t suggest that the NFS server is rejecting something.

In one test, I had a failure similar to yours (invalid option) with fstab usage, but that was because I mistyped the word “defaults” and only put in “default”. When I typed it correctly, I had no issue.

If you’re sure you’ve got no typos, then I’d recommend you check:

  1. which mount.nfs    # this should display /sbin/mount.nfs

  2. rpm -qf /sbin/mount.nfs    # this should show the package name/version: nfs-client-1.3.0-6.9

  3. chkbin /sbin/mount.nfs
    and then post here the contents of the log file it creates at /var/log/nts_chkbin_mount.nfs*
    (don’t post just the on-screen output; get the whole log file.)

Darcy

Hi,

Thanks for your comments !!

The workaround is to add the option sec=sys to the mount in /etc/fstab.

Example:

xxx.xxx.xxx.xxx:/directory /backups nfs sec=sys 0 0
xxx.xxx.xxx.xxx:/directory /backups nfs sec=sys,minorversion=1 0 0   (for NFSv4.1)

With this flag, it works fine.
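If you want to double-check what the client actually negotiated, nfsstat (shipped with the NFS client tools) can show the effective mount options; a minimal sketch:

mount /backups      # mount using the fstab entry
nfsstat -m          # list each NFS mount with its effective options, including sec=

The sec= value in that output should now read sec=sys.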

Thanks again
Regards
Nacho

Thank you for this information; I just had this same problem. I updated /etc/fstab as you did and verified that your solution works. Adding the option to the mount command also worked, i.e. mount -t nfs -o sec=sys linux163:/pub/data/suse/SLES12DVD1 /mnt. Does anyone have insight into what is causing this behavior?

The documentation states that sec=sys is the default. I looked at /etc/nfsmount.conf and this file has not been changed. If I explicitly specify sec=sys, the mount command works as I expect. What other methods are there to change the default behavior? I did not find any messages in /var/log/messages when this failure occurred. Is there a specific log for this error? The autofs and cron processes are not active.
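For reference, the only client-wide place I would expect such a default to live is /etc/nfsmount.conf; on my system the global section still matches the stock file, with everything commented out. A sketch (option names per nfsmount.conf(5)):

[ NFSMount_Global_Options ]
# Defaultvers=4    # default NFS protocol version
# Sec=sys          # uncommenting this would force AUTH_SYS on every NFS mount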

Hi mikenash,

the docs say that if you’re not explicitly specifying the mode, auto-negotiation will take place. So this is probably not about defaults, but about problems with auto-negotiation between specific server and client implementations?
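If that is what’s happening, explicitly pinning the version and the security flavor (as Nacho’s workaround does) takes negotiation out of the picture entirely; illustrative only:

# nothing left to negotiate: protocol version and security flavor are both fixed
mount -t nfs -o vers=4,sec=sys xxx.xxx.xxx.xxx:/directory /backups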

Regards,
Jens

Thank you, Jens. This is the only system where this behavior has occurred; I have performed these mounts from many other systems to the same server, so the issue has to be with this specific system. How does the auto-negotiation behavior change on a system so that it no longer takes the default? Or maybe the question is: why does auto-negotiation behave differently on this specific system?

Hi mikenash,

yes, from your description, it seems to boil down to that question. I don’t have enough in-depth experience with that part of NFS to be of much help - maybe you can open a service request or someone else can jump in here?

Regards,
Jens

I found this documentation: Document ID 7016917. It sounds like this issue, but I’m not 100% sure. The system with the issue does have an /etc/init.d/nfs file and a working system does not. Checking the services, systemctl status nfs-rpc.gssd and systemctl status nfsserver-rpc.gssd show they are dead. Am I doing this correctly?

linux283:/etc/init.d # cat /etc/init.d/nfs | grep rpc.gssd
GSSD_BIN=/usr/sbin/rpc.gssd
GSSD_CLIENT_STATE=/run/nfs/nfs-rpc.gssd
GSSD_SERVER_STATE=/run/nfs/nfsserver-rpc.gssd
linux283:/etc/init.d # systemctl status nfs-rpc.gssd
nfs-rpc.gssd.service
Loaded: not-found (Reason: No such file or directory)
Active: inactive (dead)

linux283:/etc/init.d # systemctl status nfsserver-rpc.gssd
nfsserver-rpc.gssd.service
Loaded: not-found (Reason: No such file or directory)
Active: inactive (dead)

Hi mikenash,

Checking the services, systemctl status nfs-rpc.gssd and systemctl status nfsserver-rpc.gssd show they are dead. Am I doing this correctly?

The status “inactive (dead)” results from systemd not finding service definition files for the service names you requested (“Loaded: not-found (Reason: No such file or directory)”).

Looking at my system, I see the following service names for gss:

host:~ # systemctl list-unit-files | grep gss
auth-rpcgss-module.service    static
rpc-gssd.service              static
rpc-svcgssd.service           static

and the NFS server is “nfsserver.service”.
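So on the client side, the status checks would look more like this (a sketch; the nfsserver query is only meaningful if the host also acts as an NFS server):

systemctl status rpc-gssd.service       # client-side GSS helper
systemctl status nfsserver.service      # NFS server service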

Regards,
Jens

Hello Jens, good morning. Thank you for the response. I find that none of the gss services are running, and it is the same for nfs. When I start the nfs client and perform the mount, the message for the failed mount changes: in /var/log/messages I do find an rpc.gssd message. Also, systemctl status shows that the nfs client is running, but systemctl list-unit-files | grep nfs does not show anything.

The following returns nothing:
linux283:~ # systemctl list-unit-files | grep gss
linux283:~ # systemctl list-unit-files | grep nfs
linux283:~ # systemctl status nfsserver.service
nfsserver.service - LSB: Start the kernel based NFS daemon
Loaded: loaded (/etc/init.d/nfsserver)
Active: inactive (dead)

linux283:~ # systemctl status rpc-gssd.service
rpc-gssd.service
Loaded: not-found (Reason: No such file or directory)
Active: inactive (dead)

linux283:~ # cat /var/og/messages | grep rpc.gssd
cat: /var/og/messages: No such file or directory
linux283:~ # systemctl start nfs
linux283:~ # systemctl status nfs
nfs.service - LSB: NFS client services
Loaded: loaded (/etc/init.d/nfs)
Drop-In: /run/systemd/generator/nfs.service.d
└─50-insserv.conf-$remote_fs.conf
Active: active (running) since Mon 2016-03-21 09:06:51 EDT; 7s ago
Process: 1889 ExecStart=/etc/init.d/nfs start (code=exited, status=0/SUCCESS)
CGroup: /system.slice/nfs.service
├─1914 /usr/sbin/rpc.idmapd -p /var/lib/nfs/rpc_pipefs
└─1915 /usr/sbin/rpc.gssd -D -p /var/lib/nfs/rpc_pipefs

Mar 21 09:06:51 linux283 nfs[1889]: Starting NFS client services: sm-notify gssd idmapd…done
linux283:~ # systemctl status nfs.service.d
nfs.service.d.service
Loaded: not-found (Reason: No such file or directory)
Active: inactive (dead)
2016-03-21T09:21:34.348313-04:00 linux283 rpc.gssd[2129]: ERROR: No credentials found for connection to server linux163.rtp.raleigh.ibm.com

This problem seems related to gss, but I’m not sure how!

Hello Mike,

where are you checking the services? In your first message you checked for nfsserver, so I guessed you were looking at the server side, but now you’re starting the client, so is that rather the client’s side? (My guess from the messages so far is that linux163 is the NFS server host and linux283 is the NFS client, and according to “systemctl status nfs” the client-side services are active as expected.)

Since you referenced https://www.suse.com/support/kb/doc.php?id=7016917 - did you update to the client version it describes?

Also, I don’t know what you’re trying to look at via “systemctl status nfs.service.d”… was that supposed to be the NFS server service (nfsserver.service)? That’d be asking on the wrong side.

Regards,
Jens

Hello Jens, I apologise for the confusion.
The linux163 system is the NFS server. I am not making any changes to this system, because all the other systems do not have the problem that linux283 exhibits.
linux283 does have an /etc/krb5.conf file, but it is empty.
There are no gss services running, and the nfs client is not running.
When I issue the mount command, I receive an error message about an incorrect mount option.
Then I start the nfs client and retry the mount.
I then receive an access-denied message, and the /var/log/messages file has an rpc.gssd error message.

It is not clear to me whether Document ID 7016917 is related to this problem. There are no gss services running, but I receive an rpc.gssd message when the nfs client is running, and an invalid-option error when the nfs client is not running. On another similar system the mount works when the nfs client is not running. So I have three questions:
Why do I receive an incorrect mount option error when the nfs client is not running?
Why do I receive access denied when the nfs client is running?
Why am I receiving rpc.gssd errors when there are no gss services running?

linux283:~ # mount -t nfs linux163:/pub/data/suse/SLES12DVD1 /mnt

mount.nfs: an incorrect mount option was specified
linux283:~ # systemctl start nfs
linux283:~ # mount -t nfs linux163:/pub/data/suse/SLES12DVD1 /mnt
mount.nfs: access denied by server while mounting linux163:/pub/data/suse/SLES12DVD1

In /var/log/messages
2016-03-21T11:28:06.678027-04:00 linux283 rpc.gssd[2592]: ERROR: No credentials found for connection to server linux163.rtp.raleigh.ibm.com

Hi Mike,

Why do I receive an incorrect mount option error when the nfs client is not running?

probably because the helper daemons are not (yet) set up. mount knows nothing about these and won’t start them. Starting the “nfs client” means starting those helper daemons, plus mounting the NFS-based file systems mentioned in /etc/fstab.
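Going only by what your own transcripts show (the “sm-notify gssd idmapd” start message, plus the mount -at line from linux140 later on), the client start boils down to roughly this sketch:

# illustrative outline of what starting the “nfs client” does:
/usr/sbin/sm-notify                               # notify peers for lock recovery
/usr/sbin/rpc.gssd -p /var/lib/nfs/rpc_pipefs     # GSS/Kerberos upcall helper
/usr/sbin/rpc.idmapd -p /var/lib/nfs/rpc_pipefs   # NFSv4 id<->name mapping
mount -at nfs,nfs4                                # mount all nfs/nfs4 entries from /etc/fstab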

Why do I receive access denied when the nfs client is running?

There can be plenty of causes. What does the server have to say (i.e. in /var/log/messages, or via journalctl)?
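For example, watching the server’s log while you reproduce the failure (a hypothetical session, assuming linux163 is the server):

# on linux163, in one terminal:
journalctl -f                # or: tail -f /var/log/messages
# on linux283, in a second terminal, reproduce the failing mount:
mount -t nfs linux163:/pub/data/suse/SLES12DVD1 /mnt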

Why am I receiving rpc.gssd errors when there are no gss services running?

According to your earlier post, there is some GSS helper daemon (pid 1915, /usr/sbin/rpc.gssd) running:

linux283:~ # systemctl status nfs
nfs.service - LSB: NFS client services
   Loaded: loaded (/etc/init.d/nfs)
  Drop-In: /run/systemd/generator/nfs.service.d
           └─50-insserv.conf-$remote_fs.conf
   Active: active (running) since Mon 2016-03-21 09:06:51 EDT; 7s ago
  Process: 1889 ExecStart=/etc/init.d/nfs start (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/nfs.service
           ├─1914 /usr/sbin/rpc.idmapd -p /var/lib/nfs/rpc_pipefs
           └─1915 /usr/sbin/rpc.gssd -D -p /var/lib/nfs/rpc_pipefs

Regards,
Jens

When the nfs client is not running on another SLES 12 system, the mount still works; it is only on this system that I receive the incorrect mount option error. Also, the mount works after updating /etc/nfsmount.conf with Sec=sys. There is nothing relevant that I can find in /var/log/messages or when I issue journalctl -b.

After starting the nfs client, /var/log/messages and journalctl -b show the following message: “Starting NFS client services: sm-notify gssd idmapd…done”. However, systemctl status on the three gss services shows them as dead. Issuing the mount produces the rpc.gssd error messages in /var/log/messages and journalctl -b. Yet after starting nfs, displaying the status does show rpc.gssd; on another system this is not started. So nfs starting differently would explain the error messages after the nfs start, but not before. I did not see anything in /etc/sysconfig/nfs. Where in the configuration would this behavior be modified, to cause nfs to start in this manner?
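For what it’s worth, the knobs I was looking for in /etc/sysconfig/nfs are ones like these - names as I understand the old init script to read them, so treat this as an assumption on my part:

NFS4_SUPPORT="yes"        # assumed variable: whether to start rpc.idmapd for NFSv4
NFS_SECURITY_GSS="no"     # assumed variable: whether to start the rpc.gssd/rpc.svcgssd helpers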
This is nfs running on another system.

root@linux140:/root $~>systemctl status nfs
nfs.service - Alias for NFS client
   Loaded: loaded (/usr/lib/systemd/system/nfs.service; disabled)
  Drop-In: /run/systemd/generator/nfs.service.d
           └─50-insserv.conf-$remote_fs.conf
   Active: active (exited) since Mon 2016-03-21 14:29:21 EDT; 1s ago
  Process: 1627 ExecStartPost=/usr/bin/mount -at nfs,nfs4 (code=exited, status=0/SUCCESS)
  Process: 1624 ExecStart=/bin/true (code=exited, status=0/SUCCESS)
 Main PID: 1624 (code=exited, status=0/SUCCESS)

Hi Mike,

that “No credentials found” error from rpc.gssd points at an authentication problem between the NFS client and server.

There is nothing relevant that I can find in /var/log/messages or when I issue journalctl -b.
Have you looked at the logs on the server side? Just asking, to be clear.

As I wrote earlier in this thread, systemd reports these as unknown services (and, as such, as not started/dead). You’re mixing up systemd services and the helper daemons started by the LSB script.

Maybe I overlooked that information - are the OS levels (including installed patches) the same on both the working and the non-working client? Because on linux140 you have /usr/lib/systemd/system/nfs.service, while on linux283 you’re starting the NFS client via the LSB wrapper.

BTW, from that linux140 copy & paste I now see where you got “nfs.service.d” from. :-)

My current guess is that it boils down to different OS levels.
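A quick way to compare the two clients (nfs-client is the package Darcy mentioned earlier; run this on both machines):

rpm -q nfs-client        # NFS client package version
cat /etc/os-release      # base OS / service pack level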

Regards,
Jens

Greetings Jens, I have not looked at the server side because the issue appears to originate from the client side.
The two client systems that I am working with are both SLES 12, but at different kernel levels:
linux140: 3.12.49-6-default - linux283: 3.12.28-4-default
I do not understand why linux140 uses /usr/lib/systemd/system/nfs and linux283 uses /etc/init.d/nfs. An interesting difference, where I would not have expected one.
Also interesting is that the two systems behave differently before and after the nfs client service has started. On linux283, before the nfs service starts, the error is a bad mount option, but it does not say which option is bad. I suspect that it is Sec= taking on a value that is not ‘sys’.
Why is there this drastic change within SLES 12 on slightly different kernel levels? I would have expected these nfs services to both use systemd!
Why does linux140 use /usr/lib/systemd/system/nfs?
Why does linux283 use /etc/init.d/nfs?
Can linux283 be changed to use /usr/lib/systemd/system/nfs?
What configuration file is used by /etc/init.d/nfs?
What configuration file is used by /usr/lib/systemd/system/nfs?

Hi Mike,

let me just say that you’re wasting your time chasing problems that may well be fixed by updated packages. It’s not about kernel versions, but about the NFS client packages - the TID indicated there have been important changes in that area, so not updating means keeping old problems.
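For example (assuming linux283 is registered against the update channels), something along these lines would bring the client current:

zypper refresh           # update repository metadata
zypper patch             # apply all pending patches
# or, more narrowly: zypper update nfs-client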

I do not understand why linux140 uses /usr/lib/systemd/system/nfs and linux283 uses /etc/init.d/nfs. An interesting difference, where I would not have expected one.

Most likely because of you not having the latest updates on linux283.

On linux283, before the nfs service starts, the error is a bad mount option, but it does not say which option is bad.

Most likely there is no bad mount option - the mount options provided probably lead to an error during setup, which at the upper layers is likely indistinguishable from bad options.

Why is there this drastic change with Suse 12 on slightly different kernel levels?

You’re the only one talking about kernels ;-)

I would have expected these nfs services to both use systemd!

Both do. One system is using the LSB wrapper, the other one dedicated units - most likely due to updates to the NFS client, as mentioned in the TID. There usually are reasons for updates.

Can linux283 be changed to use /usr/lib/systemd/system/nfs?

Ask again once you’ve updated to the latest patches :-)

What configuration file is used by *

Both are text files - go check it out, or open a service request if you want SUSE to respond. We’re volunteers here; this is a peer-to-peer support forum…

Regards,
Jens

PS: “I have not looked at the server side because the issue appears to originate from the client side.”: As the client-side error messages indicate, it could be some reaction of the server to the client’s request, leading to a corresponding server response. Refusing to check the server’s messages for indications doesn’t sound … logical.

Greetings Jens, thanks again for your responses. I do have a lot to learn. I am trying to understand what went wrong and where it went wrong. Putting a fix on it does not help me understand.

Hi Mike,

if updating the NFS client helps, then you could check the sources (or rather the source diffs between your old version and the then-current version) to see what happened.

Have you at least checked whether the working clients use a newer version of the related packages?

Regards,
Jens