SES5 via NFS to VMware 6.5: OK=1, FAIL=3

Happy MidWinter!

I tried to mount the SES5 file service on ESXi 6.5, using the various protocols (NFS v3/v4) and backend stores (Object / CephFS). The results are as follows (the ESXi-side commands are sketched after the list):

  • The combination NFS v3 + CephFS mounts and stores VMDKs fine;
  • The combination NFS v3 + Object mounts, but reports zero available space, so writing is not possible;
  • Both combinations with NFS v4 fail to mount.
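
For completeness, here is roughly how the datastores were added on the ESXi side; the gateway hostname, export path and datastore names below are placeholders, not the real ones:

[CODE]# NFS v3 datastore (hostname, share path and datastore name are hypothetical)
esxcli storage nfs add -H ganesha.example.com -s /cephfs -v ses5-cephfs-v3

# NFS v4.1 datastore (ESXi 6.5 speaks NFS v3 and v4.1)
esxcli storage nfs41 add -H ganesha.example.com -s /cephfs -v ses5-cephfs-v41

# List the mounted NFS datastores and their reported capacity
esxcli storage nfs list
esxcli storage nfs41 list
[/CODE]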

Additional info:

  • The SES5 cluster is freshly installed and fully updated;
  • The deployment was performed with Salt (as documented);
  • NFS Ganesha and the users were configured via the openATTIC GUI;
  • Links to the NFS guides are here and here;
  • Attempts to mount these exports on a Linux workstation (Debian) gave almost the same results (1 OK and 3 FAILs); for example, writing to the 138 GB share fails with an Input/Output error (the mount commands are sketched below).
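
A rough sketch of the client-side steps on the Debian workstation; the gateway hostname and export paths are placeholders:

[CODE]# Mount points (hostname and export paths are hypothetical)
mkdir -p /mnt/ses5-cephfs /mnt/ses5-rgw

# NFS v3 mounts of the CephFS- and Object (RGW)-backed exports
mount -t nfs -o vers=3 ganesha.example.com:/cephfs /mnt/ses5-cephfs
mount -t nfs -o vers=3 ganesha.example.com:/rgw /mnt/ses5-rgw

# NFS v4.1 mount attempts of the same exports
mount -t nfs -o vers=4.1 ganesha.example.com:/cephfs /mnt/ses5-cephfs
mount -t nfs -o vers=4.1 ganesha.example.com:/rgw /mnt/ses5-rgw

# Check the reported capacity and try a small test write
df -h /mnt/ses5-rgw
dd if=/dev/zero of=/mnt/ses5-rgw/test.img bs=1M count=200
[/CODE]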

Thus, I suspect something is wrong with the SES5 NFS and Object store stack, or with the configuration engine scripts.
The NFS v4 support in openATTIC/Ganesha/SES5: does it mean v4.0 or v4.1 of the NFS protocol?
Can anyone advise how to work around this and make, for example, the NFS v4 + Object store combination operable?
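
For anyone trying to reproduce this, here is roughly how I check which NFS versions the gateway offers and which version a client actually negotiates (the hostname is a placeholder); as far as I know, the ESXi 6.5 NFS v4 client only speaks v4.1, so the minor version matters:

[CODE]# Which NFS versions does the gateway advertise via rpcbind?
rpcinfo -p ganesha.example.com | grep nfs

# On a Linux client with an export already mounted: which version was negotiated?
nfsstat -m
[/CODE]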

Cheers!

polezhaevdmi,

It appears that in the past few days you have not received a response to your
posting. That concerns us, and has triggered this automated reply.

These forums are peer-to-peer, best-effort, and volunteer-run, so if your issue
is urgent or not getting a response, you might try one of the following options:

Be sure to read the forum FAQ about what to expect in the way of responses:
http://forums.suse.com/faq.php

If this is a reply to a duplicate posting or otherwise posted in error, please
ignore and accept our apologies and rest assured we will issue a stern reprimand
to our posting bot…

Good luck!

Your SUSE Forums Team
http://forums.suse.com

Hi,

I know this thread is quite old, but the topic is still relevant and other users may run into similar problems.
While I can’t comment on the NFSv4 mount issue (in my lab environment I can mount both v3 and v4 with both CephFS and RGW), I can explain your Input/Output error.

There are defaults configured for max write size and max object size:

[CODE]osd-3:~ # ceph daemon osd.5 config show | grep osd_max_write
    "osd_max_write_size": "90",

osd-3:~ # ceph daemon osd.5 config show | grep osd_max_object_size
    "osd_max_object_size": "134217728",
[/CODE]

I don’t know the background, but the max object size has been reduced from 100 GB to 128 MB (see this). Here’s another thread on the ceph-users mailing list discussing the max write size.
Considering that, I don’t think RGW is the place to store large VM disks. Of course you could tweak that value to your needs, but there probably was a good reason for that change. Maybe your requirements call not for RGW but for RBD (or CephFS, as you already tried). A sketch of how such a tweak might look follows.
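
Purely as an illustration (not a recommendation), this is roughly how one could raise osd_max_write_size at runtime and persist it; the value 512 MB is an arbitrary example:

[CODE]# Inject a larger max write size (in MB) into all running OSDs; 512 is an arbitrary example
ceph tell osd.* injectargs '--osd_max_write_size 512'

# Verify on one OSD
ceph daemon osd.5 config show | grep osd_max_write_size

# To persist across restarts, add to the [osd] section of ceph.conf:
#   osd max write size = 512
[/CODE]

Keep in mind that in SES5 ceph.conf is normally managed by DeepSea/Salt, so a persistent change should go through the Salt configuration rather than a manual edit.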