Attempts to mount these exports on a Linux workstation (Debian) gave almost the same results (1 OK and 3 FAILs). For example, on a 138 GB share
Thus, I suspect something is wrong with the SES5 NFS and Object store stack, or with the configuration engine scripts.
Regarding the NFS v4 support in OpenAttic/Ganesha/SES5: does that mean v4.0 or v4.1 of the NFS protocol?
Can anyone advise how to find a workaround that makes, for example, the NFS v4 + Object store combination operable?
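One way to check which protocol version actually gets negotiated is to force it explicitly at mount time (the server name and export path below are just placeholders, not my real setup):

    # force a specific NFS version to see which ones the export accepts
    mount -t nfs -o vers=3   ses-gateway:/rgw-export /mnt/test
    mount -t nfs -o vers=4.0 ses-gateway:/rgw-export /mnt/test
    mount -t nfs -o vers=4.1 ses-gateway:/rgw-export /mnt/test
    # show the version actually negotiated for currently mounted shares
    nfsstat -m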
I know this thread is quite old, but the topic is certainly not outdated, and other users may run into similar problems.
While I can’t comment on the NFSv4 mount issue (in my lab environment I can mount both v3 and v4 with both CephFS and RGW), I can explain your Input/Output error.
There are defaults configured for max write size and max object size:
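Roughly speaking, these are the knobs involved (a sketch; the daemon names are placeholders and the values are the documented Luminous defaults, so check your own nodes):

    # maximum size of a single RADOS object (default 128 MB in Luminous)
    ceph daemon osd.0 config get osd_max_object_size
    #   "osd_max_object_size": "134217728"
    # maximum size of a single RGW PUT (default 5 GB); writes through the
    # NFS/RGW path hit whichever limit applies first
    ceph daemon client.rgw.<hostname> config get rgw_max_put_size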
I don’t know the background, but the maximum object size has been reduced from 100 GB to 128 MB (see this). Here’s another thread on the ceph-users mailing list discussing the maximum write size.
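If you really wanted to raise that limit (untested on my side, and generally discouraged), the override would look roughly like this on the OSD nodes:

    # /etc/ceph/ceph.conf -- hypothetical override; restart the OSDs afterwards
    [osd]
    osd max object size = 5368709120   # 5 GB instead of the 128 MB default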
Considering that, I don’t think RGW is the right place to store large VM disks. Of course, you could tweak that value to your needs (as sketched above), but there was probably a good reason for the change. Maybe your requirements point not to RGW but to RBD (or CephFS, as you already tried).
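For VM disks, an RBD image is the more natural fit. As a rough sketch (the pool and image names are just examples):

    # create a 138 GB image and map it on the client via the kernel module
    rbd create rbd/vmdisk01 --size 138G
    rbd map rbd/vmdisk01              # appears as e.g. /dev/rbd0
    mkfs.xfs /dev/rbd0
    mount /dev/rbd0 /mnt/vmdisk01

That way the 128 MB per-object limit never comes into play, because RBD stripes the image across many small RADOS objects for you.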