My use case is that I want to share /nix/store
across volatile pods, for CI.
To do that, the recommended way upstream is an NFS share. Quoting from there:

> The only requirement is to also pass `local_lock=flock` or `local_lock=all` as mount option to allow the nix packages to take locks on modifications.
So, my question is: how do I do that with Longhorn? It seems to add `local_lock=none` on RWX mounts and I can't find a way to change that.
That’s the question. Now let me tell you what things I’ve tried without success.
I went and created a StorageClass like this:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-nix-store
  annotations:
    field.cattle.io/description: Distributed storage class with specific requirements for sharing a /nix/store
allowVolumeExpansion: true
parameters:
  nfsOptions: local_lock=all
  numberOfReplicas: "2"
  staleReplicaTimeout: "30"
provisioner: driver.longhorn.io
reclaimPolicy: Delete
volumeBindingMode: Immediate
```
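For context, this is roughly how I consume the class (names and sizes here are just illustrative); `ReadWriteMany` is what makes Longhorn take the NFS-backed RWX path in the first place:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nix-store            # illustrative name
spec:
  accessModes:
    - ReadWriteMany          # RWX makes Longhorn export the volume over NFS
  storageClassName: longhorn-nix-store
  resources:
    requests:
      storage: 50Gi          # illustrative size
```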
I was hoping that pods mounting a PVC with that storage class would get the desired `nfsOptions`, which seems to be an undocumented parameter. No luck. After doing that, inside the pod I get:
```
# grep /nix/store /proc/self/mountinfo
2820 2813 0:267 / /nix/store rw,relatime - nfs4 10.43.132.161:/pvc-b3a43232-2ddd-4b68-b146-a7ca87095e2e rw,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=116.202.101.25,local_lock=none,addr=10.43.132.161
```
It still has `local_lock=none`.
I also tried a different StorageClass like this:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-nix-store
  annotations:
    field.cattle.io/description: Distributed storage class with specific requirements for sharing a /nix/store
allowVolumeExpansion: true
mountOptions:
  - local_lock=all
parameters:
  numberOfReplicas: "2"
  staleReplicaTimeout: "30"
provisioner: driver.longhorn.io
reclaimPolicy: Delete
volumeBindingMode: Immediate
```
But in this case, the pod `share-manager-pvc-73392997-5b1e-41f0-8a99-74de7af0190d` that Longhorn seems to use to set up the volume fails with:

```
time="2022-07-05T11:41:58Z" level=fatal msg="Error running start command: mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t ext4 -o local_lock=all,defaults /dev/longhorn/pvc-73392997-5b1e-41f0-8a99-74de7af0190d /export/pvc-73392997-5b1e-41f0-8a99-74de7af0190d
Output: mount: /export/pvc-73392997-5b1e-41f0-8a99-74de7af0190d: wrong fs type, bad option, bad superblock on /dev/longhorn/pvc-73392997-5b1e-41f0-8a99-74de7af0190d, missing codepage or helper program, or other error.
."
```
That seems to mean it's passing `local_lock=all` to the underlying ext4 volume inside the share-manager pod, not to the NFS share that gets mounted on top of it in the workload pod.
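One more idea I haven't verified: since kubelet forwards a PersistentVolume's `mountOptions` to the CSI driver's node mount calls, a statically created PV pointing at the existing Longhorn volume might get the option onto the NFS client mount — assuming Longhorn's driver actually honors it there, which I don't know. A sketch (names reuse the existing volume; everything else is illustrative):

```yaml
# Hypothetical experiment: static PV referencing an existing Longhorn volume,
# with mountOptions set at the PV level. Whether the Longhorn CSI driver
# applies these to its NFS client mount is unverified.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nix-store-pv            # illustrative name
spec:
  capacity:
    storage: 50Gi               # must match the existing volume's size
  accessModes:
    - ReadWriteMany
  mountOptions:
    - local_lock=all            # the option we want on the NFS mount
  csi:
    driver: driver.longhorn.io
    volumeHandle: pvc-b3a43232-2ddd-4b68-b146-a7ca87095e2e  # existing Longhorn volume
  storageClassName: longhorn-nix-store
```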
I hope somebody can help me here, thanks!