Problems getting CephFS to be recognised

Hi, I’m struggling to get CephFS working on my RKE-built cluster.

I have an external Ceph appliance. It works just fine with the ceph-rbd driver, but I also need CephFS for shared volumes, and that is where I’m stuck.

The CephFS filesystem itself seems fine: I can mount it remotely and everything works as expected. I’ve deployed the provisioner as documented here: https://github.com/kubernetes-incubator/external-storage/tree/master/ceph/cephfs/deploy/rbac and it starts up OK. Then I add a StorageClass, PV, PVC and Pod. No errors anywhere, but nothing appears in the provisioner logs and nothing lands on the CephFS filesystem at all. I’m fairly sure the volumes are being created locally on the hosts or something, because data is persisted. How do I even go about debugging this? Kubernetes is pretty opaque in a lot of places.
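For reference, the StorageClass I applied is roughly the example from that repo; the monitor address, secret name, namespace and claim root below are placeholders for my real values:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: cephfs
provisioner: ceph.com/cephfs
parameters:
  # Placeholder values - substitute your own monitors and admin secret
  monitors: 10.0.0.1:6789
  adminId: admin
  adminSecretName: ceph-secret-admin
  adminSecretNamespace: cephfs
  claimRoot: /pvc-volumes
```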

RKE v0.2.8, hyperkube:v1.14.6-rancher1

Just for completeness, and to avoid being DenverCoder9 (https://xkcd.com/979/), here’s what went wrong.

What I didn’t realise is that if you are using a provisioner, you must not create a PV yourself. If you do, the PVC binds directly to the manually created PV that is now available, and the provisioner is never asked to do anything.
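So the fix is to delete the hand-made PV and have the PVC reference only the StorageClass; the provisioner then creates and binds a PV on demand. A minimal claim looks something like this (the claim name, class name and size are just examples):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-claim         # example name
spec:
  accessModes:
    - ReadWriteMany          # shared access is the whole point of CephFS here
  storageClassName: cephfs   # must match the StorageClass; no hand-made PV
  resources:
    requests:
      storage: 1Gi           # example size
```

With the manual PV gone, the provisioning events show up in `kubectl describe pvc`, which would have made the original problem much easier to spot.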

There is another issue though. CephFS volumes are always mounted as root, which renders them unusable if your process runs as a normal user. Sadly, the suggested workaround is to add an init container that chowns the volume before the main pod starts. Not a clean answer at all. I’m thinking of forking the provisioner for our internal use so we can set the uid/gid of new volumes ourselves.
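For anyone who wants that init-container workaround spelled out, this is roughly what it looks like; the image, uid/gid 1000, mount path and names are illustrative, not anything the provisioner dictates:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-cephfs
spec:
  volumes:
    - name: shared-data
      persistentVolumeClaim:
        claimName: cephfs-claim       # the dynamically provisioned claim
  initContainers:
    - name: fix-perms
      image: busybox
      # Chown the freshly mounted CephFS volume so the non-root app can write to it
      command: ["sh", "-c", "chown -R 1000:1000 /data"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
  containers:
    - name: app
      image: my-app:latest            # illustrative application image
      securityContext:
        runAsUser: 1000               # the normal user the app runs as
      volumeMounts:
        - name: shared-data
          mountPath: /data
```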