Hi, I’m struggling to get CephFS usable on my RKE-built cluster.
I have an external Ceph appliance. It works just fine with the ceph-rbd driver, but I also need CephFS for shared volumes, and that’s where I’m stuck.
The CephFS filesystem itself seems to be fine, since I can mount it remotely and everything works as expected. I’ve deployed the provisioner as documented here: https://github.com/kubernetes-incubator/external-storage/tree/master/ceph/cephfs/deploy/rbac and it starts up OK. Then I add a storageclass, pv, pvc and pod. No errors there, but nothing appears in the provisioner logs and nothing is created on the CephFS filesystem at all. I’m pretty sure the volumes are being created locally on the hosts or something, because data is persisted. How do I even go about debugging this? Kubernetes is pretty opaque in a lot of places.
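For reference, my StorageClass looks roughly like this (the monitor address, secret name, and namespace are placeholders for my actual values, and I’m going from memory on the exact parameter names the cephfs provisioner expects):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cephfs
provisioner: ceph.com/cephfs        # provisioner name used by the external-storage cephfs deployment
parameters:
  monitors: "10.0.0.1:6789"         # placeholder: my Ceph monitor address
  adminId: admin
  adminSecretName: ceph-admin-secret
  adminSecretNamespace: cephfs      # placeholder: namespace where the secret/provisioner live
```

The PVC then just references `storageClassName: cephfs`, and the pod mounts the claim.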
RKE v0.2.8 hyperkube:v1.14.6-rancher1