I am using Rancher 2.1 and trying to get Pods working with CephFS volumes.
Here is what I did:
- Created a Docker secret from the Rancher UI where I set the Ceph secret key
- Created a PV (the system reported it as Available), where I:
- selected Ceph Filesystem
- filled in the CephFS path
- filled the Secret field with the Docker secret name
- added the monitor: ceph_monitor_ip:6789
- selected Many Nodes Read-Write
- Created a PVC attached to the PV created above; it reached Bound status
- Created a deployment where I pass the existing PVC from the step above
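For reference, I believe the objects Rancher created from those steps are roughly equivalent to the manifests below (the secret name `ceph-secret` and the `10Gi` size are placeholders I made up; the monitor address is redacted the same way as in the error output further down; as far as I know, the in-tree cephfs plugin expects the secret's data key to be named `key`):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret              # placeholder; created via the Rancher UI
stringData:
  key: AQCRG4hcpkFkJRAAe/Zr2jaeifMG/IqmTBM87A==
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ce-test-webdata
spec:
  capacity:
    storage: 10Gi                # placeholder size
  accessModes:
    - ReadWriteMany              # "Many Nodes Read-Write" in the Rancher UI
  cephfs:
    monitors:
      - monIPAddress:6789
    path: /prod/data/ce-test
    user: admin
    secretRef:
      name: ceph-secret          # the secret above
    readOnly: false
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ce-test-webdata-claim    # placeholder name
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  volumeName: ce-test-webdata
```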
When I check the Pod status, I get the following:
```
MountVolume.SetUp failed for volume "ce-test-webdata" : CephFS: mount failed: mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t ceph -o name=admin,secret=AQCRG4hcpkFkJRAAe/Zr2jaeifMG/IqmTBM87A== monIPAddress:6789:/prod/data/ce-test /var/lib/kubelet/pods/6e8da03c-468b-11e9-88d8-9600001c59c5/volumes/kubernetes.io~cephfs/ce-test-webdata
Output: mount: mount monIPAddress:6789:/prod/data/ce-test on /var/lib/kubelet/pods/6e8da03c-468b-11e9-88d8-9600001c59c5/volumes/kubernetes.io~cephfs/ce-test-webdata failed: Connection timed out
```
When I manually mount using the same parameters on one of the servers, it works perfectly. I thought it might be a firewall problem, so I permitted access from all Kubernetes and Rancher nodes, but the problem is still the same.
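Concretely, this is what I run by hand on a node, with the same monitor, path, and secret as in the error output (`/mnt/ce-test` is just a scratch mount point I created for the test):

```shell
# Manual CephFS mount on the node itself -- this succeeds:
sudo mkdir -p /mnt/ce-test
sudo mount -t ceph monIPAddress:6789:/prod/data/ce-test /mnt/ce-test \
  -o name=admin,secret=AQCRG4hcpkFkJRAAe/Zr2jaeifMG/IqmTBM87A==
```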
Do I need ceph-common and ceph-client (the kernel drivers) installed on each Kubernetes node in order for CephFS to work?
Do I need ceph-common and ceph-client in the Docker image itself? (I think this is not necessary, because the mount happens at the node level, not at the Docker image level, but correct me if I am wrong.)
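For context, this is how I have been checking whether a node already has those pieces (as I understand it, `mount.ceph` is shipped by ceph-common, and the `ceph` kernel module is the kernel client):

```shell
# Userspace mount helper from ceph-common:
command -v mount.ceph || echo "mount.ceph (ceph-common) not installed"
# Kernel CephFS client module currently loaded?
lsmod | grep '^ceph ' || echo "ceph kernel module not loaded"
```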
What about the Ceph secret file path when we create the PV: do we need to have the file on each node, and should we pass the path as it exists on the nodes?
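From what I can tell from the Kubernetes cephfs volume type, the secret can be supplied either as a node-local file (`secretFile`, which would have to exist on every node) or as a reference to a Kubernetes secret (`secretRef`, which is what I used), e.g.:

```yaml
# Excerpt of the cephfs block of my PV spec (names are placeholders).
cephfs:
  monitors:
    - monIPAddress:6789
  path: /prod/data/ce-test
  user: admin
  secretRef:
    name: ceph-secret                    # Kubernetes secret; nothing needed on the nodes
  # secretFile: /etc/ceph/admin.secret   # alternative: a file present on each node
  readOnly: false
```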
Thank you in advance!