Hi all. An hour ago this setup worked on “pure” Kubernetes over Ubuntu hosts. Now I’ve created a Rancher/Kubernetes environment and I’m seeing this:
MountVolume.SetUp failed for volume "kubernetes.io/iscsi/06d0c8d1-eb28-11e6-af94-feb54ce8ee4e-gogs-data" (spec.Name: "gogs-data") pod "06d0c8d1-eb28-11e6-af94-feb54ce8ee4e" (UID: "06d0c8d1-eb28-11e6-af94-feb54ce8ee4e") with: executable file not found in $PATH
Any ideas?
Here’s the log of the corresponding kubelet container:
E0205 19:43:30.295426 6823 nestedpendingoperations.go:262] Operation for "\"kubernetes.io/iscsi/e8e5bcf1-ebda-11e6-a38a-de4038fa5158-gogs-db\" (\"e8e5bcf1-ebda-11e6-a38a-de4038fa5158\")" failed. No retries permitted until 2017-02-05 19:44:34.295383992 +0000 UTC (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kubernetes.io/iscsi/e8e5bcf1-ebda-11e6-a38a-de4038fa5158-gogs-db" (spec.Name: "gogs-db") pod "e8e5bcf1-ebda-11e6-a38a-de4038fa5158" (UID: "e8e5bcf1-ebda-11e6-a38a-de4038fa5158") with: executable file not found in $PATH
E0205 19:43:30.296964 6823 iscsi_util.go:107] iscsi: could not read iface default error:
E0205 19:43:30.297094 6823 disk_manager.go:50] failed to attach disk
E0205 19:43:30.297122 6823 iscsi.go:214] iscsi: failed to setup
The problem is somewhere in the kubelet container, I suppose.
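If that’s the case, a quick check is whether iscsiadm is on the kubelet container’s $PATH at all. A rough sketch, assuming the Rancher-managed kubelet runs as a Docker container on the host (the container name/ID below is just a placeholder; find the real one with docker ps):

# Find the kubelet container on the host (its name depends on the Rancher setup)
docker ps | grep -i kubelet

# Check whether iscsiadm is visible inside that container
docker exec <kubelet-container> sh -c 'command -v iscsiadm || echo "iscsiadm not found"'

# For comparison, check the host itself
command -v iscsiadm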
I have the same issue. It seems iscsiadm wasn’t installed in the Rancher-managed kubelet container.
I installed the iSCSI utilities (a bad idea, but just to see whether there is a ‘next problem’), and it gets further but then fails with:
E0710 15:48:07.678830 4675 iscsi_util.go:126] iscsi: failed to sendtargets to portal 192.168.1.190:3260 error: iscsiadm: can't open iscsid.startup configuration file /etc/iscsi/iscsid.conf
iscsiadm: iscsid is not running. Could not start it up automatically using the startup command in the /etc/iscsi/iscsid.conf iscsid.startup setting. Please check that the file exists or that your init scripts have started iscsid.
iscsiadm: can not connect to iSCSI daemon (111)!
iscsiadm: Failed to load module tcp: No such file or directory
iscsiadm: Could not load transport tcp. Dropping interface default.
E0710 15:48:07.678973 4675 disk_manager.go:50] failed to attach disk
E0710 15:48:07.678986 4675 iscsi.go:214] iscsi: failed to setup
E0710 15:48:07.679910 4675 nestedpendingoperations.go:262] Operation for "\"kubernetes.io/iscsi/cd3f4fb8-657c-11e7-99cf-026e601f46e6-trident\" (\"cd3f4fb8-657c-11e7-99cf-026e601f46e6\")" failed. No retries permitted until 2017-07-10 15:50:07.679790016 +0000 UTC (durationBeforeRetry 2m0s). Error: MountVolume.SetUp failed for volume "kubernetes.io/iscsi/cd3f4fb8-657c-11e7-99cf-026e601f46e6-trident" (spec.Name: "trident") pod "cd3f4fb8-657c-11e7-99cf-026e601f46e6" (UID: "cd3f4fb8-657c-11e7-99cf-026e601f46e6") with: exit status 21
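Going by those errors, iscsiadm is now present but it can’t open /etc/iscsi/iscsid.conf, can’t reach a running iscsid, and can’t load the tcp transport. A hedged checklist to run on the container host (package and unit names vary per distro, and since containers share the host kernel, the iscsi_tcp module has to be loaded on the host either way):

# Does the iSCSI config the daemon expects actually exist?
ls -l /etc/iscsi/iscsid.conf /etc/iscsi/initiatorname.iscsi

# Is iscsid running on the host?
systemctl status iscsid

# Is the tcp transport module loaded in the host kernel?
lsmod | grep iscsi_tcp || modprobe iscsi_tcp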
It looks like there are several ways things can break if the kubelet can’t manage the iSCSI configuration of the container host, which may or may not be intentional in Rancher-managed k8s.
Should a Rancher-managed k8s environment work with iSCSI k8s volumes?
I then tried NFS volumes, with comparable results suggesting that the kubelet container != the container host:
E0710 17:46:48.677148 4675 mount_linux.go:119] Mount failed: exit status 32
Mounting command: mount
Mounting arguments: 10.64.35.78:/trident_trident /var/lib/kubelet/pods/523aef2e-6596-11e7-99cf-026e601f46e6/volumes/kubernetes.io~nfs/trident nfs []
Output: mount.nfs: rpc.statd is not running but is required for remote locking.
mount.nfs: Either use '-o nolock' to keep locks local, or start statd.
mount.nfs: an incorrect mount option was specified
E0710 17:46:48.680649 4675 nestedpendingoperations.go:262] Operation for "\"kubernetes.io/nfs/523aef2e-6596-11e7-99cf-026e601f46e6-trident\" (\"523aef2e-6596-11e7-99cf-026e601f46e6\")" failed. No retries permitted until 2017-07-10 17:48:48.680440904 +0000 UTC (durationBeforeRetry 2m0s). Error: MountVolume.SetUp failed for volume "kubernetes.io/nfs/523aef2e-6596-11e7-99cf-026e601f46e6-trident" (spec.Name: "trident") pod "523aef2e-6596-11e7-99cf-026e601f46e6" (UID: "523aef2e-6596-11e7-99cf-026e601f46e6") with: mount failed: exit status 32
Mounting command: mount
Mounting arguments: 10.64.35.78:/trident_trident /var/lib/kubelet/pods/523aef2e-6596-11e7-99cf-026e601f46e6/volumes/kubernetes.io~nfs/trident nfs []
Output: mount.nfs: rpc.statd is not running but is required for remote locking.
mount.nfs: Either use '-o nolock' to keep locks local, or start statd.
mount.nfs: an incorrect mount option was specified
My container host runs CoreOS Container Linux. Could that be related? From that same CoreOS host I can mount NFS and iSCSI volumes directly, though…
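For the NFS case, mount.nfs itself says what’s missing: rpc.statd isn’t running wherever the mount ends up executing, and the alternatives it offers are starting statd or mounting with -o nolock. A quick check on the container host (rpcbind and rpc-statd are the usual systemd unit names, but they can differ per distro):

# See whether the NFS helper daemons are running on the host
systemctl status rpcbind rpc-statd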
*** UPDATE *** If I start the rpcbind service on the container host (systemctl enable rpcbind; systemctl start rpcbind), then NFS-based container storage mounts properly. iSCSI is still having issues…
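By analogy with the rpcbind workaround, and since the iscsiadm errors above complain about iscsid not running and the tcp transport not loading, the iSCSI-side equivalent on the host might look like the sketch below. This assumes CoreOS Container Linux ships an iscsid unit and the iscsi_tcp module; verify both on your image before relying on it:

# Start the iSCSI daemon on the container host
systemctl enable iscsid
systemctl start iscsid

# Make sure the tcp transport module is loaded in the host kernel
modprobe iscsi_tcp

Even with that in place, whether the Rancher-managed kubelet container can actually see the host’s /etc/iscsi and reach that iscsid is a separate question, which may be the real problem here.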