I’m trying to mount an iSCSI LUN as a persistent volume in Rancher v2.8.2.
My Rancher setup runs on a single Proxmox server, where each node is a VM running Ubuntu 24.04 LTS.
I have a Synology NAS which supports both NFS and iSCSI shares. I can set up persistent volumes using NFS, and I can mount iSCSI LUNs on other Linux hosts or VMs running in Proxmox, so that part works.
Now I’m trying to figure out how that kind of setup is supposed to work with Rancher. As far as I understand, it doesn’t support iSCSI volume definitions directly, but I can define a volume inline in a pod spec and the kubelet will provision it when the pod is deployed.
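For reference, this is the kind of inline definition I mean. The portal address and IQN below are placeholders standing in for my Synology, not real values:

```yaml
# Example only — targetPortal and iqn are placeholders for the NAS.
apiVersion: v1
kind: Pod
metadata:
  name: iscsi-test
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: iscsi-vol
          mountPath: /mnt/lun
  volumes:
    - name: iscsi-vol
      iscsi:
        targetPortal: 192.168.1.50:3260            # placeholder NAS address
        iqn: iqn.2000-01.com.synology:nas.Target-1 # placeholder IQN
        lun: 1
        fsType: ext4
        readOnly: false
```

When the pod is scheduled, it is the kubelet on that node that runs iscsiadm to log in to the target, which is exactly where my problem starts.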
What I see is that when iscsiadm runs, it reads /etc/iscsi/iscsid.conf, which is configured to use systemd to start iscsid inside the kubelet container. This doesn’t work because the container (Ubuntu 22.04, fwiw, as far as I can see) doesn’t have systemd and can’t start the daemon.
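For context, this is the setting I mean. On my Debian/Ubuntu hosts, /etc/iscsi/iscsid.conf ships with a startup command that assumes systemd is present:

```
# Default on Debian/Ubuntu — assumes systemd is available,
# which is not the case inside the kubelet container:
iscsid.startup = /bin/systemctl start iscsid.socket iscsid.service
```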
I found an old workaround suggested in the docs: mount the initiator from the host into the container (iscsiadm and its configs are bind-mounted into the kubelet). If I do that, iscsiadm doesn’t work because the glibc versions don’t match and it bails. I could try to build VMs with an Ubuntu version matching the Ubuntu in the kubelet image, but I’m not sure that would work.
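If I read those old docs right, the workaround looked roughly like this in RKE1-style cluster.yml terms (the paths are what I’d expect on Ubuntu, and I’m not sure what the exact RKE2 equivalent is):

```yaml
# Old RKE1-style workaround (sketch) — bind-mount the host initiator
# into the kubelet container; exact paths may differ per distro.
services:
  kubelet:
    extra_binds:
      - "/etc/iscsi:/etc/iscsi"
      - "/sbin/iscsiadm:/sbin/iscsiadm"
```

This is precisely the setup that fails for me with the glibc mismatch, since the host binary is then executed against the container’s libraries.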
How is the final working system supposed to operate? Run the daemon inside the container and let it mount LUNs into host directories? Any pointers are appreciated.
After some experimenting, as far as I understand it, iscsid on the host does the actual attaching, and iscsiadm populates the configuration database that iscsid works from (roughly). The advice to mount /etc/iscsi and /lib/… achieves exactly that: the kubelet container uses its own iscsiadm, which writes to the relevant “db” and triggers iscsid on the host to attach the device. The kubelet then watches /dev/disk/by-path for the new device to appear so that it can be mounted.
I ended up taking Rancher/k8s/Docker out of the picture and simply checking that I can mount the device on the host. And now I see that it fails to do so: it logs in to the NAS successfully but then fails to create a block device. No errors, nothing. The host is running Ubuntu 24.04 LTS, which matches what the container runs in terms of iscsiadm versions.
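Concretely, the manual check I did on the host looks like this (portal and IQN are placeholders for my NAS — this obviously needs a real target to run against):

```
# Placeholder portal/IQN — substitute the real NAS values.
iscsiadm -m discovery -t sendtargets -p 192.168.1.50:3260
iscsiadm -m node -T iqn.2000-01.com.synology:nas.Target-1 -p 192.168.1.50:3260 --login

# After a successful login, a device link should appear here —
# on the 24.04 host it never does:
ls -l /dev/disk/by-path/ | grep iscsi
```

The login step reports success, but the last step shows no device, and dmesg is silent too.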
I created another VM with Ubuntu 24.10, with a newer kernel and newer open-iscsi version, and it can mount the drives without issues. Unfortunately, the newer open-iscsi uses a different directory schema to store its data, so it seems the version inside the container won’t be able to talk to the host daemon, and mounting the host binaries into the container also doesn’t work because of the glibc mismatch.
So I’m sort of stuck here, with the k8s container being incompatible with, or hitting bugs in, the host’s iSCSI stack.