Has anyone gotten openebs working with rancher? We've pulled all our hair out

Openebs works flawlessly on our stand-alone clusters built with kubeadm.

I even rebuilt one of these working clusters with rancher so I know this set of hosts has zero issues, no network problems, nada.

I’ve tried every supposed workaround I’ve found.

Right now I'm trying the workaround where you bind-mount the node's /etc/iscsi and /sbin/iscsiadm into the kubelet container.
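For reference, that bind-mount workaround looks roughly like this in an RKE-style cluster.yml (a sketch, assuming RKE's services.kubelet section; adjust paths and keys for your Rancher version):

```yaml
# cluster.yml (RKE) - expose the host's iscsi config and client
# binary inside the containerized kubelet:
services:
  kubelet:
    extra_binds:
      - "/etc/iscsi:/etc/iscsi"
      - "/sbin/iscsiadm:/sbin/iscsiadm"
```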

  Type     Reason                  Age               From                             Message
  ----     ------                  ----              ----                             -------
  Warning  FailedScheduling        1m (x25 over 1m)  default-scheduler                pod has unbound PersistentVolumeClaims (repeated 19 times)
  Normal   SuccessfulAttachVolume  1m                attachdetach-controller          AttachVolume.Attach succeeded for volume "default-openebs-percona-test-pvc-561804592"
  Warning  FailedMount             27s (x8 over 1m)  kubelet, st13p01if-ztds19293201  MountVolume.WaitForAttach failed for volume "default-openebs-percona-test-pvc-561804592" : failed to get any path for iscsi disk, last err seen:
iscsi: failed to sendtargets to portal output: libkmod: ERROR ../libkmod/libkmod.c:514 lookup_builtin_file: could not open builtin file '/lib/modules/4.14.35-1818.2.1.el7uek.x86_64/modules.builtin.bin'
libkmod: ERROR ../libkmod/libkmod.c:586 kmod_search_moddep: could not open moddep file '/lib/modules/4.14.35-1818.2.1.el7uek.x86_64/modules.dep.bin'
libkmod: ERROR ../libkmod/libkmod.c:586 kmod_search_moddep: could not open moddep file '/lib/modules/4.14.35-1818.2.1.el7uek.x86_64/modules.dep.bin'
libkmod: ERROR ../libkmod/libkmod-module.c:832 kmod_module_insert_module: could not find module by name='iscsi_tcp'

We're running out of time to include this product in our POC. Rancher started the POC as the leading favorite, so we're fighting to get this working, but we're simply running out of time and haven't found any viable support resources.


I just battled with the same issue. The problem for me was that the image I was using had iscsid running. After removing it (actually by providing cloud-config), I could install OpenEBS from the Helm stable repo:

 - [ service, iscsid, stop ]
 - [ apt, remove, -y, open-iscsi ]
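For context, those two commands go under the runcmd key of a full cloud-config file (a sketch, assuming cloud-init's runcmd list syntax and an apt-based image):

```yaml
#cloud-config
# Stop the host's iscsid and remove open-iscsi so the host daemon
# doesn't conflict with the iscsi tooling inside the kubelet container.
runcmd:
  - [ service, iscsid, stop ]
  - [ apt, remove, -y, open-iscsi ]
```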

So all you did was remove/disable iscsid on the Rancher nodes? And you're going to run/host custom images going forward? This is one of the main reasons we are considering a custom kube distro in the first place: to avoid doing stuff like this.

We aren’t having issues deploying openebs. It deploys fine.

Just curious, what OS is running on your nodes?

Yes, I removed iscsid from the Rancher nodes. Something similar is done in this blog post: Running OpenEBS On Custom Rancher Cluster | by Chandan Sagar Pradhan | OpenEBS

Actually, I could deploy OpenEBS fine, but it just would not let me mount anything. It seemed to be working, but there was a conflict with the node's iscsi, which led to weird problems when mounting.

In this case I'm using Hetzner Cloud and Ubuntu 16.04 LTS.

Yeah, that workaround only works if you're also running Ubuntu at the node level.

We run Oracle Linux and have no option to run anything else. We are trying to get a CentOS image certified by our security team and will adjust our tools as necessary, but Ubuntu won't ever be on that list.


Thanks for the info.

Are you adding the extra_binds option for /lib/modules:/lib/modules to the kubelet?
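That bind would look like this in cluster.yml (a sketch, again assuming RKE's services.kubelet section); it lets the containerized kubelet see the host's kernel modules, which is what the modules.dep/iscsi_tcp errors in the log above are complaining about:

```yaml
# cluster.yml (RKE) - give the kubelet container access to the
# host's kernel module tree so modprobe/kmod lookups succeed:
services:
  kubelet:
    extra_binds:
      - "/lib/modules:/lib/modules"
```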

How or where are you running this?


  - [ service, iscsid, stop ]
  - [ apt, remove, -y, open-iscsi ]

That is the cloud-config script, so when a new node is launched it will run those commands.

But of course you can ssh in and run:

sudo service iscsid stop
sudo apt remove -y open-iscsi

@rhugga @Troyhy this blog post is useful for learning about deploying stateful workloads with OpenEBS and Rancher 2.x.

I had the same issue: after installing the OpenEBS Helm chart, the Jiva-provisioned volumes could not be mounted and were failing with the error "iscsi driver not loaded".
I just SSH'd into my nodes and ran these commands:

sudo service iscsid stop
sudo apt remove -y open-iscsi


Well, don't remove open-iscsi from the node/VM, just disable it. And if you restart the VM or nodes, make sure to run sudo service iscsid stop on the restarted nodes again.
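Since the advice here is to disable rather than remove, a disable-only variant of the earlier cloud-config sketch might look like this (an assumption: a systemd-based image, where disable --now stops the unit and keeps it from starting on reboot, so you don't have to re-run the stop command after every restart; masking is optional insurance against socket activation restarting it):

```yaml
#cloud-config
# Disable iscsid without removing the open-iscsi package, so the
# host daemon never conflicts with the kubelet container's iscsi
# tooling, even across node reboots.
runcmd:
  - [ systemctl, disable, --now, iscsid ]
  - [ systemctl, mask, iscsid ]
```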