Local PVs fail on custom cluster with library-openebs app

Howdy,
Anyone having luck running local PVs with the ‘library - openebs’ app in Rancher? I have a ‘custom’ bare-metal cluster and can get volumes working on top of cstor pools I’ve defined, but local PVs won’t mount. I was hoping this would just work ‘out of the box’ since it’s listed as a partner app in the Rancher library, but maybe there are some tweaks needed.

Hello Daniel, depending on how you’d like to place the storage, you may have to configure a Dynamic LocalPV StorageClass to use. On your bare-metal hosts, do you have a block-device that you can dedicate to OpenEBS? Would you like to use Hostpath style, or Device?

Hostpath is the easiest: simply mount the additional storage anywhere on the node, then create a StorageClass that points to it. Something like this:

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    cas.openebs.io/config: "- name: StorageType\n  value: \"hostpath\"\n- name: BasePath\n
      \ value: /mnt/disks/ssd0 \n"
  name: localpv-hostpath-ssd0
provisioner: openebs.io/local
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer

Using LocalPV Device requires that you have some additional, unused block devices available on your nodes. It has some nice advantages, but takes more planning.
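
If you go the Device route, the StorageClass looks roughly like the sketch below. This assumes the stock openebs.io/local provisioner installed by the library app; the class name is just a placeholder:

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    openebs.io/cas-type: local
    cas.openebs.io/config: |
      - name: StorageType
        value: "device"
  name: localpv-device   # placeholder name
provisioner: openebs.io/local
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer

WaitForFirstConsumer matters for both flavours, since it keeps the PV on the node where the consuming pod is actually scheduled.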

Hi Daniel,

If you are using the Local PV hostpath, then you will need to configure the extra binds as follows:

services:
  kubelet:
    extra_binds:
      - /var/openebs/local:/var/openebs/local
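
On a Rancher custom cluster that setting belongs in the RKE portion of the cluster spec; if you edit the cluster as YAML in the Rancher UI, it sits roughly like this sketch (assuming the usual rancher_kubernetes_engine_config wrapper):

# Sketch: Rancher UI > Cluster > Edit > Edit as YAML
rancher_kubernetes_engine_config:
  services:
    kubelet:
      extra_binds:
        - /var/openebs/local:/var/openebs/local

Saving the change should trigger a cluster update that restarts the kubelet containers with the new bind mount.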

Thanks, appreciate the replies.
@kiranmova I had somehow missed the prereq of setting up the extra_binds for kubelet, probably since I was focusing on cstor at first; thanks for helping me circle back.

@bmath When provisioning a local PV based on device, am I correct in assuming that the PV-to-device mapping will be 1:1? Depending on the use case that might be OK, but if a particular node needs a lot of dynamically provisioned PVs, that won’t work so well. In your opinion, what are the advantages of using a device for PVs instead of a hostpath?

Hmm, still not working.
I’ve added the extra_binds entry, restarted the kubelet containers, stepped into them, and verified that /var/openebs/local is indeed mounted properly and that I can write to that directory from within the kubelet container. I’ve also verified that the iscsi service on the target node is active, and double-checked that I’ve done everything outlined here:

But when I attempt to dynamically provision a PV using the vanilla StorageClass that OpenEBS sets up on install (‘openebs-hostpath’), it still fails with:
Warning FailedMount 3m48s (x53 over 18m) kubelet, node1 Unable to attach or mount volumes: unmounted volumes=[u1-pvc], unattached volumes=[u1-pvc default-token-h2pk7]: error processing PVC dev/u1-pvc: PVC is not bound

Can you share your pvc yaml with us?

sure. It was automatically generated by the Rancher UI when I created a test deployment with a single pod w/ the ubuntu:bionic image and:
Add Volume > Add a new persistent volume (claim) w/ storage class openebs-hostpath:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    field.cattle.io/creatorId: user-xxxxx
  creationTimestamp: "2020-03-06T17:57:03Z"
  finalizers:
  - kubernetes.io/pvc-protection
  labels:
    cattle.io/creator: norman
  name: u1-pvc
  namespace: dev
  resourceVersion: "54457852"
  selfLink: /api/v1/namespaces/dev/persistentvolumeclaims/u1-pvc
  uid: 6509f630-b547-4283-b273-344e8734e431
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
  storageClassName: openebs-hostpath
  volumeMode: Filesystem
status:
  phase: Pending

LGTM.

Can you also share the output of kubectl get pv -A?

Do you have 100G available on the fs where /var/openebs is?

Interesting; it doesn’t seem to get as far as to even create a PV.

$ kubectl get pv -A
No resources found

and here’s the pending pvc:

$ kubectl -n dev get pvc
NAME     STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS       AGE
u1-pvc   Pending                                      openebs-hostpath   97

Looking at details on that pvc, at the end it has the message:
Normal WaitForFirstConsumer 2m44s (x422 over 107m) persistentvolume-controller waiting for first consumer to be created before binding

…so it seems to be waiting on the pod to come up, but the pod seems to be stuck in ‘ContainerCreating’ state because it’s waiting for the pvc to become available??
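
For context, my understanding is that with WaitForFirstConsumer the PV is only provisioned once a pod referencing the claim is actually scheduled; something as minimal as this sketch (name and mount path are placeholders, not what the Rancher UI generated) should be enough to trigger binding:

apiVersion: v1
kind: Pod
metadata:
  name: u1-test            # placeholder name
  namespace: dev
spec:
  containers:
    - name: ubuntu
      image: ubuntu:bionic
      command: ["sleep", "infinity"]   # keep the container alive
      volumeMounts:
        - name: data
          mountPath: /data             # placeholder mount path
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: u1-pvc              # the Pending claim above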

There is plenty of space on the filesystem that /var/openebs/local is mounted on (nearly 1TB)

BTW, what does the -A arg do for kubectl? I don’t see it listed under kubectl options

This is getting more interesting.
Just for kicks and giggles I deployed the percona Deployment found on this doc page:


Lo and behold, the PVC set up properly, no problems whatsoever!
I then launched the ‘elasticsearch’ app from the Helm catalog, and again it was able to provision PVCs with no problem using the default SC, which is openebs-hostpath.

This is good news, since it tells me that OpenEBS is generally playing nice with my cluster. The question, then, is: why is the single Ubuntu deployment with a PVC created via the UI failing?