OpenEBS on Bare Metal with Rancher 2.2.8 or 2.3.0

Hello Gentlemen,
I am looking for some help getting OpenEBS set up on my Rancher cluster. My goal is automatic provisioning with the OpenEBS Dynamic Local PV provisioner (hostpath).
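
To make this concrete, what I want is that a claim like the one below gets a local PV created for it automatically. This is just a minimal sketch; the name and size are examples, only the storage class matters:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: example-hostpath-claim        # example name
spec:
  storageClassName: openebs-hostpath  # default hostpath class shipped with OpenEBS
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5G                      # example size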

My setup is a single-node cluster on a root server from Netcup. I run a single-node Rancher Server installation on the same server, with non-default ports mapped for HTTP/HTTPS.

In both attempts, the Kubernetes cluster I tried to install OpenEBS on was freshly provisioned with the respective Rancher Server version of my installation.

First try with Rancher 2.3.0:

  • Here the installation of OpenEBS with the default configuration options already failed. As 2.2.8 is currently listed as the stable release, and since I am a newbie regarding Kubernetes and Rancher, I switched to 2.2.8.

Second try with Rancher 2.2.8:

  • Installation of OpenEBS seems to work: all pods are up and running as described on the OpenEBS website, and the default OpenEBS storage classes were installed as well (checked roughly as shown below).
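
For reference, the state can be checked with something like this (standard kubectl; the exact list of storage classes may differ depending on the OpenEBS version):

kubectl get pods -n openebs     # all OpenEBS control-plane pods should be Running
kubectl get storageclass        # should include openebs-hostpath among the defaults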

I have now tried two things:

  1. Enabling Grafana/Prometheus monitoring on my Rancher cluster with “Enable Persistent Storage for Grafana/Prometheus” set to “openebs-hostpath”: the pods simply do not come up, and I cannot see any output in the logs. I can see the relevant PVCs, but the PVs are not created.

  2. To debug this further I followed the example from the OpenEBS website
    https://docs.openebs.io/docs/next/uglocalpv.html
    and used the manifest on that page to spin up a Percona database with the “openebs-hostpath” storage class. That pod does not come up either. When running

kc describe pod percona-5f878cbbd5-c4dnj

I get the output attached below. One further note: I am using RancherOS as the operating system, and before installing anything I activated iSCSI according to the description on the OpenEBS page:

sudo ros s enable open-iscsi
sudo ros s up open-iscsi
sudo ros config set rancher.services.user-volumes.volumes [/home:/home,/opt:/opt,/var/lib/kubelet:/var/lib/kubelet,/etc/kubernetes:/etc/kubernetes,/var/openebs]
sudo system-docker rm all-volumes
sudo reboot
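
To double-check that these settings survived the reboot, something like the following should work on the node (a sketch, not necessarily the canonical way to verify it):

sudo ros config get rancher.services.user-volumes.volumes   # should include /var/openebs
sudo ros service list | grep iscsi                          # open-iscsi should show as enabled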

One further note: when starting the deployment, I can see that OpenEBS successfully creates a folder for the host volume at this path:

/var/openebs/local/pvc-caec8f21-edd6-11e9-89d7-56ade17ea74b

even though the events below claim that the path does not exist:

MountVolume.NewMounter initialization failed for volume "pvc-caec8f21-edd6-11e9-89d7-56ade17ea74b" : path "/var/openebs/local/pvc-caec8f21-edd6-11e9-89d7-56ade17ea74b" does not exist

However, the folder is empty; I would have expected some file representing the volume in there. Can someone help me get OpenEBS running? I have no idea how to narrow down the issue further. :frowning:
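
If someone can tell me what to look for, I can of course provide more output, for example from the claim and from the Local PV provisioner (the pod name below is just a placeholder for whatever the second command returns):

kc describe pvc demo-vol1-claim
kc get pods -n openebs                        # find the Local PV provisioner pod
kc logs -n openebs <localpv-provisioner-pod>  # placeholder name
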
Thanks a lot for your help!

Best regards,
Christoph

Name:           percona-5f878cbbd5-c4dnj
Namespace:      default
Priority:       0
Node:           v220191010512198810/185.163.117.11
Start Time:     Sun, 13 Oct 2019 18:30:45 +0200
Labels:         name=percona
                pod-template-hash=5f878cbbd5
Annotations:
Status:         Pending
IP:
IPs:
Controlled By:  ReplicaSet/percona-5f878cbbd5
Containers:
  percona:
    Container ID:
    Image:          percona
    Image ID:
    Port:           3306/TCP
    Host Port:      0/TCP
    Args:
      --ignore-db-dir
      lost+found
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:  500m
    Requests:
      cpu:  500m
    Environment:
      MYSQL_ROOT_PASSWORD:  k8sDem0
    Mounts:
      /var/lib/mysql from demo-vol1 (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-f9789 (ro)
Conditions:
  Type             Status
  Initialized      True
  Ready            False
  ContainersReady  False
  PodScheduled     True
Volumes:
  demo-vol1:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  demo-vol1-claim
    ReadOnly:   false
  default-token-f9789:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-f9789
    Optional:    false
QoS Class:       Burstable
Node-Selectors:
Tolerations:     ak=av:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                   From                          Message
  ----     ------            ---                   ----                          -------
  Warning  FailedScheduling  4m47s                 default-scheduler             persistentvolumeclaim "demo-vol1-claim" not found
  Normal   Scheduled         4m41s                 default-scheduler             Successfully assigned default/percona-5f878cbbd5-c4dnj to v220191010512198810
  Warning  FailedMount       31s (x10 over 4m41s)  kubelet, v220191010512198810  MountVolume.NewMounter initialization failed for volume "pvc-caec8f21-edd6-11e9-89d7-56ade17ea74b" : path "/var/openebs/local/pvc-caec8f21-edd6-11e9-89d7-56ade17ea74b" does not exist
  Warning  FailedMount       22s (x2 over 2m38s)   kubelet, v220191010512198810  Unable to mount volumes for pod "percona-5f878cbbd5-c4dnj_default(cadfbaad-edd6-11e9-89d7-56ade17ea74b)": timeout expired waiting for volumes to attach or mount for pod "default"/"percona-5f878cbbd5-c4dnj". list of unmounted volumes=[demo-vol1]. list of unattached volumes=[demo-vol1 default-token-f9789]

Hello everyone,
it seems I have gotten a little closer to my problem.
After trying to install Longhorn, which is also not working for me, I found a diagnostics script which tells me:

node : MountPropagation DISABLED

MountPropagation is disabled on at least one node.
As a result, CSI driver and Base image cannot be supported.

So I assume this is the reason why neither OpenEBS nor Longhorn is working for me. Can somebody give me directions on how to enable this setting? My Rancher server is based on the single-node Docker container. Thanks for your help.
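
For what it is worth, my current understanding is that the check boils down to whether the kubelet's root directory is a shared mount as seen from inside the kubelet container. A rough way to inspect this (assuming RKE's container is named kubelet and findmnt is available in its image):

# run on the node; PROPAGATION should be "shared" (or "rshared"),
# "private" would match the DISABLED result from the diagnostics script
docker exec kubelet findmnt -T /var/lib/kubelet -o TARGET,PROPAGATION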

Best regards,
Christoph

Hi Christoph,

We can give you a hand with installing and configuring OpenEBS on your bare metal Rancher deployment if you join us on the OpenEBS Slack. We have a number of users happy with the way Rancher and OpenEBS work together. Just go to:

to request an invitation!

Based on your description it sounds like you are very close, but we’ll need to take a look at your StorageClass definitions and the state of the underlying disks. I’m sure we can help you get to the bottom of it though.

Cheers,
Brian

Hello Brian,
I have already been trying for two days to get help via the Slack channel you mentioned. Unfortunately, I only get the recommendation to follow the description on the web page, which of course I had already done before I started asking.
I really need some more advanced help on how to debug this further.

Best regards,
Christoph

Hi all,

I hope you get this solved eventually. Please post the solution here if you do.

Cheers, Remi.

Hello,
we solved the issue. You need to add some extra_binds to your cluster config YAML. The kubelet in a Rancher/RKE-provisioned cluster runs inside a Docker container, so host paths such as /var/openebs have to be bind-mounted into that container; otherwise the kubelet cannot see the directory the provisioner created, which explains the “path does not exist” error above. If you provision your cluster with Rancher, you can set this where you create the cluster via “Edit as YAML”. In the services section you need to add the kubelet section. For me it looks as follows:

services: 
  etcd: 
    backup_config: 
      enabled: true
      interval_hours: 12
      retention: 6
    creation: "12h"
    extra_args: 
      heartbeat-interval: 500
      election-timeout: 5000
    retention: "72h"
    snapshot: false
  kube-api: 
    always_pull_images: false
    pod_security_policy: false
    service_node_port_range: "30000-32767"
  kubelet: 
    extra_binds: 
      - "/etc/iscsi:/etc/iscsi"
      - "/sbin/iscsiadm:/sbin/iscsiadm"
      - "/var/lib/iscsi:/var/lib/iscsi"
      - "/lib/modules"
      - "/var/openebs:/var/openebs"
    fail_swap_on: false

One further note: for local PV / hostpath alone you would only need “/var/openebs/local:/var/openebs/local”. Since it is not yet clear to me what the /var/openebs/sparse folder is required for, I simply bound the entire /var/openebs path.
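
So for the hostpath-only case the kubelet section could be reduced to roughly this (I have only tested the full /var/openebs bind shown above):

  kubelet:
    extra_binds:
      - "/var/openebs/local:/var/openebs/local"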

Best regards,
Christoph

Hi @cjohn001, I appreciate you recording your experience here. I’m attempting to run OpenEBS local PVs on Rancher 2.3.5: I had successfully run cStor pools out of the box, but am hitting an ‘Unable to attach or mount volume’ wall and suspect your issue might be similar to mine.

Do you know if there is a way to edit the cluster config for an existing cluster? I don’t see an option (through the UI at least) to edit the cluster YAML, and it sounds like what you’re describing only applies when creating the cluster.

Hello Daniel,
I am sorry, I have no idea.

Best regards,
Christoph

I don’t know how I missed this, but there’s an ‘Edit as YAML’ option on existing clusters via the UI.