CEPH Config options for Storage Class

Is there any documentation or examples of how to get CEPH volumes working in 2.0?

What do the Storage Class Parameters map to? What are the “Admin ID” and “User ID” values for?

I tried creating an “admin” and “client” secret, but I can’t figure out how to put the CEPH client auth values into the secret. What is the key in the secret to use?

When I describe the pvc, I get the following events:

Events:
  Type     Reason              Age               From                         Message
  ----     ------              ----              ----                         -------
  Warning  ProvisioningFailed  9s (x3 over 39s)  persistentvolume-controller  Failed to provision volume with StorageClass "ceph": failed to get admin secret from ["ceph"/"ceph-admin-keyring"]: failed to get secret from ["ceph"/"ceph-admin-keyring"]

After doing a lot of troubleshooting/trial-and-error/Googling, I finally found the solution to get CEPH working.

The answers to my questions above about the different fields in the Rancher RBD StorageClass are explained here.

Here are the steps to get CEPH working in your Kubernetes cluster. This is not specifically related to Rancher, and should work with any Kubernetes cluster.

This presumes you already have a working CEPH cluster.

Run the ceph commands on one of your CEPH monitor nodes.

Create a CEPH pool for Kubernetes to use. Modify the 128 values to match your desired placement group count.

ceph osd pool create kube 128 128

Enable the rbd application on your CEPH pool.

ceph osd pool application enable kube rbd

Set the CRUSH tunables to the hammer profile (credit).

ceph osd crush tunables hammer

Create a CEPH auth token for Kubernetes to access the pool.

ceph auth add client.kube mon 'allow r' osd 'allow rwx pool=kube'

Your CEPH cluster (ceph -s) should show health HEALTH_OK and your pgs should be active+clean.

Create a YAML file for your Secrets and StorageClass.
Set the secret key values using the output of the ceph commands shown in the comments below, and list all of your CEPH monitor nodes in the StorageClass.

apiVersion: v1
kind: Secret
metadata:
  name: ceph-admin-secret
  namespace: kube-system
type: "kubernetes.io/rbd"
data:
  # ceph auth get-key client.admin | base64
  key: <your output from above>

---
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
  namespace: kube-system
type: "kubernetes.io/rbd"
data:
  # ceph auth get-key client.kube | base64
  key: <your output from above>
---

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ceph
provisioner: kubernetes.io/rbd
parameters:
  monitors: <mon-ip-1>:6789,<mon-ip-2>:6789,<mon-ip-3>:6789
  adminId: admin
  adminSecretName: ceph-admin-secret
  adminSecretNamespace: kube-system
  pool: kube
  userId: kube
  userSecretName: ceph-secret
  userSecretNamespace: kube-system
  fsType: ext4
  imageFormat: "2"
  imageFeatures: "layering"
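
Once the file is saved, apply it to the cluster. (The filename ceph-rbd.yaml below is just an example; use whatever you named your file.)

kubectl apply -f ceph-rbd.yaml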

I also found that you need to bind-mount /lib/modules into the kubelet container. This appears to be a Rancher bug.

If you are using RKE, you can add the following section to cluster.yml:

services:
  kubelet:
    extra_binds:
      - "/lib/modules:/lib/modules"

Now you should be able to create a pod in Kubernetes that uses a dynamically provisioned CEPH volume.

In the Rancher workload, go to Volumes > Add Volume > Add a new persistent volume (claim)
Enter a name, select ceph as the storage class, enter the volume size you want, and click Define.
Type in the path to mount the volume, and you should be good to go!
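
If you prefer kubectl over the Rancher UI, here is a minimal sketch of an equivalent PVC and pod. The names, image, size, and mount path are just examples; the storageClassName must match the StorageClass created above.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ceph-claim            # example name
spec:
  storageClassName: ceph      # the StorageClass defined above
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi            # example size
---
apiVersion: v1
kind: Pod
metadata:
  name: ceph-test             # example name
spec:
  containers:
    - name: app
      image: nginx            # example image
      volumeMounts:
        - name: data
          mountPath: /data    # example mount path
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: ceph-claim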


Hello! In my case the improvement described above was not enough.
I have a k8s cluster in Hetzner managed by Rancher 2.0 and a separate CEPH cluster.

I needed to add the following extra_binds to the kubelet config:

extra_binds:
  - "/lib/modules:/lib/modules"
  - "/etc/ceph:/etc/ceph"
  - "/usr/sbin/modprobe:/usr/sbin/modprobe"


Are you running Kubernetes 1.12? I found that in one cluster running 1.12, I needed to also mount the /etc/ceph volume, as well as have ceph-common installed on the hosts. In 1.11.3, I only needed the bind that I listed to get it to work.

@shubbard343 Is there a particular key the Secret needs to have? I’m still getting the failed to get admin secret error.

The key for the secret is key.
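
If it helps, here is one way to create the admin secret directly with kubectl instead of the YAML above (this assumes you run it somewhere the ceph CLI has access to the keyrings; the name and namespace match the manifests earlier in the thread):

kubectl create secret generic ceph-admin-secret \
  --namespace=kube-system \
  --type="kubernetes.io/rbd" \
  --from-literal=key="$(ceph auth get-key client.admin)"

Note that --from-literal base64-encodes the value for you, so you do not need to pipe it through base64 yourself.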

@shubbard343, thanks for your great instructions.

In my case, I had to grant higher privileges to the CEPH user account to enable it to mount the image to the pod:

ceph auth get-or-create client.k8suser1 mon 'allow r' osd 'allow rwx pool=k8spool' -o /etc/ceph/ceph.client.k8suser1.keyring

And once I copied the keyring file to all nodes, I could create volumes without any problem.
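
A minimal sketch of copying the keyring to every node over SSH (the hostnames are placeholders):

for node in node1 node2 node3; do
  scp /etc/ceph/ceph.client.k8suser1.keyring "$node":/etc/ceph/
done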

Following your tutorial I was able to make it work perfectly with Rancher 2.4.
Now I have a new cluster with Rancher 2.5, but it does not create the rbd volume anymore.
Please note that in “events” I have no errors.
Can you help me?
Thanks,
Mario