After doing a lot of troubleshooting/trial-and-error/Googling, I finally found the solution to get CEPH working.
The answers to my questions above about the different fields in the Rancher RBD StorageClass are explained here.
Here are the steps to get CEPH working in your Kubernetes cluster. This is not specifically related to Rancher, and should work with any Kubernetes cluster.
This presumes you already have a working CEPH cluster.
Run the ceph commands on one of your CEPH monitor nodes.
Create a CEPH pool for Kubernetes to use. Modify the 128 values to match your desired placement group count.
ceph osd pool create kube 128 128
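A common rule of thumb for picking that placement group count (general Ceph guidance, not something specific to this setup) is to target roughly 100 PGs per OSD, divided by the replication factor, rounded up to a power of two. A quick sketch, where the osds and replicas values are hypothetical and should be replaced with your cluster's numbers:

```shell
# Rough PG-count sketch: ~100 PGs per OSD divided by the replication
# factor, rounded up to the next power of two.
osds=6        # hypothetical OSD count
replicas=3    # hypothetical pool replication factor
target=$(( osds * 100 / replicas ))
pg=1
while [ "$pg" -lt "$target" ]; do
  pg=$(( pg * 2 ))
done
echo "$pg"    # with 6 OSDs and 3 replicas this prints 256
```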
Enable the rbd application on your CEPH pool.
ceph osd pool application enable kube rbd
Set the CRUSH tunables to the hammer profile (credit).
ceph osd crush tunables hammer
Create a CEPH auth token for Kubernetes to access the pool.
ceph auth add client.kube mon 'allow r' osd 'allow rwx pool=kube'
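The key values used in the secrets below are the output of ceph auth get-key piped through base64. As an illustration, with a made-up placeholder key (not a real Ceph key):

```shell
# On a monitor node, the real commands are:
#   ceph auth get-key client.admin | base64
#   ceph auth get-key client.kube  | base64
# Illustration with a fake placeholder key:
fake_key='FAKEKEYexample=='
echo -n "$fake_key" | base64
```

Note the -n: the secret must be the base64 of the raw key with no trailing newline.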
Your CEPH cluster (ceph -s) should show health HEALTH_OK and your pgs should be active+clean.
Create a YAML file for your secrets and StorageClass. Change the secret key values based on the ceph commands in the comments, and enter all of your CEPH monitor nodes in the StorageClass.
apiVersion: v1
kind: Secret
metadata:
  name: ceph-admin-secret
  namespace: kube-system
type: "kubernetes.io/rbd"
data:
  # ceph auth get-key client.admin | base64
  key: <your output from above>
---
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
  namespace: kube-system
type: "kubernetes.io/rbd"
data:
  # ceph auth get-key client.kube | base64
  key: <your output from above>
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ceph
provisioner: kubernetes.io/rbd
parameters:
  monitors: <mon-ip-1>:6789,<mon-ip-2>:6789,<mon-ip-3>:6789
  adminId: admin
  adminSecretName: ceph-admin-secret
  adminSecretNamespace: kube-system
  pool: kube
  userId: kube
  userSecretName: ceph-secret
  userSecretNamespace: kube-system
  fsType: ext4
  imageFormat: "2"
  imageFeatures: "layering"
I also found that you need to bind-mount /lib/modules into the kubelet container. This appears to be a Rancher bug.
If you are using RKE, you can add the section:
services:
  kubelet:
    extra_binds:
      - "/lib/modules:/lib/modules"
Now, you should be able to create a pod in Kubernetes that uses a CEPH volume that gets dynamically provisioned when you create the pod.
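Outside of Rancher, you can exercise the same dynamic provisioning with a plain PVC and pod. The resource names here are illustrative; only the storageClassName must match the StorageClass defined above:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ceph-test-pvc        # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ceph     # must match the StorageClass above
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: ceph-test-pod        # illustrative name
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: ceph-test-pvc
```

Apply it with kubectl apply -f; the PVC should go to Bound as the provisioner creates an RBD image in the kube pool.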
In the Rancher workload, go to Volumes > Add Volume > Add a new persistent volume (claim). Enter a name, select ceph as the storage class, enter the volume size you want, and click Define. Type in the path to mount the volume, and you should be good to go!