DigitalOcean Spaces and Kubernetes volumes

Hi everyone, is it currently possible to use DigitalOcean’s Spaces (or any other DO service) for storage with a Kubernetes cluster running on DigitalOcean?

I’d like to use Rancher in production with DO droplets, but I’m failing to find a good way to implement persistent volumes.

kops seems to be using it

Yes, I got it working yesterday; it was actually surprisingly simple. Follow the instructions here: https://github.com/digitalocean/csi-digitalocean

It can all be done in the browser terminal of rancher (Launch kubectl).

You just have to create a secret.yml file

apiVersion: v1
kind: Secret
metadata:
  name: digitalocean
  namespace: kube-system
stringData:
  access-token: "a05dd2f26b9b9ac2asdas__REPLACE_ME____123cb5d1ec17513e06da"

and create it

kubectl create -f ./secret.yml
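As a quick sanity check (not part of the original instructions), you can confirm the secret landed in the right namespace:

kubectl -n kube-system get secret digitalocean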

and then

kubectl apply -f https://raw.githubusercontent.com/digitalocean/csi-digitalocean/master/deploy/kubernetes/releases/csi-digitalocean-v0.1.0.yaml

Now you will see a DigitalOcean storage class named do-block-storage under Storage Classes.
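If the deployment succeeded, both the CSI driver pods and the new storage class should be visible (these checks are my own addition, just for verification):

kubectl -n kube-system get pods | grep csi
kubectl get storageclass do-block-storage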


Thanks for your solution. Could you please tell me what happens when a PVC requests 5 gigabytes and a pod fills it up? Will the pod that’s using that PVC stop working because of insufficient disk space, will the PVC increase its request quota to more gigabytes, or does nothing happen once the limit is met?

Have you tested if the files written to the volume are still there after recreating or upgrading the pod that uses the volume?

I tested the csi-digitalocean driver with Rancher 2.0.4. After recreating or upgrading a deployment that uses the persistent volume, the persistent volume was empty and all the files were lost. The DigitalOcean block storage volume was also empty if the pod was removed and the volume detached.

I created an issue in the csi-digitalocean repository.

What does the YAML for your PVC look like?

Did you try adding persistentVolumeReclaimPolicy: Retain to it?
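For what it’s worth, persistentVolumeReclaimPolicy is a field on the PersistentVolume, not the PVC; for dynamically provisioned volumes it is normally inherited from the StorageClass’s reclaimPolicy. A sketch of a custom StorageClass with Retain (the name do-block-storage-retain is made up for illustration, and the provisioner value should be copied from the do-block-storage class your driver version installed, as it may differ):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: do-block-storage-retain  # hypothetical name, for illustration only
provisioner: com.digitalocean.csi.dobs  # copy from the installed do-block-storage class
reclaimPolicy: Retain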

It’s the PVC example from the csi-digitalocean driver README. I also tried with the Retain reclaim policy, but the same thing happened.

I tried the same setup in a cluster set up with Kontena Pharos and everything worked as expected so maybe it’s something specific to Rancher. That’s why I wanted to know if your setup is working correctly.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: do-block-storage
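One way to reproduce the data-loss test is to apply the PVC and mount it in a throwaway pod (the pod name, image, and file path below are my own choices, just for the test):

apiVersion: v1
kind: Pod
metadata:
  name: csi-test-pod  # hypothetical name for the test
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo hello > /data/test.txt && sleep 3600"]
    volumeMounts:
    - mountPath: /data
      name: data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: csi-pvc

Delete the pod, recreate it, and check whether /data/test.txt still exists; if it is gone, the volume is not actually persisting data.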

I was quickly testing yesterday to evaluate whether I could move from AWS to DO. I checked again, and no, my setup has the same problem: if I delete or upgrade a pod, all data is lost. Sorry, I should have checked that.

UPDATE: Fatih found the issue. It has to do with RKE installation defaults.

Solution:

In Cluster Options > Edit as YAML, add the following lines:

services:
  kube-api:
    extra_args:
      feature-gates: MountPropagation=true

  kubelet:
    extra_args:
      feature-gates: MountPropagation=true
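After the cluster finishes updating, one way to confirm the flag actually reached the kubelet (assuming SSH access to a node; RKE runs the kubelet as a Docker container named kubelet) is:

docker inspect kubelet --format '{{.Args}}' | grep -o 'feature-gates=[^ ]*'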

Hi,

I have a similar problem with Hetzner Cloud. I need to enable
--feature-gates=CSINodeInfo=true,CSIDriverRegistry=true

I tried to add this config to the cluster, but it is not accepted, even though no error is shown. I can save it, but when I edit it again, the services section is missing.

I’m running Rancher 2.2.0-RC6

driver_name: "rancherkubernetesengine"
# Rancher Config
docker_root_dir: "/var/lib/docker"
enable_cluster_alerting: false
enable_cluster_monitoring: true
enable_network_policy: false
local_cluster_auth_endpoint: 
  enabled: true
name: "sandbox"

services:
  kube-api:
    extra_args:
      feature-gates: CSINodeInfo=true,CSIDriverRegistry=true

  kubelet:
    extra_args:
      feature-gates: CSINodeInfo=true,CSIDriverRegistry=true