Hi. I’ve just installed a single-node Rancher Kubernetes cluster to experiment with Longhorn. The k8s cluster was easy enough to set up, but I cannot seem to get Longhorn to cooperate, so I thought I’d reach out here to see if I’m doing something obviously wrong.
I first went ahead and created a new storage class named longhorn and a PV named longhorn-volv-pv.
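They looked roughly like this (reconstructed from memory rather than pasted verbatim, so treat the provisioner and csi fields as approximate; the name, size, access mode and reclaim policy match the kubectl output further down):

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: longhorn
provisioner: driver.longhorn.io
parameters:
  numberOfReplicas: "1"        # single-node cluster, so only one replica
  staleReplicaTimeout: "2880"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: longhorn-volv-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: longhorn
  csi:
    driver: driver.longhorn.io   # best guess; may not be exactly what I applied
    fsType: ext4
    volumeHandle: longhorn-volv-pv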
And when I create a PVC with the following definition, it successfully binds to the PV:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: longhorn-volv-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 2Gi
  volumeName: longhorn-volv-pv
$ kubectl apply -f pvc.yaml
persistentvolumeclaim/longhorn-volv-pvc created
$ kubectl get pvc,pv
NAME                                      STATUS   VOLUME             CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/longhorn-volv-pvc   Bound    longhorn-volv-pv   10Gi       RWO            longhorn       27s

NAME                                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                       STORAGECLASS   REASON   AGE
persistentvolume/longhorn-volv-pv   10Gi       RWO            Retain           Bound    default/longhorn-volv-pvc   longhorn                100s
Then I create a pod with the following definition, expecting it to mount my volume. It gets scheduled just fine, but I guess my node doesn’t know how to attach the storage…
apiVersion: v1
kind: Pod
metadata:
  name: volume-test
  namespace: default
spec:
  containers:
    - name: volume-test
      image: nginx:stable-alpine
      imagePullPolicy: IfNotPresent
      volumeMounts:
        - name: volv
          mountPath: /data
      ports:
        - containerPort: 80
  volumes:
    - name: volv
      persistentVolumeClaim:
        claimName: longhorn-volv-pvc
$ kubectl apply -f pod.yaml
pod/volume-test created
$ kubectl describe pod volume-test
Name:         volume-test
Namespace:    default
Priority:     0
Node:         server/192.168.3.1
Start Time:   Sat, 26 Sep 2020 21:26:50 +0200
Labels:       <none>
Annotations:  kubernetes.io/psp: default-psp
Status:       Pending
IP:
IPs:          <none>
Containers:
  volume-test:
    Container ID:
    Image:          nginx:stable-alpine
    Image ID:
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /data from volv (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-sx6pr (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  volv:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  longhorn-volv-pvc
    ReadOnly:   false
  default-token-sx6pr:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-sx6pr
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age        From               Message
  ----    ------     ----       ----               -------
  Normal  Scheduled  <unknown>  default-scheduler  Successfully assigned default/volume-test to server
After a short while it times out, giving me the following events:
Events:
  Type     Reason              Age        From                     Message
  ----     ------              ----       ----                     -------
  Normal   Scheduled           <unknown>  default-scheduler        Successfully assigned default/volume-test to server
  Warning  FailedAttachVolume  15s        attachdetach-controller  AttachVolume.Attach failed for volume "longhorn-volv-pv" : attachdetachment timeout for volume longhorn-volv-pv
  Warning  FailedMount         12s        kubelet, server          Unable to attach or mount volumes: unmounted volumes=[volv], unattached volumes=[default-token-sx6pr volv]: timed out waiting for the condition
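Is there anything else I should be checking? I assume something like the commands below would show whether Longhorn and its CSI attacher ever saw the volume (I’m guessing at the default longhorn-system namespace and the csi-attacher label):

# Did the attach request ever reach the CSI layer?
kubectl get volumeattachment

# Are the Longhorn components themselves healthy?
kubectl -n longhorn-system get pods

# Does Longhorn have a volume object backing longhorn-volv-pv?
kubectl -n longhorn-system get volumes.longhorn.io

# CSI attacher logs (the app=csi-attacher label is a guess on my part)
kubectl -n longhorn-system logs -l app=csi-attacher --tail=50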