Hello!
I am trying to use Rancher 2 with Longhorn, but it doesn't work.
This is my configuration:
- RancherOS:
  - v1.4.1, from the OS image rancher/os:v1.4.1
- Kubernetes:
  - Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.0", GitCommit:"0ed33881dc4355495f623c6f22e7dd0b7632b7c0", GitTreeState:"clean", BuildDate:"2018-09-27T17:05:32Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
  - Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:43:26Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
- Rancher:
  - v2.1.1
- Longhorn:
  - v0.3.1
I installed my Kubernetes cluster on two nodes with RKE, and I installed Rancher with Helm.
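For reference, the setup was roughly as follows (a minimal sketch; the node addresses and the Rancher hostname below are placeholders, not my real values):

# cluster.yml used with `rke up` (placeholder addresses)
nodes:
  - address: 10.0.0.1
    user: rancher
    role: [controlplane, etcd, worker]
  - address: 10.0.0.2
    user: rancher
    role: [worker]

# Rancher installed from its Helm chart (Helm 2 syntax; hostname is a placeholder)
> helm install rancher-stable/rancher --name rancher --namespace cattle-system --set hostname=rancher.example.com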
Before installing Longhorn, I ran this script to check the environment:
curl -sSfL https://raw.githubusercontent.com/rancher/longhorn/master/scripts/environment_check.sh | bash
The result:
> curl -sSfL https://raw.githubusercontent.com/rancher/longhorn/master/scripts/environment_check.sh | bash
daemonset.apps "longhorn-environment-check" created
waiting for pods to become ready (0/2)
waiting for pods to become ready (0/2)
all pods ready (2/2)
MountPropagation is enabled!
cleaning up...
daemonset.apps "longhorn-environment-check" deleted
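As I understand it, the check verifies that a pod can use bidirectional mount propagation, i.e. a volumeMount along these lines (a sketch of the concept, not the script's exact manifest; the names here are made up):

apiVersion: v1
kind: Pod
metadata:
  name: mount-propagation-test    # hypothetical name
spec:
  containers:
    - name: test
      image: alpine:3.8
      command: ["sleep", "3600"]
      securityContext:
        privileged: true          # Bidirectional propagation requires a privileged container
      volumeMounts:
        - name: host-mnt
          mountPath: /mnt
          mountPropagation: Bidirectional
  volumes:
    - name: host-mnt
      hostPath:
        path: /mnt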
Then I installed Longhorn from the Rancher catalog in the “longhorn-system” namespace with the “csi” driver.
Everything seems to have installed correctly:
> kubectl -n longhorn-system get all
NAME                                            READY   STATUS    RESTARTS   AGE
pod/csi-attacher-0                              1/1     Running   0          5m
pod/csi-provisioner-0                           1/1     Running   0          5m
pod/engine-image-ei-3bda103d-2d52c              1/1     Running   0          5m
pod/engine-image-ei-3bda103d-vm6rg              1/1     Running   0          5m
pod/longhorn-csi-plugin-2xjdx                   2/2     Running   0          5m
pod/longhorn-csi-plugin-cf7g4                   2/2     Running   0          5m
pod/longhorn-driver-deployer-7ff445576d-gcnc6   1/1     Running   0          5m
pod/longhorn-manager-6bf2r                      1/1     Running   0          5m
pod/longhorn-manager-cbbgr                      1/1     Running   0          5m
pod/longhorn-ui-5f599b67fd-82jr8                1/1     Running   0          5m

NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/csi-attacher        ClusterIP   10.43.1.40      <none>        12345/TCP      5m
service/csi-provisioner     ClusterIP   10.43.102.137   <none>        12345/TCP      5m
service/longhorn-backend    ClusterIP   10.43.49.130    <none>        9500/TCP       6m
service/longhorn-frontend   NodePort    10.43.177.26    <none>        80:30031/TCP   5m

NAME                                       DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/engine-image-ei-3bda103d    2         2         2       2            2           <none>          5m
daemonset.apps/longhorn-csi-plugin         2         2         2       2            2           <none>          5m
daemonset.apps/longhorn-manager            2         2         2       2            2           <none>          5m

NAME                                       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/longhorn-driver-deployer   1         1         1            1           5m
deployment.apps/longhorn-ui                1         1         1            1           5m

NAME                                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/longhorn-driver-deployer-7ff445576d   1         1         1       5m
replicaset.apps/longhorn-ui-5f599b67fd                1         1         1       5m

NAME                               DESIRED   CURRENT   AGE
statefulset.apps/csi-attacher      1         1         5m
statefulset.apps/csi-provisioner   1         1         5m
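If it helps with the diagnosis, I can also post the StorageClass that the example below relies on; something like this should show how the provisioner is wired up:

> kubectl get storageclass
> kubectl get storageclass longhorn -o yaml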
Then I tried the example provided in the Longhorn project's GitHub repository:
> kubectl create -f https://raw.githubusercontent.com/rancher/longhorn/master/examples/pvc.yaml
persistentvolumeclaim "longhorn-volv-pvc" created
pod "volume-test" created
And now it fails:
The volume has an “Attaching” status in the Longhorn UI.
If I describe the pod:
> kubectl describe pod/volume-test
Name:               volume-test
Namespace:          default
Node:               rancher.[...].com/[...]
Start Time:         Fri, 02 Nov 2018 13:16:28 +0000
Labels:             <none>
Annotations:        <none>
Status:             Pending
IP:
Containers:
  volume-test:
    Container ID:
    Image:          nginx:stable-alpine
    Image ID:
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /data from volv (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-vg7mt (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  volv:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  longhorn-volv-pvc
    ReadOnly:   false
  default-token-vg7mt:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-vg7mt
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason              Age               From                           Message
  ----     ------              ----              ----                           -------
  Warning  FailedScheduling    7m (x6 over 7m)   default-scheduler              pod has unbound PersistentVolumeClaims (repeated 2 times)
  Normal   Scheduled           7m                default-scheduler              Successfully assigned default/volume-test to rancher-[...].com
  Warning  FailedAttachVolume  6m (x8 over 7m)   attachdetach-controller        AttachVolume.Attach failed for volume "pvc-7bace724-dea1-11e8-8625-0cc47a9fd474" : rpc error: code = Internal desc = Action [attach] not available on [&{pvc-7bace724-dea1-11e8-8625-0cc47a9fd474 volume map[self:http://longhorn-backend:9500/v1/volumes/pvc-7bace724-dea1-11e8-8625-0cc47a9fd474] map[]}]
  Warning  FailedAttachVolume  1m (x3 over 5m)   attachdetach-controller        AttachVolume.Attach failed for volume "pvc-7bace724-dea1-11e8-8625-0cc47a9fd474" : rpc error: code = Aborted desc = The volume pvc-7bace724-dea1-11e8-8625-0cc47a9fd474 is attaching
  Warning  FailedMount         41s (x3 over 5m)  kubelet, rancher-***.***l.com  Unable to mount volumes for pod "volume-test_default(7bd8d921-dea1-11e8-8625-0cc47a9fd474)": timeout expired waiting for volumes to attach or mount for pod "default"/"volume-test". list of unmounted volumes=[volv]. list of unattached volumes=[volv default-token-vg7mt]
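If more logs are useful, I assume commands along these lines would show what the manager and the CSI attacher are doing (the label selector is my guess at the chart's labels):

# Longhorn manager logs on each node (label selector is an assumption)
> kubectl -n longhorn-system logs -l app=longhorn-manager --tail=100

# CSI attacher logs, where the failing Attach calls should appear
> kubectl -n longhorn-system logs csi-attacher-0 --tail=100

# State of the PVC/PV pair
> kubectl get pvc longhorn-volv-pvc
> kubectl get pv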
Can you help me?
Thanks.
Benjamin.