Rancher 2.0 and Longhorn

Is it possible to use Longhorn with Rancher 2.0 (kubernetes)?


Yes, we will integrate Longhorn to work with 2.0.

http://rancher.com/docs/rancher/v2.0/en/faq/#are-you-going-to-integrate-longhorn


Longhorn is now showing up in v2.0.0-beta4 under the Rancher curated Catalog.

Please note that the default image tags in the Longhorn catalog entry are outdated and no longer accessible.

After I set the latest tags from hub.docker.com, everything worked fine with Rancher v2.0.0.


Tags are now correct. But I’m facing some problems using it:

-> Upgrading a node that has a Longhorn volume claim, using a “start new, stop old” upgrade policy, ends up with the updated pod stuck at “ContainerCreating”.
-> Creating a “many nodes read-write” volume claim ends up with the volume stuck in a “Pending” status (a minimal claim for this case is sketched at the end of this post).

EDIT: Added the upgrade policy to the first bug.
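For the second case, this is roughly the kind of claim I mean. A minimal sketch, assuming the catalog app created the usual “longhorn” storage class and using a placeholder claim name; with ReadWriteOnce it binds fine, and switching accessModes to ReadWriteMany is what leaves it stuck in Pending for me:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim            # placeholder name
spec:
  accessModes:
    - ReadWriteOnce           # ReadWriteMany is the “many nodes read-write” mode that hangs
  storageClassName: longhorn  # adjust if your catalog app created a different class
  resources:
    requests:
      storage: 1Gi
EOF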

I haven’t messed with “Many Nodes Read-Write” volume claims yet, so I can’t comment on that.

I did have the same issue with the pod being stuck in the creation state, however. Through some snooping around in kubectl, I figured out that the volume wasn’t being mounted properly. According to the troubleshooting in the README.md of the Longhorn repository, this generally means that the volume plugin hasn’t been set correctly.

Using those instructions, I found that even though I’m not running in some unusual Kubernetes environment (this is bare metal), my volume plugin path still turned out to be /var/lib/kubelet/volumeplugins. Upgrading my Longhorn app and changing that setting quickly brought my container up. Hopefully this helps with the issue you’re having.
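In case it helps anyone else chasing the same symptom, this is roughly what the snooping looked like. The pod and claim names below are placeholders, not anything Longhorn creates for you:

# Substitute the pod and claim that are stuck. The Events section at the
# bottom of the describe output shows the FlexVolume mount error when the
# plugin directory is wrong.
kubectl -n default describe pod my-app-0

# The claim itself usually still shows Bound, so the failure is on the
# mount side rather than on provisioning.
kubectl -n default get pvc my-app-claim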

I am running into an issue where I can deploy Longhorn on my cluster, but deploying applications with a PV does not complete. I see the volume being bound, but I have a feeling something is stuck somewhere. Has anyone been able to deploy Longhorn and then leverage it to deploy PVs for catalog apps?

Volumes show as detached in the Longhorn UI… not sure if this is normal.

OK, found the fix. With a custom cluster deployed on VMs, the default Longhorn Flexvolume Path does not work. You have to change it to:

/var/lib/kubelet/volumeplugins
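You can double-check the directory on one of the cluster nodes; kubelet’s command line shows what it actually uses, and after upgrading the Longhorn app with the corrected path the FlexVolume driver should get deployed under it. Just a quick way to verify, nothing Longhorn-specific:

# Print the --volume-plugin-dir flag kubelet was started with.
ps aux | grep [k]ubelet | tr ' ' '\n' | grep -- --volume-plugin-dir

# After upgrading the Longhorn catalog app with the corrected path, the
# driver should show up under this directory.
ls /var/lib/kubelet/volumeplugins/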

I wish the field description presented a list of common values depending on the cloud environment used… Right now this is what it reads:

For GKE uses /home/kubernetes/flexvolume/ instead, users can find the correct directory by running ps aux|grep kubelet on the host and check the --volume-plugin-dir parameter. If there is none, the default /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ will be used.

Something like the following might help:

For GKE, use /home/kubernetes/flexvolume/ instead.
For a custom cluster, use /var/lib/kubelet/volumeplugins instead.

If in doubt, users can find the correct directory by running ps aux|grep kubelet on the host and checking the --volume-plugin-dir parameter. If there is none, the default /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ will be used.
