How to update workloads with attached Longhorn storage?

When attempting to upgrade/update/modify workloads that have storage attached using rolling updates, the new pod can never initialize because the original won’t relinquish the claim. In some cases it is simple to just change the upgrade policy, but in other cases it is not so simple, such as with Apps “Launched” from within the UI.

For instance, you get a bunch of Apps working and running for a few days with storage provisioned by Longhorn, then you realize your cluster is getting a bit messy and you want to go reorganize and change some of the projects and namespaces. Is the only way to do that to delete all the storage?

Otherwise they seem to sit forever attempting to attach storage that their antecedents are holding in a death grip.

If a Longhorn volume is attached by Kubernetes during workload launch (rather than through the Longhorn UI), the volume will be detached from the old pod and reattached to the new pod once the old pod is stopped/terminating. You don’t need to delete the PVC of the Longhorn volume.

What about in the case of workloads created through the Rancher UI, which just received their storage from the default storage class, which happens to be Longhorn? It looks like the old pod never terminates, because the new pod is never ready, because it cannot attach. Deleting the old pod results in a third pod, identical to the old one and not containing the upgraded bits, which immediately grabs the claim, thereby perpetuating the workload’s inability to upgrade. Is there a way around this?

OK, I see. I guess the application you launched is a deployment with upgrade strategy RollingUpdate.

For a deployment rolling update, the old pod is terminated only after the new pod is running. But since a Longhorn volume is RWO by default (you can enable RWX mode), the new pod cannot take over the volume before the old pod is down. It’s like a deadlock.
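If RWX mode is what you want, it is requested through the PVC’s access mode. A minimal sketch (the claim name, size, and `longhorn` storage class name here are assumptions, adjust them to your setup):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-rwx-pvc   # hypothetical name
spec:
  accessModes:
    - ReadWriteMany       # RWX: multiple pods may mount the volume
  storageClassName: longhorn
  resources:
    requests:
      storage: 2Gi
```

With `ReadWriteMany`, the new pod can attach before the old one terminates, so the rolling-update deadlock described above does not occur.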

Unlike a deployment, a daemonset rolling update kills the old pod before launching a new pod, which works well with Longhorn. Hence I recommend using a DaemonSet with Longhorn.

As a workaround, you can increase spec.strategy.rollingUpdate.maxUnavailable so that at least one old pod can be terminated before the new one is running. Or you can use the Recreate strategy directly instead.
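Both workarounds are set in the Deployment spec. A minimal sketch of each (only the relevant `strategy` stanza is shown):

```yaml
# Option 1: allow an old pod to go down before the new one is ready
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # old pod terminates first, freeing the RWO volume
      maxSurge: 0         # don't create the new pod until the old one is gone
---
# Option 2: tear down all old pods, then create new ones
spec:
  strategy:
    type: Recreate
```

With option 1, setting `maxSurge: 0` alongside `maxUnavailable: 1` matters: otherwise the surge pod is created first and still blocks on the volume. Option 2 is simpler but incurs downtime for the whole deployment.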


Alright, thanks. Some good things to try here. For one, I hadn’t realized you could enable RWX with Longhorn.

@inmanturbo if you want to use rolling updates with RWO (single node usage) storage, you need to set a pod affinity rule.

That way Kubernetes schedules the new pod onto the same node the old pod is on.
Also ensure that the workload you are working with can handle multiple writers/readers.
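A pod affinity rule like that could be sketched as follows: the pod is attracted to other pods carrying its own label, pinned to the node level via the hostname topology key (the `app: example-app` label is an assumption, use whatever label your workload actually carries):

```yaml
spec:
  template:
    metadata:
      labels:
        app: example-app   # hypothetical label
    spec:
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: example-app   # match the pod's own label
              topologyKey: kubernetes.io/hostname   # same node as existing pods
```

Because the new pod matches the old pod’s label, the scheduler places it on the same node, so the RWO volume only ever needs to attach to one node during the rollout.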


Ultimately, this is what I’ve done, using the NFS provisioner as recommended in the docs. I had to get all the data copied across first, though. In the interim, the trick for an app that couldn’t update was to manually scale the stubborn workload down, then scale it back up afterward. In this case, the registry in a Harbor deployment.

Thanks again.