When we update or edit a workload, the updated pod fails to initialize if it is scheduled on another host in a 3-node cluster.
This happens because Longhorn volumes do not support multi-node read-write access.
Isn't it possible for Rancher to stop the old workload, reattach the volume to the new host, and start the pod there?
Is this a bug, a missing feature, or am I doing something wrong?
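To illustrate what I mean: at the Kubernetes level this is roughly what the Recreate upgrade strategy does, since it tears down the old pod (releasing the read-write-once volume) before starting the replacement. A rough sketch via the Python client, with a made-up deployment name and namespace:

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig.
config.load_kube_config()
apps = client.AppsV1Api()

# Switch the rollout strategy to Recreate so the old pod is stopped
# (and its Longhorn volume detached) before the new pod starts.
# rollingUpdate must be cleared when switching to Recreate.
patch = {"spec": {"strategy": {"type": "Recreate", "rollingUpdate": None}}}
apps.patch_namespaced_deployment(name="my-app", namespace="default", body=patch)
```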
Thanks for getting back. I found that option later on.
I also noticed that pods backed by a Longhorn volume can't be rescheduled when their host goes down, because longhorn-engine doesn't release the volume. Is this a bug or by design?
If this works as designed, Longhorn is not suitable for HA use, IMHO.
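For reference, the only recovery I'm aware of in this situation is manual intervention: force-deleting the pod that is stuck on the dead node so the scheduler can place it elsewhere (and even then the volume attachment can lag behind). A rough sketch, with made-up pod and namespace names:

```python
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# Force-delete the pod stuck Terminating on the unreachable node so it
# can be rescheduled; equivalent to kubectl delete --grace-period=0 --force.
core.delete_namespaced_pod(
    name="my-app-7d4b9cdd-xk2lp",
    namespace="default",
    grace_period_seconds=0,
)
```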