Node scheduling problems with Longhorn

I’ve got a bit of a weird situation. I have a deployment that used to be tied to a local-path PVC. Because of this, I had set up the deployment’s node scheduling so that it only runs on a specific node. Since then, I discovered Longhorn, migrated the PVC to it, and the deployment’s pods bound to the Longhorn PVC correctly.
I now have 3 replicas of the volume across the cluster, so I was hoping I could free the deployment from its node scheduling and let it run on any available node. When I do that, however, I get an error stating that the PVC is not available on any of my nodes:
[screenshot of the scheduling error]
That is despite the fact that I can go into Longhorn’s web UI and clearly see the volumes in question (I would attach an image here too, but I’m new and can’t due to restrictions).
And if I change the scheduling back to the specific node (not a rollback, just changing the scheduling back to what it was), it suddenly finds the PVC with no problem and starts running again.
If Longhorn only works when pinned to a specific node, or if I have to specify particular nodes, I haven’t come across that in the documentation so far.
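
For reference, the end state being described looks roughly like the sketch below. The claim name matches the one that comes up later in this thread; the image, labels, and storage size are placeholders, not the poster’s actual manifests. Once the claim is provisioned by Longhorn, the nodeSelector that was needed for node-local storage can simply be dropped.

```yaml
# Sketch only: a Longhorn-backed PVC plus a Deployment with the node pinning removed.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: emulatorjs
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn          # Longhorn provisions and replicates the volume
  resources:
    requests:
      storage: 10Gi                   # placeholder size
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: emulatorjs
spec:
  replicas: 1
  selector:
    matchLabels:
      app: emulatorjs
  template:
    metadata:
      labels:
        app: emulatorjs
    spec:
      # nodeSelector:
      #   kubernetes.io/hostname: node-1   # only needed while the data was node-local
      containers:
        - name: emulatorjs
          image: example/emulatorjs:latest  # placeholder image
          volumeMounts:
            - name: data
              mountPath: /data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: emulatorjs           # must match the PVC name above
```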

@bramnet
Can you check whether the PVC of the Longhorn volume matches the PVC used by the deployment?
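
A rough way to make that comparison, assuming the default namespace and the claim name that comes up later in this thread: the claimName in the Deployment’s pod spec has to be the metadata.name of a PVC that exists in the same namespace and is bound to a Longhorn-provisioned PV.

```yaml
# Sketch of the two objects to compare; dump the real ones with
#   kubectl get deploy emulatorjs -o yaml
#   kubectl get pvc emulatorjs -o yaml
# (names and namespace here are assumptions apart from the claim name).

# Relevant fragment of the Deployment's pod spec:
spec:
  template:
    spec:
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: emulatorjs     # this name...
---
# ...must match an existing, Bound PVC backed by the longhorn StorageClass:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: emulatorjs
  namespace: default
spec:
  storageClassName: longhorn
status:
  phase: Bound                        # anything other than Bound means the PV/PVC link is off
```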

@derek.su
Thank you for the response. As far as I can see, the volume is matched with the PVC used by the deployment.


If there’s something else I should be checking, please let me know. Thanks.

Why does the error message show “… persistentvolumeclaim “emulatorjs-pv-claim” not found” when the PVC in your update is “emulatorjs”?

Good catch. I looked at the YAML and it showed an additional volume (the old one) that didn’t show up in the web form. I removed it and tried again, and now it’s working like a charm.
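
For anyone who hits the same symptom, the pod-spec volumes before the fix would have looked roughly like the sketch below (reconstructed from the thread; the volume names are guesses, the claim names are the ones mentioned above). The leftover entry still pointed at the old claim, which is what produced the “not found” error once the node restriction was lifted.

```yaml
# Reconstructed sketch of the Deployment's volumes before the fix.
volumes:
  - name: data
    persistentVolumeClaim:
      claimName: emulatorjs            # the Longhorn-backed claim (keep)
  - name: old-data                     # leftover from the local-path setup (delete this entry)
    persistentVolumeClaim:
      claimName: emulatorjs-pv-claim   # stale claim that no longer exists, reported as "not found"
```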

@derek.su

I just uninstalled and reinstalled Rancher through Helm. I should have thought of that sooner. It’s now working without issues.