I have a Longhorn volume that should have 3 replicas. I logged into the Longhorn UI and noticed that the volume was "Degraded". Looking at the volume, I saw 3 replicas, all "Running", but only one was blue, two were grey, and the health was "Degraded". I deleted one of the grey replicas, Longhorn automatically rebuilt it, and the volume now shows as "Healthy", but the third replica is still grey. What can cause it to get into this state, and why did Longhorn not self-heal the volume?
You can check the Kubernetes events or the longhorn-manager logs to see what's going on with that replica. We can help if you share more information about it.
Where do I see those logs?
You can get the Longhorn manager logs by running `kubectl -n longhorn-system get pod`, looking for the pods with the `longhorn-manager` prefix, and then fetching each pod's logs with `kubectl -n longhorn-system logs <pod-name>`. You can also send the support bundle to firstname.lastname@example.org and we can take a look; just make sure you've added details in the description of the support bundle so we know which issue it is.
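To sketch the steps above end to end, something like the following should work; it assumes a default Longhorn install where the manager pods carry the standard `app=longhorn-manager` label:

```shell
# List pods in the Longhorn system namespace; the manager pods
# are the ones whose names start with "longhorn-manager-"
kubectl -n longhorn-system get pod

# Fetch recent logs from all manager pods at once via their label,
# instead of naming each pod individually
kubectl -n longhorn-system logs -l app=longhorn-manager --tail=200

# Check recent events in the namespace for replica scheduling
# or rebuild errors, newest last
kubectl -n longhorn-system get events --sort-by='.lastTimestamp'
```

Grepping the manager logs for the volume name (or the grey replica's name from the UI) usually narrows down why that replica was never brought back to a healthy state.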