FailedAttachVolume during Virtual Machine creation - volume pvc-* is not ready for workloads

I have a single-node Harvester cluster in my lab. I had some initial success setting up Virtual Machines, but after creating several I keep running into this FailedAttachVolume error:

Events:
  Type     Reason                  Age                   From                     Message
  ----     ------                  ----                  ----                     -------
  Warning  FailedScheduling        17m                   default-scheduler        0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
  Normal   Scheduled               17m                   default-scheduler        Successfully assigned windows/virt-launcher-dc-02-nznnv to harvester-01
  Warning  FailedScheduling        17m                   default-scheduler        0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
  Normal   SuccessfulAttachVolume  16m                   attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-7876bb17-103a-44de-b62b-8f046b6c068f"
  Normal   SuccessfulMountVolume   16m                   kubelet                  MapVolume.MapPodDevice succeeded for volume "pvc-7876bb17-103a-44de-b62b-8f046b6c068f" globalMapPath "/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/pvc-7876bb17-103a-44de-b62b-8f046b6c068f/dev"
  Normal   SuccessfulMountVolume   16m                   kubelet                  MapVolume.MapPodDevice succeeded for volume "pvc-7876bb17-103a-44de-b62b-8f046b6c068f" volumeMapPath "/var/lib/kubelet/pods/1b8d7e08-f3c3-474d-9798-23d3e9771c15/volumeDevices/kubernetes.io~csi"
  Warning  FailedMount             14m                   kubelet                  Unable to attach or mount volumes: unmounted volumes=[disk-0 disk-2], unattached volumes=[libvirt-runtime cloudinitdisk-ndata private disk-0 container-disks public hotplug-disks cloudinitdisk-udata ephemeral-disks disk-1 disk-2 sockets]: timed out waiting for the condition
  Warning  FailedMount             12m                   kubelet                  Unable to attach or mount volumes: unmounted volumes=[disk-0 disk-2], unattached volumes=[cloudinitdisk-ndata hotplug-disks container-disks disk-0 disk-2 disk-1 public ephemeral-disks sockets private libvirt-runtime cloudinitdisk-udata]: timed out waiting for the condition
  Warning  FailedMount             10m                   kubelet                  Unable to attach or mount volumes: unmounted volumes=[disk-0 disk-2], unattached volumes=[cloudinitdisk-ndata ephemeral-disks hotplug-disks disk-0 disk-2 private cloudinitdisk-udata container-disks libvirt-runtime sockets disk-1 public]: timed out waiting for the condition
  Warning  FailedMount             8m14s                 kubelet                  Unable to attach or mount volumes: unmounted volumes=[disk-0 disk-2], unattached volumes=[cloudinitdisk-ndata public ephemeral-disks container-disks hotplug-disks disk-0 cloudinitdisk-udata private libvirt-runtime sockets disk-1 disk-2]: timed out waiting for the condition
  Warning  FailedAttachVolume      6m38s (x13 over 17m)  attachdetach-controller  AttachVolume.Attach failed for volume "pvc-f3c83035-e49e-4952-9c2e-385e94996a85" : rpc error: code = Aborted desc = volume pvc-f3c83035-e49e-4952-9c2e-385e94996a85 is not ready for workloads
  Warning  FailedMount             5m58s                 kubelet                  Unable to attach or mount volumes: unmounted volumes=[disk-0 disk-2], unattached volumes=[public ephemeral-disks container-disks libvirt-runtime sockets cloudinitdisk-ndata cloudinitdisk-udata private hotplug-disks disk-0 disk-1 disk-2]: timed out waiting for the condition
  Warning  FailedMount             3m43s                 kubelet                  Unable to attach or mount volumes: unmounted volumes=[disk-2 disk-0], unattached volumes=[hotplug-disks libvirt-runtime sockets ephemeral-disks container-disks disk-1 disk-2 cloudinitdisk-udata cloudinitdisk-ndata private public disk-0]: timed out waiting for the condition
  Warning  FailedMount             88s                   kubelet                  Unable to attach or mount volumes: unmounted volumes=[disk-2 disk-0], unattached volumes=[cloudinitdisk-udata sockets hotplug-disks cloudinitdisk-ndata disk-1 disk-2 ephemeral-disks container-disks libvirt-runtime private public disk-0]: timed out waiting for the condition
  Warning  FailedAttachVolume      30s (x16 over 17m)    attachdetach-controller  AttachVolume.Attach failed for volume "pvc-6f66ae77-d90b-4763-857a-1bb015e16ec1" : rpc error: code = Aborted desc = volume pvc-6f66ae77-d90b-4763-857a-1bb015e16ec1 is not ready for workloads

I’ve read that this might be related to the default numberOfReplicas of 3 causing issues on a single-node cluster, so I updated the default Longhorn StorageClass through its ConfigMap:

kubectl describe configmap longhorn-storageclass -n longhorn-system
Name:         longhorn-storageclass
Namespace:    longhorn-system
Labels:       app.kubernetes.io/instance=harvester
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=harvester
              app.kubernetes.io/version=v1.0.0
              helm.sh/chart=harvester-1.0.0
Annotations:  helm.sh/hook: post-install,post-upgrade

Data
====
storageclass.yaml:
----
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: longhorn
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: driver.longhorn.io
allowVolumeExpansion: true
reclaimPolicy: "Delete"
volumeBindingMode: Immediate
parameters:
  numberOfReplicas: "1"
  staleReplicaTimeout: "30"
  fromBackup: ""
  baseImage: ""
  migratable: "true"

Events:  <none>
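For anyone wanting to make the same change, here is a rough sketch of how it can be applied from the CLI. Note that StorageClass parameters are immutable once created, so the class itself has to be re-created from the updated ConfigMap; existing PVs are not deleted by this, but please test it on a lab cluster first. Also, Harvester may re-apply this ConfigMap on upgrade, per the helm.sh/hook annotation shown above.

# Set parameters.numberOfReplicas: "1" inside storageclass.yaml
kubectl -n longhorn-system edit configmap longhorn-storageclass

# Re-create the StorageClass from the updated ConfigMap
kubectl delete storageclass longhorn
kubectl -n longhorn-system get configmap longhorn-storageclass \
  -o jsonpath='{.data.storageclass\.yaml}' | kubectl apply -f -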

When the VM root disk is created it uses this StorageClass, and the resulting PersistentVolume has numberOfReplicas set to 1:

kubectl describe storageclass "longhorn"
~
parameters:
  numberOfReplicas: "1"
  staleReplicaTimeout: "30"
  fromBackup: ""
  baseImage: ""
  migratable: "true"
~

When I upload an ISO image, a new StorageClass is created for each image, but it defaults to numberOfReplicas of 3:

NAME                   PROVISIONER          RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
longhorn (default)     driver.longhorn.io   Delete          Immediate           true                   3h9m
longhorn-image-5c4rd   driver.longhorn.io   Delete          Immediate           true                   146m
longhorn-image-lzj44   driver.longhorn.io   Delete          Immediate           true                   145m
longhorn-image-tvkfb   driver.longhorn.io   Delete          Immediate           true                   18m
longhorn-image-zb6xl   driver.longhorn.io   Delete          Immediate           true                   133m
kubectl describe storageclass "longhorn-image-5c4rd"
Name:                  longhorn-image-5c4rd
IsDefaultClass:        No
Annotations:           <none>
Provisioner:           driver.longhorn.io
Parameters:            backingImage=default-image-5c4rd,migratable=true,numberOfReplicas=3,staleReplicaTimeout=30
AllowVolumeExpansion:  True
MountOptions:          <none>
ReclaimPolicy:         Delete
VolumeBindingMode:     Immediate
Events:                <none>
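A quick way to see that parameter across all of these classes at once (just a sketch using custom-columns):

kubectl get storageclass \
  -o custom-columns='NAME:.metadata.name,REPLICAS:.parameters.numberOfReplicas,BACKINGIMAGE:.parameters.backingImage'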

This is where I’m wondering whether I need to change the default used on image upload, though I’m not sure exactly where to do that. I initially had some success building a Windows 10 and a Server 2022 VM. I then deleted the Windows ISO volume and the virtio driver ISO, the next VM creation worked, and I repeated this process. Now, though, I’m stuck even after deleting the other volumes:

kubectl -n windows get pvc,pods,vmi
NAME                                            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS           AGE
persistentvolumeclaim/admin-01-disk-1-f7mxr     Bound    pvc-ad846a83-fa5b-499d-885f-2e5668ef6ea1   40Gi       RWX            longhorn               110m
persistentvolumeclaim/dc-01-disk-1-ptsjq        Bound    pvc-9801c1b0-e168-4c0c-987c-fe3eba67afb7   40Gi       RWX            longhorn               85m
persistentvolumeclaim/dc-02-disk-0-7yaex        Bound    pvc-f3c83035-e49e-4952-9c2e-385e94996a85   10Gi       RWX            longhorn-image-zb6xl   36m
persistentvolumeclaim/dc-02-disk-1-q8yqj        Bound    pvc-7876bb17-103a-44de-b62b-8f046b6c068f   40Gi       RWX            longhorn               36m
persistentvolumeclaim/dc-02-disk-2-akcjm        Bound    pvc-6f66ae77-d90b-4763-857a-1bb015e16ec1   10Gi       RWX            longhorn-image-5c4rd   36m
persistentvolumeclaim/desktop-01-disk-1-w59as   Bound    pvc-0fd1a0a7-f5c2-4557-a273-34a5c24c9594   40Gi       RWX            longhorn               111m

NAME                                 READY   STATUS              RESTARTS   AGE
pod/virt-launcher-admin-01-km8q2     1/1     Running             0          92m
pod/virt-launcher-dc-01-7vghr        1/1     Running             0          58m
pod/virt-launcher-dc-02-nznnv        0/1     ContainerCreating   0          36m
pod/virt-launcher-desktop-01-2rh95   1/1     Running             0          91m

NAME                                            AGE   PHASE        IP            NODENAME       READY
virtualmachineinstance.kubevirt.io/admin-01     92m   Running      10.20.20.41   harvester-01   True
virtualmachineinstance.kubevirt.io/dc-01        58m   Running      10.20.20.10   harvester-01   True
virtualmachineinstance.kubevirt.io/dc-02        36m   Scheduling                                False
virtualmachineinstance.kubevirt.io/desktop-01   91m   Running      10.20.20.51   harvester-01   True

In this current state I tried creating a CentOS VM in the default namespace and reproduced the issue:

kubectl -n default get pvc,pods,vmi
NAME                                           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS           AGE
persistentvolumeclaim/centos-01-disk-0-dxhoq   Bound    pvc-e84daa57-3baf-4d69-abfc-0cf10a6d1a2f   10Gi       RWX            longhorn-image-tvkfb   24m
persistentvolumeclaim/centos-01-disk-1-fu9st   Bound    pvc-4b37f7e8-2438-4eef-b7db-d9bf521445dd   40Gi       RWX            longhorn               24m

NAME                                READY   STATUS              RESTARTS   AGE
pod/virt-launcher-centos-01-tr99z   0/1     ContainerCreating   0          24m

NAME                                           AGE   PHASE        IP    NODENAME   READY
virtualmachineinstance.kubevirt.io/centos-01   24m   Scheduling                    False
Events:
  Type     Reason              Age                   From                     Message
  ----     ------              ----                  ----                     -------
  Warning  FailedScheduling    24m                   default-scheduler        0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
  Normal   Scheduled           24m                   default-scheduler        Successfully assigned default/virt-launcher-centos-01-tr99z to harvester-01
  Warning  FailedScheduling    24m                   default-scheduler        0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
  Warning  FailedMount         22m                   kubelet                  Unable to attach or mount volumes: unmounted volumes=[disk-0 disk-1], unattached volumes=[cloudinitdisk-udata ephemeral-disks container-disks cloudinitdisk-ndata private disk-0 disk-1 hotplug-disks public libvirt-runtime sockets]: timed out waiting for the condition
  Warning  FailedMount         20m                   kubelet                  Unable to attach or mount volumes: unmounted volumes=[disk-0 disk-1], unattached volumes=[libvirt-runtime sockets cloudinitdisk-udata disk-0 disk-1 public container-disks hotplug-disks cloudinitdisk-ndata private ephemeral-disks]: timed out waiting for the condition
  Warning  FailedMount         17m                   kubelet                  Unable to attach or mount volumes: unmounted volumes=[disk-1 disk-0], unattached volumes=[libvirt-runtime disk-1 disk-0 public ephemeral-disks cloudinitdisk-ndata container-disks sockets hotplug-disks cloudinitdisk-udata private]: timed out waiting for the condition
  Warning  FailedMount         15m                   kubelet                  Unable to attach or mount volumes: unmounted volumes=[disk-0 disk-1], unattached volumes=[sockets public hotplug-disks private disk-0 container-disks ephemeral-disks cloudinitdisk-udata cloudinitdisk-ndata disk-1 libvirt-runtime]: timed out waiting for the condition
  Warning  FailedAttachVolume  14m (x13 over 24m)    attachdetach-controller  AttachVolume.Attach failed for volume "pvc-4b37f7e8-2438-4eef-b7db-d9bf521445dd" : rpc error: code = Aborted desc = volume pvc-4b37f7e8-2438-4eef-b7db-d9bf521445dd is not ready for workloads
  Warning  FailedMount         13m                   kubelet                  Unable to attach or mount volumes: unmounted volumes=[disk-0 disk-1], unattached volumes=[hotplug-disks sockets container-disks disk-0 public ephemeral-disks private libvirt-runtime cloudinitdisk-udata cloudinitdisk-ndata disk-1]: timed out waiting for the condition
  Warning  FailedMount         11m                   kubelet                  Unable to attach or mount volumes: unmounted volumes=[disk-0 disk-1], unattached volumes=[container-disks libvirt-runtime private public disk-0 disk-1 cloudinitdisk-udata cloudinitdisk-ndata ephemeral-disks sockets hotplug-disks]: timed out waiting for the condition
  Warning  FailedMount         8m51s                 kubelet                  Unable to attach or mount volumes: unmounted volumes=[disk-0 disk-1], unattached volumes=[cloudinitdisk-ndata sockets public hotplug-disks disk-0 disk-1 ephemeral-disks container-disks libvirt-runtime cloudinitdisk-udata private]: timed out waiting for the condition
  Warning  FailedMount         6m33s                 kubelet                  Unable to attach or mount volumes: unmounted volumes=[disk-1 disk-0], unattached volumes=[public ephemeral-disks hotplug-disks libvirt-runtime sockets cloudinitdisk-ndata disk-1 private container-disks cloudinitdisk-udata disk-0]: timed out waiting for the condition
  Warning  FailedMount         4m19s                 kubelet                  Unable to attach or mount volumes: unmounted volumes=[disk-0 disk-1], unattached volumes=[ephemeral-disks private disk-0 sockets cloudinitdisk-ndata container-disks hotplug-disks libvirt-runtime cloudinitdisk-udata public disk-1]: timed out waiting for the condition
  Warning  FailedAttachVolume  3m56s (x18 over 24m)  attachdetach-controller  AttachVolume.Attach failed for volume "pvc-e84daa57-3baf-4d69-abfc-0cf10a6d1a2f" : rpc error: code = Aborted desc = volume pvc-e84daa57-3baf-4d69-abfc-0cf10a6d1a2f is not ready for workloads
  Warning  FailedMount         2m2s                  kubelet                  (combined from similar events): Unable to attach or mount volumes: unmounted volumes=[disk-0 disk-1], unattached volumes=[libvirt-runtime cloudinitdisk-udata cloudinitdisk-ndata disk-0 public ephemeral-disks container-disks disk-1 hotplug-disks sockets private]: timed out waiting for the condition

I first came across this yesterday while working through the KubeVirt docs on passing unattend.xml files. I was using ConfigMaps for the answer files and the virtio drivers from a container disk. Again, that worked initially, but then I hit the same issue with the virtio container, so I switched to the ISO. After building the VM, I hit the same issue on the sysprepped machine, which I then deleted before starting again.

Today I reinstalled Harvester and made the config changes before deploying any VMs. I reproduced the issue after deploying the second Server 2022 VM while the first was still in progress. I’m wondering if the ISO being mounted to another VM before removal could have something to do with this?

Hi…

I have the same situation.
The dashboard shows ContainerCreating for a long time, and the events say:

Unable to attach or mount volumes: unmounted volumes=[disk-1], unattached volumes=[ephemeral-disks private container-disks hotplug-disks cloudinitdisk-ndata disk-1 libvirt-runtime sockets cloudinitdisk-udata public disk-0]: timed out waiting for the condition

Is there any workaround to solve this?

Environment:
Harvester ISO: 1.0.0
Baremetal: 10 nodes Fujitsu Primergy
VM Image: Ubuntu-21.04 img format (qcow2)

Regards,
Fadjar Tandabawana

My original issue was due to a lack of storage on the host. A lack of understanding of Longhorn didn’t help either…

I had another look this evening, but this time went into the Longhorn UI > Volumes. There I found the volume showing as Detached with a red exclamation mark: “The volume cannot be scheduled”. Clicking into the volume showed:

“Scheduling Failure, Replica Scheduling Failure”, which led me to: My volumes keep getting “Degraded” · Issue #1949 · longhorn/longhorn (github.com)

Yasker’s first reply pointed to disk space on the node itself, which now makes sense because I used a small NVMe drive. Again, my lack of Longhorn knowledge didn’t help, but I can see that each volume shows 3 replicas. In the UI I can select the other Degraded volumes and reduce the replicas down to 1. I’m not sure yet whether the VMs consume storage thin or thick, but I’ll go through the Longhorn docs and come back to this with a bigger NVMe drive, and ideally get the default replica count down to 1.
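For reference, the same reduction can also be sketched from the CLI against the Longhorn volume CRs (the names match the pvc-* volumes shown in the UI; treat this as a sketch and confirm against the Longhorn docs for your version):

# Current replica counts and state per Longhorn volume
kubectl -n longhorn-system get volumes.longhorn.io \
  -o custom-columns='NAME:.metadata.name,REPLICAS:.spec.numberOfReplicas,STATE:.status.state'

# Drop one degraded volume down to a single replica
kubectl -n longhorn-system patch volumes.longhorn.io pvc-f3c83035-e49e-4952-9c2e-385e94996a85 \
  --type merge -p '{"spec":{"numberOfReplicas":1}}'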

Has anyone found a workaround to this?

I am getting the same issue in a virtualized instance of Harvester built from the v1.1 ISO. It was built with a kvm64 CPU, 6 cores/12 threads, 16 GB RAM, and one 300 GB drive presented to it. It comes up fine, but when I try to build the first VM, “rancher”, with a 20 GB disk, 4 cores, and 6 GB RAM running Ubuntu 22.04, I get this error after turning it on:

0/1 nodes are available: 1 Insufficient devices.kubevirt.io/kvm. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.

I look in the Longhorn dashboard and see the same as described above…

1 Insufficient devices.kubevirt.io/kvm

Did you enable nested virtualization?

Changing the replicas of an image volume is available as of Harvester v1.1.
Refer to: The open source hyperconverged infrastructure (HCI) solution for a cloud native world (harvesterhci.io)
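Roughly, the v1.1 image spec lets you set the Longhorn parameters when the image is created, along the lines of the sketch below. The field names are taken from the linked docs and the image name and URL are placeholders, so double-check against your Harvester version:

apiVersion: harvesterhci.io/v1beta1
kind: VirtualMachineImage
metadata:
  name: ubuntu-2204            # placeholder name
  namespace: default
spec:
  displayName: ubuntu-2204
  sourceType: download
  url: https://example.com/ubuntu-22.04.img   # placeholder URL
  storageClassParameters:
    numberOfReplicas: "1"      # single replica for a one-node lab
    staleReplicaTimeout: "30"
    migratable: "true"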

Thanks, I will look into these. I’m pretty sure nested virtualization is enabled by default on TrueNAS SCALE VMs, but I’ll check that. I did select kvm64, so it’s possible it isn’t available on that CPU. I’ll also try your second recommendation. Thanks.

So nested virtualization is enabled, but apparently the kvm64 CPU model doesn’t support it, or maybe there’s another setting I need to find to enable it on kvm64 CPUs. When I switched it to the host CPU, everything was fixed. I had put it on kvm64 because my hardware is different enough that I didn’t want to run into migration issues once I get more hosts up and running.
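If anyone else hits the “Insufficient devices.kubevirt.io/kvm” error, a couple of quick checks (a sketch; substitute your own node name):

# Does the node advertise KVM devices to KubeVirt?
kubectl get node <node-name> -o jsonpath='{.status.allocatable}' | tr ',' '\n' | grep kvm

# On the Harvester node itself: is the KVM device actually exposed?
ls -l /dev/kvm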

I have an issue where a volume is in a faulty state (Detached), even though we have 40 TB of disk space.

Thanks for the hint!

For the sake of completeness, to anyone who may come across this error and this post, here’s the CLI version of tracking down the error.

Look through kubectl events or describe the pod in question and see whether you can spot a PV that’s causing trouble

[root@rocky ~]# kubectl -n gitlab describe pod gitlab-postgresql-0 | tail
  ----     ------              ----                 ----                     -------
  Warning  FailedScheduling    22m (x238 over 62m)  default-scheduler        0/4 nodes are available: 1 Insufficient memory, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 1 node(s) had untolerated taint {node-role.kubernetes.io/ingress: }, 1 node(s) had untolerated taint {node-role.kubernetes.io/storage-medium: }. preemption: 0/4 nodes are available: 1 No preemption victims found for incoming pod, 3 Preemption is not helpful for scheduling.
  Normal   Scheduled           21m                  default-scheduler        Successfully assigned gitlab/gitlab-postgresql-0 to node-linux-x86-64-rocky-03da24fc3d134944927eccea7874f749-0
  Warning  FailedAttachVolume  21m                  attachdetach-controller  AttachVolume.Attach failed for volume "pvc-57879c88-cfb7-4759-b3ae-2834ed023fb0" : CSINode node-linux-x86-64-rocky-03da24fc3d134944927eccea7874f749-0 does not contain driver driver.longhorn.io
  Warning  FailedMount         19m                  kubelet                  Unable to attach or mount volumes: unmounted volumes=[data], unattached volumes=[postgresql-password dshm data kube-api-access-n45r8 custom-init-scripts]: timed out waiting for the condition
  Warning  FailedMount         10m                  kubelet                  Unable to attach or mount volumes: unmounted volumes=[data], unattached volumes=[custom-init-scripts postgresql-password dshm data kube-api-access-n45r8]: timed out waiting for the condition
  Warning  FailedMount         8m11s (x2 over 14m)  kubelet                  Unable to attach or mount volumes: unmounted volumes=[data], unattached volumes=[data kube-api-access-n45r8 custom-init-scripts postgresql-password dshm]: timed out waiting for the condition
  Warning  FailedMount         3m41s (x2 over 17m)  kubelet                  Unable to attach or mount volumes: unmounted volumes=[data], unattached volumes=[kube-api-access-n45r8 custom-init-scripts postgresql-password dshm data]: timed out waiting for the condition
  Warning  FailedMount         84s (x3 over 12m)    kubelet                  Unable to attach or mount volumes: unmounted volumes=[data], unattached volumes=[dshm data kube-api-access-n45r8 custom-init-scripts postgresql-password]: timed out waiting for the condition
  Warning  FailedAttachVolume  61s (x17 over 21m)   attachdetach-controller  AttachVolume.Attach failed for volume "pvc-57879c88-cfb7-4759-b3ae-2834ed023fb0" : rpc error: code = Aborted desc = volume pvc-57879c88-cfb7-4759-b3ae-2834ed023fb0 is not ready for workloads

In this case, it says pvc-57879c88-cfb7-4759-b3ae-2834ed023fb0 is having problems.

Then describe that PV and look at the longhorn.io/volume-scheduling-error annotation. There’s your error message.

[root@rocky ~]# kubectl describe pv pvc-57879c88-cfb7-4759-b3ae-2834ed023fb0
Name:            pvc-57879c88-cfb7-4759-b3ae-2834ed023fb0
Labels:          <none>
Annotations:     longhorn.io/volume-scheduling-error: disks are unavailable;insufficient storage
                 pv.kubernetes.io/provisioned-by: driver.longhorn.io
Finalizers:      [kubernetes.io/pv-protection external-attacher/driver-longhorn-io]
StorageClass:    longhorn
Status:          Bound
Claim:           gitlab/data-gitlab-postgresql-0
Reclaim Policy:  Delete
Access Modes:    RWO
VolumeMode:      Filesystem
Capacity:        8Gi
Node Affinity:   <none>
Message:         
Source:
    Type:              CSI (a Container Storage Interface (CSI) volume source)
    Driver:            driver.longhorn.io
    FSType:            ext4
    VolumeHandle:      pvc-57879c88-cfb7-4759-b3ae-2834ed023fb0
    ReadOnly:          false
    VolumeAttributes:      dataLocality=disabled
                           fromBackup=
                           fsType=ext4
                           numberOfReplicas=2
                           staleReplicaTimeout=30
                           storage.kubernetes.io/csiProvisionerIdentity=1675342008187-8081-driver.longhorn.io
Events:                <none>
[root@rocky ~]# 
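If you only want that one annotation, a jsonpath one-liner works too (a sketch; the dots in the annotation key need escaping):

kubectl get pv pvc-57879c88-cfb7-4759-b3ae-2834ed023fb0 \
  -o jsonpath='{.metadata.annotations.longhorn\.io/volume-scheduling-error}'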

If you cannot find the PV in question in kubectl events / pod description, pick ones that have familiar names from the PVC list (or by the names that might relate to your pod)

[root@rocky ~]# kubectl get pvc -o wide
NAME                               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE   VOLUMEMODE
data-gitlab-postgresql-0           Bound    pvc-57879c88-cfb7-4759-b3ae-2834ed023fb0   8Gi        RWO            longhorn       63m   Filesystem
gitlab-minio                       Bound    pvc-28e1b3e6-a878-4a7c-a599-6eee34578bbe   10Gi       RWO            longhorn       63m   Filesystem
gitlab-prometheus-server           Bound    pvc-17c2a01b-72cc-4dec-838a-cc7eb0b8aaeb   8Gi        RWO            longhorn       63m   Filesystem
redis-data-gitlab-redis-master-0   Bound    pvc-5a9829db-2ad8-43ab-a69a-71b3ef3673d0   8Gi        RWO            longhorn       63m   Filesystem
repo-data-gitlab-gitaly-0          Bound    pvc-b748819d-2fbd-4fc8-bd09-9b2682664b9c   50Gi       RWO            longhorn       63m   Filesystem
[root@rocky ~]# 

If that doesn’t work out, try looking through volumeattachments - see which ones are not attached

[root@rocky ~]# kubectl get volumeattachments.storage.k8s.io -o wide
NAME                                                                   ATTACHER             PV                                         NODE                                                         ATTACHED   AGE
csi-05b262fa2bbb34c95621bacdbfe455a13ad28e010ae6777d35d3058e33a52c57   driver.longhorn.io   pvc-5a9829db-2ad8-43ab-a69a-71b3ef3673d0   node-linux-x86-64-rocky-03da24fc3d134944927eccea7874f749-0   false      65m
csi-3a8fad2058abc9f2016bb9c96191e9d971b16baf774fe50b33054744e7f408eb   driver.longhorn.io   pvc-57879c88-cfb7-4759-b3ae-2834ed023fb0   node-linux-x86-64-rocky-03da24fc3d134944927eccea7874f749-0   false      24m
csi-5d3a8b5c40f8333b19dcb66743710003604c119b1a7bead9eeccb710c0906046   driver.longhorn.io   pvc-28e1b3e6-a878-4a7c-a599-6eee34578bbe   node-linux-x86-64-rocky-03da24fc3d134944927eccea7874f749-0   true       24m
csi-758743437d4ebc71d21be82d97576e9bc5fb018827e723bd6f649b9780b71639   driver.longhorn.io   pvc-17c2a01b-72cc-4dec-838a-cc7eb0b8aaeb   node-linux-x86-64-rocky-03da24fc3d134944927eccea7874f749-0   true       65m
csi-ff93f6ea9186acc760c463c2abd596b417f583170b2d0151dffd4d67ef772f9d   driver.longhorn.io   pvc-b748819d-2fbd-4fc8-bd09-9b2682664b9c   node-linux-x86-64-rocky-03da24fc3d134944927eccea7874f749-0   false      24m
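To pull out just the PVs whose attachments are false (a sketch, assuming jq is available):

kubectl get volumeattachments.storage.k8s.io -o json \
  | jq -r '.items[] | select(.status.attached == false) | .spec.source.persistentVolumeName'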

EDIT:

Or, you know, just list Longhorn volumes and find the detached ones. Also, mind the degraded state of the others.

[root@rocky ~]# kubectl get volumes.longhorn.io -A -o wide
NAMESPACE         NAME                                       STATE      ROBUSTNESS   SCHEDULED   SIZE          NODE                                                         AGE
longhorn-system   pvc-17c2a01b-72cc-4dec-838a-cc7eb0b8aaeb   attached   degraded                 8589934592    node-linux-x86-64-rocky-03da24fc3d134944927eccea7874f749-0   99m
longhorn-system   pvc-28e1b3e6-a878-4a7c-a599-6eee34578bbe   attached   degraded                 10737418240   node-linux-x86-64-rocky-03da24fc3d134944927eccea7874f749-0   99m
longhorn-system   pvc-57879c88-cfb7-4759-b3ae-2834ed023fb0   detached   unknown                  8589934592                                                                 99m
longhorn-system   pvc-5a9829db-2ad8-43ab-a69a-71b3ef3673d0   detached   unknown                  8589934592                                                                 99m
longhorn-system   pvc-b748819d-2fbd-4fc8-bd09-9b2682664b9c   detached   unknown                  53687091200                                                                99m
[root@rocky ~]#
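To see where the replicas behind a given volume landed (or failed to land), the Replica CRs can be listed too; replica names are prefixed with the volume name, so a grep narrows it down (sketch):

kubectl -n longhorn-system get replicas.longhorn.io | grep pvc-57879c88-cfb7-4759-b3ae-2834ed023fb0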