FailedAttachVolume during Virtual Machine creation - volume pvc-* is not ready for workloads

I have a single-node Harvester cluster in my lab. I had some initial success setting up Virtual Machines, but after creating several I keep running into this FailedAttachVolume error:

Events:
  Type     Reason                  Age                   From                     Message
  ----     ------                  ----                  ----                     -------
  Warning  FailedScheduling        17m                   default-scheduler        0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
  Normal   Scheduled               17m                   default-scheduler        Successfully assigned windows/virt-launcher-dc-02-nznnv to harvester-01
  Warning  FailedScheduling        17m                   default-scheduler        0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
  Normal   SuccessfulAttachVolume  16m                   attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-7876bb17-103a-44de-b62b-8f046b6c068f"
  Normal   SuccessfulMountVolume   16m                   kubelet                  MapVolume.MapPodDevice succeeded for volume "pvc-7876bb17-103a-44de-b62b-8f046b6c068f" globalMapPath "/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/pvc-7876bb17-103a-44de-b62b-8f046b6c068f/dev"
  Normal   SuccessfulMountVolume   16m                   kubelet                  MapVolume.MapPodDevice succeeded for volume "pvc-7876bb17-103a-44de-b62b-8f046b6c068f" volumeMapPath "/var/lib/kubelet/pods/1b8d7e08-f3c3-474d-9798-23d3e9771c15/volumeDevices/kubernetes.io~csi"
  Warning  FailedMount             14m                   kubelet                  Unable to attach or mount volumes: unmounted volumes=[disk-0 disk-2], unattached volumes=[libvirt-runtime cloudinitdisk-ndata private disk-0 container-disks public hotplug-disks cloudinitdisk-udata ephemeral-disks disk-1 disk-2 sockets]: timed out waiting for the condition
  Warning  FailedMount             12m                   kubelet                  Unable to attach or mount volumes: unmounted volumes=[disk-0 disk-2], unattached volumes=[cloudinitdisk-ndata hotplug-disks container-disks disk-0 disk-2 disk-1 public ephemeral-disks sockets private libvirt-runtime cloudinitdisk-udata]: timed out waiting for the condition
  Warning  FailedMount             10m                   kubelet                  Unable to attach or mount volumes: unmounted volumes=[disk-0 disk-2], unattached volumes=[cloudinitdisk-ndata ephemeral-disks hotplug-disks disk-0 disk-2 private cloudinitdisk-udata container-disks libvirt-runtime sockets disk-1 public]: timed out waiting for the condition
  Warning  FailedMount             8m14s                 kubelet                  Unable to attach or mount volumes: unmounted volumes=[disk-0 disk-2], unattached volumes=[cloudinitdisk-ndata public ephemeral-disks container-disks hotplug-disks disk-0 cloudinitdisk-udata private libvirt-runtime sockets disk-1 disk-2]: timed out waiting for the condition
  Warning  FailedAttachVolume      6m38s (x13 over 17m)  attachdetach-controller  AttachVolume.Attach failed for volume "pvc-f3c83035-e49e-4952-9c2e-385e94996a85" : rpc error: code = Aborted desc = volume pvc-f3c83035-e49e-4952-9c2e-385e94996a85 is not ready for workloads
  Warning  FailedMount             5m58s                 kubelet                  Unable to attach or mount volumes: unmounted volumes=[disk-0 disk-2], unattached volumes=[public ephemeral-disks container-disks libvirt-runtime sockets cloudinitdisk-ndata cloudinitdisk-udata private hotplug-disks disk-0 disk-1 disk-2]: timed out waiting for the condition
  Warning  FailedMount             3m43s                 kubelet                  Unable to attach or mount volumes: unmounted volumes=[disk-2 disk-0], unattached volumes=[hotplug-disks libvirt-runtime sockets ephemeral-disks container-disks disk-1 disk-2 cloudinitdisk-udata cloudinitdisk-ndata private public disk-0]: timed out waiting for the condition
  Warning  FailedMount             88s                   kubelet                  Unable to attach or mount volumes: unmounted volumes=[disk-2 disk-0], unattached volumes=[cloudinitdisk-udata sockets hotplug-disks cloudinitdisk-ndata disk-1 disk-2 ephemeral-disks container-disks libvirt-runtime private public disk-0]: timed out waiting for the condition
  Warning  FailedAttachVolume      30s (x16 over 17m)    attachdetach-controller  AttachVolume.Attach failed for volume "pvc-6f66ae77-d90b-4763-857a-1bb015e16ec1" : rpc error: code = Aborted desc = volume pvc-6f66ae77-d90b-4763-857a-1bb015e16ec1 is not ready for workloads
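
To confirm what Longhorn itself reports for those PVCs, the volume CRs can be queried directly. A rough sketch (assuming the Harvester default of Longhorn running in the longhorn-system namespace):

kubectl -n longhorn-system get volumes.longhorn.io
# drill into one of the volumes named in the events above
kubectl -n longhorn-system get volumes.longhorn.io pvc-f3c83035-e49e-4952-9c2e-385e94996a85 -o jsonpath='{.status.state} {.status.robustness}{"\n"}'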

I’ve read this might relate to the default numberOfReplicas of 3 causing issues on a single-node cluster, so I updated the default Longhorn StorageClass through its ConfigMap:

kubectl describe configmap longhorn-storageclass -n longhorn-system
Name:         longhorn-storageclass
Namespace:    longhorn-system
Labels:       app.kubernetes.io/instance=harvester
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=harvester
              app.kubernetes.io/version=v1.0.0
              helm.sh/chart=harvester-1.0.0
Annotations:  helm.sh/hook: post-install,post-upgrade

Data
====
storageclass.yaml:
----
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: longhorn
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: driver.longhorn.io
allowVolumeExpansion: true
reclaimPolicy: "Delete"
volumeBindingMode: Immediate
parameters:
  numberOfReplicas: "1"
  staleReplicaTimeout: "30"
  fromBackup: ""
  baseImage: ""
  migratable: "true"

Events:  <none>
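
For reference, this is roughly how I made that change (a sketch of the edit; since StorageClass parameters are immutable, the existing longhorn StorageClass may also need to be deleted and recreated for the new value to take effect):

kubectl -n longhorn-system edit configmap longhorn-storageclass
# in the editor, under data -> storageclass.yaml, set:
#   parameters:
#     numberOfReplicas: "1"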

When the VM rootdisk is created, it uses this StorageClass, and the resulting PersistentVolume has numberOfReplicas set to 1.

kubectl describe storageclass "longhorn"
~
parameters:
  numberOfReplicas: "1"
  staleReplicaTimeout: "30"
  fromBackup: ""
  baseImage: ""
  migratable: "true"
~

When I upload an ISO image, Harvester creates a new StorageClass for each one, but these default to a numberOfReplicas of 3:

NAME                   PROVISIONER          RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
longhorn (default)     driver.longhorn.io   Delete          Immediate           true                   3h9m
longhorn-image-5c4rd   driver.longhorn.io   Delete          Immediate           true                   146m
longhorn-image-lzj44   driver.longhorn.io   Delete          Immediate           true                   145m
longhorn-image-tvkfb   driver.longhorn.io   Delete          Immediate           true                   18m
longhorn-image-zb6xl   driver.longhorn.io   Delete          Immediate           true                   133m
kubectl describe storageclass "longhorn-image-5c4rd"
Name:                  longhorn-image-5c4rd
IsDefaultClass:        No
Annotations:           <none>
Provisioner:           driver.longhorn.io
Parameters:            backingImage=default-image-5c4rd,migratable=true,numberOfReplicas=3,staleReplicaTimeout=30
AllowVolumeExpansion:  True
MountOptions:          <none>
ReclaimPolicy:         Delete
VolumeBindingMode:     Immediate
Events:                <none>
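
I haven’t found where that default of 3 comes from. Longhorn itself keeps its global default as a Setting CR, so one thing worth checking (a sketch; I’m not certain the Harvester image-created classes actually honour this setting) is:

kubectl -n longhorn-system get settings.longhorn.io default-replica-count
# the same value is exposed in the Longhorn UI under Settings > General > Default Replica Count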

This is where I’m wondering if I need to change the default used on image upload, though I’m not sure where exactly to do that. I had some success initially by building a Windows 10 and a Server 2022 VM, then deleting the Windows ISO volume and the VirtIO driver ISO. Creating the next VM worked, and I repeated this process. Now, though, I’m getting stuck even after deleting the other volumes:

kubectl -n windows get pvc,pods,vmi
NAME                                            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS           AGE
persistentvolumeclaim/admin-01-disk-1-f7mxr     Bound    pvc-ad846a83-fa5b-499d-885f-2e5668ef6ea1   40Gi       RWX            longhorn               110m
persistentvolumeclaim/dc-01-disk-1-ptsjq        Bound    pvc-9801c1b0-e168-4c0c-987c-fe3eba67afb7   40Gi       RWX            longhorn               85m
persistentvolumeclaim/dc-02-disk-0-7yaex        Bound    pvc-f3c83035-e49e-4952-9c2e-385e94996a85   10Gi       RWX            longhorn-image-zb6xl   36m
persistentvolumeclaim/dc-02-disk-1-q8yqj        Bound    pvc-7876bb17-103a-44de-b62b-8f046b6c068f   40Gi       RWX            longhorn               36m
persistentvolumeclaim/dc-02-disk-2-akcjm        Bound    pvc-6f66ae77-d90b-4763-857a-1bb015e16ec1   10Gi       RWX            longhorn-image-5c4rd   36m
persistentvolumeclaim/desktop-01-disk-1-w59as   Bound    pvc-0fd1a0a7-f5c2-4557-a273-34a5c24c9594   40Gi       RWX            longhorn               111m

NAME                                 READY   STATUS              RESTARTS   AGE
pod/virt-launcher-admin-01-km8q2     1/1     Running             0          92m
pod/virt-launcher-dc-01-7vghr        1/1     Running             0          58m
pod/virt-launcher-dc-02-nznnv        0/1     ContainerCreating   0          36m
pod/virt-launcher-desktop-01-2rh95   1/1     Running             0          91m

NAME                                            AGE   PHASE        IP            NODENAME       READY
virtualmachineinstance.kubevirt.io/admin-01     92m   Running      10.20.20.41   harvester-01   True
virtualmachineinstance.kubevirt.io/dc-01        58m   Running      10.20.20.10   harvester-01   True
virtualmachineinstance.kubevirt.io/dc-02        36m   Scheduling                                False
virtualmachineinstance.kubevirt.io/desktop-01   91m   Running      10.20.20.51   harvester-01   True

In this current state I tried creating a CentOS VM in the default namespace and reproduced the same issue:

kubectl -n default get pvc,pods,vmi
NAME                                           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS           AGE
persistentvolumeclaim/centos-01-disk-0-dxhoq   Bound    pvc-e84daa57-3baf-4d69-abfc-0cf10a6d1a2f   10Gi       RWX            longhorn-image-tvkfb   24m
persistentvolumeclaim/centos-01-disk-1-fu9st   Bound    pvc-4b37f7e8-2438-4eef-b7db-d9bf521445dd   40Gi       RWX            longhorn               24m

NAME                                READY   STATUS              RESTARTS   AGE
pod/virt-launcher-centos-01-tr99z   0/1     ContainerCreating   0          24m

NAME                                           AGE   PHASE        IP    NODENAME   READY
virtualmachineinstance.kubevirt.io/centos-01   24m   Scheduling                    False
Events:
  Type     Reason              Age                   From                     Message
  ----     ------              ----                  ----                     -------
  Warning  FailedScheduling    24m                   default-scheduler        0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
  Normal   Scheduled           24m                   default-scheduler        Successfully assigned default/virt-launcher-centos-01-tr99z to harvester-01
  Warning  FailedScheduling    24m                   default-scheduler        0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
  Warning  FailedMount         22m                   kubelet                  Unable to attach or mount volumes: unmounted volumes=[disk-0 disk-1], unattached volumes=[cloudinitdisk-udata ephemeral-disks container-disks cloudinitdisk-ndata private disk-0 disk-1 hotplug-disks public libvirt-runtime sockets]: timed out waiting for the condition
  Warning  FailedMount         20m                   kubelet                  Unable to attach or mount volumes: unmounted volumes=[disk-0 disk-1], unattached volumes=[libvirt-runtime sockets cloudinitdisk-udata disk-0 disk-1 public container-disks hotplug-disks cloudinitdisk-ndata private ephemeral-disks]: timed out waiting for the condition
  Warning  FailedMount         17m                   kubelet                  Unable to attach or mount volumes: unmounted volumes=[disk-1 disk-0], unattached volumes=[libvirt-runtime disk-1 disk-0 public ephemeral-disks cloudinitdisk-ndata container-disks sockets hotplug-disks cloudinitdisk-udata private]: timed out waiting for the condition
  Warning  FailedMount         15m                   kubelet                  Unable to attach or mount volumes: unmounted volumes=[disk-0 disk-1], unattached volumes=[sockets public hotplug-disks private disk-0 container-disks ephemeral-disks cloudinitdisk-udata cloudinitdisk-ndata disk-1 libvirt-runtime]: timed out waiting for the condition
  Warning  FailedAttachVolume  14m (x13 over 24m)    attachdetach-controller  AttachVolume.Attach failed for volume "pvc-4b37f7e8-2438-4eef-b7db-d9bf521445dd" : rpc error: code = Aborted desc = volume pvc-4b37f7e8-2438-4eef-b7db-d9bf521445dd is not ready for workloads
  Warning  FailedMount         13m                   kubelet                  Unable to attach or mount volumes: unmounted volumes=[disk-0 disk-1], unattached volumes=[hotplug-disks sockets container-disks disk-0 public ephemeral-disks private libvirt-runtime cloudinitdisk-udata cloudinitdisk-ndata disk-1]: timed out waiting for the condition
  Warning  FailedMount         11m                   kubelet                  Unable to attach or mount volumes: unmounted volumes=[disk-0 disk-1], unattached volumes=[container-disks libvirt-runtime private public disk-0 disk-1 cloudinitdisk-udata cloudinitdisk-ndata ephemeral-disks sockets hotplug-disks]: timed out waiting for the condition
  Warning  FailedMount         8m51s                 kubelet                  Unable to attach or mount volumes: unmounted volumes=[disk-0 disk-1], unattached volumes=[cloudinitdisk-ndata sockets public hotplug-disks disk-0 disk-1 ephemeral-disks container-disks libvirt-runtime cloudinitdisk-udata private]: timed out waiting for the condition
  Warning  FailedMount         6m33s                 kubelet                  Unable to attach or mount volumes: unmounted volumes=[disk-1 disk-0], unattached volumes=[public ephemeral-disks hotplug-disks libvirt-runtime sockets cloudinitdisk-ndata disk-1 private container-disks cloudinitdisk-udata disk-0]: timed out waiting for the condition
  Warning  FailedMount         4m19s                 kubelet                  Unable to attach or mount volumes: unmounted volumes=[disk-0 disk-1], unattached volumes=[ephemeral-disks private disk-0 sockets cloudinitdisk-ndata container-disks hotplug-disks libvirt-runtime cloudinitdisk-udata public disk-1]: timed out waiting for the condition
  Warning  FailedAttachVolume  3m56s (x18 over 24m)  attachdetach-controller  AttachVolume.Attach failed for volume "pvc-e84daa57-3baf-4d69-abfc-0cf10a6d1a2f" : rpc error: code = Aborted desc = volume pvc-e84daa57-3baf-4d69-abfc-0cf10a6d1a2f is not ready for workloads
  Warning  FailedMount         2m2s                  kubelet                  (combined from similar events): Unable to attach or mount volumes: unmounted volumes=[disk-0 disk-1], unattached volumes=[libvirt-runtime cloudinitdisk-udata cloudinitdisk-ndata disk-0 public ephemeral-disks container-disks disk-1 hotplug-disks sockets private]: timed out waiting for the condition

I first came across this yesterday while working through the KubeVirt docs on passing unattend.xml files. I was using ConfigMaps to store the unattend.xml files and a container disk for the VirtIO drivers. Again, that worked initially, but then I hit the same issue using the VirtIO container disk, so I switched to the ISO. After building the VM, I came across the same issue on the sysprepped machine, which I then deleted before starting again.

Today I reinstalled Harvester and made the config changes before deploying any VMs. I reproduced the issue after deploying the second Server 2022 VM while the first was still in progress. I’m wondering whether the ISO being mounted to another VM before removal could have something to do with this?

Hi…

I have the same situation.
The dashboard shows ContainerCreating for a long time, and the events say:

Unable to attach or mount volumes: unmounted volumes=[disk-1], unattached volumes=[ephemeral-disks private container-disks hotplug-disks cloudinitdisk-ndata disk-1 libvirt-runtime sockets cloudinitdisk-udata public disk-0]: timed out waiting for the condition

Is there any workaround to solve this?

Environment:
Harvester ISO: 1.0.0
Baremetal: 10 nodes Fujitsu Primergy
VM Image: Ubuntu 21.04 (qcow2 format)

Regards,
Fadjar Tandabawana

My original issue was due to a lack of storage on the host. A lack of understanding of Longhorn didn’t help either…

I had another look this evening, but this time went into the Longhorn UI > Volumes. There I found the volume showing as Detached with a red exclamation mark: “The volume cannot be scheduled”. Clicking into the volume showed:

“Scheduling Failure, Replica Scheduling Failure”, which led me to: My volumes keep getting “Degraded” · Issue #1949 · longhorn/longhorn (github.com)

Yaskers first reply indicated disk space on the node itself. Which now makes sense because I used a small nvme. Again, lack of understanding of longhorn but I see for each volume it shows 3 Replicas. With the UI I can select other Degraded volumes and reduce the replicas down to 1. Not sure how the VMs consume storage at this point thin/thick but will go through longhorn docs and come back to this with a bigger nvme drive & ideally get default replicas down to 1