Pod gets stuck when recreated on another node

Hello everyone!

Longhorn v1.4.2, Rancher v2.7

During a helm upgrade, the pod is recreated on another node and fails to start because of:

Warning FailedMount 110s (x8 over 51m) kubelet Unable to attach or mount volumes: unmounted volumes=[grafana-storage], unattached volumes=[grafana-datasources grafana-alerting kube-api-access-zhbw4 grafana-storage]: timed out waiting for the condition

Is there any way to fix this problem without manually deleting the pod or pinning it to a single node?
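For context, this is the manual workaround I use today and would like to avoid (pod name taken from the "Used By" output below; the volumeattachment check is just how I verify the detach, not something Longhorn requires):

```shell
# Manual workaround: delete the stuck pod so the Deployment's
# ReplicaSet schedules a fresh one and the CSI attacher retries
# the mount on whichever node the new pod lands on.
kubectl delete pod -n monitoring-staging grafana-9b8794fd4-z4djx

# Verify the Longhorn volume detaches from the old node and
# re-attaches on the new one.
kubectl get volumeattachment | grep pvc-9d24f973-fc94-4651-b742-a126f2783c9d
```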

Information about the volume:

kubectl get volumes.longhorn.io -n longhorn-system
NAME                                       STATE      ROBUSTNESS   SCHEDULED   SIZE          NODE       AGE
pvc-8c7e943a-b569-468b-b1a0-033e4537bf01   attached   healthy                  21474836480   10.0.1.1   4d21h
pvc-9d24f973-fc94-4651-b742-a126f2783c9d   attached   healthy                  16106127360   10.0.1.3   4d21h
pvc-a349a923-c394-44e1-955e-2e399f6ed1ff   attached   healthy                  21474836480   10.0.1.1   4d21h
pvc-e2a04311-7248-4543-84e2-ad131c5f85be   attached   healthy                  1073741824    10.0.1.1   4d20h

kubectl describe pvc -n monitoring-staging grafana-pv-claim 
Name:          grafana-pv-claim
Namespace:     monitoring-staging
StorageClass:  longhorn
Status:        Bound
Volume:        pvc-9d24f973-fc94-4651-b742-a126f2783c9d
Labels:        app=grafana
               app.kubernetes.io/managed-by=Helm
Annotations:   meta.helm.sh/release-name: monitoring
               meta.helm.sh/release-namespace: monitoring-staging
               project.werf.io/env: staging
               project.werf.io/name: analytics
               pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: driver.longhorn.io
               volume.kubernetes.io/storage-provisioner: driver.longhorn.io
               werf.io/version: v1.2.246
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      15Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Used By:       grafana-67c5597ccd-b4n69
               grafana-9b8794fd4-z4djx
Events:        <none>

kubectl describe pv pvc-9d24f973-fc94-4651-b742-a126f2783c9d
Name:            pvc-9d24f973-fc94-4651-b742-a126f2783c9d
Labels:          <none>
Annotations:     longhorn.io/volume-scheduling-error: 
                 pv.kubernetes.io/provisioned-by: driver.longhorn.io
Finalizers:      [kubernetes.io/pv-protection external-attacher/driver-longhorn-io]
StorageClass:    longhorn
Status:          Bound
Claim:           monitoring-staging/grafana-pv-claim
Reclaim Policy:  Delete
Access Modes:    RWO
VolumeMode:      Filesystem
Capacity:        15Gi
Node Affinity:   <none>
Message:         
Source:
    Type:              CSI (a Container Storage Interface (CSI) volume source)
    Driver:            driver.longhorn.io
    FSType:            ext4
    VolumeHandle:      pvc-9d24f973-fc94-4651-b742-a126f2783c9d
    ReadOnly:          false
    VolumeAttributes:      dataLocality=disabled
                           fromBackup=
                           fsType=ext4
                           numberOfReplicas=3
                           staleReplicaTimeout=30
                           storage.kubernetes.io/csiProvisionerIdentity=1690305727508-8081-driver.longhorn.io
Events:                <none>