Longhorn volume degraded, Replica Scheduling Failure, Error Message: precheck new replica failed

Hello, I have an RKE2 cluster with 2 nodes (one master and one worker) installed on Azure VMs. I installed Longhorn through Rancher Apps & Marketplace. I created a Deployment and a PersistentVolumeClaim using the longhorn storage class (a rough sketch of the PVC is shown after the error details below). Every time I apply them, I run into this error shown on the Longhorn UI:


Replica Scheduling Failure
Error Message: replica scheduling failed
State: attached
Health: degraded
Ready for workload: Ready
Conditions: TooManySnapshots


All replicas are shown as running.

I have two longhorn-manager pods, one on the master node and one on the worker node. I can see this message in the logs of the pod running on the worker node:

"level=warning msg=“Unable to create new replica pvc-a2a771ba-b2f6-46bd-a3ae-db2da181b4df-r-443a4e8b” func=“controller.(*VolumeController).replenishReplicas” file=“volume_controller.go:2314” accessMode=rwo controller=longhorn-volume error=“No available disk candidates to create a new replica of size 2147483648” frontend=blockdev migratable=false node=worker-node-rancher owner=worker-node-rancher state=attached volume=pvc-a2a771ba-b2f6-46bd-a3ae-db2da181b4df "

Note that the node has enough unallocated disk space.
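
For reference, the PVC is roughly like this (a minimal sketch; the name is a placeholder, and the size matches the 2Gi requested in the log above):

```yaml
# Rough sketch of the PVC (placeholder name), using the default longhorn StorageClass
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 2Gi
```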

You only have 2 nodes, one of which is not a worker node. Longhorn normally only uses worker nodes (or mixed worker/control-plane/etcd nodes) for storage.
The default replica count is 3, and there is also a default node anti-affinity configured, so each replica must be placed on a different node. You therefore need at least 3 worker nodes unless you tune your Longhorn configuration.
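
For example, here is a minimal sketch of a custom StorageClass that lowers the replica count so a single worker node is enough (the class name is a placeholder; the default longhorn class ships with numberOfReplicas: "3"):

```yaml
# Sketch: StorageClass with a single replica, suitable for a one-worker cluster.
# "longhorn-single-replica" is a placeholder name; provisioner and parameters
# are the standard Longhorn CSI ones.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: longhorn-single-replica
provisioner: driver.longhorn.io
allowVolumeExpansion: true
parameters:
  numberOfReplicas: "1"        # default Longhorn StorageClass uses "3"
  staleReplicaTimeout: "2880"
```

Alternatively, you can change the default replica count under Settings in the Longhorn UI, or enable the Replica Node Level Soft Anti-Affinity setting so that multiple replicas of a volume are allowed to share a node.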