Harvester host redundant disk configuration

I just started testing Harvester on a pair of bare-metal servers, and after reading the documentation there is (at least) one thing I don’t understand: how to set up a redundant disk configuration for VMs, and in particular for Longhorn.

I know that Longhorn replicates data across multiple nodes, but I cannot figure out how to ensure that a single physical disk failure on a Harvester node doesn’t affect multiple volumes holding replicated data, since I have no control over where Longhorn places the replicas.

To explain my struggle, let me elaborate a bit on the configuration I’m working with: I have 2 servers with a RAID1 NVMe disk configuration for the host OS and the VM root disks. In addition, each server has 4× 8 TB SSDs that I want to use for Longhorn storage.
As far as I can see, the only way to add these disks in the Harvester UI is one by one, formatted as ext4 or XFS. They add to the total storage available, and I can create Longhorn storage out of it for the 3 Longhorn nodes. But, as far as I can tell, I have no way to ensure that each Longhorn node stores all of its data on different physical disks, which would provide real redundancy within a single server/Harvester node.
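For context, the replica count appears to be set per volume through a Longhorn StorageClass rather than per disk. A minimal sketch of what I mean (the class name is made up; the parameters are as I understand them from the Longhorn documentation, so treat this as an assumption, not a verified config):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-2-replicas   # hypothetical name
provisioner: driver.longhorn.io
allowVolumeExpansion: true
parameters:
  # Longhorn keeps this many copies of each volume, normally on different nodes
  numberOfReplicas: "2"
  staleReplicaTimeout: "2880"
```

But a StorageClass like this only controls how many replicas exist, not which physical disk on a node each replica lands on, which is exactly the part I can’t find in the docs.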

Or… should I manually configure some kind of RAID setup on the host before exposing the (virtual/logical) disks to Harvester? But what would be the point of having multiple Longhorn nodes for redundancy in that case?
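To make that tradeoff concrete, here is a quick back-of-the-envelope calculation (a sketch using the numbers from my setup: 2 nodes with 4× 8 TB each, ignoring filesystem and metadata overhead) comparing plain disks plus Longhorn replicas against host-level RAID1 plus replicas:

```python
def usable_tb(disks_per_node, disk_tb, nodes, longhorn_replicas, host_raid_factor=1.0):
    """Rough usable capacity in TB: raw capacity, reduced first by
    host-level RAID overhead, then divided by the Longhorn replica count."""
    raw = disks_per_node * disk_tb * nodes
    return raw * host_raid_factor / longhorn_replicas

# 2 nodes, 4 x 8 TB each, 2 Longhorn replicas, disks exposed as-is (JBOD):
jbod = usable_tb(4, 8, 2, longhorn_replicas=2)                          # 32.0 TB
# Same hardware, but RAID1 pairs on each host first (halves raw capacity):
raid1 = usable_tb(4, 8, 2, longhorn_replicas=2, host_raid_factor=0.5)   # 16.0 TB
print(jbod, raid1)
```

So stacking RAID1 under Longhorn’s own replication would cut usable capacity in half again, which is why I’m hesitant to go that route.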

I’m sure I’m missing the point completely, but somehow the documentation doesn’t help me understand it.

Replying to myself with some updated insights. :wink:

As I understand it now, the redundancy primarily comes from Longhorn’s Replica Instance Manager replicating each volume across different nodes.

Does this mean that after a disk failure on one node, those pods will continue to run, but access the replica on the other node?