DRBD in a 3+ node cluster & LINSTOR

Currently using SLES 12 SP3 with HA & KVM

I have set up DRBD in primary/primary mode for the system. However, it seems only 2 hosts can use the DRBD device.
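For reference, a minimal dual-primary resource definition looks roughly like this (hostnames, backing disks and addresses are just placeholders, not the real values):

    # /etc/drbd.d/r0.res - minimal two-node dual-primary resource (placeholder values)
    resource r0 {
        net {
            protocol C;
            allow-two-primaries yes;   # needed for primary/primary operation
        }
        on storage1 {
            device    /dev/drbd0;
            disk      /dev/sdb1;
            address   192.168.1.1:7789;
            meta-disk internal;
        }
        on storage2 {
            device    /dev/drbd0;
            disk      /dev/sdb1;
            address   192.168.1.2:7789;
            meta-disk internal;
        }
    }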
Reading the LINBIT docs, they have a newer management layer for DRBD called ‘linstor’ that allows a multi-node cluster to access the DRBD device.
The ‘linstor’ packages are not in the SLES 12 repository.
If I try ‘zypper install linstor-client’, it comes back with:

‘linstor-client’ not found in package names. Trying capabilities.
No provider of ‘linstor-client’ found.

Same for ‘linstor-server’
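A quick way to check what the configured repositories do offer in this area:

    # list DRBD/LINSTOR-related packages known to the configured repositories
    zypper search --details drbd linstor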

Does anybody have experience with ‘linstor’ for managing DRBD so that a multi-node cluster can access the DRBD device?

Any comments or suggestion would be appreciated.

OK, so no go with linstor

How about drbdmanage?

What I would like is to have multiple hosts access the DRBD primary/primary pair. Can this be done?

Hi johngoutbeck,

What I would like is to have multiple hosts access the DRBD primary/primary pair. Can this be done?

I’ve not had the best experiences with running DRBD dual-primary, both in very early stages of development and then even with later production-level code. It even got as bad as having practically inconsistent contents without DRBD noticing (IOW, from DRBD’s point of view, the two backing stores were consistent, but in reality they weren’t, showing different content when mounting the FS on both nodes).

I resorted rather early to using a storage infrastructure (i.e. two separate servers running DRBD, with multiple resources in single-primary mode), with the currently active server exporting the block device via some other mechanism (Fibre Channel, iSCSI). The “client” systems then all used e.g. a cluster file system on top of that.

You haven’t mentioned the purpose of exporting the block device(s) - is it about backing devices for VMs, for use in a virtualization cluster?

Regards,
J

Thanks J for your info.

I’m working with SLES 12 KVM hosts (3 or more). I’m trying to have network RAID1 storage (DRBD) that all the hosts see as a single clustered volume (OCFS2), so that if one storage device goes down, or is brought offline, the VMs stay up. I was hoping I could do this with DRBD, but since it involves only 2 hosts, it seems this cannot be done.

OR can it?
Following your statement:

I resorted rather early to using a storage infrastructure (i.e. two separate servers running DRBD, with multiple resources in single-primary mode), with the currently active server exporting the block device via some other mechanism (Fibre Channel, iSCSI). The “client” systems then all used e.g. a cluster file system on top of that.

Use 2 SLES 12 storage hosts with DRBD in primary/primary mode. Set up iSCSI targets on these 2 storage nodes on multiple paths.
On the SLES 12 HA KVM hosts, use iSCSI initiators to connect to the targets with multipath. This should give me 4 paths to the same single data store?
Set up volume clustering with OCFS2 to store the VM images and data.
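On the KVM hosts that could look roughly like this (the portal IPs are made up for the example, and both targets would have to present the same SCSI ID/WWN for multipath to group the sessions into one device):

    # discover the targets on both storage nodes (two portals each -> 4 paths)
    iscsiadm -m discovery -t sendtargets -p 192.168.10.1
    iscsiadm -m discovery -t sendtargets -p 192.168.10.2
    iscsiadm -m discovery -t sendtargets -p 192.168.20.1
    iscsiadm -m discovery -t sendtargets -p 192.168.20.2

    # log in to all discovered portals
    iscsiadm -m node --login

    # the sessions should then show up as one multipathed device
    multipath -ll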

What do you think?

Hi John,

2 SLES 12 storage hosts with DRBD in primary/primary mode. Set up iSCSI targets on these 2 storage nodes on multiple paths
[…]
What do you think?

in my opinion it’s still better to avoid primary/primary configurations, for operational stability reasons.

Personally, I’d set up the two storage servers with multiple DRBD resources, i.e. one per virtual disk, and run these in primary/secondary mode. Each of these can be handled via Pacemaker, which would also take care of exporting the currently active (primary) DRBD resource via the iSCSI target.
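As a rough crmsh sketch for one such virtual disk (the resource names, IQN, DRBD device path and lio-t backend below are assumptions for the example, not a tested configuration):

    # DRBD resource "vm1", promoted on one storage node at a time
    primitive p_drbd_vm1 ocf:linbit:drbd \
        params drbd_resource=vm1 \
        op monitor interval=29s role=Master \
        op monitor interval=31s role=Slave
    ms ms_drbd_vm1 p_drbd_vm1 \
        meta master-max=1 clone-max=2 notify=true

    # iSCSI target + LUN exporting the DRBD device of that resource
    primitive p_target_vm1 ocf:heartbeat:iSCSITarget \
        params iqn=iqn.2018-01.com.example:vm1 implementation=lio-t
    primitive p_lun_vm1 ocf:heartbeat:iSCSILogicalUnit \
        params target_iqn=iqn.2018-01.com.example:vm1 lun=1 path=/dev/drbd0
    group g_iscsi_vm1 p_target_vm1 p_lun_vm1

    # export only where the DRBD resource is primary, and only after promotion
    colocation col_iscsi_with_drbd inf: g_iscsi_vm1 ms_drbd_vm1:Master
    order ord_drbd_before_iscsi inf: ms_drbd_vm1:promote g_iscsi_vm1:start

Each additional virtual disk would get its own DRBD resource plus target/LUN set, so they can be distributed across the two storage nodes independently.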

Going that route, you might see the following benefits:

  • simpler design from the storage resource’s operational point of view (albeit having more complex operations when migrating storage resources between nodes - but only at that point in time, not for every read/write request)

  • trouble with single DRBD resources would only affect a single VM

  • no OCFS2 overhead (cluster-wide communications), and you could even avoid file system overhead at the storage server completely (exporting the DRBD resource, rather than a file on it)

  • you’d have load balancing at the storage server level, rather than trying to balance at the multipath layer

  • easier growth for increased storage demand - you could place new DRBD resources on any available storage on the servers, with no need to grow an OCFS2 filesystem.

You’d have a read/write delay when migrating the “primary” from one storage node to the other, but that would likely get covered by retries and multipath handling.
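On the initiator side that behaviour is mainly controlled via /etc/multipath.conf; a minimal sketch (values would need to be adapted):

    defaults {
        user_friendly_names yes
        # queue I/O instead of failing it while no path is usable,
        # e.g. during a switch-over of the primary on the storage side
        no_path_retry       queue
    }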

I’ve yet to run something along those lines via iSCSI; my practical experience with this is via Fibre Channel with NPIV.

Regards,
J