DRBD & HA multipathing

I have a DRBD setup, but the resource file only lists a connection through a single network link.

resource sbd02 {
    meta-disk internal;
    device /dev/drbd_sbd02 minor 2;

    net {
        protocol C;

        after-sb-0pri discard-zero-changes;
        after-sb-1pri discard-secondary;
        after-sb-2pri disconnect;
    }

    connection-mesh {
        hosts nss-sn01 nss-sn02;
    }

    on nss-sn01 {
        address 192.168.14.251:7792;
        disk /dev/sn01-vg0/sbd02;
        node-id 0;
    }

    on nss-sn02 {
        address 192.168.14.252:7792;
        disk /dev/sn02-vg0/sbd02;
        node-id 1;
    }
}

I would like to have HA connectivity between the two storage servers via dual network connections.

I'm wondering whether two addresses could be used, so that if one link went down the other would take over automatically. Ideally both links could even be used at once, bonding-style, for higher mirroring throughput, but without actually bonding.

New possible resource config:

on nss-sn01 {
    address 192.168.14.251:7792;
    address 192.168.15.251:7792;
    disk /dev/sn01-vg0/sbd02;
    node-id 0;
}

on nss-sn02 {
    address 192.168.14.252:7792;
    address 192.168.15.252:7792;
    disk /dev/sn02-vg0/sbd02;
    node-id 1;
}
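(Editor's note: two `address` statements in one `on` section are not valid drbd.conf syntax; however, DRBD 9 documents a `path` sub-section inside an explicit `connection` block for exactly this use case. A sketch under that assumption, reusing the addresses above; with the TCP transport only one path is active at a time and the other takes over on failure:)

```
connection {
    path {
        host nss-sn01 address 192.168.14.251:7792;
        host nss-sn02 address 192.168.14.252:7792;
    }
    path {
        host nss-sn01 address 192.168.15.251:7792;
        host nss-sn02 address 192.168.15.252:7792;
    }
}
```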

Can this be done?

Any advice/suggestions/comments.

Hi johngoutbeck,

Wondering if 2 addresses could be used

I don’t think so, but you could simply give it a try :slight_smile:

and if 1 link would go down the other link would take over automatically. Maybe even use both links like bonding

From my point of view network link bonding seems like the better (and easier) approach. Create a bond on each host, assign the IP address to the bond, ready to rock. If one link fails, the other will take over.
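A minimal sketch of such a bond in Debian-style /etc/network/interfaces syntax; the interface names, address, and monitoring interval are assumptions, not taken from the thread:

```
auto bond0
iface bond0 inet static
    address 192.168.14.251/24
    bond-slaves eth1 eth2
    bond-mode active-backup
    bond-miimon 100
```

With `active-backup` (mode 1) only one slave carries traffic; if its link fails, the other takes over without any switch-side configuration.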

Depending on the general and network setup, different bonding types will achieve different levels of load balancing and require different levels of support by the network hardware (switch).

Maybe even use both links like bonding which to give higher performance mirroring, but without bonding.

Why do you want to / need to avoid bonding?

Regards,
J

Thanks for your input J.

I haven’t seen an example of two (multiple) addresses for a resource connection.

I’m avoiding bonding since I have simple switches.

The reason for not bonding is to have two LAN links on different subnets, each connected to its own switch. That way, if a link or a switch goes down (scheduled or unscheduled), the connection between the DRBD storage nodes stays up - like multipathing on a SAN.

Bonding (LAG) could be used if the switches supported MLAG (Multi-Chassis Link Aggregation, where two ports on two different switches act as one LAG bond).

So I take it DRBD does not support multipath between storage nodes within its resource configuration.

[QUOTE=johngoutbeck;55622]Thanks for your input J.

I haven’t seen an example of two (multiple) addresses for a resource connection.

I’m avoiding bonding since I have simple switches.[/QUOTE]

There are bonding modes that do not require explicit switch support (as would be needed for e.g. LACP, bonding mode 4).

[QUOTE=johngoutbeck;55622]The reason for not bonding is to have two LAN links on different subnets, each connected to their own switch. This way if a link or a switch goes down (scheduled or unscheduled), then the connection between the DRBD storage system would stay up and connected - like multipath for a SAN.

Bonding(LAG) can be used if the switches support MLAG (Multi-Chassis Link Aggregation - where 2 ports on 2 different switches work as a LAG bond).[/QUOTE]

…or you interconnect the switches on layer 2, connect each server to each switch, and use a bonding mode without LACP, e.g. hot-standby or round-robin. If one link breaks, or one switch is rebooted, you’d still have connectivity. It's just not as effective as LACP appears - though with only these two servers talking to each other, LACP typically wouldn’t use more than one link at a time either.
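For reference, those modes correspond to the Linux bonding driver's balance-rr (mode 0, round-robin) and active-backup (mode 1, hot-standby), neither of which needs switch support. A sketch of selecting one at module load time; the miimon value is illustrative:

```
# /etc/modprobe.d/bonding.conf
# mode=balance-rr    -> packets striped across all slaves
# mode=active-backup -> one active link, others standby
options bonding mode=active-backup miimon=100
```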

Yes, I’d say so - simply because I have found no indication that it would support it. If some DRBD guru jumps in and says “it does!”, I’ll stand corrected :wink:

Regards,
J