If “path_grouping_policy” is set to “group_by_prio”, then I observe (via iostat -xk /dev/dm-0) all I/O going only to /dev/sdg and /dev/sdq, while sdb and sdl remain 100% idle.
But when “path_grouping_policy” is set to “multibus”, I can see that I/O is properly distributed across all the disks (sdg, sdq … sdb, sdl).
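For reference, here is a minimal /etc/multipath.conf sketch showing where this policy is set; the WWID and alias below are placeholders (not values from this setup), and the policy line is the one you would flip between the two tests:

    defaults {
        user_friendly_names yes
    }

    multipaths {
        multipath {
            # placeholder WWID -- substitute the one reported by "multipath -ll"
            wwid  3600508b4000156d700012000000b0000
            alias mpatha
            # flip between "multibus" (one group, round-robin over all paths)
            # and "group_by_prio" (group paths by priority, use the best group)
            path_grouping_policy multibus
        }
    }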
When “path_grouping_policy” is set to “group_by_prio”, running “multipath -ll” shows:
On 02/10/2015 05:44 AM, sharfuddin wrote:
> Please recommend me which one is recommended for best
> performance (group_by_prio vs multibus).
Really depends on your hardware setup.
If you have a good storage subsystem and multiple controllers, then round-robining across all active connections could gain you some performance.
But if you have limited controllers (looks like you have 2?) and your storage really can’t saturate your connections (just some examples), then last-used or even priority groups work just fine. Priority groups could work best when there are different types of controllers and/or topologies in use, in which case you’d assign higher priority to the better paths…
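As one illustration of the priority-group approach, here is a hedged sketch of a device section in /etc/multipath.conf; the vendor/product strings are hypothetical, and the ALUA prioritizer only applies if your array actually reports ALUA states:

    devices {
        device {
            vendor  "HP"                    # hypothetical -- match your array
            product "HSV.*"
            # group paths by priority and send I/O to the highest-priority group
            path_grouping_policy group_by_prio
            prio                 alua       # derive path priorities from ALUA state
            path_selector        "round-robin 0"
            failback             immediate  # return to the preferred group when it recovers
        }
    }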
Regardless, if you decide to trial and benchmark each way, make sure you are driving read/write loads from multiple clients. Otherwise, you might not see a difference at all (of course, depending on config, you still might not see a big difference).
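For the benchmarking, one way to drive a mixed read/write load is fio; the device path below is a placeholder, and you would run this concurrently from several clients to actually stress the paths:

    # mixed 70/30 random read/write against the multipath device (placeholder path)
    # NOTE: this writes to the raw device and will destroy any data on it
    fio --name=mpath-test --filename=/dev/mapper/mpatha \
        --direct=1 --ioengine=libaio --iodepth=32 \
        --rw=randrw --rwmixread=70 --bs=4k \
        --runtime=60 --time_based --numjobs=4 --group_reporting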
HP has something to say about your storage unit and preferred multipathing
scenario, see: