[QUOTE=Magic31;12301]That filter will probably not pick up anything, as /dev/dm.* does not match the naming. /dev/dm-* would work, but those device nodes are created dynamically by the Linux device mapper as devices become present. After a reboot, after changing something in your server setup, or possibly when using a different device driver, the numbering can differ (e.g. what was previously /dev/dm-1 becomes /dev/dm-4 after rebooting).
If you have a mix of disk devices and only intend certain ones to participate in the LVM household, using /dev/dm-* as the argument is too generic.
That is where it has its value to define the LVM filter so that it includes only specific devices and ignores the others that don't fit the filter.
Using /dev/disk/by-id/[intended disk device name, or vendor path when working with SAN storage]… those device names are persistent, even when reinstalling the OS part of your server(s).
What are you looking to set up? Is this a single server setup? And what type of storage (DAS/NAS/SAN)?[/QUOTE]
I have a SLES 11 SP2 server with two LUNs connected from the SAN, with multipath enabled. So I have dm-0 and dm-1.
Based on the MPIO setup advice given by Novell Tech Support, one of the recommendations was to modify the filter in /etc/lvm/lvm.conf to point to “/dev/disk/by-id/dm-uuid-.*”.
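For reference, a filter along those lines in /etc/lvm/lvm.conf might look like the sketch below. The accept ("a|…|") / reject ("r|…|") regex form is standard LVM filter syntax; the exact pattern here is illustrative, not copied from the support advice:

```
# /etc/lvm/lvm.conf (sketch) -- accept only the persistent multipath
# by-id names, reject everything else; patterns are regexes.
devices {
    filter = [ "a|^/dev/disk/by-id/dm-uuid-.*|", "r|.*|" ]
}
```

The trailing "r|.*|" matters: the first matching pattern wins, so anything not explicitly accepted is rejected.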
Unfortunately, I faced a problem whereby one of the symlinks “/dev/disk/by-id/dm-uuid-.* → …/…/dm-x” would be missing under two conditions:
- Server rebooted.
- Single path failure.
It was solved by restarting the multipathd service. Anyway, it is kind of weird and I could not find the root cause of it.
Hence, I changed the filter in /etc/lvm/lvm.conf again to point to “/dev/dm-*”, and this time it survived both the server reboot and the single path failure test. The pvscan command returns the expected result.
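For comparison, the filter I ended up with would look roughly like this sketch (keeping in mind Willem's point above that the dm-N numbering itself is not guaranteed to be persistent across reboots):

```
# /etc/lvm/lvm.conf (sketch) -- accept all device-mapper nodes.
# Acceptable here because dm-0 and dm-1 are the only dm devices on
# this server, but the dm-N numbers themselves can change on reboot.
devices {
    filter = [ "a|^/dev/dm-.*|", "r|.*|" ]
}
```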
By the way, you have answered my questions, and I will perform more testing.
Thank you so much Willem!