Multipath devices missing

Hello everybody,

We have several virtual machines using several multipath devices without problems. Recently we had to change several multipath devices to use LVM instead of simple partitions. The process we followed to change them to LVM was this (a condensed command sketch follows the list):

1. Add a new volume to the server
2. Rescan the SCSI channels
3. Locate the new device (multipath -ll)
4. Partition it using fdisk /dev/mapper/XXXXXXXXXXXXXX
5. Add the new partition to LVM (pvcreate /dev/mapper/XXXXXXXXX_part1)
6. Create the LVM volume (vgcreate, lvcreate and mkfs.ext)
7. Mount the new LVM volume and copy the data from the non-LVM volume to the new LVM one
8. Change /etc/fstab to use the new LVM volume
9. Reboot the server
10. Test that the applications are working as expected
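
Condensed, the commands look roughly like this (the WWID and the VG/LV names are placeholders, and mkfs.ext3 stands in for whichever ext variant we used):

# Step 2: rescan all SCSI hosts for the newly presented LUN
for h in /sys/class/scsi_host/host*; do echo "- - -" > "$h/scan"; done
# Step 3: confirm multipath picked up the new map
multipath -ll
# Steps 4-5: partition the map, refresh the partition mapping, create the PV
fdisk /dev/mapper/<WWID>
kpartx -a /dev/mapper/<WWID>       # assumption: kpartx recreates <WWID>_part1
pvcreate /dev/mapper/<WWID>_part1
# Step 6: volume group, logical volume, filesystem
vgcreate vg_new /dev/mapper/<WWID>_part1
lvcreate -n lv_new -l 100%FREE vg_new
mkfs.ext3 /dev/vg_new/lv_new
# Step 7: mount and copy the data across
mount /dev/vg_new/lv_new /mnt/new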

Now I want to remove the old volumes, but what happens now is incredible:

- I can see the non-LVM volumes in multipath -ll
- I cannot see the new LVM volumes in multipath -ll

Using multipath -v4 -ll I can see several errors about my new volumes:

Mar 27 08:41:40 | 360050763808100b2580000000000010b: pgfailback = -2 (controller setting)
Mar 27 08:41:40 | 360050763808100b2580000000000010b: pgpolicy = group_by_prio (controller setting)
Mar 27 08:41:40 | 360050763808100b2580000000000010b: selector = service-time 0 (internal default)
Mar 27 08:41:40 | 360050763808100b2580000000000010b: features = 1 queue_if_no_path (controller setting)
Mar 27 08:41:40 | 360050763808100b2580000000000010b: hwhandler = 0 (controller setting)
Mar 27 08:41:40 | 360050763808100b2580000000000010b: rr_weight = 1 (controller setting)
Mar 27 08:41:40 | 360050763808100b2580000000000010b: minio = 1 rq (config file default)
Mar 27 08:41:40 | 360050763808100b2580000000000010b: no_path_retry = -2 (inherited setting)
Mar 27 08:41:40 | 360050763808100b2580000000000010b: fast_io_fail_tmo = 5 (config file default)
Mar 27 08:41:40 | 360050763808100b2580000000000010b: retain_attached_hw_handler = 1 (config file default)
Mar 27 08:41:40 | 360050763808100b2580000000000010b: update dev_loss_tmo to 2147483647
Mar 27 08:41:40 | 360050763808100b2580000000000010b: assembled map [1 queue_if_no_path 0 2 1 service-time 0 4 1 66:240 1 66:192 1 66:208 1 66:224 1 service-time 0 4 1 65:0 1 65:16 1 65:48 1 65:32 1]
Mar 27 08:41:40 | 360050763808100b2580000000000010b: set ACT_CREATE (map does not exist)
Mar 27 08:41:40 | 360050763808100b2580000000000010b: domap (0) failure for create/reload map
Mar 27 08:41:40 | 360050763808100b2580000000000010b: ignoring map

If I use the pvs command:
Found duplicate PV YdSuz28BDFA80yyYo6rjSldR2XTzBAEh: using /dev/sdr1 not /dev/sdq1
Found duplicate PV dFqLjo9wiySQRd3BECsq3rXiFQNsNUZz: using /dev/sdax1 not /dev/sdaw1
Found duplicate PV YdSuz28BDFA80yyYo6rjSldR2XTzBAEh: using /dev/sds1 not /dev/sdr1
Found duplicate PV dFqLjo9wiySQRd3BECsq3rXiFQNsNUZz: using /dev/sday1 not /dev/sdax1
Found duplicate PV YdSuz28BDFA80yyYo6rjSldR2XTzBAEh: using /dev/sdt1 not /dev/sds1
Found duplicate PV dFqLjo9wiySQRd3BECsq3rXiFQNsNUZz: using /dev/sdaz1 not /dev/sday1
Found duplicate PV dFqLjo9wiySQRd3BECsq3rXiFQNsNUZz: using /dev/sdu1 not /dev/sdaz1
Found duplicate PV dFqLjo9wiySQRd3BECsq3rXiFQNsNUZz: using /dev/sdv1 not /dev/sdu1
Found duplicate PV ffOVl9506HTjgTq5ECzAUvA8LHUetxJx: using /dev/sdbb1 not /dev/sdba1
Found duplicate PV dFqLjo9wiySQRd3BECsq3rXiFQNsNUZz: using /dev/sdw1 not /dev/sdv1
Found duplicate PV ffOVl9506HTjgTq5ECzAUvA8LHUetxJx: using /dev/sdbc1 not /dev/sdbb1
Found duplicate PV dFqLjo9wiySQRd3BECsq3rXiFQNsNUZz: using /dev/sdx1 not /dev/sdw1
Found duplicate PV ffOVl9506HTjgTq5ECzAUvA8LHUetxJx: using /dev/sdbd1 not /dev/sdbc1
Found duplicate PV ffOVl9506HTjgTq5ECzAUvA8LHUetxJx: using /dev/sdy1 not /dev/sdbd1
Found duplicate PV ffOVl9506HTjgTq5ECzAUvA8LHUetxJx: using /dev/sdz1 not /dev/sdy1
Found duplicate PV ffOVl9506HTjgTq5ECzAUvA8LHUetxJx: using /dev/sdaa1 not /dev/sdz1
Found duplicate PV ffOVl9506HTjgTq5ECzAUvA8LHUetxJx: using /dev/sdab1 not /dev/sdaa1
Found duplicate PV YdSuz28BDFA80yyYo6rjSldR2XTzBAEh: using /dev/sdas1 not /dev/sdt1
Found duplicate PV YdSuz28BDFA80yyYo6rjSldR2XTzBAEh: using /dev/sdat1 not /dev/sdas1
Found duplicate PV YdSuz28BDFA80yyYo6rjSldR2XTzBAEh: using /dev/sdau1 not /dev/sdat1
Found duplicate PV YdSuz28BDFA80yyYo6rjSldR2XTzBAEh: using /dev/sdav1 not /dev/sdau1
PV          VG             Fmt  Attr PSize  PFree
/dev/sdab1  vg_var         lvm2 a--  10,00g     0
/dev/sdav1  vg_usr_sap     lvm2 a--  25,00g     0
/dev/sdx1   vg_usr_sap_epp lvm2 a--  50,00g     0

SUSE support sent me a new filter for LVM, but on this server it is not working as expected.

filter = [ "a|/dev/disk/by-id/dm-name-.*|", "r/.*/" ]

The problem with this filter is that I cannot see my new volumes listed as /dev/disk/by-id/dm-name-*; the links are not created at all.
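
For reference, this is how I check for the links and for the maps device-mapper knows about; the new volumes simply do not appear:

# udev normally creates a dm-name-<name> link for every active dm map
ls -l /dev/disk/by-id/dm-name-*
# list the maps device-mapper itself currently holds
dmsetup ls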

Any idea what's wrong?

Thanks.


I'm not understanding WHY you're doing what you're doing beginning at Step 4. Why create a partition? Just issue a pvcreate against the whole multipath device and be done with it. And there shouldn't be any need to reboot the host.
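
Something like this should be all that's needed (a sketch; the WWID and VG name are placeholders):

# Initialize the whole multipath map as a PV: no fdisk, no partition table
pvcreate /dev/mapper/<WWID>
vgcreate vg_data /dev/mapper/<WWID>
# Growing later is just another pvcreate plus vgextend, still no reboot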
Have you considered using LUN aliases in your multipathing configuration? If you have device names like sdav then you have a LOT of devices and/or paths. It might help to assign a single name to a LUN, and then it would be easier to address uniquely. I’ve always used phonetics (like alpha, bravo, charlie, delta…) - your multipathing config might have entries like:

multipaths {
    multipath {
        wwid 20007523245424
        alias alpha
    }
    multipath {
        wwid 20007523245499
        alias bravo
    }
}
It would probably be less confusing.
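
Once multipathd re-reads the configuration, the map shows up under the alias (a sketch; the exact reload command varies a bit between versions):

multipathd -k'reconfigure'    # or: service multipathd reload
multipath -ll alpha           # the map is now reported by its alias
ls -l /dev/mapper/alpha       # and has a stable, human-readable node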
As for the rest of your problem, I suspect a LUN presentation/SAN zoning issue - it looks to me like one or more LUNs are being presented multiple times per path. But that’s a guess.
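
One quick way to test that guess (a sketch; the scsi_id path below is the SLES location and may differ elsewhere):

# Count how many sd devices report each WWID; more entries than the
# expected number of paths per LUN means the LUN is presented extra times
for d in /sys/block/sd*; do
    /lib/udev/scsi_id -g -u -d "/dev/${d##*/}"
done | sort | uniq -c | sort -rn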