Automatic change to /dev/mapper after upgrade to SLES 11 SP2

Hi,
After upgrading one server that has multipath from SLES 11 SP1 to SP2, the file systems changed to use /dev/mapper/****. Before that, they used /dev/sdaN.
I also upgraded other servers that do not use multipath and have no SAN attached, and their file systems remain on /dev/sdaN. Only the server with multipath suddenly changed to /dev/mapper after the upgrade. Is this a new feature of Service Pack 2? Can someone help me verify whether this is normal, and explain why it happens?

Before upgrade (SLES 11 SP1):
Kernel: 2.6.32.12-0.7-default

Filesystem Size Used Avail Use% Mounted on
/dev/sda2 69G 39G 27G 59% /
devtmpfs 16G 252K 16G 1% /dev
tmpfs 16G 104K 16G 1% /dev/shm
/dev/sda1 92M 31M 56M 36% /boot
/dev/sda4 20G 11G 8.2G 57% /opt
/dev/sda5 9.9G 216M 9.2G 3% /tmp
/dev/sda3 20G 7.0G 12G 38% /usr
/dev/mapper/360050768028082bde80000000000000b_part1
504G 23G 456G 5% /home/db2inst1

After upgrade to SLES 11 SP2:

Kernel: 3.0.13-0.27-default

Filesystem Size Used Avail Use% Mounted on
/dev/mapper/3600605b002f69e3014d025e31645bcf0_part6 80G 7.5G 69G 10% /
devtmpfs 16G 340K 16G 1% /dev
tmpfs 16G 100K 16G 1% /dev/shm
/dev/mapper/3600605b002f69e3014d025e31645bcf0_part1 92M 46M 41M 53% /boot
/dev/mapper/3600605b002f69e3014d025e31645bcf0_part4 20G 9.4G 9.4G 51% /opt
/dev/mapper/3600605b002f69e3014d025e31645bcf0_part5 9.9G 1.4G 8.0G 15% /tmp
/dev/mapper/3600605b002f69e3014d025e31645bcf0_part3 20G 7.3G 12G 39% /usr

Hi duderage,

[QUOTE=duderage]Can someone help me verify whether this is normal, and explain why it happens?[/QUOTE]

Device names (/dev/sd*) are not guaranteed to be consistent across reboots, so some years ago it was decided to switch to “symbolic” device names managed by udev - /dev/disk/by-*/. So I recommend changing to those across all your systems, during regular system maintenance (I’ve only seen disk device names change once in a few years… so no real need to hurry. But once you’ve been struck by that symptom, you remember to use persistent names on every system :wink: )
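
In practice that just means referencing one of the symlink families udev maintains, e.g. (just to illustrate - check what actually exists on your boxes):

ls -l /dev/disk/by-id/    # hardware-based names (SCSI id, WWN)
ls -l /dev/disk/by-uuid/  # file system UUIDs
ls -l /dev/disk/by-path/  # bus/port based paths

All of them point at the same kernel devices, but the names stay stable across reboots.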

OTOH, what you’re seeing is something different. I assume (I haven’t had to deal with MP yet) that the setup script switched to using the multipath alias name, which is considered “persistent”, too. I’d consider such a change “normal”.

See also the SLES 11 storage admin guide (https://www.suse.com/documentation/sles11/pdfdoc/stor_admin/stor_admin.pdf), sections 7.2.3 and 7.9.
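
Should you ever want a friendlier name than the WWID-based one, multipath also supports aliases in /etc/multipath.conf - roughly like this (an untested sketch on my part, using the WWID from your df output; “db2home” is just an example alias):

multipaths {
    multipath {
        wwid  360050768028082bde80000000000000b
        alias db2home
    }
}

After reloading the maps (“multipath -r”), that device would show up as /dev/mapper/db2home (and db2home_part1 for the partition).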

Regards,
Jens

Hi Jens,
Thanks, and I appreciate the prompt reply. I have a few questions to make it crystal clear:

  1. “So I recommend to change that across all your systems”
    What do you mean by this? Do I need to upgrade all the related systems to Service Pack 2?

  2. Do I need to modify the lvm.conf file so that it does not scan and use the physical paths?

  3. Below are the contents of /dev/disk/by-id:

In my understanding, the scsi-* entries should point to sda1, sda2, sda3 instead of dm-1, dm-2, dm-3 (as below). Is there any way to change them to point to sda1, sda2 and so on, so that df -h for /, /usr, /opt etc. does not show the /dev/mapper names?

lrwxrwxrwx 1 root root 10 Nov 21 14:55 dm-name-360050768028082bde80000000000000b → ../../dm-1
lrwxrwxrwx 1 root root 10 Nov 21 14:55 dm-name-360050768028082bde80000000000000b_part1 → ../../dm-2
lrwxrwxrwx 1 root root 10 Nov 21 14:55 dm-name-3600605b002f69e3014d025e31645bcf0 → ../../dm-0
lrwxrwxrwx 1 root root 10 Nov 21 14:55 dm-name-3600605b002f69e3014d025e31645bcf0_part1 → ../../dm-3
lrwxrwxrwx 1 root root 10 Nov 21 14:55 dm-name-3600605b002f69e3014d025e31645bcf0_part2 → ../../dm-4
lrwxrwxrwx 1 root root 10 Nov 21 14:55 dm-name-3600605b002f69e3014d025e31645bcf0_part3 → ../../dm-5
lrwxrwxrwx 1 root root 10 Nov 21 14:55 dm-name-3600605b002f69e3014d025e31645bcf0_part4 → ../../dm-6
lrwxrwxrwx 1 root root 10 Nov 21 14:55 dm-name-3600605b002f69e3014d025e31645bcf0_part5 → ../../dm-7
lrwxrwxrwx 1 root root 10 Nov 21 14:55 dm-name-3600605b002f69e3014d025e31645bcf0_part6 → ../../dm-8
lrwxrwxrwx 1 root root 10 Nov 21 14:55 dm-uuid-mpath-360050768028082bde80000000000000b → ../../dm-1
lrwxrwxrwx 1 root root 10 Nov 21 14:55 dm-uuid-mpath-3600605b002f69e3014d025e31645bcf0 → ../../dm-0
lrwxrwxrwx 1 root root 10 Nov 21 14:55 dm-uuid-part1-mpath-360050768028082bde80000000000000b → ../../dm-2
lrwxrwxrwx 1 root root 10 Nov 21 14:55 dm-uuid-part1-mpath-3600605b002f69e3014d025e31645bcf0 → ../../dm-3
lrwxrwxrwx 1 root root 10 Nov 21 14:55 dm-uuid-part2-mpath-3600605b002f69e3014d025e31645bcf0 → ../../dm-4
lrwxrwxrwx 1 root root 10 Nov 21 14:55 dm-uuid-part3-mpath-3600605b002f69e3014d025e31645bcf0 → ../../dm-5
lrwxrwxrwx 1 root root 10 Nov 21 14:55 dm-uuid-part4-mpath-3600605b002f69e3014d025e31645bcf0 → ../../dm-6
lrwxrwxrwx 1 root root 10 Nov 21 14:55 dm-uuid-part5-mpath-3600605b002f69e3014d025e31645bcf0 → ../../dm-7
lrwxrwxrwx 1 root root 10 Nov 21 14:55 dm-uuid-part6-mpath-3600605b002f69e3014d025e31645bcf0 → ../../dm-8
lrwxrwxrwx 1 root root 10 Nov 21 14:55 scsi-360050768028082bde80000000000000b → ../../dm-1
lrwxrwxrwx 1 root root 10 Nov 21 14:55 scsi-360050768028082bde80000000000000b-part1 → ../../dm-2
lrwxrwxrwx 1 root root 10 Nov 21 14:55 scsi-3600605b002f69e3014d025e31645bcf0 → ../../dm-0
lrwxrwxrwx 1 root root 10 Nov 21 14:55 scsi-3600605b002f69e3014d025e31645bcf0-part1 → ../../dm-3
lrwxrwxrwx 1 root root 10 Nov 21 14:55 scsi-3600605b002f69e3014d025e31645bcf0-part2 → ../../dm-4
lrwxrwxrwx 1 root root 10 Nov 21 14:55 scsi-3600605b002f69e3014d025e31645bcf0-part3 → ../../dm-5
lrwxrwxrwx 1 root root 10 Nov 21 14:55 scsi-3600605b002f69e3014d025e31645bcf0-part4 → ../../dm-6
lrwxrwxrwx 1 root root 10 Nov 21 14:55 scsi-3600605b002f69e3014d025e31645bcf0-part5 → ../../dm-7
lrwxrwxrwx 1 root root 10 Nov 21 14:55 scsi-3600605b002f69e3014d025e31645bcf0-part6 → ../../dm-8
lrwxrwxrwx 1 root root 10 Nov 21 14:55 wwn-0x60050768028082bde80000000000000b → ../../dm-1
lrwxrwxrwx 1 root root 10 Nov 21 14:55 wwn-0x60050768028082bde80000000000000b-part1 → ../../dm-2
lrwxrwxrwx 1 root root 10 Nov 21 14:55 wwn-0x600605b002f69e3014d025e31645bcf0 → ../../dm-0
lrwxrwxrwx 1 root root 10 Nov 21 14:55 wwn-0x600605b002f69e3014d025e31645bcf0-part1 → ../../dm-3
lrwxrwxrwx 1 root root 10 Nov 21 14:55 wwn-0x600605b002f69e3014d025e31645bcf0-part2 → ../../dm-4
lrwxrwxrwx 1 root root 10 Nov 21 14:55 wwn-0x600605b002f69e3014d025e31645bcf0-part3 → ../../dm-5
lrwxrwxrwx 1 root root 10 Nov 21 14:55 wwn-0x600605b002f69e3014d025e31645bcf0-part4 → ../../dm-6
lrwxrwxrwx 1 root root 10 Nov 21 14:55 wwn-0x600605b002f69e3014d025e31645bcf0-part5 → ../../dm-7
lrwxrwxrwx 1 root root 10 Nov 21 14:55 wwn-0x600605b002f69e3014d025e31645bcf0-part6 → ../../dm-8

Many thanks in advance. I hope you don’t mind explaining my questions in detail. :)

rgds,
khairul

Hi khairul,

[QUOTE=duderage;17735]Hi Jens,
Thanks, and I appreciate the prompt reply. I have a few questions to make it crystal clear:

  1. “So I recommend to change that across all your systems”
    What do you mean by this? Do I need to upgrade all the related systems to Service Pack 2?[/QUOTE]

no, it’s about how your devices are referenced in /etc/fstab. Instead of using /dev/sd*, use the corresponding (persistent) name from e.g. /dev/disk/by-uuid.
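
For example (just a sketch - the UUID below is made up, take the real one from “blkid /dev/sda2” or from the /dev/disk/by-uuid listing):

# old entry in /etc/fstab:
/dev/sda2 / ext3 acl,user_xattr 1 1
# replaced by:
/dev/disk/by-uuid/1b2c3d4e-0000-0000-0000-000000000000 / ext3 acl,user_xattr 1 1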

If you prefer changing this via YaST: there’s a sub-menu when you edit your file systems (“System” → “Partitioner” → select your file system → “Edit” → “Fstab options” → “Mount in /etc/fstab by”) where you can change from “Device name” to e.g. UUID.

BTW, the current level is Service Pack 3, so if you’re upgrading (and all dependencies permit), you should take that step as well.

Personally, I exclude as much as I can in lvm.conf whenever confusion may arise. But it was only on rare occasions that I actually experienced difficulties with the default configuration.

If you feel the need to exclude devices in lvm.conf, I’d recommend excluding every possible occurrence :wink:
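
On a multipath box that would be something along these lines in /etc/lvm/lvm.conf (a sketch only - adjust it to the devices you actually have; it accepts the multipath maps and rejects everything else, i.e. the underlying sd* paths):

devices {
    filter = [ "a|/dev/mapper/.*|", "r|.*|" ]
}

You can then check with “pvs” (or “lvm pvscan”) that only the mapper devices get listed.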

[QUOTE=duderage;17735]3. Below are the contents of /dev/disk/by-id:

In my understanding, the scsi-* entries should point to sda1, sda2, sda3 instead of dm-1, dm-2, dm-3 (as below). Is there any way to change them to point to sda1, sda2 and so on, so that df -h for /, /usr, /opt etc. does not show the /dev/mapper names?[/QUOTE]

These entries are auto-generated by udev. While you might find a way to modify the rule sets to achieve that other mapping, I don’t see the point of it. Those actually are device mapper devices, so why not point to them? The sd* device names are not persistent, so an “ah, it’s sdc1!” impression from the df output may be quite misleading.
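
If you want to see how those names relate to the sd* paths, something like this is usually clearer than bending the udev rules:

multipath -ll     # lists each map together with the sd* paths grouped below it
ls -l /dev/mapper # shows which dm-* device each map resolves to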

Not at all, and please keep in mind that my statements above are my opinion, not a matter of fact. There are many ways to skin a cat, but of course sometimes there are reasons why things are set up the way they are, and if I believe I know that “why”, I’ll try to explain :slight_smile:

Regards,
Jens

Hi Jens,
This is my current fstab after upgrading from SLES 11 SP1 to SP2:
/dev/disk/by-id/scsi-3600605b002f69e3014d025e31645bcf0-part2 swap swap defaults 0 0
/dev/disk/by-id/scsi-3600605b002f69e3014d025e31645bcf0-part6 / ext3 acl,user_xattr 1 1
/dev/disk/by-id/scsi-3600605b002f69e3014d025e31645bcf0-part1 /boot ext3 acl,user_xattr 1 2
/dev/disk/by-id/scsi-3600605b002f69e3014d025e31645bcf0-part4 /opt ext3 acl,user_xattr 1 2
/dev/disk/by-id/scsi-3600605b002f69e3014d025e31645bcf0-part5 /tmp ext3 acl,user_xattr 1 2
/dev/disk/by-id/scsi-3600605b002f69e3014d025e31645bcf0-part3 /usr ext3 acl,user_xattr 1 2
proc /proc proc defaults 0 0
sysfs /sys sysfs noauto 0 0
debugfs /sys/kernel/debug debugfs noauto 0 0
usbfs /proc/bus/usb usbfs noauto 0 0
devpts /dev/pts devpts mode=0620,gid=5 0 0
#/dev/mapper/360080e50001837bc000004624d61b2ee_part1 /home/db2inst1 ext3 noauto 0 0
/dev/mapper/360050768028082bde80000000000000b_part1 /home/db2inst1 ext3 noauto 0 0

df -h (SLES 11 SP2, after upgrade):

Filesystem Size Used Avail Use% Mounted on
/dev/mapper/3600605b002f69e3014d025e31645bcf0_part6 80G 7.5G 69G 10% /
devtmpfs 16G 340K 16G 1% /dev
tmpfs 16G 100K 16G 1% /dev/shm
/dev/mapper/3600605b002f69e3014d025e31645bcf0_part1 92M 46M 41M 53% /boot
/dev/mapper/3600605b002f69e3014d025e31645bcf0_part4 20G 9.4G 9.4G 51% /opt
/dev/mapper/3600605b002f69e3014d025e31645bcf0_part5 9.9G 1.4G 8.0G 15% /tmp
/dev/mapper/3600605b002f69e3014d025e31645bcf0_part3 20G 7.3G 12G 39% /usr

df -h (SLES 11 SP1, before upgrade):

Filesystem Size Used Avail Use% Mounted on
/dev/sda2 69G 39G 27G 59% /
devtmpfs 16G 252K 16G 1% /dev
tmpfs 16G 104K 16G 1% /dev/shm
/dev/sda1 92M 31M 56M 36% /boot
/dev/sda4 20G 11G 8.2G 57% /opt
/dev/sda5 9.9G 216M 9.2G 3% /tmp
/dev/sda3 20G 7.0G 12G 38% /usr

In your opinion and experience, this kind of configuration has no issues, right? My only concern is to verify this configuration (the automatic change to /dev/mapper after the SP2 upgrade) when multipath is present. I’m afraid there could be issues with this kind of configuration.

The rest of the servers I have upgraded to SP2, which do not have multipath, all remain on /dev/sda, like below:

df -h (after upgrade to SP2, for a server without multipath):
Filesystem Size Used Avail Use% Mounted on
/dev/sda7 75G 624M 71G 1% /
devtmpfs 3.9G 180K 3.9G 1% /dev
tmpfs 3.9G 100K 3.9G 1% /dev/shm
/dev/sda1 94M 11M 84M 12% /boot/efi
/dev/sda3 20G 7.5G 12G 41% /opt
/dev/sda6 5.0G 181M 4.5G 4% /root
/dev/sda5 9.9G 1.1G 8.3G 12% /tmp
/dev/sda4 20G 2.9G 16G 16% /usr

Hoping for your kind clarification :slight_smile: Thanks.

Rgds,
Khairul

Hi Khairul,

those “new” fstab entries look ok to me and I wouldn’t expect any problems (setting aside my comments on SP3 and persistent device names, especially on those non-multipath servers :wink: ).

But wait, there is a noticeable difference:

SP1: /dev/sda2 69G 39G 27G 59% /

SP2: /dev/disk/by-id/scsi-3600605b002f69e3014d025e31645bcf0-part2 swap
SP2: /dev/disk/by-id/scsi-3600605b002f69e3014d025e31645bcf0-part6 /

So with SP1, partition 2 contained your root FS, but with SP2 that has moved to partition 6 (and partition 2 now contains swap). I believe that this was a manual change (using a new root fs partition for the upgrade and converting the old one to swap afterwards), but you might want to check that.
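
A quick way to verify that on both nodes (nothing fancy, just the standard tools):

cat /proc/swaps                      # which partition is actually in use as swap
fdisk -l /dev/sda                    # the partition table as seen on the raw disk
grep " / " /etc/fstab /proc/mounts   # what the root file system is mounted from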

Regards,
Jens

Hi Jens,
The information given for SP1 is from another node that is currently active and hasn’t been upgraded to SP2 yet; it has the same spec and also uses multipath. So I need to be really sure about the changes before I upgrade that other node to SP2, since it is the active one. Can I conclude that, from your point of view, there’s nothing to reconfigure even though df -h shows /dev/mapper/**** after the upgrade, and that per your suggestion we don’t need to change back to /dev/sda? Correct me if I’m wrong…

Thanks and rgds,
khairul

Hi Khairul,

[QUOTE=duderage;17744]Hi Jens,
The information given for SP1 is from another node that is currently active and hasn’t been upgraded to SP2 yet; it has the same spec and also uses multipath. So I need to be really sure about the changes before I upgrade that other node to SP2, since it is the active one. Can I conclude that, from your point of view, there’s nothing to reconfigure even though df -h shows /dev/mapper/**** after the upgrade, and that per your suggestion we don’t need to change back to /dev/sda? Correct me if I’m wrong…

Thanks and rgds,
khairul[/QUOTE]

yes, with that background information, I think everything is ok :slight_smile:

Regards,
Jens

Thanks Jens, that was helpful. :slight_smile: