Disks change their names after an update on SLES 12

Hi,
I found a problem with my disks on my Oracle servers.
All of them are virtual machines on a VMware server and the operating system is SLES 12.
As I used to do on SLES 11, I created separate disks for the Linux filesystem, the Oracle software files and the Oracle database files. I configured the disks for the Oracle database files to be used by an ASM instance, so I didn’t format them and I assigned their permissions to the Grid Infrastructure user. The ASM instance was created correctly and it worked without problems afterwards.
The problem appeared after installing some patches. The system rebooted and then the diskgroups of the ASM instance didn’t start anymore. Looking at the logs, I saw that the diskgroups had problems mounting the disks configured at installation time. At first I thought that some disks might be corrupt, but then I found out the real problem: the disk devices had changed their names.
Graphically, the change was the following:
[screenshot: disk device names before and after the reboot]

Has anybody found the same problem? How can I revert this to the original configuration?
Thanks in advance

Hi rvillafafila,

[QUOTE=rvillafafila;29671]Hi,
I found a problem with my disks on my Oracle servers […] the disks devices changed their names.
[…]
Has anybody found the same problem? How can I revert this to the original configuration?[/QUOTE]

I’m sorry to tell you, but this is not considered a problem - it is “working as designed”. Linux device names are not guaranteed to be persistent, and haven’t been for some years now (I don’t recall the actual kernel version that introduced this).

But luckily, you have options: when defining your mounts, you can choose to mount e.g. by file system UUID or by label.

So as a general rule of thumb, you could use the following guidelines (a small fstab sketch follows the list):

  • if you’re not mounting foreign devices and would like to see descriptive mount statements, label your file systems and select “by volume label” in YaST’s “fstab options” for the device.
  • if you want to make sure that a specific file system is used and see a risk that you might run into foreign devices with the same FS labels, use “by UUID”.
  • avoid “by device name” unless you have a very specific reason to do so; rather use “by device path” if you need to identify a specific device.
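
To illustrate, a minimal /etc/fstab sketch - the label, the UUID and the mount points are made-up examples, not values from your system (check yours with “blkid”):

# mount by label - assumes you labelled the file system “ORAHOME” beforehand
LABEL=ORAHOME                               /u01      ext3  defaults  1 2
# mount by UUID - the UUID below is a placeholder
UUID=0a1b2c3d-4e5f-6a7b-8c9d-0e1f2a3b4c5d   /oradata  ext3  defaults  1 2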

This not only affects disk devices, but holds true for many other device types as well (e.g. network cards). In case you’re curious, it’s “udev” that creates all of these “by-*” links under /dev/disk and that tries e.g. to rename network adapters to get persistent names (see the result of “find /lib/udev/rules.d/ -name '*persist*'”).
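
If you’d like to see which persistent names udev has created for one of your disks, something like this should do (the device name /dev/sdb is just an example):

# list the persistent symlinks udev maintains for block devices
ls -l /dev/disk/by-id /dev/disk/by-uuid /dev/disk/by-path
# or query the symlinks of one specific device
udevadm info --query=symlink --name=/dev/sdb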

Regards,
Jens

Thanks for your answer, jmozdzen.
I always mount filesystems by UUID to avoid problems, and until this version of SLES that worked fine.
For now, the problem was solved by changing the init script that assigns permissions to the disks used by ASM. I hope that the disk device names don’t change in the future.

Hi rvillafafila,

[QUOTE=rvillafafila;29812]Thanks for your answer, jmozdzen.
I always mount filesystems by UUID to avoid problems, and until this version of SLES that worked fine.
For now, the problem was solved by changing the init script that assigns permissions to the disks used by ASM. I hope that the disk device names don’t change in the future.[/QUOTE]

I don’t know how you changed your script, but if you modified it to use “/dev/disk/by-id” symlinks, you wouldn’t have to worry about device name changes. The latter can indeed change, especially if you add new disks… OTOH, if you have to replace a disk, the “by-id” entry would change, so you’d have to adjust your script’s configuration as well.
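
Just to illustrate the relationship: a “by-id” link can always be resolved to whatever kernel name it currently points at - the ID below is a made-up placeholder, of course:

# show the current kernel device behind a persistent by-id link
readlink -f /dev/disk/by-id/scsi-36005076801808524800000000000000ab
# or list all by-id links together with their current targets
ls -l /dev/disk/by-id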

From my experience, going by the kernel device name (/dev/sd*) is the least safe way.

Regards,
Jens

[QUOTE=jmozdzen;29814]Hi rvillafafila,

I don’t know how you changed your script, but if you modified it to use “/dev/disk/by-id” symlinks, you wouldn’t have to worry about device name changes. The latter can indeed change, especially if you add new disks… OTOH, if you have to replace a disk, the “by-id” entry would change, so you’d have to adjust your script’s configuration as well.

From my experience, going by the kernel device name (/dev/sd*) is the least safe way.

Regards,
Jens[/QUOTE]

Thanks for your advice, jmozdzen.
My script is very simple: in boot.local I change the owner and the permissions of the disks that I add to ASM.
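Roughly, the idea is the following - sketched here with a placeholder ID, assuming I switch to the by-id links as you suggest; grid:asmadmin stands for the owner and group of my Grid Infrastructure setup:

# /etc/init.d/boot.local - give the grid user access to the ASM disk;
# chown/chmod follow the symlink, so the underlying device node is changed
chown grid:asmadmin /dev/disk/by-id/scsi-36005076801808524800000000000000ab
chmod 660           /dev/disk/by-id/scsi-36005076801808524800000000000000ab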

Hi,

I’ve had the same issue: /dev/sdg got renamed to /dev/sdf and ASM (I’m using an Oracle 11gR2 RAC cluster) refused to start. ASM is using the definitions in /etc/raw:

raw1:sdd
raw2:sde
raw3:sdg

To solve the issue, I changed sdg to sdf and ASM was able to restart.

I was wondering whether I could change the /etc/raw file to:

raw1:/dev/disk/by-id/scsi-3600507680180852480000000000000fc
raw2:/dev/disk/by-id/scsi-3600507680180852480000000000000fd
raw3:/dev/disk/by-id/scsi-36005076801808524800000000000012e

The ID is the same on both nodes and should not change after a reboot (I hope!).

Can a path (/dev/disk/by-id/…) be specified in /etc/raw? And if so, would this be a permanent solution?
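
If I read raw(8) correctly, I could probably test this manually before making it permanent - e.g. for the first disk:

# bind raw1 by hand to the persistent by-id path…
raw /dev/raw/raw1 /dev/disk/by-id/scsi-3600507680180852480000000000000fc
# …and verify that the binding resolved to the right device
raw -qa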

Regards,

Ivan