One of our XIV systems is going off lease, and we have to move an LPAR's SAN disks (the LPAR runs SLES 11.4) to a different XIV. We ran a cold backup of the system using Storix, added the new XIV LUNs to the LPAR, made identical partitions, and added those partitions to their corresponding volume groups. We then used pvmove to transfer the data, removed the original disks from the volume groups, and then removed the original disks from the system. The new multipath disks looked correct. We ran the mkinitrd command and then rebooted.
The system did reboot, but we lost the multipath on our root/system disk. It appears to have grabbed one of the paths (/dev/sda?) and used it to boot from. The other multipath data disks came back fine, as expected.
To fix the problem, we had to completely restore the lpar to the new XIV disks using the Storix software. This is fine, but required downtime longer than just the reboot.
In AIX, we migrate system and data disks all the time while the systems are up and running, and we always do a reboot to make sure the system is able to reboot. We are trying to do the same type of procedure with SUSE Linux. We are missing something concerning the boot/system disks in a multipath environment. Does anyone have a procedure that they can share on how to migrate multipath boot disks?
dcarlilecitgo,
It appears that in the past few days you have not received a response to your posting. That concerns us, and has triggered this automated reply.
These forums are peer-to-peer, best-effort, and volunteer-run. If your issue is urgent or not getting a response, you might try one of the following options:
- Visit http://www.suse.com/support and search the knowledgebase and/or check all the other support options available.
- Open a service request: https://www.suse.com/support
- You could also try posting your message again. Make sure it is posted in the correct newsgroup. (http://forums.suse.com)
Be sure to read the forum FAQ about what to expect in the way of responses:
http://forums.suse.com/faq.php
If this is a reply to a duplicate posting or otherwise posted in error, please
ignore and accept our apologies and rest assured we will issue a stern reprimand
to our posting bot…
Good luck!
Your SUSE Forums Team
http://forums.suse.com
With a lot of help from Rich Turner of Storix Software Support, I finally got the answer to my problem. It turned out to be the “default =” line in the /etc/lilo.conf file: that line was pointing to the wrong stanza label (one created when we cloned the system with Storix). Once default pointed to the correct stanza, the one whose initrd our mkinitrd command updated, the system booted up correctly.
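For illustration, the pairing that matters in /etc/lilo.conf looks like this (a sketch using the label from my system; yours will differ):

    default = SLES11_SP4                  # must exactly match a stanza "label ="

    image  = /boot/vmlinux-3.0.101-63-ppc64
        label  = SLES11_SP4               # the stanza whose initrd mkinitrd updates
        initrd = /boot/initrd-3.0.101-63-ppc64
        root   = /dev/system/root

If default points at a label whose stanza references a stale initrd (such as one left behind by a cloning tool), the machine will still boot, just not with the initrd you rebuilt.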
For you newbies to SLES, like me, below are the steps & checks needed to “Migrate a multipath boot disk to a new disk on PPC”. It is considerably more complex than the steps involved in doing this in AIX, so enjoy.
- System Information Prior to Any Changes:
a. Here are the commands run prior to any changes:
root@testem:/root> pvdisplay
Found duplicate PV 7ET7cdjovWpEBI6QClEZAURsoHte62Ke: using /dev/sdq2 not /dev/sda2
--- Physical volume ---
PV Name /dev/mpath/mpathb_part1
VG Name hanavg1
PV Size 945.56 GiB / not usable 4.55 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 242062
Free PE 28046
Allocated PE 214016
PV UUID yAnD0H-kq3q-Mtcc-Mqw0-d76L-nS72-iGbUVd
--- Physical volume ---
PV Name /dev/mpath/mpatha_part2
VG Name system
PV Size 79.94 GiB / not usable 2.63 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 20463
Free PE 9264
Allocated PE 11199
PV UUID 7ET7cd-jovW-pEBI-6QCl-EZAU-RsoH-te62Ke
root@testem:/root> vgdisplay
Found duplicate PV 7ET7cdjovWpEBI6QClEZAURsoHte62Ke: using /dev/sdq2 not /dev/sda2
--- Volume group ---
VG Name hanavg1
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 7
VG Access read/write
VG Status resizable
MAX LV 255
Cur LV 6
Open LV 6
Max PV 1
Cur PV 1
Act PV 1
VG Size 945.55 GiB
PE Size 4.00 MiB
Total PE 242062
Alloc PE / Size 214016 / 836.00 GiB
Free PE / Size 28046 / 109.55 GiB
VG UUID I8MBG9-PzSN-EO2H-sFHv-MDSf-6iLt-F9Uw4R
--- Volume group ---
VG Name system
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 7
VG Access read/write
VG Status resizable
MAX LV 255
Cur LV 6
Open LV 6
Max PV 1
Cur PV 1
Act PV 1
VG Size 79.93 GiB
PE Size 4.00 MiB
Total PE 20463
Alloc PE / Size 11199 / 43.75 GiB
Free PE / Size 9264 / 36.19 GiB
VG UUID hmBdXl-9QhB-ED9H-0tB1-4Qa4-UIKb-SGFbs1
root@testem:/root> vgdisplay system -v
connect() failed on local socket: No such file or directory
Internal cluster locking initialisation failed.
WARNING: Falling back to local file-based locking.
Volume Groups with the clustered attribute will be inaccessible.
Using volume group(s) on command line
Finding volume group "system"
Found duplicate PV 7ET7cdjovWpEBI6QClEZAURsoHte62Ke: using /dev/sdq2 not /dev/sda2
--- Volume group ---
VG Name system
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 7
VG Access read/write
VG Status resizable
MAX LV 255
Cur LV 6
Open LV 6
Max PV 1
Cur PV 1
Act PV 1
VG Size 79.93 GiB
PE Size 4.00 MiB
Total PE 20463
Alloc PE / Size 11199 / 43.75 GiB
Free PE / Size 9264 / 36.19 GiB
VG UUID hmBdXl-9QhB-ED9H-0tB1-4Qa4-UIKb-SGFbs1
--- Logical volume ---
LV Name /dev/system/home
VG Name system
LV UUID P15PrI-xB6k-I9t7-57ip-hQsC-lAYy-FjQmed
LV Write Access read/write
LV Creation host, time (none), 2017-07-03 11:09:32 -0500
LV Status available
open 1
LV Size 25.01 GiB
Current LE 6403
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 1024
Persistent major 253
Persistent minor 249
Block device 253:249
--- Logical volume ---
LV Name /dev/system/locallv
VG Name system
LV UUID 08937q-0AJO-859e-aZo1-MhjW-DJE1-jzynBv
LV Write Access read/write
LV Creation host, time (none), 2017-07-03 11:09:33 -0500
LV Status available
open 1
LV Size 500.00 MiB
Current LE 125
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 1024
Persistent major 253
Persistent minor 248
Block device 253:248
--- Logical volume ---
LV Name /dev/system/maestrolv
VG Name system
LV UUID qeLJXD-o07e-xNjJ-IRHW-fzTy-0uhf-S18VcQ
LV Write Access read/write
LV Creation host, time (none), 2017-07-03 11:09:35 -0500
LV Status available
open 1
LV Size 3.00 GiB
Current LE 768
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 1024
Persistent major 253
Persistent minor 247
Block device 253:247
--- Logical volume ---
LV Name /dev/system/optlv
VG Name system
LV UUID O9NPtD-Plon-WBN3-3wfS-NZxV-puId-Dtx1OM
LV Write Access read/write
LV Creation host, time (none), 2017-07-03 11:09:36 -0500
LV Status available
open 1
LV Size 3.00 GiB
Current LE 768
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 1024
Persistent major 253
Persistent minor 246
Block device 253:246
--- Logical volume ---
LV Name /dev/system/root
VG Name system
LV UUID TqcX2j-gsAM-8wn1-aWxW-wHQ9-1UBr-w1ScZb
LV Write Access read/write
LV Creation host, time (none), 2017-07-03 11:09:37 -0500
LV Status available
open 1
LV Size 10.25 GiB
Current LE 2623
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 1024
Persistent major 253
Persistent minor 245
Block device 253:245
--- Logical volume ---
LV Name /dev/system/swap
VG Name system
LV UUID eD60xs-EwUK-8pOl-wu6B-xmnt-M7vT-9ukUnl
LV Write Access read/write
LV Creation host, time (none), 2017-07-03 11:09:38 -0500
LV Status available
open 2
LV Size 2.00 GiB
Current LE 512
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 1024
Persistent major 253
Persistent minor 244
Block device 253:244
--- Physical volumes ---
PV Name /dev/mpath/mpatha_part2
PV UUID 7ET7cd-jovW-pEBI-6QCl-EZAU-RsoH-te62Ke
PV Status allocatable
Total PE / Free PE 20463 / 9264
root@testem:/root>
root@testem:/root> multipath -ll
mpathb (2001738002fa301c3) dm-3 IBM,2810XIV
size=962G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
|- 0:0:0:2 sdb 8:16 active ready running
|- 0:0:1:2 sdd 8:48 active ready running
|- 0:0:2:2 sdf 8:80 active ready running
|- 1:0:0:2 sdh 8:112 active ready running
|- 1:0:1:2 sdj 8:144 active ready running
|- 1:0:2:2 sdl 8:176 active ready running
|- 2:0:0:2 sdn 8:208 active ready running
|- 2:0:1:2 sdp 8:240 active ready running
|- 2:0:2:2 sdr 65:16 active ready running
|- 3:0:0:2 sdt 65:48 active ready running
|- 3:0:1:2 sdv 65:80 active ready running
`- 3:0:2:2 sdx 65:112 active ready running
mpatha (2001738002fa3012d) dm-0 IBM,2810XIV
size=112G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
|- 0:0:0:1 sda 8:0 active ready running
|- 0:0:1:1 sdc 8:32 active ready running
|- 0:0:2:1 sde 8:64 active ready running
|- 1:0:0:1 sdg 8:96 active ready running
|- 1:0:1:1 sdi 8:128 active ready running
|- 1:0:2:1 sdk 8:160 active ready running
|- 2:0:0:1 sdm 8:192 active ready running
|- 2:0:1:1 sdo 8:224 active ready running
|- 2:0:2:1 sdq 65:0 active ready running
|- 3:0:0:1 sds 65:32 active ready running
|- 3:0:1:1 sdu 65:64 active ready running
`- 3:0:2:1 sdw 65:96 active ready running
root@testem:/root>
root@testem:/root> cd /etc
root@testem:/etc> cat multipath.conf
# Default multipath.conf file created for install boot
# Used mpathN names
defaults {
user_friendly_names yes
}
# All devices are blacklisted
blacklist {
devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
devnode "^(hd|xvd|vd)[a-z]*"
wwid ".*"
}
blacklist_exceptions {
wwid "2001738002fa3012d"
wwid "2001738002fa301c3"
}
root@testem:/etc>
root@testem:/etc> cat lilo.conf
# LILO configuration file for POWER-based systems
# Used to boot from disk after system installation
activate
timeout=50
boot=/dev/mpath/mpatha_part1
default=Linux_3.0.101-63-ppc64_Storix
image=/boot/vmlinux-3.0.101-63-ppc64
label=Linux_3.0.101-63-ppc64_Storix
root=/dev/system/root
initrd=/boot/initrd-storix-root.img
append="quiet sysrq=1 insmod=sym53c8xx insmod=ipr dns=146.146.14.1 crashkernel=512M-:256M console=hvc0"
# Stanzas retained from previous lilo.conf file.
image = /boot/vmlinux-3.0.101-63-ppc64
###Don't change this comment - YaST2 identifier: Original name: linux###
label = SLES11_SP4
append = "quiet sysrq=1 insmod=sym53c8xx insmod=ipr dns=146.146.14.1 crashkernel=512M-:256M "
initrd = /boot/initrd-3.0.101-63-ppc64
root = /dev/system/root
root@testem:/etc>
root@testem:/etc> cd /boot
root@testem:/boot> ls -l
total 43476
-rw-r--r-- 1 root root 2997341 Jun 24 2015 System.map-3.0.101-63-ppc64
-rw-r--r-- 1 root root 1236 May 19 2015 boot.readme
-rw-r--r-- 1 root root 112426 Jun 24 2015 config-3.0.101-63-ppc64
lrwxrwxrwx 1 root root 23 Aug 3 2015 initrd -> initrd-3.0.101-63-ppc64
-rw-r--r-- 1 root root 6781348 Aug 3 2015 initrd-3.0.101-63-ppc64
-rw------- 1 root root 8811612 Aug 3 2015 initrd-3.0.101-63-ppc64-kdump
-rw-r--r-- 1 root root 4714445 Jul 3 11:12 initrd-storix-root.img
-rw-r--r-- 1 root root 355 Jul 3 11:12 message.storix
-rw-r--r-- 1 root root 198050 Jun 24 2015 symvers-3.0.101-63-ppc64.gz
lrwxrwxrwx 1 root root 24 Aug 3 2015 vmlinux -> vmlinux-3.0.101-63-ppc64
-rw-r--r-- 1 root root 20806910 Jun 24 2015 vmlinux-3.0.101-63-ppc64
root@testem:/boot>
- Now add the new system disk to the testem server.
Note: I had to zone the fibre switches, add the host to the new XIV, and map the new LUN to that host on the new XIV.
a. Run "rescan-scsi-bus.sh" on the testem server:
root@testem:/root> rescan-scsi-bus.sh
Scanning SCSI subsystem for new devices
Scanning host 0 for all SCSI target IDs, all LUNs
sg0 changed: device 0 0 0 0 …
from:RAID : 00
to: RAID or: IBM Model: 2810XIV-LUN-0 Rev: 0000
Type: RAID ANSI SCSI revision: 05
sg1 changed: device 0 0 0 1 …
from:Direct-Access : 01
to: Direct-Access Model: 2810XIV Rev: 0000
Type: Direct-Access ANSI SCSI revision: 05
sg2 changed: device 0 0 0 2 …
from:Direct-Access : 02
to: Direct-Access Model: 2810XIV Rev: 0000
Type: Direct-Access ANSI SCSI revision: 05
(data removed to save space)
Scanning for device 3 0 5 1 …
NEW: Host: scsi3 Channel: 00 Id: 05 Lun: 01
Vendor: IBM Model: 2810XIV Rev: 0000
Type: Direct-Access ANSI SCSI revision: 05
12 new or changed device(s) found.
[0:0:3:1]
[0:0:4:1]
[0:0:5:1]
[1:0:3:1]
[1:0:4:1]
[1:0:5:1]
[2:0:3:1]
[2:0:4:1]
[2:0:5:1]
[3:0:3:1]
[3:0:4:1]
[3:0:5:1]
0 remapped or resized device(s) found.
0 device(s) removed.
root@testem:/root>
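As an aside, if rescan-scsi-bus.sh is not installed, a rough equivalent (my own sketch, not part of the original procedure) is to ask every SCSI host adapter to rescan:

    # tell each host to scan all channels, targets and LUNs
    for h in /sys/class/scsi_host/host*; do
        echo "- - -" > "$h/scan"
    done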
b. Now fix the /etc/multipath.conf file to allow the new disk to be built by multipath.
NOTE: take one of the newly added devices, “[3:0:5:1]”, and use the lsscsi command to get its device name:
root@testem:/root> lsscsi
[0:0:0:0] storage IBM 2810XIV-LUN-0 0000 -
[0:0:0:1] disk IBM 2810XIV 0000 /dev/sda
[0:0:0:2] disk IBM 2810XIV 0000 /dev/sdb
(data removed to save space)
[3:0:2:2] disk IBM 2810XIV 0000 /dev/sdx
[3:0:3:0] storage IBM 2810XIV-LUN-0 0000 -
[3:0:3:1] disk IBM 2810XIV 0000 /dev/sdah
[3:0:4:0] storage IBM 2810XIV-LUN-0 0000 -
[3:0:4:1] disk IBM 2810XIV 0000 /dev/sdai
[3:0:5:0] storage IBM 2810XIV-LUN-0 0000 -
[3:0:5:1] disk IBM 2810XIV 0000 /dev/sdaj
root@testem:/root>
Note that " [3:0:5:1] " is mapped to “/dev/sdaj”
Now we need to get the wwid of the drive “/dev/sdaj” using the "/lib/udev/scsi_id -gud " command:
root@testem:/root> /lib/udev/scsi_id -gud /dev/sdaj
2001738003009005b
root@testem:/root>
The wwid “2001738003009005b” now needs to be added to the /etc/multipath.conf file:
NOTE that the wwid is in lower case…UPPER CASE WILL NOT WORK!!!
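If you are scripting this step, one way to guarantee lower case (my own aside) is:

    # capture the wwid and force it to lower case before pasting it in
    /lib/udev/scsi_id -gud /dev/sdaj | tr '[:upper:]' '[:lower:]'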
I used vi to edit multipath.conf and here is what it looks like after the edit:
root@testem:/etc> cat multipath.conf
# Default multipath.conf file created for install boot
# Used mpathN names
defaults {
user_friendly_names yes
}
# All devices are blacklisted
blacklist {
devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
devnode "^(hd|xvd|vd)[a-z]*"
wwid ".*"
}
blacklist_exceptions {
wwid "2001738002fa3012d"
wwid "2001738002fa301c3"
wwid "2001738003009005b"
}
root@testem:/etc>
c. Now run the multipath command and the new “mpathc” will be added:
root@testem:/etc> multipath
create: mpathc (2001738003009005b) undef IBM,2810XIV
size=112G features='1 queue_if_no_path' hwhandler='0' wp=undef
`-+- policy='service-time 0' prio=1 status=undef
|- 0:0:3:1 sdy 65:128 undef ready running
|- 0:0:4:1 sdz 65:144 undef ready running
|- 0:0:5:1 sdaa 65:160 undef ready running
|- 1:0:3:1 sdab 65:176 undef ready running
|- 1:0:4:1 sdac 65:192 undef ready running
|- 1:0:5:1 sdad 65:208 undef ready running
|- 2:0:3:1 sdae 65:224 undef ready running
|- 2:0:4:1 sdaf 65:240 undef ready running
|- 2:0:5:1 sdag 66:0 undef ready running
|- 3:0:3:1 sdah 66:16 undef ready running
|- 3:0:4:1 sdai 66:32 undef ready running
`- 3:0:5:1 sdaj 66:48 undef ready running
root@testem:/etc>
Now I ran the “multipath -ll” command:
root@testem:/etc> multipath -ll
mpathc (2001738003009005b) dm-5 IBM,2810XIV
size=112G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=enabled
|- 0:0:3:1 sdy 65:128 active ready running
|- 0:0:4:1 sdz 65:144 active ready running
|- 0:0:5:1 sdaa 65:160 active ready running
|- 1:0:3:1 sdab 65:176 active ready running
|- 1:0:4:1 sdac 65:192 active ready running
|- 1:0:5:1 sdad 65:208 active ready running
|- 2:0:3:1 sdae 65:224 active ready running
|- 2:0:4:1 sdaf 65:240 active ready running
|- 2:0:5:1 sdag 66:0 active ready running
|- 3:0:3:1 sdah 66:16 active ready running
|- 3:0:4:1 sdai 66:32 active ready running
`- 3:0:5:1 sdaj 66:48 active ready running
mpathb (2001738002fa301c3) dm-3 IBM,2810XIV
size=962G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
|- 0:0:0:2 sdb 8:16 active ready running
|- 0:0:1:2 sdd 8:48 active ready running
|- 0:0:2:2 sdf 8:80 active ready running
|- 1:0:0:2 sdh 8:112 active ready running
|- 1:0:1:2 sdj 8:144 active ready running
|- 1:0:2:2 sdl 8:176 active ready running
|- 2:0:0:2 sdn 8:208 active ready running
|- 2:0:1:2 sdp 8:240 active ready running
|- 2:0:2:2 sdr 65:16 active ready running
|- 3:0:0:2 sdt 65:48 active ready running
|- 3:0:1:2 sdv 65:80 active ready running
`- 3:0:2:2 sdx 65:112 active ready running
mpatha (2001738002fa3012d) dm-0 IBM,2810XIV
size=112G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
|- 0:0:0:1 sda 8:0 active ready running
|- 0:0:1:1 sdc 8:32 active ready running
|- 0:0:2:1 sde 8:64 active ready running
|- 1:0:0:1 sdg 8:96 active ready running
|- 1:0:1:1 sdi 8:128 active ready running
|- 1:0:2:1 sdk 8:160 active ready running
|- 2:0:0:1 sdm 8:192 active ready running
|- 2:0:1:1 sdo 8:224 active ready running
|- 2:0:2:1 sdq 65:0 active ready running
|- 3:0:0:1 sds 65:32 active ready running
|- 3:0:1:1 sdu 65:64 active ready running
`- 3:0:2:1 sdw 65:96 active ready running
root@testem:/etc>
d. Now I used YaST to make the two system partitions on the mpathc device just created.
Note: make the small “root” partition first and then the second “system” partition.
Everything I have read says that the small “root” partition needs to be FAT, and it appears it can be FAT16 or FAT32. YaST2 appears to only build FAT32.
Note: when the /sbin/lilo command runs, it re-formats the “root” partition to FAT16…
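If you prefer the command line to YaST, here is a rough sketch of the same partitioning (the start/end sectors below are placeholders; take the real values from the print of the source disk):

    # inspect the source layout so the new disk can mirror it
    parted /dev/mapper/mpatha unit s print

    # example values only; use the offsets shown by the print above
    parted -s /dev/mapper/mpathc mklabel msdos
    parted -s /dev/mapper/mpathc mkpart primary fat32 2048s 401407s
    parted -s /dev/mapper/mpathc mkpart primary 401408s 100%

    # create the mpathc_part1 / mpathc_part2 device-mapper entries
    kpartx -a /dev/mapper/mpathc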
e. The small “root” partition is not mounted by SUSE Linux, so I used a dd command to copy everything on it over to the new small partition we just built:
root@testem:/etc> dd if=/dev/mapper/mpatha_part1 of=/dev/mapper/mpathc_part1
dd: writing to `/dev/mapper/mpathc_part1': No space left on device
399361+0 records in
399360+0 records out
204472320 bytes (204 MB) copied, 2.32855 s, 87.8 MB/s
root@testem:/etc>
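The "No space left on device" message just means the new partition is slightly smaller than the old one, so dd stopped when the target filled up. In my case that was harmless because /sbin/lilo re-formats this partition anyway (see the note in step d). A quick sanity check you can add (my own suggestion):

    # both partitions should report a FAT filesystem signature
    blkid /dev/mapper/mpatha_part1 /dev/mapper/mpathc_part1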
f. Now we need to add the new system partition to the "system" volume group.
(1) First, find out the LVs and which disks are assigned, using the "vgs -o+devices" command:
root@testem:/etc> vgs -o+devices
Found duplicate PV 7ET7cdjovWpEBI6QClEZAURsoHte62Ke: using /dev/sdq2 not /dev/sda2
VG #PV #LV #SN Attr VSize VFree Devices
hanavg1 1 6 0 wz--n- 945.55g 109.55g /dev/mpath/mpathb_part1(0)
hanavg1 1 6 0 wz--n- 945.55g 109.55g /dev/mpath/mpathb_part1(15360)
hanavg1 1 6 0 wz--n- 945.55g 109.55g /dev/mpath/mpathb_part1(117760)
hanavg1 1 6 0 wz--n- 945.55g 109.55g /dev/mpath/mpathb_part1(168960)
hanavg1 1 6 0 wz--n- 945.55g 109.55g /dev/mpath/mpathb_part1(209920)
hanavg1 1 6 0 wz--n- 945.55g 109.55g /dev/mpath/mpathb_part1(211456)
system 1 6 0 wz--n- 79.93g 36.19g /dev/mpath/mpatha_part2(0)
system 1 6 0 wz--n- 79.93g 36.19g /dev/mpath/mpatha_part2(6403)
system 1 6 0 wz--n- 79.93g 36.19g /dev/mpath/mpatha_part2(6528)
system 1 6 0 wz--n- 79.93g 36.19g /dev/mpath/mpatha_part2(7296)
system 1 6 0 wz--n- 79.93g 36.19g /dev/mpath/mpatha_part2(8064)
system 1 6 0 wz--n- 79.93g 36.19g /dev/mpath/mpatha_part2(10687)
root@testem:/etc>
(2) Use the vgdisplay command to check the "Max PV" allowed in the VG:
root@testem:/etc> vgdisplay system -v
connect() failed on local socket: No such file or directory
Internal cluster locking initialisation failed.
WARNING: Falling back to local file-based locking.
Volume Groups with the clustered attribute will be inaccessible.
Using volume group(s) on command line
Finding volume group "system"
Found duplicate PV 7ET7cdjovWpEBI6QClEZAURsoHte62Ke: using /dev/sdq2 not /dev/sda2
--- Volume group ---
VG Name system
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 7
VG Access read/write
VG Status resizable
MAX LV 255
Cur LV 6
Open LV 6
Max PV 1
Cur PV 1
Act PV 1
VG Size 79.93 GiB
PE Size 4.00 MiB
Total PE 20463
Alloc PE / Size 11199 / 43.75 GiB
Free PE / Size 9264 / 36.19 GiB
VG UUID hmBdXl-9QhB-ED9H-0tB1-4Qa4-UIKb-SGFbs1
Note: if “Max PV” is set to 1, use “vgchange <vgname> -p 0” to set the maximum number of PVs to 0 (unlimited)…
root@testem:/etc> vgchange system -p 0
Found duplicate PV 7ET7cdjovWpEBI6QClEZAURsoHte62Ke: using /dev/sdq2 not /dev/sda2
Volume group "system" successfully changed
root@testem:/etc>
(3) Now add the new partition /dev/mpath/mpathc_part2 to the volume group called "system" and then run "vgdisplay system -v" to check the "--- Physical volumes ---" section at the end.
root@testem:/etc> vgextend system /dev/mpath/mpathc_part2
Found duplicate PV 7ET7cdjovWpEBI6QClEZAURsoHte62Ke: using /dev/sdq2 not /dev/sda2
No physical volume label read from /dev/mpath/mpathc_part2
Physical volume "/dev/mpath/mpathc_part2" successfully created
Volume group "system" successfully extended
root@testem:/etc>
root@testem:/etc> vgdisplay system -v
connect() failed on local socket: No such file or directory
Internal cluster locking initialisation failed.
WARNING: Falling back to local file-based locking.
Volume Groups with the clustered attribute will be inaccessible.
Using volume group(s) on command line
Finding volume group "system"
Found duplicate PV 7ET7cdjovWpEBI6QClEZAURsoHte62Ke: using /dev/sdq2 not /dev/sda2
--- Volume group ---
VG Name system
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 9
VG Access read/write
VG Status resizable
MAX LV 255
Cur LV 6
Open LV 6
Max PV 0
Cur PV 2
Act PV 2
VG Size 159.93 GiB
PE Size 4.00 MiB
Total PE 40942
Alloc PE / Size 11199 / 43.75 GiB
Free PE / Size 29743 / 116.18 GiB
VG UUID hmBdXl-9QhB-ED9H-0tB1-4Qa4-UIKb-SGFbs1
--- Physical volumes ---
PV Name /dev/mpath/mpatha_part2
PV UUID 7ET7cd-jovW-pEBI-6QCl-EZAU-RsoH-te62Ke
PV Status allocatable
Total PE / Free PE 20463 / 9264
PV Name /dev/mpath/mpathc_part2
PV UUID bhrhRq-zBLe-e067-f71W-tLjF-lqAV-N9LGve
PV Status allocatable
Total PE / Free PE 20479 / 20479
Note in the "--- Physical volumes ---" section of the last command that “Total PE / Free PE” shows the new PV (/dev/mpath/mpathc_part2) has all its PEs free…
(4) Now that the new partition has been added to the "system" VG and we know the PV names, we can run the pvmove command to transfer everything from the original PV to the new PV.
root@testem:/etc> pvmove -v /dev/mpath/mpatha_part2 /dev/mpath/mpathc_part2
connect() failed on local socket: No such file or directory
Internal cluster locking initialisation failed.
WARNING: Falling back to local file-based locking.
Volume Groups with the clustered attribute will be inaccessible.
Executing: /sbin/modprobe dm-mirror
Finding volume group "system"
Archiving volume group "system" metadata (seqno 9).
Creating logical volume pvmove0
Moving 6403 extents of logical volume system/home
Moving 125 extents of logical volume system/locallv
Moving 768 extents of logical volume system/maestrolv
Moving 768 extents of logical volume system/optlv
Moving 2623 extents of logical volume system/root
Moving 512 extents of logical volume system/swap
Found volume group "system"
activation/volume_list configuration setting not defined: Checking only host tags for system/home
(data removed to save space)
Resuming system-pvmove0 (253:8)
Found volume group "system"
Resuming system-home (253:249)
Found volume group "system"
Resuming system-locallv (253:248)
Found volume group "system"
Resuming system-maestrolv (253:247)
Found volume group "system"
Resuming system-optlv (253:246)
Found volume group "system"
Resuming system-root (253:245)
Found volume group "system"
Resuming system-swap (253:244)
Found volume group "system"
Removing system-pvmove0 (253:8)
Removing temporary pvmove LV
Writing out final volume group after pvmove
Creating volume group backup "/etc/lvm/backup/system" (seqno 17).
root@testem:/etc>
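Two pvmove behaviors worth knowing about (general LVM behavior, not something I needed here):

    pvmove            # with no arguments, restarts any interrupted pvmove
    pvmove --abort    # abandons an in-progress move, leaving the data on the source PV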
Now run the “vgdisplay system -v” command again and note the "--- Physical volumes ---" section, which should show that the original (mpatha) disk has all its PEs free and the new (mpathc) disk contains all the data…
root@testem:/etc> vgdisplay system -v
connect() failed on local socket: No such file or directory
Internal cluster locking initialisation failed.
WARNING: Falling back to local file-based locking.
Volume Groups with the clustered attribute will be inaccessible.
Using volume group(s) on command line
Finding volume group "system"
Found duplicate PV 7ET7cdjovWpEBI6QClEZAURsoHte62Ke: using /dev/sdq2 not /dev/sda2
--- Volume group ---
VG Name system
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 17
VG Access read/write
VG Status resizable
MAX LV 255
Cur LV 6
Open LV 6
Max PV 0
Cur PV 2
Act PV 2
VG Size 159.93 GiB
PE Size 4.00 MiB
Total PE 40942
Alloc PE / Size 11199 / 43.75 GiB
Free PE / Size 29743 / 116.18 GiB
VG UUID hmBdXl-9QhB-ED9H-0tB1-4Qa4-UIKb-SGFbs1
--- Physical volumes ---
PV Name /dev/mpath/mpatha_part2
PV UUID 7ET7cd-jovW-pEBI-6QCl-EZAU-RsoH-te62Ke
PV Status allocatable
Total PE / Free PE 20463 / 20463
PV Name /dev/mpath/mpathc_part2
PV UUID bhrhRq-zBLe-e067-f71W-tLjF-lqAV-N9LGve
PV Status allocatable
Total PE / Free PE 20479 / 9280
Also, if you now run the "vgs -o+devices" command, you will see that the "system" VG is now using our new PV.
root@testem:/etc> vgs -o+devices
Found duplicate PV 7ET7cdjovWpEBI6QClEZAURsoHte62Ke: using /dev/sdq2 not /dev/sda2
VG #PV #LV #SN Attr VSize VFree Devices
hanavg1 1 6 0 wz--n- 945.55g 109.55g /dev/mpath/mpathb_part1(0)
hanavg1 1 6 0 wz--n- 945.55g 109.55g /dev/mpath/mpathb_part1(15360)
hanavg1 1 6 0 wz--n- 945.55g 109.55g /dev/mpath/mpathb_part1(117760)
hanavg1 1 6 0 wz--n- 945.55g 109.55g /dev/mpath/mpathb_part1(168960)
hanavg1 1 6 0 wz--n- 945.55g 109.55g /dev/mpath/mpathb_part1(209920)
hanavg1 1 6 0 wz--n- 945.55g 109.55g /dev/mpath/mpathb_part1(211456)
system 2 6 0 wz--n- 159.93g 116.18g /dev/mpath/mpathc_part2(0)
system 2 6 0 wz--n- 159.93g 116.18g /dev/mpath/mpathc_part2(6403)
system 2 6 0 wz--n- 159.93g 116.18g /dev/mpath/mpathc_part2(6528)
system 2 6 0 wz--n- 159.93g 116.18g /dev/mpath/mpathc_part2(7296)
system 2 6 0 wz--n- 159.93g 116.18g /dev/mpath/mpathc_part2(8064)
system 2 6 0 wz--n- 159.93g 116.18g /dev/mpath/mpathc_part2(10687)
root@testem:/etc>
(5) Now that the original disk "/dev/mpath/mpatha_part2" is freed up, we can remove it from the VG using the vgreduce command; "vgdisplay system -v" then shows it has been removed from the VG:
root@testem:/etc> vgreduce system /dev/mpath/mpatha_part2
Found duplicate PV 6myX80oXdyb73KzqtllE0iFO1VsXKpor: using /dev/sdq2 not /dev/sda2
Removed "/dev/mpath/mpatha_part2" from volume group "system"
root@testem:/etc> vgdisplay system -v
connect() failed on local socket: No such file or directory
Internal cluster locking initialisation failed.
WARNING: Falling back to local file-based locking.
Volume Groups with the clustered attribute will be inaccessible.
Using volume group(s) on command line
Finding volume group "system"
Found duplicate PV 6myX80oXdyb73KzqtllE0iFO1VsXKpor: using /dev/sdq2 not /dev/sda2
--- Volume group ---
VG Name system
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 38
VG Access read/write
VG Status resizable
MAX LV 255
Cur LV 6
Open LV 6
Max PV 0
Cur PV 1
Act PV 1
VG Size 80.00 GiB
PE Size 4.00 MiB
Total PE 20479
Alloc PE / Size 11199 / 43.75 GiB
Free PE / Size 9280 / 36.25 GiB
VG UUID hmBdXl-9QhB-ED9H-0tB1-4Qa4-UIKb-SGFbs1
--- Physical volumes ---
PV Name /dev/mpath/mpathc_part2
PV UUID j1J6ZK-fFL0-nxEP-ssE5-cmGV-ZSTs-oiVPQh
PV Status allocatable
Total PE / Free PE 20479 / 9280
root@testem:/etc>
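An optional cleanup I would add at this point (my suggestion, not in the original procedure): wipe the LVM label from the old partition so nothing mistakes it for a PV later:

    # clear the LVM metadata label from the now-unused old partition
    pvremove /dev/mpath/mpatha_part2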
(6) Now we can remove the old mpatha map from the system using the "multipath -F" command. (-F tries to flush every unused map; it simply reports "map in use" for the busy ones, so only the now-unused mpatha actually goes away.)
First, list the mpatha path devices; we will need them when we remove the underlying SCSI devices (see the sketch after the "multipath -ll" listing below):
root@testem:/etc> multipath -l mpatha
mpatha (2001738002fa3012d) dm-0 IBM,2810XIV
size=112G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=0 status=active
|- 0:0:0:1 sda 8:0 active undef running
|- 0:0:1:1 sdc 8:32 active undef running
|- 0:0:2:1 sde 8:64 active undef running
|- 1:0:0:1 sdg 8:96 active undef running
|- 1:0:1:1 sdi 8:128 active undef running
|- 1:0:2:1 sdk 8:160 active undef running
|- 2:0:0:1 sdm 8:192 active undef running
|- 2:0:1:1 sdo 8:224 active undef running
|- 2:0:2:1 sdq 65:0 active undef running
|- 3:0:0:1 sds 65:32 active undef running
|- 3:0:1:1 sdu 65:64 active undef running
`- 3:0:2:1 sdw 65:96 active undef running
root@testem:/etc>
Now remove the old multipath mpatha:
root@testem:/etc> multipath -F mpatha
Jul 06 10:31:56 | mpathc_part2: map in use
Jul 06 10:31:56 | failed to remove multipath map mpathc
Jul 06 10:31:58 | mpathb_part1: map in use
Jul 06 10:31:58 | failed to remove multipath map mpathb
root@testem:/etc>
"multipath -ll" shows that mpatha is gone:
root@testem:/etc> multipath -ll
mpathc (2001738003009005b) dm-5 IBM,2810XIV
size=112G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
|- 0:0:3:1 sdy 65:128 active ready running
|- 0:0:4:1 sdz 65:144 active ready running
|- 0:0:5:1 sdaa 65:160 active ready running
|- 1:0:3:1 sdab 65:176 active ready running
|- 1:0:4:1 sdac 65:192 active ready running
|- 1:0:5:1 sdad 65:208 active ready running
|- 2:0:3:1 sdae 65:224 active ready running
|- 2:0:4:1 sdaf 65:240 active ready running
|- 2:0:5:1 sdag 66:0 active ready running
|- 3:0:3:1 sdah 66:16 active ready running
|- 3:0:4:1 sdai 66:32 active ready running
`- 3:0:5:1 sdaj 66:48 active ready running
mpathb (2001738002fa301c3) dm-1 IBM,2810XIV
size=962G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
|- 0:0:0:2 sdb 8:16 active ready running
|- 0:0:1:2 sdd 8:48 active ready running
|- 0:0:2:2 sdf 8:80 active ready running
|- 1:0:0:2 sdh 8:112 active ready running
|- 1:0:1:2 sdj 8:144 active ready running
|- 1:0:2:2 sdl 8:176 active ready running
|- 2:0:0:2 sdn 8:208 active ready running
|- 2:0:1:2 sdp 8:240 active ready running
|- 2:0:2:2 sdr 65:16 active ready running
|- 3:0:0:2 sdt 65:48 active ready running
|- 3:0:1:2 sdv 65:80 active ready running
`- 3:0:2:2 sdx 65:112 active ready running
root@testem:/etc>
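With the map flushed, the underlying path devices listed earlier by "multipath -l mpatha" can be deleted from the SCSI layer. A sketch for one device; repeat for each of sda, sdc, sde, ... from that listing:

    # remove one old path device from the kernel
    echo 1 > /sys/block/sda/device/delete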
(7) Run the "mkinitrd" command, which re-creates the initrd image files in the "/boot" directory that are referenced in the /etc/lilo.conf file:
root@testem:/etc> cd /boot
root@testem:/boot> ls -l
total 48940
-rw-r--r-- 1 root root 2997341 Jun 24 2015 System.map-3.0.101-63-ppc64
-rw-r--r-- 1 root root 1236 May 19 2015 boot.readme
-rw-r--r-- 1 root root 112426 Jun 24 2015 config-3.0.101-63-ppc64
lrwxrwxrwx 1 root root 23 Jul 5 16:14 initrd -> initrd-3.0.101-63-ppc64
-rw-r--r-- 1 root root 7142252 Jul 5 16:14 initrd-3.0.101-63-ppc64
-rw------- 1 root root 9131592 Jul 5 16:14 initrd-3.0.101-63-ppc64-kdump
-rw-r--r-- 1 root root 4902042 Jul 3 17:14 initrd-storix-new.img
-rw-r--r-- 1 root root 4714445 Jul 3 11:12 initrd-storix-root.img
-rw-r--r-- 1 root root 355 Jul 3 11:12 message.storix
-rw-r--r-- 1 root root 198050 Jun 24 2015 symvers-3.0.101-63-ppc64.gz
lrwxrwxrwx 1 root root 24 Aug 3 2015 vmlinux -> vmlinux-3.0.101-63-ppc64
-rw-r--r-- 1 root root 20806910 Jun 24 2015 vmlinux-3.0.101-63-ppc64
root@testem:/boot> mkinitrd
Kernel image: /boot/vmlinux-3.0.101-63-ppc64
Initrd image: /boot/initrd-3.0.101-63-ppc64
Root device: /dev/system/root (mounted on / as ext3)
modprobe: Module crct10dif not found.
WARNING: no dependencies for kernel module ‘crct10dif’ found.
Kernel Modules: scsi_mod hid usb-common usbcore usbhid ehci-hcd ohci-hcd uhci-hcd xhci-hcd scsi_tgt scsi_transport_fc ibmvfc crc-t10
dif sd_mod cdrom dm-mod scsi_dh dm-multipath dm-service-time dm-round-robin mbcache jbd ext3 sg scsi_transport_srp ibmvscsic libata
lpfc ipr scsi_dh_alua scsi_dh_hp_sw scsi_dh_emc scsi_dh_rdac dm-log dm-region-hash dm-mirror dm-snapshot dm-queue-length dm-least-pe
nding linear
Features: dm multipathd block usb multipath kpartx lvm2 resume.userspace resume.kernel
63338 blocks
Network: auto
Calling mkinitrd -k /boot/vmlinux-3.0.101-63-ppc64 -i /tmp/mkdumprd.fPjzIXPs2b -f 'kdump network' -B -s ''
Regenerating kdump initrd …
Kernel image: /boot/vmlinux-3.0.101-63-ppc64
Initrd image: /tmp/mkdumprd.fPjzIXPs2b
Root device: /dev/system/root (mounted on / as ext3)
modprobe: Module crct10dif not found.
WARNING: no dependencies for kernel module ‘crct10dif’ found.
Kernel Modules: scsi_mod hid usb-common usbcore usbhid ehci-hcd ohci-hcd uhci-hcd xhci-hcd scsi_tgt scsi_transport_fc ibmvfc crc-t10
dif sd_mod cdrom dm-mod scsi_dh dm-multipath dm-service-time dm-round-robin mbcache jbd ext3 sg scsi_transport_srp ibmvscsic libata
lpfc ipr scsi_dh_alua scsi_dh_hp_sw scsi_dh_emc scsi_dh_rdac dm-log dm-region-hash dm-mirror dm-snapshot dm-queue-length dm-least-pe
nding af_packet ibmveth linear nls_utf8
Features: dm multipathd block usb network multipath kpartx lvm2 resume.userspace resume.kernel kdump
79311 blocks
root@testem:/boot>
root@testem:/boot> ls -l
total 48936
-rw-r--r-- 1 root root 2997341 Jun 24 2015 System.map-3.0.101-63-ppc64
-rw-r--r-- 1 root root 1236 May 19 2015 boot.readme
-rw-r--r-- 1 root root 112426 Jun 24 2015 config-3.0.101-63-ppc64
lrwxrwxrwx 1 root root 23 Jul 6 11:04 initrd -> initrd-3.0.101-63-ppc64
-rw-r--r-- 1 root root 7139300 Jul 6 11:04 initrd-3.0.101-63-ppc64
-rw------- 1 root root 9131220 Jul 6 11:04 initrd-3.0.101-63-ppc64-kdump
-rw-r--r-- 1 root root 4902042 Jul 3 17:14 initrd-storix-new.img
-rw-r--r-- 1 root root 4714445 Jul 3 11:12 initrd-storix-root.img
-rw-r--r-- 1 root root 355 Jul 3 11:12 message.storix
-rw-r--r-- 1 root root 198050 Jun 24 2015 symvers-3.0.101-63-ppc64.gz
lrwxrwxrwx 1 root root 24 Aug 3 2015 vmlinux -> vmlinux-3.0.101-63-ppc64
-rw-r--r-- 1 root root 20806910 Jun 24 2015 vmlinux-3.0.101-63-ppc64
root@testem:/boot>
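An optional check (assuming, as on SLES 11 here, that the initrd is a gzipped cpio archive): confirm the multipath pieces actually made it into the rebuilt image:

    zcat /boot/initrd-3.0.101-63-ppc64 | cpio -it | grep -i multipath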
(8) Since we have changed out our boot disks, and since this is an IBM Power system, we now need to adjust /etc/lilo.conf to point to our new boot disk and run the /sbin/lilo command:
(a) Make a backup copy of the current /etc/lilo.conf file.
(b) Now edit /etc/lilo.conf, changing the "boot =" line to point to the new /dev/mpath/mpathc_part1 partition. Also make sure that "default =" points to the correct stanza label within lilo.conf (the part that took me two weeks to figure out).
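For reference, the shape of the two lines to check (values are from my system; yours will differ):

    boot    = /dev/mpath/mpathc_part1   # was mpatha_part1 before the migration
    default = SLES11_SP4                # must match an existing "label =" stanza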
root@testem:/etc> cat lilo.conf