Process to resize the / "root" filesystem

I am running SLES12 and the root partition is 95% used, and I would like to increase this storage.
I noticed the devtmpfs filesystem and three tmpfs filesystems are 943M and 949M respectively, but minimally used. Are these logical filesystems within the / "root" filesystem? If this is so, can these be resized smaller? Would the / "root" automatically obtain this space or would it need to be resized larger?
Another way I was looking at this was to find the directories that were large and replace them with an LVM volume. I find /var at 285M but am a bit confused by /bin at 5.0M. If I can replace /var, what would the process be to replace it?

df -kh
Filesystem Size Used Avail Use% Mounted on
/dev/dasda3 5.5G 5.0G 291M 95% /
devtmpfs 943M 0 943M 0% /dev
tmpfs 949M 80K 949M 1% /dev/shm
tmpfs 949M 9.4M 940M 1% /run
tmpfs 949M 0 949M 0% /sys/fs/cgroup
/dev/dasda1 194M 22M 162M 12% /boot/zipl

cat /etc/fstab
/dev/disk/by-path/ccw-0.0.15c0-part2 swap swap defaults 0 0
/dev/disk/by-path/ccw-0.0.15c0-part3 / ext4 acl,user_xattr 1 1
/dev/disk/by-path/ccw-0.0.15c0-part1 /boot/zipl ext2 acl,user_xattr 1 2

du -sh bin boot dev etc home lib lib64 lost+found mnt opt root run sbin selinux srv sys test tmp usr var
5.0M bin
114M boot
80K dev
21M etc
1.1M home
80M lib
16M lib64
16K lost+found
4.0K mnt
4.0K opt
22M root
9.2M run
12M sbin
4.0K selinux
92K srv
0 sys
4.0K test
40K tmp
4.4G usr
285M var

On 05/19/2015 12:04 PM, mikenash wrote:

> I am running SLES12 and the root partition is 95% used, and I would like
> to increase this storage.
> I noticed the devtmpfs filesystem and three tmpfs filesystems are 943M and
> 949M respectively, but minimally used. Are these logical filesystems within
> the / "root" filesystem? If this is so, can these be resized

No, they use RAM. Playing with them will not help you.
> smaller? Would the / "root" automatically obtain this space or would it
> need to be resized larger?

Resizing one partition to be smaller never automatically makes another
partition larger. Unless the partition boundaries line up, usually with
the freed space coming right after the partition you want to grow, you
cannot resize this way at all. This is why things like LVM are so great:
you can add partitions to volume groups and then use that pooled space
however you like in logical volumes.
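
As a rough illustration only (the volume group "vg0", the volume "lvdata" and
the device "/dev/dasdd1" below are made-up names, not taken from your system),
adding a partition to a volume group and carving space out of it looks roughly
like this:

[CODE]
pvcreate /dev/dasdd1             # prepare the new partition for LVM
vgextend vg0 /dev/dasdd1         # add its space to the existing volume group
lvcreate -n lvdata -L 4G vg0     # create a 4 GiB logical volume from the pool
mkfs.ext4 /dev/vg0/lvdata        # put a filesystem on it and mount it wherever needed
[/CODE]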

> Another way I was looking at this was to find the directories that were
> large and replace them with an LVM volume. I find /var at 285M but am a
> bit confused by /bin at 5.0M. If I can replace /var, what would
> the process be to replace it?

Mount a new volume somewhere, copy everything from /var into /mnt/newvar,
then modify /etc/fstab (or equivalent) so that the partition behind your
new var is auto-mounted to /var. It is best/easiest/safest to do this
when booting from rescue media rather than while the system is running.
Too many things keep their fingers in /var (and some other places), so
doing this online can be very hard, or even impossible if you try
something like /bin.
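
A minimal sketch of that /var move, assuming a hypothetical, already-formatted
LV called /dev/vg0/lvvar and a boot from rescue media:

[CODE]
mkdir -p /mnt/newvar
mount /dev/vg0/lvvar /mnt/newvar
cp -a /var/. /mnt/newvar/        # copy contents, preserving owners/permissions/links
umount /mnt/newvar

# then add a line like this to /etc/fstab so it is mounted over /var at boot:
# /dev/vg0/lvvar  /var  ext4  defaults  1 2
[/CODE]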


Good luck.


Thank you very much for your reply. The explanation of the ram disk was very helpful.
I was wondering how to remove the files from the /var directory and thought maintenance mode would be an option. However, I seem to have a problem with maintenance mode. Anyway, I believe I can create an LVM volume and copy the contents of /var, create a new /etc/fstab with this mount, IPL a different Linux system and mount this volume, then delete the contents of the /var directory.
The results from the du command on the /bin directory do not appear to be accurate. They state 5.0M, which is the size of / "root"! What is this about?
Another option I was thinking about is swap. It is on the same volume. Can this be utilized to resize the / "root" filesystem? I could create a swap LVM volume to replace the current swap partition.

On 05/19/2015 01:44 PM, mikenash wrote:

> Thank you very much for your reply. The explanation of the ram disk was
> very helpful.
> I was wondering how to remove the files from the /var directory and
> thought maintenance mode would be an option. However, I seem to have a
> problem with maintenance mode. Anyway, I believe I can create an LVM
> volume and copy the contents of /var, create a new /etc/fstab with
> this mount, IPL a different Linux system and mount this volume, then
> delete the contents of the /var directory.

If you create a new disk somewhere and make it available to the computer
then you should be able to partition, format, and mount it wherever. As
long as you copy over required things (which should basically be
everything from the old directory) then you can mount over the old
location. It's perfectly fine to mount over an existing directory, which
effectively hides whatever is in the mounted-over location. That wastes space
(since you can no longer reach the old file names to delete them and free the
space), but it's great for testing something that may break your system (like
this). Copy things over, mount it, reboot the system, and if it’s okay
and auto-mounted then you know you didn’t break things. Now you can go
and reboot into some recovery mode again, clean out the original
directory, reboot one more time and you’re done.
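
A sketch of that final cleanup step from rescue media, assuming the root
partition is /dev/dasda3 as in the df output above (adjust as needed):

[CODE]
mkdir -p /rootfs
mount /dev/dasda3 /rootfs
rm -rf /rootfs/var/*     # with nothing mounted over it, this removes only the old, hidden copy
umount /rootfs
[/CODE]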

> The results from the du command on the /bin directory do not appear to
> be accurate. They state 5.0M, which is the size of / "root"! What is
> this about?

Apparently all of the data under / is actually under /bin, which is not
abnormal since you should probably not be putting things directly under /
but instead under various subdirectories (/usr, /bin, /tmp, /var, etc.).
If those directories are not partitions of their own, then they take up
space on /, and any free-space calculation you do will be identical.

> Another option I was thinking about is swap. It is on the same volume.
> Can this be utilized to resize the / "root" filesystem? I could create
> a swap LVM volume to replace the current swap partition.

It’s available, but I’m not sure if you’ll be able to merge it as it
appears to be earlier on the disk. I do not know, and maybe it’s nothing,
but I’d feel better if things were already using LVM, or if swap at least
came later on the disk.


Good luck.


Thanks again for all your help. I made a mistake about the /bin directory. It is 5.0 Meg and not 5.0 Gig. :frowning:
I created a 10G LVM volume to replace the /usr directory, which holds a total of 4.4G. I successfully mounted this manually.
I updated /etc/fstab: /dev/pool1/lvusr /usr ext3 acl,user_xattr 1 2
The IPL stalls with the message: Starting Reload Configuration from the Real Root.
Not sure why the /usr directory would cause this problem.

Do not try mounting just anything separately from the main system. Some
filesystems do NOT handle this properly; /var works nicely, but I do not know
how many other filesystems will. If I were you, I'd go back to playing
with /var first, and then deal with other filesystems as you see fit.

With that said, it is sometimes popular to partition the filesystem to the
nth degree; while that can be fun, I think it is much more practical to
only partition as needed, so typically I have the root (/) of course, then
may partition off /tmp, /var, and /home, and that's about it (except for
swap, if you opt to have one; I do not usually). If you are trying to
partition every single directory, consider why, and weigh the benefit against
the cost: the time to set up and manage it, the wasted space (extra space
allocated to each partition), the space-constraint issues (as you need to
resize or grow partitions and filesystems to handle growth), etc.


Good luck.


Found another message.
Reached target Initrd Root File System.
Starting Reload Configuration from the Real Root…
Expecting device dev-pool1-lvusr.device…
Started Reload Configuration from the Real Root.
I am looking at what options may be helpful in fstab.
I did a "man fstab" but there is not much in the documentation about specific options.

Hi mikenash,

first of all: did you regenerate the initrd after adding LVM to your system? It might be running without LVM support; hard to tell from here.
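
For reference, a hedged sketch of what regenerating the initrd usually looks
like on a dracut-based SLES 12 system (please check against the SLES
documentation for your exact setup):

[CODE]
dracut -f             # rebuild the initrd for the running kernel; it should then pick up the LVM (dm/lvm2) modules
lsinitrd | grep -i lvm   # sanity check that the LVM bits made it into the image
[/CODE]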

Regarding what to put on LVM and what not, here’s how we usually set things up:

  • separate /boot partition, for historic reasons
  • remainder of first disk: a single partition used as an LVM physical volume
  • set a more specific VG name, rather than "system" - if you ever virtualize the disk(s) and need to activate them on a host (or move the physical disks to some other host for repair), you'll have explicit names and no clashes with existing VGs
  • LVs: root, usr, var, opt, srv, tmp, home, swap (this may vary depending on your needs, e.g. if nothing currently goes into /srv on a server, there's no need to spend an LV on that… if home comes from an NFS server, the same applies)
  • all FS get a descriptive label and are mounted via those labels, rather than device names (see the small example after this list)
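
A small example of that label-based mounting (the LV path and the label "USR"
are illustrative only):

[CODE]
e2label /dev/vg0/lvusr USR       # give the ext4 filesystem a label

# /etc/fstab then refers to the label instead of a device path:
# LABEL=USR  /usr  ext4  defaults  1 2
[/CODE]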

For starters, we go with rather small LVs and increase them live, on demand.
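
Growing such an LV later, while the filesystem stays mounted, is then just
(again with illustrative names):

[CODE]
lvextend -L +5G /dev/vg0/lvhome     # add 5 GiB to the logical volume
resize2fs /dev/vg0/lvhome           # grow the ext3/ext4 filesystem online to match
[/CODE]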

Regards,
Jens

I believe I have regenerated the initrd, but being a newbie I issued the following commands.
grub2-mkconfig -o /boot/grub2/grub.cfg
grub2-zipl-setup

I received some additional diagnostic information after waiting.
Starting dracut pre-mount hook…
[ OK ] Started dracut pre-mount hook.
Starting File System Check on /dev/disk/by-path/ccw-0.0.15c0-part3…

[ OK ] Started File System Check on /dev/disk/by-path/ccw-0.0.15c0-part3.
Mounting /sysroot…
[ OK ] Mounted /sysroot.
[ OK ] Reached target Initrd Root File System.
Starting Reload Configuration from the Real Root…
Expecting device dev-pool1-lvusr.device…
[ OK ] Started Reload Configuration from the Real Root.
[ TIME ] Timed out waiting for device dev-pool1-lvusr.device.
[DEPEND] Dependency failed for /sysroot/usr.
[DEPEND] Dependency failed for Initrd File Systems.

Generating “/run/initramfs/rdsosreport.txt”

Entering emergency mode. Exit the shell to continue.
Type “journalctl” to view system logs.
You might want to save “/run/initramfs/rdsosreport.txt” to a USB stick or /boot
after mounting them and attach it to a bug report.

I cat'ed this file and at the end I captured this information.
[ 1.356949] linux140 kernel: dasdc:VOL1/ SLES12: dasdc1 dasdc2 dasdc3
[ 1.883186] linux140 kernel: EXT4-fs (dasdc1): mounting ext2 file system using the ext4 subsystem
[ 1.886148] linux140 kernel: EXT4-fs (dasdc1): mounted filesystem without journal. Opts: (null)
[ 1.901336] linux140 kernel: PM: Starting manual resume from disk
[ 1.922546] linux140 systemd-fsck[357]: /dev/dasdc3: clean, 260932/374624 files, 1343053/1496088 blocks
[ 1.942085] linux140 kernel: EXT4-fs (dasdc3): mounted filesystem with ordered data mode. Opts: (null)
[ 1.959060] linux140 systemd-fstab-generator[369]: Checking was requested for /dev/pool1/lvusr, but /sbin/fsck.ext3 cannot be used: No such file or directory

[ 92.251530] linux140 systemd[1]: Timed out waiting for device dev-pool1-lvusr.device.
[ 92.251671] linux140 systemd[1]: Dependency failed for /sysroot/usr.
[ 92.260726] linux140 systemctl[394]: Failed to start initrd-switch-root.target: Transaction is destructive.
:/run/initramfs#

Hi mikenash,

your fstab entry states this is an ext3 file system - any reason not to go for ext4? I have not had to deal with this situation, but maybe the boot environment is not prepared to handle ext3, hence the message about the missing file system checker?

According to https://ext4.wiki.kernel.org/index.php/Ext4_Howto#Converting_an_ext3_filesystem_to_ext4 it should be possible to simply change the fstab entry to “ext4”, but don’t take my word for it: Please have a current backup at hand before trying.
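
If I read that right, it would come down to a one-word change in the fstab
line you posted earlier (again: backup first):

[CODE]
# before:
# /dev/pool1/lvusr  /usr  ext3  acl,user_xattr  1 2
# after -- the ext4 driver can also mount an ext3 filesystem:
# /dev/pool1/lvusr  /usr  ext4  acl,user_xattr  1 2
[/CODE]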

Regards,
Jens

Well, I have made several different attempts to mount this filesystem. I set the option to not check the filesystem at boot with a zero "0". I tried to mount by UUID and by the mapper device. I converted the filesystem to ext4. They all failed. I am successful when I change the mount point to /mnt. This suggests that there is a technical reason that prohibits using the /usr mount point, or this is a bug in the programming. The message that the system is waiting for the device may be true. The following messages, "Dependency failed for /sysroot/usr." and "Failed to start initrd-switch-root.target: Transaction is destructive.", may indicate the restriction with mounting to the /usr directory during the boot process. Mounting later is successful but useless, because I need a replacement for this mount in order to reclaim space for the root directory. Could there be an option for the /etc/fstab mounting that may allow this to happen?

I made some progress. I was thinking that the mount to /usr might be restrictive. I issued the command mount | grep /usr. This returned the following.
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd)
I do not understand this, but it definitely looks like it is system controlled. I read a little bit about cgroups and systemd and can understand the message "Transaction is destructive."; I would definitely expect the system to prevent this from happening. I found some other directories that would fit nicely for my needs. The directories /usr/src and /usr/share would work best. I created an LVM volume for the /usr/src directory, which used 1.2G. I tarred the directory onto the LVM volume and then untarred it.
I updated /etc/fstab and rebooted. This mounted successfully. I unmounted the LVM volume, erased everything in /usr/src, and rebooted; usage of / came down from 95% to 78%. Then I unmounted the LVM volume and restored the /usr/src directory from the saved tar file, restored /etc/fstab, and rebooted successfully. Thanks again to everyone for all your help. I have learned a lot from this exercise.
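
For anyone following along, a hedged reconstruction of those steps (the LV
name "lvsrc" is illustrative; /mnt is only used temporarily):

[CODE]
mount /dev/pool1/lvsrc /mnt                      # the new, already formatted LV
tar -C /usr/src -cf - . | tar -C /mnt -xpf -     # copy the contents, preserving permissions
umount /mnt
# add to /etc/fstab and reboot to verify it mounts:
# /dev/pool1/lvsrc  /usr/src  ext4  defaults  1 2
# once the mount is confirmed, unmount the LV, delete the old (now hidden) files,
# and reboot again:
# umount /usr/src && rm -rf /usr/src/* && reboot
[/CODE]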

Hi Mikenash,

[QUOTE=mikenash;27997]I made some progress. I was thinking that the mount to /usr might be restrictive. I issued the command mount | grep /usr. This returned the following.
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd)
I do not understand this, but it definitely looks like it is system controlled.[/QUOTE]
most of all, it’s not (directly) about your /usr mount, but rather about “/sys/fs/cgroup/systemd”. Your grep statement caught that line because of the location of the release_agent.

Just for comparison, here’s what I have on a machine running systemd (but not SLES12):

host:~ # mount | grep usr
/dev/mapper/system--host-usr on /usr type ext4 (rw,relatime,data=ordered)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd)
host:~ #

where you can see that /usr is mounted separately and is backed by an LV (named "usr", part of the VG named "system-host"; "host" is really the host name used for that machine).

So technically speaking, it is possible to mount /usr separately, and via LVM, on a systemd-controlled machine. But I’ve yet to try that on a SLES12 server.

Would you mind sharing the actual content of /etc/fstab, to see if anything catches the eye?

Regards,
Jens

Hello Jens, thank you for looking. Here is the requested information and more.
cat /etc/fstab
/dev/disk/by-path/ccw-0.0.15c0-part2 swap swap defaults 0 0
/dev/disk/by-path/ccw-0.0.15c0-part3 / ext4 acl,user_xattr 1 1
/dev/disk/by-path/ccw-0.0.15c0-part1 /boot/zipl ext2 acl,user_xattr 1 2

mount | grep /usr
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd)

lvm volume
ACTIVE ‘/dev/pool1/lvusr’ [10.00 GiB] inherit

blkid
/dev/mapper/pool1-lvusr: UUID=“3fffd563-e55d-42d6-bfa8-ab0f8d3fd3e2” TYPE=“ext4”

Hi mikenash,

[QUOTE]cat /etc/fstab[/QUOTE]

I should have been more precise - I was looking for the fstab in the state that didn't let you boot. In the version you just posted (btw, wrapping those sections in [ CODE ] / [ /CODE ] tags, i.e. marking that part of the message and then clicking the "#" icon of the forum editor, will make the post more readable) I see no entry to mount /usr, which I would have expected somewhere after the entry for the root FS.

Another guess from my side: Have you "moved aside" the original /usr directory and created a new one? It may well be that the boot code detects that /usr (the directory on the root fs) contains entries and, as a measure of caution, refuses to mount something over it - for fear of hiding content in /usr and rendering the system useless. In our installations, /usr (the directory in the root fs, used as the mount point for the LV) is empty.

So a test could be to boot the system via a rescue system, mount the original root FS to some mount point (I'll be using "/rootfs" as an example) and then issue "mv /rootfs/usr /rootfs/usr.dist && mkdir /rootfs/usr". That way, upon the next boot from disk, systemd will see an empty /usr and may try to mount your /usr LV (if you added it to your fstab in advance :wink: ). If this doesn't work either, you can of course boot into recovery again and move back the original /usr directory.
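
Put together as a sketch (the device name below is whatever your rescue system
calls the root partition; /dev/dasda3 matches the earlier df output):

[CODE]
mkdir -p /rootfs
mount /dev/dasda3 /rootfs
mv /rootfs/usr /rootfs/usr.dist     # keep the original content as a fallback
mkdir /rootfs/usr                   # empty mount point for the /usr LV
umount /rootfs
[/CODE]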

Regards,
Jens

Hello Jens, I tried your suggestion and I am still unsuccessful.

cat fstab.test3
/dev/disk/by-path/ccw-0.0.15c0-part3 / ext4 acl,user_xattr 1 1
/dev/disk/by-path/ccw-0.0.15c0-part1 /boot/zipl ext2 acl,user_xattr 1 2
/dev/pool1/lvswap swap swap defaults 0 0
/dev/pool1/lvusr /usr ext4 defaults 1 1

Hi mikenash,

[QUOTE]I tried your suggestion and I am still unsuccessful[/QUOTE]

Including the step of creating an empty mount point, I assume… Then the only difference I can see is that I'm using "mount by label", which results in mounting the /dev/mapper device instead of the "native" LV device that you're using. As I see no real reason why your approach should not work, may I suggest that you create a service request so that a support engineer can have a look at all the details of your system and assist you in getting this to work?

Regards,
Jens

Thanks Jens, and thank you for the comment tip. I have tried unsuccessfully to mount by UUID and by the mapper device. I tried with your mount options and it still fails. The mount works just by changing the mount point to /mnt. The system does not find the device when using the mount point /usr. Are there any commands available in emergency mode to query devices? The command lsdasd did not work. I have never opened a request but I will find out how to do this.

Hi mikenash,

You ought to be able to tell which devices are recognized by looking at /sys/block - every block device gets a directory there. To get an idea of what you may see there, check the directory on a running system. (/sys has dynamically created content, so there is no use in trying to create/delete files there - and only write to files where you know what you're doing, as this will reconfigure things on the fly. Reading, on the other hand, should be no problem.)

The emergency environment is truly limited in its commands, but "lvm" will be there. Things like "lvdisplay" on a normal system are just symlinks to "lvm", so the binary knows what you want from the name it was started with. Inside rescue, none of these symlinks are available, so you simply call "lvm" and get a command shell for that command. Use "help" to check the available commands, one of them being "vgs" (as in "show all known VGs"), by which you may check whether the underlying LVM volume group is already available.
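
For example, a rough sketch of such a session in the emergency shell (output
and prompt will differ on your system):

[CODE]
ls /sys/block        # which block devices does the kernel see at this point?
lvm                  # enter the lvm command shell
lvm> pvs             # are the physical volumes detected?
lvm> vgs             # is the volume group (pool1) known?
lvm> lvs             # are the logical volumes listed?
lvm> vgchange -ay    # if they are, try activating them
lvm> quit
[/CODE]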

I never asked how you've set up the VG - could it be that the drivers/settings to make the PVs of that VG available are only run in the final booted system, rather than at the early boot stage? Your question concerning which block devices are available points in that direction. In our case, we have made sure all PVs are available and active right from the start, either by using HBAs and adding driver support, or by setting up all required subsystems during boot. (Please note that I've so far been using grub/initrd systems, not dracut - and iirc SLES12 uses dracut, so I cannot (yet) help with that.)

You can open service requests from within the SCC page (https://scc.suse.com/support/requests) (if you are the product owner and have a corresponding support contract).

Regards,
Jens

Hello Jens, I was able to submit a service request. I do see the three dasd volumes associated with my system. The lvm and vgs commands are not found. I did nothing special with the setup of the VG; I performed a default-type installation from ftp3.install.ibm.com. The error message says it is expecting the device dev-pool1-lvusr.device, but it times out. So this implies the LVM manager is not available! Maybe? Is there an alternative way to set this up earlier in the boot process?