Out of disk space

I'm wondering about the best way to add more disk space to /dev/mapper/system-root.

Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/system-root btrfs 118G 107G 2.1G 99% /
udev tmpfs 1.9G 96K 1.9G 1% /dev
tmpfs tmpfs 1.9G 652K 1.9G 1% /dev/shm
/dev/sda1 ext3 152M 43M 101M 30% /boot

I added more space and configured it as /dev/sda2:

Disk /dev/sda: 214.7 GB, 214748364800 bytes
255 heads, 63 sectors/track, 26108 cylinders, total 419430400 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000303c9

Device Boot Start End Blocks Id System
/dev/sda1 * 2048 321535 159744 83 Linux
/dev/sda2 321536 419430399 209554432 83 Linux

I'm not sure how to make that space available to /dev/mapper/system-root.
I followed this KB article for adding sda2:
https://www.suse.com/support/kb/doc/?id=7018329

Thanks!

Hi and welcome to the Forum :slight_smile:
I would guess you are running btrfs snapshots without limits configured in /etc/snapper/configs/root? You may also not have the btrfs maintenance service running?

So, the first step would be to see how many snapshots exist via:

snapper list

Then have a look at the config file, consider winding back the retention totals, and run the weekly cron job manually.

Then it’s a matter of ensuring the maintenance scripts are installed and running those cron jobs to clean up btrfs.

Have a read through this document and it should get cleaned up:
https://www.suse.com/documentation/sles11/stor_admin/data/trbl_btrfs_volfull.html
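
As a rough sketch (assuming the default “root” snapper config - adjust names and numbers to what you actually have), the check and cleanup could look like this:

snapper list                                  # see how many snapshots exist and how old they are
grep NUMBER_LIMIT /etc/snapper/configs/root   # check how many snapshots the config is set to retain
snapper delete 42-57                          # placeholder numbers - take a range from your own list
snapper cleanup number                        # apply the configured number limits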

Hi kelly_zarate,

(A preliminary word of caution: if things go wrong, your system can be wrecked. So before starting the steps below, have some form of backup for your system, e.g. a snapshot taken by VMware, if that’s what it is running on.)

/dev/mapper/system-root

so you are running LVM? Please check by running the commands “vgs” (showing a list of configured volume groups - it should show a row for “system”), “lvs” (showing all logical volumes, at least one for “root” on “system”) and “pvs” (listing all known physical volumes, including at least /dev/sda2).

I’d be interested in the output of “pvs” - does it list /dev/sda2 and is it already showing the increased size (assuming that you have resized the partition according to the KB article you referenced, and then rebooted the machine)?

If you’re on LVM, you could have avoided resizing that existing partition: the KB article is aimed at default installs where Btrfs sits directly on a partition (which cannot be resized live). With LVM, you could simply have created a new partition from the added space, added it to the volume group and then resized your logical volume - which is what you are about to do now anyhow:

Once your PV knows about the size change (as it should after a reboot - if not, call “pvresize /dev/sda2” explicitly), your volume group will show the newly available room as “VFree” in the “vgs” output. It’s then only a matter of adding some (or all) of that space to your root file system’s logical volume, e.g. by calling “lvresize -L +50G /dev/system/root” (to add 50 GB of space). Now you have a larger “partition” (actually a “logical volume”) with Btrfs on it - but the file system does not yet know about the added space. So the last step is to resize the file system: “btrfs filesystem resize max /”
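
In command form, the whole sequence would look roughly like this (the “+50G” is only an example - size it according to the “VFree” value that “vgs” reports):

pvresize /dev/sda2                  # only needed if the PV does not yet show the grown partition
vgs                                 # the VG "system" should now list free space under VFree
lvresize -L +50G /dev/system/root   # grow the logical volume by the amount you want to add
btrfs filesystem resize max /       # let the Btrfs file system grow into the enlarged volume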

Again, only follow these steps if you’re actually on LVM (that is, “vgs” shows the VG “system”) and after having made sure you have a way to revert the changes if things go wrong (e.g. via a snapshot of your VM).

Regards,
J

[QUOTE=malcolmlewis;55616]Hi and welcome to the Forum :slight_smile:
I would guess you are running btrfs snapshots without limits configured in /etc/snapper/configs/root? You may also not have the btrfs maintenance service running?

So, the first step would be to see how many snapshots exist via:

snapper list

Then have a look at the config file, consider winding back the retention totals, and run the weekly cron job manually.

Then it’s a matter of ensuring the maintenance scripts are installed and running those cron jobs to clean up btrfs.

Have a read through this document and it should get cleaned up:
https://www.suse.com/documentation/sles11/stor_admin/data/trbl_btrfs_volfull.html[/QUOTE]

malcolmlewis thank you for the reply!
I verified that snapper is configured and appears to be running the weekly cron job. I also deleted several snapshots, but that still didn’t free up any disk space.

Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/system-root btrfs 118G 107G 1.4G 99% /
udev tmpfs 1.9G 96K 1.9G 1% /dev
tmpfs tmpfs 1.9G 652K 1.9G 1% /dev/shm
/dev/sda1 ext3 152M 43M 101M 30% /boot

Hi J, thank you for the reply!

Unfortunately, when I attempt to use the vgs command, I get this message:

/var/run/lvm/lock/V_system:aux: open failed: No space left on device
Can’t get lock for system

Also, I get the same message when attempting lvs.

Thanks,
Kelly

Hi
Then you’re going to have to hunt down the culprit - I would suspect a large log file in /var/log, or maybe journals or coredumps.

du -sh /var/log
journalctl --disk-usage
coredumpctl
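
If it turns out to be the journal or coredumps, something along these lines should shrink them (the 200M limit is just an example):

du -xh /var --max-depth=2 | sort -h | tail -n 15   # largest directories under /var on this file system
journalctl --vacuum-size=200M                      # shrink the persistent journal to roughly 200 MB
coredumpctl list                                   # see whether stored core dumps are worth removing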


Cheers, Malcolm

Hi Kelly,

Unfortunately, when I attempt to use the vgs command, I get this message:

/var/run/lvm/lock/V_system:aux: open failed: No space left on device
Can’t get lock for system

Hm, that’s a bad position to be in - to assign more space you’ll have to use LVM, but to use LVM you’d need at least a little free space on “/”.

Also, I get the same message when attempting lvs.

Yes, unfortunately that’s expected, since it’s another subcommand from the same suite of programs (actually both are links to the same program, “lvm”, each invoking the appropriate built-in lvm command).

On top of Malcolm’s recommendation to clean up log files, running “zypper clean” might give you a few more free bytes by cleaning libzypp’s repository cache. Once at least a few kB are free, you might want to retry the LVM commands.
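
Something like this would be my first try (just a suggestion, of course):

zypper clean --all   # drop cached packages and cached repository metadata
df -h /              # check whether a bit of space came back
vgs                  # retry the LVM commands once "/" has some free space again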

If none of that helps, it might be time to boot from separate installation media and run the LVM commands from a recovery environment that doesn’t rely on your current hard-disk-based “/” file system.

Regards,
J

I was able to free up enough disk space for your original solution to work!
Thank you for sharing your expertise and the quick reply!!!

[QUOTE=kzarate;55789]I was able to free up enough disk space for your original solution to work!
Thank you for sharing your expertise and the quick reply!!![/QUOTE]
Hi
Thanks for the feedback - perhaps a quick summary of what you found consuming space and what you did to resolve it may help other Forum users in the future :slight_smile:

[QUOTE=jmozdzen;55732]Hi Kelly,

Unfortunately, when I attempt to use the vgs command, I get this message:

/var/run/lvm/lock/V_system:aux: open failed: No space left on device
Can’t get lock for system

Hm, that’s a bad position to be in - to assign more space you’ll have to use LVM, but to use LVM you’d need at least a little free space on “/”.

Also, I get the same message when attempting lvs.

Yes, unfortunately that’s expected, since it’s another subcommand from the same suite of programs (actually both are links to the same program, “lvm”, each invoking the appropriate built-in lvm command).

On top of Malcolm’s recommendation to clean up log files, running “zypper clean” might give you a few more free bytes by cleaning libzypp’s repository cache. Once at least a few kB are free, you might want to retry the LVM commands.

If none of that helps, it might be time to boot from separate installation media and run the LVM commands from a recovery environment that doesn’t rely on your current hard-disk-based “/” file system.

Regards,
J[/QUOTE]

Right, good idea! I was able to identify and delete some old snapshots using “snapper list”. I also deleted some larger log files from /var/log and ran “zypper clean”.
I had to reboot for the disk space figures to update; usage was down to 91%. I was then able to follow the steps from your original post.

Thanks!

Hi Kelly,

[QUOTE=kelly_zarate;55800]Right, good idea! I was able to identify and delete some old snapshots using “snapper list”. I also deleted some larger log files from /var/log and ran “zypper clean”.
I had to reboot for the disk space figures to update; usage was down to 91%. I was then able to follow the steps from your original post.

Thanks![/QUOTE]

Just for future reference: restarting syslog might have been sufficient. It sounds as if “some larger log files” were the main contributors to your success, and as long as they’re held open by syslogd, the disk space won’t get freed even though you deleted their directory entries.
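
For example (assuming rsyslog, and with /var/log/messages only as a placeholder for whatever file is large), either of these would have released the space without a reboot:

truncate -s 0 /var/log/messages     # empty a large log file in place instead of deleting it
systemctl restart rsyslog.service   # or restart syslog so it closes and reopens its log files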

Also, you might want to look into running “logrotate” on that machine, to have larger log files rotated periodically. If you do so, please have a look at the default files in /etc/logrotate.d - I believe they contain a setting to retain 99 old versions, which may not be that helpful (although the rotated versions are at least compressed).
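
A quick way to verify that and to test the effect (just a sketch - logrotate normally runs from cron anyway):

grep -rn rotate /etc/logrotate.d/       # see how many old versions each log is configured to keep
logrotate --force /etc/logrotate.conf   # trigger a rotation run by hand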

Regards,
J