Hi and welcome to the Forum
I would guess you are running btrfs snapshots that are not configured in /etc/snapper/configs/root? You may not have the btrfs maintenance service running?
So, the first step would be to see how many snapshots you have via:
snapper list
Then have a look at the config file, consider winding back the totals, and run the weekly cron job manually.
Then it’s a matter of ensuring the maintenance scripts are installed and running those cron jobs to clean up btrfs.
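As a sketch of what that could look like (the snapshot numbers below are placeholders, take real ones from your own “snapper list” output):

# list existing snapshots; the first column is the snapshot number
snapper list
# check the retention limits (NUMBER_LIMIT, TIMELINE_LIMIT_*) in the config
cat /etc/snapper/configs/root
# delete a range of old snapshots by number, e.g. numbers 10 through 25
snapper delete 10-25
# run the cleanup algorithms by hand instead of waiting for the cron job
snapper cleanup number
snapper cleanup timeline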
(Preliminary word of caution: if things go wrong, your system can be wrecked. So before starting the steps below, have some form of backup for your system, e.g. a snapshot taken by VMware, if that’s what it is running on.)
/dev/mapper/system-root
so you are running LVM? Please check by running the commands “vgs” (showing a list of configured volume groups - it should show a row for “system”), “lvs” (showing all logical volumes, at least one for “root” on “system”) and “pvs” (listing all known physical volumes, at least /dev/sda2).
I’d be interested in the output of “pvs” - does it list /dev/sda2 and is it already showing the increased size (assuming that you have resized the partition according to the KB article you referenced, and then rebooted the machine)?
If you’re on LVM, you could have avoided resizing that existing partition, because that KB article is for those default installs where BtrFS is on a partition (which cannot be resized live). With LVM, you could have simply created a new partition from the added space, added it to the volume group and then resized your logical volume, as you are about to do now anyhow:
Once your PV knows about the size change (as it should after a reboot - if not, call “pvresize /dev/sda2” explicitly), your volume group will show the newly available room as “VFree” in the “vgs” command output. It’s then only a matter of adding some (or all) of that space to your root file system’s logical volume by calling e.g. “lvresize -L +50G /dev/system/root” (to add 50 GB of space). Now you have a larger “partition” (actually a “logical volume”) with BtrFS on it - but the file system does not yet know about that added space. So the last step is to resize the file system: “btrfs filesystem resize max /”
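Put together, the whole sequence would look roughly like this (the “+50G” is only an example value, use whatever fits the space you actually added):

# make the PV pick up the enlarged partition (usually automatic after a reboot)
pvresize /dev/sda2
# check that the volume group now shows free space under VFree
vgs
# grow the root logical volume, here by an example 50 GB
lvresize -L +50G /dev/system/root
# finally let the btrfs file system grow into the enlarged volume
btrfs filesystem resize max /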
Again, only follow these steps if you’re actually on LVM (thus “vgs” is showing the VG “system”), and after having made sure you have a way to revert the changes if things go wrong (e.g. via a snapshot of your VM).
[QUOTE=malcolmlewis;55616]Hi and welcome to the Forum
I would guess you are running btrfs snapshots that are not configured in /etc/snapper/configs/root? You may not have the btrfs maintenance service running?
So, the first step would be to see how many snapshots you have via:
snapper list
Then have a look at the config file, consider winding back the totals, and run the weekly cron job manually.
Then it’s a matter of ensuring the maintenance scripts are installed and running those cron jobs to clean up btrfs.
malcolmlewis thank you for the reply!
I verified that snapper is configured and appears to be running the weekly cron job. I also deleted several snapshots, but still didn’t free up any disk space.
Hi
Then you’re going to have to hunt down what’s consuming the space; I would suspect a large log file in /var/log, or maybe journals or coredumps.
du -sh /var/log
journalctl --disk-usage
coredumpctl
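If the journal turns out to be the big consumer, it can be shrunk in place, and a recursive “du” helps to spot other offenders - a sketch (the 200M limit is just an example value):

# shrink the systemd journal to roughly 200 MB (example value)
journalctl --vacuum-size=200M
# show the 20 biggest entries on the root file system (“-x” keeps du on this fs)
du -xh / 2>/dev/null | sort -h | tail -n 20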
Unfortunately, when I attempt to use the vgs command, I get a message
/var/run/lvm/lock/V_system:aux: open failed: No space left on device
Can’t get lock for system
Hm, a bad position then - to assign more space, you’ll have to use LVM - but to use that, you’d need at least a little free space on “/”.
Also, I get the same message when attempting lvs.
Yes, unfortunately that’s expected, since it’s another subcommand from the same suite of programs (actually both are links to the same program, “lvm”, each invoking the corresponding built-in LVM command).
On top of Malcolm’s recommendation to clean up log files, running “zypper clean” might free a few more bytes by clearing libzypp’s repository cache. Once at least a few kB are free, you might want to retry the LVM commands.
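As a sketch (“--all” also drops the cached metadata, not just the packages):

# clear cached packages and metadata from the libzypp cache
zypper clean --all
# then retry the LVM commands
vgs
lvs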
If none of that helps, it might be time to boot from separate installation media and run all LVM commands from a recovery environment that doesn’t rely on your current hard-disk-based “/” file system.
[QUOTE=kzarate;55789]I was able to free up enough disk space for your original solution to work!
Thank you for sharing your expertise and the quick reply!!![/QUOTE]
Hi
Thanks for the feedback! Perhaps a quick summary of what you found consuming space and what you did to resolve it may help other Forum users in the future.
[QUOTE]Unfortunately, when I attempt to use the vgs command, I get a message
/var/run/lvm/lock/V_system:aux: open failed: No space left on device
Can’t get lock for system
Hm, a bad position then - to assign more space, you’ll have to use LVM - but to use that, you’d need at least a little free space on “/”.
Also, I get the same message when attempting lvs.
Yes, unfortunately that’s expected, since it’s another subcommand from the same suite of programs (actually both are links to the same program, “lvm”, each invoking the corresponding built-in LVM command).
On top of Malcolm’s recommendation to clean up log files, running “zypper clean” might free a few more bytes by clearing libzypp’s repository cache. Once at least a few kB are free, you might want to retry the LVM commands.
If none of that helps, it might be time to boot from separate installation media and run all LVM commands from a recovery environment that doesn’t rely on your current hard-disk-based “/” file system.
Regards,
J[/QUOTE]
Right, good idea! I was able to delete some old snapshots after finding them with “snapper list”. I also deleted some larger log files from /var/log and ran “zypper clean”.
I had to reboot for the disk space to update; usage was then down to 91%. I was then able to follow the steps from your original post.
Thanks!
[QUOTE=kelly_zarate;55800]Right, good idea! I was able to delete some old snapshots after finding them with “snapper list”. I also deleted some larger log files from /var/log and ran “zypper clean”.
I had to reboot for the disk space to update; usage was then down to 91%. I was then able to follow the steps from your original post.
Thanks![/QUOTE]
Just for future reference - restarting syslog might have been sufficient. It sounds as if “some larger log files” were the main contributors to your success, and as long as they are held open by syslogd, the disk space won’t get freed even though you deleted their directory entry.
Also, you might want to look into running “logrotate” on that machine, to have larger log files rotated periodically. If you do so, please have a look at the default files in /etc/logrotate.d - I believe they contain a setting to retain 99 old versions, which may not be that helpful (although the rotated versions are at least compressed).
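As a sketch of both points (the service name may differ on your system, and the log file path is a hypothetical example):

# show deleted files that are still held open - their space is not freed yet
lsof +L1
# restart syslog so it lets go of the deleted log files
systemctl restart syslog.service

And a minimal logrotate drop-in, e.g. /etc/logrotate.d/mylog (hypothetical file name), that keeps four compressed weekly rotations instead of 99:

/var/log/mylog.log {
    weekly
    rotate 4
    compress
    missingok
}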