Resize Root XFS partition

This morning I came into the office and my root partition was full.

Using the YaST command line I cleaned up some snapper snapshots; however, that has only given me back about 100 MB.

Is it possible to resize an XFS root partition, or is this going to be a reinstall?

Thank you.

df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 3.8G 4.0K 3.8G 1% /dev
tmpfs 3.8G 76K 3.8G 1% /dev/shm
tmpfs 3.8G 10M 3.8G 1% /run
tmpfs 3.8G 0 3.8G 0% /sys/fs/cgroup
/dev/sda2 41G 40G 82M 100% /
/dev/sda2 41G 40G 82M 100% /var/crash
/dev/sda2 41G 40G 82M 100% /.snapshots
/dev/sda2 41G 40G 82M 100% /boot/grub2/i386-pc
/dev/sda2 41G 40G 82M 100% /boot/grub2/x86_64-efi
/dev/sda2 41G 40G 82M 100% /var/log
/dev/sda2 41G 40G 82M 100% /opt
/dev/sda2 41G 40G 82M 100% /var/spool
/dev/sda2 41G 40G 82M 100% /usr/local
/dev/sda2 41G 40G 82M 100% /tmp
/dev/sda2 41G 40G 82M 100% /srv
/dev/sda2 41G 40G 82M 100% /var/cache
/dev/sda2 41G 40G 82M 100% /var/lib/mariadb
/dev/sda2 41G 40G 82M 100% /var/lib/libvirt/images
/dev/sda2 41G 40G 82M 100% /var/lib/mailman
/dev/sda2 41G 40G 82M 100% /var/lib/named
/dev/sda2 41G 40G 82M 100% /var/lib/pgsql
/dev/sda2 41G 40G 82M 100% /var/opt
/dev/sda2 41G 40G 82M 100% /var/tmp
/dev/sda2 41G 40G 82M 100% /var/lib/machines
/dev/sda2 41G 40G 82M 100% /var/lib/mysql
/dev/sdb2 1008G 162G 846G 17% /home
tmpfs 764M 20K 764M 1% /run/user/0

(edited) While the / partition is Btrfs, I do have an XFS partition (/dev/sda3) that, according to YaST, contains /usr/local; however, that partition is 888 GiB.


fstab

UUID=338d62b2-9026-4fbb-8a18-61209afdecc6 swap swap defaults 0 0
UUID=03dccf18-66b6-4515-85bf-af245381d7e5 / btrfs defaults 0 0
UUID=03dccf18-66b6-4515-85bf-af245381d7e5 /boot/grub2/i386-pc btrfs subvol=@/boot/grub2/i386-pc 0 0
UUID=03dccf18-66b6-4515-85bf-af245381d7e5 /boot/grub2/x86_64-efi btrfs subvol=@/boot/grub2/x86_64-efi 0 0
UUID=03dccf18-66b6-4515-85bf-af245381d7e5 /opt btrfs subvol=@/opt 0 0
UUID=03dccf18-66b6-4515-85bf-af245381d7e5 /srv btrfs subvol=@/srv 0 0
UUID=03dccf18-66b6-4515-85bf-af245381d7e5 /tmp btrfs subvol=@/tmp 0 0
UUID=03dccf18-66b6-4515-85bf-af245381d7e5 /usr/local btrfs subvol=@/usr/local 0 0
UUID=03dccf18-66b6-4515-85bf-af245381d7e5 /var/cache btrfs subvol=@/var/cache 0 0
UUID=03dccf18-66b6-4515-85bf-af245381d7e5 /var/crash btrfs subvol=@/var/crash 0 0
UUID=03dccf18-66b6-4515-85bf-af245381d7e5 /var/lib/libvirt/images btrfs subvol=@/var/lib/libvirt/images 0 0
UUID=03dccf18-66b6-4515-85bf-af245381d7e5 /var/lib/machines btrfs subvol=@/var/lib/machines 0 0
UUID=03dccf18-66b6-4515-85bf-af245381d7e5 /var/lib/mailman btrfs subvol=@/var/lib/mailman 0 0
UUID=03dccf18-66b6-4515-85bf-af245381d7e5 /var/lib/mariadb btrfs subvol=@/var/lib/mariadb 0 0
UUID=03dccf18-66b6-4515-85bf-af245381d7e5 /var/lib/mysql btrfs subvol=@/var/lib/mysql 0 0
UUID=03dccf18-66b6-4515-85bf-af245381d7e5 /var/lib/named btrfs subvol=@/var/lib/named 0 0
UUID=03dccf18-66b6-4515-85bf-af245381d7e5 /var/lib/pgsql btrfs subvol=@/var/lib/pgsql 0 0
UUID=03dccf18-66b6-4515-85bf-af245381d7e5 /var/log btrfs subvol=@/var/log 0 0
UUID=03dccf18-66b6-4515-85bf-af245381d7e5 /var/opt btrfs subvol=@/var/opt 0 0
UUID=03dccf18-66b6-4515-85bf-af245381d7e5 /var/spool btrfs subvol=@/var/spool 0 0
UUID=03dccf18-66b6-4515-85bf-af245381d7e5 /var/tmp btrfs subvol=@/var/tmp 0 0
UUID=03dccf18-66b6-4515-85bf-af245381d7e5 /.snapshots btrfs subvol=@/.snapshots 0 0
UUID=53c3bab7-7607-49ce-b96b-def9eab54065 /usr/local xfs defaults 1 2
UUID=d9f64ac1-0652-4d26-963b-605ed7fd61c5 /home ext4 acl,user_xattr 1 2
UUID=91ca64af-f3a4-4123-bfe4-6c5b0215f919 /backup-grey ext4 acl,user_xattr,nofail 1 2

If you have a mountpoint at /usr/local, why does it NOT show up in your ‘df
-h’ output? That makes me wonder if it is really there, or mounted, or
relevant.

Unless I am missing something, you should be cleaning up snapshots, or
expanding the Btrfs volume, rather than thinking about XFS at all.

cd /
snapper list

df -h /usr/local


Good luck.

If you find this post helpful and are logged into the web interface,
show your appreciation and click on the star below.

If you want to send me a private message, please let me know in the
forum as I do not use the web interface often.

Thank you for your reply!

YaST Partitioner will not let me expand the Btrfs any further.

My thought was reducing the 888 GiB /usr/local mount point (no way it is taking up that much…) so I can expand the 40 GiB Btrfs, as it keeps running out of space. Why does this /usr/local not seem right? A du -sch * does not show me what is taking up 888 GiB.

On the / Btrfs partition I had 88 MB free before my original post; now, strangely, I have 36 GiB!

snapper list
Type | # | Pre # | Date | User | Cleanup | Description | Userdata
-------+-----+-------+--------------------------+------+---------+-----------------------+--------------
single | 0 | | | root | | current |
single | 1 | | Sat Mar 31 12:56:35 2018 | root | | first root filesystem |
single | 2 | | Sat Mar 31 13:07:10 2018 | root | number | after installation | important=yes
pre | 102 | | Tue Apr 3 11:34:21 2018 | root | number | yast samba-server |
pre | 103 | | Tue Apr 3 11:34:22 2018 | root | number | yast disk |
post | 104 | 102 | Tue Apr 3 11:34:32 2018 | root | number | |
post | 105 | 103 | Tue Apr 3 11:37:44 2018 | root | number | |
pre | 106 | | Tue Apr 3 11:37:58 2018 | root | number | yast scc |
pre | 107 | | Tue Apr 3 11:39:25 2018 | root | number | yast disk |
post | 108 | 106 | Tue Apr 3 11:40:10 2018 | root | number | |
pre | 109 | | Tue Apr 3 11:48:50 2018 | root | number | yast scc |
post | 110 | 109 | Tue Apr 3 11:50:06 2018 | root | number | |
pre | 111 | | Tue Apr 3 11:50:15 2018 | root | number | yast disk |

df -h /usr/local
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 41G 4.5G 36G 12% /usr/local
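One thing worth checking here: the fstab posted earlier actually contains two entries for /usr/local, one a Btrfs subvolume on /dev/sda2 and one the large XFS partition, which would explain why YaST and df disagree about it. A small sketch, using those two fstab lines from this thread as sample data, that pulls out every filesystem claiming the mountpoint (on the live system, `findmnt /usr/local` would show which one actually won):

```shell
# Sample data: the two /usr/local lines from the fstab posted in this thread.
fstab='UUID=03dccf18-66b6-4515-85bf-af245381d7e5 /usr/local btrfs subvol=@/usr/local 0 0
UUID=53c3bab7-7607-49ce-b96b-def9eab54065 /usr/local xfs defaults 1 2'

# Print the filesystem type of every entry whose mountpoint is /usr/local.
# Two lines of output means two filesystems are competing for the mount.
echo "$fstab" | awk '$2 == "/usr/local" {print $3}'
```

With this fstab, the command prints both `btrfs` and `xfs`, confirming the double claim; the df output above shows the Btrfs side is the one that is actually mounted.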

On 04/03/2018 01:24 PM, mhcadmin wrote:

> YaST Partitioner will not let me expand the Btrfs any further.

It may be useful to understand the disk structure to see what partitions
are actually out there:

sudo /sbin/fdisk -l /dev/sda
sudo /sbin/fdisk -l /dev/sdb

> My thought was reducing the 888 GiB /usr/local mount point (no way it is
> taking up that much…) so I can expand the 40 GiB Btrfs as it keeps
> running out of space. Why does this /usr/local not seem right? A du -sch *
> does not show me what is taking up 888 GiB.

I just meant that 888 GiB for anything seems unusual, and for something
random like /usr/local it seems insane. Also, the output did not show that
much space, and since /usr/local is likely a subvolume within Btrfs,
cleaning it would not do much for you, particularly not for another
partition like /home.
> On the / Btrfs partition before my original post I had 88 MB; now,
> strangely, I have 36 GiB!

You may find that it takes a little time (seconds to minutes, not hours)
for cleanup of snapshots to reflect available free space in the ‘df’ output.
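That delay is because Btrfs reclaims the space of deleted snapshots asynchronously (a background cleaner walks the dropped subvolumes), and plain `df` on Btrfs can mislead even at the best of times. A hedged sketch of how to ask the filesystem directly, guarded so it degrades gracefully on systems without btrfs-progs or a Btrfs root:

```shell
# Hedged sketch: query Btrfs allocation directly instead of trusting `df`.
# `btrfs filesystem usage` needs btrfs-progs and usually root privileges.
if command -v btrfs >/dev/null 2>&1; then
    btrfs filesystem usage / 2>/dev/null || echo "/ is not Btrfs, or not run as root"
else
    echo "btrfs-progs not installed"
fi
```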
> snapper list
> [snapshot list snipped; it matches the one quoted above]
>
> df -h /usr/local
> Filesystem Size Used Avail Use% Mounted on
> /dev/sda2 41G 4.5G 36G 12% /usr/local

Right, but /usr/local is a part of the overall Btrfs filesystem I think,
not anything separate, and definitely nothing near 888 GB in size.


Good luck.


Hey, thanks so much for your help with this. Highly appreciated!

Ahh subvolume… hm.

fdisk -l /dev/sda
Disk /dev/sda: 931 GiB, 999653638144 bytes, 1952448512 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x00076f7a

Device Boot Start End Sectors Size Id Type
/dev/sda1 2048 4208639 4206592 2G 82 Linux swap / Solaris
/dev/sda2 * 4208640 88100863 83892224 40G 83 Linux
/dev/sda3 88100864 1952448511 1864347648 889G 83 Linux

fdisk -l /dev/sdb
Disk /dev/sdb: 1.8 TiB, 1999844147200 bytes, 3905945600 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 840ACA4A-44B6-4B86-BA6B-F7999CC9E3D0

Device Start End Sectors Size Type
/dev/sdb1 2147753984 2987382783 839628800 400.4G Microsoft basic data
/dev/sdb2 264192 2147753065 2147488874 1T Microsoft basic data

Ran a snapper delete, killed all but the top three…
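For the record, snapper accepts number ranges, so "killed all but the top three" can be a single command. The numbers below are the ones from the snapper list earlier in this thread; check `snapper list` on your own system first. Shown as an echo rather than executed, since actually running it deletes snapshots:

```shell
# Hedged sketch: drop everything except snapshots 0-2 from the earlier
# list in one go. Echoed, not run, because the real command is destructive.
cmd="snapper delete 102-111"
echo "Would run: $cmd"
```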

Remaining Snapshots

snapper list
Type | # | Pre # | Date | User | Cleanup | Description | Userdata
-------+---+-------+--------------------------+------+---------+-----------------------+--------------
single | 0 | | | root | | current |
single | 1 | | Sat Mar 31 12:56:35 2018 | root | | first root filesystem |
single | 2 | | Sat Mar 31 13:07:10 2018 | root | number | after installation | important=yes

df -h

Filesystem Size Used Avail Use% Mounted on
devtmpfs 3.8G 4.0K 3.8G 1% /dev
tmpfs 3.8G 76K 3.8G 1% /dev/shm
tmpfs 3.8G 11M 3.8G 1% /run
tmpfs 3.8G 0 3.8G 0% /sys/fs/cgroup
/dev/sda2 41G 4.5G 36G 12% /
/dev/sda2 41G 4.5G 36G 12% /var/crash
/dev/sda2 41G 4.5G 36G 12% /.snapshots
/dev/sda2 41G 4.5G 36G 12% /boot/grub2/i386-pc
/dev/sda2 41G 4.5G 36G 12% /boot/grub2/x86_64-efi
/dev/sda2 41G 4.5G 36G 12% /var/log
/dev/sda2 41G 4.5G 36G 12% /opt
/dev/sda2 41G 4.5G 36G 12% /var/spool
/dev/sda2 41G 4.5G 36G 12% /usr/local
/dev/sda2 41G 4.5G 36G 12% /tmp
/dev/sda2 41G 4.5G 36G 12% /srv
/dev/sda2 41G 4.5G 36G 12% /var/cache
/dev/sda2 41G 4.5G 36G 12% /var/lib/mariadb
/dev/sda2 41G 4.5G 36G 12% /var/lib/libvirt/images
/dev/sda2 41G 4.5G 36G 12% /var/lib/mailman
/dev/sda2 41G 4.5G 36G 12% /var/lib/named
/dev/sda2 41G 4.5G 36G 12% /var/lib/pgsql
/dev/sda2 41G 4.5G 36G 12% /var/opt
/dev/sda2 41G 4.5G 36G 12% /var/tmp
/dev/sda2 41G 4.5G 36G 12% /var/lib/machines
/dev/sda2 41G 4.5G 36G 12% /var/lib/mysql
/dev/sdb2 1008G 162G 846G 17% /home
tmpfs 764M 20K 764M 1% /run/user/0

Looks like you have things worked out, then.

At the end of the day, your problem was solved by deleting old snapshots.
Depending on your box's history this may or may not be normal, but you
have a modest disk size anyway, so keep that in mind when building boxes
in the future. Since you cleaned up a bunch of snapshots before posting
them here, we cannot see what time period they covered, but the older
they are (and the farther apart they are in time), the greater the chance
they hold a lot of material: many different package patches, for example.
Often I have found that deleting just one snapshot frees a lot of space
while deleting others frees very little, because of what changed when
each was created. The last list of snapshots you shared showed a lot of
things done in YaST, which usually means each one holds only a few KiB
or MiB of change; unless you apply a lot of patches at once (usually via
‘zypper’ rather than YaST), a snapshot does not necessarily take much
space. This last cleanup you did freed very little, but then it only
covered a few days.


Good luck.


Thanks for your help on this. Yeah, this is a new box and I definitely should partition better.

Looks like I am going to rebuild this evening and read up on partition recommendations.

Cheers!

Before you go down the path too far, having a partitioning discussion here
may be useful.

In general I would probably not fault your partitioning, other than the
allocation of space overall to your root filesystem, with all of its
included subvolumes, and the size of some snapshots. Btrfs changes things
with regard to partitioning and understanding that can be tricky,
particularly if you come from a more traditional Linux/Unix background.

As I recall from your first post, you basically have four big areas of
data: your root ( / ) filesystem; /dev/sda3, which I have not seen mounted
anywhere (wasted space); some mystery/unused space on another disk
(/dev/sdb1); and your /home filesystem on /dev/sdb2. The rest of the stuff
is made up of subvolumes within the first Btrfs root ( / ) filesystem.

It seems like maybe, when the system was built, the first disk's second
and third partitions should have been one, so you would have had plenty
of space for the system to do its thing, keep reasonable snapshots
around, etc. That did not happen, so instead you were running on 40 GiB
for the OS, which is probably fine for some systems, but not when you
make a lot of changes and leave the snapshots from those changes around.
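In principle, the "second and third partitions should have been one" idea could even be done in place rather than by a full rebuild, since /dev/sda3 sits immediately after /dev/sda2 on the same disk and (per this thread) appears unused. A heavily hedged sketch, destructive to anything on sda3, shown as echoes only:

```shell
# Hedged, DESTRUCTIVE sketch: grow the Btrfs root into the adjacent
# /dev/sda3 space. Only plausible because sda3 directly follows sda2.
# Echoed, not executed; take backups before attempting anything like this.
echo "parted /dev/sda rm 3"               # remove the (apparently unused) third partition
echo "parted /dev/sda resizepart 2 100%"  # extend sda2 to the end of the disk
echo "partprobe /dev/sda"                 # re-read the partition table
final="btrfs filesystem resize max /"     # then grow the mounted filesystem
echo "$final"
```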

Snapshots can be tuned so you only keep so many of certain types/ages
around. For larger disks, you can keep more, and for smaller, fewer of
course. This is not partitioning per se, but it definitely impacts
partitioning, or rather is impacted by it. The default settings are
pretty good for most systems these days, but being aware of what happens
whenever you apply a patch is probably useful so that you get the benefit
of protection and rollback possibilities without possible drawbacks from
full disks.
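The tuning mentioned above lives in snapper's per-configuration file. A hedged excerpt with example values (not the defaults); the NUMBER_* keys cap how many "number"-cleanup snapshots survive, with a separate limit for those marked important=yes (such as "after installation"):

```shell
# /etc/snapper/configs/root (excerpt) -- example values, adjust to taste.
NUMBER_CLEANUP="yes"         # enable the number-based cleanup algorithm
NUMBER_LIMIT="10"            # keep at most 10 ordinary pre/post snapshots
NUMBER_LIMIT_IMPORTANT="5"   # keep at most 5 snapshots marked important=yes
```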

Thanks for sharing your results, and situations, here; they will probably
help others.


Good luck.
