Partition Resizing

I have a server running GroupWise. It has a small Btrfs partition (which is full) and a large XFS partition that is mostly unused. All the GroupWise apps run in the FULL Btrfs partition. If this were Windows I could easily shrink one partition and expand the other. Apparently you can’t shrink XFS? :frowning:

How do I fix this space allocation problem? I tried using dd to “clone” to a larger drive (500GB to 1TB), but the image created was corrupted somehow. There should be an EASY solution, but I can’t find one or think of one. [I saw KB 7018329, which talks about adding a drive, but that seems to deal with virtual drives, and I am dealing with a real hard drive. Plus, it seems like data could easily be corrupted with one partition spanning 2 drives, and that would cause mounting problems.]

[QUOTE=holub457;57618]I have a server running GroupWise. It has a small Btrfs partition (which is full) and a large XFS partition that is mostly unused. All the GroupWise apps run in the FULL Btrfs partition. If this were Windows I could easily shrink one partition and expand the other. Apparently you can’t shrink XFS? :frowning:

How do I fix this space allocation problem? I tried using dd to “clone” to a larger drive (500GB to 1TB), but the image created was corrupted somehow. There should be an EASY solution, but I can’t find one or think of one. [I saw KB 7018329, which talks about adding a drive, but that seems to deal with virtual drives, and I am dealing with a real hard drive. Plus, it seems like data could easily be corrupted with one partition spanning 2 drives, and that would cause mounting problems.][/QUOTE]
Hi and Welcome to the Forum :slight_smile:
So on the btrfs drive are you running snapper/snapshots?

Are the btrfs maintenance tools installed and running?

Can you show the output from;

snapper list
btrfs fi usage /
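
To answer the maintenance-tools question, a quick check (assuming the standard SLES btrfsmaintenance package, which provides the cron scripts);

# Is the btrfsmaintenance package installed?
rpm -q btrfsmaintenance
# Its btrfs-balance script should appear here if it is
ls /etc/cron.weekly/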

[QUOTE=holub457]All the GroupWise apps run in the FULL Btrfs partition.[/QUOTE]

If by that you mean your GroupWise server is installed on the Btrfs
partition, I just hope your domain(s) and post office(s) aren’t there
too.

Btrfs is not the right filesystem on which to install a database (i.e. GroupWise). XFS is where your domain and post office should be installed. Because it is a different filesystem, it would also have to live on a different partition than the one formatted for Btrfs.

[QUOTE=holub457]Plus, it seems like data could easily be corrupted with one partition spanning 2 drives, and that would cause mounting problems.[/QUOTE]

First of all, a partition cannot span multiple drives. A partition is
wholly contained within a single drive.

If you use XFS for your GroupWise domain and post office, it (XFS)
could reside on the same physical drive along with your Btrfs partition
but a better solution, one that would provide better performance, would
be to use a completely separate hard drive for your GroupWise data.

You don’t say whether or not GroupWise is using eDirectory. Just a
note: eDirectory should never be installed on a Btrfs filesystem.


Kevin Boyle - Knowledge Partner

[QUOTE=malcolmlewis;57619]Hi and Welcome to the Forum :slight_smile:
So on the btrfs drive are you running snapper/snapshots?

Are the btrfs maintenance tools installed and running?

Can you show the output from;

snapper list
btrfs fi usage /[/QUOTE]

Type | # | Pre # | Date | User | Cleanup | Description | Userdata
-------+------+-------+---------------------------+------+---------+------------------------+---------------
single | 0 | | | root | | current |
single | 1 | | Mon Feb 12 13:23:24 2018 | root | | first root filesystem |
pre | 925 | | Mon Mar 4 18:06:04 2019 | root | number | zypp(ruby) | important=yes
post | 926 | 925 | Mon Mar 4 18:07:07 2019 | root | number | | important=yes
pre | 929 | | Tue Mar 5 18:11:31 2019 | root | number | zypp(ruby) | important=yes
post | 930 | 929 | Tue Mar 5 18:12:19 2019 | root | number | | important=yes
pre | 1006 | | Fri Mar 29 12:05:24 2019 | root | number | zypp(ruby) | important=yes
post | 1007 | 1006 | Fri Mar 29 12:07:32 2019 | root | number | | important=yes
pre | 1046 | | Thu Apr 4 06:09:44 2019 | root | number | zypp(ruby) | important=yes
post | 1047 | 1046 | Thu Apr 4 06:11:33 2019 | root | number | | important=yes
pre | 1110 | | Tue Apr 30 17:18:45 2019 | root | number | zypp(ruby) | important=yes
post | 1111 | 1110 | Tue Apr 30 17:20:12 2019 | root | number | | important=yes
pre | 1117 | | Sat May 4 16:12:06 2019 | root | number | yast online_update |
post | 1118 | 1117 | Sat May 4 16:12:36 2019 | root | number | |
pre | 1119 | | Tue May 7 07:02:21 2019 | root | number | yast online_update |
pre | 1120 | | Tue May 7 07:03:04 2019 | root | number | zypp(ruby) | important=no
post | 1121 | 1120 | Tue May 7 07:03:58 2019 | root | number | | important=no
post | 1122 | 1119 | Tue May 7 07:04:13 2019 | root | number | |
pre | 1123 | | Tue May 7 08:49:13 2019 | root | number | yast disk |
pre | 1124 | | Tue May 7 08:49:18 2019 | root | number | yast disk |
post | 1125 | 1123 | Tue May 7 08:49:32 2019 | root | number | |
post | 1126 | 1124 | Tue May 7 08:52:51 2019 | root | number | |
pre | 1127 | | Tue May 7 18:06:34 2019 | root | number | yast online_update |
pre | 1128 | | Tue May 7 18:07:10 2019 | root | number | zypp(ruby) | important=no
post | 1129 | 1128 | Tue May 7 18:10:52 2019 | root | number | | important=no
post | 1130 | 1127 | Tue May 7 18:10:56 2019 | root | number | |

S4:~ # btrfs fi usage /
Overall:
Device size: 40.00GiB
Device allocated: 35.32GiB
Device unallocated: 4.68GiB
Device missing: 0.00B
Used: 32.03GiB
Free (estimated): 7.07GiB (min: 4.73GiB)
Data ratio: 1.00
Metadata ratio: 2.00
Global reserve: 82.59MiB (used: 0.00B)

Data,single: Size:33.01GiB, Used:30.62GiB
/dev/sda2 33.01GiB

Metadata,DUP: Size:1.12GiB, Used:722.72MiB
/dev/sda2 2.25GiB

System,DUP: Size:32.00MiB, Used:16.00KiB
/dev/sda2 64.00MiB

Unallocated:
/dev/sda2 4.68GiB

I know only enough to be dangerous. But I think the domains and post office are both installed on the Btrfs partition. When we installed, we accepted all the default locations. The Btrfs partition fills up when we fail to archive GroupWise data for a while, and when we archive it shows as less full. So on that alone I believe that is where the data is located. … No eDirectory at all.

[QUOTE=holub457;57627][snip: snapper list and btrfs fi usage output above][/QUOTE]
Hi
So you can configure the number of snapshots kept via the configuration file;

/etc/snapper/configs/root

I would suggest probably only keeping four (4), but your call…
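
For reference, the relevant settings in that file look something like this (example values only - adjust to whatever you want to keep);

# Let the "number" cleanup algorithm prune the old zypp/yast snapshot pairs
NUMBER_CLEANUP="yes"
# How many regular and "important" number-cleanup snapshots to keep
NUMBER_LIMIT="4"
NUMBER_LIMIT_IMPORTANT="4"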

Once you configure the above there is a cron job for snapper ( /etc/cron.daily/suse.de-snapper) that you can run manually and then check snapper list again.

Then run the btrfs maintenance cron job (/etc/cron.weekly/btrfs-balance) manually to recover disk space and you should be good to go space wise and no need to resize anything.

This is assuming the package btrfsmaintenance is installed…
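
Putting that together, a manual run looks something like this as root (these are just the cron scripts and commands already mentioned above);

# Prune snapshots to the new limits now instead of waiting for the daily cron run
/etc/cron.daily/suse.de-snapper
snapper list

# Rebalance so freed chunks are returned to unallocated space, then recheck usage
/etc/cron.weekly/btrfs-balance
btrfs fi usage /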

[QUOTE=malcolmlewis;57631]Hi
So you can configure the number of snapshots kept via the configuration file;

/etc/snapper/configs/root

I would suggest probably only keeping four (4), but your call…

Once you configure the above there is a cron job for snapper ( /etc/cron.daily/suse.de-snapper) that you can run manually and then check snapper list again.

Then run the btrfs maintenance cron job (/etc/cron.weekly/btrfs-balance) manually to recover disk space and you should be good to go space wise and no need to resize anything.

This is assuming the package btrfsmaintenance is installed…[/QUOTE]

Great, this freed up a great amount of space. Thanks

[QUOTE=holub457]I think the domains and post office are both installed on the Btrfs partition[/QUOTE]

Btrfs is a supported filesystem for GroupWise, but it is not the best choice for I/O-intensive applications like GroupWise.

You would likely obtain better performance if your domain and post office were installed on XFS, but that is your choice.
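
If you want to confirm which filesystem the domain and post office are actually on before deciding, a df against their directories will show it (the path below is only a placeholder - substitute your real GroupWise paths):

# The Type column will show btrfs or xfs for the partition holding the data
df -hT /grpwise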


Kevin Boyle - Knowledge Partner