On 07/25/2016 01:44 AM, swadm wrote:[color=blue]
On a test VM with SLES 12 SP1 with default btrfs setup I ran into a
situation with “no space left on device” on /var/…[/color]
Does this imply you have a separate filesystem setup for /var/… or
/var/lib/docker at least? Doing so is recommended, whether or not you use
BtrFS, per the documentation:
https://www.suse.com/documentation/sles-12/singlehtml/dockerquick/dockerquick.html
[color=blue]
As such a default setup will put the whole installation together on
one btrfs filesystem …
/
/.snapshots
/var/tmp
/var/spool
/var/opt
/var/lib/named
/var/lib/mailman
/var/crash
/var/lib/pgsql
/usr/local
/tmp
/srv
/opt
/boot/grub2/x86_64-efi
/boot/grub2/i386-pc
/var/log
… neither df nor “btrfs filesystem df” will help you hunt the
space hogs, as they all share one common pool.[/color]
Have you tried the following?
btrfs fi usage / # note the trailing slash, which specifies the mounted filesystem
[color=blue]
I was able to ease the pressure by removing snapshots that snapper
created automatically:
https://www.suse.com/documentation/sles11/stor_admin/data/trbl_btrfs_volfull.html[/color]
If you have a lot of snapshots for whatever reason, then cleaning those up
can help, though if you did not partition /var/lib/docker separately,
deleting snapshots may free data from all over the filesystem. That
especially applies as patches or other Yast/zypper-based changes are made
to the system, since snapshots are automatically generated then for
rollback purposes. Also, the “timeline” feature, configurable via Yast or
the /etc/snapper/configs/* files, will generate more periodic snapshots
than I would prefer, so I typically turn down the month/year settings, and
sometimes even the day-based settings, depending on the system’s purpose.
Initial setup of a system often involves a lot of patching and tweaking of
system settings with Yast, and all of those snapshots will be kept around
longer than you expect, so my general practice is to get a system working,
clear out extra snapshots, then move forward.
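If it helps, here is roughly how I do that cleanup; the snapshot numbers
below are placeholders, so check the list output on your own system first:

snapper list # show all snapshots with their numbers, types, and dates
snapper delete 42-57 # delete a range of snapshot numbers (placeholder range)

The timeline retention itself can be turned down in
/etc/snapper/configs/root (or via Yast), for example:

TIMELINE_LIMIT_DAILY="7"
TIMELINE_LIMIT_MONTHLY="0"
TIMELINE_LIMIT_YEARLY="0"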
[color=blue]
I would like to ask here if there is a good means to get an overview
of where the btrfs filesystem usage actually comes from.[/color]
Let me know if the ‘usage’ (vs. ‘df’) subcommand of ‘btrfs filesystem’
works, as shown above.
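If you want a per-subvolume breakdown on top of that, one option, and this
is my suggestion rather than anything from the docs above, is btrfs quota
accounting; note that it adds some overhead:

btrfs quota enable / # start qgroup accounting; the initial scan can take a while
btrfs qgroup show / # referenced/exclusive bytes per subvolume, snapshots included

The ‘exclusive’ column is the interesting one, as it is roughly how much
space deleting that subvolume or snapshot would free.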
[color=blue]
In our case, /var/lib/docker with various docker images appears to be
the culprit, but even there, it seems to be hard to get a realistic
usage information: “du -ksh /var/lib/docker” is reporting far too much
(I think because of the btrfs layering employed by docker), and
“docker images” not enough to justify the usage.[/color]
If ‘du’ shows a lot of space used, I would not expect that to be just
BtrFS-related. The last time I checked, ‘du’ (the command, just to be
unambiguous) only looked at the current version of the filesystem, meaning
the current snapshot, but perhaps that has changed.
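If it does turn out to be layering-related, Docker’s btrfs storage driver
creates a subvolume per image layer, and ‘du’ would count data shared
between those layers once per layer. A couple of quick checks (the grep
pattern is just illustrative):

btrfs subvolume list / | grep -i docker | wc -l # how many subvolumes Docker has created
du -xsh /var/lib/docker # -x should skip subvolumes, since each reports its own device ID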
[color=blue]
I think a good idea might be relocating /var/lib/docker to a separate
partition out of scope of the btrfs default filesystem?[/color]
Yes, and that’s the official recommendation as well per the link above.
Just so we can understand better, how often are you making changes to your
images? Is this a box on which you are actively developing images to be
used as containers going forward? If so, the BtrFS filesystem’s CoW
feature can be really nice for tracking changes, but it can also be a bit
expensive for the same reason.
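If you do relocate it, here is a minimal sketch of the steps; the device
name /dev/sdb1 is purely a placeholder, and keep the old directory around
until everything checks out:

systemctl stop docker
mkfs.btrfs /dev/sdb1 # hypothetical device; staying on btrfs keeps Docker on its btrfs storage driver
mv /var/lib/docker /var/lib/docker.old # keep the old data until the new setup is verified
mkdir /var/lib/docker
mount /dev/sdb1 /var/lib/docker
echo '/dev/sdb1 /var/lib/docker btrfs defaults 0 0' >> /etc/fstab
systemctl start docker # then re-pull or rebuild your images

Copying the old /var/lib/docker across is tricky because the layers are
btrfs subvolumes, so plan on re-pulling images into the new location.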
Something else I recall from SUSECon last year, which I had not fully
grasped yet, was that every command executed in a Dockerfile contributes
to the final size of the image because, much like git, Docker treats every
command in a Dockerfile as a commit to be remembered going forward.
Chaining commands (with ‘&&’) to make one big command, including commands
to clean up the image filesystem (cached package files, etc.), can help
reduce the final image size by having everything treated as a single
commit instead of multiple commits. I doubt this is very related to what
you were seeing, but figured I would mention it in case it helps generally.
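A contrived Dockerfile fragment to illustrate; the package names are
arbitrary:

# two RUN lines = two commits; the cache deleted in the second still occupies the first layer
RUN zypper --non-interactive install gcc make
RUN zypper clean --all

# one chained RUN = one commit; the cache never lands in any layer
RUN zypper --non-interactive install gcc make && \
    zypper clean --all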
–
Good luck.