SLES disk space not freed up after deleting files

Hi All,

We noticed that the storage on one of our servers is already at 91%, due to backup files that were stored on the same partition as root. We have deleted the backup files, and also checked for and deleted snapshots, since we've seen before that snapshots can take up significant storage as well; we have also disabled snapper.
But in the current scenario, deleting the files isn't freeing up the space.

I checked whether any services are still holding the deleted files open, but I don't see anything significant.
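For anyone following along: space from a deleted file is only returned once every process holding it open closes it. The usual check is lsof (e.g. lsof +L1, which lists open files whose link count has dropped to zero). Where lsof isn't available, /proc shows the same thing. A minimal sketch (the file name and sizes are made up for the demo):

```shell
# Create a file, hold it open in the background, then delete it.
tmpdir=$(mktemp -d)
dd if=/dev/zero of="$tmpdir/big" bs=1M count=10 status=none
tail -f "$tmpdir/big" >/dev/null &
holder=$!
sleep 1                 # give tail time to open the file
rm "$tmpdir/big"
# The process's fd table still references the deleted file,
# so its 10 MiB are not freed yet:
held=$(ls -l "/proc/$holder/fd" 2>/dev/null | grep -c deleted)
echo "deleted-but-open files held by PID $holder: $held"
kill "$holder"
rmdir "$tmpdir"
```

Once the holding process exits (or is restarted, for a long-running service), the space actually comes back.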

btrfs filesystem usage output:
Overall:
Device size: 498.00GiB
Device allocated: 494.96GiB
Device unallocated: 3.03GiB
Device missing: 0.00B
Used: 301.22GiB
Free (estimated): 181.01GiB (min: 181.01GiB)
Data ratio: 1.00
Metadata ratio: 1.00
Global reserve: 512.00MiB (used: 0.00B)

Data,single: Size:466.93GiB, Used:288.96GiB
/dev/mapper/system-root 466.93GiB

Metadata,single: Size:28.00GiB, Used:12.26GiB
/dev/mapper/system-root 28.00GiB

System,single: Size:32.00MiB, Used:96.00KiB
/dev/mapper/system-root 32.00MiB

Unallocated:
/dev/mapper/system-root 3.03GiB

Please also note we’ve disabled btrfs quota due to performance issues we encountered.

Thanks in advance!
Rob

I’ve seen people use hardlinked files and get confused about why deleting the files doesn’t free space. With a hard link, that’s because you didn’t delete all of the links: the file’s data is still allocated, and it just has one less directory entry.

When I’m looking for space hogs I tend to either run du -hs /* and work down from there, or use a graphical tool to help find them.
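The drill-down above can be sketched like this (a suggestion, not the only way; the flags are standard GNU du/sort):

```shell
# Largest directories first, one level at a time.
# -x stays on one filesystem, so /proc, /sys and other mounts
# don't inflate the numbers; 2>/dev/null hides permission errors.
top=$(du -xh --max-depth=1 / 2>/dev/null | sort -rh | head -15)
echo "$top"
# Then repeat on whichever directory looks biggest, e.g.:
#   du -xh --max-depth=1 /var 2>/dev/null | sort -rh | head -15
```

Graphical alternatives like ncdu do the same walk interactively.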

Hi,

Thanks for your inputs. Will try that.
In our case, I kept deleting more unused files and suddenly it all got freed up, going from 91% to 63%.
Could hard links be what was causing this issue?

If you have them, yes. When you’re measuring disk space used, tools like du can count each hard link’s path separately, but the filesystem only frees the space once all of the hard links to a file are deleted.
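You can check for this directly: the link count is part of the inode, and anything above 1 means other directory entries still point at the same data. A quick sketch (the paths are just examples; point find at the directory you cleaned up):

```shell
# %h prints the hard-link count for a file:
links=$(stat -c '%h' /etc/hosts)
echo "/etc/hosts link count: $links"
# Hunt for multiply-linked regular files under a directory
# (-xdev stays on one filesystem):
find /etc -xdev -type f -links +1 2>/dev/null | head -5
```

Files sharing an inode number (first column of ls -li) are the same underlying data.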

As an example, we had a bunch of large files (tens of TB/day), and a database app that needed their names in a particular format while we were already using them under their normal names. What the dev ended up doing was adding a subdirectory containing hard links to the files, named the way the database app wanted.

Most disk space measurement tools then thought the directories held two copies of the files and doubled their estimate of the space used. If you deleted the db hard-link subdirectories, or the original files, but not both, free space wouldn’t go up, yet du (or folder properties in Windows) would make it look like half the files were gone. Only once you deleted both “copies” would you see the actual free space increase.
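That double-counting is easy to reproduce in a scratch directory (names and sizes below are made up for the demo):

```shell
d=$(mktemp -d)
mkdir "$d/data" "$d/db_names"
dd if=/dev/zero of="$d/data/file1" bs=1M count=5 status=none
ln "$d/data/file1" "$d/db_names/weird_name_0001"   # hard link, not a copy
# Measured in separate du invocations, each directory reports
# the full 5 MiB, so a naive sum double-counts:
a=$(du -sk "$d/data" | cut -f1)
b=$(du -sk "$d/db_names" | cut -f1)
echo "naive sum: $((a + b)) KiB"
# Measured in one pass, GNU du notices the shared inode and
# counts it once, which matches what the filesystem really uses:
total=$(du -sk "$d" | cut -f1)
echo "actual usage: $total KiB"
rm -r "$d"
```

Deleting either data/file1 or the weird-named link alone drops the naive sum but not the real usage; only removing both frees the 5 MiB.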