Hi all,
starting with Longhorn, I understand that it uses thin-provisioned block devices. If we fill a volume with data, it grows (needs more space on disk). Deleting data from the volume does not, by itself, free the space on disk again.
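To illustrate what I mean, here is a small sketch using an ordinary sparse file (plain Python on Linux, not Longhorn itself; the thin-provisioning behavior is analogous):

```python
import os
import tempfile

def used_bytes(path: str) -> int:
    # st_blocks counts 512-byte units of actually allocated disk space
    return os.stat(path).st_blocks * 512

fd, path = tempfile.mkstemp()
os.close(fd)

# 1 GiB apparent size, almost nothing actually allocated (thin provisioning)
with open(path, "r+b") as f:
    f.truncate(1 << 30)
print("after create: ", used_bytes(path), "bytes used on disk")

# writing data allocates real blocks -> the image grows on disk
with open(path, "r+b") as f:
    f.write(b"x" * (64 << 20))  # 64 MiB of data
print("after writing:", used_bytes(path), "bytes used on disk")

# "deleting" inside the image (here: overwriting with zeros) frees nothing;
# only an explicit discard/hole-punch would give the blocks back
with open(path, "r+b") as f:
    f.write(b"\0" * (64 << 20))
print("after zeroing:", used_bytes(path), "bytes used on disk")

os.unlink(path)
```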
This especially needs attention when we have to back up, since there is apparently neither compression related to the amount of data nor an incremental backup.
Are there any best practices to address this challenge?
Well, an offline copy of volumes of xxxGi will result in substantial downtime.
Any better ideas / practices?
Thanks, Wolfram
The backup is incremental and compressed (though I think compression may not help much in most cases, and it slows down the process, so we're considering removing compression from the backup): https://longhorn.io/docs/1.0.0/architecture/#backup-and-restore
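As a toy sketch of the idea (not Longhorn's actual implementation, just the mechanism the linked docs describe: content-addressed blocks, each compressed, with already-stored blocks skipped):

```python
import gzip
import hashlib

BLOCK_SIZE = 2 << 20  # the linked docs describe 2 MiB backup blocks

def backup(volume_path: str, store: dict) -> list:
    """Back up a volume image into `store` (content hash -> compressed block).

    Blocks whose hash is already in the store are skipped, which is what
    makes the second and later backups incremental.
    """
    manifest = []  # ordered block hashes; enough to reassemble this snapshot
    with open(volume_path, "rb") as vol:
        while block := vol.read(BLOCK_SIZE):
            digest = hashlib.sha256(block).hexdigest()
            if digest not in store:  # upload only new/changed blocks
                store[digest] = gzip.compress(block)
            manifest.append(digest)
    return manifest
```

A second backup of a mostly unchanged volume then only stores the few blocks that actually changed.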
@yasker: Thanks, in the meantime I shrank the volume via an offline copy.
Related to your answer, there is a general consideration of whether to do the backup at the volume level or (better) at the application (file-system) level.
Not sure I fully understood the DR backup story.
Longhorn’s built-in backup currently happens at the block level, so it won’t help much with shrinking the volume size. We are looking into application backup/restore as well, potentially with hooks to enable application-aware backups, but that won’t help with shrinking the volume size either.
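The hook idea could look roughly like this (my own sketch, not an existing Longhorn API; `freeze` and `thaw` are hypothetical callbacks the application would provide):

```python
from contextlib import contextmanager

@contextmanager
def quiesced(freeze, thaw):
    """Flush/pause the application so the block-level snapshot taken
    underneath it is consistent, and resume it afterwards."""
    freeze()
    try:
        yield
    finally:
        thaw()  # always resume, even if the snapshot fails

# e.g. with quiesced(db.flush_and_lock, db.unlock): take_snapshot()
```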
But if your concern is mostly the downtime needed to restore the data, a DR volume provides incremental restore, so you don’t need to copy xxxGiB of data. The DR volume tries to always stay in sync with the original volume, so the downtime is minimized.
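Continuing the toy backup sketch above (again, not Longhorn's actual code), the reason a DR volume activates quickly is that each sync only moves the delta:

```python
import gzip

BLOCK_SIZE = 2 << 20  # same block size as in the backup sketch above

def sync_dr_volume(volume_path: str, manifest: list, store: dict,
                   applied: list) -> list:
    """Bring a standby (DR) volume up to date with the latest backup.

    Only blocks whose hash differs from what the standby already holds
    get decompressed and written, so a sync moves a delta, not xxxGiB.
    """
    with open(volume_path, "r+b") as vol:
        for i, digest in enumerate(manifest):
            if i < len(applied) and applied[i] == digest:
                continue  # block unchanged since the last sync, skip it
            vol.seek(i * BLOCK_SIZE)
            vol.write(gzip.decompress(store[digest]))
    return list(manifest)  # becomes `applied` for the next sync
```

Because the standby keeps applying these deltas in the background, activating it at disaster time needs at most one small final sync instead of a full restore.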