[QUOTE=jmozdzen;19974]
mount /HANA_DATA[/QUOTE]
or should I rather use:
#mount /dev/sdb1 /HANA_DATA
And after re-mounting, should I modify /etc/fstab so that this re-mounted file system is mounted at boot time, or leave fstab untouched?
Regards
Hi GN,
[QUOTE=gniecka;19976]or should I rather use:
#mount /dev/sdb1 /HANA_DATA
And after re-mounting, should I modify /etc/fstab so that this re-mounted file system is mounted at boot time, or leave fstab untouched?[/QUOTE]
/etc/fstab contains a list of file systems and where they are to be mounted. Typically this list is processed during boot, but it is consulted for any later mount, too. It is a configuration file (not a file describing the actual mount status).
If you at some time after boot enter
mount /HANA_DATA
the mount command will take a look at /etc/fstab and will, in your case, find a corresponding entry
and therefore know that “/dev/disk/by-id/scsi-360050760409bf7181a83af4e23747b4e-part1” is to be mounted at /HANA_DATA. As such, there’s no need to explicitly mention the device when calling mount. I’d even advise against giving the device name to the mount command: if you call mount without it, mount is forced to look up the information in a similar manner as is done during boot, so if there’s a typo in /etc/fstab, you’ll notice immediately.
Likewise, it is not required to update /etc/fstab: you still want the same device mounted at the same location (“directory”); it’s just that you re-created that mount-point directory (after moving the old one aside).
As a side note to the casual reader: it is not required to modify the file ownership or permissions of the newly created directory, either. Once mounted, that information (user/group ownership and directory permissions) is taken from the root directory of the mounted file system.
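Just as a rough sketch of that lookup: the fstab line below uses the device named in this thread, but the file-system type “xfs” and the mount options are assumptions on my part.

```shell
# What "mount /HANA_DATA" does, in essence: find the fstab entry whose
# second field (the mount point) matches, and take the device from field 1.
# Hypothetical /etc/fstab line; fs type and options are assumed.
line='/dev/disk/by-id/scsi-360050760409bf7181a83af4e23747b4e-part1 /HANA_DATA xfs defaults 0 0'
dev=$(printf '%s\n' "$line" | awk '$2 == "/HANA_DATA" { print $1 }')
echo "$dev"    # the device that mount would pick for /HANA_DATA
```

A typo in the mount-point field would make this lookup come up empty, which is exactly why calling mount without the device name surfaces fstab mistakes early.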
Regards,
Jens
Guys,
I have found root of my issue.
I have mounted windows NFS share for backups:
10.0.0.15:/HANA 1.9T 704G 1.3T 37% /mnt/share01
When I checked the logs, I saw that one day this share (the Windows server) was unavailable; despite this, my backup script wrote its backup into the local /mnt/share01 folder, and that is where my space was consumed.
After unmounting this NFS share I removed this “local” folder, and the free space is now shown correctly…
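One way to guard a backup script against exactly this failure mode is to check that the target really is a mount point before writing. A sketch of my own (Linux-specific, since it reads /proc/mounts; the helper name safe_backup is hypothetical):

```shell
# Refuse to write the backup if the target is not a real mount point,
# so an unavailable NFS share can no longer fill the local disk.
safe_backup() {
    target=$1
    # /proc/mounts lists the mount point in field 2 of each line.
    if awk -v t="$target" '$2 == t { f = 1 } END { exit !f }' /proc/mounts; then
        echo "mounted: safe to back up to $target"
    else
        echo "not mounted: refusing to write to $target" >&2
        return 1
    fi
}

safe_backup /mnt/share01 || echo "backup skipped"
```

Where available, `mountpoint -q "$target"` from util-linux performs the same check.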
Regards
GN
Hi GN,
[QUOTE=gniecka;19975]Jens,
now I’m testing this disk space usage and I’m copying something big to /HANA_DATA/,
/HANA_DATA is growing with the amount of data being copied, but /dev/sda3 (/) remains at the same level, so I assume that now everything goes to /dev/sdb1 (desired behaviour) instead of to the local file system (/)…[/QUOTE]
yes, this is the expected behavior when there’s a file system mounted on /HANA_DATA.
In other words, just to be precise: “/HANA_DATA is growing” is ambiguous; it should rather read “the usage of the file system that is mounted at /HANA_DATA is growing, while the root file system’s usage doesn’t change”. And “du” will report more data under /HANA_DATA, since there is a directory entry for the big file(s) you’re creating.
What you (currently) cannot see via “ls”, “cp”, “rm” etc., but what “du” does see, are files under /HANA_DATA that were created when no extra file system was mounted on /HANA_DATA. Once you have unmounted /HANA_DATA, those files will hopefully show up. (If there’s nothing in those directories after the umount, we’d be back to square one: where’s all that disk space allocated that fills up your root file system?)
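For the casual reader: there is also a way to peek at such hidden files without unmounting anything, via a bind mount. A rough sketch, requiring root; /mnt/rootview is a hypothetical scratch directory of my choosing, not a path from this thread:

```shell
# Create a second view of the root file system; files stored *underneath*
# the /HANA_DATA mount point become visible in this view (needs root).
mkdir -p /mnt/rootview
mount --bind / /mnt/rootview
# This du only sees what lives on the root fs below /HANA_DATA,
# not the file system currently mounted on top of it.
du -sh /mnt/rootview/HANA_DATA
umount /mnt/rootview
rmdir /mnt/rootview
```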
Regards,
Jens
All right! Great find!
OK, I was wrong about the location
Great that you found it without any downtime for your server, and thank you for reporting back!
Regards,
Jens