Drive became read only?

Hope this is the right place for this. I just had a really weird occurrence on my SLES server. One of the drives (the main one with the OS) kind of became read-only. I could still browse and even view files on NetWare volumes from a NetWare Client workstation, but could not view them from the console at all; I couldn’t even launch a terminal. Luckily I was able to access Remote Manager and reboot the server. I still had to run to the location because the other drive array in the machine decided it hadn’t been checked in 186 days, so a two-hour check was necessary, but that’s another gripe. The system came back up and appears to be fine now. Any idea what might have happened here and how to prevent it in the future?

You say it ‘kind of became read only’ but you don’t mention having tried creating a new file. Did you, and if so, what happened?

What you describe sounds like the sort of behaviour one might possibly see if a partition became 100% full.
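
If you haven’t already, it’s worth checking both free space and free inodes, since running out of inodes produces very similar symptoms. A quick check (adjust the mount point to the filesystem in question):

[CODE]
df -h /    # free space on the root filesystem
df -i /    # free inodes - exhausting these also stops new files being created
[/CODE]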

There are circumstances under which a volume can get remounted read-only in the event of an error (with ext3 that’s governed by the errors= mount option, or the default recorded in the superblock). What does the /etc/fstab entry for the relevant partition look like? And what’s the output of

# dumpe2fs /dev/partitionname

(It’ll be long, wrap it in CODE tags for readability, look for the # button in the toolbar.)
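
To illustrate the error-behaviour point above (the device name and options are only an example, not necessarily your setup), an fstab entry that forces a read-only remount on errors looks something like this, and when it triggers the kernel normally logs it:

[CODE]
# Example /etc/fstab line - errors=remount-ro tells ext3 to flip the
# filesystem to read-only as soon as it detects an internal error
# (/dev/sda2 is a placeholder for the root device)
/dev/sda2  /  ext3  acl,user_xattr,errors=remount-ro  1 1

# A forced remount usually shows up in dmesg / /var/log/messages as
# something along the lines of:
#   EXT3-fs error (device sda2): ...
#   Remounting filesystem read-only
[/CODE]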

And the most obvious question: is there anything in the logs that gives a clue as to what happened? Though if it was a lack of free space then the logs may not have been written, of course. (Which is one reason why it’s useful to have syslog also write a copy of the log to another machine.)
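
A rough sketch of both of those, assuming the SLES 11 default of syslog-ng (the host name “loghost” is a placeholder, and “src” is the source name in the SUSE-supplied config, so check your own syslog-ng.conf):

[CODE]
# look for signs of a forced read-only remount or disk trouble
grep -iE 'read-only|i/o error|ext3-fs' /var/log/messages

# /etc/syslog-ng/syslog-ng.conf fragment to duplicate all logs to a
# remote machine as well
destination d_loghost { udp("loghost" port(514)); };
log { source(src); destination(d_loghost); };
[/CODE]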

You can disable the fsck-on-boot checks with tune2fs; look at the -i and -c options. But on a server I’d leave them alone. (I’ve disabled the checks on my SLED machines and have thus far got away with it. I don’t want the support calls from people complaining that they arrived at work, turned on their computer and had to wait while it ran fsck because this was the 20th/40th/60th/etc. time it had been booted.)
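
For reference, a sketch of the tune2fs calls involved (the device name is a placeholder, so substitute your actual partition):

[CODE]
# show the current settings that trigger a boot-time fsck
tune2fs -l /dev/sda2 | grep -iE 'mount count|check'

# disable both triggers: -c 0 = never by mount count, -i 0 = never by time
tune2fs -c 0 -i 0 /dev/sda2

# or just make the checks rarer, e.g. every 100 mounts or every 6 months
tune2fs -c 100 -i 6m /dev/sda2
[/CODE]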

Well, on the console it wasn’t only read-only: nothing would run and no files could be viewed. On a client I was able to view files on a mapped drive, and even managed to copy one to a local drive and then delete the file on the mapped drive. I could not rename, copy, or move files on the mapped drive.

[quote]
What you describe sounds like the sort of behaviour one might possibly see if a partition became 100% full. [/quote]

This is highly doubtful as there is 1.6TB of space left on the drive at this time.

[CODE]
Filesystem volume name: <none>
Last mounted on: <not available>
Filesystem UUID: 6ac7ec0b-60ec-4688-ae5e-8bc809fc085d
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery sparse_super large_file
Filesystem flags: signed_directory_hash
Default mount options: (none)
Filesystem state: clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 131072
Block count: 523776
Reserved block count: 26188
Free blocks: 496641
Free inodes: 131027
First block: 0
Block size: 4096
Fragment size: 4096
Reserved GDT blocks: 127
Blocks per group: 32768
Fragments per group: 32768
Inodes per group: 8192
Inode blocks per group: 512
Filesystem created: Wed May 1 19:53:25 2013
Last mount time: Fri Apr 25 20:09:25 2014
Last write time: Fri Apr 25 20:09:25 2014
Mount count: 48
Maximum mount count: -1
Last checked: Wed May 1 19:53:25 2013
Check interval: 0 (<none>)
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 256
Required extra isize: 28
Desired extra isize: 28
Journal inode: 8
Default directory hash: half_md4
Directory Hash Seed: 49147a20-869b-4999-bb71-2dc7f078f0b9
Journal backup: inode blocks
Journal size: 32M

Group 0: (Blocks 0-32767)
Primary superblock at 0, Group descriptors at 1-1
Reserved GDT blocks at 2-128
Block bitmap at 129 (+129), Inode bitmap at 130 (+130)
Inode table at 131-642 (+131)
22246 free blocks, 8165 free inodes, 2 directories
Free blocks: 649-6143, 8690-12287, 12396-16383, 19332-19546, 21747-24558, 26630-32767
Free inodes: 20, 24-25, 27-28, 32, 34-8192
Group 1: (Blocks 32768-65535)
Backup superblock at 32768, Group descriptors at 32769-32769
Reserved GDT blocks at 32770-32896
Block bitmap at 32897 (+129), Inode bitmap at 32898 (+130)
Inode table at 32899-33410 (+131)
32125 free blocks, 8192 free inodes, 0 directories
Free blocks: 33411-65535
Free inodes: 8193-16384
Group 2: (Blocks 65536-98303)
Block bitmap at 65536 (+0), Inode bitmap at 65537 (+1)
Inode table at 65538-66049 (+2)
32254 free blocks, 8192 free inodes, 0 directories
Free blocks: 66050-98303
Free inodes: 16385-24576
Group 3: (Blocks 98304-131071)
Backup superblock at 98304, Group descriptors at 98305-98305
Reserved GDT blocks at 98306-98432
Block bitmap at 98433 (+129), Inode bitmap at 98434 (+130)
Inode table at 98435-98946 (+131)
32125 free blocks, 8192 free inodes, 0 directories
Free blocks: 98947-131071
Free inodes: 24577-32768
Group 4: (Blocks 131072-163839)
Block bitmap at 131072 (+0), Inode bitmap at 131073 (+1)
Inode table at 131074-131585 (+2)
32197 free blocks, 8174 free inodes, 1 directories
Free blocks: 131587-145407, 145433-149505, 149507-151551, 151578-159743, 159745-161792, 161795, 161797-163839
Free inodes: 32770, 32787, 32789-40960
Group 5: (Blocks 163840-196607)
Backup superblock at 163840, Group descriptors at 163841-163841
Reserved GDT blocks at 163842-163968
Block bitmap at 163969 (+129), Inode bitmap at 163970 (+130)
Inode table at 163971-164482 (+131)
32125 free blocks, 8192 free inodes, 0 directories
Free blocks: 164483-196607
Free inodes: 40961-49152
Group 6: (Blocks 196608-229375)
Block bitmap at 196608 (+0), Inode bitmap at 196609 (+1)
Inode table at 196610-197121 (+2)
24053 free blocks, 8192 free inodes, 0 directories
Free blocks: 205323-229375
Free inodes: 49153-57344
Group 7: (Blocks 229376-262143)
Backup superblock at 229376, Group descriptors at 229377-229377
Reserved GDT blocks at 229378-229504
Block bitmap at 229505 (+129), Inode bitmap at 229506 (+130)
Inode table at 229507-230018 (+131)
32125 free blocks, 8192 free inodes, 0 directories
Free blocks: 230019-262143
Free inodes: 57345-65536
Group 8: (Blocks 262144-294911)
Block bitmap at 262144 (+0), Inode bitmap at 262145 (+1)
Inode table at 262146-262657 (+2)
32254 free blocks, 8192 free inodes, 0 directories
Free blocks: 262658-294911
Free inodes: 65537-73728
Group 9: (Blocks 294912-327679)
Backup superblock at 294912, Group descriptors at 294913-294913
Reserved GDT blocks at 294914-295040
Block bitmap at 295041 (+129), Inode bitmap at 295042 (+130)
Inode table at 295043-295554 (+131)
32125 free blocks, 8192 free inodes, 0 directories
Free blocks: 295555-327679
Free inodes: 73729-81920
Group 10: (Blocks 327680-360447)
Block bitmap at 327680 (+0), Inode bitmap at 327681 (+1)
Inode table at 327682-328193 (+2)
32254 free blocks, 8192 free inodes, 0 directories
Free blocks: 328194-360447
Free inodes: 81921-90112
Group 11: (Blocks 360448-393215)
Block bitmap at 360448 (+0), Inode bitmap at 360449 (+1)
Inode table at 360450-360961 (+2)
32254 free blocks, 8192 free inodes, 0 directories
Free blocks: 360962-393215
Free inodes: 90113-98304
Group 12: (Blocks 393216-425983)
Block bitmap at 393216 (+0), Inode bitmap at 393217 (+1)
Inode table at 393218-393729 (+2)
32254 free blocks, 8192 free inodes, 0 directories
Free blocks: 393730-425983
Free inodes: 98305-106496
Group 13: (Blocks 425984-458751)
Block bitmap at 425984 (+0), Inode bitmap at 425985 (+1)
Inode table at 425986-426497 (+2)
32254 free blocks, 8192 free inodes, 0 directories
Free blocks: 426498-458751
Free inodes: 106497-114688
Group 14: (Blocks 458752-491519)
Block bitmap at 458752 (+0), Inode bitmap at 458753 (+1)
Inode table at 458754-459265 (+2)
32254 free blocks, 8192 free inodes, 0 directories
Free blocks: 459266-491519
Free inodes: 114689-122880
Group 15: (Blocks 491520-523775)
Block bitmap at 491520 (+0), Inode bitmap at 491521 (+1)
Inode table at 491522-492033 (+2)
31742 free blocks, 8192 free inodes, 0 directories
Free blocks: 492034-523775
Free inodes: 122881-131072
[/CODE]

There was nothing I could find in the logs that did get saved. The others either didn’t get saved or are in an unknown format that won’t open.
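
(Come to think of it, the ones that won’t open are probably just the rotated copies that logrotate has compressed; if so, something along these lines should search them - the file names are just the usual rotated pattern, not my exact files:)

[CODE]
# compressed rotated logs can be searched in place with zgrep/bzgrep
zgrep -i 'read-only' /var/log/messages-*.gz
bzgrep -i 'read-only' /var/log/messages-*.bz2
[/CODE]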

[quote]
You can disable the fsck-on-boot checks with tune2fs; look at the -i and -c options. But on a server I’d leave them alone. (I’ve disabled the checks on my SLED machines and have thus far got away with it. I don’t want the support calls from people complaining that they arrived at work, turned on their computer and had to wait while it ran fsck because this was the 20th/40th/60th/etc. time it had been booted.)[/quote]

Yeah, I don’t know if I really want to disable it, but I will check it out. It would be nice if it were an opt-in kind of thing that would cancel if you didn’t press a key within x seconds, especially when rebooting remotely. Otherwise you don’t know whether it’s doing an extended check or your server is toast.

There was only one strange thing I noticed when I was getting ready to reboot: in Remote Manager the CPU usage was reporting 100%, but it didn’t look like any of the info was accurate, and I couldn’t bring up the system monitor from the console to check.

FUBAR wrote in message:
[color=blue]
I just had a really weird occurrence on my SLES server. One of the
drives (the main one with the OS) kind of became read-only. I could
still browse and even view files on NetWare volumes from a NetWare
Client workstation but could not view them from the console. [...][/color]

You’ve posted this in a SLES forum referencing SLES but you’ve
also mentioned NetWare volumes and Client so is this server
perhaps running Novell Open Enterprise Server (OES)? Perhaps you
could post the output from “cat /etc/*release” to clarify and let
us know SLES/OES release(s).

HTH.

Simon
SUSE Knowledge Partner

----Android NewsGroup Reader----
http://www.piaohong.tk/newsgroup

[QUOTE=smflood;20869]You’ve posted this in a SLES forum referencing SLES but you’ve
also mentioned NetWare volumes and Client so is this server
perhaps running Novell Open Enterprise Server (OES)? Perhaps you
could post the output from “cat /etc/*release” to clarify and let
us know SLES/OES release(s).[/QUOTE]

[CODE]
LSB_VERSION="core-2.0-noarch:core-3.2-noarch:core-4.0-noarch:core-2.0-x86_64:core-3.2-x86_64:core-4.0-x86_64"
Novell Open Enterprise Server 11 (x86_64)
VERSION = 11.1
PATCHLEVEL = 1
SUSE Linux Enterprise Server 11 (x86_64)
VERSION = 11
PATCHLEVEL = 2
[/CODE]

While I originally noticed the issue on mapped drives, the problem affected everything on the drive, not just the Novell shares.

Simon Flood wrote:
[color=blue]

could post the output from “cat /etc/*release”[/color]

As you know, OES is an add-on to a SLES system. Some issues on an OES
system have little to do with OES itself. Unfortunately, many forum
members seeking assistance are unable to distinguish the difference,
hence Simon’s query.

In this case, it would appear that you chose the correct forum.


Kevin Boyle - Knowledge Partner
If you find this post helpful and are logged into the web interface,
show your appreciation and click on the star below…