Hello, sorry for my bad English, I am not a native English speaker.
My problem is hard to describe:
We have a piece of software that scans a huge number of files and checks their modification dates; if a file has been modified, it is copied to another directory and processed …
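Roughly, the scan does something like this (the paths and the timestamp file are only examples, not our real setup):

find /data -type f -newer /var/run/scan.stamp -exec cp --parents {} /staging/ \;   # copy files changed since the last run, keeping their directory structure
touch /var/run/scan.stamp   # remember the time of this run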
But the problem is that there is a lot of data: more than 70000 directories with about 20-50 files each.
On SLES 10 SP3 there was no problem at all, believe me.
After an upgrade to SLES 11 SP3 (not an in-place upgrade, but a fresh installation with the data restored from backup) the described program generates a huge IO load; it thrashes the hard disk so much that it slows down the whole server.
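You can watch the load with, for example, something like this (iostat comes from the sysstat package):

iostat -x 1   # extended per-device statistics, refreshed every second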
I tried to tame the program with ionice, but that is not a real solution.
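Roughly like this (the process name is only a placeholder):

ionice -c 3 -p $(pidof scanprogram)   # put the already running process into the "idle" IO class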
What may have changed in SLES? What chance do I have of finding the problem?
My guess: some kind of “hard drive read cache” in Linux was disabled!
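Is there a way to check that, for example with something like this (sda is only an example device)?

blockdev --getra /dev/sda                  # readahead in 512-byte sectors
cat /sys/block/sda/queue/read_ahead_kb     # readahead in KB
free -m                                    # how much RAM is currently used as cache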
The question that immediately arises: which file system did you use on SLES 10, and which one are you using now on SLES 11 SP3?
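You can check that quickly, for example like this (the mount point is only an example):

df -T /data       # shows the file system type of the mount point
cat /etc/fstab    # mount options used at boot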
might, by chance, be the dir_index feature turned off?
tune2fs -l /dev/sda1 | grep dir_index
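If it is missing there, it can usually be enabled and the existing directories re-indexed on the unmounted file system (the device name is taken from the example above, adjust it to your setup):

tune2fs -O dir_index /dev/sda1   # enable hashed directory indexes
e2fsck -fD /dev/sda1             # rebuild/optimize the directory indexes (run on an unmounted file system)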
There may have been many changes between the ext3 versions in the respective kernels, but I tend to believe that some tuning on SLES 10, either at the file system or at the I/O scheduling level, was responsible for the better performance, or that some change in the disk subsystem device driver itself came with SLES 11.
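It might be worth comparing the I/O scheduler on both installations, for example (sda again only an example device):

cat /sys/block/sda/queue/scheduler               # the current scheduler is shown in brackets
echo deadline > /sys/block/sda/queue/scheduler   # switch to deadline for a test (as root)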
Are you under SUSE support, so that you could get engineering assistance by opening a “service request”?