Dear All,
I ran a disk I/O performance test under SUSE Linux Enterprise Server 12, and the results are very confusing.
To avoid the physical disk becoming the bottleneck, I set up a 10 GB loop device backed by /dev/shm. The test tool is the IOMeter Linux edition; the access specification is 100% random, 60% write / 40% read, with a 4 KB block size.
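Roughly, the loop device was set up like this (the backing file name and loop device number are just what I happened to use; the sizes are illustrative):
[CODE]
# create a 10 GB backing file in tmpfs and attach it as a loop device
dd if=/dev/zero of=/dev/shm/iotest.img bs=1M count=10240
losetup /dev/loop0 /dev/shm/iotest.img
# IOMeter's dynamo then uses /dev/loop0 as its raw target
[/CODE]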
I ran the test three times under the default kernel, and another three times in Domain 0 under the Xen kernel.
The maximum IOPS under the default kernel is almost 90k, and the average is about 60k.
The maximum IOPS in Domain 0 under the Xen kernel is almost 55k, and the average is about 35k.
Has anyone encountered this issue before, or can anyone give me some advice on how to resolve it?
Thanks!
What are you trying to resolve? What do you perceive as being wrong?
You mentioned running a “disk IO performance test” but then you went out and tested under /dev/shm and stated you did so to avoid a disk bottleneck. Do you want to test the disk or not, and if so, why are you using /dev/shm at all?
–
Good luck.
We have a distributed storage agent running in Dom0; however, we noticed a significant performance degradation compared to bare-metal Linux. As a result, we tried IOMeter as a benchmark.
When IOMeter runs on a bare-metal SLES 12 host, the average IOPS is about 90,000, but it drops to about 60,000 when running on a SLES 12 Dom0.
I am curious to know whether this is expected and, if not, what might cause the degradation.
[QUOTE=ab;26001]
You mentioned running a “disk IO performance test” but then you went out and tested under /dev/shm and stated you did so to avoid a disk bottleneck. Do you want to test the disk or not, and if so, why are you using /dev/shm at all?[/QUOTE]
I assumed that the different mechanisms of CPU scheduling and memory access might be one of the causes of this huge performance gap, so I set up a loop device in /dev/shm to confirm whether that is the case. Perhaps it would be better to call it a “memory access performance test” rather than a “disk IO performance test”.
[QUOTE]When IOMeter runs on a bare-metal SLES 12 host, the average IOPS is about 90,000, but it drops to about 60,000 when running on a SLES 12 Dom0.[/QUOTE]
Were those tests run on the same machine, once booted with the standard kernel and once booted with the Xen kernel? (Your first message seems to suggest this, but it’s not fully clear.) If not, did you run the tests on identical servers?
Is the Xen kernel configured to run with limited Dom0 memory (to avoid ballooning overhead)? Were any DomUs active at the same time? Is the same I/O scheduler active for the block device in both cases?
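If you want to compare the two boots side by side, something like the following should capture the relevant details (the loop device name is just an example):
[CODE]
uname -r                              # default kernel vs. -xen kernel
cat /sys/block/loop0/queue/scheduler  # active I/O scheduler for the loop device
xl list                               # any DomUs running? (Xen boot only)
xl info | grep -i memory              # total/free memory as Xen sees it (Xen boot only)
[/CODE]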
[QUOTE=jmozdzen;26005]
Is the Xen kernel configured to run with limited Dom0 memory (to avoid ballooning overhead)? Were any DomUs active at the same time? Is the same I/O scheduler active for the block device in both cases?[/QUOTE]
I didn’t limit Dom0’s memory, and there were no active DomUs at the same time.
In both cases, the I/O scheduler for the loop device is “none”, and it cannot be modified.
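Regarding limiting Dom0 memory: I haven’t tried that yet. If I understand correctly, on SLES 12 that would mean something along these lines in /etc/default/grub (the value is only a guess, and it would be appended to any existing options):
[CODE]
# pin Dom0 to a fixed amount of memory to avoid ballooning
GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=4096M,max:4096M"
[/CODE]
followed by grub2-mkconfig -o /boot/grub2/grub.cfg and a reboot. Would that be worth testing here?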
Have you considered opening a SR with SUSE so that the engineers may have a look at the specifics of your server situation and help out with either tuning suggestions or some patches, if this turns out to be a bug?
If you go to www.suse.com, in the “Support” menu there is a link to “My Support - Open Service Request” in the “Customer Center” column on the right. You need to be logged in to use this function.