10GbE Optimizations for SLES 10 SP4

I have several SLES 10 servers with 10GbE network cards in them. Are there any specific tuning parameters I need to modify in sysctl or anywhere else to get the most performance out of my network connections? I’m currently backing up my servers using BackupExec and the RALUS client, and right now my SLES servers back up at about a third of the speed of my Server 2008 R2 servers (everything else being 100% identical: server hardware, switch config, etc.). I saw some generic sysctl tuning for Linux but wanted to see if there was anything SuSE-specific, or if anyone has performance tweaks for 10GbE networks. My SLES servers run at about 1,800 to 2,300 MB per minute, and by comparison my Windows boxes are in the high 4,000s to low 5,000s.

Disclaimer: 10GbE is a dream to me… I have little experience with it.

Have you done any testing using something other than BackupExec? If my
math is correct then even 5,000 MB/minute (reported speed of windows
box) is nowhere near saturating a 10-gigabit connection:

5,000 MB/min × 8 bits/byte ÷ 60 s/min ≈ 667 Mb/s… less than one
gigabit/sec, nowhere near ten gigabits/sec. As a result I wonder
whether this has anything to do with the hardware at all, versus the
speed of RALUS or the data being backed up. If you have multiple
systems of each OS to try, you could do the following on one Linux box
(assuming port 1234 is allowed through the firewall):

netcat -l -p 1234 >/dev/null    (drop the -p if your netcat variant rejects it with -l)

and this on the other:

time dd if=/dev/zero bs=1048576 count=20000 | netcat firstBoxIPHere 1234

This should basically send 20 GB of zeros across the wire as fast as
possible, and it would be interesting to see how fast you can get
things going with as few applications in the way as possible. Doing
the same to/from other systems, or even the Windows boxes if you can
find netcat for Windows, could also be interesting.
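To compare the dd result directly with the MB/min figures BackupExec reports, the wall-clock time from `time` can be converted like so (the 45-second elapsed time below is a made-up example value, not a measurement):

```shell
# Convert a dd run's elapsed time into MB/min so it can be compared
# with BackupExec's reported rates. The test above sends 20,000 MB;
# ELAPSED is a hypothetical example value in seconds.
ELAPSED=45
echo "$(( 20000 * 60 / ELAPSED )) MB/min"
```

At 45 seconds for 20 GB that works out to well over 26,000 MB/min, an order of magnitude above the backup rates in question.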

Good luck.


I have done some testing/tuning outside of BackupExec, on both the Linux and Windows side.

I didn’t copy out my numbers, but I did extensive testing using IOMeter, iPerf/jPerf, and timed copies (time cp). Those tests show me getting 7-9 Gb/s of throughput, so on the network side I’m fairly pleased with the results. It’s possible that it’s purely a backup issue; I’ll need to go back and check my TSA numbers and tests.
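For context, even the low end of that range dwarfs the backup rates. A rough conversion of 8 Gb/s into the MB/min units BackupExec uses (decimal units, 8 bits per byte):

```shell
# Back-of-the-envelope: convert ~8 Gb/s of iperf throughput to MB/min
# for comparison with the ~2,300 MB/min RALUS backup rate.
# 8 Gb/s = 8000 Mb/s -> /8 = 1000 MB/s -> *60 = MB/min
echo "$(( 8 * 1000 / 8 * 60 )) MB/min"
```

So the wire can move roughly 60,000 MB/min while the backup achieves about 2,300, which points away from the network itself.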

One other thought I just had: the other boxes we’re backing up have mostly static data, with few changes being made. The SLES boxes run OES as our main file server, so many changes are made throughout the day. We’re backing up to disk using deduplication, so I wonder if the whole BackupExec dedup process is really what’s slowing the system down, and not the wire speed of the transfer.

I just wanted to throw the question out here to see if anyone using 10GbE knew whether there were tuning tweaks worth making, or if the defaults are good enough as is.
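For what it’s worth, the generic Linux 10GbE tuning that gets passed around mostly comes down to raising the TCP buffer limits in sysctl. A sketch of commonly suggested values follows; the numbers are illustrative starting points, not tested SLES-specific recommendations, so measure before and after changing them:

```shell
# Commonly suggested TCP buffer tuning for 10GbE links. Append to
# /etc/sysctl.conf and apply with `sysctl -p`. The 16 MB ceilings are
# illustrative starting points, not SLES 10 recommendations.

# Maximum socket receive/send buffer sizes (bytes)
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216

# TCP autotuning min/default/max buffer sizes (bytes)
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
```

The current values can be read first with `sysctl net.core.rmem_max net.ipv4.tcp_rmem` (or from /proc/sys) so there is a baseline to roll back to.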