When transport company FedEx built a new data center, adopting 10 gigabit Ethernet (10GE) was an easy choice, and not just because speed is nearly always welcome.
"We asked if 10GE could make our environment less complex," the company's Chris Greer told a session at the VMworld conference today.
The answer was a resounding yes, with cabling alone offering a major benefit: the company previously ran its collections of virtualized servers with ten one-gigabit cables apiece. "We tried color-coding the cables," Greer said, but the results were still confusing.
The company also hoped to simplify its network so it could, in future, take advantage of Fibre Channel over Ethernet to connect NAS and iSCSI storage.
Initially, the company felt 10GE would be too expensive, but found Twinax cables reduced the price of cabling, albeit at the cost of cable-length constraints. Twinax's limitations were deemed acceptable and the company adopted 10GE, but then could not achieve anywhere near its rated performance when moving data between virtualized servers.
"When we used one-gigabit Ethernet we were getting 40Mb/s transfers," Greer said. "With 10GE we were getting 70Mb/s."
To discover why the company was not getting the performance boost it expected, it called on Intel, which conducted a series of tests, initially using two servers running native Linux to test 10GE performance.
The results, said Intel's Bob Albers, showed that network interface cards (NICs) are seldom installed in the correct PCI slot of a server, depriving the NIC of the bandwidth it needs to transmit data over the network.
Achieving good 10GE throughput with a native or virtualized OS therefore requires the 10GE NIC to sit in a first-generation PCIe slot with enough lanes; to sustain more than one port, an x8 second-generation slot is needed.
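The arithmetic behind that advice can be sketched in a few lines. The per-lane rates below are the published PCIe figures; the x8 slot widths are illustrative assumptions, not details of FedEx's actual servers.

```python
# Back-of-the-envelope check of PCIe slot bandwidth against the 10GE line rate.
# Per-lane transfer rates are the published PCIe gen 1 and gen 2 figures;
# both generations use 8b/10b line encoding, so usable bandwidth is 80%.

GT_PER_LANE = {1: 2.5, 2: 5.0}  # giga-transfers per second, per lane
ENCODING = 8 / 10               # 8b/10b encoding overhead

def slot_bandwidth_mbps(generation: int, lanes: int) -> float:
    """Usable one-direction bandwidth of a PCIe slot, in MB/s."""
    return GT_PER_LANE[generation] * ENCODING * lanes * 1000 / 8

TEN_GE_PORT_MBPS = 10_000 / 8  # one 10GE port moves up to 1250 MB/s

for gen in (1, 2):
    bw = slot_bandwidth_mbps(gen, lanes=8)  # x8 width assumed for illustration
    ports = int(bw // TEN_GE_PORT_MBPS)
    print(f"PCIe gen{gen} x8: {bw:.0f} MB/s -> sustains {ports} 10GE port(s)")
```

A gen-1 x8 slot (2000 MB/s) clears one 10GE port but falls short of two, while a gen-2 x8 slot (4000 MB/s) has headroom to spare, which is the gap the session's advice points at.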
Intel tested transfer speeds between two servers using the NetPerf, scp, rsync and BBCP tools and found that only the first and last achieved speeds near 10GE's upper limit, with the encryption applied by scp and rsync greatly hampering performance.
Albers said Intel's forthcoming Westmere processor (a new iteration of the current Xeon X5500) will address this issue by adding native encryption instructions in silicon.
The next round of tests in the quest for better 10GE speed used virtualized Linux servers, but again network performance was slow under the same workloads.
Connecting the NIC directly to a virtual machine eventually delivered close to 10GE performance, but with an unfortunate side effect: it is possible only under VMDirectPath, a feature that is not currently compatible with vMotion or with the VMware tools that offer high availability and fault tolerance.
Albers concluded the session by saying that Intel and VMware are working on new tools to overcome these issues, then offered the following checklist for would-be users of 10GE under VMware:
- Check which PCIe slot your NIC uses and make sure it can deliver the speed you need
- Turn on VT-x, NUMA, SMT and VT-d in your server BIOS
- Use the vmxnet3 driver (or its successors) and not the e1000 driver
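The first item on that checklist can be scripted. On a Linux host, `lspci -vv` reports each device's negotiated PCIe link in its `LnkSta` line; the sketch below parses a sample of that output (the device and its figures are hypothetical, embedded here so the snippet is self-contained rather than run against real hardware).

```python
import re

# Hypothetical `lspci -vv` fragment for a 10GE NIC, for illustration only.
# On a live host you would capture this with: lspci -vv -s <bus:slot.func>
SAMPLE = """
03:00.0 Ethernet controller: Intel Corporation 10-Gigabit Ethernet adapter
        LnkCap: Port #1, Speed 2.5GT/s, Width x8
        LnkSta: Speed 2.5GT/s, Width x8
"""

def negotiated_link(lspci_text: str):
    """Return (speed in GT/s, lane count) from the LnkSta line, or None."""
    m = re.search(r"LnkSta:\s*Speed\s*([\d.]+)GT/s,\s*Width\s*x(\d+)", lspci_text)
    return (float(m.group(1)), int(m.group(2))) if m else None

speed, width = negotiated_link(SAMPLE)
# 8b/10b encoding: usable MB/s = GT/s * 0.8 * lanes * 125
usable_mbps = speed * 0.8 * width * 125
print(f"Negotiated x{width} at {speed} GT/s -> ~{usable_mbps:.0f} MB/s usable")
```

Checking `LnkSta` rather than `LnkCap` matters: the capability line shows what the slot could do, while the status line shows the width and speed actually negotiated, which is what limits the NIC in practice.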
Simon Sharwood attended VMworld as a guest of VMware.