Many enterprises are concerned with raw speed and reduced network latency, but in the financial industry, a delay of even a few milliseconds can mean losing millions on a trade.
"The value of a millisecond or microsecond has increased," says Andy Yates, global head of network architecture at NYSE Euronext. "It depends on the firm, but typically they can receive market data in about 40 microseconds. Five years ago, most firms traded out of their offices, and 30 milliseconds was typical. The logical next step -- though it sounds crazy -- is for us to start talking about nanoseconds or even picoseconds."
To meet this demand, NYSE Euronext now has a Force10 Networks switch/router in the top of each colocation server rack, creating a single-tier low latency LAN within the data centre, says Martin Rainsford, NYSE Euronext's global head of data centre engineering.
The goal of collapsing network tiers is to eliminate duplicated bottlenecks for increased speed, says Arpit Joshipura, Force10's chief marketing officer. Because Force10's hardware can switch and route in a single box, it can cut data straight through with no buffering, he adds.
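To see why cutting data straight through matters, consider the arithmetic below. It is a back-of-the-envelope sketch, not a vendor specification: a store-and-forward switch must receive an entire frame before transmitting it, while a cut-through switch forwards as soon as it has parsed the header. The frame and header sizes are illustrative assumptions.

```python
# Back-of-the-envelope comparison of store-and-forward vs cut-through
# switching latency. All figures are illustrative, not vendor specs.

def serialization_delay_ns(frame_bytes: int, link_gbps: float) -> float:
    """Time to clock a frame onto the wire, in nanoseconds."""
    return frame_bytes * 8 / link_gbps  # bits / (Gb/s) -> ns

FRAME = 1500   # bytes: a full-size Ethernet payload
HEADER = 64    # bytes a cut-through switch might read before forwarding

# Store-and-forward: each hop buffers the whole frame before sending it on.
sf_per_hop = serialization_delay_ns(FRAME, 10)    # 1200 ns at 10 GbE

# Cut-through: forwarding starts once the header has been parsed.
ct_per_hop = serialization_delay_ns(HEADER, 10)   # 51.2 ns at 10 GbE

# Collapsing three tiers (access/aggregation/core) into one removes two
# hops' worth of this delay in each direction of a round trip.
print(f"store-and-forward per hop: {sf_per_hop:.0f} ns")
print(f"cut-through per hop:       {ct_per_hop:.1f} ns")
print(f"saving from two fewer buffered hops: {2 * sf_per_hop:.0f} ns")
```

On a trading path where total market-data latency is around 40 microseconds, a couple of microseconds of avoided buffering is a measurable fraction of the budget.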
In addition to collapsed network tiers, high-performance networking also relies on super-speed Ethernet. "We're seeing more and more Ethernet, even in HPC, which was an Infiniband domain. We are also seeing more 40G than 100G -- most servers haven't gone from 1G to 10G yet," Joshipura says.
Finally, even with architectural changes and 40 or 100 GbE, there are other factors to consider for high-performance networking. "Distance is also a factor -- you still need to think about your fibre runs, at 4.5 nanoseconds per metre. We built two new data centres in New Jersey and the UK on similar designs and with a similar goal -- there is a huge benefit in building a facility near the financial community," Rainsford explains. "Then the trading systems have to be optimised -- for instance, [trading firms] will buy specific NICs that meet their requirements."
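The 4.5-nanoseconds-per-metre figure Rainsford cites makes the proximity argument easy to quantify. The sketch below applies that constant to two hypothetical fibre runs; the distances are invented for illustration, and the exact per-metre delay depends on the fibre's refractive index.

```python
# One-way propagation delay over fibre, using the ~4.5 ns/metre figure
# quoted in the article. Distances below are illustrative assumptions.

NS_PER_METRE = 4.5  # light in fibre travels at roughly two-thirds of c

def fibre_delay_us(metres: float) -> float:
    """One-way propagation delay in microseconds for a fibre run."""
    return metres * NS_PER_METRE / 1000

# A short run inside a colocation hall vs. a metro link from an office:
print(f"200 m in-building run: {fibre_delay_us(200):.1f} us")    # 0.9 us
print(f"50 km metro link:      {fibre_delay_us(50_000):.0f} us") # 225 us
```

A firm trading from an office 50 km away concedes hundreds of microseconds each way on propagation alone, which is why colocating next to the matching engine is worth the rack fees.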
Cutting the distance can mean important savings, Rainsford adds. "High performance networks are not cheap … a big part of that is the cost of the optics, although early adopters like us are already seeing prices coming down. The low-latency network is really only in the data centre. There's been a paradigm shift to offer proximity location in exchanges, although the WAN is still relevant and the STFI [Secure Financial Transaction Infrastructure] trading network is still there."
Meanwhile, as networking hardware gets faster, networking teams will have to prepare for unexpected interactions with software. Applications may experience microbursts -- brief traffic spikes that can overflow switch buffers even when average utilisation looks healthy -- and these can't be caught through generic load testing.
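Because microbursts hide inside a healthy-looking average, spotting them means measuring traffic over very short windows rather than per-second counters. The sliding-window sketch below is one hypothetical way to flag them from packet timestamps; the window size and threshold are assumptions (125 kB per 100 microseconds corresponds to a 10 Gb/s line rate), not anything NYSE Euronext describes.

```python
from collections import deque

def find_microbursts(packets, window_us=100.0, threshold_bytes=125_000):
    """Flag short intervals in which traffic exceeds a byte threshold.

    packets: iterable of (timestamp_us, size_bytes), sorted by time.
    Returns the start timestamps of windows whose byte total exceeds
    threshold_bytes (125 kB per 100 us is roughly 10 Gb/s line rate).
    """
    bursts = []
    window = deque()   # packets inside the current time window
    total = 0          # running byte count for that window
    for ts, size in packets:
        window.append((ts, size))
        total += size
        # Evict packets that have aged out of the window.
        while window and window[0][0] < ts - window_us:
            total -= window.popleft()[1]
        if total > threshold_bytes:
            bursts.append(window[0][0])
    return bursts

# 100 full-size frames arriving within ~99 us: 150 kB, well over line rate.
burst_traffic = [(float(i), 1500) for i in range(100)]
print(find_microbursts(burst_traffic)[:1])

# The same frames spread over 100 ms never trip the threshold.
sparse_traffic = [(float(i * 1000), 1500) for i in range(100)]
print(find_microbursts(sparse_traffic))  # prints []
```

Both traces carry identical total bytes; only the short-window view separates them, which is the sense in which generic, averaged testing misses the problem.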
"We use all sorts of technologies and vendors, we also do a lot of testing -- our reputation is at risk. It's our business now, so we have to keep it competitive and stable," Rainsford says. "It's the end-to-end experience, the trader just wants the whole thing to be as fast as possible. If the technology is done correctly, it becomes an even playing ground."