Fibre Channel over Ethernet (FCoE) infrastructure from QLogic and Cisco Systems has helped London-based service provider Atlanta Technology cut hardware costs and power consumption, and has greatly simplified its storage and server infrastructure.
Atlanta was founded in 1996 as a reseller selling hardware and software to SME customers; it added hosting capacity for its clients five years ago. The company provides enterprise-class technology to its hosted services customers from two data centres, one in Bloomsbury, central London, and one in west London. It typically runs applications such as Microsoft Exchange, SQL Server, Dynamics and SharePoint as well as Citrix Terminal Servers.
The main elements of its infrastructure are 12 SuperMicro multi-core x86 servers, each running about 30 VMware virtual servers. These are connected to clients via Ethernet, with storage provided by a Compellent SAN of about 40 TB at each of Atlanta's data centres, plus some capacity in a Xyratex SAN and a SuperMicro SAN used as a disk backup target.
The SAN fabric had been entirely Fibre Channel until the FCoE implementation, which was part of a network upgrade aimed at relieving bottlenecks on virtual server traffic.
For a service provider, maximising the number of VMs per physical device was a key criterion in network/fabric design, said Mike Kelson, company chairman.
"We had I/O issues with a lot of VMs on our servers and looked at what was available to overcome them. The choice was to put in a load more Ethernet cards or look for alternatives. We also didn't want a lot of spaghetti at the back of our cabinets. We were looking for a unified I/O capability," said Kelson.
"We needed to get cost and complexity out of the equation and were looking at ways of making the racks neat and tidy. We didn't want to invest in big Ethernet switches or complex Fibre Channel fabrics," he added.
An additional push toward FCoE was the cost of running an Ethernet network alongside a separate Fibre Channel fabric, with the duplicate cabling and cards that entailed.
Kelson's team first looked at virtual I/O products from Xsigo. "We looked at putting in 10 or 20 Gbps Xsigo adapters to lay virtual InfiniBand and Ethernet capability on top of our HBAs. It's a good solution, but it was prohibitively expensive and also proprietary," he said.
Ultimately, Atlanta chose to build the FCoE infrastructure with Cisco Nexus 5000 switches at each site and a two-port QLogic 8100-series 10 Gbps converged network adapter (CNA) in each server; the CNAs combine the functions of the previously separate Ethernet NICs and Fibre Channel HBAs.
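To illustrate how that convergence looks at the switch, here is a minimal, hypothetical NX-OS configuration sketch for a Nexus 5000 -- the VLAN/VSAN numbers and interface names are illustrative examples, not Atlanta's actual setup. It shows a single 10GbE port to a server's CNA carrying LAN traffic on a trunk while a virtual Fibre Channel (vFC) interface bound to the same port carries the FCoE storage traffic:

```
! Illustrative FCoE sketch for a Cisco Nexus 5000 (IDs are examples)
feature fcoe                          ! enable FCoE on the switch

vsan database
  vsan 100                            ! Fibre Channel VSAN to run over Ethernet

vlan 100
  fcoe vsan 100                       ! map VLAN 100 to VSAN 100 for FCoE frames

interface Ethernet1/1                 ! 10GbE port facing the server's CNA
  switchport mode trunk
  switchport trunk allowed vlan 1,100 ! LAN traffic plus the FCoE VLAN

interface vfc1                        ! virtual FC interface for storage traffic
  bind interface Ethernet1/1          ! bound to the same physical port
  no shutdown

vsan database
  vsan 100 interface vfc1             ! place the vFC interface in the VSAN
```

One physical cable per port thus replaces the separate Ethernet and Fibre Channel runs, which is the cabling reduction Kelson describes below.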
The key benefits for Atlanta were less cabling and better power efficiency, with a single converged adapter carrying both Ethernet and Fibre Channel traffic in place of separate NICs and HBAs.
Kelson said, "I haven't done so, but if I sat down and worked it out I could show a return on investment. There are lots of things we wouldn't have to buy now. For example, we don't need big port count fabric switches and Ethernet switches. Now our ESX servers only have two cables going into them. Before there would be up to eight -- [at least] two Fibre Channel, two Ethernet for iSCSI and two for Ethernet."
He added, "The CNAs cope easily with 30 VMs pumping data out through the Ethernet LAN. They are proving to be reliable and have a low maintenance cost. We basically fit them and forget them."
So, did Kelson have any concerns about a lack of maturity in FCoE, given the lack of vendor agreement over some components of the standard and the diverging roadmaps of the key switch vendors, Cisco and Brocade?
Apparently not. "Even with Ethernet there are incompatibilities and issues between vendors so it's not a worry to me that FCoE standards are not entirely harmonised," he said.
Finally, what about training? Was that an issue for the Atlanta network/storage teams? "The learning curve was steep but quite short, and being mostly based around Cisco means it has not been onerous. FCoE was an easy transition."