In an IDC report published earlier this year, analysts predicted that by 2012, "85% of net-new enterprise applications will be specifically designed to live in the cloud." The challenge for IT professionals, analysts say, will be to get the organisation's cloud network design ready to handle these apps.
“Every cloud company starts by taking the network for granted; then somewhere around 40 to 80 servers it hits a wall,” warned Ken Duda, VP of software engineering at cloud specialist Arista Networks. Avoiding that requires an understanding of new topologies and a network that is as homogeneous and as automated as possible, he said.
Cloud network design consideration No.1: What kind of cloud?
Enterprises must first understand their cloud model options: public, private or hybrid, said Glyn Bowden of storage industry group SNIA UK. Most enterprises will eventually go with a hybrid cloud, which is defined as the use of public and private components within the same system.
The benefits vary by use case. With storage, a hybrid approach could mean keeping different data tiers in different clouds; for business systems, it could mean hosting highly sensitive applications and data in a private cloud while placing management or billing tools in a public cloud.
The choice rests on weighing cost and use-case requirements. Using a public cloud for storage, for example, avoids the need to buy and build infrastructure, and it might address backup needs. Yet if an enterprise needs fast recovery, the low-latency connection necessary to provide that could be expensive. Meanwhile, private clouds can require more capital expenditure, but the investment generally yields advantages, such as flexibility and the ability to closely manage traffic for optimisation.
The decision can also differ depending on the type of applications and workloads involved. "You need to put filters on your workload—first an economic filter, then trust, and then technology," said Sandra Hamilton, vice president of EMC Consulting EMEA. She explained that workloads have different trust requirements—with 'trust' also covering speed, performance, security and so on. For one workload, the economics might suggest staying with a legacy architecture, while for another, both economics and trust might favour the public cloud. Yet even at that point, a suitable public cloud service might not exist, so a company would have to go private.
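Hamilton's three filters can be read as a simple ordered decision. The sketch below is purely illustrative: the attribute names, costs and thresholds are invented for the example and are not EMC's methodology.

```python
# Illustrative sketch of the three-filter workload placement idea
# (economics, then trust, then technology). All attribute names and
# figures are hypothetical, not taken from any vendor methodology.

def place_workload(workload: dict) -> str:
    """Return a placement suggestion for a workload."""
    # 1. Economic filter: is moving off the legacy stack worth it?
    if workload["migration_cost"] > workload["expected_saving"]:
        return "legacy"
    # 2. Trust filter: covers speed, performance, security and so on.
    if workload["trust_level"] == "high":
        return "private cloud"
    # 3. Technology filter: does a suitable public service even exist?
    if workload["suitable_public_service"]:
        return "public cloud"
    return "private cloud"

billing = {
    "migration_cost": 10_000,
    "expected_saving": 50_000,
    "trust_level": "low",
    "suitable_public_service": True,
}
print(place_workload(billing))  # -> public cloud
```

The ordering matters: economics can rule out migration entirely before trust or technology are ever considered, which matches the "filters on your workload" framing above.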
As a rule of thumb, companies can “start with an internal private cloud for test and development," advised Chris Rae, VP of cloud solutions for software company CA Technologies.
Cloud network topology differs
When an IT shop decides to go private or hybrid, it must often consider moving toward a broad, flat network that better supports moving data and applications horizontally, or east-west, between servers. This allows the team to adjust an application's access to processor power and bandwidth. In a traditional network, by contrast, traffic flows primarily up from the access layer through aggregation to the core and back down, a pattern referred to as north-south.
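A rough hop count shows why the flat design favours east-west traffic. This comparison is a simplification, assuming a classic three-tier access/aggregation/core design versus a two-tier leaf-spine fabric; real path lengths depend on the specific topology.

```python
# Hop-count sketch contrasting a three-tier (north-south) design with
# a flat two-tier leaf-spine (east-west) fabric. The conventions here
# count switch hops only and are simplified for illustration.

def three_tier_hops(same_aggregation: bool) -> int:
    """Server-to-server switch hops in access/aggregation/core."""
    # Cross-section traffic climbs access -> aggregation -> core
    # and descends again: five switch hops in the worst case.
    return 3 if same_aggregation else 5

def leaf_spine_hops(same_leaf: bool) -> int:
    """Server-to-server switch hops in a leaf-spine fabric."""
    # Every cross-rack path is leaf -> spine -> leaf: a uniform
    # three hops, which keeps east-west latency predictable.
    return 1 if same_leaf else 3

# Servers in racks under different aggregation switches:
print(three_tier_hops(same_aggregation=False))  # -> 5
print(leaf_spine_hops(same_leaf=False))         # -> 3
```

The uniform path length, as much as the shorter one, is what makes it practical to place a workload on any server and still meet the same latency budget.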
Cloud network design takes it local
The need to increase speed and minimise network latency, in order to meet service level agreements, also means doing more locally. One solution is to have an application service module within the switch so that some tasks—such as security and load balancing—can be done within the rack, said Johan Ragmo, data business development manager for northern Europe at Alcatel-Lucent Enterprise.
That ties in with the bundled approach proposed by Alcatel-Lucent, Brocade, Cisco and others, where servers, storage and switches are integrated into a pre-tested and ready-to-go block, rack, pod or container. The idea is that once this single standard item is connected to the automated provisioning layer, all of the included components are simply added to the cloud resource pool.
The other trick to minimising latency and optimising delivery is network automation, not only for resource allocation but also for network configuration management.
"Automate provisioning, as well as automating the tools—computing, network, storage. You have to automate the people processes and get the people out of them – there's so many more important things they could be doing. And capacity is very important. You need to [automatically] manage and model capacity, so you run at 90 to 95% but never reach 110%," said CA's Rae.
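Rae's capacity rule can be sketched as a simple automated check: run the pool hot, but act before demand outgrows what is provisioned. The thresholds below follow his 90 to 95% figure; the action names are invented for the example and are not CA's tooling.

```python
# Sketch of the automated capacity rule Rae describes: keep
# utilisation in a 90-95% band and never let it head past 100%.
# Threshold values follow the quote; the actions are illustrative.

TARGET_LOW, TARGET_HIGH = 0.90, 0.95

def capacity_action(used: float, provisioned: float) -> str:
    """Suggest whether to grow, shrink or hold the resource pool."""
    utilisation = used / provisioned
    if utilisation > TARGET_HIGH:
        return "add capacity"      # trending toward 100%: grow now
    if utilisation < TARGET_LOW:
        return "reclaim capacity"  # running too cold: shrink the pool
    return "hold"                  # inside the 90-95% sweet spot

print(capacity_action(used=96, provisioned=100))  # -> add capacity
print(capacity_action(used=92, provisioned=100))  # -> hold
```

Running such a check on a schedule, rather than waiting for a person to notice, is exactly the "get the people out of them" point in the quote above.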
Ultimately, cloud computing aims for one golden ideal—the ability to provide business applications and data at an ensured service level. While simplifying and flattening the network infrastructure is essential, it is a means to an end, not an end in itself, said Rae. "In my opinion the starting point is not the technology or the hardware layer; it's which business services you're going to deliver at what service level. You need to offer large, medium or small at gold, silver or bronze levels and that's it." He said it is 80/20—meaning 80% of the need covered at 20% of the cost.
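Rae's "large, medium or small at gold, silver or bronze" catalogue amounts to nine fixed offerings. The sketch below illustrates the idea; the resource sizes and availability percentages are invented for the example, not a published catalogue.

```python
# Sketch of a fixed "size x SLA" service catalogue in the spirit of
# Rae's quote. All figures are hypothetical, chosen only to show the
# shape of a small standardised offering matrix.

SIZES = {"small": 2, "medium": 4, "large": 8}           # vCPUs
SLAS = {"bronze": 99.0, "silver": 99.9, "gold": 99.99}  # availability %

def offering(size: str, sla: str) -> dict:
    """Return one of the nine fixed catalogue entries."""
    return {"vcpus": SIZES[size], "availability": SLAS[sla]}

print(offering("medium", "gold"))
# -> {'vcpus': 4, 'availability': 99.99}
```

Constraining the catalogue to a handful of standard entries is what makes the 80/20 economics work: provisioning, capacity modelling and pricing all operate over nine combinations instead of bespoke requests.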