The three-tier network has dominated the data centre since the late 1990s, but recent moves toward a virtualised, service-oriented, converged infrastructure have spawned demand for a flat network topology that was abandoned years ago.
Proponents of flat networks say that reducing the number of switching tiers enables the kind of any-to-any connectivity between servers and nodes on the network that allows for rapid, automated VM migration. They also say it will diminish latency and could ultimately reduce management overhead.
In this guide, we explore the considerations network architects weigh before moving to flat networks, as well as related technologies, such as network fabrics and virtual cluster switching, that enable deeper manageability in a flat network topology.
Why the need for flat network topology?
For much of the last decade, networks have been designed to remain in place for many years; once implemented, they have been challenging and costly to change. That is now shifting as rapid developments in higher-level data centre trends reshape the requirements of the network, according to Andrew Buss, service director for Freeform Dynamics.
In fact, Buss argued that virtualisation and the cloud are pushing network architects to invest in simplifying the network topology to support a dynamic infrastructure.
Moving away from a three-tier approach to a flattened network simplifies the traffic flows in the data centre, making virtual workload redeployments and general operations management much simpler.
Read more about the drivers IT architects should consider for data centre networking investment.
In the new data centre: Flat network topology and converged storage
The move towards flatter data centre networks aims to reduce the number of switching tiers to diminish latency and management overhead, and vendors are promising to make this happen with pre-standard technologies like Transparent Interconnection of Lots of Links (TRILL) and Cisco Systems’ Overlay Transport Virtualisation (OTV).
This forces network engineers and architects to navigate a tricky course through a lot of big promises and a good deal of proprietary technology. They end up with lots of new choices in architecture and approach -- and that means a lot more research before investment.
To move to this new data centre network and enable the private cloud, architects will also consolidate data centre and storage infrastructure using data centre bridging and Fibre Channel over Ethernet (FCoE), or possibly I/O virtualisation technology, in order to cut down on the number of cables, network interface cards (NICs) and host bus adaptors they need to install when racking a server.
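As an illustration of the consolidation described above, the fragment below is a minimal sketch in the style of Cisco NX-OS FCoE configuration (VLAN and interface numbers are invented, and syntax varies by platform and release): Fibre Channel traffic is carried over a converged Ethernet port, so a single converged network adapter can replace a separate NIC and host bus adaptor.

```
! Illustrative sketch only -- not a literal configuration.
feature fcoe                      ! enable FCoE on the switch
vlan 200
  fcoe vsan 200                   ! map an FCoE VLAN to a VSAN
interface vfc10                   ! virtual Fibre Channel interface
  bind interface ethernet 1/10    ! bind it to the converged Ethernet port
  no shutdown
vsan database
  vsan 200 interface vfc10        ! place the vFC interface in the VSAN
```

With storage traffic riding the same cable as data traffic, each server needs fewer adapters and fewer cables at rack time, which is the cost-saving the paragraph above points to.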
Read more on the options to consider for a next-generation data centre network.
Using virtual cluster switching to manage a flat network
The UCLA Laboratory of Neuro Imaging (LONI) maps the human brain, continually adding scanned data to a nearly petabyte-sized imaging database that is constantly accessed by about 1,000 researchers trying to understand and cure diseases such as Alzheimer's and schizophrenia. Hundreds of researchers are active at any given time, working with data sets from 20 MB to hundreds of gigabytes.
LONI recently implemented a non-blocking, flat Layer 2 network with all the racks of servers directly connected, but it turned to virtual cluster switching, a technology that lets groups of top-of-rack switches act and be managed as one through a virtual chassis, to manage its network more effectively.
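To make the virtual-chassis idea concrete, the following is a rough, illustrative sketch in the style of the CLI on VCS-capable data centre switches; exact commands vary by vendor and software release, and this should not be read as a literal configuration:

```
! Illustrative only: two top-of-rack switches join one logical fabric by
! sharing a fabric ID and taking distinct member (rbridge) IDs.
switch1# vcs vcsid 1 rbridge-id 1 enable
switch2# vcs vcsid 1 rbridge-id 2 enable
! Once the fabric forms, either switch shows the same membership view:
switch1# show vcs
```

The operational point is the last line: after joining, the group is configured and monitored through a single management view rather than switch by switch.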
Beyond high-performance data centres, the virtual chassis will play a central role in enabling management across the enterprise network as glass-housed corporate data centres split apart and become geographically dispersed. The downside to most of the cluster switching and virtual chassis options available today is the unclear interoperability between vendors' implementations.
Find out more on whether virtual cluster switching is the right option for your business in this case study.
Cloud networks require going flat and using Ethernet fabrics
Networking vendors argue that an important part of simplification and optimisation is making the network as flat as possible. Cloud automation -- the on-demand provisioning and deprovisioning of servers, storage, networking and applications -- is crucial to that goal.
Vendors differ in just how much their automation tools and interfaces depend on their hardware, and in particular on their proprietary switch operating systems, which is why networking is considered the laggard in auto-provisioning within the data centre.
However, using standards to add interoperability -- thereby easing the task of automating workload allocation within the data centre network -- or letting network administrators manage the networking infrastructure as a single switch with a single view of the fabric could help ensure that the move to Ethernet fabrics, and potentially open network software, goes smoothly.
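The "manage the fabric as a single switch" idea can be sketched in Python. Everything here -- the class and method names -- is hypothetical, invented for illustration, and does not correspond to any vendor's actual API; it only shows why one fabric-wide operation beats repeating the same change on every switch:

```python
# Hypothetical sketch: an Ethernet fabric presented as one logical switch.
# Provisioning a VLAN once applies it to every member top-of-rack switch,
# instead of an administrator (or script) configuring each box in turn.

class FabricSwitch:
    """One physical member of the fabric (e.g. a top-of-rack switch)."""
    def __init__(self, name):
        self.name = name
        self.vlans = set()

class EthernetFabric:
    """A single management view over all member switches."""
    def __init__(self, member_names):
        self.members = [FabricSwitch(n) for n in member_names]

    def provision_vlan(self, vlan_id):
        # One fabric-wide call replaces per-switch configuration.
        for switch in self.members:
            switch.vlans.add(vlan_id)

    def vlan_everywhere(self, vlan_id):
        # A migrating VM needs its VLAN reachable from every rack.
        return all(vlan_id in s.vlans for s in self.members)

fabric = EthernetFabric(["tor-1", "tor-2", "tor-3"])
fabric.provision_vlan(100)
print(fabric.vlan_everywhere(100))  # True: the VLAN exists on every rack
```

A single-view abstraction like this is also what makes the workload-allocation automation mentioned above scriptable: the orchestration layer calls one interface rather than three vendor CLIs.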
Read more on the different options available to take the innovation step into cloud automation.
This was first published in December 2011