In recent years, we’ve seen deployment models move on significantly from the siloed IT operations of the early 2000s, when separate server, storage and networking teams were the norm.
How did we get here? Why choose one over another? And who are the market leaders?
Before the advent of widespread server virtualisation, IT infrastructure was managed by separate teams. At the time, this made complete sense, with IT organisations looking to deploy best-of-breed components for each technology.
Server virtualisation radically changed that landscape, as infrastructure started to become more integrated. This change allowed the migration towards the converged and hyper-converged systems we see today.
Converged infrastructure is effectively a packaging process that delivers the offerings of a supplier (and partners) as a stack of hardware, pre-tested and validated by the supplier.
Converged offerings consist of server, storage and networking hardware, usually delivered as a single rack and sold as a single product.
From the customer’s perspective, there are multiple benefits. On the technical side, the supplier takes on the job of checking components all work together and will validate updates before releasing them to the customer. Converged systems usually include management software that delivers orchestration for hardware components and virtual machines (VMs).
From a financial perspective, the customer gets a single system, fully supported by the supplier as the single point of contact, making it easier to cost, amortise and deliver systems for specific business needs, such as email services and virtual desktops.
Having a single support contract simplifies operation of the hardware, removing the “blame game” that can arise in large, complex infrastructure deployments and providing just one throat to choke.
We have seen the development of two styles of converged infrastructure: the product stack, where the supplier provides and ships the hardware as a single product; and reference architectures.
The reference architecture model simply describes which supplier platforms have been tested and certified to work together, allowing the customer more purchasing choice in specific products and specifications but, at the same time, providing a validated system with supplier support.
Hyper-converged systems take the packaging process a step further by integrating the features of storage and server virtualisation together in a single hardware offering.
Networking is still delivered externally with dedicated switches, but much of the traffic between internal VMs flows over software-based networking in the hypervisor.
The basic building block in hyper-converged offerings is the server, or node, which combines processor, memory and storage.
Storage services are delivered out of the hypervisor that runs on the node, or in a virtual machine on the node. Resilience is achieved by replication of data across multiple nodes and so hyper-converged systems often have a minimum node-count in their initial deployment.
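The link between replication factor and minimum cluster size can be sketched as follows. This is an illustrative model, not any supplier's implementation; the replication factors and headroom figure are assumptions for the example.

```python
# Why hyper-converged clusters enforce a minimum node count: with a
# replication factor (RF) of 2, every block of data lives on two different
# nodes, so surviving a node failure and rebuilding protection afterwards
# needs at least RF + 1 nodes.

def minimum_nodes(replication_factor: int, rebuild_headroom: int = 1) -> int:
    """Smallest cluster that can place RF copies on distinct nodes and
    still have a spare node to rebuild onto after a failure."""
    return replication_factor + rebuild_headroom

assert minimum_nodes(2) == 3  # a typical entry-level cluster minimum
assert minimum_nodes(3) == 4  # RF3 for stricter availability requirements
```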
Storage is significantly simplified and, in most cases, integrated into the node, and hence is hidden from the administrator.
The deployment method for hyper-converged systems is much easier than traditional systems, with the addition of nodes being a case of “plug in and go”, with little additional configuration work.
From a cost perspective, scalability means systems can be scaled one node at a time, which is very attractive for small to medium-sized enterprises (SMEs).
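The appeal of one-node-at-a-time growth can be made concrete with a quick capacity sketch. The node size and replication factor below are assumptions for illustration, not a vendor SKU.

```python
# Illustrative scale-out economics: each node added contributes a small,
# predictable increment of usable capacity, rather than requiring a large
# up-front storage purchase. Figures are assumptions for the example.

NODE_RAW_TB = 10.0        # assumed raw capacity per node
REPLICATION_FACTOR = 2    # usable capacity = raw / RF

def usable_capacity_tb(nodes: int) -> float:
    """Usable cluster capacity after replication overhead."""
    return nodes * NODE_RAW_TB / REPLICATION_FACTOR

# Growing from three to four nodes adds 5TB usable - an increment an SME
# can budget for, instead of a whole new storage shelf.
assert usable_capacity_tb(4) - usable_capacity_tb(3) == 5.0
```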
Converged versus hyper-converged – when and why?
Both converged options look to simplify operations, so when would one system be more appropriate over another?
From a scalability perspective, converged systems are more suited to larger applications such as Oracle, Microsoft Exchange or SAP installations.
Dedicated storage provides for higher availability. To minimise the impact of device failure on application performance, converged systems can use fully featured storage with array functionality such as predictive failure, where a disk device is replaced in a controlled fashion.
By contrast, hyper-converged systems usually rely on node-based recovery that can generate significant network traffic from storage rebuilds.
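A back-of-envelope sketch shows why rebuild traffic hits small clusters hardest. The per-node rebuild throughput below is an assumed figure for illustration, not a measured vendor number.

```python
# Hedged estimate of how long a hyper-converged cluster takes to re-protect
# a failed node's data, assuming the rebuild is spread evenly across the
# surviving nodes at a fixed throughput each. All figures are assumptions.

def rebuild_time_hours(node_data_tb: float, surviving_nodes: int,
                       per_node_rebuild_gbps: float = 2.0) -> float:
    """Hours to re-replicate a failed node's data across survivors."""
    total_bits = node_data_tb * 8e12                      # TB -> bits
    aggregate_bps = surviving_nodes * per_node_rebuild_gbps * 1e9
    return total_bits / aggregate_bps / 3600

# Losing a 20TB node in a four-node cluster keeps the fabric busy far
# longer than the same failure in a 16-node cluster - one reason small
# clusters feel storage rebuilds more acutely.
small_cluster = rebuild_time_hours(20, surviving_nodes=3)
large_cluster = rebuild_time_hours(20, surviving_nodes=15)
assert small_cluster > large_cluster
```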
Looking at scalability from a different angle, converged systems tend to be designed to scale to a specific size with expansion being non-trivial.
In contrast, hyper-converged offerings are designed to grow with the addition of compute and storage resources in a cluster configuration. This can make hyper-converged systems more appropriate for smaller-scale deployments with smaller virtual machine requirements that expand over time.
Making the choice between converged and hyper-converged is a case of looking at the requirements of the applications being deployed. For smaller organisations, where dedicated storage and networking skills don’t exist, hyper-convergence is a great choice. But, where more control over the configuration is required – such as to dedicate or partition resources – then a converged system is a better option.
Market roundup – converged infrastructure
There is a large range of converged systems in the market, covering products and reference architectures.
Hitachi Data Systems offers converged systems that support three main hypervisor systems – VMware vSphere, Microsoft Hyper-V and Red Hat Linux.
The systems offer three configurations (UCP 2000, UCP 4000e and UCP 4000) that scale from 24 to 128 servers, with a maximum of 1,536GB of memory per server (2,048GB for UCP 2000). Network connectivity includes 10Gbps Ethernet and 2Gbps to 16Gbps Fibre Channel.
Storage for the systems is based on Hitachi VSP G-series. Hardware resources are managed through two software systems: UCP Director and UCP Advisor. These integrate into hypervisor management tools such as VMware vCenter to enable hardware resource provisioning.
HPE has a large range of converged systems designed to fit a variety of workloads. This includes Converged System 700, designed for mixed workloads and which uses HPE BL460c Gen9 blade servers and 3PAR StoreServ 7000 series storage.
Meanwhile, HPE Helion CloudSystem offerings are designed for cloud-based systems such as HPE’s Helion software (OpenStack and Stackato). HPE also now offers composable infrastructure through Synergy, a system that enables customers to combine hardware resources (storage, network, compute) to meet specific application requirements.
Since the merger, Dell EMC has an expanded portfolio drawn from both companies. Converged systems include Vblock systems, based on the original EMC VCE partnership with Cisco, and so include Cisco UCS servers and networking.
VxBlock systems use similar hardware but have started to introduce VMware NSX for networking. For storage, VxBlock/Vblock 350 systems use Dell EMC Unity arrays, 540 systems use XtremIO, and 740 systems use VMAX, all either as all-flash or hybrid (except XtremIO). Systems scale from two to 512 servers with up to 15PB of effective storage capacity.
NetApp has focused on reference architectures marketed under the FlexPod brand. Architecture offerings cover a wide range of systems, including Datacenter (for cloud-based workloads), Express (entry level), Select (Hadoop), Security, Enterprise Apps, VDI, Database and Cloud.
IBM has developed VersaStack, a set of reference designs, developed in conjunction with Cisco Systems. The systems use Cisco UCS servers, Cisco networking (Nexus and MDS) with IBM FlashSystem storage, IBM StorWize or IBM’s SAN Volume Controller (SVC). Each of the systems uses Cisco’s UCS Director software for hardware management and VMware vCenter with ESXi for virtualisation.
The Oracle Private Cloud Appliance provides up to 30 compute nodes per rack, with two 22-core Intel Xeon processors, 256GB DRAM (per node) and Infiniband in-rack connectivity. Shared storage is delivered by an Oracle ZS3-ES appliance with Infiniband connectivity and hybrid flash and HDD capacity.
Market roundup – hyper-converged infrastructures
Hyper-converged systems are available from startups and the incumbent hardware suppliers.
Nutanix pioneered the hyper-converged market and now offers a wide range of hardware systems across four main product sets (NX-1000, NX-3000, NX-6000 and NX-8000).
NX-1000 series models start with four nodes per appliance and a single Intel Xeon E5-2609 processor (eight cores), up to 256GB DRAM and three 3.84TB solid-state drives (SSDs).
At the high end, NX-8000 nodes offer Intel Xeon E5-2699v4 processors (44 cores), 1.5TB of DRAM and 24 SSDs (up to 1.92TB). Nutanix offerings can run VMware vSphere or Acropolis, Nutanix’s own hypervisor system.
Hitachi Data Systems offers four hyper-converged node configurations, based on hybrid or all-flash storage. V240 and V240F are 2U four-node configurations with a range of Intel Xeon configurations (maximum E5-2680 v4 with 14 cores) and up to 512GB of DRAM. Storage capacities vary up to 6TB raw (hybrid) or 19TB raw (all-flash).
V210 systems scale up to dual 22-core E5-2699 Xeons, 1.5TB DRAM and 38TB (flash) or 60TB (hybrid) per node. Both configurations scale from two to 64 nodes and support 10Gbps Ethernet networking.
HPE offers hyper-converged systems based on VMware vSphere, using HPE StoreVirtual VSA to deliver distributed storage. Each HPE Hyper Converged 380 node supports two Intel Xeon E5 processors (six to 18 cores) with up to 1.5TB of DRAM. A single node can support up to three storage blocks for a maximum of 25.2TB per node (a mix of HDD, SSD or hybrid), and up to 16 nodes in a cluster.
HPE recently acquired Simplivity, a hyper-converged startup. Currently, Simplivity is advertising systems based on white-box servers or hardware from Cisco, Dell and Lenovo. White-box OmniCube nodes provide a range of hybrid or all-flash offerings that scale up to dual Intel Xeon E5-2600v4 processors with 1,443GB of DRAM and 40TB of storage capacity per node.
Dell EMC hyper-converged systems include VxRail, which has five model series: G – general purpose, E – entry, V – VDI, S – storage dense, and P – performance intensive.
These are delivered as either 1U or 2U nodes with dual Xeon E5-2600 processors and from 64GB to 1.5TB of DRAM. All models – except S-Series – are available as all-flash or hybrid configurations. VxRAIL systems use VMware Virtual SAN for distributed storage.
NetApp recently entered the market with NetApp HCI, a hyper-converged offering that uses SolidFire technology to deliver clusters with dedicated storage and compute nodes. Three configurations are available (small, medium, large), with 16 to 36 cores and 256GB to 768GB of memory per compute node, and up to six drives per storage node (480GB to 1,920GB SSDs).
Cisco now offers hyper-converged systems using Springpath technology in its HX series of appliances. Cisco HyperFlex offers four node sizes – the HX220c M4 is a 1U node available in hybrid or all-flash configurations (23.2TB all-flash, or 480GB SSD plus 7.2TB HDD).
The HX240c is a 2U node, also available in hybrid or all-flash configurations (38.4TB all-flash, or 1.6TB SSD plus 27.6TB HDD). Essentially, moving up the range brings higher storage capacities and all-flash options.
Scale Computing’s HC3 range has several model groups: HC1000 for entry level; HC2000 for mid-range; and HC4000 for high-end requirements.
Each model type varies in storage, processor and network performance. The entry-level HC1100 has a single Intel Xeon E5-2603v4 CPU, 64GB of RAM and four 1TB SAS HDDs. This scales to the HC4150 with two Intel Xeon E5-2640v3, 384GB of DRAM and two 400GB SSDs.
A number of suppliers also offer their products as software-only, or bundled with a hardware appliance. These include Pivot3, Atlantis Computing and Maxta.