Software defined everything: Throwing more software into the mix

In theory, the move to software-defined everything makes sense. What are the practicalities of the "SD" approach?

Software-defined networks (SDN), software-defined storage (SDS), software-defined computing (SDC), software-defined datacentres (SDDC) -- the world is full of 'SDs'. It seems suppliers want to move away from any dependence on hardware differentiation and look to a more dynamic and responsive platform of software functionality.

At a theoretical level, the decoupling of software from specific hardware platforms -- software-defined everything, or SDE -- makes some sense.

A basic hardware platform can be built from commodity parts, with virtualisation and cloud computing being used to abstract the actual operational platform from the hardware.

The functions of the platform can then be carried out through software, and as the functions need to change to respond to technology and business market needs, only the software needs to change. No more fork-lift upgrades of the hardware; no more shoehorning in niche technology to do something 'clever'.

Of course, the world is never so simple, and the software-defined-everything world will be a part of the answer, not the answer itself. Technology will still continue to improve: today's servers will soon be overtaken in performance by tomorrow's; storage densities and speeds will continue to improve; and network capabilities, both in terms of bandwidth and how networks operate (such as the use of fabric networks), will still require hardware updates as part of a planned IT lifecycle management approach. But with the SD approach, moving software to new hardware platforms becomes much easier.

Software now has a much stronger part to play in how the overall platform operates than it used to. The use of systems management frameworks may (thankfully) be dead, but taking a holistic view of a complex, heterogeneous IT platform will still have to be done through a fully integrated set of software. And it is not just the management that matters -- the actual functions that the platform provides are also important, and may well see a strong move towards being more software defined.

Software just one ingredient

Let's take a look at the first SD to reach the market -- SDN. The Open Networking Foundation (ONF) originally tried to drive SDN as a means of moving towards a commoditised switch network, with everything being done at the software level through OpenFlow. However, service providers found that moving so many packets of data from the data plane up to software-based management and control planes introduced too much latency into systems, and they set up network functions virtualisation (NFV) as a parallel approach that aggregates functions to minimise latency.

Companies such as Cisco (with Insieme Networks) and Juniper (with Contrail) decided that a more hybrid approach would meet their customers' (and shareholders') needs more directly. Some functions are dealt with via software, while others are still dealt with by intelligence built into the network itself through network operating systems (Cisco IOS and Juniper Junos) and specialised application-specific integrated circuits (ASICs) and field-programmable gate arrays (FPGAs).
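To make the latency trade-off concrete, the sketch below uses the open-source Ryu framework and OpenFlow 1.3 (an illustrative assumption, not any supplier's reference code; the subnet and port numbers are made up). It installs a reactive table-miss rule that punts unmatched packets up to the software control plane -- the source of the latency service providers worried about -- alongside a proactive rule that keeps matched traffic entirely on the switch's data plane.

# Minimal sketch, assuming the open-source Ryu OpenFlow 1.3 framework.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class LatencyTradeOff(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser

        # Reactive rule: any packet with no matching flow is punted to the
        # software control plane -- flexible, but every table miss costs a
        # round trip to the controller (the latency problem described above).
        miss = [parser.OFPActionOutput(ofp.OFPP_CONTROLLER, ofp.OFPCML_NO_BUFFER)]
        dp.send_msg(parser.OFPFlowMod(
            datapath=dp, priority=0, match=parser.OFPMatch(),
            instructions=[parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, miss)]))

        # Proactive rule: traffic to 10.0.0.0/24 (an illustrative subnet) is
        # forwarded entirely in the data plane on port 2, never touching the
        # controller -- faster, but less dynamic.
        fwd = [parser.OFPActionOutput(2)]
        dp.send_msg(parser.OFPFlowMod(
            datapath=dp, priority=100,
            match=parser.OFPMatch(eth_type=0x0800,
                                  ipv4_dst=('10.0.0.0', '255.255.255.0')),
            instructions=[parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, fwd)]))

In a hybrid approach of the kind Cisco and Juniper favour, the latency-sensitive forwarding stays in the switch silicon while the reactive flexibility remains in software.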

Experimenting with new recipes

This is reflected elsewhere as well. Far from commoditising at the server layer, we are seeing more of a move towards a heterogeneous environment linked together with standardised application programming interfaces (APIs). The use of mixed processors, with Intel x86 central processing units (CPUs) being used alongside, for example, IBM's Power chips and low-power ARM processors, as well as graphics processing units (GPUs) from Nvidia and offload servers such as those from Azul Systems, is becoming more common.

Converged systems from IBM, Dell, HP, Nutanix and others are leading to intelligence being built into the system itself (for example, with IBM's CAPI connector, which provides hardware-enabled interconnect acceleration). Some intelligence within converged systems is being provided by software; however, a lot of it is down to intelligence within the system's engineering and in proprietary technologies. Use of proprietary approaches in a converged system is not a problem, as long as it does not adversely affect management and upgrade costs.

Although many functions are being abstracted into software, there still remains a strong part for hardware intelligence to play -- as long as it is in synergy with the software.

Working towards the perfect blend

It is a similar story in storage. The move from magnetic spinning disk to flash-based systems has heavily stressed existing storage management software, and more suppliers are realising that the interplay between storage, networks and servers is now a major issue.

For example, putting in place a fully flash-based storage array may not give the improvements in performance envisaged, as the network cannot deal with the input/output operations per second (IOPS) the storage array is capable of delivering. Changing the network so that it can deal with the IOPS then exposes the servers as the next bottleneck, underpowered and overwhelmed by the amount of data reaching the CPU.
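A back-of-envelope calculation shows the scale of the mismatch. The sketch below uses purely illustrative figures (an array assumed to deliver 500,000 IOPS at an 8KiB average I/O size), not any vendor's specifications.

# Back-of-envelope sketch of the storage/network mismatch; figures are assumptions.
def required_gbps(iops, block_size_kib):
    """Network throughput (Gbit/s) needed to carry a given IOPS rate."""
    bytes_per_sec = iops * block_size_kib * 1024
    return bytes_per_sec * 8 / 1e9

array_iops = 500_000   # assumed capability of an all-flash array
block_kib = 8          # assumed average I/O size

gbps = required_gbps(array_iops, block_kib)
print(f"{array_iops:,} IOPS at {block_kib} KiB needs ~{gbps:.1f} Gbit/s")
# ~32.8 Gbit/s -- enough to saturate a 10GbE or even 25GbE link, so the
# array's headline IOPS never reach the servers unless the network (and the
# CPUs consuming the data) are upgraded in step.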

EMC is trying to address this through its ViPR technology. Coming from one leg of the EMC Federation of companies (which includes EMC II, VMware and Pivotal, alongside RSA and VCE), ViPR is part of an overall SDDC approach that aims to ensure all parts of the hardware platform are managed to an optimum level. However, much of the intelligence is still in EMC storage arrays, VMware hypervisors and VCE Vblocks -- it is not a pure software play.

New-era storage suppliers, such as Pure Storage, Nimble and Violin Memory, have realised that the future is not just in the array, but in the intelligence and capabilities of the software they wrap around the arrays. A battle is going on between these suppliers to ensure each has software that is ready for tomorrow's IT platforms, providing intelligence to customers moving from physical to virtual environments.

Getting the timings right

Workload management becomes a major issue. With a heterogeneous IT platform offering different capabilities -- different CPUs, cloud environments and network technologies -- ensuring that the right workload is on the right platform at the right time is far from trivial.

IBM has shown the way with its workload manager as used within its zEnterprise system, but others, such as Teradata, are also showing how specific workloads can be managed in a more automated manner against a highly dynamic resource pool.
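The sketch below is a deliberately naive illustration of what such a placement decision involves; the platform names, attributes and selection rule are hypothetical, not how IBM's or Teradata's workload managers actually work.

# Naive sketch of "right workload, right platform, right time"; all names and
# weights are hypothetical illustrations, not any supplier's actual scheduler.
from dataclasses import dataclass

@dataclass
class Platform:
    name: str
    cpu_arch: str        # e.g. "x86", "power", "arm"
    free_cores: int
    free_ram_gb: int
    cost_per_hour: float

@dataclass
class Workload:
    name: str
    preferred_arch: str
    cores: int
    ram_gb: int

def place(workload, platforms):
    """Pick the cheapest platform that fits and matches the preferred architecture."""
    candidates = [p for p in platforms
                  if p.cpu_arch == workload.preferred_arch
                  and p.free_cores >= workload.cores
                  and p.free_ram_gb >= workload.ram_gb]
    return min(candidates, key=lambda p: p.cost_per_hour, default=None)

pool = [Platform("on-prem-x86", "x86", 16, 64, 0.0),
        Platform("public-cloud-x86", "x86", 64, 256, 1.20),
        Platform("arm-microserver", "arm", 32, 32, 0.05)]

print(place(Workload("analytics-batch", "x86", 8, 32), pool).name)  # on-prem-x86

Real workload managers layer service levels, data locality and time-based policies on top of this kind of basic fit-and-cost check.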

Others that want to play heavily in the software-defined world are the old systems management companies, such as CA and BMC. Reading the writing on the wall that the physical IT platform's days are numbered, both companies are moving rapidly to present portfolios of software-based tools that work with abstract platforms such as private, public and hybrid clouds to optimise performance. With no axe to grind from the hardware point of view, CA and BMC are making far more of a pure software play -- but they will need to ensure they make the most of the hardware-based intelligence offered by the main players.

Turn up the heat

As the hardware and software platform becomes more complex, it is important to regard it as a single entity and manage it accordingly. 

Here, expect to see datacentre infrastructure management (DCIM) suppliers become more vocal. The likes of Nlyte, Future Facilities, Schneider and Emerson have software that is already treading on the toes of IBM, CA and BMC. Although some see it as a synergistic play between DCIM and 'systems management' (or whatever it should now be called), an integration dashboard from a DCIM supplier could well make a major play in the market.

Another area where the software-defined approach is having a big effect is in the DevOps world. The only way that DevOps can work effectively is through a heavily automated and software-based approach. Companies such as Serena Software, IBM and BMC offer software to make DevOps work better. Delphix offers a means of software-defining virtual data, meaning that DevOps developers and testers can work against near real-time data without affecting the operational environment.
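The sketch below illustrates the idea of software-defined test data; the VirtualDataService class and its methods are hypothetical stand-ins, not Delphix's (or anyone else's) actual API.

# Hedged sketch of software-defined test data; the class and methods are
# hypothetical stand-ins, not a real supplier's API.
import contextlib
import uuid

class VirtualDataService:
    def provision_copy(self, source_db, mask_pii=True):
        """Pretend to create a thin, masked virtual copy of a production database."""
        copy_id = f"{source_db}-vcopy-{uuid.uuid4().hex[:8]}"
        print(f"provisioned {copy_id} (masked={mask_pii})")
        return copy_id

    def destroy_copy(self, copy_id):
        print(f"destroyed {copy_id}")

@contextlib.contextmanager
def test_dataset(service, source_db):
    """Give a CI test run its own near real-time dataset, then throw it away."""
    copy_id = service.provision_copy(source_db)
    try:
        yield copy_id
    finally:
        service.destroy_copy(copy_id)

with test_dataset(VirtualDataService(), "orders_production") as db:
    print(f"running integration tests against {db}")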

The proof is in the eating

Where does this leave us, then? A pure software-defined world, based on commoditised hardware, doesn't look like it will happen. A hybrid world, where certain functions are abstracted through to software, but certain intelligence remains with the underlying hardware, seems a more likely outcome.

From a buyer's point of view, the task is to tread the narrow line between an open (but sub-optimal) and a proprietary (but locked-in) approach. Ensure that what you implement is capable of embracing new technologies and new suppliers, yet will meet the business's needs around performance and flexibility. A system that abstracts effectively, but uses suppliers' APIs to make the most of the underlying capabilities, will work best.
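As a closing illustration, the sketch below shows that principle as code: callers program against a common interface, while an adapter exploits a supplier-specific capability where one exists. The class names are illustrative, not real products.

# Hedged sketch: abstract through a common interface, but exploit a
# supplier-specific API where one exists. Class names are illustrative.
class GenericArray:
    def snapshot(self, volume):
        print(f"generic snapshot of {volume} via standard API (portable, slower)")

class VendorArray(GenericArray):
    def snapshot(self, volume):
        # Hypothetical vendor fast path, e.g. a hardware-assisted snapshot.
        print(f"hardware-assisted snapshot of {volume} via vendor API")

def snapshot_all(arrays, volume):
    """The calling code only ever sees the common interface."""
    for array in arrays:
        array.snapshot(volume)

snapshot_all([GenericArray(), VendorArray()], "vol01")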

'Software defined' will continue to be important; 'software-defined everything' may be a misnomer that can trap the unwary.

Clive Longbottom is the founder of analyst firm Quocirca

This was first published in January 2015
