The dark side of software-defined datacentres

About a year ago, at VMworld 2012, virtualisation supplier VMware coined a term for what it saw as the next-generation datacentre – the software-defined datacentre, or SDDC.

Hot on the heels of software-defined networking (SDN) and just before software-defined storage (SDS), SDDC looks like it heralds a new way of dealing with technology hardware as a platform and the applications that run on top of it.


A software-defined datacentre is an IT facility where the elements of the infrastructure – such as networking, storage, CPU and security – are virtualised and delivered as a service. The operation of the infrastructure is entirely automated by software.

From a positive point of view, there has to be abstraction between the hardware and the applications in a virtualised environment. If there isn’t, then virtualisation cannot deliver on its promise; there will still be dependencies between the application and the physical hardware. Certainly, cloud cannot be implemented in any shape or form without such an abstraction in place.

An SDDC should enable this to be the case. All the smarts of putting together composite applications, rolling them out, monitoring them and fixing any problems need to be carried out above the physical platform itself. 
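
As a purely illustrative sketch – not any particular supplier’s API – the idea is that a composite application is declared in terms of what it needs, and an orchestration layer, rather than an administrator, decides where it lands on the physical estate. The names, fields and toy scheduler below are invented for the example:

```python
# Illustrative only: a hypothetical, supplier-neutral declaration of a
# composite application, placed by an orchestration layer rather than
# by an administrator picking physical hosts by hand.

composite_app = {
    "name": "order-processing",
    "tiers": [
        {"role": "web", "vcpus": 2, "ram_gb": 4, "instances": 3},
        {"role": "db", "vcpus": 8, "ram_gb": 32, "instances": 1},
    ],
    "network": {"segment": "app-tier", "firewall": "web-to-db-only"},
    "storage": {"tier": "ssd", "replicas": 2},
}

def provision(app, hosts):
    """Toy placement: put each instance on the first host with spare vCPUs."""
    placements = []
    for tier in app["tiers"]:
        for _ in range(tier["instances"]):
            host = next(h for h in hosts if h["free_vcpus"] >= tier["vcpus"])
            host["free_vcpus"] -= tier["vcpus"]
            placements.append((tier["role"], host["name"]))
    return placements

hosts = [{"name": "host-01", "free_vcpus": 16}, {"name": "host-02", "free_vcpus": 16}]
print(provision(composite_app, hosts))
```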

For many organisations, this will mean bringing together existing solutions from the physical environment with new capabilities in the virtual one, whether through in-house development or acquisition.

For example, CA has recently acquired Layer 7 Technologies and Nolio, which will help build on its previous acquisitions of 3Tera and Nimsoft that filled gaps in Unicenter and Clarity. IBM continues to acquire companies that will help it build on its Tivoli systems management platform.

Others will take a “virtual down” approach, such as VMware itself with its vCloud Suite and the ecosystem of other software around it.

However, this does lead to issues that may not be apparent at the outset. 

A given in a software-defined datacentre is that the physical and virtual assets have to be managed as a single entity. It is no use having one set of tools flagging up a problem in the virtual environment if the problem is down to a physical fault, yet the tools cannot identify that fault. If the tools for the physical and virtual environments are not fully integrated, they are of little use.
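
As a hedged illustration of why that integration matters, assume a management layer that can see both a virtual-to-physical mapping and a feed of hardware faults; correlating the two is what turns a vague virtual alert into an actionable physical one. The host names, alert format and mapping below are invented for the example:

```python
# Illustrative only: correlating a virtual-machine alert with a physical
# fault, assuming the SDDC management layer can see both layers at once.

vm_to_host = {"vm-sales-07": "host-rack3-u12", "vm-hr-02": "host-rack1-u04"}

hardware_faults = {
    "host-rack3-u12": "degraded NIC on 10GbE uplink",
}

def diagnose(vm_alert):
    """Map a virtual alert back to any known fault on the underlying host."""
    host = vm_to_host.get(vm_alert["vm"])
    fault = hardware_faults.get(host)
    if fault:
        return f"{vm_alert['vm']}: '{vm_alert['symptom']}' traced to {host} ({fault})"
    return f"{vm_alert['vm']}: no physical fault found; investigate the virtual layer"

print(diagnose({"vm": "vm-sales-07", "symptom": "packet loss"}))
```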

The stage that standards are at can also be a problem. 

SDDC was first mentioned by VMware, and it has a vested interest in making sure that what it puts out as SDDC software supports its own ESX and ESXi hypervisors better than any other hypervisor.

Others will be trying to be as agnostic as possible, supporting as many different environments as they can – but they may be hampered by the different levels of functionality available on different platforms as virtualisation platforms such as ESX, Microsoft Hyper-V and Citrix XenServer continue to mature.

These problems may not just be down to working across multiple hypervisors. Take a modern system – a vBlock, Cisco UCS (unified computing system) or vStart system. These are highly engineered systems that have a lot of built-in management capability.  Any over-arching SDDC system will have to be able to deal with these systems – or bypass them completely, which negates a lot of the value of going with an engineered solution in the first place.

Many suppliers are looking at how graphics processing units (GPUs) can be used within their systems as well, allowing certain types of workload to be offloaded to a platform that is more suited to their needs.  

Others, such as Azul Systems, have highly specific silicon that can be used to run Java-based workloads natively. Can a single SDDC system really be expected to embrace all of these different platforms?

Then, there’s what IBM is up to with its PureFlex systems. 

Here, not only is there an engineered system, but there are also multiple types of silicon – x86 and Power as the main constituents – and this could expand to include mainframe CPUs. IBM runs its own intelligent workload manager within PureFlex – any SDDC system will need to be able to operate alongside such capabilities.

The impact of the other “SDx” technologies – software-defined everything – also has to be addressed.

In an ideal situation, an SDDC system would cover everything: CPUs, storage and networks. However, SDN is ploughing its own furrow and will evolve at a different speed to other aspects of a software-defined datacentre.

Software-defined storage is, for now, a niche for a few small storage players, but it is likely to grow in importance as groups such as the Storage Networking Industry Association (SNIA) bring out more standards around cloud storage.
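
One such standard already exists in SNIA’s Cloud Data Management Interface (CDMI), which defines an HTTP-based way of reading and writing cloud storage and querying what a system can do. As a minimal sketch of what a standards-based call might look like – the endpoint URL is invented, while the header and media type are taken from the CDMI specification:

```python
# Illustrative only: asking a CDMI-compliant cloud storage system what it
# is capable of. The endpoint URL below is hypothetical.
import requests

CDMI_ROOT = "https://storage.example.com/cdmi"  # hypothetical endpoint

response = requests.get(
    f"{CDMI_ROOT}/cdmi_capabilities/",
    headers={
        "X-CDMI-Specification-Version": "1.0.2",
        "Accept": "application/cdmi-capability",
    },
)
response.raise_for_status()
print(response.json()["capabilities"])
```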

The big issue for SDDC is not that it is a bad idea. In fact, it is a brilliant idea and a necessity if cloud is to work the way it should. The problem is more down to there being far too many agendas out there in supplier-land, and that these suppliers (as always) see little value in adopting and abiding by any standard 100%.

Indeed, as with so many standards in the past, expect to see announcements from suppliers saying they will support and adopt SDDC – and then adding extra functionality that only works with their own systems.

Eventually, it is likely that the software-defined datacentre will settle down as a superset of management and implementation functions that suppliers write to, each providing greater functionality through their own tools that plug into whatever system is there to provide oversight of the whole.

It may not be the most elegant technical solution – but it should work and meet the main needs of suppliers and customers alike.


Clive Longbottom is a service director at UK analyst Quocirca



This was first published in May 2013

 
