The SDDC is a concept in its infancy and, as such, IT managers would be wise to adopt interim offerings and build on them as the technology advances.
The software-defined datacentre (SDDC) is an evolving architectural and operational philosophy, not a product you can buy with a demonstrated return on investment (ROI).
The SDDC vision is built on three fundamental architectural pillars: virtualisation at all layers; orchestration and automation; modular standards-based infrastructure.
Traditional infrastructure provisioning and management methods are not enough to support the frequent changes required for massive and dynamic systems of engagement.
SDDC architectures hold the promise of simplification and the benefit of hiding the complexities of infrastructure provisioning and management.
The SDDC software layer provides visibility into physical and virtual resources and implements a spectrum of infrastructure management, from streamlined, operator-mediated interventions to automated, policy-driven provisioning and adaptation based on the demand patterns of the underlying infrastructure – for example, servers, storage, network, power and cooling.
Start with early building blocks
Today’s technology infrastructure management practices are over-complicated, posing a bigger barrier than the technology itself. IT managers should not wait for a “big bang” approach to simplify this complexity. Significant value can be gained from interim offerings, and practices can then evolve alongside technology innovations.
To prepare for the adoption of SDDC, enterprises must understand their requirements for physical and virtual resources.
In discussions with technology suppliers and their customers, dialogue about SDDC often starts and ends with orchestration tools and virtualisation. Though virtualisation is an important foundation for software-defined environments, a true SDDC environment should also be able to discover bare-metal servers, storage and network devices, and converged infrastructure environments, as well as integrate workload orchestration and account for the power and cooling situation in the datacentre.
The reality of today’s market is that no product suites include all of the required SDDC attributes, but businesses should at least be aware of where their requirements do not match supplier offerings.
As a first step towards the SDDC journey, IT managers should identify the applications and environments that will need on-demand provisioning and dynamic scale-up and scale-down capabilities. If deploying new workloads and provisioning resources on demand is a strong business case, they can work with suppliers such as Cisco, EMC, HP, VMware and others to implement SDDC in the enterprise environment.
In a Computer Weekly article, Forrester analyst Andre Kindness wrote that software-defined network (SDN) products and concepts will need five years to mature enough for enterprises to use them in production. There is a lot of work to be done to tie the components together and fit them into other management systems, orchestration software, hypervisor management solutions and Layer 4 to Layer 7 services.
Software-defined offerings for compute and storage are more mature than for networks. Software-defined storage is an important element of the SDDC picture, with new entrants such as Maxta, Nexenta, Atlantis Computing, Sanbolic, and offerings from established players such as VMware’s VSAN and EMC’s ViPR redefining how storage is provisioned and accessed. Additionally, new hyperconverged entrants such as Nutanix and SimpliVity are redefining expectations for scalable integrated infrastructure and software stacks.
Explore new ways of provisioning IT architecture
Companies are starting to use private and public cloud for their mainstream applications, with business line metering and billing becoming more common. With such economic controls, IT managers will need to release unused resources almost instantaneously.
This adds to the complexities already engendered by the provisioning of new resources. SDDC is not the same as cloud computing, but it is a necessary foundation of any cloud service. As such, SDDC technology will increasingly focus on the automation of the complete lifecycle of cloud procurement.
Also, in the hybrid environment, management complexity has increased to the point where it is simply not humanly possible to follow traditional technology management practices for the underlying infrastructure. To manage complexity, you must have an abstraction layer in which software takes control of orchestration, with minimum manual intervention.
IT managers should identify the applications and services that are supporting customer-centric innovation (systems of engagement), applications that need to be replicated and copied to test environments frequently, and the application and service containers that need to be redeployed quickly (new instances) for new business units.
These applications will be the most suitable candidates for testing SDDC for the enterprise. Cloud service providers and colocation providers should explore SDDC technology to reduce the infrastructure provisioning time when taking on new customers and for scaling capacity.
There is a need to build hybrid cloud orchestration capabilities to handle the tsunami of data that will be generated by digital business, the internet of things and big data. CIOs should press suppliers to deliver automation driven by simpler policies, making the suppliers do the hard work and hiding the complexity inherent in all SDDC-capable automation tools. Automation that requires heavy human involvement contradicts the very principle of automation.
Team up for application development
The real benefit of SDDC will come when application developers can relate the application performance to infrastructure components and dynamically provision new infrastructure components where there is a performance bottleneck.
For example, if there is high network utilisation in one datacentre, can the application provision a clone of the application in another datacentre and add the new node dynamically in the load balancer?
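The clone-and-rebalance scenario above can be sketched in a few lines. This is illustrative only: the metrics feed, orchestration call and load balancer here are hypothetical in-memory stand-ins, not a real product API.

```python
# Hypothetical sketch: detect a saturated datacentre, clone the app into
# a less-loaded one and register the new node with the load balancer.

NETWORK_THRESHOLD = 0.85  # assumed utilisation level that triggers a clone


class LoadBalancer:
    """Stand-in for a load balancer's node pool."""
    def __init__(self):
        self.nodes = []

    def add_node(self, node):
        self.nodes.append(node)


def provision_clone(orchestrator, app, datacentre):
    """Hypothetical orchestration API call that clones an application."""
    orchestrator.setdefault(datacentre, []).append(app)
    return f"{app}@{datacentre}"


def rebalance(metrics, orchestrator, balancer, app):
    # Is any datacentre's network above the saturation threshold?
    hot = [dc for dc, util in metrics.items() if util > NETWORK_THRESHOLD]
    if not hot:
        return None
    # Clone the application into the least-utilised datacentre ...
    target = min(metrics, key=metrics.get)
    node = provision_clone(orchestrator, app, target)
    # ... and add the new instance to the load balancer dynamically.
    balancer.add_node(node)
    return node


metrics = {"dc-london": 0.92, "dc-dublin": 0.40}
orchestrator, balancer = {}, LoadBalancer()
new_node = rebalance(metrics, orchestrator, balancer, "webshop")
print(new_node)  # webshop@dc-dublin
```

In practice the metrics would come from a monitoring system and the provisioning step from an orchestration platform's API, but the control loop is the same shape.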
An accompanying transition for most organisations will be the move to a DevOps (development and operations) style of deployment, as application lifecycles and the time between releases compress. DevOps is normally attributed to application software, but the same principles and techniques also apply to system software, the heart of SDDC.
IT managers should unify their organisation’s full service development by combining infrastructure engineering and application development into a consolidated design process. SDDC is not about buying a piece of software or hardware – you will need architects and developers who understand how to harness application programming interface (API) based technology infrastructure orchestration. To harness the real benefits of an SDDC, IT leaders will need to empower their application developers and let them consume datacentre resources without compromising on the security, compliance and resiliency aspects of infrastructure.
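Self-service consumption with guardrails, as described above, can be sketched as a developer-facing provisioning call that is checked against policy before anything is created. The policy model and names here are assumptions for illustration, not a specific product's API.

```python
# Hypothetical sketch: an API-driven capacity request that enforces
# security/compliance policy before provisioning resources.

COMPLIANCE_POLICY = {
    "allowed_regions": {"eu-west", "eu-central"},  # e.g. a data-residency rule
    "max_cpus": 32,                                # per-request quota
}


def request_capacity(region, cpus, policy=COMPLIANCE_POLICY):
    """Developer-facing self-service call, policy-checked on every request."""
    if region not in policy["allowed_regions"]:
        raise PermissionError(f"region {region!r} violates residency policy")
    if cpus > policy["max_cpus"]:
        raise PermissionError(f"{cpus} CPUs exceeds quota of {policy['max_cpus']}")
    # Only a compliant request reaches the (here, simulated) provisioning step.
    return {"region": region, "cpus": cpus, "status": "provisioned"}
```

The point is the separation of concerns: developers consume resources through the API, while the infrastructure team owns the policy that every request passes through.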
Datacentre management within SDDC
It will be important to correlate the IT workload with the physical asset management layer. SDDC is not just about a policy-based approach for compute, storage and networks. IT managers should include feeds from datacentre infrastructure management (DCIM) to optimise the use of physical resources and govern workload movements on the load patterns across all of the relevant resources, including power and cooling.
It is important that they incorporate this broader perspective as loads span server rooms, datacentres and geographies. If SDDC does not address the physical infrastructure component of the datacentre, it will fall short of its true potential.
Anyone operating a datacentre of more than approximately 250kW will probably benefit from a unified solution for physical infrastructure, such as power, cooling and asset management. This needs to be integrated with the management of virtual resources to assist in workload orchestration for dynamic services, including any hybrid cloud environments. Integration between DCIM supplier solutions and management products for the software-defined assets is still a work in progress, but all major DCIM suppliers are making investments in integration with leading virtualisation management platforms. Horizon capabilities include automated power-down and power-up of physical infrastructure in response to workload requirements and automation policy requirements.
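The DCIM integration described above can be sketched as a placement function that scores candidate hosts on both virtual-resource utilisation and physical headroom. The metric names, weights and thermal limit are illustrative assumptions, not values from any DCIM product.

```python
# Hypothetical sketch: workload placement that blends compute utilisation
# with DCIM-style power and cooling feeds.

def placement_score(host):
    """Lower is better; hosts near power or thermal limits are penalised."""
    cpu_free = 1.0 - host["cpu_util"]
    power_headroom = 1.0 - host["power_kw"] / host["power_cap_kw"]
    # Disqualify hosts whose inlet temperature breaches the (assumed) limit.
    if host["inlet_temp_c"] >= 27:
        return float("inf")
    # Weighted blend of compute and power headroom (weights are illustrative).
    return -(0.6 * cpu_free + 0.4 * power_headroom)


hosts = [
    {"name": "rack1-h3", "cpu_util": 0.30,
     "power_kw": 7.5, "power_cap_kw": 10, "inlet_temp_c": 24},
    {"name": "rack4-h1", "cpu_util": 0.10,
     "power_kw": 9.8, "power_cap_kw": 10, "inlet_temp_c": 29},
]
best = min(hosts, key=placement_score)
print(best["name"])  # rack1-h3
```

Note that the second host is idle but runs hot, so a purely compute-centric scheduler would pick it; folding in the DCIM feed reverses that decision, which is exactly the point of the integration.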
This is an extract from the Forrester research report: The software-defined datacenter is still a work in progress – Tools and technology: The infrastructure transformation playbook (August 2014), by Richard Fichera. Fichera is a vice-president and principal analyst at Forrester Research and serves the information needs of infrastructure and operations leaders.