The datacentre vision of just a decade ago was focused on a consolidated infrastructure located in a single datacentre – with a hot backup site – consisting of a small number of highly scalable servers servicing a large application pool.
Today, IT organisations are faced with the task of optimising that legacy infrastructure and aligning it closer to fast-changing business strategies.
IT must provide solid change control and all the predictability of traditional IT, while also providing cloud-like speed for the new world of mobile, big data, and cloud-native apps.
To maximise agility and minimise complexity, CIOs must bring together disparate public and private cloud environments to create a centrally managed and optimised hybrid cloud.
Resources must be allocated efficiently with simplified and integrated management. With DevOps, many new apps are being created on the go, so the corporate IT infrastructure must also reduce error-prone manual processes and automate IT operations.
Success is measured by the ability to deliver the benefits of software-defined infrastructure leveraging both internal and third-party cloud services. Datacentre remodelling has both software and hardware components.
Why upgrade legacy at all?
Legacy platforms in industry verticals such as finance and insurance survive because of the risk and cost of replacing them. Replacement projects can fail, with follow-on business losses, lost confidence in the IT department and threats to people’s careers. The time and cost involved in system testing and the prospect of a big user retraining programme must be factored in. Against this, a stable platform with tried-and-tested processes remains appealing.
However, recent research from Temenos in the banking sector found that 14% of costs in banking are IT-related compared with a cross-industry average of 7%. This is caused by multiple factors including redundant, outdated, and/or siloed applications. Something has to give.
Some may choose to mix old and new using tools such as IBM’s WebSphere MQ message broker, which provides a convenient way of passing messages between applications running under incompatible operating systems. Similarly, virtualisation can disaggregate legacy systems software from the hardware on which it was originally designed to run, allowing users to replace their hardware and consolidate legacy systems on a single server.
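The decoupling a message broker provides can be sketched in a few lines. The sketch below uses Python’s standard-library queue in place of a real broker such as IBM MQ, so the application names, queue and message fields are illustrative assumptions rather than a real MQ API:

```python
import json
import queue

# A minimal sketch of broker-style integration: two applications that share
# no code or platform exchange self-describing messages through a queue.
# Here queue.Queue stands in for a real broker; field names are hypothetical.
broker = queue.Queue()

def legacy_app_send(account_id, balance):
    """The legacy system publishes a message instead of calling the new app."""
    broker.put(json.dumps({"account": account_id, "balance": balance}))

def modern_app_receive():
    """The new application consumes messages at its own pace."""
    msg = json.loads(broker.get())
    return msg["account"], msg["balance"]

legacy_app_send("ACC-1001", 250.0)
account, balance = modern_app_receive()
```

Because neither side calls the other directly, each can be replaced or rehosted without changing its counterpart, which is the point of the mix-and-match approach.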
The cost of such mix-and-match products includes continued information silos, greater bandwidth consumption, application security problems caused by a lack of software patching, and the need to maintain legacy skillsets. Not many young IT people want to get into Cobol programming or Windows NT4 these days. At some point, the cost and inconvenience of maintaining a legacy system will force even banks to remodel their datacentres.
All businesses are data-driven now, and for many organisations customer-generated data is being created more quickly than it can be analysed. Any company considering internet-of-things data gathering to facilitate better decisions and business growth will need to invest significant effort in real-time analysis and availability. The challenge is to clean, manage and store this data, visualise it so it is easy to understand, and automate data processes so they run efficiently and without error.
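The clean-and-automate step can be illustrated with a short sketch. Everything here is an assumption for illustration: the sensor-style field names, the plausible-value range and the sample records are hypothetical, not a real pipeline:

```python
# A hedged sketch of automated data cleaning: raw records are validated and
# normalised without manual intervention, then summarised for analysis.
def clean(records):
    """Drop malformed readings; keep only plausible numeric values."""
    cleaned = []
    for rec in records:
        try:
            value = float(rec["temp_c"])
        except (KeyError, TypeError, ValueError):
            continue  # discard records that cannot be parsed
        if -50.0 <= value <= 60.0:  # plausible sensor range (assumption)
            cleaned.append({"sensor": rec.get("sensor", "unknown"),
                            "temp_c": value})
    return cleaned

raw = [
    {"sensor": "s1", "temp_c": "21.5"},
    {"sensor": "s2", "temp_c": "n/a"},  # garbled reading, dropped
    {"sensor": "s3", "temp_c": 999},    # out of range, dropped
]
good = clean(raw)
average = sum(r["temp_c"] for r in good) / len(good)
```

Running such validation automatically, rather than by hand, is what keeps high-volume data processes efficient and error-free.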
Data visualisation tools and dashboards, such as those provided by Oracle BI Cloud Service, IBM Watson Analytics, SAP Lumira and Microsoft SSRS (SQL Server Reporting Services) in Azure, can identify patterns and underpin reliable data-driven decisions. This way, companies can find the insights that improve business performance and strengthen customer relationships.
More and more of the corporate datacentre is being virtualised across all servers to ensure seamless merging between on-site private computing and off-site public cloud. IT now faces the challenge of managing and monitoring swift infrastructure changes driven by virtualisation. Typically, this involves consolidation, automation and orchestration, workload replatforming and migration. Companies see their leading-edge peers driving increased efficiency and performance across their entire enterprise with these strategies. But how do they make it happen?
The good news is that corporate management is often willing to allocate significant resources to this transition, and a host of systems integrators are ready to provide support encompassing assessment, design, build, testing, implementation, support and managed cloud transitions. What are the key steps in this transition, and how is it managed?
Taking the right steps
To address the key question “how can we use the cloud?”, corporate management must formulate its cloud and data strategy (key phrases could be product and tech innovation, customer insight and market expansion), soliciting input from the IT and networking groups, as well as lines-of-business product and marketing executives, to unite operational and business insights from IT data in real time.
Directly involving lines of business is important because more and more application deployment projects are initiated directly by them and circumvent the IT department – so-called shadow IT.
The cloud and data strategy should aim to transform business decisions, enhance customer relationships and create new revenue opportunities in line with corporate governance, risk and compliance policies.
Any datacentre remodelling strategy must also ensure that the IT group has the tools to monitor, analyse and automate IT operations at a level that supports cross-application data analysis. The IT and network group audits and supports the business needs for big data and analytics infrastructure by providing the right data mining, warehousing and business intelligence capabilities.
Using reference models
The “customer journey” is often used to describe the ongoing relationship between a business and its customers. The corporate cloud and data strategy must support this customer dialogue any time, any place, on any device and across any channel, and deliver value-added services in real time. When planning datacentre remodelling, it is a good idea to use a reference model to ensure consistency and comprehensiveness. There are many vertical industry reference models available from the leading cloud platform providers: VMware, Microsoft and OpenStack providers such as Red Hat, IBM and Hewlett Packard Enterprise (HPE).
But technology is only the first step. The key enabler to driving business value from the cloud is people and processes. Companies may need cloud managed services to transform their operational service. Hybrid cloud offers the best of both worlds, optimising existing datacentre space on-premise, with the ability to burst to public cloud when business scenarios demand it.
Datacentre remodelling also has a hardware side. Processing and connectivity constraints determined by limits in power, cooling and space, as well as pressure to lower capital expenditure, are forcing firms to converge their infrastructure and move away from siloed servers, storage, data and processes to use available power and space more efficiently.
This converged infrastructure groups multiple IT components (servers, data storage devices, networking equipment and software) into a single, optimised on-site computing package for IT infrastructure management, automation and orchestration. With a centrally managed pool of compute, storage and networking resources, capacity can be shared across a wider range of corporate applications, and policy-driven processes can be used to manage these shared resources.
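Policy-driven management of a shared pool can be sketched simply. The sketch below is a conceptual illustration, not a real converged-infrastructure API; the pool sizes, application names and the 16-CPU cap are assumptions:

```python
# A minimal sketch of policy-driven allocation from a converged resource
# pool: applications request capacity and a policy decides whether the
# shared pool may grant it.
class ResourcePool:
    def __init__(self, cpus, storage_tb):
        self.free_cpus = cpus
        self.free_storage_tb = storage_tb

    def allocate(self, app, cpus, storage_tb, policy):
        """Grant the request only if the policy and remaining capacity allow."""
        if not policy(app, cpus, storage_tb):
            return False
        if cpus > self.free_cpus or storage_tb > self.free_storage_tb:
            return False
        self.free_cpus -= cpus
        self.free_storage_tb -= storage_tb
        return True

# Example policy (assumption): no single application may claim over 16 CPUs.
def cap_policy(app, cpus, storage_tb):
    return cpus <= 16

pool = ResourcePool(cpus=64, storage_tb=100)
granted = pool.allocate("billing", cpus=8, storage_tb=10, policy=cap_policy)
denied = pool.allocate("analytics", cpus=32, storage_tb=5, policy=cap_policy)
```

Encoding the rules as policies, rather than as manual tickets, is what lets one shared pool serve many applications safely.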
HPE coined the term composable infrastructure to denote its hyper-convergence approach to datacentre architecture (now augmented with its recent SimpliVity acquisition). This offers the ability to compose and recompose fluid pools of compute, storage and fabric for any application or workload. In a similar vein, Cisco extends the functionality of its Unified Computing System management with a software-defined infrastructure (SDI) that treats infrastructure as code, disaggregating compute resources so they can be programmed and managed automatically and more efficiently. In this way, infrastructure resources become fluid pools that can be composed dynamically.
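The compose-and-recompose idea can be expressed as code. The sketch below is a conceptual illustration of treating infrastructure as code, not the API of HPE Synergy or Cisco UCS; the pool names, sizes and workload spec are assumptions:

```python
# Desired infrastructure is declared as data; composing a workload draws
# compute, storage and fabric from fluid shared pools, and decomposing it
# returns that capacity for reuse.
POOLS = {"compute": 32, "storage_tb": 200, "fabric_gbps": 400}

def compose(workload, spec):
    """Reserve resources for a workload from the shared pools, or fail whole."""
    if any(spec[k] > POOLS[k] for k in spec):
        raise RuntimeError(f"insufficient capacity for {workload}")
    for k in spec:
        POOLS[k] -= spec[k]
    return {"workload": workload, **spec}

def decompose(instance):
    """Return an instance's resources to the pools for another workload."""
    for k, v in instance.items():
        if k in POOLS:
            POOLS[k] += v

db = compose("database", {"compute": 8, "storage_tb": 50, "fabric_gbps": 40})
decompose(db)  # capacity flows back into the pools
```

Because the declaration is just data, it can be versioned, reviewed and replayed like any other code, which is the practical benefit of SDI.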
Hyper-convergence may ultimately lead enterprises to adopt a serverless computing model. This does not mean getting rid of the hardware on which a service or application runs, but rather the capacity to provide a function as a service in a unified environment.
The first real instantiation of this serverless computing model is Amazon Web Services’ (AWS) Lambda. Launched last year, Lambda lets users run code without needing to provision servers on AWS’s EC2 platform. Once the code is loaded, Lambda takes care of the rest – provisioning the resources to run the workload, and monitoring and managing them to keep them at the right levels. Competing offerings include Microsoft Azure Functions, IBM’s Bluemix OpenWhisk and Google Cloud Functions.
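A function-as-a-service workload is just a handler the platform invokes on demand. The sketch below follows the documented Python handler shape for AWS Lambda (`lambda_handler(event, context)`); the event fields and the order-validation logic are hypothetical, and the final line simulates one invocation locally rather than deploying anything:

```python
# A minimal function-as-a-service handler: the platform, not the user,
# provisions, scales and bills the code per invocation.
def lambda_handler(event, context):
    """Validate an incoming order event and return an HTTP-style response."""
    order_id = event.get("order_id")
    amount = event.get("amount", 0)
    if order_id is None or amount <= 0:
        return {"statusCode": 400, "body": "invalid order"}
    return {"statusCode": 200, "body": f"order {order_id} accepted"}

# Local simulation of one invocation (no AWS account needed).
response = lambda_handler({"order_id": "A-17", "amount": 25.0}, None)
```

Note that the function owns no server state at all, which is what allows the provider to run it anywhere in a unified environment.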
Few companies today are willing to undergo a complete datacentre remodelling. The vast majority are on a phased journey from legacy mission-critical, siloed on-site applications to a distributed and disaggregated datacentre supporting mobile users, ad hoc locations and customers 24/7. Future IT datacentre function will entail less application management and more support of lines-of-business apps development and deployment.
Bernt Ostergaard is a service director at Quocirca.