The advantages of private cloud services are plentiful. But so are the hurdles. Understanding both the promises and the problems from the outset is crucial to using cloud effectively.
Cloud computing services may be less fuzzy now for some IT pros who have worked out what cloud actually means and how it could work for them. But, for many, cloud computing is still a confusing term because of the conflicting messages from vendors, users and analysts. Your best bet is to learn all you can about private cloud services and their problems.
The promise of private cloud is reasonably simple: It has to do with moving from a one-application-per-physical-server (OAPPS) approach to a shared-resources model. This means that IT will be investing in fewer servers, storage systems and network equipment, and will improve business flexibility by reducing functional redundancy.
Those are fine words, but what do they mean in reality? And what are the problems that organisations could face when they move to a private cloud architecture and begin using private cloud services?
Resource-sharing: The real promise of private cloud services
Many organisations are already using a form of shared resources: virtualisation. But IT professionals should never confuse virtualisation and cloud services, even though cloud computing depends on virtualisation.
Taking a group of servers, virtualising them and putting a single application back on top of the virtualised servers will reduce the number of servers required and bring costs down. But virtualisation alone will not deliver the full benefits of private cloud services, which lie in sharing multiple physical resources between many applications and functions.
The real promise of cloud is elasticity -- the ability to use server, storage and network resources across different applications and functions. For example, an application that you use for payroll may have cyclical needs, being run once a week or month, whereas accounts payable may run every third evening.
On an OAPPS model, it is unlikely that overall server use would run above 5% measured across a full 24/7 week, and if clustering is used for availability reasons, you could be looking at 3% or less. Yet at peak times the application could be using 80% of server resources or more, and performance may be compromised whenever the application hits the resource buffer. Such a peak-and-trough situation is not sustainable: it does not support the business effectively, and it wastes energy, space, maintenance effort and skills, as well as software licences.
However, if the payroll and accounts payable applications are able to share the same underlying resources, such as those provided by the elasticity of a private cloud-based infrastructure, then the peaks and troughs can be effectively handled.
The cyclical nature of the applications’ demands means that the two applications will not need the resources at the same time. IT only needs to architect the overall system so that the maximum needs of the hungriest application are met. Even though the applications share the same underlying resources, spikes in demand can then be absorbed without any application hitting the performance buffers.
If IT carries out this exercise across the total application portfolio, it can push resource utilisation beyond 80% without impacting the core performance of the applications. It is important to note, however, that it is only core, in-data-centre performance that is protected.
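The sizing argument above can be sketched in a few lines. This is a minimal illustration using made-up hourly demand profiles for the two example applications; the figures are assumptions chosen so that the peaks do not coincide, not measured data.

```python
# Hypothetical hourly CPU demand (in cores) over one day for two
# cyclical applications whose peaks fall at different times.
# All numbers are illustrative assumptions.
payroll = [2] * 8 + [40, 40] + [2] * 14          # morning payroll run
accounts_payable = [2] * 18 + [35] * 3 + [2] * 3  # evening batch run

# OAPPS model: each application is provisioned for its own peak,
# so total capacity is the sum of the individual peaks.
oapps_capacity = max(payroll) + max(accounts_payable)

# Shared (private cloud) model: provision for the peak of the
# *combined* load instead.
combined = [p + a for p, a in zip(payroll, accounts_payable)]
shared_capacity = max(combined)

avg_load = sum(combined) / len(combined)
print(f"OAPPS capacity needed: {oapps_capacity} cores")
print(f"Shared capacity needed: {shared_capacity} cores")
print(f"Average utilisation on shared capacity: {avg_load / shared_capacity:.0%}")
```

Because the peaks do not overlap, the shared model needs markedly less capacity than the sum of the individual peaks, which is where the utilisation gains come from.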
However, it is easy to be misled and believe that because everything is now running well in the data centre, everything is OK.
Quocirca, a data centre research company, conducted a study which showed that for IT pros, the overall, end-to-end application performance is of critical importance. What’s the point of having blindingly fast data centre performance if the users’ experience is poor, particularly if the organisation is using private cloud services as a means of serving up virtual desktops?
It makes more sense to put into place appropriate tooling that can measure, in real time, how well end-user response-time targets are being met. The tools should also be able to identify issues and resolve them rapidly, before end users are affected.
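In practice, such tooling continuously measures end-to-end response times and compares them with a target. The following is a minimal sketch of that idea; the URL and the two-second SLA threshold are illustrative assumptions, and a real deployment would feed an alerting system rather than run one-off checks.

```python
import time
import urllib.request

# Hypothetical SLA: end users should see a response within 2 seconds.
SLA_SECONDS = 2.0
URL = "https://intranet.example.com/app"  # placeholder endpoint

def measure_response(url: str) -> float:
    """Return the end-to-end response time for one request, in seconds."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=10) as resp:
        resp.read()  # include the full transfer, not just time-to-first-byte
    return time.monotonic() - start

def sla_status(elapsed: float, threshold: float = SLA_SECONDS) -> str:
    """Classify a measured response time against the SLA target."""
    return "breach" if elapsed > threshold else "ok"

# In a real deployment this would run on a schedule and raise alerts:
# print(sla_status(measure_response(URL)))
```

Measuring from where the users sit, rather than inside the data centre, is the point: it catches the cases where the data centre is fast but the user experience is still poor.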
Overcoming licensing challenges of private cloud services
The next area to consider before using private cloud computing services is licensing.
Because resources are elastic in a true cloud environment, it is easy to fall into a position where many unused -- but live -- virtual machines are enabled across a cloud. Running a software audit could reveal that many of these machines are running outside of current licensing agreements.
IT pros should use cloud-aware licensing management software that can make sure licensing requirements are always met. The software should be tied into the tools that manage virtual machines and appliances so it can identify systems that are unused and flag them for administrative intervention.
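The core of such an audit is straightforward to sketch. The snippet below is a hand-rolled illustration, not any real cloud-management API: the record fields, product names and entitlement counts are all assumptions.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Illustrative VM inventory record -- field names are assumptions.
@dataclass
class VM:
    name: str
    product: str           # licensed software running on the VM
    last_active: datetime  # last time the VM served real work

# Hypothetical entitlements: product -> number of licensed instances.
ENTITLEMENTS = {"dbserver": 2, "erp": 1}

def audit(vms, entitlements, idle_after=timedelta(days=30), now=None):
    """Return (over_licensed_products, idle_vm_names) for admin review."""
    now = now or datetime.now()
    counts = {}
    idle = []
    for vm in vms:
        counts[vm.product] = counts.get(vm.product, 0) + 1
        if now - vm.last_active > idle_after:
            idle.append(vm.name)
    over = [p for p, n in counts.items() if n > entitlements.get(p, 0)]
    return over, idle
```

Running the audit regularly against the virtual-machine manager's inventory surfaces both products running beyond their entitlement and the live-but-unused machines that cause licence breaches in the first place.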
Private clouds help move workloads
Another promise of private cloud services is that workloads can be shifted away from an in-house facility. With private cloud services, IT can move the workloads from the private data centre to a shared facility (colocation) or even to a public cloud if it fits with a company’s IT and business strategy.
Currently, however, this is proving difficult for many early cloud adopters as this promise comes with a problem. Because of a lack of significant and established cloud standards, the majority of private cloud services have been built against software stacks that are not compatible with the stacks used in public clouds. As such, workloads cannot be moved between the two without rewriting applications and porting capabilities.
Potential adopters of private cloud services should look to stacks from cloud providers that are already talking about private/public cloud interoperability and providers that demonstrate a strong commitment to open standards.
But, if for any reason a business selects a software stack that does not provide such capabilities, it must find out what services or tools the provider can offer for workload interoperability with the major public cloud stacks.
Getting past the security hurdle
Finally, there is the perceived problem of cloud security. Misperceptions here are causing potential adopters to focus on the wrong areas. One such misperception concerns control of data in the cloud: because data has to cross from within the organisation onto externally held (and often shared) hardware, it is assumed that the organisation's IT is relinquishing control of the data, and with it, its security.
However, in reality, by moving to an information-centric security approach, organisations can be more certain about security in the cloud environment. In addition, they can open up the information flow across their complete value chain, enabling greater business opportunities. I call such an approach a “compliance-oriented architecture”.
Cloud is the future, with its promise of cost savings and business flexibility. Organisations that do not adopt cloud will see their IT costs balloon in comparison with those of competitors that do.
Nevertheless, adopting private cloud services without due consideration will just replace one chaotic environment with another. So, IT pros must ensure that the main areas -- cost, scalability, licensing and security -- are fully considered and covered before taking the plunge.
A vast majority of businesses are likely to use a hybrid cloud infrastructure -- a mix of private and public cloud services. Ensuring that the journey to a hybrid cloud is built on a secure, efficient and flexible environment requires solid, up-front planning and due diligence.
Clive Longbottom is a Service Director at UK analyst Quocirca Ltd and a contributor to SearchVirtualDataCentre.co.uk
This was first published in July 2012