Get on the path to utility computing

By adopting a step-by-step approach to utility computing, organisations can streamline business processes without incurring prohibitive start-up costs

A lot has been written about utility computing, covering topics such as web services, service-oriented architecture, blade computing and grids. As a means of making better use of existing hardware resources and creating a more flexible platform to meet ongoing changes in business processes, the promise of utility computing is difficult to beat.

However, research carried out by Quocirca suggests that many organisations are struggling with the process of moving to a utility computing platform. Organisations perceive that there is a need to rip out and replace existing systems, moving from separate silos of functionality to a single virtualised base in one move.

This type of change is widely perceived as being prohibitively expensive, as existing software and hardware investments will need to be replaced. Additionally, there may be an impact on application availability as system updates are carried out.

As a result of these concerns, many utility computing projects become mired in uncertainty surrounding what the real aims are. Business leaders involved in utility projects are often left disillusioned with slow progress.

Dead in the water

The high cost of comprehensive system change and the possibility of reducing application availability mean that many utility computing projects are essentially dead in the water before they have begun.

For most large organisations, the rip-and-replace approach to IT was discarded in the late 1980s. The services that enterprise IT systems provide are simply too central to business operations for them to be offline for significant periods of time.

Today, IT is required to be a process facilitator. This has led to a reduction in application buying, as organisations increasingly seek more functional, service-based systems, such as virtualised platforms.

Often, these approaches are touted as being one-stop solutions to the various challenges of enterprise IT. Diagrams show fully service oriented functional clouds, based on highly virtualised hardware-resource pools. All of this is great in theory, but who pays to get the basic infrastructure ready for such an end point?

No business wants to pay for a major infrastructure project simply to make one of its processes more effective. Therefore, the job for the IT department is to help the business identify how the main part of the system can be implemented as a set of services based on a utility platform that makes use of existing hardware.

This process should not be too difficult as the majority of major enterprise applications are capable of having internal functions made visible as web services through the use of specific connectors.
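This connector approach can be illustrated with a minimal sketch. The function name, data and endpoint below are hypothetical assumptions, not a real application's interface: a legacy billing lookup is wrapped by a thin WSGI connector so it can be called as a web service without rewriting the underlying application.

```python
import json
from urllib.parse import parse_qs

# Hypothetical legacy function buried inside an existing billing application.
def get_invoice_total(customer_id):
    invoices = {"C001": 1250.00, "C002": 430.50}  # stand-in for real data
    return invoices.get(customer_id, 0.0)

# Thin connector exposing the legacy function as an HTTP/JSON service.
def billing_service(environ, start_response):
    params = parse_qs(environ.get("QUERY_STRING", ""))
    customer_id = params.get("customer", [""])[0]
    body = json.dumps({"customer": customer_id,
                       "total": get_invoice_total(customer_id)}).encode()
    start_response("200 OK", [("Content-Type", "application/json")])
    return [body]
```

In practice the connector would be hosted by whatever server infrastructure is already in place; for a quick local trial, Python's standard `wsgiref.simple_server.make_server("", 8000, billing_service)` is enough.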

Using this approach, new functionality can be implemented as utility services in a cost-effective manner, and the most can be made of functionality in existing applications. Areas such as billing engines, customer records management, and many workflows will already be found within existing applications. The line of business only needs to pay for its own requirements, ensuring its continued buy-in.

As each project is implemented, the IT department needs to ensure that anything that has already been implemented as a service is re-used rather than being recreated. IT must also ensure that only one version of any functional service is used, doing away with functional redundancy and associated problems such as multiple data records and reporting-structure confusions.
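One lightweight way for an IT department to enforce this single-version rule is a service registry that refuses duplicate registrations. The sketch below is illustrative only; the class and service names are assumptions rather than any particular product's API.

```python
class ServiceRegistry:
    """Tracks one authoritative implementation per functional service."""

    def __init__(self):
        self._services = {}

    def register(self, name, fn):
        # Refuse to recreate a service that an earlier project delivered,
        # preventing functional redundancy and duplicate data records.
        if name in self._services:
            raise ValueError(f"Service '{name}' already exists - reuse it")
        self._services[name] = fn

    def lookup(self, name):
        # Later projects reuse the single existing implementation.
        return self._services[name]


registry = ServiceRegistry()
registry.register("customer-records", lambda cid: {"id": cid})
customer_lookup = registry.lookup("customer-records")  # reused, not rebuilt
```

Attempting to register a second "customer-records" service would raise an error, making reuse the path of least resistance for each new project.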

Streamlined environment

With this approach, the functional landscape is rationalised as time goes by, and the various islands of functionality rapidly knit together as projects are implemented, taking the organisation closer to the desired solution. The basic IT structure changes from an inefficient application-focused platform to a more streamlined utility environment.

The main area to address when implementing such changes is the management of the infrastructure. Existing tools may be good enough for asset identification, measurement and control, but virtualisation management and functional provisioning will need review, and investment may well be required to ensure that the utility areas of the infrastructure work to the best of their capabilities.

Most organisations realise that their IT infrastructure is a highly dynamic environment, and that yesterday's end vision is only today's starting point. The aim has to be to regard each project as a set of small steps, rather than a single giant stride, and to keep reviewing where you believe the path is taking you.
