Putting your trust in another company to supply your computing needs remotely over the internet sounds daunting, but, managed well, it can deliver real benefits.
Hardware as a Service is in its infancy, but the core technology has been developing since the early years of computing. Back in the 1960s and 1970s we had time-sharing on mainframes and minicomputers, which gave way to managed services. The late 1990s saw the birth of the ill-fated application service providers (ASPs), which failed to survive when the internet bubble burst.
Now managed services and outsourcing are evolving into cloud computing. The drivers of the market are improved communication channels, a pay-per-access pricing model and the efficiencies provided by technologies such as virtualisation and federated storage. IT has become a utility that can be turned up or down as usage fluctuates. Cloud computing also smooths out the spikes that can occur unexpectedly in IT budgets when a server needs replacing or another problem pops up to drain resources.
Ted Chamberlin, research director at Gartner, says, "It's pretty evident that we're still at the infancy of Hardware as a Service but it is related to managed co-location, a very mature offering. The majority of potential clients are server-huggers; they refuse to lose their grip on their hardware. Co-location over the cloud removes the worry about the biggest capital expenditure issue - hardware procurement refresh."
Issues such as staffing, licensing, depreciation, maintenance and back-up become the service provider's concern, leaving the internal team to concentrate on the constancy of the data flow assured by the service level agreement (SLA). IT managers will have to find the best offer from established providers such as IBM, HP and Fujitsu Siemens, competing with new entrants in the form of Amazon and Google, and new services from AT&T, BT, Colt and other telecoms companies. Legal issues, such as compliance and the physical location of data, will also have to be considered, alongside deciding which services can be entrusted to the cloud.
Joe Duran, Primergy product manager at Fujitsu Siemens Computers, explains, "The measurement of what was used and how well that was provided will be key but there are different charge models depending on the type of requirement you have. I don't think there's going to be one absolute set of rules, just as there's no single set of rules for how you buy your IT equipment."
Current suppliers of managed services are adapting to cloud operations and working out what charge systems are suitable. Ian Brooks, HP's head of internet and mobile strategy, says, "If it's a storage service it's per Gbyte, or for computational services it could be billing per CPU-hour or per node server if you have a clustered application.
"Longer tenure could be based on service levels. For example, a smaller company that can do without e-mail for half a day would agree to a basic service level 1 but a bigger company, where e-mail is critical, would pay a higher premium for a guaranteed service at service level 3 or 4.
"In the longer term, where you get very large multi-clustered environments, you may well move onto a market dynamic where you buy certain periods of time - similar to the futures markets for buying airline seats or coffee beans."
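The charge models Brooks describes can be sketched as a simple billing calculation. The rates, currency and SLA multipliers below are purely illustrative assumptions for the sake of the example, not any provider's actual pricing.

```python
# Illustrative sketch of usage-based cloud billing: per-Gbyte storage,
# per-CPU-hour compute, and a premium for higher guaranteed service levels.
# All figures are hypothetical, not real provider rates.

STORAGE_RATE_PER_GB = 0.15   # assumed cost per Gbyte stored per month
CPU_RATE_PER_HOUR = 0.10     # assumed cost per CPU-hour consumed
SLA_MULTIPLIER = {1: 1.0, 2: 1.25, 3: 1.5, 4: 2.0}  # higher tiers pay a premium

def monthly_bill(storage_gb: float, cpu_hours: float, sla_level: int) -> float:
    """Combine storage and compute charges, scaled by the agreed SLA tier."""
    base = storage_gb * STORAGE_RATE_PER_GB + cpu_hours * CPU_RATE_PER_HOUR
    return round(base * SLA_MULTIPLIER[sla_level], 2)

# A smaller firm on a basic SLA versus a larger firm paying for guarantees.
print(monthly_bill(500, 200, sla_level=1))     # 95.0
print(monthly_bill(5000, 2000, sla_level=4))   # 1900.0
```

The point of the sketch is that the metered quantities (Gbytes, CPU-hours) and the service-level premium are separate dials, which is why, as Duran notes above, no single set of rules fits every customer.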
The ability to build and tear down new systems without interfering with internal hardware is making Hardware as a Service attractive. A prime example is developing private clouds - internal systems that use the cloud over the corporate network rather than openly across the internet. IBM has opened several cloud computing centres worldwide for customers working out their cloud strategies.
Dennis Quan, IBM's CTO for high performance on demand solutions, says, "In February, we announced our Design and Implementation for Cloud Test Environments service because most of the work that goes into getting a cloud service up and running is the final test and pre-production phase.
"Resources can be made available quickly by allowing access to those facilities running on our hardware. One of the benefits of the cloud approach is that you only pay for what you use and the service provider takes care of the details of the management of those underlying physical assets and the growth required to sustain the client base."
Apart from the trust required, a secondary consideration is speed. The maximum speed attained over the internet currently stands at around 9Gbits/sec under ideal conditions. Naturally, this area is the focus of much research, and an Australian team at the University of Sydney has developed a chip that promises a theoretical speed of over 600Gbits/sec, but it won't be available commercially for at least five years.