Google Compute Engine (GCE) was unavailable for two hours on 19 February 2015 due to a network problem.
According to the company’s dashboard, GCE experienced connectivity issues in multiple availability zones between 6.59am and 9.00am GMT.
The downtime affected the start of the working day for European users. One user tweeted: "Google Cloud was down, but it's back for me (Europe). So far so good. I'm still looking forward to a formal resolution of the issue."
Michael Allen, solutions vice-president at Dynatrace, said many organisations have moved IT infrastructure and services to the cloud to achieve better economies of scale, but the drawback is that an outage such as the one Google experienced now has the potential to cripple an organisation.
"Technology has become critical to success and business continuity in today’s digital service economy, so any IT outage or slowdown carries the risk of cutting off vital revenue streams and creating lasting damage to customer relationships," he said.
In November 2014, Microsoft experienced a major outage in a storage component on Azure, which affected the availability of its OneDrive cloud storage platform and Office 365.
As Computer Weekly has previously reported, no cloud service can guarantee 100% uptime, so applications need to be engineered to compensate for failure.
At the time, Ovum principal analyst Michael Azoff noted: "Even within a single cloud provider we see examples where a business user has no failsafe strategy of balancing across different datacentres. Of course, that does not help where an error affects the total service."
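Engineering applications to compensate for failure, as Azoff describes, typically means retrying a failed call and then failing over to another datacentre or region. The sketch below is a minimal, hypothetical illustration of that pattern; the region names, `call_region` stub and `call_with_failover` helper are illustrative assumptions, not any real Google API.

```python
import time

# Hypothetical region names, for illustration only.
REGIONS = ["europe-west1", "us-central1"]

class RegionDown(Exception):
    """Raised when a simulated region is unreachable."""

def call_region(region, healthy):
    """Simulated service call: fails if the region is not in the healthy set."""
    if region not in healthy:
        raise RegionDown(region)
    return f"response from {region}"

def call_with_failover(regions, healthy, retries=2, backoff=0.01):
    """Try each region in order, retrying with exponential backoff,
    then failing over to the next region before giving up."""
    for region in regions:
        for attempt in range(retries):
            try:
                return call_region(region, healthy)
            except RegionDown:
                time.sleep(backoff * (2 ** attempt))
        # Retries for this region exhausted; fall through to the next one.
    raise RuntimeError("all regions unavailable")

# With the primary (European) region down, the call still succeeds
# by failing over to the secondary region.
print(call_with_failover(REGIONS, healthy={"us-central1"}))
# → response from us-central1
```

As Azoff notes, this only helps when the failure is confined to part of the provider's estate; a fault affecting the total service defeats any single-provider failover strategy.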
One Computer Weekly reader recently observed that the technology underlying the delivery of cloud services is more advanced and complex than traditional IT.
"Seeing into the cloud and understanding the capacity needed and delivered at the actual platform level will be critical," he said.