Cloud-first? Cabinet Office seeks £700m datacentre partner for 'top secret' data


The Cabinet Office and GDS (Government Digital Service) have issued a service contract notice seeking a private partner to provide datacentre colocation services for UK government information classified as "official", "secret" and "top secret".

The government has earmarked up to £700m for the four-year datacentre infrastructure agreement.

"The operating environment is to be capable of housing computer infrastructure that initially handles information with UK Government security classification 'official' but there may be a future requirement for Data Centre Colocation Services that handle information with 'secret' and 'top secret' security classification," the government document read. "The provision of secret and top secret [information] would be subject to separate security accreditation and security classification," it added.

The facilities partner must be able to subscribe for a majority shareholding (up to 75% less one share) in the new private limited company established by the Cabinet Office to provide Data Centre Colocation Services - DatacentreCo.

But under the government's Cloud First policy, many existing and new applications will move to the public cloud over the next few years. The policy, announced by the Cabinet Office last year, mandates cloud as the first option to be considered for all new government IT purchases.

The potentially £700m datacentre will instead host 'legacy' applications "not suitable or not ready for cloud hosting or for which conversion to cloud readiness would be uneconomic," the document read.

Cabinet Office, 70 Whitehall, London (next to Downing Street) (Photo credit: Wikipedia)

The Cabinet Office wants the full spectrum of datacentre services - rack space, power, network and security. The datacentre hosting the official and secret information will be spread across an area of 350 sq metres and house 150 standard 42U racks. This sounds like a modular datacentre requirement.

And it wants "at least two separate [facility] locations subject to appropriate minimum separation requirements".  

Also on the government's wish-list are compliance with security requirements, scalability, a proven track record over the last three years, performance certificates and specific latency requirements (less than 0.5 milliseconds) - to cater to the initial users: the Department for Work and Pensions, the Home Office and the Highways Agency.

The main aim is a datacentre facility that is high-quality, efficient, scalable and transparent, delivered on a service-based ('utility') model - basically cloud-like, but not the cloud.

How long do you reckon we'll have to wait before the government declares "serious over-capacity in datacentres" like it did in 2011?

Cloud's Hollywood moment - as a villain in Cameron Diaz's Sex Tape


For those still wondering whether cloud computing is really mainstream: even Hollywood thinks so. Cameron Diaz's rom-com Sex Tape, releasing next Friday, is all about the dangers of the cloud.

Cameron Diaz (Photo credit: Wikipedia)

The movie stars Diaz and Jason Segel as a couple making a sex tape in an attempt to spice up their boring lives. The video inevitably makes it to the cloud through Segel's iPad, on which it was filmed. The movie tracks how the couple desperately tries to get the video off the cloud while embarrassingly juggling comments from their parents, bosses and even the mailman, who all see it.

Here's some of the dialogue between Diaz (as Annie) and Segel (as Jay):

Annie: (walks in) Honey, that sounds familiar, is that our...

Jay: You know the Cloud?

Annie: Stares ominously before yelling F@#$.

Jay: It went up! It went up to the cloud

Annie: And you can't get it down from the cloud?

Jay: Nobody understands the cloud. It's a f#$@ing mystery.

Whether or not they succeed in wiping their content off the cloud, we'll know only on 18 July. But it looks like a big struggle, with Jay and Annie taking desperate measures like nicking devices belonging to their friends and families, and even breaking network infrastructure, to get the tape off the cloud.

Maybe Jay and Annie are showing, in a satirical manner, how the cloud is a one-way street - easy to get things up (even inadvertently) but damn hard to get them off!

Here's the trailer of Sex Tape starring Cameron Diaz, Jason Segel and The Cloud:

 


Amazon debuts Zocalo to grab market share from SharePoint, Google Drive, Box and Dropbox


Almost 13 years after Microsoft launched the first version of SharePoint, Amazon has launched its own file-sharing and collaboration tool, Zocalo, at the AWS Summit in New York today. Some AWS Summit followers on Twitter have billed Zocalo as a Google Drive and Dropbox killer.

Yes, it's called Zocalo which, according to Wikipedia, is the main plaza or meeting-point in the heart of the historic centre of Mexico City.

A late entrant in the document-sharing space (Dropbox took off in 2007), Amazon will offer Zocalo for $5 per user per month for 200GB of storage (Dropbox costs $15), or free (with only 50GB) for users of Amazon WorkSpaces - its desktop computing service in the public cloud.

According to Amazon, "document sharing and collaboration is a challenge in today's enterprise". Take that SharePoint and Google Drive or even Office 365.

Zocalo has some pretty nifty features, such as multi-device support, offline usage and Word and PowerPoint collaboration, and it integrates with existing corporate directories (Active Directory). But there's a catch, and it's about vendor lock-in - users will first have to put their data into Amazon S3.

Mexico City Zocalo (Photo credit: Wikipedia)

Will Zocalo really tempt users away from Evernote, SharePoint, Google Drive, Box and Dropbox? I don't know about that, but it is a pretty clear indication of SaaS, PaaS and IaaS convergence in the cloud segment - Zocalo is a purely SaaS service from a primarily IaaS provider. And it also proves how Amazon wants to provide everything that enterprise IT needs (scary?).

Moving to the cloud purely to save costs? Think again


Organisations turning to the cloud with the sole intention of saving costs are the least happy with their cloud infrastructure and the most likely to give up on cloud adoption.

A recent Cloud Industry Forum study found that in the UK, large enterprises showed the highest rates of adoption, at just over 80%, followed by small and medium-sized businesses. But the public sector's cloud adoption lagged at around 68%.

The study also explored the drivers of cloud adoption and found that the flexibility of cloud as a delivery model was the primary reason for adoption in the private sector while operational cost savings were the main motive for the public sector.

It reminds me of an interesting conversation I had at Cloud World Forum a month ago with Photobox CIO Graham Hobson. Photobox was one of the early adopters of public cloud services - AWS. "When we started, cloud cost was just a fraction (20%) of our total IT spend. Today it is almost equal and I won't be surprised if our cloud costs overtake our on-premises spend soon," Hobson told me.

But that doesn't worry Hobson. In fact he says that public cloud has yielded several benefits in terms of scalability, IT responsiveness and efficiencies for Photobox. "If I was starting a company today, I would have adopted more cloud services than I did a few years ago," he said.

Cloud services operate on a pay-as-you-go model, and although that may look attractively low-cost at the beginning, if your IT requires constantly high capacity and performance, your cloud bill can soar.

As I have argued before, cost savings on the cloud come over time as businesses get the hang of capacity management and scalability, but the main aim of cloud use should be to grow the business and enable new revenue-generating opportunities.

Just as Netflix or CERN or BP did.

The main advantage of cloud computing isn't always cost saving. If anything, cost saving is usually the byproduct of IT efficiencies found by running IT in the cloud.

The super world of supercomputers


Last Thursday, I met AWS to learn how users are building supercomputers in the cloud and also to see one being created right in front of me!

Unfortunately, the demo didn't succeed. I don't know if it was buggy code or something else, but Ian Massingham, technology evangelist at AWS, wasn't able to create a supercomputer, and he was as disappointed about it as I was, if not more.

But Ian had created one the previous evening -- "Just ran up my first HPC cluster on AWS using the newly released cfncluster demo," read Ian's tweet from the previous day. The link to a demo video AWS sent me subsequently also showed how to get started on cfncluster in 5 minutes.

Amazon cfncluster is a sample code framework, available for free, to help users run high-performance computing (HPC) clusters on AWS infrastructure.
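For context, the typical cfncluster workflow is only a handful of commands. Below is a minimal sketch of driving that flow from Python; it assumes cfncluster has been installed via pip and AWS credentials are already configured, and the cluster name "demo-hpc" is a made-up example.

```python
# A minimal sketch of the typical cfncluster workflow, driven from Python.
# Assumes cfncluster is installed (pip install cfncluster) and AWS credentials
# are configured; the cluster name "demo-hpc" is a made-up example.
import subprocess

def cfn(*args):
    """Run a cfncluster CLI command and fail loudly if it errors."""
    subprocess.run(["cfncluster", *args], check=True)

# One-off interactive setup: AWS region, key pair, instance types, cluster size.
cfn("configure")

# Launch the cluster - behind the scenes a CloudFormation stack creates a head
# node, compute nodes and shared storage.
cfn("create", "demo-hpc")

# ...log in to the head node and submit jobs to the scheduler, then tear the
# cluster down so you stop paying for idle compute.
cfn("delete", "demo-hpc")
```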

I got to hear how enterprise customers, pharmaceutical companies, scientists, engineers and researchers are building cluster computers on AWS to do some pretty serious tasks, such as research on medicine and assessing the financial standing of companies, all while saving money (my feature article on how enterprises are exploiting public cloud capabilities for HPC will appear on the ComputerWeekly site soon).

And having spent the last two days at the International Supercomputing Conference (ISC 2014) in Leipzig, I feel that high-performance computing, hyperscale computing and supercomputers are the fastest-growing subset of IT. HPC is no longer restricted to science labs - enterprises such as Rolls-Royce and Pfizer are building supercomputers, to analyse jet engine compressors and to research diseases respectively.

Tianhe-2 (Photo credit: sam_churchill)

Take Tianhe-2, a supercomputer developed for research by China's National University of Defense Technology, which retained its position as the world's most powerful supercomputer. It has 3,120,000 cores, delivers a performance of 33.86 petaflops (quadrillions of calculations per second) and draws 17,808kW of power. Or the US Department of Energy's Titan - a Cray XK7 system running more than 560,000 cores - or any of the UK's top 30 supercomputers. They are all mind-boggling in their size, compute performance and uses.

Whether on the cloud or on-premises, I didn't hear a single HPC use case in the last two days that wasn't cool or awe-inspiring. Imperial College London, Norway's Tromsø University, the US Department of Energy, Edinburgh University and AWE all use supercomputers to do research and computation around things that matter to you and me. As one analyst told me, "From safer cars to shinier hair, supercomputers are used to solve real-life problems".

Now I know why Ian was having a hard time picking his favourite cloud HPC project - they're all cool.








What Cloud World Forum 2014 tells us about cloud


The sixth annual Cloud World Forum wrapped up yesterday and here's what the event tells us about the state of cloud IT in the enterprise world.

OpenStack is gaining serious traction

OpenStack's big users and providers claimed the cloud technology is truly enterprise-ready because of its freedom from vendor lock-in and its portability features. Big internet companies such as eBay are running mission-critical workloads on OpenStack clouds. Even smaller players, such as German company Centralway, are using the open source cloud to power their infrastructure when TV adverts create load peaks.

HP says it is "all in" when it comes to OpenStack. It is investing over $1bn in cloud-related products and services, including an investment in the open source cloud. Red Hat has just acquired eNovance, a leader in OpenStack integration services, for $95m. Rackspace and VMware are ramping up their OpenStack services, and IBM has built its cloud strategy around OpenStack.

A skills shortage around building OpenStack APIs into a cloud infrastructure seems to be the only big barrier hindering its wide-scale adoption.

Rise of the cloud marketplace

The cloud marketplace is fast becoming an important channel for cloud transactions. According to Ovum analyst Laurent Lachal, Jaspersoft gained 500 new customers in just six months through the AWS Marketplace. Oracle, Rackspace, Cisco, Microsoft and IBM have all recently launched cloud services marketplaces.

What does it mean for users? Browsing the full spectrum of cloud services will become as easy for customers as browsing apps in the Apple App Store or Google Play. "As cloud matures, established marketplace seems like a logical evolution. It is a new trend but it gives users a wealth of options in a one-stop-shop kind of way," said Lachal.

Vendor skepticism on the rise

Bank of England CIO John Finch, in his keynote, warned users of "pesky vendors" and cloud providers' promises around "financial upside of using the cloud". Legal experts and top enterprise users urged delegates to understand the SLAs and contract terms very clearly before shaking hands with the cloud providers.

Changing role of CIOs

It became apparent at the event that cloud is leading to the rise of shadow IT, and that CIOs must take on the role of technology broker, educating enterprise users on compliance and security. Technology integration, IT innovation and service brokerage are some of the skills CIOs need to develop in the cloud era.

Questions around compliance, data protection, security on the cloud remain unanswered

Most speakers focusing on the challenges around cloud adoption mentioned security, data sovereignty, privacy, compliance and vendor-friendly SLAs as the biggest barriers.

Not all enterprises using cloud are putting mission-critical apps on public cloud

Lack of trust seems to be the main reason why enterprises are not putting mission-critical workloads on public cloud. Bank of England's Finch just stopped short of saying "never" to public cloud. Take Coca-Cola bottling company CIO Onyeke Nchege, for instance - he's planning to put mission-critical ERP systems on the cloud, but a private cloud. eBay runs its website on an OpenStack cloud - but a private version it built for itself. One reason customers cite is that mission-critical apps tend to be more static and don't need fast provisioning or high scalability.

"It is not always about the technology though. In our case our metadata is not sophisticated enough for us to take advantage of public cloud," said Charles Ewan, IT director at the Met Office.

But there are exceptions, such as AstraZeneca (which runs payroll workloads on public cloud) and News UK, which manages its flagship newspaper brands on the AWS cloud.

Urgent need for cloud standards in the EU

The lack of standards and regulations around cloud adoption, data protection and sovereignty, and cloud exit strategies is making cloud adoption messy. Legal technology experts urged users to be "wise" in their cloud adoption until such time as regulations are developed. But regulators and industry bodies, including the European Commission, the FCA and the Bank of England, are inching closer to developing guidelines and regulatory advice to protect cloud users.

Everyone's trying to get their stamp on the cloud

A more crowded than ever Cloud World Forum saw traditional heavyweights (IBM, HP, Dell, Cisco) rub shoulders with a slew of new, smaller entrants as well as public cloud poster boys such as AWS, Google and Microsoft Azure. Technology players ranging from chip providers to sellers of datacentre cooling services were all there to claim their place in the cloud world.

Why do some cloud projects fail?


I was at a roundtable earlier this week discussing the findings of an enterprise cloud study. The findings are embargoed until June 24, but what struck me most were the numbers around failed or stalled cloud projects.

And that led me to discuss it more with industry insiders. Here are a few reasons why cloud projects might fail:

  • Using cloud services but not using them to address business needs

One joke doing the rounds in the industry goes a bit like this - the IT head tells his team, "You lot start coding, I'll go out and ask them what they want".

But the issue of not aligning business objectives with IT is still prevalent. The latest study by Vanson Bourne found that as many as 80% of UK CIOs admit to significant gaps between what the business wants and when IT can deliver it. While the average gap cited was five months, it ranged from seven to 18 months.

  • Moving cloud to production without going through the SLAs again and again. And again

If one looks at the contracts of major cloud providers, it becomes apparent that the risk is almost always pushed on to the user, not the provider - be it around downtime, latency, availability or data regulations. It is one thing to test cloud services and quite another to put them into actual production.

  • Hasty adoption

Moving cloud to production hastily, without sufficiently testing and piloting the technology and without planning management strategies, will also lead to failure or disappointment with cloud services.

  • Badly written apps

If your app isn't configured correctly, it shouldn't be on the cloud. Just migrating badly written apps on to the cloud will not make them work. And if you are not a marquee customer, your cloud provider will not help you with it either.

  • Being obsessed with cost savings on the cloud

One expert put it this way - those who adopt cloud for cost savings fail; those who use it to do things they couldn't do in-house succeed. Cost savings on the cloud come over time, as businesses get the hang of capacity management and scalability, but the primary reason for cloud adoption should be to grow the business and enable new revenue-generating opportunities. For example, News UK adopted cloud services with an aim to transform its IT and manage its paywall strategy. Its savings were a byproduct.

  • Early adoption of cloud services... Or leaving it too late

Ironic as it may sound, if you are one of the earliest adopters of cloud, chances are that your cloud is an early iteration and may not be as rich in features as the newer versions. It may even be more complex than current cloud services. For instance, there is a lot of technical difference between the pre-OpenStack Rackspace cloud and its OpenStack version.

If you've left it too late, then your competitors are ahead of the curve and other business stakeholders start to influence IT's cloud buying decisions.

  • Biased towards one type of cloud

Hybrid IT is the way forward. Being too obsessed with private cloud services will lead to deeper vendor lock-in, while adopting too much public cloud will lead to compliance and security issues. Enterprises must not develop a private cloud or a public cloud strategy, but use the cloud elements that best solve their problems. Take Betfair, for instance: it uses a range of different cloud services - the AWS Redshift data warehouse service for analytics and VMware vCloud for automation and orchestration.

  • Relying heavily on reference architecture

Cloud services are meant to be unique to suit individual business needs. Replicating another organisation's cloud strategies and infrastructure is likely to be less helpful.

  • Lack of skills and siloed approach

Cloud may indeed have entered mainstream computing, but the success of cloud directly depends on the skills and experience of the team deploying it. Hiring engineers and cloud architects with experience on AWS to build a private cloud may backfire. Experts have also called on enterprises to embrace DevOps and cut down on the siloed approach to succeed in the cloud. British Gas hired IT staff with the right skills for its Hive project, built on the public cloud.

  • Viewing it as in-house datacentre infrastructure or traditional IT

Cloud calls for new ways of IT thinking. Just replacing internal infrastructure with cloud services but using the same IT strategies and policies to govern the cloud might result in cloud failure.

There may be other enterprise-related problems, such as lack of budget, cultural challenges or legacy IT, that result in a failed or stalled cloud project, but more often it is the strategy (or the lack of it) that is to blame rather than the technology.

My 10 minutes with Google's datacentre VP


Google's Joe Kava speaking at the Google EU Data Center Summit (Photo credit: Tom Raftery)

At the Datacentres Europe 2014 conference in Monaco, I had a chance to not just hear Google's datacentre VP Joe Kava deliver a keynote speech on how the search giant uses machine learning to achieve energy efficiency but also to speak to him individually for 10 minutes.

Here is my quick Q&A with him:

What can smaller datacentre operators learn from Google's datacentres? There's a feeling among many CIOs and IT teams that Google can afford to pump millions into its facilities to keep them efficient.

Joe Kava: That attitude is not correct. In 2011, we published an exhaustive "how to" instruction set explaining how datacentres can be made more energy efficient without spending a lot of money. We can demonstrate it through our own use cases. Google's network division, which is the size of a medium enterprise, had a technology refresh and, by spending between $25,000 and $50,000 per site, we could improve its high-availability features and improve its PUE from 2.2 to 1.5. The savings were so high that they yielded a payback of the IT spend in just seven months. You show me a CIO who wouldn't like a payback in seven months.
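To put those PUE numbers into perspective, here is a rough back-of-the-envelope sketch of how a PUE improvement translates into an energy saving and a payback period. The IT load, electricity price and refresh cost below are entirely hypothetical illustrations, not Google's figures.

```python
# Back-of-the-envelope payback calculation for a PUE improvement.
# All inputs are hypothetical illustrations, not Google's figures.

it_load_kw = 200            # constant IT load of the site
pue_before, pue_after = 2.2, 1.5
price_per_kwh = 0.10        # electricity price in $ per kWh
refresh_cost = 50_000       # one-off spend per site (top of the quoted range)

hours_per_month = 24 * 365 / 12

def monthly_energy_cost(pue):
    # PUE = total facility power / IT power, so facility power = IT load x PUE
    return it_load_kw * pue * hours_per_month * price_per_kwh

saving = monthly_energy_cost(pue_before) - monthly_energy_cost(pue_after)
print(f"Monthly saving: ${saving:,.0f}")
print(f"Payback: {refresh_cost / saving:.1f} months")
```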

Are there any factors, such as strict regulations, that are stifling the datacentre sector?

It is always better for an industry to regulate itself than have the government do it. It fosters innovation. There are many players in the industry that voluntarily regulate themselves in terms of data security and carbon emissions. One example is how since 2006, the industry has strongly rallied together behind the PUE metric and has taken energy efficiency tools quite to heart.

What impact is IoT having on datacentres?

Joe Kava: IoT (internet of things) is definitely having an impact on datacentres. As more volumes of data are created and as mass adoption of the cloud takes place, naturally it will require IT to think about datacentres and its efficiency differently. IoT brings huge sets of opportunities to datacentres.

What is your one piece of advice to CIOs?

You may think I am saying this because I am from Google but I strongly feel that most people that operate their datacentres shouldn't be doing it. That's not their core competency. Even if they do everything correctly and even if they have a big budget to build a resilient, highly efficient datacentre, they cannot compete in terms of the quick turnaround and the scalability that dedicated third-party providers can offer.

Tell us something about Google's datacentres that we do not know

It is astounding to see what we can achieve in terms of efficiency with good old-fashioned testing and development and diligence. The datacentre team constantly questions the parameters and constantly pushes the boundaries to find newer ways to save money with efficiency. We design and build a lot of our own components and I am not just talking about servers and racks. We even design and build our own cooling infrastructure and develop our own components of the power architecture that goes into a facility.

It is a better way of doing things.

Are you building a new datacentre in Europe?

(Smiles broadly) We are always looking at expanding our facilities.

How do you feel about the revelations of the NSA surveillance project and how it has affected third-party datacentre users' confidence?

It is a subject I feel very strongly from my heart but it is a question that I will let the press and policy team of Google handle.

Thank you Joe

Thank you!

 



No such thing as absolute freedom from vendor lock-in, even in open source, proves Red Hat


OpenStack is a free, open source cloud computing platform that promises users freedom from vendor lock-in. When it was alleged that Red Hat won't support customers who use other versions of OpenStack cloud on its Linux operating system, its president Paul Cormier passionately shared the company's vision of open source but steered clear of stating wholeheartedly that it WILL support its users no matter what version of OpenStack they use.

Any CIO worth his salt will admit that support services can be a deal-breaker when deciding to invest in technology.

Red Hat customers opt for the vendor's commercial version of Linux (RHEL) over free Linux versions because they want to use its support services and make their IT enterprise-class. This has helped Red Hat build a $10bn empire around Linux and become the most dominant provider of commercial open source platforms.

OpenStack (Photo credit: Wikipedia)

So when Cormier says - "Users are free to deploy Red Hat Enterprise Linux with any OpenStack offering, and there is no requirement to use our OpenStack technologies to get a Red Hat Enterprise Linux subscription," and, separately, "Our OpenStack offerings are 100% open source. In addition, we provide support for Red Hat Enterprise Linux OpenStack Platform" - customers are still likely to pick Red Hat's OpenStack cloud on the Red Hat operating system, resulting in supplier lock-in.

Cormier's justification: "Enterprise-class open source requires quality assurance. It requires standards. It requires security. OpenStack is no different. To cavalierly 'compile and ship' untested OpenStack offerings would be reckless. It would not deliver open source products that are ready for mission critical operations and we would never put our customers in that position or at risk."

Yes, Red Hat has to seek growth from its cloud offerings, and as an open source leader it has to protect the reputation of the open cloud as being enterprise-ready.

Red Hat's efforts in the open source industry are commendable. For instance, it acquired Ceph provider Inktank last month and said it will open source Inktank's closed source monitoring offering.

But as the open source poster child, it also has a responsibility to contribute more to the spirit of the open cloud and to invest more in open source technology to give users absolute freedom to choose the cloud they like.

Competition among cloud providers is getting fiercer. To grab a larger share of the growing market, some cloud providers are slashing cloud costs while others are differentiating by offering managed services.  But snatching flexibility and freedom from cloud users is never a good idea.

But it would be unfair to single out Red Hat over opening up its ecosystem. HP, IBM, VMware and Oracle are all part of the OpenStack project and all have their own versions of OpenStack cloud.

As Cormier says, "We would celebrate and welcome competitors like HP showing commitment to true open source by open sourcing their entire software portfolio."

Until then it's a murky world. What open source? What open cloud? 



Using cloud for test and development environments? Avoid this costly mistake


Using cloud services for application testing or software development is becoming a common practice because of cloud's scalability, agility, ease of deployment and cost savings.

But some users are not reaping the cost-saving benefit, and in some cases are even seeing cloud costs soar, because of a simple error - they are not turning instances down when they are not in use.

Time and again, purveyors of cloud computing have highlighted scalability as the hallmark of cloud computing, and time and again users have listed the ability to scale resources up and down as one of the biggest cost-saving factors of the cloud.

But when discussing cloud costs and myths with a public cloud consultancy firm recently, I was shocked to learn that many enterprises that use the cloud for testing and development forget to scale down their testing environment at the end of the day and end up paying for idle IT resources - defeating the purpose of using cloud computing.

Building a test and dev lab in the cloud has its benefits - it saves the team the time of building the entire environment from the ground up. Also, should the new software not work, they can launch another iteration quickly. But the main benefit is the lower cost.

But delirious app testers and software developers may be leaving their instances running and paying for cloud capacity through the hours of the night when no activity takes place on the infrastructure.

On the public cloud, turning down unused instances and capacity does not delete the testing environment. This means developers can simply scale the system back up the next day and start from where they left off.
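On AWS, for example, a stopped EBS-backed EC2 instance keeps its volumes, so only the storage is billed while it sits idle overnight. Here is a minimal sketch using the AWS SDK for Python (boto3); the region and instance IDs are placeholders.

```python
# A minimal sketch of pausing a cloud test environment overnight and resuming
# it the next morning. The region and instance IDs are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")
TEST_INSTANCES = ["i-0123456789abcdef0", "i-0fedcba9876543210"]

def stop_test_env():
    # Stopping (not terminating) keeps the EBS volumes, so the environment
    # survives; only storage is billed while the instances are stopped.
    ec2.stop_instances(InstanceIds=TEST_INSTANCES)

def start_test_env():
    # Pick up the next morning exactly where the team left off.
    ec2.start_instances(InstanceIds=TEST_INSTANCES)

if __name__ == "__main__":
    stop_test_env()
```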

But the practice of leaving programs running on the cloud is so common that cloud suppliers, management companies, and consultancies have all developed tools to help customers mitigate this waste.

For instance, AWS provides CloudWatch alarms which help customers set parameters on their instances so they automatically shut down if they are idle or underutilised.
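As a rough illustration of that idea (my own sketch, not AWS sample code - the alarm name, instance ID, region and thresholds are made up), such an alarm can be created with the AWS SDK for Python so that an instance idling below a CPU threshold for a few hours is stopped automatically:

```python
# A rough sketch of a CloudWatch alarm that stops an idle EC2 instance.
# The alarm name, instance ID, region and thresholds are made-up examples.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="eu-west-1")

cloudwatch.put_metric_alarm(
    AlarmName="stop-idle-test-instance",
    # Watch the instance's average CPU utilisation in hourly periods.
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=3600,
    EvaluationPeriods=3,                  # three consecutive idle hours
    Threshold=5.0,                        # below 5% CPU counts as idle
    ComparisonOperator="LessThanThreshold",
    # Built-in EC2 action that stops the instance when the alarm fires.
    AlarmActions=["arn:aws:automate:eu-west-1:ec2:stop"],
)
```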

Another tool it offers is AWS Trusted Advisor - available for free to customers on Business Level Support, or above. It looks at their account activity and actively shows them how they can save money by shutting down instances, buying Reserved Instances or moving to Spot Pricing.

"In 2013 alone, it generated more than a million recommendations for customers, helping customers realise over $207m in cost reductions," AWS spokesman told me.

Cloud costs can be slashed by following good practices in capacity planning and resource provisioning. But that's at a strategic level, while quick savings can be achieved by simple, common-sense measures such as running instances only when necessary.

Perhaps it is time to think of cloud resources as utilities - if you don't leave the lights on when you leave work, why leave idle instances running on the pay-as-you-go cloud?

That's $207m in IT efficiency savings for customers of just one cloud provider. Imagine.

 







AWS may be building a datacentre in Germany, but will the cloud data remain safe and private?


As public cloud provider AWS looks to expand its datacentre footprint in Europe in the post-Prism world, it may have picked Germany because of the country's stricter regulations around data sovereignty. But the recent US court ruling asking Microsoft to hand over one customer's email data held in its Dublin datacentre suggests that data on the cloud, regardless of where it is stored, may not be really private and secure.

While AWS has not explicitly said it is building a datacentre in Germany, at its London Summit last week Stephen Schmidt, its vice-president and chief information security officer, told me that the company is always looking to expand and that a Wall Street Journal article was "pretty explicit" about where its next datacentre might be.

The WSJ article quotes senior vice-president Andy Jassy naming Germany as the location of its next datacentre because of AWS's "significant business in Germany", with customers who could be demanding that their data resides within the country.

According to Chris Bunch of Cloudreach, a UK cloud consultancy firm that implements AWS clouds, AWS is growing so fast and has such market dominance that adding capacity for further growth is clearly sensible; he expects AWS will have built a datacentre in the region within the next 12 months.

Amazon already has three infrastructure facilities in Frankfurt, with seven others in London, Paris and Amsterdam. In addition to these ten Edge locations, it has three EC2 availability zones in Ireland, catering to EU customers.

But just as one would hail the potential AWS datacentre in Germany as a credible move to protect user data on the cloud, along comes a US magistrate court judgment ordering Microsoft to give the district court access to the contents of one of its customers' emails stored on a server located in Dublin. Microsoft challenged the decision, but the judge disagreed and rejected its challenge.

Microsoft said: "The US government doesn't have the power to search a home in another country, nor should it have the power to search the content of email stored overseas."

"Microsoft's argument is simple, perhaps deceptively so," Judge Francis said in an official document, quashing Microsoft's challenge.

"It has long been the law that a subpoena requires the recipient to produce information in its possession, custody, or control regardless of the location of that information," he said.

Well, perhaps we still have a long way to go before the rules of data sovereignty are upheld, but with AWS's growing customer portfolio, it will be good news to have public cloud data reside in Germany, which has some of the strongest and toughest data regulations in the world.







