The invisible business: Mobile plus cloud

cdonnelly

In this guest post Amit Singh, president of Google for Work, explains why enterprises need to start adopting a mobile- and cloud-first approach to doing business if they want to remain one step ahead of the competition.

One of the most exciting things happening today is the convergence of different technologies and trends. In isolation, a trend or a technological breakthrough is interesting, at times significant. But taken together, multiple converging trends and advances can completely upend the way we do things.

Netflix is a classic example. It capitalised on the widespread adoption of broadband internet and mobile smart devices, as well as top-notch algorithmic recommendations and an expansive content strategy, to connect a huge number of people with content they love. The company just announced that it has more than 65 million subscribers.

Other examples of new and improved approaches to existing problems abound. As Tom Goodwin, SVP of Havas Media, said recently: "Uber, the world's largest taxi company, owns no vehicles. Facebook, the world's most popular media owner, creates no content. Alibaba, the most valuable retailer, has no inventory. And Airbnb, the world's largest accommodation provider, owns no real estate. Something interesting is happening."

Each of these companies has capitalised on a convergence of various trends and technological breakthroughs to achieve something spectacular.

Some of the factors I see driving change include exponential technological growth and the democratisation of opportunity, as well as the emergence of public cloud platforms that are fast, secure and easy to use. Together, these trends underpin a powerful formula for rapid business growth: mobile plus cloud.

We know the future of computing is mobile. There are 2.1 billion smartphone subscriptions worldwide, and that number grew by 23% last year.

We spend a lot of time on our mobile devices. Since 2014, more internet traffic has come from mobile devices than from desktop computers. Forward-looking companies are building mobile-first solutions to reach their users and customers, because that's where we all are.

On the backend, the cost of computing has been dropping exponentially, and now anyone has access to massive computing and storage resources on a pay-as-you-go basis, thanks to the cloud. Companies can get started by hosting their data and infrastructure in the cloud for almost nothing.

Hence mobile plus cloud. You can use mobile platforms to reach customers while powering your business with cloud computing. You can build lean and scale fast, and benefit automatically from the exponential growth curve of technology.

As computing power increases and costs decrease, cloud platforms grow more capable and the mobile market expands. In this environment, technological change is an opportunity.

How cloud challenges the incumbents to think different

Snapchat is one of the best examples of how this can work. It was founded in 2011; the team used Google Cloud Platform for its infrastructure needs and focused relentlessly on mobile. Just four years later, Snapchat supports more than 100 million active users per day, who share more than 8,000 photos every second.

The mobile plus cloud formula is exciting, but it also poses challenges for established players. According to a study by IBM, some companies spend as much as 80% of their IT budgets on maintaining legacy systems, such as onsite servers.

For these companies, technological change is a threat. Legacy systems don't incorporate the latest performance improvements and cost savings. They aren't benefitting from exponential growth, and they risk falling behind their competitors who are.

This can be daunting, since it's not realistic for most companies to make big changes overnight.

If you run a business with less than agile legacy systems, here's one practical way to respond to the fast pace of technological change: foster an internal culture of experimentation.

The cost of trying new technologies is very low, so run trials and expand them if they produce results. For example, try using cloud computing for a few data analysis projects, or give a modern browser to employees in one department of the company and see if they work better.

There are no "one size fits all" solutions, but with an open mind, smart leaders can discover what works best for their team.

It's important to try, especially as technology becomes more capable and more of the world adopts a mobile plus cloud formula. Those who experiment will be best placed to capitalise on future convergences.

Uber's success suggests enterprises need to think like startups about cloud

Cloud-championing CIOs love to bang on about how ditching on-premise technologies helps liberate IT departments, as it means they can spend less time propping up servers and devote more to developing apps and services that will propel the business forward. 

It's a shift that, when successfully executed, can help make companies more competitive, as they're nimbler and better positioned to quickly respond to market changes and evolving consumer demands. 

But it takes time, with Gartner analyst John-David Lovelock telling Computer Weekly this week that companies take at least a year to get up and running in the cloud from having first considered taking the plunge. 

"It takes companies about 12 months to say, 'this server is more expensive or this storage array is too expensive so we should go for Compute-as-a-Service or Storage-as-a- Service instead'," he said. 

"Making that shift within a year is not something they can traditionally do if they weren't already on the path to the cloud." 

Future development 
Companies preparing to make such a move can't afford to be without a top-notch team of developers, if they're serious about capitalising on the agility benefits of cloud, according to Jeff Lawson, CEO of cloud communications company Twilio. 

"Every company has to think of themselves as software builders or they will probably become irrelevant. Companies are building software and iterating quickly to create great experiences for customers, and they're going to out-compete those that aren't," he told Computer Weekly. 

Lawson was in London this week to support his San Francisco-based company's European expansion plans, which have already seen Twilio invest in offices in London, Dublin and Estonia. 

In fact, the company claims to have signed up 700,000 developers across the globe, and that one in five people across the world have already interacted with an app featuring its technology. 

The firm's cloud-based SMS and voice calling API is used by taxi-hailing app Uber to send alerts to customers when the drivers they've booked are nearby, for example, and similarly by holiday accommodation listing site Airbnb. 

Both these companies are regularly lauded by the likes of Amazon Web Services, EMC and Google because they're both popular services that are said to be run exclusively on cloud technologies. 

Neither has to suffer the burden of weighty legacy technology investments eating up large portions of their IT budgets. For this reason, it's often said, enterprises should look to them for inspiration on how to make their operations leaner, meaner and more agile. 
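Under the hood, the kind of "driver nearby" alert described above amounts to a single authenticated HTTP POST to Twilio's Messages endpoint. The sketch below builds that request in Python without sending it; the phone numbers and credentials are placeholders, not real values.

```python
import base64
import urllib.parse
import urllib.request

TWILIO_API = "https://api.twilio.com/2010-04-01"

def build_sms_request(account_sid, auth_token, to_number, from_number, body):
    """Build (but don't send) the POST request for Twilio's Messages resource."""
    url = f"{TWILIO_API}/Accounts/{account_sid}/Messages.json"
    data = urllib.parse.urlencode(
        {"To": to_number, "From": from_number, "Body": body}).encode()
    req = urllib.request.Request(url, data=data, method="POST")
    # Twilio authenticates with HTTP basic auth: account SID + auth token
    token = base64.b64encode(f"{account_sid}:{auth_token}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    return req

req = build_sms_request("AC_PLACEHOLDER", "TOKEN_PLACEHOLDER",
                        "+447700900000", "+15005550006",
                        "Your driver is nearby.")
# urllib.request.urlopen(req) would dispatch it, given real credentials
```

The appeal for a developer is that the whole notification pipeline reduces to one HTTP call, with no telecoms infrastructure to own.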

The speed with which Uber and Airbnb have seemingly become household names overnight highlights - to a certain extent - why the move to cloud is something the enterprise can't afford to put off. 

Simply because, in the time it takes them to get there, a newer, nimbler, born-in-the-cloud competitor might have made a move on their territory, and it may be harder to outmanoeuvre them with on-premise technologies.

What the enterprise can learn from Google's decision to go "all-in" on cloud


Google has spent the best part of a decade telling firms to ditch on-premise productivity tools and use its cloud-based Google Apps suite instead. So, the news that it's moving all of the company's in-house IT assets to the cloud may have surprised some.

Surely a company that spends so much time talking up the benefits of cloud computing should have ditched on-premise technology years ago, right?

Not necessarily, and with so many enterprises wrestling with the what, when and how much questions around cloud, the fact Google has only worked out the answers for itself now is sure to be heartening stuff for enterprise cloud buyers to hear.

Reserving the right

The search giant has been refreshingly open in the past with its misgivings about entrusting the company's corporate data to the cloud (other people's clouds, that is) because of security concerns.

Instead, it prefers employees to use its online storage, collaboration and productivity tools, and has shied away from letting them use services that could potentially send sensitive corporate information to the datacentres of its competitors.

This was a view the company held as recently as 2013, but now it's worked through its trust issues, and made a long-term commitment to running its entire business from the cloud.

So much so that the firm has already migrated 90% of its corporate applications to the cloud, a Google spokesperson told the Wall Street Journal.

What makes this really interesting is the implications this move has for other enterprises. If a company the size of Google feels the cloud is a safe enough place for its data, surely it's good enough for them too?

Particularly as Google has overcome issues many other enterprises may have grappled with already (or are likely to) during their own move to the cloud.

Walking the walk

What the Google news should serve to do is get enterprises thinking a bit more about how bought into the idea the other companies whose cloud services they rely on really are.

While they publicly talk up the benefits of moving to the cloud, and why it's a journey all their customers should be embarking on, have they gone (or are they in the throes of going) on a similar journey themselves?

If not, why not, and why should they expect their customers to do so? If they are (or have), then they should talk about it. Not only will doing so add some much-needed credibility to their marketing, but it will show customers they really do believe in cloud, and aren't just talking it up because they've got a product to sell.

Did you believe in any of these cloud computing myths?

avenkatraman

Myths and misunderstandings around the use and benefits of cloud computing are slowing down IT project implementations, impeding innovation, inducing fear and distracting enterprises from achieving business efficiency and innovation, analyst firm Gartner has warned.

It has identified the top ten common misunderstandings around cloud:

Myth 1: Cloud is always about the money

Assuming that the cloud always saves money can lead to career-limiting promises. Saving money may end up being one of the benefits, but it should not be taken for granted. It doesn't help that the big daddies of the cloud world - AWS, Google and Microsoft - are tripping over each other to cut prices. But cost savings should be seen as a nice-to-have benefit, while agility and scalability should be the top reasons for adopting cloud services.

Myth 2: You have to do cloud to be good

According to Gartner, this is the result of rampant "cloud washing." Some cloud washing is based on a mistaken mantra (fed by hype) that something cannot be "good" unless it is cloud, a Gartner analyst said.

Besides, enterprises are labelling many of their IT projects "cloud" for a tick in the box and to secure funding from stakeholders. People are falling into the trap of believing that if something is good, it has to be cloud.

There are many use cases where cloud may not be a great fit - for instance, if your business does not experience many peaks and lulls in demand, then cloud may not be right for you. Also, for enterprises in heavily regulated sectors, or those operating under strict data protection regulations, a highly agile datacentre within IT's full control may be the best bet.

Myth 3: Cloud should be used for everything

Related to the previous myth, this refers to the belief that the characteristics of the cloud are applicable to everything - even legacy applications or data-intensive workloads.

A legacy application that doesn't change is not a good candidate for migration unless there are clear cost savings.

Myth 4: "The CEO said so" is a cloud strategy

Many companies don't have a cloud strategy and are adopting cloud just because their CEO wants it. A cloud strategy begins by identifying business goals and mapping the potential benefits of the cloud to them, while mitigating the potential drawbacks. Cloud should be thought of as a means to an end; the end must be specified first, Gartner advises.

Myth 5: We need one cloud strategy or one vendor

Cloud computing is not one thing, warns Gartner. Cloud services span the IaaS, SaaS and PaaS models, and cloud types include private, public and hybrid clouds. Different applications are right candidates for different types of cloud. A cloud strategy should be based on aligning business goals with potential benefits. Those goals and benefits differ across use cases and should be the driving force for businesses, rather than standardising on one strategy or vendor.

Myth 6: Cloud is less secure than on-premises IT

Cloud is perceived as less secure, but to date there have been very few security breaches in the public cloud - most breaches continue to involve on-premises datacentre environments.

Myth 7: Cloud is not for mission-critical use

Cloud is still mainly used for test and development. But the analyst firm notes that many organisations have progressed beyond early use cases and are using the cloud for mission-critical workloads. There are also many enterprises (such as Netflix or Uber) that are "born in the cloud" and run their business completely in the cloud.

Myth 8: Cloud = Datacentre

Most cloud decisions are not (and should not be) about completely shutting down datacentres and moving everything to the cloud. Nor should a cloud strategy be equated with a datacentre strategy. In general, datacentre outsourcing, datacentre modernisation and datacentre strategies are not synonymous with the cloud.

Myth 9: Migrating to the cloud means you automatically get all cloud characteristics

Don't assume that "migrating to the cloud" means that the characteristics of the cloud are automatically inherited from lower levels (like IaaS), warned Gartner. Cloud attributes are not transitive. Distinguish between applications hosted in the cloud from cloud services. There are "half steps" to the cloud that have some benefits (there is no need to buy hardware, for example) and these can be valuable. However, they do not provide the same outcomes.

Myth 10: Private Cloud = Virtualisation

Virtualisation is a cloud enabler, but it is not the only way to implement cloud computing, nor is it sufficient on its own. Even if virtualisation is used (and used well), the result is not cloud computing. This is most relevant in private cloud discussions, where highly virtualised, automated environments are common and, in many cases, are exactly what is needed. Unfortunately, these are often erroneously described as "private cloud", according to the analyst firm.

"From a consumer perspective, 'in the cloud' means where the magic happens, where the implementation details are supposed to be hidden. So it should be no surprise that such an environment is rife with myths and misunderstandings," said David Mitchell Smith, vice president and Gartner Fellow. 

How Ucas keeps downtime away with disaster recovery strategies


Business continuity is often perceived as a concept followed only by the biggest of big businesses, but the reality is that the need for it, and the corresponding services, increasingly underpin everyday life. An invisible safety net that makes sure important everyday events continue, no matter what, is crucial for all verticals. And education is no exception.

In this guest blogpost, Mike Osborne, school governor and head of business continuity at Phoenix IT, talks about the importance of business continuity for Ucas.

During the last few weeks, despite the fact that students now have to pay much higher fees for studying, we have seen more people than ever applying for higher education. An extra 30,000 new places were created this year. This has made the competitive battle between universities even more intense as they fight to secure the best students, especially over the clearing period.

For both the Universities and Colleges Admissions Service (Ucas) and universities, the clearing and application periods are a time when the availability and function of their operations are most visible, not just to students and their parents but also to the government and the media.

In 2011, both universities and students experienced massive problems with the Ucas online system during the clearing and application periods. This year, it's more important than ever for Ucas, universities and students alike that there are no system disruptions, so students can get the offers they need in a timely fashion and universities can fill their places.

Until 20 September, when the clearing vacancy search closed, Ucas was put to the test as thousands of students scrambled to get an offer through the clearing system. According to Ucas, on the first weekend after A-level results were announced last year, some 20,000 applicants were placed at a university or college through clearing. Considering the critical nature of this period, it's essential that Ucas and the universities have ICT and call centre resources operating effectively, without interruptions affecting operations.

ICT and call centre systems are vulnerable to a variety of service disruptions, ranging from severe disasters (such as fire) to mild ones (such as short-term software glitches, or power or communications loss). Universities and Ucas are now putting in place robust ICT contingency plans, such as workplace business continuity and cloud-based disaster recovery as a service (DRaaS), to ensure that the information processing systems and student data critical to the university are maintained and protected against relevant threats, and that the organisation has the ability to recover systems in a timely and controlled manner.

With many mid-market companies also seeing the potential of disaster recovery using cloud technology, it's not surprising that universities and Ucas are spending more time, money and effort on implementing DRaaS plans. DRaaS allows data to be stored securely offsite and, if the right service is selected, can also provide near-instantaneous system and network recovery.

When added to Call Centre recovery services as part of a Business Continuity Plan, DRaaS offers a convenient and cost effective solution.

With the government and the Higher Education Funding Council for England (Hefce) imposing fines on institutions for over-recruitment, and with student data, including unique research projects, increasing, it is more essential than ever for universities and Ucas to keep system downtime to a minimum.

Picking the cloud service that's right for you


Organisations tend to fall into one of two camps today: those that are already planning and implementing a cloud strategy, and those that will be doing so soon. But the options companies are faced with are dizzying, often contradictory, and usually dangerously expensive. So what's the best way for organisations to find the ideal cloud service for their specific needs?

Determining what is needed from the cloud will drive what platform organisations should deploy on. Considerations like budget, expected performance, and project timeline all have to be carefully balanced before plunging ahead. Broadly speaking the platform options range from using someone else's public cloud, such as AWS, to building your own private cloud from scratch.  Where an organisation lands on that spectrum will be driven by how they rank the primary factors involved.

In this guest blogpost, Christopher Aedo, chief product architect at Mirantis, explains how to evaluate cloud requirements and pick the right platform.

In essence there are seven key factors to address that will help businesses clarify what really matters and enable them to establish their individual cloud requirements. These are:

Control: How much control do you have over the environment and hardware? Make sure the cloud platform you select delivers the level of control you require. 

Deployment Time: How long before you need to be up and running? How much time will you burn just sorting out, ordering, racking and provisioning the hardware? It is critical that the cloud platform you choose can be deployed in the right amount of time.

Available Expertise: Can your single IT staff member handle the project, or do you need a team of experts poached from the biggest cloud providers? Choose a cloud platform that matches the expertise you have available - or can afford to bring in.

Performance: In a single server there are many components affecting performance - from the memory bus to the NIC and everything in between. Performance directly correlates with budget: a larger budget will usually buy greater performance. However, there is no reason a smaller budget can't deliver high performance, provided you select the right option.

Scalability: Your platform of choice should accommodate adding, or reducing, capacity quickly and easily. Will your chosen platform require downtime to scale up or down or can it be executed seamlessly?  

Commitment: From no contract "utility pricing" to the long term investment of owning all your gear - the longer you're tied up, the greater the risk.

Cost: This may be the most important and most difficult factor to account for. You can see it as an output from your other factors, or as the ultimate limiter dictating where you'll make concessions. There are good ways to maximise your budget while minimising your risk, as long as you keep your head up and your eyes open.

By addressing these factors early in the process of implementing a cloud-based solution, you will save yourself time, resources and budget in the long run. Having addressed what you want the cloud to deliver, it is important to match your requirements with the right type of cloud platform.
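One hypothetical way to turn the seven factors above into a decision is a simple weighted scoring exercise. The weights, 1-5 ratings and platform profiles below are purely illustrative, not vendor data:

```python
# The seven evaluation factors discussed above.
FACTORS = ["control", "deployment_time", "expertise", "performance",
           "scalability", "commitment", "cost"]

def score_platform(ratings, weights):
    """Weighted sum of 1-5 fit ratings; higher means a better fit."""
    return sum(weights[f] * ratings[f] for f in FACTORS)

def best_platform(candidates, weights):
    """Pick the candidate platform with the highest weighted score."""
    return max(candidates, key=lambda name: score_platform(candidates[name], weights))

# Illustrative weights for a hypothetical project that prizes speed and low cost.
weights = {"control": 2, "deployment_time": 3, "expertise": 3,
           "performance": 2, "scalability": 3, "commitment": 1, "cost": 3}

# Illustrative fit ratings (1 = poor fit, 5 = strong fit) -- not real benchmarks.
candidates = {
    "public cloud":   {"control": 2, "deployment_time": 5, "expertise": 5,
                       "performance": 2, "scalability": 5, "commitment": 5, "cost": 4},
    "hosted private": {"control": 4, "deployment_time": 2, "expertise": 2,
                       "performance": 4, "scalability": 2, "commitment": 2, "cost": 3},
}
```

With these invented numbers, `best_platform(candidates, weights)` picks the public cloud; shifting the weights towards control and performance would tip the result the other way, which is the point of making the trade-offs explicit.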

Here are the main cloud options:

Option 1: The Public Cloud

The big players here are AWS and Rackspace, but there are other contenders with fewer bells and whistles, such as DigitalOcean and Linode. These represent the lowest entry barrier (you just need internet access and a credit card!) but also offer the least control and the greatest cost increases as you scale up.

The public cloud is priced like a utility, offering the opportunity to scale up or down as needed. This is well suited to handling highly elastic demand, but it's important to keep an eye on what you've spun up.

With a public cloud you get limited access to the underlying hardware, and no visibility into what's beneath the covers of the cloud - although you will get some flexibility in configuration and near instant deployment of service without the need for any real expert to be involved.

Generally speaking, though, you're going to find relatively low performance with a public cloud, with higher performance coming at significantly increased cost. You can also expect to be billed by the minute in return for not being held to any contract. Many providers offer discounts in exchange for some sort of commitment, but then you give up the ability to drop resources when you no longer need them.
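The per-minute vs. committed trade-off is easy to reason about with a little arithmetic. In this sketch the $0.10 on-demand and $0.07 committed hourly rates are invented for illustration, not any provider's actual pricing:

```python
def monthly_cost(hours_used, on_demand_rate, committed_rate=None,
                 committed_hours=730):
    """On-demand: pay only for hours used. Committed: pay for the whole month."""
    if committed_rate is None:
        return hours_used * on_demand_rate
    return committed_hours * committed_rate

def break_even_hours(on_demand_rate, committed_rate, committed_hours=730):
    """Monthly usage above which the committed discount becomes cheaper."""
    return committed_hours * committed_rate / on_demand_rate
```

With these made-up rates, `break_even_hours(0.10, 0.07)` comes to 511 hours: an instance busy for more than about 70% of the month favours the commitment, while anything more elastic favours pure utility pricing.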

Option 2: Hosted Private Cloud

There are many well-known vendors offering options in this space, ranging from complete turn-key environments to build-to-order approaches. They will provide the hardware on short-term lease, and will charge you to manage that hardware.  

Companies like Rackspace will work with you to architect your environment and provide assistance in deployment - which could take up to six weeks. You'll need moderate to extreme expertise, and your average junior sysadmin is going to be way out of their depth using such a service.

Levels of control will vary from high to minimal, depending on how much of the platform you manage and deploy yourself. The level of commitment will also vary, but the longer your commitment, the more likely an alternative platform is to make sense. Hosted private cloud is not well suited to elastic demand: scaling up takes two to six weeks, and generally there will be no scale-down option.

Option 3: Build your own private cloud (BYPC)

BYPC requires a high level of technical expertise within the business and will present you with the greatest technical and financial risk. However, you will have total control over the hardware design, the network design, and how your cloud components are configured - but expect this to take a year to 18 months to complete.

Your costs in the build-your-own approach can be kept down if performance and reliability are of no concern, or they can (needlessly) go through the roof if you're not making carefully planned decisions. The performance of BYPC will be entirely dependent on your budget constraints and how successful your architectural planning is.

There are lots of moving pieces, and the risks are tremendous, as you may be committing hundreds of thousands of dollars to your cloud pilot. Ask anyone who's actually tried this; it's a lot harder than it looks.

Option 4: Private-cloud-as-a-Service (PCaaS)

PCaaS, such as a managed OpenStack environment, represents a balance between the value and flexibility of public cloud and the control of private cloud.

PCaaS provides total control over how hardware is used, and that hardware is 100% dedicated to you, with a minimum one-day commitment on a rolling contract. As a result of the minimal commitments, it can be deployed within a few hours, and you will be free to scale the size of your environment up and down at nearly the same pace as on a public cloud.

The costs are higher than a comparable number of VMs in a public cloud, but with no long-term commitment and clear pricing from the start, your financial risks are lower than any other private cloud approach.

You'll need a moderate skill level with PCaaS, but your risks are mitigated because you're in a managed environment. And whereas, until recently, PCaaS required you to have a reasonable amount of OpenStack knowledge, developments such as OpenStack Express have drastically reduced the expertise needed to implement a PCaaS.

Each of these cloud platforms has validity, as well as a real sweet spot where that particular approach is the only obvious choice for your business needs. If you properly consider your requirements and how they match the options available, your cloud project will not end up as a costly mistake.

Microsoft Azure European users take note - HDInsight performance issue


The Microsoft Azure cloud service status website at 5pm BST on Friday, 26 September, showed that, while the core Azure platform components were working properly, there was "partial performance degradation" on Azure's HDInsight service for customers in West Europe.

The status website warned that customers may experience intermittent failures when attempting to provision new HDInsight clusters in West Europe.

HDInsight is Microsoft's cloud-hosted Hadoop distribution. It allows IT teams to process unstructured or semi-structured data from web clickstreams, social media, server logs, devices and sensors, and to analyse that data for business insights.
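To make that concrete, here is a miniature, plain-Python version of the kind of clickstream aggregation a Hadoop cluster performs at terabyte scale. The log format and page names are invented for illustration; this is not the HDInsight API:

```python
from collections import Counter

def parse_clickstream(lines):
    """Yield the page from each '<timestamp> <user_id> <page_url>' record."""
    for line in lines:
        parts = line.split()
        if len(parts) == 3:          # skip malformed records
            yield parts[2]

def top_pages(lines, n=3):
    """Count page views -- the reduce step a Hadoop job would run at scale."""
    return Counter(parse_clickstream(lines)).most_common(n)

# A tiny, made-up clickstream log.
log = [
    "2014-09-26T16:59:01 u1 /home",
    "2014-09-26T16:59:02 u2 /pricing",
    "2014-09-26T16:59:03 u1 /home",
]
```

On a cluster, the same map (parse) and reduce (count) steps would be distributed across many nodes; the logic is identical, only the scale differs.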

Microsoft has assured cloud users that its engineers have identified the root cause of the performance degradation and are working out the mitigation steps.

The company has vowed to provide updates every two hours, or as events develop. I sense a long wait before the weekend beckons for European enterprises' Azure users.

Doesn't the NHS use Microsoft Azure HDInsight? Oh yes, it does!

EMC-HP merger would have meant more of the same old complex, slow, legacy and big IT


Two very large companies that have been under tremendous pressure in the software-defined storage and cloud era - EMC and HP - toyed with the idea of a merger, according to the Wall Street Journal, but the idea eventually fell apart over concerns on both sides about whether their shareholders would give it a nod.

The deal would have created a mega-vendor worth $130bn with HP's Meg Whitman as the chief executive of the combined entity and EMC's Joe Tucci as chairman or president.  

The ailing EMC has been under pressure from investors calling for it to spin off VMware, on the prospect that the company will do better if split up.

According to the WSJ, the EMC-HP merger talks had been going on for almost a year.

But the combination of two traditional vendors would have only meant more of the same old legacy, complex, slow and big IT offerings. There is an absence of meaningful synergy, but a lot of service overlap.

Meg Whitman, CEO, HP

HP has a poor acquisition history. A merger would have been bad news for both companies, even though EMC has a better track record of acquisitions and is attempting to redefine itself for the new cloud era.

EMC Corp is far more than just EMC: it has fingers in the pies of VMware, RSA, VCE, Pivotal and so on. Unpicking these, or keeping them all going, would be difficult.

Other names mentioned in a merger with EMC include Dell and Cisco Systems.

Mergers are always hit or miss, and more of a risk when the stakes are higher, as in this case. The problem with these traditional vendors is that, in the past, they have tried to address all aspects of the datacentre, and so they have competing products. For example, EMC-owned VMware's software-defined networking (SDN) offerings threaten Cisco's switch and router business, worth billions.

As one analyst tells me, if EMC is really seeking a merger, it should be going for a Rackspace-type platform company (not Rackspace itself, as it has now ruled itself out), where EMC can make a bigger play of VMware's cloud offering, of the whole software-defined everything message, of ViPR and so on.

Or would Tucci go for Cisco? Markets are betting on an EMC-Cisco deal, with EMC's share price up 16 cents.

A merger with heavyweight HP would have left a company trying to sell a complex approach into customers' standard datacentres. Thankfully, it was only a thought.

Dell is making all the right noises and the right bets. Will its magic work?


I had a chance to see Michael Dell in the flesh for the first time yesterday, in Brussels at the Dell Solutions Summit. He delivered a great keynote on Dell's datacentre strategy and its investment plans, and also spoke about all the hot IT topics: software-defined infrastructure, the internet of things, security and data protection.

Michael Dell, founder & CEO, Dell

Michael sounded optimistic about Dell's place in the future of IT, but what was new was how open Dell has become as a company, and its firm commitment to everything that defines new-age IT: software-defined infrastructure, cloud, security, mobile, big data, next-gen storage and IoT.

For one, Michael was candid with the numbers. He said:

  • The total IT market is worth $3 trillion and we have a 2% share of it. Only 10 companies have 1% or more share of that $3 trillion market.
  • Dell's business comprises 85% government and enterprise IT and just 15% is end-user focused.

This kind of number-feeding to press and analysts is new at Dell which, until now, like the rest of the industry's service providers, kept its business numbers close to its chest.

But that was not all: Michael didn't hold back from saying a few things that raised eyebrows:

  • "I wish we hadn't made some of the acquisitions we did."
  • "As ARM moves to 64-bit architecture, it becomes more interesting," Michael said. He said the company is open to working with rivals of its longstanding partner Intel for mainstream datacentre products if that's where the market moves.
  • He also said Dell is a big believer in the software-defined future. "We ourselves are moving our storage IT into a software-defined environment."
  • And to those that wrote off the PC industry, Michael said: "We absolutely believe in the PC business, we are consolidating/growing".

Michael's optimism and confidence in the company's future is a far cry from last year, when the company's ailing business strategy forced it to retreat from the public eye.

"Going private has helped us," he said while speaking in Brussels. "It has enabled us to put our focus 100% on our customers. We have invested more in research, development, innovation and in channels in the past year."

Dell also seems to be striking the right chord with its customers, channel and analysts, as those I spoke to said they like the company a lot and are pleased with how quickly it adapts and listens to its users.

Dell Research will focus on five areas: software-defined IT, next-generation storage (NVM, flash), next-gen cooling, big data/analytics and IoT. Analysts say that's a good bet.

"Dell's foray into research clearly designed to establish it as an IT innovator as well as a scale/efficiency player," says Simon Robinson of 451 Research on Twitter.

Product-wise, too, it is making progress. Dell has been more creative than its competitors in designing its new servers around the latest Xeon chip. Its 13th-generation PowerEdge servers have capabilities such as NFC for server inventory management, new flash capabilities and more front sockets.

Dell is also being innovative in its enterprise cloud strategy. It provides the reference architectures, proofs of concept and server technologies for its system integrators to do the cloud implementation for customers. Having catered to the likes of AWS in the past, Dell has used that cloud experience to build reference architectures, but gets the channel to implement them.

"We see private cloud as the future of cloud computing," Michael said. According to him, enterprises in Europe prefer "local" clouds for data sovereignty and privacy issues, so it is supporting local system integrators with local datacentres to build cloud for the customers.

Michael and his company are certainly making the right noises and investing in the right technologies. But whether that will lift its ranking in the datacentre (which I see as fourth, after Cisco, HP and IBM, in that order), only time will tell.

Tintin with Snowy

Also, is it symbolic that Dell held its Solutions Summit party at the world-famous Comic Strip Museum in Brussels - the home of Tintin, Captain Haddock, the Smurfs and Asterix? Don't know, but I sure did have fun!

Cloud wars just got spicier, thanks to Google

avenkatraman | No Comments | No TrackBacks
| More

Ambitious startups and developers around the world got a big treat from Google ahead of the weekend - $100,000 worth of Google cloud credits along with 24/7 support from the tech experts at Google.

Urs Hölzle, Google Fellow, launched the "Google Cloud Platform for Startups" initiative on Friday to help startups take advantage of its enterprise cloud offering and "get resources to quickly launch and scale their idea". The free cloud resources are aimed at helping developers focus on code without worrying about managing infrastructure. Google has also not set any restrictions on the type of cloud services users can spend their credits on, giving them complete flexibility to choose IaaS, PaaS, SaaS or even data-related cloud offerings.

But to qualify, startups need to have less than $5m in funding and less than $500,000 in annual revenue. The cloud credits are available through incubators, accelerators and investors.

Cloud computing has always been a technology that democratised IT, giving startups a level playing field to compete with the big players. And cloud behemoth AWS is seen as the "go-to" cloud option for the cool, emerging poster children of the web, such as Netflix and Instagram (before the latter was acquired by Facebook).

Google Appliance as shown at RSA Expo 2008 in San Francisco (Photo credit: Wikipedia)

Google, AWS, Microsoft and IBM have so far been tripping over each other to announce price drops to lure more users to their cloud services. AWS launched a free programme called AWS Activate to help selected startups with resources for working with AWS. It includes services such as AWS web-based training, virtual office hours with an AWS Solutions Architect and credit for eight self-paced training labs. 

But Google has now upped the game by targeting startups with cloud credits of a size and scale not seen before.

Cloud giants are targeting the ambitious startups because today's startups can become tomorrow's enterprises and the providers want these potential customers to use their platforms. 

Startups are lean and quick to adopt new technologies such as cloud, but need some technical expertise so they can focus on their business rather than the underlying technology. Google is offering exactly that, in a bid to gain a footprint in enterprise IT.

It will be interesting to see how the UK's promising tech startups, such as SwiftKey and Hailo, use these cloud credits from Google. But more important is how quickly AWS, Microsoft and IBM respond.

Cloud wars just got spicier!  

Want cloud success? Eat your greens!

avenkatraman | No Comments | No TrackBacks
| More

Cloud computing is becoming the default option for delivering IT services, but to reap all the benefits of the cloud, enterprises must do the boring stuff first.

On Thursday, I attended a Westminster eForum seminar on the future of cloud computing where I witnessed very interesting conversations around cloud adoption, risks, and its future from speakers ranging from analysts, legal experts and industry association heads to cloud vendors and public sector professionals.

Boring but necessary! (Photo credit: Wikipedia)

When experts said cloud can be secure and cost-effective and can lead to innovation, it raised no eyebrows among the delegates. This suggests to me that users are fully convinced of cloud's benefits.

But even then, some cloud projects backfire. Why?

The excitement around cloud is leading enterprises to overlook the boring work they need to do beforehand to yield the full benefits of the cloud. Ovum analyst Gary Barnett illustrated this best in his (PowerPoint-free!) session. Here is an article in which Gary shares instances where cloud has failed users.

"My mum made sure I ate my broccoli before I got my pudding," Gary said. But in the cloud world, no one's eating the broccoli, he said.

"If you don't clean up your data before putting it on the cloud platform, you will have cloudy rubbish." He also pointed out that some users find cloud expensive because they have not built proper policies and guidelines around its use.

Experts at the seminar insisted cloud is a secure way of doing IT and cloud breaches are usually because of users' "silly and predictable passwords" and their lack of awareness. Gary urged enterprises to educate users on the loopholes of predictable passwords.

"No one loves the boring stuff. But just like you have to eat your greens, you have to do all the boring stuff before adopting the cloud. Otherwise you're just transferring onsite mess offsite," Gary said.

The "Eat your greens" theme continued throughout the seminar, and the floor roared with laughter when Microsoft's cloud director Maurice Martin said: "In my case, the greens were the cabbages, broccoli was too posh."

 

Five questions you must ask your cloud provider

avenkatraman | No Comments | No TrackBacks
| More

One of the main barriers to cloud adoption is data privacy. This is an issue because, for the majority of cloud providers, EU/EEA and US data privacy and information security standards are minefields that are very difficult to cross. That is because their focus has been on the ease of use and functionality of their services, rather than the all-important data privacy, information security, data integrity and reliability requirements around providing these services responsibly.

But, when looking through the plethora of cloud service providers, you can immediately sort the 'wheat from the chaff' once you start drilling down into the data privacy, information security, data integrity and reliability capabilities offered to ensure the protection of your and your customers' data.

In this guest blog post, Mike McAlpen, the executive director of security & compliance and data privacy officer at 8x8 Solutions outlines the questions cloud users must ask their providers before signing a contract.


Have you chosen the right cloud services provider?
- Mike McAlpen


By asking your cloud services provider the following questions you will be on the way to knowing whether you can entrust your data into its care.  

  • Compliance with EU/EEA data privacy standards

The most important question is whether your provider can supply third-party verification/audit assurance of its compliance with EU/EEA and/or US data privacy standards. It is not enough for the provider simply to produce this verification/audit assurance; it must show that it has fully implemented the UK Top 20 Critical Security Controls for Cyber Defence and/or ISO 27001, and/or rigorous US standards such as the Federal Information Security Management Act (FISMA), as well as the international PCI-DSS v3.0 security standard.

If this verification/audit assurance is not available, then your business is in peril of not meeting EU/EEA and/or US standards.

In the US, in many EU/EEA countries and elsewhere, it can be a criminal offence if a breach of personal data privacy occurs and an individual employee or senior manager, depending on the circumstances of the breach, is deemed responsible.

  • Onward Transfer of Data

Does your provider work with third-party suppliers in order to deliver the cloud services it offers? If so you must check that it has contracts in place with its third-party suppliers that provide assurance that they are, and will continue to be, compliant with EU/EEA and/or US standards.

  • Data Encryption

Does the cloud solutions vendor provide the capability to encrypt sensitive data when it is being transferred across the internet and, importantly, again when it is 'at rest' (i.e. stored by your cloud services provider, or in files on a computer, laptop, USB flash drive or other electronic media)?

  • The Right to be Forgotten

Has your provider's solution been engineered to enable it to identify and associate each user's personal data? It must also provide the capability for each user to view and modify this personal data. In addition, if the user wishes this data to be deleted, the provider must be able to completely erase all of that person's personal data without affecting anyone else's.

  • Service Level Agreements (SLAs)

Outside of compliance with data privacy standards, another key issue is agreeing with your provider how you will determine and then document, within your services contract, the required service level agreements (SLAs). It's no use whatsoever having the cloud services you have always wanted if you have no way of measuring or monitoring whether they are actually being delivered to an acceptable level, or if there are no financial penalties for non-compliance.

If your provider cannot answer "yes" to the above questions and you cannot agree to mutually acceptable SLAs - look for another provider!
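The arithmetic behind that measuring and monitoring is simple, which is exactly why it belongs in the contract. A minimal sketch of a monthly availability check - the function name, SLA target and downtime figures are purely illustrative, not drawn from any provider's actual contract:

```python
def availability_pct(downtime_minutes: float, period_minutes: float = 30 * 24 * 60) -> float:
    """Percentage availability over a billing period (default: a 30-day month)."""
    if not 0 <= downtime_minutes <= period_minutes:
        raise ValueError("downtime must be between 0 and the period length")
    return 100.0 * (period_minutes - downtime_minutes) / period_minutes

# A 99.9% monthly SLA tolerates only about 43.2 minutes of downtime...
print(availability_pct(43.2))            # ≈ 99.9
# ...so a single five-hour outage blows straight through it.
print(availability_pct(5 * 60))          # ≈ 99.3
print(availability_pct(5 * 60) >= 99.9)  # False
```

Agreeing in writing who takes the downtime measurements, and over what period, is what turns "we aim for high availability" into an enforceable SLA.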

VMworld 2014: What happened on Day 1

avenkatraman | No Comments | No TrackBacks
| More

On Day 1 of its annual conference VMworld 2014 themed "No Limits", VMware unveiled its strategies around open cloud platform OpenStack and around container technology Kubernetes. It also launched new tools to extend its software-defined datacentre and hybrid cloud offerings.

Open software-defined datacentre

One of the significant announcements was VMware Integrated OpenStack - a service that gives enterprises - especially SMBs - the flexibility to build a software-defined datacentre on any technology platform (VMware or not).

VMware Integrated OpenStack distribution is aimed at helping customers repatriate workloads from "unmanageable and insecure public clouds". Take that, AWS.

Container technology and VMware infrastructures; Kubernetes collaboration

VMware is collaborating with Docker, Google and Pivotal to allow enterprises to run and manage container-based applications on its platforms.

At the annual conference, VMware said it has joined the Kubernetes community and will make Kubernetes' patterns, APIs and tools available to enterprises. Kubernetes, currently in pre-production beta, is an open-source implementation of container cluster management.

With Google, VMware's efforts will focus on bringing the pod-based networking model of Open vSwitch to enable multi-cloud integration of Kubernetes.

"Not only will deep integration with the VMware product line bring the benefits of Kubernetes to enterprise customers, but their commitment to invest in the core open source platform will benefit users running containers," said Joerg Heilig, VP Engineering, Google Cloud Platform. "Together, our work will bring VMware and Google Cloud Platform closer together as container based technologies become mainstream."

With Docker, it will collaborate to enable Docker Engine in VMware workflows. It will also work to improve interoperability between Docker Hub and VMware vCloud Air, VMware vCenter Server and VMware vCloud Automation Center.

New hybrid cloud capabilities

At VMworld, VMware released new hybrid cloud service capabilities and a new line-up of third-party mobile application services. The new capabilities include vCloud Air Virtual Private Cloud OnDemand, which offers customers on-demand access to vCloud Air. Another capability, VMware vCloud Air Object Storage, is aimed at providing users with scalable storage options for unstructured data. It will enable customers to easily scale to petabytes and pay only for what they use, according to the company.

It also launched mobile development services within VMware's vCloud Air's service catalog.

Management as a service offerings

VMware also released two new IT management tools under its vRealize brand - one for managing a software-defined datacentre and one for public cloud infrastructure services (IaaS).

VMware vRealize Air Automation is the cloud management tool that allows users to automate the delivery of application and infrastructure services while maintaining compliance with IT policies.

Meanwhile, VMware vRealize Operations Insight offers performance management, capacity optimisation and real-time log analytics. The tool also extends operations management beyond vSphere to an enterprise's entire IT infrastructure - another sign that VMware is opening up its ecosystem to accommodate other virtualisation platforms.

Partnerships with Dell on software defined services

VMware has extended its collaboration with Dell to combine its NSX network virtualisation platform with the latter's converged infrastructure products.

"Global organisations are adopting the software-defined datacentre as an open, agile, secure and efficient architecture to simplify IT and transition to the hybrid cloud," said Raghu Raghuram, executive vice president, SDDC division, VMware. "The software-defined datacentre enables open innovation at speeds that cannot be matched in the hardware-defined world. As partners, VMware and Dell will advance networking in the SDDC, and collaborate to make advanced network virtualisation available to mutual customers." 

Partnership with HP on hybrid cloud

VMware and HP have extended their collaboration to give momentum to users' SDDC and hybrid cloud adoption. As part of the partnership, HP Helion OpenStack will support enterprise-class VMware virtualisation technologies.

The companies will also make a standalone HP-VMware networking solution generally available. Together, these collaborative efforts can help simplify adoption of the software-defined datacentre and hybrid cloud with less risk, greater operational efficiency and lower costs.

All in all, it looks like VMware is opening up to competing platforms and warming to open-source technologies, but retains its standoffish traits when it comes to public cloud services.

Microsoft Azure goes down for users around multiple regions including Europe and Asia

avenkatraman | No Comments | No TrackBacks
| More

Just when I thought to myself that cloud services must be improving - there are fewer outages reported this year than last year - Microsoft's Azure cloud service went down for many users, including European ones, earlier this week.

Microsoft's Azure status page currently displays a chirpy: 

All good!

Everything is running great.


It also displays a bright green check beside its core Azure platform components, such as Active Directory, and popular cloud services, including its SQL databases and storage services.

A snoop into its history page shows that all wasn't good aboard Azure on Monday and Tuesday. Users experienced full service interruptions and performance degradation across several services, including StorSimple, storage services, website services, backup and recovery, and virtual machine offerings.

For a brief moment on Tuesday, August 19th, a subset of its customers in West Europe and North Europe using Virtual Machines, SQL Database, Cloud Services, and Storage were unable to access Azure resources or perform management operations. Users accessing Azure's Website cloud services in Northern Europe too faced connectivity issues.

WELCOME TO Microsoft® (Photo credit: Wikipedia)

The previous day, some of its customers across multiple regions were unable to connect to Azure Services such as Cloud Services, Virtual Machines, Websites, Automation, Service Bus, Backup, Site Recovery, HDInsight, Mobile Services, and StorSimple. 

Some of the services were down for almost five hours.

This week's global outage follows last week's (August 14) Azure outage, when users across multiple regions experienced a full interruption of the Visual Studio Online service. The news doesn't bode well for CEO Satya Nadella's "cloud-first" strategy.

Here is a detailed report on Azure's latest datacentre outage.

Well, I may have tempted fate. Resilience and reliability are two words I'll use sparingly to describe public cloud services. 

PUE - the benevolent culprit in the datacentre

avenkatraman | No Comments | No TrackBacks
| More

Internet of things, big data and social media are all creating an insatiable demand for scalable, sophisticated and agile IT resources, making datacentres a true utility. This is leading big tech and telecoms companies to drift a little from their core competency and build their own customised datacentres - take Telefonica's €420m investment in its new Madrid datacentre.

But the mind-boggling growth of computing infrastructure is occurring amid shocking increases in energy prices. Datacentres consume up to 3% of global electricity and produce 200 million metric tons of carbon dioxide, at an annual cost of $60bn. No wonder IT energy efficiency is a primary concern for everyone from CFOs to climate scientists.

In this guest blog post, Dave Wagner, TeamQuest's director of market development, who has 30 years of experience in the capacity management space, explains why enterprises must not be too hung up on PUE alone to measure their datacentre efficiency.


Measuring datacentre productivity? Go beyond PUE
- by Dave Wagner


In their relentless pursuit of cost-effectiveness, companies measure datacentre efficiency with power usage effectiveness (PUE). The metric takes the total amount of power coming onto the datacentre floor and divides it by how much of that power is actually used by the computing equipment.

PUE = Total energy / IT energy

PUE is a necessary but not sufficient indicator for gauging the costs associated with running or leasing datacentres.

While PUE is a detailed measure of datacentre electrical efficiency, it is one of several elements that actually determine total efficiency. In the bigger picture, focus should be on more holistic and accurate measures of business productivity, not solely on efficient use of electricity.

Gartner analyst Cameron Haight has talked about how a very large technology company owns the most efficient datacentre in the world, with a PUE of 1.06. This means that roughly 94% of every watt that comes onto the floor actually reaches the processing equipment. But this remarkably efficient PUE says nothing about what is done with all that power, or how much total work is accomplished. If all that power is going to servers that are switched on but essentially idling, not accomplishing any useful work, what does PUE really tell us? Actual efficiency in terms of doing real-world work could be nearly zero even when the PUE metric, in isolation, indicates a well-run datacentre.

Datacenter (Photo credit: Wikipedia)

Boiled down, what companies end up measuring with PUE is how efficiently they are moving electricity around within the datacentre.

By some estimates, many datacentres are actually only using 10-15% of their electricity to power servers that are actually computing something. Companies should minimize costs and energy use, but nobody invests in a company solely based on how efficiently they move electricity.
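Putting those two numbers together makes the point concrete: the fraction of facility power doing useful work is roughly the server utilisation divided by the PUE. A minimal sketch - the function name and the utilisation figures are illustrative, not measurements from any real facility:

```python
def effective_compute_efficiency(pue: float, utilisation: float) -> float:
    """Rough fraction of total facility power spent on useful computing work.

    pue: power usage effectiveness (total facility energy / IT energy), >= 1.0
    utilisation: fraction of IT power doing productive work, between 0 and 1
    """
    if pue < 1.0 or not 0.0 <= utilisation <= 1.0:
        raise ValueError("PUE must be >= 1.0 and utilisation within [0, 1]")
    # Only 1/PUE of facility power reaches the IT equipment, and only the
    # utilised fraction of that accomplishes real-world work.
    return utilisation / pue

# A 'world-class' PUE with mostly idle servers...
print(effective_compute_efficiency(pue=1.06, utilisation=0.12))  # ≈ 0.113
# ...is beaten by a mediocre PUE with busy servers.
print(effective_compute_efficiency(pue=1.5, utilisation=0.6))    # ≈ 0.4
```

On these illustrative figures, the "most efficient datacentre in the world" does less useful work per watt than an average facility whose servers are actually busy - which is why PUE alone is a poor proxy for productivity.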

Datacentres are built and maintained for their computing capacity, and for the business work that can be done with it. I recommend correlating computing and power efficiency metrics with the amount of useful work, and with customer or end-user satisfaction metrics. When these factors are optimised in a continuous fashion, true optimisation can be realised.

I've talked about addressing power and thermal challenges in datacentres for over a decade, and have seen progress made - recent statistics show a promising slowdown in datacentre power consumption rates in the US and Europe due to successful efficiency initiatives. Significant improvements in datacentre integration have helped IT managers control the different variables of a computing system, maximising efficiency and preventing over- or under-provisioning, both having obvious negative consequences.

An integrated approach to planning and managing datacentres enables IT to automate and optimise performance, power and component management, with the goal of efficiently balancing workloads, response times and resource utilisation with business changes. Just as the IT side analyses the relationships between the components of the stack - networking, server, compute and applications - the business side of the equation must always be an integral part of these analyses. Companies should always ask how much work they are accomplishing with the IT resources they have; unfortunately, that is often easier said than done. In the majority of datacentres and connected enterprises, the promise of continuous optimisation has not been fully realised, leaving lots of room for improvement.

As datacentres grow in size and capability, so must the tools used to manage them. Advanced analytics have become essential to bridging IT and business demands, starting with relatively simple correlative and descriptive methods and progressing through predictive to prescriptive approaches. Predictive analytics are uniquely suited to understanding the nonlinear nature of virtualised datacentre environments.

These advanced analytic approaches enable enterprises to combine IT and non-IT metrics in such a powerful way that the data generated by the networked computing stack can become the basis for automated and embedded business intelligence. In the most sophisticated scenarios, analytics and machine-learning algorithms can be applied in such a way that the datacentre learns from itself and generates insight and models for decision-making, approaching the level of artificial intelligence.

What's making Oregon the datacentre capital

avenkatraman | No Comments | No TrackBacks
| More

I am just back from Oregon, where I attended a workshop at Intel's Hillsboro campus. What amazed me most - apart from the delicious Peruvian cuisine I had in Portland, of course - is Intel's large presence in the area and the number of big datacentres in Oregon.

Intel is the biggest employer in the region and has multiple, vast campuses there. It even has its own airport in Hillsboro, Oregon, from where it operates regular flights to its Santa Clara headquarters for its employees. Several flights, each carrying up to 40 Intel employees, operate every day. The hotel I stayed at in Hillsboro told me that on any given day, about 70% of the people it serves are Intel-related.

Apart from Intel almost hijacking Oregon with its presence, the state is also home to many datacentre facilities. Facebook (its Prineville datacentre), Google (its first datacentre, The Dalles), Amazon (Boardman), Apple (also Prineville) and Fortune Datacentres (Hillsboro) all have large facilities in Oregon.

Here's why:

Cost:

One of the primary reasons many tech giants choose Oregon as the home for their datacentres is lower costs. Oregon has no sales tax, which means computer products, building materials and services are cheaper than elsewhere in the US. In addition, power - a main datacentre money-guzzler - is cheaper in Oregon. Furthermore, the local government lures tech giants with incentives such as tax breaks and subsidies. All these factors attract datacentre investment.

Prineville, Oregon (Photo credit: Wikipedia)

Talented workforce

Because of the region's tech culture, many professionals develop server management and virtualisation skills. The emphasis on IT skills in the universities, and Silicon Valley's investment in regular training workshops, make the area's workforce more talented and skilled in datacentre management.

Climate

Oregon's weather is comparatively mild, which makes the tricky task of datacentre cooling a little easier. It is simpler to devise cooling strategies for a facility when the ambient temperature does not vary widely. Oregon does not get baking hot like Texas or Kansas in summer, nor does it get overwhelmingly snowed under in winter.

Connectivity

The vast stretches of fibre-optic cable that run across Oregon's mountains, lakes and deserts provide fast connections and millisecond latency. The state's proximity to Silicon Valley is another draw for datacentre investment.

Geography, stability and security

Big cloud and IT service providers love political and economic stability and physical security, and Oregon gives them both. The region is not prone to natural disasters such as volcanic eruptions, earthquakes or hurricanes, which is another big attraction for datacentre builders. Take Iceland, for instance: despite its promise of 100% green geothermal energy and fibre-optic connections to mainland Europe, many IT providers hesitate to set up datacentres there because of its vulnerability to natural disasters.

Oregon has seismically stable soil and, as part of the west coast, little to no lightning risk - one of the major causes of outages in the US.

As Google, which opened The Dalles in 2006 by investing $1.2bn, says, Oregon has the "right combination of energy infrastructure, developable land, and available workforce for the datacentre".

I wonder what Oregon's equivalent in Europe would be?

AWS is not the only pretty one in the room anymore

avenkatraman | No Comments | No TrackBacks
| More

It may be too early to conclude that the party at AWS towers is over, but the cloud provider is definitely feeling the heat of competition and the commodity cloud price wars, its quarterly earnings report showed.

Amazon's net sales increased 23% to $19.34bn, but it reported a second-quarter net loss of $126m and warned that sales could slow in the current quarter. The Amazon business segment that includes AWS also saw growth drop to 38% year-over-year, after consistent growth rates of between 50% and 60% over the last two years.

Beautiful Bride Barbie - OOAK reroot (Photo credit: RomitaGirl67)

I still remember how Amazon founder Jeff Bezos, at the first ever (2012) AWS re:Invent conference in Vegas, said that a high-margin business is not the right one for AWS.

There is no incentive for businesses operating on high margins to be efficient, because they will make profits anyway, said Bezos.

"Operating a low-margin business is harder," he said, adding that the AWS business model is very similar to the retailer's Kindle business model - where the money is made not when the device is sold, but when people use it and keep buying services for it.

But the price cuts - which are becoming more frequent and deeper (up to 65% cheaper), and driven more by market forces than by internal decisions - are becoming its biggest problem. Since 2008, AWS has slashed cloud services prices 42 times.

AWS has led the public cloud price war almost over-zealously, but other behemoths, including Microsoft and Google, which have equally deep pockets, have been quick to undercut one another in the race to the bottom on cloud pricing.

Although the cloud market is still growing rapidly, AWS is finding that its share of the larger pie is shrinking, even while its user numbers are still growing. It looks like the growth is not enough to offset the price cuts - and this must be where the problems lie. Customers love discounts and price cuts, but investors don't.

"With Microsoft and Google apparently now serious about this market, AWS finally has credible competitors," says Gartner's public cloud expert Lydia Leong.

In May 2014, Synergy Research Group explained how Microsoft has grown its cloud infrastructure services "remarkably in the last year and is now pulling away from the pack of operators chasing Amazon".

"AWS is likely to continue to dominate this market for years, but the market direction is no longer as thoroughly in its control," Leong says.

AWS is no longer the only pretty one in the room. It is having to make space for Google Cloud Platform, Microsoft Azure, OpenStack and IBM SoftLayer, as well as for ferociously emerging players such as DigitalOcean and ProfitBricks.

Azure brings sunshine to Microsoft's lacklustre earnings. And how!

avenkatraman | No Comments | No TrackBacks
| More
Satya Nadella is going to be a happy man, as his "mobile-first, cloud-first" strategy is gathering momentum. Microsoft's cloud business has reported triple-digit YoY growth, the company's earnings report for Q4 ended June 30, 2014 showed.

Microsoft's commercial cloud revenue grew 147% with an annualised run rate that exceeds $4.4bn (£2.58bn) even as the company's overall profit was down 7%. 

"I'm proud that our aggressive move to the cloud is paying off," said chief exec Nadella.

Satya Nadella, Microsoft CEO (Photo credit: tecnomovida)


Other cloud highlights of the Azure provider's results included an 11% rise in revenue from Windows volume licensing sales and similar double-digit revenue growth for server products, including Azure, SQL Server and System Center.

Its Office 365 Home and Personal subscribers totalled more than 5.6 million, with more than 1 million subscribers added again this quarter.

 "We are thrilled with the tremendous momentum of our cloud offerings with Office 365 and Azure both growing over 100% again," said Kevin Turner, chief operating officer at Microsoft. 

 As Gartner's research vice president, Merv Adrian told me, "In what was clearly a well-planned posture of demonstrating his command of the whole portfolio, Nadella delivered a strong, visionary picture of Microsoft's 'Digital work and life experiences' stressing the power of its portfolio in enterprise offerings old and new."

"There was good news in enterprise business - from SQL Server, from 'All-up Dynamics' growth, with CRM nearly doubling, and with a commitment to expand Azure footprint and capacity, launch new services and deliver more hybrid cloud tiering," Adrian added.

While the cloud offered a ray of sunshine to the company's earnings, Microsoft blamed the Nokia acquisition for the dent in its profits.

Microsoft's profit for the quarter from March to June 2014 was $4.6bn (£2.7bn), compared with $4.97bn for the same period last year. The company said the Nokia division, the acquisition of which it completed in April, lost $692m.

Last week, Microsoft said it will cut 18,000 jobs - more than 12,000 of them related to the Nokia phone business alone. This "restructuring plan to streamline and simplify its operations" is the most severe job cut in the company's 39-year history.

Microsoft laid claim to impressive cloud revenues even in the first quarter of 2014, with analysts insisting that the software giant was "now pulling away from the pack of operators chasing Amazon".

AWS was the lone leader in Gartner's Magic Quadrant until June this year, when Microsoft joined its arch-rival in the Leaders quadrant. AWS is beginning to face significant competition from Microsoft in the traditional business market, and from Google in the cloud-native market, noted Gartner analysts Leong, Douglas Toombs, Bob Gill, Gregor Petri and Tiny Haynes.

The biggest takeaway from Microsoft's earnings announced today is that it is indeed crushing it in cloud sales and riding the cloud momentum.

Cloud-first? Cabinet Office seeks £700m datacentre partner for 'top secret' data

avenkatraman | 1 Comment | No TrackBacks
| More

The Cabinet Office and GDS (Government Digital Service) have issued a service contract notice seeking a private partner that can provide datacentre colocation services to handle UK government information classified as "official", "secret" and "top secret".

The government has earmarked up to £700m for the four-year datacentre infrastructure agreement.

"The operating environment is to be capable of housing computer infrastructure that initially handles information with UK Government security classification 'official' but there may be a future requirement for Data Centre Colocation Services that handle information with 'secret' and 'top secret' security classification," the government document read. "The provision of secret and top secret [information] would be subject to separate security accreditation and security classification," it added.

The facilities partner must be able to subscribe for a majority shareholding (up to 75% less one share) in DatacentreCo, the new private limited company established by the Cabinet Office to provide datacentre colocation services.

But under the government's Cloud First policy, many existing and new applications will move to the public cloud over the next few years. The policy, announced last year, mandates cloud as the first choice for all new IT purchases in government.

The new potentially £700m datacentre will host 'legacy' applications "not suitable or not ready for cloud hosting or for which conversion to cloud readiness would be uneconomic," the document read.


Cabinet Office, 70 Whitehall, London (next to Downing Street) (Photo credit: Wikipedia)

The Cabinet Office wants the full spectrum of datacentre services - rack space, power facilities, network and security. The datacentre hosting the official and secret information will be spread across an area of 350 sq metres, hosting 150 standard 42U racks. This sounds like a modular datacentre requirement.

And it wants "at least two separate [facility] locations subject to appropriate minimum separation requirements".  

Also on the government wish-list are datacentre compliance with security requirements, scalability, a proven track record over the last three years, performance certificates and specific latency requirements (less than 0.5 milliseconds) - to cater to the needs of the initial users: the Department for Work and Pensions, the Home Office and the Highways Agency.

The main aim is to have a datacentre facility that offers a high-quality, efficient, scalable, transparent, service-based ('utility') model - basically cloud-like, but not the cloud.

How long do you reckon we'll have to wait before the government declares "serious over-capacity in datacentres" like it did in 2011?

Cloud's Hollywood moment - as a villain in Cameron Diaz's Sex Tape

avenkatraman | No Comments | No TrackBacks
| More

For those still wondering if cloud computing is really mainstream - even Hollywood thinks so. Cameron Diaz's rom-com Sex Tape, released next Friday, is all about the dangers of the cloud.


Cameron Diaz (Photo credit: Wikipedia)

The movie stars Diaz and Jason Segel as a couple making a sex tape in an attempt to spice up their boring lives. The video inevitably makes it to the cloud through Segel's iPad, on which it was filmed. The movie tracks how the couple desperately try to get the video off the cloud while embarrassingly juggling comments from their parents, bosses and even the mailman, who all see it.

Here's some of the dialogue between Diaz (as Annie) and Segel (as Jay):

Annie: (walks in) Honey, that sounds familiar, is that our...

Jay: You know the Cloud?

Annie: Stares ominously before yelling F@#$.

Jay: It went up! It went up to the cloud

Annie: And you can't get it down from the cloud?

Jay: Nobody understands the cloud. It's a f#$@ing mystery.

Whether or not they succeed in wiping their content off the cloud, we'll know only on 18 July. But it looks like a big struggle, with Jay and Annie taking desperate measures such as nicking devices belonging to their friends and families, and even breaking network infrastructure to get the tape off the cloud.

Maybe Jay and Annie are showing, in a satirical manner, how the cloud is a one-way street - easy to get content up (even inadvertently), but damn hard to get it off!

Here's the trailer of Sex Tape starring Cameron Diaz, Jason Segel and The Cloud: