Cloud 28+: What HP must do to win over the cloud provider sceptics

cdonnelly

Boosting the take-up of cloud services across Europe has been the mission statement of both public sector and commercial organisations for several years now.

From the latter point of view, HP has been actively involved in this since the formal launch of its Cloud 28+ initiative in March 2015, which aims to provide European companies of all sizes with access to a federated catalogue that they can use to buy cloud services.

If you're thinking this sounds spookily like the UK government's G-Cloud public sector-focused procurement initiative, you would be right. The key principles are more or less the same, except the use of Cloud 28+ isn't limited to government departments or local authorities. It's open to all.

That message - during the two years that HP has been talking up its efforts in this area - doesn't seem to have reached everyone, though, particularly the providers one would assume would be a good fit for it.

Namely, the members of the G-Cloud community, who are already well-versed in how a setup like Cloud 28+ operates, and what is required to win business through it.

However, several key participants in the government procurement framework have privately expressed misgivings to Ahead In the Clouds about whether HP would welcome their involvement because they don't use its technologies to underpin their services.

Similarly, some said they weren't sure how they felt about hawking their cloud wares through an HP-branded catalogue, or whether it would mean sharing details of the deals they do through Cloud 28+ with the firm.

The latter has been a long-held concern of cloud resellers, because - once the maker of the service you're reselling access to knows who's buying it - what's to stop them from cutting you out and dealing with the customer directly?

HP assurance

HP seemed intent on addressing all these points during its Cloud 28+ in Action event in Brussels earlier this week, which saw the firm take steps to almost distance itself from the initiative it is supposed to be spearheading.

As such, there were protestations on stage from Xavier Poisson, EMEA vice president of HP Converged Cloud, about how Cloud 28+ belongs to the providers that populate its catalogue, not to HP, and how its future will be influenced by participants.

The attitude seems to be, while HP may have had a hand in inviting people to the Cloud 28+ party, it's not going to dictate who should be invited, the tunes they should dance to or what food gets served. It's simply providing a venue and directing people how to get there, before letting everyone get on with enjoying the revelry.  

From a governance point of view, it won't be HP calling the shots. That will be the job of a new, independent Cloud 28+ board, which made its debut at the event.

On the topic of billing, the firm made a point of saying users won't be able to pay for services through Cloud 28+, and that it will - instead - rely on third parties to handle the payment and settlement side of using the catalogue.

For those worried that being a non-user of HP technologies could preclude them from Cloud 28+, the news wasn't so good.

It emerged that providers will have one year from joining Cloud 28+ to ensure the applications they want to sell through the catalogue run on the Helion-flavoured version of OpenStack. The move, HP said, is designed to guard users against the risk of vendor lock-in.

Even so, given the firm spent the majority of the event trying to play down its role in the initiative, it's a stipulation that might leave an odd taste in the mouths of some would-be participants and users, especially in light of the uncertainty over just how open vendor-backed versions of OpenStack truly are.

HP said this is an area that could be reviewed further down the line by the Cloud 28+ governance board, but it will be interesting to see (once the initial hype around its launch dies down) if this emerges as a turn-off for some potential participants.

Opening up Europe for business

Admittedly, it would be short-sighted of them to dismiss joining Cloud 28+ out of hand on that basis, in light of the opportunities it could potentially open up for them to do business across Europe.

While the European Commission has stopped short of endorsing the initiative, it has acknowledged what Cloud 28+ is trying to do shares some common ground with its vision to create a Digital Single Market (DSM) across Europe, and might be worth paying attention to.

If Cloud 28+ emerges as the preferred method for the enterprise to procure IT, once the preparatory work to deliver the DSM is complete, for example, the Helion OpenStack requirement would pale in significance to the amount of business participants could gain through it.

Measuring the success of Cloud 28+

While Cloud 28+ is still under construction, it's only right the focus has been on the provider side of things, because - without them - there is no service catalogue.  

But it's what end users make of Cloud 28+ that will define its long-term success, despite HP's repeated boasts about how many providers have signed up to date (110 and counting).

HP is preparing to go live with Cloud 28+ in early December at its Discover event in London, and Poisson said the "client-side" of it will become a bigger focus after that, so it's likely we'll hear some momentum announcements around end user adoption in the New Year.

But, until there is a sizeable amount of business transacted through the catalogue, or some other form of demonstrable end user interest in it, there will remain a fair few providers who won't get why it's worth their while to join.

Using big data to uncover the secrets of enterprise datacentre operations

cdonnelly

In this guest post, Frank Denneman, chief technologist of storage management software vendor PernixData, sets out why datacentre management could soon emerge as the main use case for big data analytics.

IT departments can sometimes be slow to recognise the power they wield, and the rise of cloud computing is a great example of this.

Over the last three decades, IT departments focused on assisting the wider business by automating activities that could increase output or refine the consistency of product development processes, before turning their attention to the automation of their own operations.

The same needs to happen with big data. A lot of organisations have looked to big data analytics to discover unknown correlations, hidden patterns, market trends, customer preferences and other useful business information. 

Many have deployed big data systems, leaving end users to look for hidden patterns between the new workloads and the resources they consume within their own datacentre, and to see how this impacts current workloads and future capabilities.

The problem is that virtual datacentres are made up of a disparate stack of components, and every system logs and presents whatever data its vendor deems appropriate.

Unfortunately, variations in the granularity of information, time frames, and output formats make it extremely difficult to correlate data and understand the dynamics of the virtual datacentre.

However, hypervisors are very context-rich information systems, and are jam-packed with data ready to be crunched and analysed to provide a well-rounded picture of the various resource consumers and providers. 

Having this information at your fingertips can help optimise current workloads and identify systems better suited to host new ones. 

Operations will also change, as users are now able to establish a fingerprint of their system. Instead of micro-managing each separate host or virtual machine, they can monitor the fingerprint of the cluster. 

For example, how have incoming workloads changed the cluster's fingerprint over time? Answering that paves the way for deeper trend analysis of resource usage.

Information like this allows users to manage datacentres differently and - in turn - design them with a higher degree of accuracy. 
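To make the idea concrete, here is a minimal sketch of how per-VM hypervisor statistics might be rolled up into a single cluster fingerprint that can be trended over time, rather than watching every host individually. The metric names and data source are hypothetical placeholders, not any vendor's actual schema.

# A minimal sketch, with hypothetical metric names and data, of rolling
# per-VM hypervisor stats up into a single cluster "fingerprint".
from statistics import mean, quantiles

def cluster_fingerprint(vm_metrics):
    """vm_metrics: a list of dicts such as {"cpu": 0.42, "mem": 0.67, "iops": 310}."""
    fingerprint = {}
    for key in ("cpu", "mem", "iops"):
        values = [m[key] for m in vm_metrics]
        fingerprint[key + "_mean"] = mean(values)
        fingerprint[key + "_p95"] = quantiles(values, n=20)[-1]  # ~95th percentile
    return fingerprint

# Comparing this week's fingerprint with last week's shows how incoming
# workloads have shifted the cluster as a whole, e.g.:
# drift = {k: this_week[k] - last_week[k] for k in this_week}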

The beauty of having this set of data all in the same language, structure and format is that it can now start to transcend the datacentre. 

The dataset gleaned from each facility can be used to manage the IT lifecycle, improve deployment and operations, and optimise existing workloads and infrastructure, leading to better future designs. But why stop there? 

Combining datasets from many virtual datacentres could generate insights that can improve the IT-lifecycle even more. 

By comparing facilities of the same size, or datacentres in the same vertical market, it might be possible to develop an understanding of the TCO of running the same VM on a particular host system, or storage system. 

Alternatively, users may also discover the TCO of running a virtual machine in a private datacentre versus a cloud offering. And that's the type of information needed in modern datacentre management. 
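As a rough illustration of that kind of comparison, the back-of-the-envelope sketch below works out a per-VM monthly cost for owned kit versus an always-on cloud instance. Every figure and cost category is a placeholder; a real TCO model would also cover licences, staff time, networking and more.

# A rough, hypothetical comparison; all figures below are placeholders.
def on_prem_monthly_cost(server_price, vms_per_server, amortisation_months=36,
                         power_and_cooling=90.0, admin_share=40.0):
    """Approximate monthly cost per VM on owned hardware."""
    hardware = server_price / amortisation_months
    return (hardware + power_and_cooling + admin_share) / vms_per_server

def cloud_monthly_cost(hourly_rate, hours_per_month=730, storage=15.0):
    """Approximate monthly cost of one always-on cloud instance."""
    return hourly_rate * hours_per_month + storage

private = on_prem_monthly_cost(server_price=6000, vms_per_server=20)
public = cloud_monthly_cost(hourly_rate=0.10)
print("per-VM monthly cost: private ~%.2f, cloud ~%.2f" % (private, public))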

The enterprise benefits of making machine learning tools accessible to all

cdonnelly
In this guest post, Mike Weston, CEO of data science consultancy Profusion, discusses how Amazon's cloud-based push to democratise machine learning is set to benefit the enterprise.

Machine learning is the creation of algorithms that can interrogate and make predictions based on the contents of big data sets without needing to be rewritten for each new set of information. In a sense, it's a form of artificial intelligence.
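By way of illustration only (this is not Amazon's service, just a generic sketch using the open-source scikit-learn library, with a made-up file name and target column), the same few lines of training code can be pointed at different datasets without being rewritten:

import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def train_predictor(csv_path, target_column):
    """Fit a generic classifier on any tabular dataset with numeric features."""
    data = pd.read_csv(csv_path)
    features = data.drop(columns=[target_column])
    labels = data[target_column]
    x_train, x_test, y_train, y_test = train_test_split(
        features, labels, test_size=0.2, random_state=42)
    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    model.fit(x_train, y_train)
    print("held-out accuracy: %.2f" % model.score(x_test, y_test))
    return model

# The same function could be pointed at, say, "customer_churn.csv" with target
# column "churned", or at a sensor dataset with a different target - no rewrite needed.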

The recent European launch of Amazon's machine learning platform has garnered a lot of attention; the service is designed so non-techies can use these tools to create predictions based on data.

Amazon's move follows Facebook's launch of a 'deep learning' lab in France to undertake research into artificial intelligence, particularly facial recognition. Both tech giants will compete with Microsoft's Azure machine learning service. 

Clearly, most major tech companies are pitching their tent in the data science camp. The reason is quite simple: demand. 

Data science is quickly moving from a niche service used by a few enterprises to a must-have. Many business leaders are waking up to the fact that new technologies like self-driving cars, the Internet of Things, smart cities and wearable devices are all powered or complemented by data science. 

The business case for using data science techniques in areas such as retail, logistics and marketing is also increasingly easy to prove. Consequently, data scientists are in demand like never before. Unfortunately, as many data scientists will tell you, their skills are still fairly rare - part computer scientist, part statistician. We're all aware that there is an acute skills gap in the technology sector and in many ways data scientists are the poster child. 

With demand increasing for data science and the pool of data science talent struggling to keep up with it, tech giants like Amazon are naturally seeking to provide non-techies with the skills needed to do it themselves. 

It may sound counterintuitive for the CEO of a data science consultancy to welcome this move, but I'm a firm believer that data science has immense power to improve businesses, cities and people's lives in general. 

If more people understand how to interrogate and use data to make informed decisions, the faster it will become an intrinsic part of how all businesses operate. Not only that, but the more repeatable tasks that can be undertaken by technology, the more time is freed up for data scientists to explore the information at their disposal more deeply and to innovate.

Addressing the big data skills gap
With the normalisation of data science as a business process or service, it should become more obvious and attractive for people to train in these techniques. This should eventually help plug the skills gap. 

Of course, the growth of data science platforms in Europe and the US won't, in the short-term, create an army of do-it-yourself data scientists capable of everything. Self-service software can only bring you so far. A great data scientist adds value to the data through analysis and interpretation - through asking 'why' and 'so what'. 

Highly-skilled data scientists are fundamental to the more complicated data science - uncovering profound insights from seemingly disparate data that radically change and improve how organisations relate to people. 

Nevertheless, the more data literate we all become, the better we will be at both using data and asking the right questions. Businesses generally don't suffer from a lack of data. The problem tends to be that those in decision-making positions do not understand what the data could reveal and therefore what problems could be solved. This means that a business can underestimate the knowledge it holds, fail to exploit all its sources of data, or fail to share information with people who could make better use of it. 

Businesses that understand data science and can use self-service platforms and tools to undertake basic actions will become savvier at collecting, managing and analysing data. With experience should come an understanding of the full potential of data science and a willingness to experiment. 

Amazon's self-service platform is not in and of itself going to create a revolution in data science. However, it represents the growth in businesses seeking to empower themselves to make better use of the information they hold. 

Like any science, data science is at its most exciting when it is testing the limits of what is possible. By experimenting, repeating and refining techniques, data science becomes much more effective. 

Whether a business employs its own data scientists or gets outside help, the more these specialists work with a company, the more they understand, the better they become at creating insights and solutions, and the more value a business can extract from its data.

VDI: Why desktop virtualisation has finally come of age

cdonnelly
In this guest post, David Angwin, marketing director for Dell Cloud Client Computing, claims the benefits of desktop virtualisation now far outweigh the risks.

Desktop virtualisation (VDI) is a technology that has never been fully appreciated, despite promising benefits such as lower maintenance costs, greater flexibility and increased reliability.  

Many companies have taken advantage of server and storage virtualisation over the years, but desktops have been overlooked, and physical desktops remain the norm. 

While organisations are willing to invest heavily in virtualised back-end infrastructure, they may feel VDI will not provide much additional value, or that the drawbacks and risks outweigh the benefits. But this is not the case. 

Principles of desktop virtualisation
It is often thought VDI is about creating multiple virtual desktops on one device, but the reality is that the user's desktop profile is stored on the host server and then optimised for the local device the user is logging on from. This ensures they experience a desktop tailored to that device, allowing for a consistent experience. 

Many companies have deployed various access devices to consume VDI and are reaping the benefits. These include:

Thin Client: This is where all processing power and storage is in the datacentre, and is a very cost effective way of delivering desktops and applications to a mass audience as the devices are relatively low cost and typically use much less energy compared with standard desktops. 

Cloud PC: This is essentially a PC without a hard drive that offers full performance and is a good fit for organisations running a small datacentre. The operating system is sent to the PC from the server when the user requests a log on. 

Zero Clients: A zero client is designed for use on networks with a virtualised back-end infrastructure, and is able to offer all of the benefits of thin clients, but with added compute power. 

With the right client and back-end infrastructure, zero clients can help to optimise working conditions and cut IT running costs, as there is less equipment on the desk. 

Desktop Virtualisation Benefits
VDI does more than provide a low-cost desktop to a mass of users; it can also help create new business opportunities in the following areas.

•Remote working: VDI enables organisations to work securely with companies in different locations around the world. By setting up remote workers on the network, users can access data without putting it at risk, reducing the potential for data theft, corruption or loss, as the data does not leave the datacentre. 
•Business agility: With faster access to data, organisations are able to react intelligently to changing market conditions. 
•Windows migrations: Physical desktop set-ups can create challenges for IT departments when new operating systems are released. Traditionally, IT administrators needed to visit each desktop in the organisation to make the relevant updates. However, with VDI, organisations have the opportunity to reduce this cost and time, as the network can be updated centrally, meaning software patches and OS upgrades can be simplified. 

VDI brings end users and organisations a wide range of benefits, including ongoing cost savings and easier compliance. Companies in all business sectors can realise a stable and positive return on investment, while providing a desktop environment that gives users quick, easy and secure access to everything on the network to enable productivity.

The invisible business: Mobile plus cloud

cdonnelly

In this guest post Amit Singh, president of Google for Work, explains why enterprises need to start adopting a mobile- and cloud-first approach to doing business if they want to remain one step ahead of the competition.

One of the most exciting things happening today is the convergence of different technologies and trends. In isolation, a trend or a technological breakthrough is interesting, at times significant. But taken together, multiple converging trends and advances can completely upend the way we do things.

Netflix is a classic example. It capitalised on the widespread adoption of broadband internet and mobile smart devices, as well as top-notch algorithmic recommendations and an expansive content strategy, to connect a huge number of people with content they love. The company just announced that it has more than 65 million subscribers.

Other examples of new and improved approaches to existing problems abound. As Tom Goodwin, SVP of Havas Media, said recently: "Uber, the world's largest taxi company, owns no vehicles. Facebook, the world's most popular media owner, creates no content. Alibaba, the most valuable retailer, has no inventory. And Airbnb, the world's largest accommodation provider, owns no real estate. Something interesting is happening."

Each of these companies has capitalised on a convergence of various trends and technological breakthroughs to achieve something spectacular.

Some of the factors I see driving change include exponential technological growth and the democratisation of opportunity, as well as the emergence of public cloud platforms that are fast, secure and easy to use. Together, these trends underpin a powerful formula for rapid business growth: mobile plus cloud.

We know the future of computing is mobile. There are 2.1 billion smartphone subscriptions worldwide, and that number grew by 23% last year.

We spend a lot of time on our mobile devices. Since 2014, more internet traffic has come from mobile devices than from desktop computers. Forward-looking companies are building mobile-first solutions to reach their users and customers, because that's where we all are.

On the backend, the cost of computing has been dropping exponentially, and now anyone has access to massive computing and storage resources on a pay-as-you-go basis because of the cloud. Companies can get started by hosting their data and infrastructure in the cloud for almost nothing.

Hence mobile plus cloud. You can use mobile platforms to reach customers while powering your business with cloud computing. You can build lean and scale fast, and benefit automatically from the exponential growth curve of technology.

As computing power increases and costs decrease, cloud platforms grow more capable and the mobile market expands. In this state, technological change is an opportunity.

How cloud challenges the incumbents to think different

Snapchat is one of the best examples of how this can work. It was founded in 2011. The team used Google Cloud Platform for their infrastructure needs and focused relentlessly on mobile. Just four years later, Snapchat supports more than 100 million active users per day, who share more than 8,000 photos every second.

The mobile plus cloud formula is exciting, but it also poses challenges for established players. According to a study by IBM, some companies spend as much as 80% of their IT budgets on maintaining legacy systems, such as onsite servers.

For these companies, technological change is a threat. Legacy systems don't incorporate the latest performance improvements and cost savings. They aren't benefitting from exponential growth, and they risk falling behind their competitors who are.

This can be daunting, since it's not realistic for most companies to make big changes overnight.

If you run a business with less than agile legacy systems, here's one practical way to respond to the fast pace of technological change: foster an internal culture of experimentation.

The cost of trying new technologies is very low, so run trials and expand them if they produce results. For example, try using cloud computing for a few data analysis projects, or give a modern browser to employees in one department of the company and see if they work better.

There are no "one size fits all" solutions, but with an open mind, smart leaders can discover what works best for their team.

It's important to try, especially as technology becomes more capable and more of the world adopts a mobile plus cloud formula. Those who experiment will be best placed to capitalise on future convergences.

Uber's success suggests enterprises need to think like startups about cloud

cdonnelly
Cloud-championing CIOs love to bang on about how ditching on-premise technologies helps liberate IT departments, as it means they can spend less time propping up servers and devote more to developing apps and services that will propel the business forward. 

It's a shift that, when successfully executed, can help make companies more competitive, as they're nimbler and better positioned to quickly respond to market changes and evolving consumer demands. 

But it takes time, with Gartner analyst John-David Lovelock telling Computer Weekly this week that companies take at least a year to get up and running in the cloud from having first considered taking the plunge. 

"It takes companies about 12 months to say, 'this server is more expensive or this storage array is too expensive so we should go for Compute-as-a-Service or Storage-as-a- Service instead'," he said. 

"Making that shift within a year is not something they can traditionally do if they weren't already on the path to the cloud." 

Future development 
Companies preparing to make such a move can't afford to be without a top-notch team of developers, if they're serious about capitalising on the agility benefits of cloud, according to Jeff Lawson, CEO of cloud communications company Twilio. 

"Every company has to think of themselves as software builders or they will probably become irrelevant. Companies are building software and iterating quickly to create great experiences for customers, and they're going to out-compete those that aren't," he told Computer Weekly. 

Lawson was in London this week to support his San Francisco-based company's European expansion plans, which have already seen Twilio invest in offices in London, Dublin and Estonia. 

In fact, the company claims to have signed up 700,000 developers across the globe, and that one in five people across the world have already interacted with an app featuring its technology. 

The firm's cloud-based SMS and voice calling API is used by taxi-hailing app Uber to send alerts out to customers when the drivers they've booked are nearby, for example, and similarly by holiday accommodation listing site, AirBnB. 
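For a sense of what that looks like in practice, here is a minimal sketch using Twilio's Python helper library; the account credentials and phone numbers are placeholders, and the alert wording is purely illustrative rather than anything Uber actually sends.

# A minimal sketch using Twilio's Python helper library; the account SID, auth
# token and phone numbers below are placeholders, and the alert text is made up.
from twilio.rest import Client

client = Client("ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX", "your_auth_token")
message = client.messages.create(
    to="+447700900123",        # the customer's number (placeholder)
    from_="+441134960000",     # a provider-supplied number (placeholder)
    body="Your driver is two minutes away.")
print(message.sid)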

Both these companies are regularly lauded by the likes of Amazon Web Services, EMC and Google because they're both popular services that are said to be run exclusively on cloud technologies. 

Neither has to suffer the burden of having weighty, legacy technology investments eating up large portions of their IT budgets. For this reason, enterprises should be looking at them for inspiration about how to make their operations leaner, meaner and more agile, it's often said. 

That Uber and AirBnB have seemingly become household names almost overnight highlights - to a certain extent - why the move to cloud is something the enterprise can't afford to put off. 

Simply because, in the time it takes them to get there, a newer, nimbler, born-in-the-cloud competitor might have made a move on their territory and it may be harder to outmanoeuvre them with on-premise technologies.

What the enterprise can learn from Google's decision to go "all-in" on cloud

cdonnelly

Google has spent the best part of a decade telling firms to ditch on-premise productivity tools and use its cloud-based Google Apps suite instead. So, the news that it's moving all of the company's in-house IT assets to the cloud may have surprised some.

Surely a company that spends so much time talking up the benefits of cloud computing should have ditched on-premise technology years ago, right?

Not necessarily, and with so many enterprises wrestling with the what, when and how much questions around cloud, the fact Google has only worked out the answers for itself now is sure to be heartening stuff for enterprise cloud buyers to hear.

Reserving the right

The search giant has been refreshingly open in the past with its misgivings about entrusting the company's corporate data to the cloud (other people's clouds, that is) because of security concerns.

Instead, it has preferred employees to use its own online storage, collaboration and productivity tools, and has shied away from letting them use services that could potentially send sensitive corporate information to the datacentres of its competitors.

This was a view the company held as recently as 2013, but now it's worked through its trust issues, and made a long-term commitment to running its entire business from the cloud.

So much so, the firm has already migrated 90% of its corporate applications to the cloud, a Google spokesperson told the Wall Street Journal.

What makes this really interesting is the implications this move has for other enterprises. If a company the size of Google feels the cloud is a safe enough place for its data, surely it's good enough for them too?

Particularly as Google has overcome issues many other enterprises may have grappled with already (or are likely to) during their own move to the cloud.

Walking the walk

What the Google news should do is get enterprises thinking a bit more about how bought-in to the idea the other companies whose cloud services they rely on really are.

While they publicly talk up the benefits of moving to the cloud, and why it's a journey all their customers should be embarking on, have they (or are they in the throes of) going on a similar journey themselves?

If not, why not, and why should they expect their customers to do so? If they are (or have), then talk about it. Not only will doing so add some much-needed credibility to their marketing babble, but it will also show customers they really do believe in cloud, and aren't just talking it up because they've got a product to sell.

Did you believe in any of these cloud computing myths?

avenkatraman

Myths and misunderstandings around the use and benefits of cloud computing are slowing down IT project implementations, impeding innovation, inducing fear and distracting enterprises from yielding business efficiency and innovation, analyst firm Gartner has warned.

It has identified the top ten common misunderstandings around cloud:

Myth 1: Cloud is always about the money

Assuming that the cloud always saves money can lead to career-limiting promises. Saving money may end up being one of the benefits, but it should not be taken for granted. It doesn't help that the big daddies of the cloud world - AWS, Google and Microsoft - are tripping over each other to cut prices. But cost savings must be seen as a nice-to-have benefit, while agility and scalability should be the top reasons for adopting cloud services.

Myth 2: You have to do cloud to be good

According to Gartner, this is the result of rampant "cloud washing." Some cloud washing is based on a mistaken mantra (fed by hype) that something cannot be "good" unless it is cloud, a Gartner analyst said.

Besides, enterprises are billing many of their IT projects as cloud to tick a box and to secure funding from stakeholders. People are falling into the trap of believing that if something is good, it has to be cloud.

There are many use cases where cloud may not be a great fit - for instance, if your business does not experience many peaks and lulls, then cloud may not be right for you. Also, for enterprises in heavily regulated sectors, or those operating within strict data protection regulations, a highly agile datacentre within IT's full control may be the best bet.

Myth 3: Cloud should be used for everything

Related to the previous myth, this refers to the belief that the characteristics of the cloud are applicable to everything - even legacy applications or data-intensive workloads.

Unless there are cost savings, a legacy application that doesn't change is not a good candidate for moving to the cloud.

Myth 4: "The CEO said so" is a cloud strategy

Many companies don't have a cloud strategy and are doing it just because their CEO wants them to. A cloud strategy begins by identifying business goals and mapping the potential benefits of the cloud to them, while mitigating the potential drawbacks. Cloud should be thought of as a means to an end. The end must be specified first, Gartner advises.

Myth 5: We need one cloud strategy or one vendor

Cloud computing is not one thing, warns Gartner. Cloud services include IaaS, SaaS and PaaS models, and cloud types include private, public and hybrid clouds. Then there are applications that are only the right candidates for one type of cloud. A cloud strategy should be based on aligning business goals with potential benefits. Those goals and benefits are different in various use cases and should be the driving force for businesses, rather than standardising on one strategy.

Myth 6: Cloud is less secure than on-premises IT

Cloud is widely perceived as less secure, but to date there have been very few security breaches in the public cloud - most breaches continue to involve on-premises datacentre environments.

Myth 7: Cloud is not for mission-critical use

Cloud is still mainly used for test and development. But the analyst firm notes that many organisations have progressed beyond early use cases and are using the cloud for mission-critical workloads. There are also many enterprises (such as Netflix or Uber) that are "born in the cloud" and run their business completely in the cloud.

Myth 8: Cloud = Datacentre

Most cloud decisions are not (and should not be) about completely shutting down datacentres and moving everything to the cloud. Nor should a cloud strategy be equated with a datacentre strategy. In general, datacentre outsourcing, datacentre modernisation and datacentre strategies are not synonymous with the cloud.

Myth 9: Migrating to the cloud means you automatically get all cloud characteristics

Don't assume that "migrating to the cloud" means the characteristics of the cloud are automatically inherited from lower levels (like IaaS), warned Gartner. Cloud attributes are not transitive. Distinguish applications hosted in the cloud from true cloud services. There are "half steps" to the cloud that have some benefits (there is no need to buy hardware, for example) and these can be valuable. However, they do not provide the same outcomes.

Myth 10: Private cloud = Virtualisation

Virtualisation is a cloud enabler, but it is not the only way to implement cloud computing, and nor is it sufficient on its own. Even if virtualisation is used (and used well), the result is not cloud computing. This is most relevant in private cloud discussions, where highly virtualised, automated environments are common and, in many cases, are exactly what is needed. Unfortunately, these are often erroneously described as "private cloud", according to the analyst firm.

"From a consumer perspective, 'in the cloud' means where the magic happens, where the implementation details are supposed to be hidden. So it should be no surprise that such an environment is rife with myths and misunderstandings," said David Mitchell Smith, vice president and Gartner Fellow. 

How Ucas keeps downtime away with disaster recovery strategies

avenkatraman

Business continuity is often perceived as a concept only followed by the biggest of big businesses, but the reality is that the need for it, and the corresponding services, increasingly underpin everyday life. An invisible safety net making sure important everyday events continue - no matter what - is crucial for all verticals. And education is no exception.

In this guest blogpost, Mike Osborne, school governor and head of business continuity at Phoenix IT, talks about the importance of business continuity for Ucas.

During the last few weeks, despite the fact that students now have to pay much higher fees for studying, we have seen more people than ever applying for higher education. An extra 30,000 new places were created this year. This has made the competitive battle between universities even more intense as they fight to secure the best students, especially over the clearing period.

For both the Universities and Colleges Admissions Service (Ucas) and universities, the clearing and application periods are a time when the availability and function of their operations are most visible, not just to students and their parents but also to the government and the media.

In 2011, both universities and students experienced massive problems with the Ucas online system during the clearing and application periods. This year, it's more important than ever for Ucas, the universities and students that there are no system disruptions, so students can get the offers they need in a timely fashion and universities can fill their spaces.

Until 20 September, when the clearing vacancy search closed, Ucas was put to the test as thousands of students scrambled to get an offer through the clearing system. According to Ucas, some 20,000 applicants were placed at a university or college through clearing on the first weekend after A-level results were announced last year. Considering the critical nature of this period, it's essential that the admissions agency (Ucas) and universities have ICT and call centre resources operating effectively, without interruptions affecting operations.

ICT and call centre systems are vulnerable to a variety of service disruptions, ranging from severe disasters (fire, for example) to mild ones (short-term software glitches, or power or communications loss). Universities and Ucas are now putting in place robust ICT contingency plans, such as workplace business continuity and cloud-based DRaaS (disaster recovery as a service), to ensure that information processing systems and student data critical to the university are maintained and protected against relevant threats, and that the organisation has the ability to recover systems in a timely and controlled manner.

With many mid-market companies also seeing the potential of disaster recovery using cloud technology, it's not surprising that universities and Ucas are spending more time, money and effort on implementing DRaaS plans. DRaaS allows data to be stored securely offsite and, if the right service is selected, can also provide near-instantaneous system and network recovery.

When added to Call Centre recovery services as part of a Business Continuity Plan, DRaaS offers a convenient and cost effective solution.

With the government and the Higher Education Funding Council for England (Hefce) imposing fines on institutions for over-recruitment, and with stores of student data, including unique research projects, growing, it is more essential than ever for universities and Ucas to keep system downtime to a minimum.

Picking the cloud service that's right for you

avenkatraman

Organisations tend to fall into one of two camps today: those that are already planning and eventually implementing a cloud strategy, and those that will be doing so soon. But the options companies are faced with are dizzying, often contradictory, and usually dangerously expensive. So what's the best way for organisations to find the ideal cloud service for their specific needs?

Determining what is needed from the cloud will drive what platform organisations should deploy on. Considerations like budget, expected performance and project timeline all have to be carefully balanced before plunging ahead. Broadly speaking, the platform options range from using someone else's public cloud, such as AWS, to building your own private cloud from scratch. Where an organisation lands on that spectrum will be driven by how it ranks the primary factors involved.

In this guest blogpost, Christopher Aedo, chief product architect at Mirantis, explains how to evaluate cloud requirements and pick the right platform.

In essence there are seven key factors to address that will help businesses clarify what really matters and enable them to establish their individual cloud requirements. These are:

Control: How much control do you have over the environment and hardware? Make sure the cloud platform you select delivers the level of control you require. 

Deployment Time: How long before you need to be up and running? How much time will you burn just sorting out, ordering, racking and provisioning the hardware? It is critical that the cloud platform you choose can be deployed in the right amount of time.

Available Expertise: Can your single IT staff member handle the project, or do you need a team of experts poached from the biggest cloud makers? Choose a cloud platform that matches the expertise you have available - or can afford to bring in.

Performance: In a single server there are many components impacting performance - from the memory bus to your NIC and everything in between. Performance directly correlates with budget - a larger budget will usually buy greater performance. However, there is no reason a smaller budget can't deliver high performance, providing you select the right option.

Scalability: Your platform of choice should accommodate adding, or reducing, capacity quickly and easily. Will your chosen platform require downtime to scale up or down or can it be executed seamlessly?  

Commitment: From no contract "utility pricing" to the long term investment of owning all your gear - the longer you're tied up, the greater the risk.

Cost: This may be the most important and most difficult factor to account for. You can see it as an output from your other factors, or as your ultimate limiter, dictating where you'll make concessions. There are definitely some good ways to maximise your dollar while minimising your risk, as long as you keep your head up and your eyes open.

By addressing these factors early on in the process of implementing a cloud-based solution, you will save yourself time, resource and budget in the long run. However, having addressed what you want the cloud to deliver, it is important to match your requirements with the right type of cloud platform.
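One simple, hypothetical way to do that matching is a weighted scoring exercise across the seven factors above; the weights and scores in the sketch below are placeholders for your own assessment of the options described next, not recommended values.

# A hypothetical weighted-scoring sketch; weights and scores (0-5 scales here)
# are placeholders for your own assessment of each option against each factor.
FACTORS = ["control", "deployment_time", "expertise", "performance",
           "scalability", "commitment", "cost"]

def rank_platforms(weights, scores):
    """weights: factor -> importance; scores: platform -> {factor -> fit}."""
    totals = {platform: sum(weights[f] * fit[f] for f in FACTORS)
              for platform, fit in scores.items()}
    return sorted(totals.items(), key=lambda item: item[1], reverse=True)

# Example call (illustrative names and numbers only):
# rank_platforms(
#     weights={"control": 2, "deployment_time": 5, "expertise": 1, "performance": 3,
#              "scalability": 4, "commitment": 5, "cost": 4},
#     scores={"public_cloud": {...}, "hosted_private": {...},
#             "build_your_own": {...}, "pcaas": {...}})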

Here are the main cloud options:

Option 1: The Public Cloud

The big players here are AWS and Rackspace, but there are other contenders with fewer bells and whistles, like DigitalOcean and Linode. These represent the lowest entry barrier (you just need 'net access and a credit card!) but also offer the least control and the greatest cost increases as you scale up.

The public cloud is priced like a utility offering the opportunity to scale up/down as needed. This is well suited to handling a highly elastic demand, but it's important to keep an eye on what you've spun up.

With a public cloud you get limited access to the underlying hardware, and no visibility into what's beneath the covers of the cloud - although you will get some flexibility in configuration and near instant deployment of service without the need for any real expert to be involved.

However, generally speaking, you're going to find relatively low performance with a public cloud with higher performance coming at significantly increased cost. You can also expect to be billed by the minute in return for not being held to any contract. Many will offer discounts with a commitment of some sort, but then you give up the ability to drop resources when you no longer need them.

Option 2: Hosted Private Cloud

There are many well-known vendors offering options in this space, ranging from complete turn-key environments to build-to-order approaches. They will provide the hardware on short-term lease, and will charge you to manage that hardware.  

Companies like Rackspace will work with you to architect your environment, and provide assistance in deployment - which could take up to six weeks. You'll need moderate to extreme expertise, and your average junior sysadmin is going to be way out of their depth using such a service.

Levels of control will vary from high to minimal, depending on how much of the platform you manage and deploy yourself. The level of commitment will also vary, but the longer your commitment, the more likely an alternative platform is to make sense. Hosted private cloud is not well suited to elastic demand - scaling up takes two to six weeks, and generally there will be no 'scale-down' option.

Option 3: Build your own private cloud (BYPC)

BYPC requires a high level of technical expertise within the business and will present you with the greatest technical and financial risk. However, you will have total control over the hardware design, the network design, and how your cloud components are configured - but expect this to take a year to 18 months to complete.

Your costs in the build-your-own approach can be kept down if performance and reliability are of no concern, or they can (needlessly) go through the roof if you're not making carefully planned decisions. The performance of BYPC will be entirely dependent on your budget constraints and how successful your architectural planning is.

There are lots of moving pieces, and the risks are tremendous, as you may be committing hundreds of thousands of dollars to your cloud pilot. Ask anyone who's actually tried this; it's a lot harder than it looks.

Option 4: Private-cloud-as-a-Service (PCaaS)

PCaaS, such as OpenStack, represents a balance between the value and flexibility of public cloud and the control of private cloud.

PCaaS provides total control over how hardware is used, and that hardware is 100% dedicated to you, with a minimum one-day commitment on a rolling contract. As a result of the minimal commitments it can be deployed within a few hours, and you will be free to scale the size of your environment up and down at nearly the same pace as if you were on a public cloud.

The costs are higher than a comparable number of VMs in a public cloud, but with no long-term commitment and clear pricing from the start, your financial risks are lower than any other private cloud approach.

You'll need a moderate skill level with PCaaS, but your risks are mitigated because you're in a managed environment. And whereas, until recently, PCaaS required you to have a reasonable amount of OpenStack knowledge, developments such as OpenStack Express have drastically reduced the expertise needed to implement a PCaaS.

Each of these cloud platforms has validity, as well as a real sweet spot, where that particular approach is the only obvious good choice for your business needs. If you properly consider your requirements and how they match with the options available, your cloud project will not end up as a costly mistake.

Microsoft Azure European users take note - HDInsight performance issue

avenkatraman

Microsoft's Azure cloud service status website at 5pm BST on Friday 26 September showed that, while the core Azure platform components were working properly, there was "partial performance degradation" on Azure's HDInsight service for customers in West Europe.

The status website warned that customers may experience intermittent failures when attempting to provision new HDInsight clusters in West Europe.

HDInsight is a Hadoop distribution powered by the cloud. It allows IT to process unstructured or semi-structured data from web clickstreams, social media, server logs, devices and sensors to analyse data for business insights.

Microsoft has assured cloud users that the engineers have identified the root cause of the performance degradation issue and are working out the mitigation steps.

The company has vowed to provide updates every two hours, or as events develop. I sense a long wait before the weekend beckons for European enterprises' Azure users.

Doesn't the NHS use Microsoft Azure HDInsight? Oh yes, it does!

EMC-HP merger would have meant more of the same old complex, slow, legacy and big IT

avenkatraman

Two large, very large, companies that have been under tremendous pressure in the software-defined storage and cloud era - EMC and HP - toyed with the idea of a merger, according to the Wall Street Journal, but eventually the idea fell apart because of concerns from both HP and EMC over whether their shareholders would give it the nod.

The deal would have created a mega-vendor worth $130bn with HP's Meg Whitman as the chief executive of the combined entity and EMC's Joe Tucci as chairman or president.  

Ailing EMC has been under pressure from investors calling for it to spin off its subsidiary VMware, on the prospect that the company will do better if split up.

According to WSJ, the EMC-HP merger talks have been going on for almost a year.

But the combination of two traditional vendors would have only meant more of the same old legacy, complex, slow and big IT offerings. There is an absence of meaningful synergy, but a lot of service overlap.

Meg Whitman, CEO, HP

HP has a bad history with acquisitions. A merger would have been bad news for both companies, even though EMC has a better track record of acquisitions and is attempting to redefine itself in the new cloud era.

EMC Corp is far more than EMC - it has all those fingers in the pies of VMware, RSA, VCE, Pivotal and so on: unpicking these or keeping them going forward would be difficult.

Other names mentioned in a merger with EMC include Dell and Cisco Systems.

Mergers are always hit or miss, and more of a risk when the stakes are higher, as in this case. The problem with these traditional vendors is that, in the past, they have tried to address all the aspects of a datacentre, and so they have competing products. For example, EMC-owned VMware's software-defined networking (SDN) offerings threaten Cisco's switch and router business, worth billions.

As one analyst tells me, if EMC is really seeking a merger, it should be going for a Rackspace-type platform company (not Rackspace itself, as it has ruled itself out now) where EMC can make a bigger play of VMware's cloud offering, of the whole software-defined everything message, of ViPR and so on.

Or would Tucci go for Cisco? Markets are betting on an EMC-Cisco deal, with EMC shares up 16 cents.

A merger with heavyweight HP would have left a company trying to sell a complex approach into standard customers' datacentres. Thankfully, it was only a thought. 

Dell is making all the right noises and the right bets. Will its magic work?

avenkatraman

I had a chance to see Michael Dell - in the flesh - for the first time yesterday in Brussels at the Dell Solutions Summit. He delivered a great keynote on Dell's datacentre strategy and its investment plans, and also spoke about all the hot IT topics - software-defined infrastructures, internet of things, security and data protection.

Michael Dell, founder & CEO, Dell

Michael sounded optimistic about Dell's place in the future of IT, but what was new was how open Dell has become as a company, and its firm commitment to all the things that define new-age IT - software-defined, cloud, security, mobile, big data, next-gen storage and IoT.

For one, Michael was candid with the numbers. He said:

  • The total IT market is worth $3 trillion and we have a 2% share of it. Only 10 companies have 1% or more share of that $3 trillion market.
  • Dell's business comprises 85% government and enterprise IT and just 15% is end-user focused.

This kind of feeding of numbers to the press and analysts is new at Dell, which, until now, like the rest of the service providers in the industry, kept business numbers close to its chest.

But that was not all. Michael didn't hold back from saying a few things that raised a few eyebrows:

  • "I wish we hadn't made some of the acquisitions we did."
  • "As ARM moves to 64-bit architecture, it becomes more interesting," Michael said. He said the company is open to working with its longstanding partner Intel's rival for mainstream datacentre products if that's where the market moved.
  • He also said Dell is a big believer in the software-defined future. "We ourselves are moving our storage IT into a software defined environment."
  • And to those that wrote off the PC industry, Michael said: "We absolutely believe in the PC business, we are consolidating/growing".

Michael's optimism and confidence in the company's future are a far cry from last year, when the company's ailing business strategy forced it to take itself out of the public eye.

"Going private has helped us," he said while speaking in Brussels. "It has enabled us to put our focus 100% on our customers. We have invested more in research, development, innovation and in channels in the past year."

Dell also seems to be striking the right chord with its customers, channel and analysts, as those I spoke to said they like the company a lot and are pleased with how quickly it adapts and listens to its users.

Dell Research will be focused on five areas - software defined IT, next generation storage (NVM, Flash), next gen cooling, big data/analytics, IoT. Analysts say that's a good bet.

"Dell's foray into research clearly designed to establish it as an IT innovator as well as a scale/efficiency player," says Simon Robinson from 451 Research group on Twitter.

Product-wise, too, it is making progress. Dell has been more creative than its competitors in designing its new servers on the latest Xeon chip. Its 13th-generation PowerEdge servers have capabilities such as NFC for server inventory management and new flash capabilities, and have more front sockets.

Dell is also being innovative in its enterprise cloud strategies. It is providing the reference architectures, proofs of concept and server technologies for its system integrators to do the cloud implementation for customers. Having catered to the likes of AWS in the past, Dell has used that cloud experience to build reference architectures, but gets the channel to implement them.

"We see private cloud as the future of cloud computing," Michael said. According to him, enterprises in Europe prefer "local" clouds for data sovereignty and privacy issues, so it is supporting local system integrators with local datacentres to build cloud for the customers.

Michael and his company are certainly making the right noises and are investing in the right technologies. But whether that will improve Dell's ranking in the datacentre (which I see as fourth, after Cisco, HP and IBM - in that order), only time will tell.


Also, is it symbolic that Dell held its Solutions Summit party at the world-famous Comic Strip Museum in Brussels - the home of Tintin, Captain Haddock, the Smurfs and Asterix? Don't know, but I sure did have fun!

Cloud wars just got spicier, thanks to Google

avenkatraman

Ambitious startups and developers around the world got a big treat from Google ahead of the weekend - $100,000 worth of Google cloud credits along with 24/7 support from the tech experts at Google.

Urs Hölzle, Google Fellow, launched the "Google Cloud Platform for Startups" initiative on Friday to help startups take advantage of its enterprise cloud offering and "get resources to quickly launch and scale their idea". The free cloud resources are aimed at helping developers focus on code without worrying about managing infrastructure. Google has also not set any restrictions on the type of cloud services users can spend their credits on, giving them complete flexibility to choose IaaS, PaaS, SaaS or even data-related cloud offerings.

But to qualify, startups will need to have less than $5m in funding and less than $500,000 in annual revenue. And the cloud credits are available through incubators, accelerators and investors.

Cloud computing has always been a technology that democratised IT by giving startups a level playing field to compete with the big players. And cloud behemoth AWS is seen as the "go-to" cloud option for the cool, emerging poster children of the web, such as Netflix and Instagram (before it was acquired by Facebook).


Google, AWS, Microsoft and IBM have so far been tripping over each other to announce price drops to lure more users to their cloud services. AWS launched a free programme called AWS Activate to help selected startups with resources for working with AWS. It includes services such as AWS web-based training, virtual office hours with an AWS Solutions Architect and credit for eight self-paced training labs. 

But Google has now upped the game by targeting startups with cloud credits of the size and scale that hasn't been seen before.

Cloud giants are targeting the ambitious startups because today's startups can become tomorrow's enterprises and the providers want these potential customers to use their platforms. 

Startups are lean and quick at adopting new technologies such as the cloud, but need some technical expertise so they can focus on their business rather than the underlying technology. Google is offering exactly that, in a bid to get a footprint in enterprise IT.

It will be interesting to see how the UK's promising tech startups, such as Swiftkey and Hailo, use these cloud credits offered by Google. But more importantly, how quickly will AWS, Microsoft and IBM respond?

Cloud wars just got spicier!  


Want cloud success? Eat your greens!

avenkatraman

Cloud computing is becoming a default option for delivering IT services, but to reap all the benefits of the cloud, enterprises must do the boring stuff first.

On Thursday, I attended a Westminster eForum seminar on the future of cloud computing, where I witnessed very interesting conversations around cloud adoption, risks and the technology's future, from speakers ranging from analysts, legal experts and industry association heads to cloud vendors and public sector professionals.


When experts said cloud can be secure and cost-effective and can lead to innovation, it did not raise any eyebrows among the delegates. This suggests to me that users are fully convinced of cloud's benefits.

But even then, some cloud projects backfire. Why?

The excitement of cloud is leading enterprises to overlook the boring work they need to do beforehand to yield the full benefits of the cloud. Ovum analyst Gary Barnett illustrated this best in his (PowerPoint-free!) session. Here is an article where Gary shares the user instances where cloud has failed.

"My mum made sure I ate my broccoli before I got my pudding," Gary said. But in the cloud world, no one's eating the broccoli, he said.

"If you don't clean up your data before putting it on the cloud platform, you will have cloudy rubbish." He also pointed that some users are finding cloud expensive because they are not building proper policies and guidelines around its use.

Experts at the seminar insisted cloud is a secure way of doing IT and that cloud breaches are usually down to users' "silly and predictable passwords" and their lack of awareness. Gary urged enterprises to educate users on the risks of predictable passwords.

"No one loves the boring stuff. But just like you have to eat your greens, you have to do all the boring stuff before adopting the cloud. Otherwise you're just transferring onsite mess offsite," Gary said.

The "Eat your greens" theme continued throughout the seminar and the floor roared out laughing when Microsoft's cloud director Maurice Martin said: "In my case, the greens were the cabbages, broccoli was too posh."


Five questions you must ask your cloud provider

avenkatraman | No Comments | No TrackBacks
| More

One of the main barriers to cloud adoption is data privacy. This is an issue because, for the majority of cloud providers, EU/EEA and US data privacy and information security standards are minefields that are difficult to navigate. That is because their focus has been on the ease of use and functionality of their services, rather than on the all-important data privacy, information security, data integrity and reliability requirements that come with providing those services responsibly.

But when looking through the plethora of cloud service providers, you can quickly sort the wheat from the chaff once you start drilling down into the data privacy, information security, data integrity and reliability capabilities on offer to protect your data and your customers' data.

In this guest blog post, Mike McAlpen, the executive director of security & compliance and data privacy officer at 8x8 Solutions outlines the questions cloud users must ask their providers before signing a contract.

Have you chosen the right cloud services provider?
- Mike McAlpen

By asking your cloud services provider the following questions, you will be on the way to knowing whether you can entrust your data to its care.

  • Compliance with EU/EEA data privacy standards

The most important question is whether your provider can produce third-party verification/audit assurance of its compliance with EU/EEA and/or US data privacy standards. It is not enough for the provider to simply produce this verification/audit assurance; it must show that it has fully implemented the UK Top 20 Critical Security Controls For Cyber Defence and/or ISO 27001, and/or rigorous US standards such as the Federal Information Security Management Act (FISMA) and the international Payment Card Industry Data Security Standard (PCI DSS) v3.0.

If this verification/audit assurance is not available, then your business is at risk of failing to meet EU/EEA and/or US standards.

In the US, many EU/EEA countries and elsewhere, a breach of personal data privacy can be a criminal offence for the individual employee or senior manager deemed responsible, depending on the circumstances of the breach.

  • Onward Transfer of Data

Does your provider work with third-party suppliers in order to deliver the cloud services it offers? If so, you must check that it has contracts in place with those third-party suppliers providing assurance that they are, and will continue to be, compliant with EU/EEA and/or US standards.

  • Data Encryption

Does the cloud solutions vendor provide the capability to encrypt sensitive data when it is being transferred across the internet and, just as importantly, when it is 'at rest' (that is, stored by your cloud services provider, or in files on a computer, laptop, USB flash drive or other electronic media)? A short code sketch of what client-side encryption can look like appears after these questions.

  • The Right to be Forgotten

Has your provider's solution been engineered to identify and associate each user's personal data? It must also provide the capability for each user to view and modify that personal data. In addition, if the user wishes the data to be deleted, the provider must be able to completely erase all of that person's personal data without affecting anyone else's data.

  • Service Level Agreements (SLAs)

Outside of compliance with data privacy standards, another key issue is asking your provider how you will determine and then document, within your services contract, the required service level agreements (SLAs). It's no use having the cloud services you have always wanted if you have no way of measuring or monitoring whether they are actually being delivered to an acceptable level, or if there are no financial penalties for non-compliance.

If your provider cannot answer "yes" to the above questions and you cannot agree to mutually acceptable SLAs - look for another provider!
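
As an illustration of the data encryption point above, here is a minimal Python sketch (an editorial example, not something prescribed by 8x8) of client-side encryption using the cryptography library's Fernet recipe. The upload_to_cloud() call is hypothetical and stands in for whatever SDK your provider offers; the point is simply that data encrypted under a key you hold stays opaque to the provider, both in transit and at rest.

# A rough sketch of client-side encryption before upload.
# Requires the 'cryptography' package (pip install cryptography).
from cryptography.fernet import Fernet

def encrypt_for_upload(plaintext: bytes, key: bytes) -> bytes:
    # Fernet provides authenticated symmetric encryption, so tampering
    # with the stored blob is detected when it is decrypted.
    return Fernet(key).encrypt(plaintext)

def decrypt_after_download(token: bytes, key: bytes) -> bytes:
    return Fernet(key).decrypt(token)

if __name__ == "__main__":
    key = Fernet.generate_key()   # keep this key; never hand it to the provider
    record = b"customer email: jane@example.com"
    blob = encrypt_for_upload(record, key)
    # upload_to_cloud("customers/jane", blob)   # hypothetical provider call
    assert decrypt_after_download(blob, key) == record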

VMworld 2014: What happened on Day 1

avenkatraman | No Comments | No TrackBacks
| More

On Day 1 of its annual conference VMworld 2014 themed "No Limits", VMware unveiled its strategies around open cloud platform OpenStack and around container technology Kubernetes. It also launched new tools to extend its software-defined datacentre and hybrid cloud offerings.

Open software-defined datacentre

One of the most significant announcements was VMware Integrated OpenStack, a service that gives enterprises (especially SMBs) the flexibility to build a software-defined datacentre on any technology platform, VMware or not.

The VMware Integrated OpenStack distribution is aimed at helping customers repatriate workloads from "unmanageable and insecure public clouds". Take that, AWS.

Container technology on VMware infrastructure: Kubernetes collaboration

VMware is collaborating with Docker, Google and Pivotal to allow enterprises to run and manage container-based applications on its platforms.

At the annual conference, VMware said it has joined the Kubernetes community and will make Kubernetes' patterns, APIs and tools available to enterprises. Kubernetes, currently in pre-production beta, is an open-source implementation of container cluster management.

With Google, VMware's efforts will focus on bringing Kubernetes' pod-based networking model to Open vSwitch to enable multi-cloud integration of Kubernetes.

"Not only will deep integration with the VMware product line bring the benefits of Kubernetes to enterprise customers, but their commitment to invest in the core open source platform will benefit users running containers," said Joerg Heilig, VP Engineering, Google Cloud Platform. "Together, our work will bring VMware and Google Cloud Platform closer together as container based technologies become mainstream."

With Docker, VMware will collaborate to enable the Docker Engine to run within VMware workflows. It will also work to improve interoperability between Docker Hub and VMware vCloud Air, VMware vCenter Server and VMware vCloud Automation Center.

New hybrid cloud capabilities

At VMworld, VMware released new hybrid cloud service capabilities and a new line-up of third-party mobile application services. The new capabilities include vCloud Air Virtual Private Cloud OnDemand, which offers customers on-demand access to vCloud Air. Another capability, VMware vCloud Air Object Storage, is aimed at providing users with scalable storage options for unstructured data. It will enable customers to easily scale to petabytes and only pay for what they use, according to the company.

It also launched mobile development services within the vCloud Air service catalogue.

Management as a service offerings

VMware also released two new IT management tools under its vRealize brand - one for managing a software-defined datacentre and one for public cloud infrastructure services (IaaS).

VMware vRealize Air Automation is a cloud management tool that allows users to automate the delivery of application and infrastructure services while maintaining compliance with IT policies.

Meanwhile, VMware vRealize Operations Insight offers performance management, capacity optimisation and real-time log analytics. The tool also extends operations management beyond vSphere to an enterprise's entire IT infrastructure - another sign that VMware is opening up its ecosystem to accommodate other virtualisation platforms.

Partnership with Dell on software-defined services

VMware has extended its collaboration with Dell to combine its NSX network virtualisation platform with Dell's converged infrastructure products.

"Global organisations are adopting the software-defined datacentre as an open, agile, secure and efficient architecture to simplify IT and transition to the hybrid cloud," said Raghu Raghuram, executive vice president, SDDC division, VMware. "The software-defined datacentre enables open innovation at speeds that cannot be matched in the hardware-defined world. As partners, VMware and Dell will advance networking in the SDDC, and collaborate to make advanced network virtualisation available to mutual customers." 

Partnership with HP on hybrid cloud

VMware and HP have extended their collaboration to give momentum to users' SDDC and hybrid cloud adoption. As part of the partnership, HP Helion OpenStack will support enterprise-class VMware virtualisation technologies.

The companies will also make a standalone HP-VMware networking solution generally available. Together, these efforts are intended to simplify the adoption of the software-defined datacentre and hybrid cloud with less risk, greater operational efficiency and lower costs.

All in all, it looks like VMware is opening up to competing platforms and warming to open-source technologies, but it retains its standoffish streak when it comes to public cloud services.

Microsoft Azure goes down for users across multiple regions, including Europe and Asia

avenkatraman | No Comments | No TrackBacks
| More

Just when I was thinking that cloud services must be improving - fewer outages have been reported this year than last - Microsoft's Azure cloud service went down for many users, including European ones, earlier this week.

Microsoft's Azure status page currently displays a chirpy: 

All good!

Everything is running great.

It also displays a bright green check beside its core Azure platform components, such as Active Directory, and popular cloud services, including its SQL Databases and storage services.

A snoop into its history page, however, shows that all wasn't well aboard Azure on Monday and Tuesday. Users experienced full service interruptions and performance degradation across several services, including StorSimple, storage, websites, backup and recovery, and virtual machine offerings.

For a brief period on Tuesday, August 19th, a subset of its customers in West Europe and North Europe using Virtual Machines, SQL Database, Cloud Services and Storage were unable to access Azure resources or perform management operations. Users accessing Azure's Websites service in North Europe also faced connectivity issues.

WELCOME TO Microsoft® (Photo credit: Wikipedia)

The previous day, some of its customers across multiple regions were unable to connect to Azure Services such as Cloud Services, Virtual Machines, Websites, Automation, Service Bus, Backup, Site Recovery, HDInsight, Mobile Services, and StorSimple. 

Some of the services were down for almost five hours.

This week's global outage follows last week's (August 14th) Azure outage, when users across multiple regions experienced a full service interruption to Visual Studio Online. The news doesn't bode well for CEO Satya Nadella's "cloud-first" strategy.

Here is a detailed report on Azure's latest datacentre outage.

Well, I may have tempted fate. Resilience and reliability are two words I'll use sparingly to describe public cloud services. 

PUE - the benevolent culprit in the datacentre

avenkatraman | No Comments | No TrackBacks
| More

The Internet of Things, big data and social media are all creating an insatiable demand for scalable, sophisticated and agile IT resources, making datacentres a true utility. This is prompting big tech and telecoms companies to drift a little from their core competency and build their own customised datacentres - take Telefonica's €420m investment in its new Madrid datacentre.

But the mind-boggling growth of computing infrastructure is occurring amid shocking increases in energy prices. Datacentres consume up to 3% of global electricity and produce 200 million metric tons of carbon dioxide, at an annual cost of $60bn. No wonder IT energy efficiency is a primary concern for everyone from CFOs to climate scientists.

In this guest blog post, Dave Wagner, TeamQuest's director of market development, who has 30 years of experience in the capacity management space, explains why enterprises must not get too hung up on PUE alone when measuring their datacentre efficiency.

Measuring datacentre productivity? Go beyond PUE
- by Dave Wagner

In their relentless pursuit of cost effectiveness, companies measure datacentre efficiency with power usage effectiveness (PUE). The metric divides the total amount of power coming onto the datacentre floor by how much of that power is actually used by the computing equipment.

PUE = total facility energy / IT equipment energy

PUE is a necessary, but not sufficient, indicator for gauging the costs associated with running or leasing datacentres.

While PUE is a detailed measure of datacentre electrical efficiency, it is one of several elements that actually determine total efficiency. In the bigger picture, focus should be on more holistic and accurate measures of business productivity, not solely on efficient use of electricity.

Gartner analyst Cameron Haight has talked about how a very large technology company owns the most efficient datacentre in the world, with a PUE of 1.06. This means that roughly 94% of every watt that comes onto the floor actually reaches the processing equipment. But this remarkably efficient PUE says nothing about what is done with all of that power, or how much total work is accomplished. If all that power is going to servers that are switched on but essentially idling, not actually accomplishing any useful work, what does PUE really tell us? Actual efficiency in terms of doing real-world work could be nearly zero even when the PUE metric, viewed in isolation, indicates a well-run datacentre.
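
To make that arithmetic concrete, here is a small Python sketch with purely illustrative numbers (not measurements from any real facility): a facility with a PUE of 1.06 delivers roughly 94% of its energy to IT kit, but if only a fraction of that kit is doing useful work, the share of energy spent on real computation is far smaller.

# Illustrative only: a stellar PUE can still hide mostly idle servers.
total_facility_kwh = 1060.0   # everything entering the datacentre floor
it_equipment_kwh = 1000.0     # what actually reaches servers, storage and network

pue = total_facility_kwh / it_equipment_kwh                # 1.06
share_reaching_it = it_equipment_kwh / total_facility_kwh  # ~94%

busy_fraction = 0.15   # assume only 15% of IT energy powers servers doing useful work
useful_share = share_reaching_it * busy_fraction           # ~14% of every watt

print("PUE: %.2f" % pue)
print("Energy reaching IT equipment: %.0f%%" % (share_reaching_it * 100))
print("Energy doing useful work: %.0f%%" % (useful_share * 100))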


Datacenter (Photo credit: Wikipedia)

Boiled down, what companies end up measuring with PUE is how efficiently they are moving electricity around within the datacentre.

By some estimates, many datacentres use only 10-15% of their electricity to power servers that are actually computing something. Companies should minimise costs and energy use, but nobody invests in a company solely on the basis of how efficiently it moves electricity around.

Datacentres are built and maintained for their computing capacity, and for the business work that can be done with it. I recommend correlating computing and power efficiency metrics with the amount of useful work done and with customer or end-user satisfaction metrics. When these factors are optimised in a continuous fashion, true optimisation can be realised.

I've talked about addressing power and thermal challenges in datacentres for over a decade, and have seen progress made - recent statistics show a promising slowdown in datacentre power consumption rates in the US and Europe due to successful efficiency initiatives. Significant improvements in datacentre integration have helped IT managers control the different variables of a computing system, maximising efficiency and preventing over- or under-provisioning, both having obvious negative consequences.

An integrated approach to planning and managing datacentres enables IT to automate and optimise performance, power and component management, with the goal of efficiently balancing workloads, response times and resource utilisation with business changes. Just as the IT side analyses the relationships between the components of the stack - networking, server, compute and applications - the business side of the equation must always be an integral part of these analyses. Companies should always ask how much work they are accomplishing with the IT resources they have; unfortunately, this is often easier said than done. In the majority of datacentres and connected enterprises, the promise of continuous optimisation has not been fully realised, leaving plenty of room for improvement.

As datacentres grow in size and capability, so must the tools used to manage them. Advanced analytics have become essential to bridging IT and business demands, starting with relatively simple correlative and descriptive methods and progressing through predictive to prescriptive approaches. Predictive analytics are uniquely suited to understanding the nonlinear nature of virtualised datacentre environments.
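
As a rough illustration of the predictive step - a sketch with made-up numbers, not TeamQuest's tooling - the snippet below fits a simple linear trend to historical peak CPU utilisation and estimates when a cluster will breach a capacity threshold. Real datacentre analytics would model the nonlinear, bursty behaviour of virtualised workloads far more carefully.

# Minimal capacity forecast: fit a linear trend to weekly peak CPU utilisation
# and estimate when it crosses an 80% threshold. All figures are invented.
import numpy as np

weeks = np.arange(12)
peak_cpu = np.array([41, 43, 45, 44, 48, 50, 53, 52, 56, 58, 60, 63])  # percent

slope, intercept = np.polyfit(weeks, peak_cpu, 1)   # least-squares trend line
threshold = 80.0
weeks_until_threshold = (threshold - intercept) / slope

print("Utilisation is growing by about %.1f points per week" % slope)
print("Forecast to hit %.0f%% around week %.0f" % (threshold, weeks_until_threshold))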

These advanced analytic approaches enable enterprises to combine IT and non-IT metrics in such a powerful way that the data generated by the networked computing stack can become the basis for automated, embedded business intelligence. In the most sophisticated scenarios, analytics and machine learning algorithms can be applied in such a way that the datacentre learns from itself and generates insight and models for decision-making approaching the level of artificial intelligence.


What's making Oregon the datacentre capital

avenkatraman | No Comments | No TrackBacks
| More

I am just back from Oregon, where I attended a workshop at Intel's Hillsboro campus. What amazed me the most - apart from the delicious Peruvian cuisine I had in Portland, of course - was Intel's large presence in the area and the number of big datacentres in Oregon.

Intel is the biggest employer in the region and has multiple, vast campuses there. It even runs its own regular flights for employees from Hillsboro airport to its Santa Clara headquarters; several flights, each carrying up to 40 Intel employees, operate every day. The hotel I stayed at in Hillsboro told me that, on any given day, about 70% of the guests it serves are Intel-related.

Apart from Intel almost hijacking Oregon with its presence, the state is also home to many datacentre facilities. Facebook (Prineville), Google (its first datacentre, in The Dalles), Amazon (Boardman), Apple (also Prineville) and Fortune Datacentres (Hillsboro) all have large facilities in Oregon.

Here's why:


Cheaper costs

One of the primary reasons many tech giants choose Oregon as the home for their datacentres is lower costs. Oregon has no sales tax, which means computer products, building materials and services are cheaper than elsewhere in the US. In addition, power - a main datacentre money-guzzler - is cheaper in Oregon. Furthermore, the local government lures tech giants with incentives such as tax breaks and subsidies. All these factors attract datacentre investment to the state.

Prineville, Oregon (Photo credit: Wikipedia)

Talented workforce

Because of the region's tech culture, many professionals develop server management and virtualisation skills. The emphasis on IT skills in the universities, and Silicon Valley's investment in regular training workshops, make the local workforce well suited to datacentre management.


Mild weather

Oregon's weather is comparatively mild, which makes the tricky task of datacentre cooling a little easier. It is simpler to devise cooling strategies for a facility when the ambient temperature does not vary widely. Oregon does not get baking hot like Texas or Kansas in the summer, nor does it get overwhelmingly snowed under in winter.


Connectivity

The vast stretches of fibre-optic cable that run across Oregon's mountains, lakes and deserts provide fast connections with latency measured in milliseconds. The state's proximity to Silicon Valley is another draw for datacentre investment.

Geography, stability and security

Big cloud and IT service providers love political and economic stability and physical security, and Oregon gives them that. The region is not particularly prone to natural disasters such as volcanic eruptions, earthquakes or hurricanes - another big attraction for datacentre builders. Take Iceland, for instance: despite its promise of 100% green geothermal energy and fibre-optic connections to mainland Europe, many IT providers hesitate to set up datacentres there because of its vulnerability to natural disasters.

Oregon has seismically stable soil and, being on the west coast, little to no lightning risk - lightning being one of the major causes of outages in the US.

As Google, which opened its facility in The Dalles in 2006 with a $1.2bn investment, puts it, Oregon has the "right combination of energy infrastructure, developable land, and available workforce for the datacentre".

I wonder what Oregon's equivalent in Europe would be?