Using the Working Set to improve datacentre workload efficiency

cdonnelly
In this guest post, Pete Koehler, technical marketing engineer for PernixData, explains why datacentre operators need to get a handle on the Working Set concept to find out what's really going on in their facilities. 

There is no shortage of mysteries in the datacentre, as unknown influences undermine the performance and consistency of these environments while remaining elusive to identify, quantify and control.

One such mystery in the modern, virtualised datacentre is known as the "working set". The term has historical meaning in the computer science world, but its practical definition has evolved to include other components of the datacentre, particularly storage.

What is a working set? 
The term refers to the amount of data a process or workflow uses in a given time period. Think of it as hot, commonly accessed data within the overall persistent storage capacity. 
But that simple explanation leaves a handful of terms that are difficult to qualify, and quantify. 

For example, does "amount" mean reads, writes, or both? Does this include the same data written over and over again, or is it new data? 

There are a few traits of working sets that are worth reviewing. Working sets are:
•Driven by the applications generating the workload, and the virtual machines (VMs) they run on. Whether the persistent storage is local, shared or distributed doesn't matter from the perspective of how the VMs see it.
•Always related to a time period, but it's a continuum, so there will be cycles in the data activity over time.
•Comprised of both reads and writes. The proportion of each is important to know, because reads and writes have different characteristics and demand different things from the storage system.
•Not static: they change as your workloads and datacentre evolve.

If a working set is always related to a period of time, then how can we ever define it? Well, a workload often has a period of activity followed by a period of rest. 

This is sometimes referred to as the "duty cycle". A duty cycle might be the pattern that shows up after a day of activity on a mailbox server, an hour of batch processing on a SQL server, or 30 minutes compiling code.

Working sets can be defined at whatever time increment is desired, but the goal in calculating a working set is to capture, at minimum, one or more duty cycles of each individual workload.
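
To make that concrete, here is a minimal sketch of one way a working set could be estimated, assuming a per-VM I/O trace of (timestamp, operation, offset, length) records is available. The trace format, block size and window length are illustrative assumptions, not a description of any particular product:

```
from collections import defaultdict

BLOCK_SIZE = 4096            # assume 4KB logical blocks
WINDOW_SECONDS = 24 * 3600   # one assumed duty cycle: a day

def working_set_sizes(io_trace):
    """Estimate read and write working set sizes per duty-cycle window.

    io_trace: iterable of (timestamp_secs, op, byte_offset, length) records,
              where op is "read" or "write".
    Returns {window_index: {"read": bytes, "write": bytes}}.
    """
    windows = defaultdict(lambda: {"read": set(), "write": set()})
    for ts, op, offset, length in io_trace:
        window = int(ts // WINDOW_SECONDS)
        first = offset // BLOCK_SIZE
        last = (offset + length - 1) // BLOCK_SIZE
        # Only unique blocks count: rewriting the same block over and over
        # adds I/O, but it does not grow the working set.
        windows[window][op].update(range(first, last + 1))
    return {w: {op: len(blocks) * BLOCK_SIZE for op, blocks in sides.items()}
            for w, sides in windows.items()}
```

Run over at least one full duty cycle per workload, the read and write totals give the kind of per-VM numbers the sizing decisions discussed below depend on.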

Why it matters
Determining a working set size helps you understand the behaviours of your workloads, paving the way for a better designed, operated, and optimised environment.
 
For the same reason you pay attention to compute and memory demands, it is also important to understand storage characteristics, which include working sets.

Therefore, understanding and accurately calculating working sets can have a profound effect on a datacentre's consistency. For example, have you ever heard about a real workload performing poorly, or inconsistently on a tiered storage array, hybrid array, or hyperconverged environment? 

Not accurately accounting for working set sizes of production workloads is a common reason for such issues.

Calculating working sets
The hypervisor is the ideal control plane for measuring a lot of things, with storage I/O latency being a great example of that. 

What matters is not the latency a storage array advertises, but the latency the VM actually sees. So why not extend the functionality of the hypervisor kernel so that it provides insight into working set data on a per-VM basis?

Once you've established the working set sizes of your workloads, you can start taking corrective action and optimising your environment.

For example, you can:
•Properly size your top-performing tier of persistent storage in a storage array
•Size the flash and/or RAM on a per-host basis correctly to maximise the offload of I/O from an array
•Use the writes committed in the working set estimate to gauge how much bandwidth you might need between sites, which is useful if you are looking at replicating data to another datacentre (see the sketch after this list)
•Work out how much of a caching layer might be needed for your existing hyperconverged environment
•Identify the heavy consumers of your environment, which fits neatly into a chargeback/showback arrangement
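
On the replication point, a rough back-of-the-envelope conversion from the write portion of a working set estimate to inter-site bandwidth might look like the sketch below; the change rate and headroom figures are placeholder assumptions to be replaced with your own measurements:

```
def replication_bandwidth_mbps(write_bytes_per_cycle, cycle_seconds,
                               change_rate=1.0, headroom=1.3):
    """Rough inter-site bandwidth needed to ship one duty cycle of writes.

    write_bytes_per_cycle: unique bytes written during a duty cycle
    change_rate: fraction of those bytes actually transferred
                 (deduplication and compression can reduce this)
    headroom:    safety margin for bursts and retransmits
    """
    bits_to_ship = write_bytes_per_cycle * change_rate * 8 * headroom
    return bits_to_ship / cycle_seconds / 1e6

# Example: 200GB of unique writes over a 24-hour duty cycle
print(round(replication_bandwidth_mbps(200e9, 24 * 3600), 1), "Mbps")  # ~24.1
```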

In summary
Determining working set sizes is a critical factor in the overall operation of your environment, and a detailed understanding of them helps you make smart, data-driven decisions. Good design equals predictable, consistent performance, and paves the way for better datacentre investments.

The cloud migration checklist: What to consider

cdonnelly

In this guest post, Sarvesh Goel, an infrastructure management services architect at IT consultancy Mindtree, offers enterprises a step-by-step guide to moving to the cloud.

There are many factors that influence the cloud migration journey for any enterprise. Some may trigger changes in the way software development is approached, or even in internal service level agreements and information security standards.

The risk of downtime, and the knock-on effect this could have on the company's brand value and overall reputation should the switch from on-premise to cloud not go to plan, is often a top concern.

Below, we run through some of the other issues that can dictate how an enterprise proceeds with their cloud migration, and how their IT team should set about tackling them.

Application architecture

If there are multiple applications that talk to each other often, and require high speed connections, it is best to migrate them together to avoid any unforeseen timeout or performance issues.

Application dependencies should be carefully determined before moving them to the cloud. Standalone apps are usually easier to move, but it's worth being mindful that there are likely to be applications that simply aren't cloud compatible at all.
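
As a simple illustration of that dependency mapping, the sketch below groups applications that talk to each other into candidate migration waves. The application names and the idea of recording chatty links as pairs are purely illustrative assumptions:

```
from collections import defaultdict

def migration_groups(dependencies):
    """Group applications that talk to each other so they move together.

    dependencies: iterable of (app_a, app_b) pairs representing chatty,
                  latency-sensitive links between applications.
    Returns a list of sets; each set is one candidate migration wave.
    """
    graph = defaultdict(set)
    for a, b in dependencies:
        graph[a].add(b)
        graph[b].add(a)

    seen, groups = set(), []
    for app in graph:
        if app in seen:
            continue
        # Depth-first walk to collect everything reachable from this app
        stack, group = [app], set()
        while stack:
            node = stack.pop()
            if node not in group:
                group.add(node)
                stack.extend(graph[node] - group)
        seen |= group
        groups.append(group)
    return groups

print(migration_groups([("crm", "billing"), ("billing", "reporting"), ("wiki", "sso")]))
# e.g. [{'crm', 'billing', 'reporting'}, {'wiki', 'sso'}]
```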

Network architecture

There could be a few applications that require fast access to internal infrastructure, telephone systems, a partner network or even a large user base located on-premise. These can rely on a complex network environment and present challenges that will need to be addressed before moving to cloud.

Alternatively, if there are applications that are being served to global users and require faster downloads of static content, cloud can still be the top choice, as it gives customers access to content from locations closer to them. Such examples include the media, gaming and content delivery industries.

Business continuity plan

Business continuity and internal/external SLAs with customers often drive the application migration journey to cloud for disaster recovery purposes.

Cloud is an ideal target for hosting content for disaster recovery. It provides businesses with access to certified datacentres, hybrid offerings, bandwidth and storage, all at a lower cost.

Applications can be easily tested for failover, and their hardware sizing can be customised if and when disasters occur.

Compliance requirements

There could be legal reasons why personal or sensitive information needs to remain within the enterprise's firewalls or in on-premise datacentres.

Such requirements should be carefully analysed before making any decision on moving applications to cloud, even when they are technically ready.

IT support staff training

Undertaking a migration requires having people on hand who understand the cloud fundamentals and can support the move.

Such fundamentals include knowledge of storage, backup, building fault tolerant infrastructures, networking, security, recovery, access control and, most importantly, keeping a lid on costs.

Disaster recovery

For businesses around the world, including in Europe, building a disaster recovery solution can be expensive and difficult, and requires regular testing.

Many European cloud vendors offer services on a pay as you use basis, with built-in disaster recovery, application or datacentre failure recovery, and continuous replication of content.

Using cloud for disaster recovery could provide a significant cost reduction in terms of infrastructure hardware procurement and the maintenance of the datacentre footprint.

Organisations could also choose disaster recovery locations in the same region as the business or several thousand miles away.

To conclude, once the applications are tested on cloud, and the legal/compliance concerns are addressed, organisations can opt for rapid cloud transformation.

This allows the development team to adopt cloud fundamentals and use the relevant tool sets to scale applications rapidly and create a more robust application experience, embracing all the power that cloud provides - not to mention the fallback option that a gradual cloud migration gives enterprises.

Addressing the datacentre skills gap by changing the cloud conversation

cdonnelly

Ahead in the Clouds recently attended a tour of IO's modular datacentre facility in Slough, along with a handful of PhD students from University College London (UCL).

The event's aim was to open up the datacentre to a group of people who may never have stepped inside one before, and to enlighten them about the important (and growing) role these facilities play in keeping the digital economy ticking over.

And, based on the reactions of some of the students on the tour, it's a lesson that's long overdue.

For example, all of them largely understood the concept of cloud computing, but seemed surprised to learn that it is a little more grounded in the on-premise world than its name may suggest.

Indeed, the idea that "cloud" has a physical footprint - in the form of an on-premise datacentre - seemed to come as news to almost all of them.

For most people working in the technology industry today, that's either a realisation they made a very long time ago or can be simply filed away in a folder marked "things I've always sort of known". But, if you're an outsider, why would you?

The datacentre industry prides itself on creating and running facilities that, to most people, resemble nondescript office blocks, if they bother to cast their eye over them at all.

Given the sensitivity of the data these sites house, as well as the cost of the equipment inside, it's not difficult to work out why providers aren't keen on drawing attention to them.

At the same time, datacentre operators often talk about the challenges they face when trying to recruit staff with the right skills, particularly as the push towards converged infrastructure and the use of software-defined architectures gathers pace. 

On top of that is all the talk about how the growth in connected devices, The Internet of Things (IoT), big data and future megatrends look set to transform how the datacentre operates, as well as the role it will play in the enterprise in years to come.

The latter point is one of the reasons why IO is keen to broaden the profile of people, aside from sales prospects, who visit its site.

 "Getting people from different walks of life with different skillsets and different capabilities to comment on what we're doing, why we're doing it and what the future might look like is really important," said Andrew Roughan, IO's business development director, during a follow-up chat with AitC.

"We've got to listen to them and get involved with their line of thinking as that group will be tomorrow's customers."

Opening up the datacentre

The range of PhD students the company invited along to the IO open day included some from artsy, more creative backgrounds, while others were in the throes of complex research projects into the impact of the technology industry's activities on the world's finite resources.

It was a diverse group, but isn't that what the datacentre industry is crying out for? A mix of mechanical and software engineers, business-minded folks, creatives, as well as sales and marketing types.

But, if these people don't know the datacentre exists, thanks in no small part to the veil of secrecy the industry operates under, why would they ever think to work in one?

In this respect, IO could be on to something by opening up its facilities and holding open days, but - as previously touched upon - that's not something all operators will be able or willing to do.

IO is in a better position than most to do so, as its customers' IT kit is locked away in self-contained datacentre chambers that only they have access to. It's a setup akin to a safety deposit box, and means the risk of some random passer-by on a datacentre tour tampering with the hardware is extremely low.

What might be altogether more effective is getting the entire industry to rethink how it positions the datacentre in the cloud conversation more generally, so its vital contribution is more explicitly stated.

Otherwise, there is a real risk the datacentre will continue to be overlooked by the techies and engineers that UK universities produce simply because they don't know it's there. 

The benefits of adopting a "what if...?" approach to datacentre management

cdonnelly

In this guest post, Zahl Limbuwala, CEO of datacentre optimisation software supplier Romonet, explains why IT departments should be employing a more philosophical approach when solving business issues.

The question "what if...?" is often used to refer to the past. What if a few hundred votes in Florida had gone the other way in 2000? What if Christopher Columbus had travelled a little further north or south? What if Einstein had concentrated on his patent clerk career?

For the IT department, the question can be equally applied to the future, as it needs to know that the decisions it makes will have the best possible impact for the business. Yet IT departments often operate under financial constraints, meaning that, for every choice they face, they need to bear in mind both the business and budgetary impact of their actions.

Asking the right questions

This need is exemplified by the datacentre - one of the most complex and cost-intensive parts of modern IT. While any organisation will want to know how datacentre decisions will affect the business, in too many cases IT teams simply don't know what questions they should ask in the first place.

For example, an organisation might ask what servers they need to buy in order to meet a 10-year energy reduction target. Yet this won't tell them what to do when those servers become obsolete in three years' time. Or what proportion of their energy use will actually be reduced by choosing more efficient servers (hint: not a huge proportion). Or whether there's a better way to reduce energy use and costs.

Instead, the IT team should be asking "what if..." for every potential change it could make to the datacentre to shape its strategy. In the example above, the organisation might ask what the effect would be if it replaced expensive, branded energy-efficient servers with a lower-cost commoditised alternative. It might ask what happens if it removes cooling systems. It might even ask what happens if it moves a large part of its infrastructure to the cloud. Regardless, by asking the right questions the IT team will have a much clearer idea of the options available.

Getting the right answers

Once an organisation knows the questions to ask, it needs to consider how it wants them answered. A simple question about energy usage and cost could produce answers using a variety of measurements, some of which will be more useful than others.

For instance, does the IT department benefit most from knowing the Power Usage Effectiveness (PUE) of proposed datacentre changes? Or the total energy used? Or the cost of that energy? While PUE can provide some indication of efficiency, it certainly doesn't tell the entire story.

A datacentre could have an excellent PUE and still use more energy and be more expensive than a smaller (or older) datacentre that better fits the organisation's needs. A much better metric in most cases would be the total energy use or cost of each option, so the organisation can see the precise, real-world impact of any changes.
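
A minimal worked example makes the point. The loads, PUE figures and energy price below are hypothetical, chosen only to show that the facility with the better PUE is not automatically the cheaper one:

```
def annual_energy_cost(it_load_kw, pue, price_per_kwh=0.10):
    """Total facility energy cost per year, derived from IT load and PUE.

    PUE = total facility energy / IT equipment energy,
    so total facility power = IT load * PUE.
    """
    hours_per_year = 8760
    return it_load_kw * pue * hours_per_year * price_per_kwh

# Hypothetical options: a large site with an excellent PUE versus a
# smaller, right-sized site with a mediocre PUE.
print(annual_energy_cost(it_load_kw=1000, pue=1.2))  # 1051200.0 - better PUE, bigger bill
print(annual_energy_cost(it_load_kw=600, pue=1.5))   # 788400.0
```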

Working it out

Once the organisation knows the right question, and the right way to answer it, the actual calculations might seem simple. However, there is still a large amount of misunderstanding around what influences datacentre costs. A single datacentre can produce hundreds of separate items of data every second, all of which may or may not be useful for answering IT teams' questions.

This can make the calculation a catch-22 situation. Does the organisation consider every single possible piece of data, making calculations a time-consuming, complex process? Or does it aim to simplify the factors involved, making calculations faster but making any answer an approximation or guesstimate at best?

To solve this, IT teams need to look at how they answer questions for the rest of the business. We are increasingly seeing big data and data-driven decision making used to support business activity in all areas, from marketing to overall strategy.

IT should be able to turn these practices inwards, using the same data-driven approach to answer questions on its own strategy. For instance, there is actually a relatively small number of factors that can be used to predict datacentre costs.

Combining these with the right calculations and big data tools, IT teams can quickly and confidently predict the precise impact of any potential decision they make. By combining this approach with the right "what if...?" questions, IT departments can see precisely what the best course of action for the business will be, whatever its goals.

How green is your datacentre?

cdonnelly

In this guest post, Dominic Ward, vice president of corporate and business development at datacentre provider Verne Global, explains why the green power commitments of the tech giants may not be all that they seem.

The rise of the digital economy has a well-kept dirty secret. The movies we stream, the photos we store in the cloud and the entire digital world we live in mean that, on a global basis, the power used by datacentres now generates more polluting carbon than the aviation industry.

Perhaps to defuse any concerns and attention with regard to their growing use of power, tech giants like Microsoft, Apple and Google have announced plans to open datacentres supposedly run on renewably-produced electricity.

Apple, for instance, claims all of the energy used by its US operations - including its corporate offices, retail stores and datacentres - came from renewable sources, winning the consumer tech behemoth praise from environmental lobbying group Greenpeace.

The reality is, however, a little different.

If a company sources power from a solar or wind farm, what happens when night falls or the wind drops?  The company will revert to power from the main electricity grid.

In the US, around 10% of power comes from renewable sources, while Iceland is the only country in the world with 100% green energy production. So how can Apple claim to be 100% green at its Cork facility, or at its soon-to-open Galway datacentre, when only about 20% of Ireland's power grid is from renewable sources?

The answer is a little-publicised renewable market mechanism that is allowing companies from Silicon Valley and around the world to get away with a big green marketing scam: Renewable Energy Certificates (RECs).

This system and its sister scheme, the European Energy Certificate System (EECS), operate like airline carbon trading, allowing power users to buy 'certificates', which testify that their dollars have financed production of renewable electrons elsewhere.

In essence, if you use 1kWh of coal or nuclear energy, you can buy a certificate to claim an equivalent 1kWh from renewable energy.
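
To illustrate how that accounting works, here is a deliberately simplified sketch. The consumption figure, grid mix and certificate volume are hypothetical, and real REC/EECS accounting has more moving parts:

```
def claimed_renewable_share(total_kwh, grid_renewable_fraction, rec_kwh):
    """Share of consumption a company can claim as 'renewable'.

    Actual renewable electrons come from the local grid mix; the rest of
    the claim is covered by purchased certificates (RECs/EECS).
    """
    physical = total_kwh * grid_renewable_fraction
    claimed = min(total_kwh, physical + rec_kwh)
    return physical / total_kwh, claimed / total_kwh

# Hypothetical facility on a grid that is ~20% renewable, buying
# certificates to cover the remaining 80% of its consumption.
physical_share, claimed_share = claimed_renewable_share(10_000_000, 0.20, 8_000_000)
print(f"physically renewable: {physical_share:.0%}")  # 20%
print(f"claimed renewable:    {claimed_share:.0%}")   # 100%
```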

This is not real renewable energy, and it does not support the claims made by large tech companies about the provenance of their power.

We have known about this smokescreen for some time, but the issue gained prominence recently when Truthout, a campaigning journalism website, called the practice "misrepresentation" and "a boldfaced lie on Apple's part".

The problem is becoming endemic amongst tech companies, though, with the big Silicon Valley tech giants being the worst offenders.

All the major internet firms are using this strategy, and many of them now state that their new datacentres are 100% renewable.

Given their location and disclosed sources of power, this simply is not true, save for their use of purchased certificates.

Unless a datacentre generates all of its own power from renewable sources, or sources power from a national grid that uses entirely renewable energy, enterprises and consumers will continue to underestimate the true environmental impact of their computing. Google, to its credit, has at least publicly recognised this problem.

Several firms, including Google and Apple, this summer allowed their various initiatives to be highlighted by the White House as an indication of a US commitment to the upcoming United Nations Climate Change Conference in December. Their commitments to increased generation of renewable power are welcome. But, until they abandon this certification charade, these commitments will continue to appear as hollow claims.

So, what needs to happen?

1. Increased transparency on power sources

The RECs and EECS schemes currently allow tech companies and datacentre operators to hide the truth about their power cleanliness. Companies should be obliged by law to disclose the true nature of their power sources, including an explicit disclosure on the purchase of energy certificates. Only then will enterprise customers and consumers know the truth about their energy consumption from computing.

2. Upgrade the RECs and EECS schemes

The current systems are massively flawed. What began as a well-intended mechanism to promote new generation of renewable power has been poorly executed. It is time to upgrade the system to guarantee that every dollar, euro and pound spent on an energy certificate is truly invested in the installation of new renewable power generation.

3. Go green for real

Most renewable energy is not naturally suited to the tech industry: wind drops and the sun sets. Yet the technology industry needs constant energy. The easy marketing 'win' is to simply pay for an energy certificate rather than shift to an entirely renewable energy source.

However, as we enter an era in which the technology and datacentre industry now has a carbon footprint in excess of the airline industry, surely this is not the right attitude.

Take the time to understand the finer points of your own energy contract and where the power you are using really comes from. Is it truly 'green'? And if you haven't taken the time to do so before, put some research into green datacentre options. The reality is that the only way to move the tech industry to 100% true clean energy is to clean up the power grids or move the tech industry to grids that are already clean.

G-Cloud 7: Could the 20% contract variance cap end up harming the framework?

cdonnelly

When the G-Cloud framework was introduced back in spring 2012, its core aim was to shake up government IT procurement, so that high-value, multi-year hardware contracts awarded to the same old big-name enterprise suppliers became a thing of the past.

Backed by a Central Government-wide cloud-first mandate, the public sector was actively encouraged to use the framework to source cloud-based alternatives to on-premise technologies via the Digital Marketplace (formerly known as CloudStore) and a much larger pool of suppliers.

Initially, supplier contracts were only allowed to last 12 months, to prevent the public sector from falling back into buying habits synonymous with the old way of doing things, but this was later extended to two years.

It was a move that was warmly welcomed by users at the time, as it meant buyers could avoid having to retender for services so frequently, which some suppliers had flagged as a barrier to G-Cloud adoption within certain quarters of the public sector.

Against this backdrop, it's not difficult to see why the Crown Commercial Service's (CCS) new 20% rule around G-Cloud contract extensions seems to have caused so much upset within supplier circles.

The regulation, which is set to be introduced when the seventh iteration of G-Cloud goes live on 23 November, means framework users will be forced to retender if they want to use more of a certain service and the proposed contract extension looks set to exceed 20% of the original procurement's value.
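
As a purely illustrative piece of arithmetic (the contract values below are hypothetical), the rule works out as follows:

```
def must_retender(original_value, extension_value, cap=0.20):
    """True if a proposed extension breaches the variance cap."""
    return extension_value > original_value * cap

# A hypothetical £100,000 G-Cloud contract can grow by at most £20,000
# before the buyer has to go back out to tender under the 20% rule.
print(must_retender(100_000, 15_000))  # False - within the cap
print(must_retender(100_000, 30_000))  # True - retender required
```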

To avoid this, buyers would have to work out in advance how their use of a particular cloud service is likely to take off within their department over the course of the two-year contract, which could lead to over-provisioning and surplus IT being procured, it is feared.

This, suppliers argue, is directly at odds with the pay-as-you-go ethos of both G-Cloud and cloud computing more generally, and harks back to the dark days of government IT procurement.

"As part of a G-Cloud procurement, buyers should always work out the 'cost of success' when they are shortlisting and selecting their cloud providers," John Glover, sales and marketing director at G-Cloud provider Kahootz told Ahead In the Cloud (AitC).

"For example, if a project team initially only need to consume 100 users, but expect to expand that to 2,000 over the contract term, that should be factored in.

"But, to ask them to now order and pay for 2,000 users upfront takes us back to the bad old days when public sector organisations committed large sums of capital on 'shelfware'," he added.

Why is it being introduced?

When pressed as to why the measure is being introduced, the Cabinet Office fed Computer Weekly a woolly line about how the government is "always improving the framework to make it easier for suppliers and buyers," before confirming that it will be carefully considering any feedback it receives on the matter.

The insinuation, though, that the 20% cap could be considered an "improvement" would undoubtedly be contested by Kahootz, and many others within the G-Cloud community who have already taken steps to make the Cabinet Office aware of their disapproval.

For example, G-Cloud suppliers Skyscape Cloud Services and EduServ have both put their misgivings about the rule change in writing to the Cabinet Office, while the G-Cloud working group inside trade association EuroCloud UK issued a statement this week, expressing its concerns.

It is a shame the Cabinet Office hasn't revealed more at this time about the motivation for the move, as every supplier AitC has spoken to seems at a loss to explain it, although they have their theories.

EuroCloud UK, for instance, floated the idea that the rule could be the result of people unfamiliar with the origins of G-Cloud, but with a stake in government procurement, getting involved.

Others have apportioned blame to the recent tightening of the EU Procurement Regulations around how much variance is permitted within contracts once they've been agreed.

Where this theory falls down slightly is that these regulations permit contract variations of up to 50%, which raises further questions about why CCS is intent on enforcing a cap of 20%.

Whatever the reason, given the furore the move has caused so far, there's every chance CCS and the Cabinet Office will backtrack, given their willingness in the past to tweak the workings of the framework in response to supplier and buyer feedback.

Whether or not they would be able to revoke the 20% rule before G-Cloud 7 goes live is doubtful, but if they choose not to now, or in any future iteration, they might have something of a revolt on their hands.

AitC has already heard from several suppliers who've said, while G-Cloud 6 continues to run, they'll be pushing that to buyers as their preferred framework until it ceases to exist in February 2016. Admittedly, that hardly constitutes a long-term solution to the problem. 

What's at stake?

A lot of those who've voiced their opposition to the changes have shared the same concern that the introduction of the 20% cap could end up undoing all of the good work the Cabinet Office has achieved with G-Cloud to-date, and ultimately put the public sector off using the framework at all.

The amount of money spent via the framework since its creation now stands at £806m, with £53m of that attributable to the volume of transactions that took place in September alone. And it would be a shame if all the momentum it's generated so far were to go to waste.  

Particularly when the success G-Cloud has had to date seems to be gaining wider industry recognition, and reports continue to circulate about how other European countries are looking to emulate the model for their own public sector IT procurement needs.

So, here's hoping the Cabinet Office is taking notice of what suppliers have to say on this matter, as the Digital Marketplace won't work without them.

Has end user computing made an enemy of the state?

cdonnelly

In this guest post, J. Tyler Rohrer, co-founder of Liquidware Labs, explains how the use of cloud apps can help users solve end-user computing scalability issues.

We are about to enter the golden age of end-user computing (EUC), with the concept now blossoming out of the legacy client-server model and into one that is mobile, cloud, and application-centric.

The explosive growth of mobile tablets, phablets, smartphones, ultra-books, laptops, semi-reliable wireless, mobile networks and cheaper, more intelligent storage (coupled with a rise in cloud services and modern apps) is incredible.

There are some niche offerings, like application virtualisation, application layering, VDI, Desktops-as-a-Service, Storage-as-a-Service, and Enterprise Mobility Management (EMM) - but these are incremental.

These still, for certain use cases, bump up against the laws of physics, like that nasty speed-of-light latency constraint - and not just in network performance terms, but in application response times and storage retrieval times, as well as the very nature of Moore's law itself.

What's the problem?

In the past, Moore's law was an incredible benefit to most modern desktop administrators. We could rest assured that computing power would nearly double every 18 months, while the cost of that compute would be halved.

However, in our rush to throw progressively less expensive yet powerful hardware at most problems, we created an even larger web of intricacy. We created a topology that - while logical - lacked scale.

The tentacles of our client-server networks sprawled. Most user devices were (and still are) incredibly "stateful" - with proprietary configurations, sensitive data, and tuned applications delicately installed on commodity-class hardware.

In short, scale got away from us. The larger our deployments got, the more acutely painful the weight of this scale on our operations and systems management became.

Sure, we bought tools that patched the holes rather than filled them. And while this was somewhat tenable in the campus environment, laptops and mobile "off network" computing were a target for both accidental and malicious data (IP) loss and risk.

Because we had varying user types with different machines, images, applications, printers, and policies, we tended to have a one-to-one relationship with each desktop - or better yet, something that automated remedial tasks.

While these tools boosted productivity somewhat, the lag to buy, image, provision, and deploy a new laptop, desktop or whatever, was still measured in days or hours at best.

And while we mention security above in the context of risks and attacks, the fact that the majority of our corporate IP rests on commodity-class hard drives today, which are not backed up upon each write, could be catastrophic.

What we need to work out is how to create and deliver productive and secure workspaces for our end users, while getting scale to work for and not against us.

Stateful computing was a worst-case scenario in the past. A user might need an app, large storage, lots of memory, and - so - we gave it to them. It was cheap and promised to get cheaper. But all that "state" is what we are fighting now. 

With the rise of cloud apps, very little "state" now resides on devices, particularly where smartphones and tablets are concerned.

For that reason, I think what we shall soon find is that the operating system - whether it's Windows, Android, OS X, iOS, or Linux - doesn't really matter once you reach a truly stateless workspace.

The "cloud" however ushers in an entirely new way of thinking about client-server computing. Instead of long distance connections, we have a fabric.

The things we need are, or can be, a click away so the idea of having them installed becomes archaic. All this "state" being removed from the device now lives as a service, distributed across this cloud fabric, for use when, where, and as needed.

So it's the availability of a potential service I might one day need that is the solution.

And with global replication via cloud services, web-scale file systems, and hybrid models - the latency that punished the client-server architectures of old is minimised and architected around.

We see projects like Citrix Workspace Cloud, VMware Project Enzo, Amazon Web Services and Microsoft Azure, metadata rich file systems like Nutanix Medusa, and workspace tools by my company Liquidware Labs all tackling this challenge of wrangling scale back into Pandora's box on both large and individual user levels.

We are all very, very close. While the combination of these technologies will be relegated to specific use cases for the next few years, we will see convergence of x86, cloud, and mobile into single platforms.

And while we will continue to have rich and robust local processing, graphics, input, and display technologies at our fingertips, our "state" will live in clouds.

VMworld 2015: How VMware stopped the Dell-EMC merger overshadowing the show

cdonnelly
The proposed Dell-EMC merger was always going to be a major talking point at VMworld in Barcelona, given news of the deal was confirmed on the eve of this year's show. 

What was unclear, as attendees filed into the conference centre for the opening day keynote on Tuesday morning, was whether VMware's senior management team would be joining the discussion or not. 

As it turned out, delegates didn't have to wait long to find out if EMC-owned VMware would make reference to (what is currently billed as) the biggest merger in enterprise IT history. 

Around six minutes in there were on-stage assurances from COO Carl Eschenbach about how the acquisition would have little impact on the way VMware operates, as it would remain a publicly-listed, independent entity, while the rest of EMC joins Dell in going private. 

Then Michael Dell appeared, albeit via a pre-recorded segment, to reinforce this message, before briefly addressing how the combination of EMC and Dell's product portfolios should open up new opportunities for the firms in the hybrid cloud and software-defined datacentre era. 

During a post-keynote press Q&A, VMware's EMEA CTO Joe Baguley continued the discussion, inviting questions from the press on the topic too, with the assembled execs going into as much detail as they probably could, while the deal's T&Cs are being hammered out somewhere between Texas and Massachusetts. 

Opening up 
The firm's willingness to reference the merger was kind of surprising, though, given how quick most vendors are to shoot down M&A talk when faced with even the smallest whiff of them becoming a takeover target. 

But things would have got hugely awkward over the course of the week if no-one acknowledged the $67bn elephant in the room, and it was refreshing to see. 

That was certainly the view of the VMware User Group (VMUG), who told Ahead in the Clouds (AitC) that VMware CEO Pat Gelsinger popped into their annual VMworld Europe luncheon to personally assure them that - pre/post-merger - it is still very much business as usual for the vendor's customers. 

"Having someone like Pat attend and have an open, unscripted dialogue like that is a huge testament to the commitment from VMware to VMUG and that they consider us to be a vital part of their organisation," VMUG president Mariano Maluf told us at VMworld. 

"It speaks volumes about his commitment and gives us confidence that VMware will continue down the path it's been going through," he said. 

All in all, he added, the group's 121,000 members are feeling confident about what the future holds for VMware once Dell gets his hands on its parent company. 

"The fact VMware will remain a publicly traded company and independent signals a confidence on the part of the investors involved that VMware adds value to the industry and the technologies and solutions will continue that," he added. 

Maluf's comments were echoed by nearly everyone AitC spoke to at the show, with the general consensus being the deal is unlikely to cause much upheaval for those who've pitched their tents in the VMware camp, while those with closer ties to EMC might want to consider attaching a few guy ropes. 

The fact is, by taking steps to pre-empt what users were most likely to ask, and giving them a forum to air their views, VMware succeeded in ensuring the Dell-EMC merger didn't detract from everything else it announced at the show.

Singing from a different hymn sheet 
Before we sign off, however, it would be remiss of AitC not to reference one of the other big talking points of the show - aside from VMware's hybrid cloud and end user computing plans, of course. 

Sanjay Poonen, VMware's general manager of end user computing, treated the 10,000-strong crowd to not one, but two separate sing-alongs during his second day keynote. 

The first saw Poonen break into an impromptu, a cappella rendition of Let It Go from Disney's Frozen, after a handful of attendees responded to his question about who in the audience owned a BlackBerry, during his talk about VMware's enterprise mobile device management strategy.

He then went on to lead the crowd in a re-jig of Queen's 1977 hit We Will Rock You, which saw the lyrics changed to "End User Computing Will Rock You" instead. 

We know those lyrics don't really scan well, but Poonen looked pleased with the results, so far be it from us to rain (no pun intended) on his parade.

For anyone who hadn't managed to grab a coffee on the way to his 9am keynote, it was certainly a display that served to sharpen the senses far more than any caffeine fix could.

Safe Harbour: What are the alternatives for data-sharing cloud providers?

cdonnelly

In this guest post, Rafi Azim-Khan, head of data privacy in Europe at legal firm Pillsbury Law, explains how the cloud provider community can side-step the European Court of Justice's Safe Harbour verdict.

The European Court of Justice (ECJ), in response to a case brought by Austrian student Maximilian Schrems against Ireland's Data Protection Commissioner, has confirmed the current Safe Harbour system of data-sharing between EEA states and the US is invalid. It is a conclusion that looks set to have a widespread economic impact, given just how many businesses rely on Safe Harbour to transfer and handle data in the US.

The Court has ruled that Facebook should not have been allowed to save Schrems' private data in the US, and this is - essentially - a formal confirmation of criticism that has been growing around the scheme for some time.

 The million dollar question is now: where does this leave US companies who heavily rely on Safe Harbour? And what about US cloud providers who are yet to build a European datacentre?

The facts of the matter

To recap, this case arose from proceedings before the Irish courts brought by Schrems, in which he challenged the Irish Data Protection Commissioner's decision not to investigate claims that his personal data should have been safeguarded against surveillance by the US intelligence services when it was in the possession of Facebook.

The claim was brought in Ireland, as Facebook's European operations are headquartered there, but was referred up to the ECJ.

So, given the serious question marks that loom over the future of Safe Harbour and the threat of significant new fines under the imminent General Data Protection Regulation, what should US businesses, including cloud providers, look to be doing now to avoid having to process their data in the EU?

Handily, there is another legal mechanism that they can turn to.

Binding Corporate Rules (BCRs) are designed to allow multinational companies to transfer personal data from the EEA to their affiliates located outside of the EEA in a compliant manner.

BCRs are increasingly becoming a preferred option for those who have a lot of data flowing internationally and wish to demonstrate compliance, keep regulators at bay and prepare for a world without Safe Harbour.

Companies who put BCRs in place commit to certain data security and privacy standards relating to their processing activities and, once approved, the "blessed" scheme allows a safe environment within which data transfers can take place.

BCRs also have material long-term benefits, in the sense that some upfront work, via preparing and submitting the application, should reduce the risk of fines and undoubtedly position an applicant in line for a privacy "seal" once the new EU Data Protection Regulation is introduced.

Model contract clauses, which can also be used to "adequately safeguard" data transfers from Europe, also present themselves as a safer route to ensuring compliance compared to Safe Harbour as things stand. 

However, they do have a number of drawbacks compared to BCRs, including inflexibility, large numbers of contracts being required in large organisations and the need for regular updates.

Post-Safe Harbour: Next steps

In short, any US companies, whether big brands or smaller enterprises, that have existing EU offices, customers, marketing or business partners, as well as those which are yet to build an EU datacentre, would be well advised to reassess their procedures, policies and documents regarding how they handle data.

The storm of new laws, much higher fines and enforcement, with more due shortly when the final draft of the new EU Data Protection Regulation is published, means it would be a false economy not to act now and seek advice.  

Cloud 28+: What HP must do to win over the cloud provider sceptics

cdonnelly

Boosting the take-up of cloud services across Europe has been the mission statement of both public sector and commercial organisations for several years now.

From the latter point of view, HP has been actively involved in this since the formal launch of its Cloud 28+ initiative in March 2015, which aims to provide European companies of all sizes with access to a federated catalogue that they can use to buy cloud services.

If you're thinking this sounds spookily like the UK government's G-Cloud public sector-focused procurement initiative, you would be right. The key principles are more or less the same, except the use of Cloud 28+ isn't limited to government departments or local authorities. It's open to all.

That message - during the two years that HP has been talking up its efforts in this area - doesn't seem to have reached everyone, though, particularly the providers one would assume would be a good fit for it.

Namely, the members of the G-Cloud community, who are already well-versed in how a setup like Cloud 28+ operates, and what is required to win business through it.

However, several key participants in the government procurement framework have privately expressed misgivings to Ahead In the Clouds about whether HP would welcome their involvement because they don't use its technologies to underpin their services.

Similarly, some said they weren't sure how they feel about hawking their cloud wares through an HP-branded catalogue, or if it would mean sharing details of the deals they do through Cloud 28+ with the firm.

The latter has been a long-held concern of cloud resellers because - once the maker of the service you're reselling access to knows who's buying it - what's to stop them from cutting you out and dealing with the customer direct?

HP assurance

All these points HP seemed intent on addressing during its Cloud 28+ in Action event in Brussels earlier this week, which saw the firm take steps to almost distance itself from the initiative it is supposed to be spearheading.

As such, there were protestations on stage from Xavier Poisson, EMEA vice president of HP Converged Cloud,  about how Cloud 28+ belongs to the providers that populate its catalogue, not to HP, and how its future will be influenced by participants.

The attitude seems to be, while HP may have had a hand in inviting people to the Cloud 28+ party, it's not going to dictate who should be invited, the tunes they should dance to or what food gets served. It's simply providing a venue and directing people how to get there, before letting everyone get on with enjoying the revelry.  

From a governance point of view, it won't be HP calling the shots. That will be the job of a new, independent Cloud 28+ board, which made its debut at the event.

On the topic of billing, the firm made a point of saying users won't be able to pay for services through Cloud 28+, and that it will - instead - rely on third-parties to handle the payment and settlement side of using the catalogue.

For those worried that being a non-user of HP technologies could preclude them from Cloud 28+, the news wasn't so good.

It emerged that providers will have one year from joining Cloud 28+ to ensure the applications they want to sell through the catalogue run on the Helion-flavoured version of OpenStack - a move, HP said, designed to guard users against the risk of vendor lock-in.

Even so, given the firm spent the majority of the event trying to play down its role in the initiative, it's a stipulation that might leave an odd taste in the mouth of some would-be participants and users, especially in light of the uncertainty over just how open vendor-backed versions of OpenStack truly are.

HP said this is an area that could be reviewed later down the line by the Cloud 28+ governance board, but it will be interesting to see (once the initial hype around its launch dies down) if this emerges as a turn-off for some potential participants.

Opening up Europe for business

Admittedly, it would be short-sighted of them to dismiss joining Cloud 28+ out of hand on that basis, in light of the opportunities it could potentially open up for them to do business across Europe.

While the European Commission has stopped short of endorsing the initiative, it has acknowledged what Cloud 28+ is trying to do shares some common ground with its vision to create a Digital Single Market (DSM) across Europe, and might be worth paying attention to.

If Cloud 28+ emerges as the preferred method for the enterprise to procure IT, once the preparatory work to deliver the DSM is complete, for example, the Helion OpenStack requirement would pale in significance to the amount of business participants could gain through it.

Measuring the success of Cloud 28+

While Cloud 28+ is still under construction, it's only right the focus has been on the provider side of things, because - without them - there is no service catalogue.  

But it's what end users make of Cloud 28+ that will define its long-term success, despite HP's repeated boasts about how many providers have signed up (110 and counting) to-date. 

HP is preparing to go-live with Cloud 28+ in early December at its Discover event in London, and Poisson said the "client-side" of it will become a bigger focus after that, so it's likely we'll hear some momentum announcements around end user adoption in the New Year.

But, until there is a sizeable amount of business transacted through the catalogue, or some other form of demonstrable end user interest in it, there will remain a fair few providers who won't see why it's worth their while to join.

Using big data to uncover the secrets of enterprise datacentre operations

cdonnelly

In this guest post, Frank Denneman, chief technologist of storage management software vendor PernixData, sets out why datacentre management could soon emerge as the main use case for big data analytics.

IT departments can sometimes be slow to recognise the power they wield, and the rise of cloud computing is a great example of this.

Over the last three decades, IT departments focused on assisting the wider business, through automating activities that could increase output or refine the consistency of product development processes, before turning their attention to the automation of their own operations.

The same needs to happen with big data. A lot of organisations have looked to big data analytics to discover unknown correlations, hidden patterns, market trends, customer preferences and other useful business information. 

Many have deployed big data systems, which in turn forces them to look for hidden patterns between these new workloads and the resources they consume within their own datacentre, and to see how this impacts current workloads and future capabilities.

The problem is that virtual datacentres are comprised of a disparate stack of components. Every system is logging and presenting the data its vendor deems appropriate.

Unfortunately, variations in the granularity of information, time frames, and output formats make it extremely difficult to correlate data and understand the dynamics of the virtual datacentre.

However, hypervisors are very context-rich information systems, and are jam-packed with data ready to be crunched and analysed to provide a well-rounded picture of the various resource consumers and providers. 

Having this information at your fingertips can help optimise current workloads and identify systems better suited to host new ones. 

Operations will also change, as users are now able to establish a fingerprint of their system. Instead of micro-managing each separate host or virtual machine, they can monitor the fingerprint of the cluster. 

For example, how have incoming workloads changed the cluster's fingerprint over time? Answering that question paves the way for a deeper trend analysis of resource usage.
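
As a minimal sketch of the idea, the snippet below rolls per-VM metric samples up into one cluster-level fingerprint that can be compared week on week. The metric names and the mean/95th-percentile summary are illustrative assumptions rather than any vendor's actual method:

```
import statistics
from collections import defaultdict

def cluster_fingerprint(samples):
    """Summarise per-VM metric samples into one cluster-level fingerprint.

    samples: iterable of dicts such as
             {"vm": "web01", "cpu_pct": 40, "read_iops": 800, "write_iops": 200}
    Returns {metric: {"mean": ..., "p95": ...}} across all sampled VMs.
    """
    by_metric = defaultdict(list)
    for sample in samples:
        for metric, value in sample.items():
            if metric != "vm":
                by_metric[metric].append(value)

    def p95(values):
        ordered = sorted(values)
        return ordered[int(0.95 * (len(ordered) - 1))]

    return {metric: {"mean": statistics.mean(values), "p95": p95(values)}
            for metric, values in by_metric.items()}

# Comparing this week's fingerprint with last week's shows how incoming
# workloads have shifted the cluster's resource profile over time.
```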

Information like this allows users to manage datacentres differently and - in turn - design them with a higher degree of accuracy. 

The beauty of having this set of data all in the same language, structure and format is that it can now start to transcend the datacentre. 

The dataset gleaned from each facility can be used to manage the IT lifecycle, improve deployment and operations, optimise existing workloads and infrastructure, leading to a better future design. But why stop there? 

Combining datasets from many virtual datacentres could generate insights that can improve the IT-lifecycle even more. 

By comparing facilities of the same size, or datacentres in the same vertical market, it might be possible to develop an understanding of the TCO of running the same VM on a particular host system, or storage system. 

Alternatively, users may also discover the TCO of running a virtual machine in a private datacentre versus a cloud offering. And that's the type of information needed in modern datacentre management. 

The enterprise benefits of making machine learning tools accessible to all

cdonnelly
In this guest post, Mike Weston, CEO of data science consultancy Profusion, discusses how Amazon's cloud-based push to democratise machine learning is set to benefit the enterprise.

Machine learning is the creation of algorithms that can interrogate and make predictions based on the contents of big data sets without needing to be rewritten for each new set of information. In a sense, it's a form of artificial intelligence.
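
For readers who want to see what that looks like in practice, here is a minimal sketch using the open-source scikit-learn library rather than Amazon's platform; the dataset is synthetic and the model choice is arbitrary, but the point stands - the same few lines work on any tabular dataset without being rewritten:

```
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def train_and_score(features, labels):
    """Fit a predictive model on any tabular dataset and report accuracy.

    Nothing here is specific to one set of information: the same code
    works whether the rows describe customers, sensors or transactions.
    """
    x_train, x_test, y_train, y_test = train_test_split(
        features, labels, test_size=0.2, random_state=0)
    model = LogisticRegression(max_iter=1000)
    model.fit(x_train, y_train)
    return model, model.score(x_test, y_test)

# Synthetic stand-in for "a big data set"
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
model, accuracy = train_and_score(X, y)
print(f"held-out accuracy: {accuracy:.2f}")
```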

The recent European launch of Amazon's machine learning platform has garnered a lot of attention; it is designed so non-techies can use these tools to create predictions based on data.

Amazon's move follows Facebook's launch of a 'deep learning' lab in France to undertake research into artificial intelligence, particularly facial recognition. Both tech giants will compete with Microsoft's Azure computing service.

Clearly, most major tech companies are pitching their tent in the data science camp. The reason is quite simple: demand. 

Data science is quickly moving from a niche service used by a few enterprises to a must-have. Many business leaders are waking up to the fact that new technology like self-driving cars, the Internet of Things, smart cities and wearable devices are all powered or complemented by data science.

The business case for using data science techniques in areas such as retail, logistics and marketing is also increasingly easy to prove. Consequently, data scientists are in demand like never before. Unfortunately, as many data scientists will tell you, their skills are still fairly rare - part computer scientist, part statistician. We're all aware that there is an acute skills gap in the technology sector and in many ways data scientists are the poster child. 

With demand increasing for data science and the pool of data science talent struggling to keep up with it, tech giants like Amazon are naturally seeking to provide non-techies with the skills needed to do it themselves. 

It may sound counterintuitive for the CEO of a data science consultancy to welcome this move, but I'm a firm believer that data science has immense power to improve businesses, cities and peoples' lives in general. 

If more people understand how to interrogate and use data to make informed decisions, the faster it will become an intrinsic part of how all businesses operate. Not only that, but the more repeatable tasks that can be undertaken by technology, the more time is freed up for data scientists to explore the information at their disposal more deeply and to innovate.

Addressing the big data skills gap
With the normalisation of data science as a business process or service, it should become more obvious and attractive for people to train in these techniques. This should eventually help plug the skills gap. 

Of course, the growth of data science platforms in Europe and the US won't, in the short-term, create an army of do-it-yourself data scientists capable of everything. Self-service software can only bring you so far. A great data scientist adds value to the data through analysis and interpretation - through asking 'why' and 'so what'. 

Highly-skilled data scientists are fundamental to the more complicated data science - uncovering profound insights from seemingly disparate data that radically change and improve how organisations relate to people. 

Nevertheless, the more data literate we all become, the better we will be at both using data and asking the right questions. Businesses generally don't suffer from a lack of data. The problem tends to be that those in decision-making positions do not understand what the data could reveal and therefore what problems could be solved. This means that a business can underestimate the knowledge it holds, fail to exploit all its sources of data, or fail to share information with people who could make better use of it.

Businesses that understand data science and can use self-service platforms and tools to undertake basic actions will become savvier at collecting, managing and analysing data. With experience should come an understanding of the full potential of data science and a willingness to experiment.

Amazon's self-service platform is not in and of itself going to create a revolution in data science. However, it represents the growth in businesses seeking to empower themselves to make better use of the information they hold. 

Like any science, data science is at its most exciting when it is testing the limits of what is possible. By experimenting, repeating and refining techniques, data science becomes much more effective. 

Whether a business employs its own data scientists or gets outside help, the more these specialists work with a company, the more they understand, the better they become at creating insights and solutions, and the more value a business can extract from its data.

VDI: Why desktop virtualisation has finally come of age

cdonnelly | No Comments | No TrackBacks
| More
In this guest post, David Angwin, marketing director for Dell Cloud Client Computing, claims the benefits of desktop virtualisation now far outweigh the risks.

Desktop virtualisation (VDI) is a technology that has never been fully appreciated, despite promising benefits such as lower maintenance costs, greater flexibility and increased reliability.  

Many companies have taken advantage of server and storage virtualisation over the years, but desktops have been overlooked, and physical desktops remain the norm. 

While organisations are willing to invest heavily in virtualised back-end infrastructure, they may feel VDI will not provide much additional value, or that the drawbacks and risks outweigh the benefits. But this is not the case. 

Principles of desktop virtualisation
It is often thought VDI is about creating multiple virtual desktops on one device, but in reality the user's desktop profile is stored on the host server and optimised for whichever local device they log on from, giving them a consistent experience across devices. 

Many companies have deployed various access devices to consume VDI and are reaping the benefits. These include:

Thin client: Here all processing and storage sit in the datacentre, making this a very cost-effective way of delivering desktops and applications to a mass audience, as the devices are relatively low cost and typically use much less energy than standard desktops. 

Cloud PC: This is essentially a PC without a hard drive that offers full performance, and is a good fit for organisations running a small datacentre. The operating system is streamed to the PC from the server when the user logs on. 

Zero client: A zero client is designed for use on networks with a virtualised back-end infrastructure, and offers all of the benefits of thin clients, but with added compute power. 

With the right client and back-end infrastructure, zero clients can help to optimise working conditions and cut IT running costs, as there is less equipment on the desk. 

Desktop virtualisation benefits
VDI does more than provide low-cost desktops to large numbers of users; it can also help create new business opportunities in the following areas:

•Remote working: VDI enables organisations to work securely with companies in different locations around the world. By setting remote workers up on the network, users can access data without putting it at risk, reducing the potential for data theft, corruption or loss, as the data never leaves the datacentre. 
•Business agility: With faster access to data, organisations are able to react intelligently to changing market conditions. 
•Windows migrations: Physical desktop set-ups can create challenges for IT departments when new operating systems are released. Traditionally, IT administrators needed to visit each desktop in the organisation to make the relevant updates. With VDI, organisations can reduce this cost and time, as the estate can be updated centrally, meaning software patches and OS upgrades are simplified. 

VDI brings end users and organisations a wide range of benefits, including ongoing cost savings and easier compliance. Companies in all sectors can realise a stable and positive return on investment, while providing a desktop environment that gives users quick, easy and secure access to everything on the network they need to be productive.

The invisible business: Mobile plus cloud

cdonnelly | No Comments | No TrackBacks
| More

In this guest post, Amit Singh, president of Google for Work, explains why enterprises need to start adopting a mobile- and cloud-first approach to doing business if they want to remain one step ahead of the competition.

One of the most exciting things happening today is the convergence of different technologies and trends. In isolation, a trend or a technological breakthrough is interesting, at times significant. But taken together, multiple converging trends and advances can completely upend the way we do things.

Netflix is a classic example. It capitalised on the widespread adoption of broadband internet and mobile smart devices, as well as top-notch algorithmic recommendations and an expansive content strategy, to connect a huge number of people with content they love. The company just announced that it has more than 65 million subscribers.

Other examples of new and improved approaches to existing problems abound. As Tom Goodwin, SVP of Havas Media, said recently: "Uber, the world's largest taxi company, owns no vehicles. Facebook, the world's most popular media owner, creates no content. Alibaba, the most valuable retailer, has no inventory. And Airbnb, the world's largest accommodation provider, owns no real estate. Something interesting is happening."

Each of these companies has capitalised on a convergence of various trends and technological breakthroughs to achieve something spectacular.

Some of the factors I see driving change include exponential technological growth and the democratisation of opportunity, as well as the emergence of public cloud platforms that are fast, secure and easy to use. Together, these trends underpin a powerful formula for rapid business growth: mobile plus cloud.

We know the future of computing is mobile. There are 2.1 billion smartphone subscriptions worldwide, and that number grew by 23% last year.

We spend a lot of time on our mobile devices. Since 2014, more internet traffic has come from mobile devices than from desktop computers. Forward-looking companies are building mobile-first solutions to reach their users and customers, because that's where we all are.

On the back end, the cost of computing has been dropping exponentially, and thanks to the cloud anyone now has access to massive computing and storage resources on a pay-as-you-go basis. Companies can get started by hosting their data and infrastructure in the cloud for almost nothing.

Hence mobile plus cloud. You can use mobile platforms to reach customers while powering your business with cloud computing. You can build lean and scale fast, and benefit automatically from the exponential growth curve of technology.

As computing power increases and costs decrease, cloud platforms grow more capable and the mobile market expands. For businesses built this way, technological change is an opportunity.

How cloud challenges the incumbents to think different

Snapchat is one of the best examples of how this can work. It was founded in 2011. The team used Google Cloud Platform for their infrastructure needs and focused relentlessly on mobile. Just four years later, Snapchat supports more than 100 million daily active users, who share more than 8,000 photos every second.

The mobile plus cloud formula is exciting, but it also poses challenges for established players. According to a study by IBM, some companies spend as much as 80% of their IT budgets on maintaining legacy systems, such as onsite servers.

For these companies, technological change is a threat. Legacy systems don't incorporate the latest performance improvements and cost savings. They aren't benefitting from exponential growth, and they risk falling behind their competitors who are.

This can be daunting, since it's not realistic for most companies to make big changes overnight.

If you run a business with less-than-agile legacy systems, here's one practical way to respond to the fast pace of technological change: foster an internal culture of experimentation.

The cost of trying new technologies is very low, so run trials and expand them if they produce results. For example, try using cloud computing for a few data analysis projects, or give a modern browser to employees in one department of the company and see if they work better.

There are no "one size fits all" solutions, but with an open mind, smart leaders can discover what works best for their team.

It's important to try, especially as technology becomes more capable and more of the world adopts a mobile plus cloud formula. Those who experiment will be best placed to capitalise on future convergences.

Uber's success suggests enterprises need to think like startups about cloud

cdonnelly | No Comments | No TrackBacks
| More
Cloud-championing CIOs love to bang on about how ditching on-premise technologies helps liberate IT departments, as it means they can spend less time propping up servers and more time developing apps and services that will propel the business forward. 

It's a shift that, when successfully executed, can help make companies more competitive, as they're nimbler and better positioned to quickly respond to market changes and evolving consumer demands. 

But it takes time, with Gartner analyst John-David Lovelock telling Computer Weekly this week that companies take at least a year to get up and running in the cloud from having first considered taking the plunge. 

"It takes companies about 12 months to say, 'this server is more expensive or this storage array is too expensive so we should go for Compute-as-a-Service or Storage-as-a- Service instead'," he said. 

"Making that shift within a year is not something they can traditionally do if they weren't already on the path to the cloud." 

Future development 
Companies preparing to make such a move can't afford to be without a top-notch team of developers, if they're serious about capitalising on the agility benefits of cloud, according to Jeff Lawson, CEO of cloud communications company Twilio. 

"Every company has to think of themselves as software builders or they will probably become irrelevant. Companies are building software and iterating quickly to create great experiences for customers, and they're going to out-compete those that aren't," he told Computer Weekly. 

Lawson was in London this week to support his San Francisco-based company's European expansion plans, which have already seen Twilio invest in offices in London, Dublin and Estonia. 

In fact, the company claims to have signed up 700,000 developers across the globe, and that one in five people across the world have already interacted with an app featuring its technology. 

The firm's cloud-based SMS and voice calling API is used by taxi-hailing app Uber to send alerts out to customers when the drivers they've booked are nearby, for example, and similarly by holiday accommodation listing site Airbnb. 
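
To give a flavour of why developers have embraced the service, here is a minimal sketch of sending a "driver nearby" text with Twilio's Python helper library. It is an illustration only, not code from Uber or Twilio; the credentials and phone numbers are placeholders.

from twilio.rest import Client

# Placeholder credentials - in practice these come from your Twilio account.
ACCOUNT_SID = "ACxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
AUTH_TOKEN = "your_auth_token"

client = Client(ACCOUNT_SID, AUTH_TOKEN)

# Send a short SMS alert to a customer (both numbers are placeholders).
message = client.messages.create(
    body="Your driver is two minutes away.",
    from_="+15005550006",
    to="+447700900123",
)
print(message.sid)  # the unique identifier Twilio assigns to the message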

Both these companies are regularly lauded by the likes of Amazon Web Services, EMC and Google because they're both popular services that are said to be run exclusively on cloud technologies. 

Neither has to suffer the burden of having weighty, legacy technology investments eating up large portions of their IT budgets. For this reason, enterprises should be looking at them for inspiration about how to make their operations leaner, meaner and more agile, it's often said. 

Given that Uber and Airbnb have seemingly become household names overnight, this highlights - to a certain extent - why the move to cloud is something the enterprise can't afford to put off. 

Simply because, in the time it takes them to get there, a newer, nimbler, born-in-the-cloud competitor might have made a move on their territory and it may be harder to outmanoeuvre them with on-premise technologies.

What the enterprise can learn from Google's decision to go "all-in" on cloud

cdonnelly | No Comments | No TrackBacks
| More

Google has spent the best part of a decade telling firms to ditch on-premise productivity tools and use its cloud-based Google Apps suite instead. So, the news that it's moving all of the company's in-house IT assets to the cloud may have surprised some.

Surely a company that spends so much time talking up the benefits of cloud computing should have ditched on-premise technology years ago, right?

Not necessarily, and with so many enterprises wrestling with the what, when and how much questions around cloud, the fact Google has only worked out the answers for itself now is sure to be heartening stuff for enterprise cloud buyers to hear.

Reserving the right

The search giant has been refreshingly open in the past with its misgivings about entrusting the company's corporate data to the cloud (other people's clouds, that is) because of security concerns.

Instead, it has preferred employees to use its own online storage, collaboration and productivity tools, and has shied away from letting them use services that could potentially send sensitive corporate information to the datacentres of its competitors.

This was a view the company held as recently as 2013, but now it's worked through its trust issues, and made a long-term commitment to running its entire business from the cloud.

So much so, the firm has already migrated 90% of its corporate applications to the cloud, a Google spokesperson told the Wall Street Journal.

What makes this really interesting is the implications this move has for other enterprises. If a company the size of Google feels the cloud is a safe enough place for its data, surely it's good enough for them too?

Particularly as Google has overcome issues many other enterprises may have grappled with already (or are likely to) during their own move to the cloud.

Walking the walk

What the Google news should do is get enterprises thinking a bit more about how bought-in to the idea the other companies whose cloud services they rely on really are.

While they publicly talk up the benefits of moving to the cloud, and why it's a journey all their customers should be embarking on, have they gone (or are they in the throes of going) on a similar journey themselves?

If not, why not, and why should they expect their customers to do so? If they are (or have), then they should talk about it. Not only will doing so add some much-needed credibility to their marketing babble, it will also show customers they really do believe in cloud, and aren't just talking it up because they've got a product to sell.

Did you believe in any of these cloud computing myths?

avenkatraman | No Comments | No TrackBacks
| More

Myths and misunderstandings around the use and benefits of cloud computing are slowing down IT project implementations, impeding innovation, inducing fear and distracting enterprises from achieving business efficiency, analyst firm Gartner has warned.

It has identified the top ten common misunderstandings around cloud:

Myth 1: Cloud is always about the money

Assuming that the cloud always saves money can lead to career-limiting promises. Saving money may end up being one of the benefits, but it should not be taken for granted. It doesn't help that the big names of the cloud world - AWS, Google and Microsoft - are tripping over each other to cut prices. Even so, cost savings should be seen as a nice-to-have benefit, while agility and scalability should be the top reasons for adopting cloud services.

Myth 2: You have to do cloud to be good

According to Gartner, this is the result of rampant "cloud washing." Some cloud washing is based on a mistaken mantra (fed by hype) that something cannot be "good" unless it is cloud, a Gartner analyst said.

Besides, enterprises are labelling many of their IT projects as cloud simply to tick a box and secure funding from stakeholders. People are falling into the trap of believing that if something is good it has to be cloud.

There are many use cases where cloud may not be a great fit - for instance, if your business does not experience many peaks and lulls in demand, then cloud may not be right for you. Also, for enterprises in heavily regulated sectors, or those operating under strict data protection regulations, a highly agile datacentre fully within IT's control may be the better bet.

Myth 3: Cloud should be used for everything

Related to the previous myth, this refers to the belief that the characteristics of the cloud are applicable to everything - even legacy applications or data-intensive workloads.

Unless there are clear cost savings, a legacy application that rarely changes is not a good candidate for moving to the cloud.

Myth 4: "The CEO said so" is a cloud strategy

Many companies don't have a cloud strategy and are doing it just because their CEO wants them to. A cloud strategy begins by identifying business goals and mapping the potential benefits of the cloud to them, while mitigating the potential drawbacks. Cloud should be thought of as a means to an end. The end must be specified first, Gartner advises.

Myth 5: We need one cloud strategy or one vendor

Cloud computing is not one thing, warns Gartner. Cloud services span the IaaS, SaaS and PaaS models, and cloud types include private, public and hybrid clouds - and different applications are right for different types of cloud. A cloud strategy should be based on aligning business goals with potential benefits. Those goals and benefits differ from one use case to another and should be the driving force for businesses, rather than standardising on a single strategy.

Myth 6: Cloud is less secure than on-premises IT

Cloud is widely perceived as less secure, but to date there have been very few security breaches in the public cloud - most breaches continue to involve on-premises datacentre environments.

Myth 7: Cloud is not for mission-critical use

Cloud is still mainly used for test and development. But the analyst firm notes that many organisations have progressed beyond early use cases and are using the cloud for mission-critical workloads. There are also many enterprises (such as Netflix or Uber) that are "born in the cloud" and run their business completely in the cloud.

Myth 8: Cloud = Datacentre

Most cloud decisions are not (and should not be) about completely shutting down datacentres and moving everything to the cloud. Nor should a cloud strategy be equated with a datacentre strategy. In general, datacentre outsourcing, datacentre modernisation and datacentre strategies are not synonymous with the cloud.

Myth 9: Migrating to the cloud means you automatically get all cloud characteristics

Don't assume that "migrating to the cloud" means the characteristics of the cloud are automatically inherited from lower levels (like IaaS), warned Gartner. Cloud attributes are not transitive. Distinguish applications hosted in the cloud from true cloud services. There are "half steps" to the cloud that have some benefits (there is no need to buy hardware, for example) and these can be valuable. However, they do not provide the same outcomes.

Myth 10: Private cloud = Virtualisation

Virtualisation is a cloud enabler, but it is not the only way to implement cloud computing, nor is it sufficient on its own. Even if virtualisation is used (and used well), the result is not cloud computing. This is most relevant in private cloud discussions, where highly virtualised, automated environments are common and, in many cases, are exactly what is needed. Unfortunately, these are often erroneously described as "private cloud", according to the analyst firm. 

"From a consumer perspective, 'in the cloud' means where the magic happens, where the implementation details are supposed to be hidden. So it should be no surprise that such an environment is rife with myths and misunderstandings," said David Mitchell Smith, vice president and Gartner Fellow. 

How Ucas keeps downtime away with disaster recovery strategies

avenkatraman | No Comments | No TrackBacks
| More

Business continuity is often perceived as a concept only followed by the biggest of big businesses, but the reality is that the need for it, and the corresponding services, increasingly underpins everyday life. An invisible safety net that makes sure important everyday events continue - no matter what - is crucial for all verticals. And education is no exception.

In this guest blogpost, Mike Osborne, school governor and head of business continuity at Phoenix IT, talks about the importance of business continuity for Ucas.

During the last few weeks, despite the fact that students now have to pay much higher fees for studying, we have seen more people than ever applying for higher education. An extra 30,000 new places were created this year. This has made the competitive battle between universities even more intense as they fight to secure the best students, especially over the clearing period.

For both the Universities and Colleges Admissions Service (Ucas) and universities, the clearing and application periods are a time when the availability and function of their operations are most visible, not just to students and their parents but also to the government and the media.

In 2011, both universities and students experienced massive problems with the Ucas online system during the clearing and application periods. This year, it's more important than ever for Ucas, universities and students alike that there are no system disruptions, so students can get the offers they need in a timely fashion and universities can fill their places.

Until 20 September, when the clearing vacancy search closed, Ucas was put to the test as thousands of students scrambled to get an offer through the clearing system. According to Ucas, some 20,000 applicants were placed at a university or college through clearing on the first weekend after A-level results were announced last year. Given the critical nature of this period, it is essential that Ucas and the universities have ICT and call centre resources operating effectively and without interruption.

ICT and call centre systems are vulnerable to a variety of service disruptions, ranging from severe disasters (such as fire) to mild ones (such as short-term software glitches or loss of power or communications). Universities and Ucas are now putting in place robust ICT contingency plans, such as workplace business continuity and cloud-based disaster recovery as a service (DRaaS), to ensure that the information processing systems and student data critical to the institution are maintained and protected against relevant threats, and that the organisation can recover systems in a timely and controlled manner.

With many mid-market companies also seeing the potential of disaster recovery using cloud technology, it is not surprising that universities and Ucas are spending more time, money and effort on implementing DRaaS plans. DRaaS allows data to be stored securely offsite and, if the right service is selected, can also provide near-instantaneous system and network recovery.

When added to Call Centre recovery services as part of a Business Continuity Plan, DRaaS offers a convenient and cost effective solution.

With the government and the Higher Education Funding Council for England (Hefce) imposing fines on institutions for over-recruitment, and with student data - including unique research projects - growing, it is more essential than ever for universities and Ucas to keep system downtime to a minimum.  

Picking the cloud service that's right for you

avenkatraman | No Comments | No TrackBacks
| More

Organisations tend to have one of two IT strategies today: they are either already planning and implementing a cloud strategy, or they will be doing so soon. But the options companies are faced with are dizzying, often contradictory and sometimes dangerously expensive. So what's the best way for organisations to find the ideal cloud service for their specific needs?  

Determining what is needed from the cloud will drive which platform organisations should deploy on. Considerations like budget, expected performance and project timeline all have to be carefully balanced before plunging ahead. Broadly speaking, the platform options range from using someone else's public cloud, such as AWS, to building your own private cloud from scratch. Where an organisation lands on that spectrum will be driven by how it ranks the primary factors involved.  

In this guest blogpost, Christopher Aedo, chief product architect at Mirantis, explains how to evaluate cloud requirements and pick the right platform.

In essence there are seven key factors to address that will help businesses clarify what really matters and enable them to establish their individual cloud requirements. These are:

Control: How much control do you have over the environment and hardware? Make sure the cloud platform you select delivers the level of control you require. 

Deployment time: How long before you need to be up and running? How much time will you burn just sorting out, ordering, racking and provisioning the hardware? It is critical that the cloud platform you choose can be deployed in the time you have available.

Available expertise: Can your single IT staff member handle the project, or do you need a team of experts poached from the biggest cloud builders? Choose a cloud platform that matches the expertise you have available - or can afford to bring in.

Performance: In a single server there are many components that affect performance - from the memory bus to the NIC and everything in between. Performance generally correlates with budget: a larger budget will usually buy greater performance. That said, there is no reason a smaller budget can't deliver high performance, provided you select the right option.  

Scalability: Your platform of choice should accommodate adding, or reducing, capacity quickly and easily. Will your chosen platform require downtime to scale up or down or can it be executed seamlessly?  

Commitment: From no-contract "utility pricing" to the long-term investment of owning all your gear - the longer you're tied in, the greater the risk.

Cost: This may be the most important and most difficult factor to account for. You can treat it as an output of the other factors, or as the ultimate limiter dictating where you will make concessions. There are good ways to maximise your budget while minimising your risk, as long as you keep your head up and your eyes open.  

By addressing these factors early on in the process of implementing a cloud based solution you will save yourself time, resource and budget in the long run. However having addressed what you want the cloud to deliver it is important that you match your requirements with the right type of cloud platform.

Here are the main cloud options:

Option 1: The Public Cloud

The big players here are AWS and RackSpace, but there are other contenders with fewer bells and whistles like DigitalOcean and Linode. These represent the lowest entry barrier (you just need 'net access and a credit card!) but also offer the least control and the greatest cost increases as you scale up.

The public cloud is priced like a utility offering the opportunity to scale up/down as needed. This is well suited to handling a highly elastic demand, but it's important to keep an eye on what you've spun up.

With a public cloud you get limited access to the underlying hardware, and no visibility into what's beneath the covers of the cloud - although you will get some flexibility in configuration and near instant deployment of service without the need for any real expert to be involved.

However, generally speaking, you're going to find relatively low performance with a public cloud, with higher performance coming at significantly increased cost. You can also expect to be billed by the minute in return for not being held to any contract. Many providers will offer discounts in exchange for a commitment of some sort, but then you give up the ability to drop resources when you no longer need them.

Option 2: Hosted Private Cloud

There are many well-known vendors offering options in this space, ranging from complete turn-key environments to build-to-order approaches. They will provide the hardware on short-term lease, and will charge you to manage that hardware.  

Companies like RackSpace will work with you to architect your environment and provide assistance with deployment - which could take up to six weeks. You'll need moderate to extreme expertise, and your average junior sysadmin is going to be way out of their depth using such a service.

Levels of control will vary from high to minimal, depending on how much of the platform you manage and deploy yourself. The length of commitment will also vary, but the longer your commitment, the more likely an alternative platform is to make sense. Hosted private cloud is not well suited to elastic demand - scaling up takes two to six weeks, and generally there is no scale-down option.

Option 3: Build your own private cloud (BYPC)

BYPC requires a high level of technical expertise within the business and presents the greatest technical and financial risk. However, you will have total control over the hardware design, the network design and how your cloud components are configured - but expect this to take 12 to 18 months to complete.

Your costs in the build-your-own approach can be kept down if performance and reliability are of no concern, or they can (needlessly) go through the roof if you're not making carefully planned decisions. The performance of BYPC will be entirely dependent on your budget constraints and how successful your architectural planning is.

There are lots of moving pieces, and the risks are tremendous, as you may be committing hundreds of thousands of dollars to your cloud pilot. Ask anyone who's actually tried this; it's a lot harder than it looks.

Option 4: Private-cloud-as-a-Service (PCaaS)

PCaaS, such as a managed OpenStack environment, represents a balance between the value and flexibility of public cloud and the control of private cloud.

PCaaS provides total control over how hardware is used, and that hardware is 100% dedicated to you, with a minimum one-day commitment on a rolling contract. Because of these minimal commitments it can be deployed within a few hours, and you are free to scale your environment up and down at nearly the same pace as if you were on a public cloud.

The costs are higher than a comparable number of VMs in a public cloud, but with no long-term commitment and clear pricing from the start, your financial risks are lower than any other private cloud approach.

You'll need a moderate skill level with PCaaS, but your risks are mitigated because you're in a managed environment. And whereas, until recently, PCaaS required a reasonable amount of OpenStack knowledge, developments such as OpenStack Express have drastically reduced the expertise needed to implement it.

Each of these cloud platforms has validity, as well as a real sweet spot where that particular approach is the obvious choice for a given set of business needs. If you properly consider your requirements and how they match the options available, your cloud project will not end up a costly mistake.

Microsoft Azure European users take note - HDInsight performance issue

avenkatraman | No Comments | No TrackBacks
| More

The Microsoft Azure service status website at 5pm BST on Friday 26 September showed that, while the core Azure platform components were working properly, there was "partial performance degradation" on Azure's HDInsight service for customers in West Europe.

The status website warned that customers may experience intermittent failures when attempting to provision new HDInsight clusters in West Europe.

HDInsight is Microsoft's cloud-hosted Hadoop distribution. It allows IT teams to process unstructured or semi-structured data from web clickstreams, social media, server logs, devices and sensors, and to analyse that data for business insights.
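
For teams hit by this kind of intermittent failure, one pragmatic stopgap is to wrap cluster provisioning in a retry loop with exponential backoff. The sketch below is generic and illustrative: provision_cluster() is a hypothetical placeholder for whatever SDK or API call a team actually uses, not a Microsoft API.

import random
import time

class ProvisioningError(Exception):
    """Raised when a provisioning attempt fails."""

def provision_cluster(name):
    # Hypothetical placeholder: replace with your real provisioning call.
    raise ProvisioningError("intermittent failure provisioning " + name)

def provision_with_retries(name, attempts=5, base_delay=30):
    for attempt in range(1, attempts + 1):
        try:
            return provision_cluster(name)
        except ProvisioningError as err:
            if attempt == attempts:
                raise
            # Exponential backoff with a little jitter to avoid hammering the service.
            delay = base_delay * (2 ** (attempt - 1)) + random.uniform(0, 5)
            print(f"Attempt {attempt} failed ({err}); retrying in {delay:.0f}s")
            time.sleep(delay)

# Example usage (commented out): provision_with_retries("analytics-cluster-weu")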

Microsoft has assured cloud users that its engineers have identified the root cause of the performance degradation and are working on mitigation steps.

The company has vowed to provide updates every two hours, or as events develop. I sense a long wait before the weekend beckons for European enterprises using Azure.

Doesn't the NHS use Microsoft Azure HDInsight? Oh yes, it does!