Think the Open Compute Project isn't for you? Think again

cdonnelly

In this guest post, James Bailey, director of datacentre hardware provider Hyperscale IT, busts some enterprise-held myths about the Open Compute Project.

Market watcher Gartner predicts the overall public cloud market will grow by 16.5% to be worth $203.9bn by the end of 2016.

This uptick in demand for off-premise services will put pressure on service providers' hardware infrastructure costs at a time when many of the major cloud players are embroiled in a race to the bottom in pricing terms, meaning innovation is key.

On the back of this, the Open Compute Project (OCP) is slowly (but surely) gaining traction.

Now in its fifth year, the initiative is designed to facilitate the sharing of industry know-how and best practice between hardware vendors and users so that the infrastructures they design and produce are efficient to run and equipped to cope with 21st century data demands.

Over time, a comprehensive portfolio of products has been created with the help of OCP. For the uninitiated, these offerings may appear to only suit the needs of an elite club of hyperscalers, but could they have a role to play in your average enterprise's infrastructure setup?

To answer this question, it is time to bust a few myths around OCP.

Myth 1: Datacentre efficiency is all that matters to OCP

This is largely true. After all, the mission statement of OCP founder, Facebook, was to create the most efficient datacentre infrastructure, combined with the lowest operational costs. The project encompasses everything from servers, networking and storage to datacentre design.

The server design is primarily geared around space and power savings. For example, many of the servers can be run at temperatures exceeding 40C, which is way higher than the industry norm, resulting in lower cooling costs.

This efficiency adds up to an important cost saving and a smaller carbon footprint. When Facebook published the initial OCP designs back in 2011, they were already 38% more energy-efficient to build and 24% less expensive to run than the company's previous setup.

Myth 2: Limited warranty

Most OCP original design manufacturers (ODMs) offer a three-year return to base with an upfront parts warranty as standard. This can often be better than what is offered by other OEM hardware vendors today.

The warranty options do not stop there. Given the quantities most customers purchase, vendors are open to creating bespoke support and SLAs.

In recent times, some of the more mainstream players have got in on the action. Back in April 2014, HP (now HPE) announced a joint venture with Foxconn, resulting in the HPE Cloudline servers aimed specifically at service providers.

Myth 3: Erratic hardware specifications

Whilst specifications do indeed evolve, the changes are not taken lightly. Any specification change is submitted to the OCP body for scrutiny and acceptance. 

The reality of buying into the OCP ecosystem is that you are protecting yourself from vendor lock-in. Many manufacturers build the same interchangeable systems from the same blueprints, thus giving you a good negotiation platform.

That said, there is a splintering of design. A clear example is the difference in available rack sizes.

The original 12-volt OCP racks are 21 inches wide but, more recently, 'OCP-inspired' servers have emerged that fit into a standard 19-inch space.

Overall, this is positive as you can integrate OCP-inspired machines into your existing racks, which has created a good transition path for datacentre operators looking to kit out their sites exclusively with OCP hardware.

Google's first submission to the community is for a 48V rack, which would create a third option. But surely this is all healthy?

Google estimates this could deliver energy-loss savings of over 30% compared with the current 12V offering, and who would not want that? There are also enough ODMs to ensure older designs will not disappear overnight.

Myth 4: OCP is only for the hyperscalers

Jay Parikh, vice president of infrastructure at OCP founder Facebook, claims OCP kit saved the company around $1.2 billion in IT infrastructure costs within its first three years of use, through doing its own designs and managing its own supply chain.

Goldman Sachs has a 'significant footprint' of OCP equipment in its datacentres, and Rackspace - another founding member - heavily utilises OCP for its OnMetal product. Microsoft is also a frequent contributor and runs over 90% of its hardware as OCP.

Additionally, there are a number of telcos - including AT&T, EE, Verizon, and Deutsche Telekom - that are part of the adjacent Telecom Infra Project (TIP).

Granted, these are all very large companies, but that scale of buying drives the price down for everyone else. So, if you are buying a rack of hardware a month, OCP could be a viable option.

Opening up the OCP

In summary, the cloud service industry has quickly grown into a multi-billion dollar concern, with hardware margins coming under close scrutiny.

The only result can be the rise of vanity-free whitebox hardware (ie, hardware with all extraneous components removed). Recent yearly Gartner figures show Asian ODMs like Quanta and Wistron growing global server market share faster than the traditional OEMs, although with customers the size of Google on their books, it is easy for these numbers to get skewed.

Even for those not at Google's scale, the commercials of whitebox servers are attractive, and it might give smaller firms that are unable to afford their own datacentre a foot in the door.

However, most importantly, the project has also led to greater innovation and that is where it really gains strength. 

OCP brings together a community from a wide range of business disciplines, with a common goal to build better hardware. They are not sworn to secrecy and can work together in the open, and that really takes the brakes off innovation.

What Apple, Dropbox and Spotify's shifting cloud strategies really mean for AWS


Amazon Web Services (AWS) celebrated its 10th anniversary on 14 March, having devoted the past decade to popularising the cloud computing concept and - in turn - shaking up the IT industry.

To mark the occasion, the Infrastructure-as-a-Service (IaaS) giant released a series of blog posts that saw execs - such as CTO Werner Vogels - taking a fond look back at some of the high points of AWS' first decade in business.

These include signing up more than a million active users - Netflix, Airbnb, Lebara, Guardian Media Group, Trinity Mirror Group and Aviva among them - while cultivating a product release cadence that sees it roll out hundreds of new features and services for subscribers each year.

However, while the firm and its execs set about looking back over its successes, industry watchers were busy pondering what the company's next 10 years in business are likely to look like, particularly in light of the news that several of the firm's high-profile customers have started scaling back their use of its services. Or have they?

Music streaming site Spotify announced in February that it was in the throes of moving its IT infrastructure over to the Google Cloud Platform, having previously been hailed as a reference customer of AWS.

Earlier this week, Dropbox, a major user of Amazon's Simple Storage Service (S3), outlined details of the work it is doing to curtail its use of cloud, resulting in 90% of its users' data now being stored on-premise.

A few days later, this was followed by a (source-led) report that consumer electronics giant Apple was following Spotify over to Google's cloud. The company is already known to run unspecified amounts of its operations in both AWS and the Microsoft Azure cloud, incidentally.

Shifting sands of enterprise IT

This apparent "mass exodus" of big AWS customers has prompted a degree of debate online about whether or not this is indicative of a wider industry trend, and that - after a decade of steady growth and big customer wins - Amazon might be losing its hold on the cloud market. I personally don't subscribe to that notion.

You see, while the Spotify, Apple and Dropbox news is certainly interesting (I wouldn't have written about it, if it wasn't), I personally don't think what we're bearing witness to here is necessarily a sign that Amazon's grip on the cloud market is weakening.

According to Ahead in the Clouds sources, Spotify and Dropbox are still using Amazon's cloud. And, certainly in the latter case, Dropbox looks set to use more of its capacity over time to prop up its international operations for data sovereignty purposes.

So, no, I don't think we're witnessing the beginning of the end of AWS, despite what some rather over-excited folks on Twitter might claim.

Instead, what we're actually seeing is the cloud market coming of age. And, by that, I mean really starting to deliver on the promises the industry's great and good have made in the past about off-premise services giving enterprises greater freedom when it comes to IT.

I've spent more time than I ever care to think about sat in IT conference keynotes, listening to vendor execs wax lyrical about how cloud will allow enterprises to move their workloads - based on their cost, performance and security requirements - to wherever makes most sense to run them.

With that in mind, what we're really seeing - in the case of Spotify, Dropbox and (allegedly) Apple - is simply them exercising their right to do this.

It's also worth mentioning that cloud is still a relatively nascent technology concept, and many companies are still getting to grips with how best to use it, undoubtedly resulting in several tweaks to their product and supplier strategy as time goes on. Again, what we're seeing with Spotify, Dropbox and Apple (reportedly) is probably them going through the same process.

Social gaming firm Zynga went through something similar several years ago, when it set out plans to ditch AWS in favour of building out its own datacentres because - given the sheer number of people playing its games - it could achieve the economies of scale needed to make the move worthwhile.

Unfortunately, this change in strategy occurred just before demand for its flavour of desktop- and browser-based games dropped through the floor, and mobile gaming took off, prompting it to abandon its build-your-own datacentre strategy and ramp up its use of AWS again.

All-in or all-out? 

What the Zynga example neatly highlights is the futility of discussing companies' cloud strategies in absolute terms: the assumption that you're either all-in or you're not, and that once you've moved your final workload to a certain provider's cloud, the job's done.

The reality is, for many firms, their cloud strategies will probably end up being a lot more fluid than that, with end users moving to shift workloads from one provider to another or back on-premise as and when they want and need to.

As the price of using cloud continues to drop and providers add more features and functionality to their platforms, end users will get more comfortable with using off-premise services. This, in turn, means they will become more adept at switching providers - if someone is offering a sweeter deal elsewhere - or move to adopt a multi-provider cloud strategy.

While we watch and wait for all this to play out, here's to the next ten years of cloud. Or whatever we're calling it by then. 

Could cloud be the gateway to innovation for financial services firms?


In this guest post, Ashish Gupta, BT's UK president of corporate and global banking financial markets, shares his views on how the banking sector should go about embracing cloud.

The need to scale - add more customers, trade new asset classes, expand locations - at speed has overcome the financial services sector's initial reluctance to use cloud services. What's more, the flexibility of cloud-based resources and services is an attractive alternative to the expense of owning and running large datacentres.

When talking cloud, the first question is always about security. Just how secure can customer data and commercial operations be when stored on someone else's infrastructure? The short answer is: very secure indeed. Cloud services should be at least as secure as - if not inherently more secure than - their in-house equivalents.

However, the absence of industry-wide standards and the ease with which an individual department or business unit can sign up to the cloud mean some organisations are using cheaper, consumer-grade cloud services that could leave them vulnerable to security breaches. 

A piece of research by BT exploring attitudes and levels of preparedness towards distributed denial of service (DDoS) attacks found more than a third of financial services organisations admit to using mass-market cloud services. Others may not even know they are.

Innovation is key to success

Of course, one of the great positives about cloud computing is that it encourages innovation, helping to build a more responsive, agile organisation. But if allowed to flourish uncontrolled, so-called 'shadow IT' can open up a host of problems.

As such, banks and financial services companies need to know where their customer data is at all times, and details about how it is being handled. They need to be sure that an external cloud service isn't going to leave the door open to malicious activity and DDoS attacks. 

For the CIO, the challenge is how to let the organisation exploit the choice and flexibility of on-demand services without compromising corporate security or contravening regulatory requirements.

A CIO must - somehow - exercise a degree of control over the whole varied and shifting cloud estate.

Specialised cloud services for the financial community are part of the solution; they provide a highly secure ecosystem that connects thousands of applications and services with users worldwide. But what about your broader enterprise cloud applications? They also need to be secure. 

The answer is to roll all your distinct cloud services - public, private and hybrid - into one single cloud that you can manage and secure centrally.

Adopting this type of approach without the support of an external service partner is quite a big task, even for the most experienced of IT professionals. The pragmatic CIO will look for an expert partner, such as an independent global network provider with skills in connectivity, security and integration.

Or, as industry analyst Ovum puts it: "Enterprises are increasingly likely to discriminate toward cloud service providers with combined datacentre and networking orchestration skills as their trusted brokers across hybrid clouds."

Bursting the cloud of uncertainty

Centralising control with this type of strategy will help build security into the whole cloud environment, so employees (or customers) will be able to connect securely from anywhere, on any device, to any service.

There's no reason why mobile devices cannot be as secure as a desktop PC with the right controls in place. Cloud-based proxy servers, for example, let users connect securely via the internet from wi-fi, fixed and mobile lines.

You can remotely lock down the microphone and camera on smartphones so they can be used securely on the trading floor. Your own app store gives you control over what your users can download and use over the cloud.

Financial regulators, including the SEC and the Financial Conduct Authority, are taking a keen interest in cyber security. Taking an approach like this will help financial services companies demonstrate that they understand the operational risks of cloud computing and have the right measures in place for secure trading and data protection. For business, it offers the best of both worlds: the freedom to innovate in a secure and compliant environment.

EU-US Privacy Shield: A viable alternative to Safe Harbour?


In a joint guest post, Rafi Azim-Khan, European head of data privacy, and Steven Farmer, counsel, at Pillsbury Law set out why cloud firms and users must tread carefully around Safe Harbour's replacement.

The European Commission and the US Department of Commerce have reached an accord on a new transatlantic data transfer protocol to replace the defunct 'Safe Harbour' agreement.

Known as the EU-US Privacy Shield, the new-look agreement was met with a mixed reaction from those relying on Safe Harbour (which was invalidated in October 2015) to shift EU data to the US. But, is it really the cure-all solution that industry watchers in some quarters have heralded it to be?

Although the text of the new framework is not yet available, reported key features of the Privacy Shield include:

  • Stronger obligations to be imposed on U.S. companies to protect the personal data of EU citizens, and stronger monitoring and enforcement to be carried out by the US Department of Commerce and Federal Trade Commission. It is yet to be confirmed how such activities will take shape.
  • Written assurances from the US that its government will not commit indiscriminate mass surveillance of data transferred pursuant to the Privacy Shield, and that government access to EU citizens' data for law enforcement and national security purposes will be subject to clear limitations, safeguards, and oversight mechanisms.
  • Similar to Safe Harbour, US companies wishing to rely upon the Privacy Shield will have to register their commitment to do so with the US Department of Commerce.
  • Imposing a "necessary and proportionate" requirement for when the US government can snoop on EU citizens' data that would otherwise be protected.
  • New contractual privacy protections and oversight for data transferred by participating US companies to third parties (or processed by those companies' agents).
  • A privacy ombudsman within the US to whom EU citizens can direct data privacy complaints and, as a last resort, the Privacy Shield would offer EU citizens a no-cost, binding arbitration mechanism.
  • An annual joint review of the Shield that would also consider issues of national security access.

While adoption of the Privacy Shield is arguably preferable to the gaping hole that was left by the defunct Safe Harbour, there are several issues that may undermine its value.

With the new framework not yet finalised, it is possible the threshold for keeping tabs on EU citizen data may not be satisfactorily defined. 

This could lead to the re-establishment of a vague legal standard subject to political whims on both sides of the Atlantic. The end result being that companies relying on the Privacy Shield could be subjected to shifting policies and interpretations.

Additionally, if the annual joint review of the framework allows for it to be dismantled or substantially changed each year, then this could also diminish the certainty that US companies would seek to achieve through compliance.

All this raises the question of whether the Privacy Shield will offer a more valuable solution than those currently available to US importers of data. At this point, maybe not.

With uncertainty surrounding the Privacy Shield, other options for transatlantic data transfers - namely model contract clauses and binding corporate rules - are arguably more attractive alternatives for US companies transferring data from Europe at this point.

More will be revealed as the EU and US move closer towards a binding agreement but at this stage companies might be better off considering the alternatives rather than putting all of their faith in the Privacy Shield.

Using the Working Set to improve datacentre workload efficiency

In this guest post, Pete Koehler, technical marketing engineer for PernixData, explains why datacentre operators need to get a handle on the Working Set concept to find out what's really going on in their facilities. 

There is no shortage of mysteries in the datacentre, as unknown influencers undermine the performance and consistency of these environments while remaining elusive to identify, quantify and control.

One such mystery as it relates to modern day virtualised datacentres is known as the "working set." This term has historical meaning in the computer science world, but the practical definition has evolved to include other components of the datacentre, particularly storage. 

What is a working set?

The term refers to the amount of data a process or workflow uses in a given time period. Think of it as the hot, commonly accessed data within the overall persistent storage capacity.

But that simple explanation leaves a handful of terms that are difficult to qualify and quantify.

For example, does "amount" mean reads, writes, or both? Does this include the same data written over and over again, or is it new data? 

There are a few traits of working sets that are worth reviewing:

• They are driven by the applications generating the workload and the virtual machines (VMs) they run on. Whether the persistent storage is local, shared or distributed doesn't matter from the perspective of how the VMs see it.
• They always relate to a time period, but it's a continuum, so there will be cycles in the data activity over time.
• They comprise reads and writes. The amount of each is important to know, because they have different characteristics and demand different things from the storage system.
• They change as your workloads and datacentre evolve; they are not static.

If a working set is always related to a period of time, then how can we ever define it? Well, a workload often has a period of activity followed by a period of rest. 

This is sometimes referred to as the "duty cycle". A duty cycle might be the pattern that shows up after a day of activity on a mailbox server, an hour of batch processing on a SQL server, or 30 minutes of compiling code.

Working sets can be defined at whatever time increment is desired, but the goal in calculating one should be to capture, at minimum, one or more duty cycles of each individual workload.
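
To make the idea concrete, here is a minimal sketch of that calculation in Python. It assumes a hypothetical I/O trace of (timestamp, block id, operation) tuples - not any real hypervisor API - and counts the unique blocks read and written within each time window:

```python
from collections import defaultdict

def working_set_sizes(trace, window_secs, block_bytes=4096):
    """Estimate working set size per time window from an I/O trace.

    trace: iterable of (timestamp_secs, block_id, op) tuples, op is 'r' or 'w'.
    Returns {window_index: {'reads': bytes, 'writes': bytes, 'total': bytes}},
    counting each unique block once per window, however often it is touched.
    """
    windows = defaultdict(lambda: {'r': set(), 'w': set()})
    for ts, block, op in trace:
        windows[int(ts // window_secs)][op].add(block)
    return {
        w: {
            'reads': len(s['r']) * block_bytes,
            'writes': len(s['w']) * block_bytes,
            'total': len(s['r'] | s['w']) * block_bytes,
        }
        for w, s in windows.items()
    }

# A toy one-minute window: block 1 is hit twice but counts once,
# because re-touching the same hot data does not grow the working set.
trace = [(0, 1, 'r'), (1, 1, 'w'), (2, 2, 'r'), (65, 3, 'w')]
sizes = working_set_sizes(trace, window_secs=60)
print(sizes[0]['total'])  # blocks 1 and 2 -> 2 * 4096 = 8192 bytes
```

Note how reads and writes are tallied separately, reflecting the trait above that the two have different characteristics and place different demands on the storage system.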

Why it matters

Determining a working set size helps you understand the behaviours of your workloads, paving the way for a better designed, operated and optimised environment.

For the same reason you pay attention to compute and memory demands, it is also important to understand storage characteristics, which include working sets.

Therefore, understanding and accurately calculating working sets can have a profound effect on a datacentre's consistency. For example, have you ever heard about a real workload performing poorly, or inconsistently on a tiered storage array, hybrid array, or hyperconverged environment? 

Not accurately accounting for working set sizes of production workloads is a common reason for such issues.

Calculating procedure

The hypervisor is the ideal control plane for measuring a lot of things, with storage I/O latency being a great example.

What matters is not the latency a storage array advertises, but the latency the VM actually sees. So why not extend the functionality of the hypervisor kernel so that it provides insight into working set data on a per-VM basis?

Then, once you've established the working set sizes of your workloads, it means you can start taking corrective action and optimise your environment. 

For example, you can:

• Properly size your top-performing tier of persistent storage in a storage array
• Correctly size the flash and/or RAM on a per-host basis to maximise the offload of I/O from an array
• Look at the writes committed in the working set estimate to gauge how much bandwidth you might need between sites, which is useful if you are looking at replicating data to another datacentre
• Learn how much of a caching layer might be needed for your existing hyperconverged environment
• Support chargeback/showback, as one more way of conveying who the heavy consumers of your environment are
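
As a worked example of the inter-site bandwidth point, the writes committed during one duty cycle translate directly into a rough link-sizing figure. The numbers below are illustrative, not measured, and the headroom factor is an assumption you would tune:

```python
def replication_bandwidth_mbps(write_bytes, duty_cycle_secs, headroom=1.5):
    """Rough inter-site bandwidth needed to replicate the unique writes
    committed during one duty cycle, padded by a safety headroom factor."""
    bits = write_bytes * 8                       # bytes written -> bits on the wire
    return bits / duty_cycle_secs / 1_000_000 * headroom

# e.g. 50 GB of unique writes over a one-hour duty cycle:
needed = replication_bandwidth_mbps(50 * 10**9, 3600)
print(round(needed))  # ~167 Mbps including 1.5x headroom
```

In practice you would also account for bursts within the duty cycle rather than just the average, which is exactly why capturing the full cycle, not a snapshot, matters.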

In summary
Determining an environment's working set sizes is a critical factor in its overall operation. A detailed understanding of working set sizes helps you make smart, data-driven decisions. Good design equals predictable, consistent performance, and paves the way for better datacentre investments.

The cloud migration checklist: What to consider


In this guest post, Sarvesh Goel, an infrastructure management services architect at IT consultancy Mindtree, offers enterprises a step-by-step guide to moving to the cloud.

There are many factors that influence the cloud migration journey for any enterprise. Some may trigger changes in the way software development is approached, or even in internal service level agreements and information security standards.

The risk of downtime, and the knock-on effect this could have on the company's brand value and overall reputation should the switch from on-premise to cloud not go to plan, is a top concern for many.

Below, we run through some of the other issues that can dictate how an enterprise proceeds with their cloud migration, and how their IT team should set about tackling them.

Application architecture

If there are multiple applications that talk to each other often, and require high speed connections, it is best to migrate them together to avoid any unforeseen timeout or performance issues.

The dependency of applications should be carefully determined before moving them to the cloud. Standalone apps are usually easier to move, but it's worth being mindful that there are likely to be applications that simply aren't cloud compatible at all.
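
One way to reason about this is to treat the applications as a dependency graph and migrate each connected group together. The sketch below is a simple illustration of that idea, using made-up application names; real dependency discovery would of course come from network monitoring or a CMDB:

```python
from collections import defaultdict

def migration_groups(dependencies):
    """Group applications into migration waves: apps connected by a
    dependency (direct or transitive) land in the same group, so chatty
    applications move to the cloud together.

    dependencies: iterable of (app_a, app_b) pairs meaning the two apps
    talk to each other. Standalone apps appear as (app, None).
    """
    graph = defaultdict(set)
    for a, b in dependencies:
        graph[a]                      # register the node even if isolated
        if b is not None:
            graph[a].add(b)
            graph[b].add(a)
    groups, seen = [], set()
    for node in graph:
        if node in seen:
            continue
        stack, group = [node], set()
        while stack:                  # depth-first walk of one component
            n = stack.pop()
            if n in group:
                continue
            group.add(n)
            stack.extend(graph[n] - group)
        seen |= group
        groups.append(sorted(group))
    return groups

deps = [("crm", "billing"), ("billing", "reporting"), ("wiki", None)]
print(migration_groups(deps))  # [['billing', 'crm', 'reporting'], ['wiki']]
```

The CRM, billing and reporting apps end up in one wave because they talk to each other (directly or transitively), while the standalone wiki can move on its own, mirroring the point above that standalone apps are usually easier to migrate.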

Network architecture

There could be a few applications that require fast access to internal infrastructure, telephone systems, a partner network or even a large user base located on-premise. These can rely on a complex network environment and present challenges that will need to be addressed before moving to cloud.

Alternatively, if there are applications being served to global users that require faster downloads of static content, cloud can still be the top choice, providing customers with access to content from local or nearby locations. Examples include the media, gaming and content delivery industries.

Business continuity plan

Business continuity and internal/external SLAs with customers often drive the application migration journey to cloud for disaster recovery purposes.

Cloud is an ideal target for hosting content for disaster recovery, providing businesses with access to certified datacentres, hybrid offerings, bandwidth and storage, all at a lower cost.

The applications can be easily tested for failover and customisations can be made to hardware sizing of applications if and when disasters occur.

Compliance requirements

There could be legal reasons why personal or sensitive information needs to remain within the enterprise's firewalls or in on-premise datacentres.

Such requirements should be carefully analysed before making any decision on moving applications to cloud, even when they are technically ready.

IT support staff training

Undertaking a migration requires having people on hand who understand the cloud fundamentals and can support the move.

Such fundamentals include knowledge of storage, backup, building fault tolerant infrastructures, networking, security, recovery, access control and, most importantly, keeping a lid on costs.

Disaster recovery

For businesses around the world, including in Europe, building a disaster recovery solution can be expensive and difficult, and requires regular testing.

Many European cloud vendors offer services on a pay as you use basis, with built-in disaster recovery, application or datacentre failure recovery, and continuous replication of content.

Using cloud for disaster recovery could provide a significant cost reduction in terms of infrastructure hardware procurement and the maintenance of the datacentre footprint.

Organisations could also choose disaster recovery locations in the same region as the business or several thousand miles away.

To conclude, once the applications are tested on cloud, and the legal/compliance concerns are addressed, organisations can opt for rapid cloud transformation.

This allows the development team to adopt cloud fundamentals and use the relevant tool sets for rapid scaling of applications, creating a more robust application experience that embraces all the power cloud provides - not to mention the fallback option that gradual cloud migration gives enterprises.

Addressing the datacentre skills gap by changing the cloud conversation


Ahead in the Clouds recently attended a tour of IO's modular datacentre facility in Slough, along with a handful of PhD students from University College London (UCL).

The event's aim was to open up the datacentre to a group of people who may never have stepped inside one before, to enlighten them about the important (and growing) role these facilities play in keeping the digital economy ticking over.

And, based on the reactions of some of the students on the tour, it's a lesson that's long overdue.

For example, all of them largely understood the concept of cloud computing, but seemed surprised to learn that it is a little more grounded in the on-premise world than its name may suggest.

Indeed, the idea that "cloud" has a physical footprint - in the form of an on-premise datacentre - seemed to come as news to almost all of them.

For most people working in the technology industry today, that's either a realisation they made a very long time ago or something simply filed away in a folder marked "things I've always sort of known". But, if you're an outsider, why would you know?

The datacentre industry prides itself on creating and running facilities that, to most people, resemble nondescript office blocks, if they bother to cast their eye over them at all.

Given the sensitivity of the data these sites house, as well as the cost of the equipment inside, it's not difficult to work out why providers aren't keen on drawing attention to them.

At the same time, datacentre operators often talk about the challenges they face when trying to recruit staff with the right skills, particularly as the push towards converged infrastructure and the use of software-defined architectures gathers pace. 

On top of that is all the talk about how the growth in connected devices, The Internet of Things (IoT), big data and future megatrends look set to transform how the datacentre operates, as well as the role it will play in the enterprise in years to come.

The latter point is one of the reasons why IO is keen to broaden the profile of people, aside from sales prospects, who visit its site.

"Getting people from different walks of life with different skillsets and different capabilities to comment on what we're doing, why we're doing it and what the future might look like is really important," said Andrew Roughan, IO's business development director, during a follow-up chat with AitC.

"We've got to listen to them and get involved with their line of thinking as that group will be tomorrow's customers."

Opening up the datacentre

The range of PhD students the company invited along to the IO open day included some from artsy, more creative backgrounds, while others were in the throes of complex research projects into the impact of the technology industry's activities on the world's finite resources.

It was a diverse group, but isn't that what the datacentre industry is crying out for? A mix of mechanical and software engineers, business-minded folks, creatives, as well as sales and marketing types.

But, if these people don't know the datacentre exists, thanks in no small part to the veil of secrecy the industry operates under, why would they ever think to work in one?

In this respect, IO could be on to something by opening up its facilities and holding open days, but - as previously touched upon - that's not something all operators will be able or willing to do.

IO is in a better position than most to do so, as its customers' IT kit is locked away in self-contained datacentre chambers that only they have access to. It's a setup akin to a safety deposit box, and means the risk of a random passer-by on a datacentre tour tampering with the hardware is extremely low.

What might be altogether more effective is getting the entire industry to rethink how it positions the datacentre in the cloud conversation more generally, so its vital contribution is more explicitly stated.

Otherwise, there is a real risk the datacentre will continue to be overlooked by the techies and engineers that UK universities produce simply because they don't know it's there. 

The benefits of adopting a "what if...?" approach to datacentre management


In this guest post, Zahl Limbuwala, CEO of datacentre optimisation software supplier Romonet, explains why IT departments should be employing a more philosophical approach when solving business issues

The question "what if...?" is often used to refer to the past. What if a few hundred votes in Florida had gone the other way in 2000? What if Christopher Columbus had travelled a little further north or south? What if Einstein had concentrated on his patent clerk career?

For the IT department, the question can be equally applied to the future, as it needs to know that the decisions it makes will have the best possible impact for the business. Yet IT departments often operate under financial constraints, meaning that for every choice the department faces, it needs to bear in mind both the business and budgetary impact of its actions.

Asking the right questions

This need is exemplified by the datacentre - one of the most complex and cost-intensive parts of modern IT. While any organisation will want to know how datacentre decisions will affect the business, in too many cases IT teams simply don't know what questions they should ask in the first place.

For example, an organisation might ask what servers they need to buy in order to meet a 10-year energy reduction target. Yet this won't tell them what to do when those servers become obsolete in three years' time. Or what proportion of their energy use will actually be reduced by choosing more efficient servers (hint: not a huge proportion). Or whether there's a better way to reduce energy use and costs.

Instead, the IT team should be asking "what if..." for every potential change it could make to the datacentre to shape its strategy. In the example above, the organisation might ask what the effect would be if it replaced expensive, branded energy-efficient servers with a lower-cost commoditised alternative. It might ask what happens if it removes cooling systems. It might even ask what happens if it moves a large part of its infrastructure to the cloud. Regardless, by asking the right questions the IT team will have a much clearer idea of the options available.

Getting the right answers

Once an organisation knows the questions to ask, it needs to consider how it wants them answered. A simple question about energy usage and cost could produce answers using a variety of measurements, some of which will be more useful than others.

For instance, does the IT department benefit most from knowing the Power Usage Effectiveness (PUE) of proposed datacentre changes? Or the total energy used? Or the cost of that energy? While PUE can provide some indication of efficiency, it certainly doesn't tell the entire story.

A datacentre could have an excellent PUE and still use more energy and be more expensive than a smaller (or older) datacentre that better fits the organisation's needs. A much better metric in most cases would be the total energy use or cost of each option, so that the organisation can see the precise, real-world impact of any changes.
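To make that point concrete, here is a toy comparison with made-up figures. PUE is defined as total facility energy divided by IT equipment energy, so total energy scales with both the IT load and the PUE:

```python
# Toy comparison (invented numbers): the facility with the "better" PUE can
# still use more energy overall. PUE = total facility energy / IT energy,
# so total energy = IT load x PUE.

def total_energy_kwh(it_load_kwh: float, pue: float) -> float:
    """Total facility energy implied by an IT load and a PUE figure."""
    return it_load_kwh * pue

def energy_cost(it_load_kwh: float, pue: float, price_per_kwh: float) -> float:
    """Cost of that energy at a flat unit price."""
    return total_energy_kwh(it_load_kwh, pue) * price_per_kwh

# Facility A: excellent PUE, but a large IT estate.
a_total = total_energy_kwh(it_load_kwh=5_000_000, pue=1.2)  # ~6,000,000 kWh
# Facility B: mediocre PUE, but right-sized for the workload.
b_total = total_energy_kwh(it_load_kwh=3_000_000, pue=1.6)  # ~4,800,000 kWh

assert a_total > b_total  # the low-PUE facility still uses more energy overall
```

In other words, a flattering PUE says nothing about whether the workload needed that much IT load in the first place, which is exactly the gap a total energy or total cost figure closes.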

Working it out

Once the organisation knows the right question, and the right way to answer it, the actual calculations might seem simple. However, there is still a large amount of misunderstanding around what influences datacentre costs. A single datacentre can produce hundreds of separate items of data every second, all of which may or may not be useful for answering IT teams' questions.

This can make the calculation a catch-22 situation. Does the organisation consider every single possible piece of data, making calculations a time-consuming, complex process? Or does it aim to simplify the factors involved, making calculations faster but making any answer an approximation or guesstimate at best?

To solve this, IT teams need to look at how they answer questions for the rest of the business. We are increasingly seeing big data and data-driven decision making used to support business activity in all areas, from marketing to overall strategy.

IT should be able to turn these practices inwards, using the same data-driven approach to answer questions on its own strategy. For instance, there is actually a relatively small number of factors that can be used to predict datacentre costs.

By combining these with the right calculations and big data tools, IT teams can quickly and confidently predict the precise impact of any potential decision. And by pairing this approach with the right "what if...?" questions, IT departments can see precisely which course of action will be best for the business, whatever its goals.
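As a sketch of what such a model might look like, the snippet below predicts annual energy cost from a handful of drivers (IT load, PUE and unit energy price) and compares a few "what if" scenarios. The factor names and figures are hypothetical, purely to illustrate the approach:

```python
# A minimal "what if...?" cost model. Drivers and numbers are invented for
# illustration, not real datacentre data.

HOURS_PER_YEAR = 8760

def annual_energy_cost(it_load_kw: float, pue: float, price_per_kwh: float) -> float:
    """Annual energy cost = IT load x PUE x hours per year x unit price."""
    return it_load_kw * pue * HOURS_PER_YEAR * price_per_kwh

# Compare a baseline against two candidate changes.
scenarios = {
    "as-is":             annual_energy_cost(it_load_kw=500, pue=1.8, price_per_kwh=0.10),
    "efficient servers": annual_energy_cost(it_load_kw=420, pue=1.8, price_per_kwh=0.10),
    "improved cooling":  annual_energy_cost(it_load_kw=500, pue=1.4, price_per_kwh=0.10),
}

# Rank the options from cheapest to most expensive.
for name, cost in sorted(scenarios.items(), key=lambda kv: kv[1]):
    print(f"{name}: £{cost:,.0f} per year")
```

Under these invented numbers, fixing the cooling (the PUE) beats buying more efficient servers, which is exactly the kind of answer a "what if" model exists to surface before any money is spent.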

How green is your datacentre?


In this guest post, Dominic Ward, vice president of corporate and business development at datacentre provider Verne Global, explains why the green power commitments of the tech giants may not be all that they seem.


The rise of the digital economy has a well-kept dirty secret. The movies we stream, the photos we store in the cloud and the entire digital world we live in mean that, on a global basis, the power used by datacentres now generates more carbon pollution than the aviation industry.


Perhaps to defuse any concerns and attention with regard to their growing use of power, tech giants like Microsoft, Apple and Google have announced plans to open datacentres supposedly run on renewably-produced electricity.


Apple, for instance, claims all of the energy used by its US operations - including its corporate offices, retail stores and datacentres - came from renewable sources, winning the consumer tech behemoth praise from environmental lobbying group Greenpeace.


The reality is, however, a little different.


If a company sources power from a solar or wind farm, what happens when night falls or the wind drops? The company will revert to power from the main electricity grid.


In the US, around 10% of power comes from renewable sources, while Iceland is the only country in the world with 100% green energy production. So how can Apple claim to be 100% green at its Cork facility, or at its soon-to-open Galway datacentre, when only about 20% of Ireland's power grid is from renewable sources?


The answer is a little-publicised renewable market mechanism that is allowing companies from Silicon Valley and around the world to get away with a big green marketing scam: Renewable Energy Certificates (RECs).


This system and its sister scheme, the European Energy Certificate System (EECS), operate like airline carbon trading, allowing power users to buy 'certificates' which testify that their dollars have financed the production of renewable electrons elsewhere.


In essence, if you use 1kWh of coal or nuclear energy, you can buy a certificate to claim an equivalent 1kWh of renewable energy.
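The arithmetic behind the claim can be sketched as follows, using whole-number hypothetical figures (a 20% renewable grid, loosely echoing the Irish example above):

```python
# Illustrative certificate arithmetic: buying certificates to cover the gap
# between the grid's actual mix and a "100% renewable" claim.

consumption_kwh = 1_000_000    # energy actually drawn from the grid
grid_renewable_pct = 20        # the grid's actual renewable share

physically_renewable = consumption_kwh * grid_renewable_pct // 100   # 200,000 kWh
certificates_kwh = consumption_kwh - physically_renewable            # 800,000 kWh bought on paper

claimed_pct = 100 * (physically_renewable + certificates_kwh) // consumption_kwh
actual_pct = 100 * physically_renewable // consumption_kwh

assert claimed_pct == 100  # what the marketing says
assert actual_pct == 20    # what the wires actually deliver
```

The gap between those two percentages is the 800,000 kWh of non-renewable consumption the certificates paper over.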


This is not real renewable energy, and it does not support the claims made by large tech companies about the provenance of their power.


We have known about this smokescreen for some time, but the issue gained prominence recently when Truthout, a campaigning journalism website, called the practice "misrepresentation" and "a boldfaced lie on Apple's part".


The problem is becoming endemic amongst tech companies, though, with the big Silicon Valley tech giants being the worst offenders.


All the major internet firms are using this strategy, and many of them now state that their new datacentres are 100% renewable.


Given their location and disclosed sources of power, this simply is not true, save for their use of purchased certificates.


Unless a datacentre generates all of its own power from renewable sources, or sources power from a national grid that uses entirely renewable energy, enterprises and consumers will continue to underestimate the true environmental impact of their computing. Google, to its credit, has at least publicly recognised this problem.


Several firms, including Google and Apple, this summer allowed their various initiatives to be highlighted by the White House as an indication of a US commitment to the upcoming United Nations Climate Change Conference in December. Their commitments to increased generation of renewable power are welcome. But, until they abandon this certification charade, those commitments will continue to ring hollow.


So, what needs to happen?

1. Increased transparency on power sources

The RECs and EECS schemes currently allow tech companies and datacentre operators to hide the truth about their power cleanliness. Companies should be obliged by law to disclose the true nature of their power sources, including an explicit disclosure on the purchase of energy certificates. Only then will enterprise customers and consumers know the truth about their energy consumption from computing.


2. Upgrade the RECs and EECS schemes

The current systems are massively flawed. What began as a well-intended mechanism to promote new generation of renewable power has been poorly executed. It is time to upgrade the system to guarantee that every dollar, euro and pound spent on an energy certificate is truly invested in the installation of new renewable power generation.


3. Go green for real

Most renewable energy is not naturally suited to the tech industry: wind drops and the sun sets. Yet the technology industry needs constant energy. The easy marketing 'win' is to simply pay for an energy certificate rather than shift to an entirely renewable energy source.


However, as we enter an era in which the technology and datacentre industry now has a carbon footprint in excess of the airline industry, surely this is not the right attitude.


Take the time to understand the finer points of your own energy contract and where the power you are using really comes from. Is it truly 'green'? And if you haven't taken the time to do so before, put some research into green datacentre options. The reality is that the only way to move the tech industry to 100% true clean energy is to clean up the power grids or move the tech industry to grids that are already clean.

G-Cloud 7: Could the 20% contract variance cap end up harming the framework?


When the G-Cloud framework was introduced back in spring 2012, its core aim was to shake up government IT procurement, so that high-value, multi-year hardware contracts awarded to the same old big-name enterprise suppliers became a thing of the past.

Backed by a Central Government-wide cloud-first mandate, the public sector was actively encouraged to use the framework to source cloud-based alternatives to on-premise technologies via the Digital Marketplace (formerly known as CloudStore) and a much larger pool of suppliers.

Initially, supplier contracts were only allowed to last 12 months, to prevent the public sector from falling back into buying habits synonymous with the old way of doing things, but this was later extended to two years.

It was a move that was warmly welcomed by users at the time, as it meant buyers could avoid having to retender for services so frequently, which some suppliers had flagged as a barrier to G-Cloud adoption within certain quarters of the public sector.

Against this backdrop, it's not difficult to see why the Crown Commercial Service's (CCS) new 20% rule around G-Cloud contract extensions seems to have caused so much upset within supplier circles.

The regulation, which is set to be introduced when the seventh iteration of G-Cloud goes live on 23 November, means framework users will be forced to retender for a service if a proposed contract extension looks set to exceed 20% of the original procurement's value.
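As a rough sketch of how the rule, as described, would bite (the threshold logic here is an interpretation of the reported rule, not official CCS wording):

```python
# Sketch of the reported G-Cloud 7 rule: an extension worth more than 20% of
# the original contract value forces the buyer back to a full retender.

def must_retender(original_value: float, extension_value: float,
                  cap: float = 0.20) -> bool:
    """True if a proposed extension exceeds the variance cap."""
    return extension_value > original_value * cap

assert must_retender(100_000, 25_000)       # 25% uplift: forced to retender
assert not must_retender(100_000, 15_000)   # 15% uplift: within the cap
```

Notably, with the cap raised to the 50% that EU procurement rules reportedly permit, the same £25,000 extension on a £100,000 contract would sail through.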

To avoid this, buyers would have to work out in advance how their use of a particular cloud service is likely to take off within their department over the course of the two-year contract, which could lead to over-provisioning and surplus IT being procured, it is feared.

This, suppliers argue, is directly at odds with the pay-as-you-go ethos of both G-Cloud and cloud computing more generally, and harks back to the dark days of government IT procurement.

"As part of a G-Cloud procurement, buyers should always work out the 'cost of success' when they are shortlisting and selecting their cloud providers," John Glover, sales and marketing director at G-Cloud provider Kahootz told Ahead In the Cloud (AitC).

"For example, if a project team initially only need to consume 100 users, but expect to expand that to 2,000 over the contract term, that should be factored in.

"But, to ask them to now order and pay for 2,000 users upfront takes us back to the bad old days when public sector organisations committed large sums of capital on 'shelfware'," he added.

Why is it being introduced?

When pressed as to why the measure is being introduced, the Cabinet Office fed Computer Weekly a woolly line about how the government is "always improving the framework to make it easier for suppliers and buyers," before confirming that it will be carefully considering any feedback it receives on the matter.

The insinuation, though, that the 20% cap could be considered an "improvement" would undoubtedly be contested by Kahootz, and many others within the G-Cloud community who have already taken steps to make the Cabinet Office aware of their disapproval.

For example, G-Cloud suppliers Skyscape Cloud Services and EduServ have both put their misgivings about the rule change in writing to the Cabinet Office, while the G-Cloud working group inside trade association EuroCloud UK issued a statement this week, expressing its concerns.

It is a shame the Cabinet Office hasn't revealed more at this time about the motivation for the move, as every supplier AitC has spoken to seems at a loss to explain it, although they have their theories.

EuroCloud UK, for instance, floated the idea that the rule could be the result of people unfamiliar with the origins of G-Cloud, but with a stake in government procurement, getting involved.

Others have apportioned blame to the recent tightening of the EU procurement regulations around how much variance is permitted within contracts once they've been agreed.

Where this theory falls down slightly is that those regulations permit contract variations of up to 50%, which raises further questions about why CCS is intent on enforcing a cap of 20%.

Whatever the reason, given the furore the move has caused so far, there's every chance CCS and the Cabinet Office will backtrack, since they have shown willingness in the past to tweak the workings of the framework in response to supplier and buyer feedback.

Whether or not they would be able to revoke the 20% rule before G-Cloud 7 goes live is doubtful, but if they choose not to now, or in any future iterations, they might have something of a revolt on their hands.

AitC has already heard from several suppliers who've said, while G-Cloud 6 continues to run, they'll be pushing that to buyers as their preferred framework until it ceases to exist in February 2016. Admittedly, that hardly constitutes a long-term solution to the problem. 

What's at stake?

A lot of those who've voiced their opposition to the changes share the same concern: that the introduction of the 20% cap could end up undoing all of the good work the Cabinet Office has achieved with G-Cloud to date, and ultimately put the public sector off using the framework altogether.

The amount of money spent via the framework since its creation now stands at £806m, with £53m of that attributable to the volume of transactions that took place in September alone. And it would be a shame if all the momentum it's generated so far were to go to waste.  

Particularly when the success G-Cloud has had to date seems to be gaining wider industry recognition, and reports continue to circulate about how other European countries are looking to emulate the model for their own public sector IT procurement needs.

So, here's hoping the Cabinet Office is taking notice of what suppliers have to say on this matter, as The Digital Marketplace won't work without them.


Has end user computing made an enemy of the state?


In this guest post, J. Tyler Rohrer, co-founder of Liquidware Labs, explains how the use of cloud apps can help users solve end-user computing scalability issues.

We are about to enter the golden age of end-user computing (EUC), with the concept now blossoming out of the legacy client-server model and into one that is mobile, cloud, and application-centric.

The explosive growth of mobile tablets, phablets, smartphones, ultra-books, laptops, semi-reliable wireless, mobile networks and cheaper, more intelligent storage (coupled with a rise in cloud services and modern apps) is incredible.

There are some niche offerings, like application virtualisation, application layering, VDI, Desktops-as-a-Service, Storage-as-a-Service, and Enterprise Mobility Management (EMM) - but these are incremental.

For certain use cases, these still bump up against the laws of physics, such as that nasty speed-of-light latency constraint. And not just in network performance terms, but in application response times and storage retrieval times, as well as the very nature of Moore's law itself.

What's the problem?

In the past, Moore's law was an incredible benefit to most modern desktop administrators. We could rest assured that computing power would nearly double every 18 months, while the cost of that compute would be halved.
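That assumption is easy to sketch numerically; the 18-month doubling period is the figure quoted above:

```python
# The Moore's law assumption described above, sketched numerically: compute
# power doubles (and unit cost halves) every 18 months.

def compute_multiplier(years: float, doubling_months: float = 18) -> float:
    """How much more compute the same money buys after the given time."""
    return 2 ** (years * 12 / doubling_months)

assert compute_multiplier(1.5) == 2.0       # one doubling period
assert compute_multiplier(4.5) == 8.0       # three doublings
assert 1 / compute_multiplier(1.5) == 0.5   # cost per unit of compute halves
```

Over a typical five-year hardware refresh cycle that multiplier exceeds 10x, which is why throwing cheaper hardware at every problem looked so rational for so long.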

However, in our rush to throw progressively less expensive yet powerful hardware at most problems, we created an even larger web of intricacy. We created a topology that - while logical - lacked scale.

The tentacles of our client-server networks sprawled. Most user devices were (and still are) incredibly "stateful" - with proprietary configurations, sensitive data, and tuned applications delicately installed on commodity-class hardware.

In short, scale got away from us. The larger our deployments got, the more acutely painful the weight of this scale became on our operations and systems management.

Sure, we bought tools, but they patched the holes rather than filled them. And while this was somewhat tenable in the campus environment, laptops and mobile "off network" computing were targets for both accidental and malicious data (IP) loss and risk.

Because we had varying user types with different machines, images, applications, printers, and policies, we tended to have a one-to-one relationship with each desktop - or better yet, something that automated remedial tasks.

While these tools boosted productivity somewhat, the lag to buy, image, provision, and deploy a new laptop, desktop or whatever, was still measured in days or hours at best.

And while security is mentioned above in the context of risks and attacks, the fact that the majority of our corporate IP rests on commodity-class hard drives today, which are not backed up upon each write, could prove catastrophic.

What we need to work out is how to create and deliver productive and secure workspaces for our end users, while getting scale to work for and not against us.

Stateful computing was a worst-case scenario in the past. A user might need an app, large storage, lots of memory, and - so - we gave it to them. It was cheap and promised to get cheaper. But all that "state" is what we are fighting now. 

With the rise of cloud apps, very little "state" now resides on devices, particularly where smartphones and tablets are concerned.

For that reason, I think what we shall soon find is that the operating system - whether it's Windows, Android, OS X, iOS, or Linux - doesn't really matter once you reach a truly stateless workspace.

The "cloud" however ushers in an entirely new way of thinking about client-server computing. Instead of long distance connections, we have a fabric.

The things we need are, or can be, a click away so the idea of having them installed becomes archaic. All this "state" being removed from the device now lives as a service, distributed across this cloud fabric, for use when, where, and as needed.

So it's the availability of a potential service I might one day need that is the solution.

And with global replication via cloud services, web-scale file systems, and hybrid models - the latency that punished the client-server architectures of old is minimised and architected around.

We see projects like Citrix Workspace Cloud, VMware Project Enzo, Amazon Web Services and Microsoft Azure, metadata rich file systems like Nutanix Medusa, and workspace tools by my company Liquidware Labs all tackling this challenge of wrangling scale back into Pandora's box on both large and individual user levels.

We are all very, very close. While the combination of these technologies will be relegated to specific use cases for the next few years - we will see convergence of x86, cloud, and mobile into single platforms.

And while we will continue to have rich and robust local processing, graphics, input, and display technologies at our fingertips, our "state" will live in clouds.

VMworld 2015: How VMware stopped the Dell-EMC merger overshadowing the show

The proposed Dell-EMC merger was always going to be a major talking point at VMworld in Barcelona, given news of the deal was confirmed on the eve of this year's show. 

What was unclear, as attendees filed into the conference centre for the opening day keynote on Tuesday morning, was whether VMware's senior management team would be joining the discussion or not. 

As it turned out, delegates didn't have to wait long to find out if EMC-owned VMware would make reference to (what is currently billed as) the biggest merger in enterprise IT history. 

Around six minutes in, there were on-stage assurances from COO Carl Eschenbach about how the acquisition would have little impact on the way VMware operates, as it would remain a publicly-listed, independent entity, while the rest of EMC joins Dell in going private.

Then Michael Dell appeared, albeit via a pre-recorded segment, to reinforce this message, before briefly addressing how the combination of EMC and Dell's product portfolios should open up new opportunities for the firms in the hybrid cloud and software-defined datacentre era. 

During a post-keynote press Q&A, VMware's EMEA CTO Joe Baguley continued the discussion, inviting questions from the press on the topic too, with the assembled execs going into as much detail as they probably could, while the deal's T&Cs are being hammered out somewhere between Texas and Massachusetts. 

Opening up 
The firm's willingness to reference the merger was kind of surprising, though, given how quick most vendors are to shoot down M&A talk when faced with even the smallest whiff of them becoming a takeover target. 

But things would have got hugely awkward over the course of the week if no-one acknowledged the $67bn elephant in the room, and it was refreshing to see. 

That was certainly the view of the VMware User Group (VMUG), who told Ahead in the Clouds (AitC) that VMware CEO Pat Gelsinger popped into their annual VMworld Europe luncheon to personally assure them that - pre/post-merger - it is still very much business as usual for the vendor's customers. 

"Having someone like Pat attend and have an open, unscripted dialogue like that is a huge testament to the commitment from VMware to VMUG and that they consider us to be a vital part of their organisation," VMUG president Mariano Maluf told us at VMworld. 

"It speaks volumes about his commitment and gives us confidence that VMware will continue down the path it's been going through," he said. 

All in all, he added, the group's 121,000 members are feeling confident about what the future holds for VMware once Dell gets his hands on its parent company. 

"The fact VMware will remain a publicly traded company and independent signals a confidence on the part of the investors involved that VMware adds value to the industry and the technologies and solutions will continue that," he added. 

Maluf's comments were echoed by nearly everyone AitC spoke to at the show, with the general consensus being the deal is unlikely to cause much upheaval for those who've pitched their tents in the VMware camp, while those with closer ties to EMC might want to consider attaching a few guy ropes. 

The fact is, by taking steps to pre-empt what users were most likely to ask, and giving them a forum to air their views, VMware succeeded in ensuring the Dell-EMC merger didn't detract from everything else it announced at the show.

Singing from a different hymn sheet 
Before we sign off, however, it would be remiss of AitC not to reference one of the other big talking points of the show - aside from VMware's hybrid cloud and end user computing plans, of course. 

Sanjay Poonen, VMware's general manager of end user computing, treated the 10,000-strong crowd to not one, but two separate sing-alongs during his second day keynote. 

The first saw Poonen break into an impromptu, a cappella rendition of Let It Go from Disney's Frozen, after a handful of attendees responded to his question about who in the audience owned a BlackBerry, during his talk about VMware's enterprise mobile device management strategy.

He then went on to lead the crowd in a re-jig of Queen's 1977 hit We Will Rock You, which saw the lyrics changed to "End User Computing Will Rock You" instead. 

We know those lyrics don't really scan well, but Poonen looked pleased with the results, so far be it from us to rain (no pun intended) on his parade.

For anyone who hadn't managed to grab a coffee on the way to his 9am keynote, it was certainly a display that served to sharpen the senses far more than any caffeine fix could.

Safe Harbour: What are the alternatives for data-sharing cloud providers?


In this guest post, Rafi Azim-Khan, head of data privacy in Europe at legal firm Pillsbury Law, explains how the cloud provider community can side-step the European Court of Justice's Safe Harbour verdict.

The European Court of Justice (ECJ), in response to a case brought by Austrian student Maximilian Schrems against Ireland's Data Protection Commissioner, has confirmed the current Safe Harbour system of data-sharing between EEA states and the US is invalid. It is a conclusion that looks set to have a widespread economic impact, given just how many businesses rely on Safe Harbour to transfer and handle data in the US.

The Court has ruled that Facebook should not have been allowed to save Schrems' private data in the US and this is - essentially - a formal confirmation of what has been growing criticism of the scheme over a period of time.

The million dollar question is now: where does this leave US companies who heavily rely on Safe Harbour? And what about US cloud providers who are yet to build a European datacentre?

The facts of the matter

To re-cap, this case has arisen from proceedings before the Irish courts brought by Schrems, in which he challenged the Irish Data Protection Commissioner's decision not to investigate claims that his personal data should have been safeguarded against security surveillance by the US intelligence services when it was in the possession of Facebook.

The claim was brought in Ireland, as Facebook's European operations are headquartered there, but was referred up to the ECJ.

So, given the serious question marks that loom over the future of Safe Harbour and the threat of significant new fines under the imminent General Data Protection Regulation, what should US businesses, including cloud providers, look to be doing now to avoid having to process their data in the EU?

Handily, there is another legal mechanism that they can turn to.

Binding Corporate Rules (BCRs) are designed to allow multinational companies to transfer personal data from the EEA to their affiliates located outside of the EEA in a compliant manner.

BCRs are increasingly becoming a preferred option for those who have a lot of data flowing internationally and wish to demonstrate compliance, keep regulators at bay and prepare for a world without Safe Harbour.

Companies who put BCRs in place commit to certain data security and privacy standards relating to their processing activities and, once approved, the "blessed" scheme allows a safe environment within which data transfers can take place.

BCRs also have material long-term benefits in the sense that some upfront work, via preparing and submitting the application, should reduce risk of fines and undoubtedly position an applicant in line for a privacy "seal" once the new EU Data Protection Regulation is introduced.

Model contract clauses, which can also be used to "adequately safeguard" data transfers from Europe, also present themselves as a safer route to ensuring compliance compared to Safe Harbour as things stand. 

However, they do have a number of drawbacks compared to BCRs, including inflexibility, large numbers of contracts being required in large organisations and the need for regular updates.

Post-Safe Harbour: Next steps

In short, any US companies, whether big brands or smaller enterprises, that have existing EU offices, customers, marketing or business partners, as well as those which are yet to build an EU datacentre, would be well advised to reassess their procedures, policies and documents regarding how they handle data.

The storm of new laws, much higher fines and enforcement, with more due shortly when the final draft of the new EU Data Protection Regulation is published, means it would be a false economy not to act now and seek advice.  

Cloud 28+: What HP must do to win over the cloud provider sceptics


Boosting the take-up of cloud services across Europe has been the mission statement of both public sector and commercial organisations for several years now.

From the latter point of view, HP has been actively involved in this since the formal launch of its Cloud 28+ initiative in March 2015, which aims to provide European companies of all sizes with access to a federated catalogue that they can use to buy cloud services.

If you're thinking this sounds spookily like the UK government's G-Cloud public sector-focused procurement initiative, you would be right. The key principles are more or less the same, except the use of Cloud 28+ isn't limited to government departments or local authorities. It's open to all.

That message - during the two years that HP has been talking up its efforts in this area - doesn't seem to have reached everyone, though, particularly the providers one would assume would be a good fit for it.

Namely, the members of the G-Cloud community, who are already well-versed in how a setup like Cloud 28+ operates, and what is required to win business through it.

However, several key participants in the government procurement framework have privately expressed misgivings to Ahead In the Clouds about whether HP would welcome their involvement because they don't use its technologies to underpin their services.

Similarly, some said they weren't sure how they feel about hawking their cloud wares through an HP-branded catalogue, or if it would mean sharing details of the deals they do through Cloud 28+ with the firm.

The latter has been a long-held concern of cloud resellers, because - once the maker of the service you're reselling access to knows who's buying it - what's to stop them cutting you out and dealing with the customer direct?

HP assurance

All these points HP seemed intent on addressing during its Cloud 28+ in Action event in Brussels earlier this week, which saw the firm take steps to almost distance itself from the initiative it is supposed to be spearheading.

As such, there were protestations on stage from Xavier Poisson, EMEA vice president of HP Converged Cloud,  about how Cloud 28+ belongs to the providers that populate its catalogue, not to HP, and how its future will be influenced by participants.

The attitude seems to be, while HP may have had a hand in inviting people to the Cloud 28+ party, it's not going to dictate who should be invited, the tunes they should dance to or what food gets served. It's simply providing a venue and directing people how to get there, before letting everyone get on with enjoying the revelry.  

From a governance point of view, it won't be HP calling the shots. That will be the job of a new, independent Cloud 28+ board, which made its debut at the event.

On the topic of billing, the firm made a point of saying users won't be able to pay for services through Cloud 28+, and that it will - instead - rely on third-parties to handle the payment and settlement side of using the catalogue.

For those worried that being a non-user of HP technologies could preclude them from Cloud 28+, the news wasn't so good.

It emerged that providers will have one year from joining Cloud 28+ to ensure the applications they want to sell through the catalogue run on the Helion-flavoured version of OpenStack - a move HP said is designed to guard users against the risk of vendor lock-in.

Even so, given the firm spent the majority of the event trying to play down its role in the initiative, it's a stipulation that might leave an odd taste in the mouth of some would-be participants and users, especially in light of the uncertainty over just how open vendor-backed versions of OpenStack truly are.

HP said this is an area that could be reviewed later down the line by the Cloud 28+ governance board, but it will be interesting to see (once the initial hype around its launch dies down) if this emerges as a turn-off for some potential participants.

Opening up Europe for business

Admittedly, it would be short-sighted of them to dismiss joining Cloud 28+ out of hand on that basis, in light of the opportunities it could potentially open up for them to do business across Europe.

While the European Commission has stopped short of endorsing the initiative, it has acknowledged what Cloud 28+ is trying to do shares some common ground with its vision to create a Digital Single Market (DSM) across Europe, and might be worth paying attention to.

If Cloud 28+ emerges as the preferred method for the enterprise to procure IT, once the preparatory work to deliver the DSM is complete, for example, the Helion OpenStack requirement would pale in significance to the amount of business participants could gain through it.

Measuring the success of Cloud 28+

While Cloud 28+ is still under construction, it's only right the focus has been on the provider side of things, because - without them - there is no service catalogue.  

But it's what end users make of Cloud 28+ that will define its long-term success, despite HP's repeated boasts about how many providers (110 and counting) have signed up to date.

HP is preparing to go live with Cloud 28+ in early December at its Discover event in London, and Poisson said the "client-side" of it will become a bigger focus after that, so it's likely we'll hear some momentum announcements around end user adoption in the New Year.

But, until there is a sizeable amount of business transacted through the catalogue, or some other form of demonstrable end user interest in it, there will remain a fair few providers who won't see why it's worth their while to join.

Using big data to uncover the secrets of enterprise datacentre operations

cdonnelly

In this guest post, Frank Denneman, chief technologist of storage management software vendor PernixData, sets out why datacentre management could soon emerge as the main use case for big data analytics.

IT departments can sometimes be slow to recognise the power they wield, and the rise of cloud computing is a great example of this.

Over the last three decades, IT departments focused on assisting the wider business - automating activities to increase output or improve the consistency of product development processes - before turning their attention to automating their own operations.

The same needs to happen with big data. A lot of organisations have looked to big data analytics to discover unknown correlations, hidden patterns, market trends, customer preferences and other useful business information. 

Many have deployed big data systems, but few have turned them on their own datacentres to look for hidden patterns between new workloads and the resources they consume, and to see how this impacts current workloads and future capacity.

The problem is virtual datacentres are comprised of a disparate stack of components. Every system logs and presents whatever data the vendor deems appropriate.

Unfortunately, variations in the granularity of information, time frames, and output formats make it extremely difficult to correlate data and understand the dynamics of the virtual datacentre.

However, hypervisors are very context-rich information systems, and are jam-packed with data ready to be crunched and analysed to provide a well-rounded picture of the various resource consumers and providers. 

Having this information at your fingertips can help optimise current workloads and identify systems better suited to host new ones. 

Operations will also change, as users are now able to establish a fingerprint of their system. Instead of micro-managing each separate host or virtual machine, they can monitor the fingerprint of the cluster. 

For example, they can see how incoming workloads have changed the cluster's fingerprint over time, paving the way for deeper trend analysis of resource usage.
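To illustrate the idea, here is a minimal Python sketch of how a cluster-level "fingerprint" might be computed and compared as new workloads arrive. The metric names and utilisation figures are hypothetical, not taken from any particular monitoring product:

```python
from statistics import mean

def cluster_fingerprint(vm_samples):
    """Aggregate per-VM samples into one cluster-level 'fingerprint'.

    vm_samples: list of dicts with hypothetical 'cpu' and 'mem'
    utilisation percentages, one entry per virtual machine.
    """
    return {
        "avg_cpu": mean(s["cpu"] for s in vm_samples),
        "avg_mem": mean(s["mem"] for s in vm_samples),
        "vm_count": len(vm_samples),
    }

# Fingerprints taken before and after new workloads land on the cluster
before = cluster_fingerprint([{"cpu": 40, "mem": 55}, {"cpu": 35, "mem": 60}])
after = cluster_fingerprint([{"cpu": 70, "mem": 65}, {"cpu": 65, "mem": 70},
                             {"cpu": 60, "mem": 62}])

# The delta between fingerprints over time is the raw input to trend
# analysis: here, average CPU utilisation has drifted upwards
cpu_drift = after["avg_cpu"] - before["avg_cpu"]
```

The point is that the operator monitors one aggregate signature per cluster, rather than micro-managing each host or virtual machine individually.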

Information like this allows users to manage datacentres differently and - in turn - design them with a higher degree of accuracy. 

The beauty of having this set of data all in the same language, structure and format is that it can now start to transcend the datacentre. 

The dataset gleaned from each facility can be used to manage the IT lifecycle, improve deployment and operations, optimise existing workloads and infrastructure, leading to a better future design. But why stop there? 

Combining datasets from many virtual datacentres could generate insights that can improve the IT-lifecycle even more. 

By comparing facilities of the same size, or datacentres in the same vertical market, it might be possible to develop an understanding of the TCO of running the same VM on a particular host system, or storage system. 

Alternatively, users may also discover the TCO of running a virtual machine in a private datacentre versus a cloud offering. And that's the type of information needed in modern datacentre management. 

The enterprise benefits of making machine learning tools accessible to all

cdonnelly
In this guest post, Mike Weston, CEO of data science consultancy Profusion, discusses how Amazon's cloud-based push to democratise machine learning is set to benefit the enterprise.

Machine learning is the creation of algorithms that can interrogate and make predictions based on the contents of big data sets without needing to be rewritten for each new set of information. In a sense, it's a form of artificial intelligence.
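That "no rewriting" property shows up even in the simplest learning algorithm. In this Python sketch (ordinary least-squares line fitting, with made-up datasets), the identical routine learns from two unrelated sets of (x, y) pairs without being modified:

```python
def fit_line(points):
    """Fit y = a*x + b by ordinary least squares - the same routine
    works on any dataset of (x, y) pairs without being rewritten."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

def predict(model, x):
    """Use a fitted (a, b) pair to predict y for a new x."""
    a, b = model
    return a * x + b

# The identical code learns from two unrelated (hypothetical) datasets
sales_model = fit_line([(1, 10), (2, 20), (3, 30)])
traffic_model = fit_line([(1, 5), (2, 7), (3, 9)])
```

Platforms such as Amazon's wrap far more sophisticated algorithms than this, but the principle is the same: the model is learned from the data rather than hand-coded for it.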

The recent European launch of Amazon's machine learning platform has garnered a lot of attention and is designed so non-techies can use these tools to create predictions based on data.

Amazon's move follows Facebook's launch of a 'deep learning' lab in France to undertake research into artificial intelligence, particularly facial recognition. Both tech giants will compete with Microsoft's Azure machine learning service.

Clearly, most major tech companies are pitching their tent in the data science camp. The reason is quite simple: demand. 

Data science is quickly moving from a niche service used by a few enterprises to a must-have. Many business leaders are waking up to the fact that new technology like self-driving cars, the Internet of Things, smart cities and wearable devices are all powered or complemented by data science.

The business case for using data science techniques in areas such as retail, logistics and marketing is also increasingly easy to prove. Consequently, data scientists are in demand like never before. Unfortunately, as many data scientists will tell you, their skills are still fairly rare - part computer scientist, part statistician. We're all aware that there is an acute skills gap in the technology sector and in many ways data scientists are the poster child. 

With demand increasing for data science and the pool of data science talent struggling to keep up with it, tech giants like Amazon are naturally seeking to provide non-techies with the skills needed to do it themselves. 

It may sound counterintuitive for the CEO of a data science consultancy to welcome this move, but I'm a firm believer that data science has immense power to improve businesses, cities and people's lives in general.

If more people understand how to interrogate and use data to make informed decisions, the faster it will become an intrinsic part of how all businesses operate. Not only that, but the more repeatable tasks that can be undertaken by technology, the more time is freed up for data scientists to explore the information at their disposal more deeply and to innovate.

Addressing the big data skills gap
With the normalisation of data science as a business process or service, it should become more obvious and attractive for people to train in these techniques. This should eventually help plug the skills gap. 

Of course, the growth of data science platforms in Europe and the US won't, in the short-term, create an army of do-it-yourself data scientists capable of everything. Self-service software can only bring you so far. A great data scientist adds value to the data through analysis and interpretation - through asking 'why' and 'so what'. 

Highly-skilled data scientists are fundamental to the more complicated data science - uncovering profound insights from seemingly disparate data that radically change and improve how organisations relate to people. 

Nevertheless, the more data literate we all become, the better we will be at both using data and asking the right questions. Businesses generally don't suffer from a lack of data. The problem tends to be that those in decision-making positions do not understand what the data could reveal and therefore what problems could be solved. This means that a business can underestimate the knowledge it holds, fail to exploit all its sources of data, or fail to share information with people who could make better use of it.

Businesses that understand data science and can use self-service platforms and tools to undertake basic actions will become savvier at collecting, managing and analysing data. With experience should come an understanding of the full potential of data science and a willingness to experiment.

Amazon's self-service platform is not in and of itself going to create a revolution in data science. However, it represents the growth in businesses seeking to empower themselves to make better use of the information they hold. 

Like any science, data science is at its most exciting when it is testing the limits of what is possible. By experimenting, repeating and refining techniques, data science becomes much more effective. 

Whether a business employs its own data scientists or gets outside help, the more these specialists work with a company, the more they understand, the better they become at creating insights and solutions, and the more value a business can extract from its data.

VDI: Why desktop virtualisation has finally come of age

cdonnelly
In this guest post, David Angwin, marketing director for Dell Cloud Client Computing, claims the benefits of desktop virtualisation now far outweigh the risks.

Desktop virtualisation (VDI) is a technology that has never been fully appreciated, despite promising benefits such as lower maintenance costs, greater flexibility and increased reliability.  

Many companies have taken advantage of server and storage virtualisation over the years, but desktops have been overlooked, and physical desktops remain the norm. 

While organisations are willing to invest heavily in virtualised back-end infrastructure, they may feel VDI will not provide much additional value, or that the drawbacks and risks outweigh the benefits. But this is not the case. 

Principles of desktop virtualisation
It is often thought VDI is about creating multiple virtual desktops on one device, but in reality the user's desktop profile is stored on the host server and then optimised for the local device the user is logging on from, giving them a consistent experience tailored to that device.

Many companies have deployed various access devices to consume VDI and are reaping the benefits. These include:

Thin Client: This is where all processing power and storage is in the datacentre, and is a very cost effective way of delivering desktops and applications to a mass audience as the devices are relatively low cost and typically use much less energy compared with standard desktops. 

Cloud PC: This is essentially a PC without a hard drive that offers full performance and is a good fit for organisations running a small datacentre. The operating system is sent to the PC from the server when the user requests a log on. 

Zero Clients: A zero client is designed for use on networks with a virtualised back-end infrastructure, and is able to offer all of the benefits of thin clients, but with added compute power.

With the right client and back-end infrastructure, zero clients can help to optimise working conditions and cut IT running costs, as there is less equipment on the desk. 

Desktop Virtualisation Benefits
VDI does more than provide low-cost desktops to a mass audience; it can also help create new business opportunities in the following areas:

•Remote working: VDI enables organisations to work securely with companies in different locations around the world. By setting up remote workers on the network, users can access data securely, reducing the potential for data theft, corruption or loss, as the data never leaves the datacentre.
•Business agility: With faster access to data, organisations are able to react intelligently to changing market conditions.
•Windows migrations: Physical desktop set-ups can create challenges for IT departments when new operating systems are released. Traditionally, IT administrators needed to visit each desktop in the organisation to make the relevant updates. With VDI, the estate can be updated centrally, simplifying software patches and OS upgrades and cutting the associated cost and time.

VDI brings end users and organisations a wide range of benefits, including ongoing cost savings and compliance gains. Companies in all business sectors can realise a stable and positive return on investment, while providing a desktop environment that gives users quick, easy and secure access to everything on the network to enable productivity.

The invisible business: Mobile plus cloud

cdonnelly

In this guest post Amit Singh, president of Google for Work, explains why enterprises need to start adopting a mobile- and cloud-first approach to doing business if they want to remain one step ahead of the competition.

One of the most exciting things happening today is the convergence of different technologies and trends. In isolation, a trend or a technological breakthrough is interesting, at times significant. But taken together, multiple converging trends and advances can completely upend the way we do things.

Netflix is a classic example. It capitalised on the widespread adoption of broadband internet and mobile smart devices, as well as top-notch algorithmic recommendations and an expansive content strategy, to connect a huge number of people with content they love. The company just announced that it has more than 65 million subscribers.

Other examples of new and improved approaches to existing problems abound. As Tom Goodwin, SVP of Havas Media, said recently: "Uber, the world's largest taxi company, owns no vehicles. Facebook, the world's most popular media owner, creates no content. Alibaba, the most valuable retailer, has no inventory. And Airbnb, the world's largest accommodation provider, owns no real estate. Something interesting is happening."

Each of these companies has capitalised on a convergence of various trends and technological breakthroughs to achieve something spectacular.

Some of the factors I see driving change include exponential technological growth and the democratisation of opportunity, as well as the emergence of public cloud platforms that are fast, secure and easy to use. Together, these trends underpin a powerful formula for rapid business growth: mobile plus cloud.

We know the future of computing is mobile. There are 2.1 billion smartphone subscriptions worldwide, and that number grew by 23% last year.

We spend a lot of time on our mobile devices. Since 2014, more internet traffic has come from mobile devices than from desktop computers. Forward-looking companies are building mobile-first solutions to reach their users and customers, because that's where we all are.

On the backend, the cost of computing has been dropping exponentially, and thanks to the cloud anyone now has access to massive computing and storage resources on a pay-as-you-go basis. Companies can get started by hosting their data and infrastructure in the cloud for almost nothing.

Hence mobile plus cloud. You can use mobile platforms to reach customers while powering your business with cloud computing. You can build lean and scale fast, and benefit automatically from the exponential growth curve of technology.

As computing power increases and costs decrease, cloud platforms grow more capable and the mobile market expands. In this state, technological change is an opportunity.

How cloud challenges the incumbents to think different

Snapchat is one of the best examples of how this can work. It was founded in 2011. The team used Google Cloud Platform for their infrastructure needs and focused relentlessly on mobile. Just four years later, Snapchat supports more than 100 million active users per day, who share more than 8,000 photos every second.

The mobile plus cloud formula is exciting, but it also poses challenges for established players. According to a study by IBM, some companies spend as much as 80% of their IT budgets on maintaining legacy systems, such as onsite servers.

For these companies, technological change is a threat. Legacy systems don't incorporate the latest performance improvements and cost savings. They aren't benefitting from exponential growth, and they risk falling behind their competitors who are.

This can be daunting, since it's not realistic for most companies to make big changes overnight.

If you run a business with less than agile legacy systems, here's one practical way to respond to the fast pace of technological change: foster an internal culture of experimentation.

The cost of trying new technologies is very low, so run trials and expand them if they produce results. For example, try using cloud computing for a few data analysis projects, or give a modern browser to employees in one department of the company and see if they work better.

There are no "one size fits all" solutions, but with an open mind, smart leaders can discover what works best for their team.

It's important to try, especially as technology becomes more capable and more of the world adopts a mobile plus cloud formula. Those who experiment will be best placed to capitalise on future convergences.

Uber's success suggests enterprises need to think like startups about cloud

cdonnelly
Cloud-championing CIOs love to bang on about how ditching on-premise technologies helps liberate IT departments, as it means they can spend less time propping up servers and devote more to developing apps and services that will propel the business forward. 

It's a shift that, when successfully executed, can help make companies more competitive, as they're nimbler and better positioned to quickly respond to market changes and evolving consumer demands. 

But it takes time, with Gartner analyst John-David Lovelock telling Computer Weekly this week that companies take at least a year to get up and running in the cloud from having first considered taking the plunge. 

"It takes companies about 12 months to say, 'this server is more expensive or this storage array is too expensive so we should go for Compute-as-a-Service or Storage-as-a- Service instead'," he said. 

"Making that shift within a year is not something they can traditionally do if they weren't already on the path to the cloud." 

Future development 
Companies preparing to make such a move can't afford to be without a top-notch team of developers, if they're serious about capitalising on the agility benefits of cloud, according to Jeff Lawson, CEO of cloud communications company Twilio. 

"Every company has to think of themselves as software builders or they will probably become irrelevant. Companies are building software and iterating quickly to create great experiences for customers, and they're going to out-compete those that aren't," he told Computer Weekly. 

Lawson was in London this week to support his San Francisco-based company's European expansion plans, which have already seen Twilio invest in offices in London, Dublin and Estonia. 

In fact, the company claims to have signed up 700,000 developers across the globe, and that one in five people across the world have already interacted with an app featuring its technology. 

The firm's cloud-based SMS and voice calling API is used by taxi-hailing app Uber to send alerts to customers when the drivers they've booked are nearby, for example, and similarly by holiday accommodation listing site Airbnb.
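The pattern described - an app triggering an SMS through a REST API when an event occurs - might look something like the following Python sketch. The message wording is our own invention, and while the commented-out API call follows the shape of Twilio's published Python helper library, the account credentials, phone numbers and driver details are all hypothetical placeholders:

```python
def driver_nearby_alert(driver_name, minutes_away):
    """Compose the notification text an app might send when a
    booked driver is approaching (wording is hypothetical)."""
    return (f"{driver_name} is about {minutes_away} min away. "
            "Please head to your pickup point.")

body = driver_nearby_alert("Alex", 3)

# With the Twilio Python helper library installed, the app would then
# hand the text to the SMS API - the details below are placeholders:
#
#   from twilio.rest import Client
#   client = Client("ACCOUNT_SID", "AUTH_TOKEN")
#   client.messages.create(to="+447700900000",
#                          from_="+15005550006",
#                          body=body)
```

The appeal for a company like Uber is that the entire notification pipeline lives behind a single API call, with no SMS infrastructure to own or operate.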

Both these companies are regularly lauded by the likes of Amazon Web Services, EMC and Google because they're both popular services that are said to be run exclusively on cloud technologies. 

Neither has to suffer the burden of having weighty, legacy technology investments eating up large portions of their IT budgets. For this reason, enterprises should be looking at them for inspiration about how to make their operations leaner, meaner and more agile, it's often said. 

The speed with which Uber and Airbnb have seemingly become household names highlights - to a certain extent - why the move to cloud is something the enterprise can't afford to put off.

Simply because, in the time it takes them to get there, a newer, nimbler, born-in-the-cloud competitor might have made a move on their territory and it may be harder to outmanoeuvre them with on-premise technologies.

What the enterprise can learn from Google's decision to go "all-in" on cloud

cdonnelly

Google has spent the best part of a decade telling firms to ditch on-premise productivity tools and use its cloud-based Google Apps suite instead. So, the news that it's moving all of the company's in-house IT assets to the cloud may have surprised some.

Surely a company that spends so much time talking up the benefits of cloud computing should have ditched on-premise technology years ago, right?

Not necessarily, and with so many enterprises wrestling with the what, when and how much questions around cloud, the fact Google has only worked out the answers for itself now is sure to be heartening stuff for enterprise cloud buyers to hear.

Reserving the right

The search giant has been refreshingly open in the past with its misgivings about entrusting the company's corporate data to the cloud (other people's clouds, that is) because of security concerns.

Instead, it prefers employees to use its online storage, collaboration and productivity tools, and has shied away from letting them use services that could potentially send sensitive corporate information to the datacentres of its competitors.

This was a view the company held as recently as 2013, but now it's worked through its trust issues, and made a long-term commitment to running its entire business from the cloud.

So much so, the firm has already migrated 90% of its corporate applications to the cloud, a Google spokesperson told the Wall Street Journal.

What makes this really interesting is the implications this move has for other enterprises. If a company the size of Google feels the cloud is a safe enough place for its data, surely it's good enough for them too?

Particularly as Google has overcome issues many other enterprises may have grappled with already (or are likely to) during their own move to the cloud.

Walking the walk

What the Google news should serve to do is get enterprises thinking a bit more about how bought-in to the idea the companies whose cloud services they rely on really are.

While they publicly talk up the benefits of moving to the cloud, and why it's a journey all their customers should be embarking on, have they gone (or are they in the throes of going) on a similar journey themselves?

If not, why not - and why should they expect their customers to do so? If they are (or have), then they should talk about it. Not only will doing so add some much-needed credibility to their marketing babble, it will also show customers they really do believe in cloud, and aren't just talking it up because they've got a product to sell.