Containing the application security problem

Bob Tarzey
The benefit of containers for the easy, efficient and portable deployment of applications has become more and more apparent in recent years, especially where application development and delivery is continuous through a DevOps-style process. The trend has been helped by the availability of open source container implementations, the best known of which is Docker, as well as proprietary ones such as VMware's ThinApp.

Whereas a virtual machine has all the features required of a full physical device, containers can be limited to just those needed for the application itself; for example, if a given application has no need to store and retrieve local data then no disk i/o functions need be included. This makes container-deployed applications portable, compact and efficient; many application containers can be deployed inside a single VM. There are also two big implications for application security.

The first is all about how secure the DevOps process and container-based applications are in the first place. Such development often relies on a series of pre-built software layers (or components), about which the developer may know little; their security cannot go unchecked. Furthermore, when deployment is continuous, security checks must be too. This has led to the rise of new security tools focussed purely on container security. One start-up in this area that has been attracting recent attention is Twistlock.

Twistlock's Container Security Suite has two components. First there is image hygiene, the checking of container-based applications before they go live; for example, scanning for known vulnerabilities and unsafe components, and controlling policy regarding deployments. Twistlock has announced a partnership with Sonatype, a software supply chain company focussed on the security and traceability of the open source and other components that end up in containerised applications. Black Duck is another software supply chain vendor providing similar capabilities.
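
To make the idea of image hygiene concrete, here is a minimal, hedged Python sketch of the kind of check such tools perform: comparing the packages baked into an image against a feed of known-vulnerable versions. The manifest, feed and version strings are invented for illustration and are not Twistlock's or Sonatype's actual data or APIs.

```python
# Minimal sketch of an "image hygiene" check: flag image packages whose
# versions appear in a feed of known-vulnerable releases. The manifest and
# vulnerability feed below are illustrative stand-ins, not real data.

KNOWN_VULNERABLE = {
    "openssl": {"1.0.1f", "1.0.1g"},   # hypothetical vulnerable versions
    "bash": {"4.3.0"},
}

image_manifest = {                     # packages baked into the image
    "openssl": "1.0.1f",
    "bash": "4.4.0",
    "curl": "7.50.1",
}

def scan_image(manifest, vuln_feed):
    """Return a list of (package, version) pairs that match the feed."""
    findings = []
    for package, version in manifest.items():
        if version in vuln_feed.get(package, set()):
            findings.append((package, version))
    return findings

if __name__ == "__main__":
    for package, version in scan_image(image_manifest, KNOWN_VULNERABLE):
        print(f"Policy violation: {package} {version} is a known-vulnerable build")
```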

Second is run time protection, detecting misconfigurations and potential compromises and, if necessary, preventing a container from being launched in the first place or killing active ones that are considered to have become unsafe. Twistlock has just announced a partnership with Google, where its suite is being closely integrated with the Google Cloud Container Engine. As Google pointed out to Quocirca, problems with deployed applications are inevitable; you have to have checks in place.

Of course, there is nothing new about checking software code for vulnerabilities before deployment and scanning applications for problems after deployment. Broader software security specialists such as Veracode, White Hat, HP Fortify and IBM AppScan have been doing so for years, using the terms SAST and DAST (static/dynamic application security testing) for pre- and post-deployment checking respectively. However, they will need to catch up with the agility of those that have set out to protect the emerging requirement of the dynamic DevOps and containerised approaches. Twistlock and its ilk are potentially ripe acquisition targets as venture investors look for a return.

The second implication for security is that containerised applications can themselves improve software safety through their limited access to resources. If all an application needs to do is crunch numbers, then give it disk i/o but no network access; that way it can read data from disk but not exfiltrate it, even when compromised.
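
As a hedged sketch of that principle, the snippet below uses the Docker SDK for Python to start a container with no network attachment and a read-only root filesystem; the image name and volume path are placeholders, and the exact restrictions chosen would depend on the workload.

```python
# Sketch: launch a "number-crunching" container with no network access and a
# read-only root filesystem, so a compromise cannot easily exfiltrate data.
# Requires the Docker SDK for Python (pip install docker) and a running daemon;
# "numbercruncher:latest" and /data/input are placeholder names.
import docker

client = docker.from_env()

container = client.containers.run(
    "numbercruncher:latest",        # hypothetical image
    detach=True,
    network_mode="none",            # no network interfaces inside the container
    read_only=True,                 # root filesystem mounted read-only
    volumes={"/data/input": {"bind": "/input", "mode": "ro"}},  # data in, read-only
)
print(container.id)
```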

Some have taken this approach to extremes; for example, two vendors, Bromium and Invincea, use a container-like approach to protect user end points. Bromium isolates every task on a Microsoft Windows device so, for example, a newly opened web page cannot write a drive-by payload to disk as it is not given the access it needs to do so. Some may question the overheads of doing this, but it certainly increases security. Menlo Security claims to keep such overheads down by containing just higher risk activities such as opening emails and web pages. Another vendor, Spikes Security, focusses just on web browsing; its approach is to contain pages on a proxy server before sending on clean content.

Containerisation looks like it is here to stay, helping to enable continuous, agile software development. This throws up security challenges but also helps solve some of them.

FireEye - Ninja of Incident Response

Bernt Ostergaard

When a bank is attacked by armed robbers intent on stealing money, public sympathy is with the bank. When the same robbers return over the Internet to steal the bank's customer information, public sentiment turns against the bank. Similarly, network security companies providing DDoS protection or encryption services to corporate sites are deemed good, but when their services are employed to protect contentious sites, they are deemed a menace. Security posture must be accompanied by ready-laid plans for disaster recovery and re-establishing customer trust.

Despite spending some $30bn annually on Incident Response (IR), every major corporation and government institution worldwide has been hit by severe and successful hacker attacks stealing money, customer information, intellectual property, strategic data and so on. Companies incur significant financial losses (estimated this year by one analyst firm to be around 1.6% of annual revenues), along with dents in customer trust and lost competitive edge. Only one-third of these breaches are discovered by the victimised company itself, and perpetrators have on average been inside the victim organisation for more than six months before discovery.

This is the market that FireEye addresses: rapid discovery and prevention of attacks and efficient cleansing of corporate sites once an attack has been discovered. FireEye emphasises that IR is not just about discovering and preventing attacks, it's also about preparing a company to handle breaches and thus minimising the damage in the aftermath of a successful attack.

FireEye - An American in Europe

At its first European analyst briefing event in London, FireEye's EMEA management team and key technologists provided an in-depth view into the murky underworld of Internet crime, and the tools and procedures FireEye uses to protect its 3,500 customers worldwide. It was one of those presentations where you are not provided with a copy of the slides and analysts are reminded repeatedly not to mention company names or customer details.

FireEye is today a red-hot Internet security company entrusted with IR security management in thousands of global corporates and scores of national governments. The company is loaded with cash, talent, products and services underpinning its 400% growth curve from 2012-15. The hockey stick really took off after the company acquired Mandiant in late 2013. But the company remains a well-kept secret in the European corporate world.

The FireEye global platform, distributed across eight regional SOCs (Security Operations Centres), inspects 50 billion objects daily, with some notable successes.

FireEye claims to have discovered 16 of the latest 22 zero-day exploits - more than all the other IR providers together. The SOCs and the company's 300 analysts look for interesting and unusual activities on the wire, using very aggressive algorithms to identify them. In that huge 'haystack' of data, the platform picks out any piece of hay that even remotely resembles a needle, and then determines whether this needle (among the many other needles found in the haystack) is potentially harmful to customers.

The FireEye 20:20 Focus

FireEye builds its business on three pillars:

  • Technology - primarily based on its analytical engine that inspects gigabytes of network traffic, using virtual machine introspection (VMI), developed by company founder Ashar Aziz back in 2004, as the analysis mechanism.
  • Intelligence Gathering - from its incident handling, its wide-flung sensor net and close contacts with customer CIRTs (Computer Incident Response Teams).
  • Security Expertise - 10 years' experience with APT attacks.


Classic on-premise antimalware software must reliably block malware upon entry; if anything slips through, the on-premise defences themselves can be deactivated. With VMI, by contrast, the antimalware runs on a host outside the customer data centre and so cannot be deactivated by malware that has subverted the customer's systems.

What are the challenges facing FireEye today?

Lack of corporate presence, especially in Europe, is the top business expansion priority. Being a relatively young player in the IT security space, FireEye opted in its early years to concentrate on the US government sector. Governments remain the largest vertical for the company today, with 77 governments on its client list.

Better funding after 2009 and the Mandiant acquisition in late 2013 helped FireEye to expand globally and open up more corporate business.

But the company still lags in corporate IR recognition behind the major hardware vendors (HP, IBM, Cisco, Dell and Fujitsu), global telcos (Verizon, Telefonica, BT, T-Systems), major system integrators (Cap Gemini, CGI, Atos, TCS) and a security vendor (Symantec).

One obvious step is to build more alliances with leading service providers (telcos and SIs) that have strong corporate ties and are willing to launch 'FireEye Inside' security services.

Another option is to put more emphasis on its FireEye-as-a-Service offering which allows FireEye to sell services to the mid-market corporate segment, predominant in Europe.

Then there are the FireEye product names that are completely anonymous: ETP, NX, HX, FaaS, TAP - neither exciting nor meaningful - unless you are already on the inside. FireEye needs to emerge from its own shell.

But of course asking a ninja to step into the glare of wider public recognition requires careful consideration.

A simpler online life: trusted use of your social identity

Bob Tarzey
We are being increasingly asked to use our established social identities as a source of trust for communicating with a range of online services. Is this a good idea, is it safe and, if we go ahead, which ones should we choose to use? This needs consideration from both the consumer and the supplier side.

It is a good idea for consumers, if it makes our online lives both more convenient and secure. Having a smaller number of identities with stronger authentication meets those objectives. Using an established social identity can achieve this, if we trust it. To some extent this will depend on what service we want to access; for example, you may find it more acceptable to use PayPal, which you already trust with financial information, to identify yourself to an online retailer, than Twitter.

A consideration should be whether the social source of identity in question has some form of strong authentication. Google, Yahoo, Microsoft, Twitter, PayPal, LinkedIn and Facebook, for example, all do, enabling you to restrict the use of your identity relating to their service to certain pre-authorised devices. This is usually enabled using one-time keys delivered by SMS (text message), a separate independent channel of communication. The trouble is that this is usually an optional layer of security which many do not switch on, perhaps due to perceived inconvenience. If we are going to use certain social identities for more than just accessing the social network in question, we should get used to this concept.
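
The one-time codes behind this kind of step-up authentication are typically generated with an HMAC-based scheme along the lines of HOTP (RFC 4226) and then delivered out of band, for example by SMS. The Python sketch below shows the mechanism only; the shared secret and counter are illustrative, not any particular provider's implementation.

```python
# Sketch of HOTP-style one-time code generation (RFC 4226), the kind of code
# that is then delivered out of band, e.g. by SMS. Secret and counter are
# illustrative only.
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """Derive a truncated numeric one-time code from a shared secret and counter."""
    msg = struct.pack(">Q", counter)                      # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

if __name__ == "__main__":
    print(hotp(b"example-shared-secret", counter=42))     # six-digit one-time code
```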

With regard to online suppliers, Quocirca published a research report in June 2015 (Getting to know you, sponsored by Ping Identity) which included data on the perceived trust of various social networks as a source of identity. First, it should be said that, overall, compared with other sources of identity, social identities are the least trusted. Just 12% considered them highly trustworthy, compared to 23% for identities provided from government databases and 16% for those from telco service providers; 53%, 22% and 32% respectively considered these sources untrustworthy. Clearly those proposing social identity as a source of trust have some way to go to overcome the reticence.

That said, there is huge variation in the level of trust placed in the different potential providers of social identity, and it varies depending on whether the organisation involved is consumer-facing or dealing mainly with business users (non-consumer-facing). For all, the top three most trusted were clear: Microsoft, PayPal and Amazon. For Microsoft this can perhaps be put down to long-standing familiarity and the fact that it is already the most highly trusted source of identities in business through Active Directory. For PayPal and Amazon it will be because they already deal with consumers' money and have established the trust of business over time to do so.

Other social identities are less trusted by businesses. Consumer-facing organisations find Facebook more acceptable than their non-consumer-facing counterparts; with LinkedIn it is the other way around. Google is equally trusted by both, Yahoo less so and Twitter is trusted by few.

To make use of social identities, businesses need to do two things. First they need to ensure their customers and prospects have a degree of choice. In most cases it will not make business sense to dictate that all users must have a Facebook account or be on Twitter before they can transact with you. The good news is that vendors of social identity broking products sort all this out for you and even offer local account creation if a consumer does not want to use social login. The three best known are probably Janrain, Gigya and LoginRadius. The consumer chooses the identity they want to use, the broker authenticates it and passes credentials on to the provider of the services to be accessed. Janrain told Quocirca that its Social Login product inherits any strong authentication that users have put in place.

The provider must then decide what resources are to be made available. For example, a standard customer dealing with a travel agent may need access to separate systems for airline and hotel booking; an executive customer may also get access to a concierge service. This can all be decided by a single sign-on (SSO) system, an area where Quocirca's research shows many are now investing in new capabilities.
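
A minimal sketch of the kind of entitlement decision an SSO layer makes once the brokered identity has been authenticated might look like the following; the customer tiers and back-end system names are invented for illustration.

```python
# Sketch: once a brokered social identity has been authenticated, an SSO layer
# maps the user's tier to the back-end systems they may reach. Tier names and
# systems are hypothetical.
ENTITLEMENTS = {
    "standard": {"airline_booking", "hotel_booking"},
    "executive": {"airline_booking", "hotel_booking", "concierge"},
}

def allowed_systems(customer_tier: str) -> set:
    """Return the set of systems a signed-in customer of this tier may access."""
    return ENTITLEMENTS.get(customer_tier, set())

if __name__ == "__main__":
    print(allowed_systems("executive"))   # includes the concierge service
    print(allowed_systems("standard"))    # booking systems only
```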

In particular they are turning to on-demand services (software-as-a-service/SaaS). Consumer-facing organisations are especially likely to be doing so. Providers include Ping Identity (the sponsor of Quocirca's research), Intermedia, Okta, OneLogin and Symplified. IBM, CA, Symantec and Dell all now have SaaS-based SSO systems as well.

For many, the broader success of a re-vamped identity and access management (IAM) strategy will be the ability to manage these social identities alongside other sources of identity, for example, those of employees and partners. There will be some common resources, and therefore policies that cover different groups, and the need for common governance and compliance reporting across all; that requires federated identity management. Many of the leading identity vendors now support federated identity management; some, such as Radiant Logic, specialise in it.

Furthermore, both on-premise and cloud-based IAM requirements may need linking. Many vendors offer on-premise and cloud-based product pairs; for example Ping Identity's PingFederate and PingOne, IBM's Security Access Manager and its Cloud Identity Service, CA Identity Manager and CA Identity Manager-SaaS, and SailPoint's IdentityIQ and IdentityNow.

As the range of end-users organisations need to deal with continues to expand, so do the options for sourcing and authenticating their identities and the products available to enable and manage them.

Quocirca's report, Getting to know you, is free to download HERE

Digital Disruption: Future Opportunities for the Print Industry

Louella Fernandes

Printer vendors are having to make major changes to their business models to sustain their leadership and relevance. Long term survival will depend on their ability to adapt to market disruption through both innovation and building relationships with complementary product or technology providers.

The print industry, like many industries, is on the brink of significant change. Market disruption, characterised by intense competition, more demanding customers and a constantly shifting technological landscape, is threatening the legacy hardware-driven business. As the print industry struggles with declining print volumes, hardware commoditisation, lower margins and sustaining growth, vendors are increasingly re-examining the structure of their businesses and looking for ways to deliver better financial performance.

As such, the industry is poised for a wave of acquisition and restructuring as vendors look to adapt to new market demands and shed assets that no longer meet strategic needs. Lexmark and Xerox are the latest to declare that they are exploring strategic options. While hardware companies have typically relied on earnings growth to deliver shareholder value, shrinking legacy hardware markets have seen revenues falter, leading to acquisitions in the software and services space.

Lexmark's bid to expand its enterprise software presence began with its acquisition of Perceptive Software in 2010. It has since made 13 software-related acquisitions, the most recent being Kofax for $1bn. Meanwhile Xerox acquired ACS in 2010 to build its business process outsourcing service capabilities. This has paid off for Xerox, with services now accounting for 57% of its revenue. Speculation is now rife as to whether Lexmark's hardware business will be acquired, or if and when Xerox will split into two separate technology and services businesses - in a similar way to HP's split into HP Inc. and Hewlett Packard Enterprise.

Whatever the outcome, the market is undoubtedly set for consolidation. All vendors are navigating the same path and trying to understand where the new markets lie - the cloud, mobile, big data and the Internet of Things. While some vendors, such as HP and Ricoh, are working to commercialise their 3D printing technology, this is for now a relatively nascent market.

The shifting business landscape may be daunting, but there are some key opportunities for print manufacturers to maintain or even enhance their competitive positions:

  • Adapting to the "as-a-service economy". The consumer preference for services over products and subscriptions over purchases is permeating into the business market. This is driven by increasing customer demand for flexibility that will allow them to take advantage of new technologies. With an as-a-service model, customers are not burdened by significant upgrade costs and can more accurately estimate the on-going cost of access to technology. Managed Print Services (MPS) is already an established service model in the market, offering a lucrative recurring services revenue model along with increased customer retention long after the printer hardware sale. While the MPS market is relatively mature in the enterprise space, there are further opportunities to tap into the largely under-penetrated SMB market. For the channel, digital services around printer device diagnostics and predictive/preventative maintenance have significant untapped potential. MPS vendors should drive further innovation in their engagements around cloud delivery, security and mobility. These are key enablers, not only for the as-a-service economy but also for digital transformation.
  • Driving the digital transformation journey. Despite talk of its demise, paper remains a key element of the connected and collaborative office workplace and still plays a critical role in the business processes of many organisations. However, paper bottlenecks can hinder business productivity and efficiency. Print vendors are uniquely positioned to connect the paper and digital worlds and are developing stronger expertise in workflow solutions and services. In many cases leveraging investments in Smart MFPs, which have evolved to become sophisticated document processing platforms, provides vendors with an opportunity to maximise the value of their hardware offerings. Vendors need to change legacy perceptions of their brand and be accepted as a trusted partner in the enterprise digitisation journey. Business process optimisation and workflow capabilities will become a key point of differentiation for vendors in the industry, requiring a balanced hardware, software and service portfolio.
  • Exploiting the Internet of Things (IoT). All printers are things and the connected Smart MFP is part of the IoT landscape. Vendors can exploit the enormous amount of data generated to monitor actual customer product and service usage. This data enables manufacturers to deliver better service performance through predictive data analytics (think proactive service and supplies replenishment), and by collecting information about customer usage of products or services, vendors can improve product design and accelerate innovation. Developing strategic partnerships with open technology vendors can also pave the way for seamless integration of printers/MFPs with mobile devices and drive the development of a broader mobile solutions and services ecosystem.
  • Expanding high-value print services. Notably, this year some online brands, such as Net-a-Porter and Airbnb, have expanded their brands into print. In fact, print launches amongst independent publishers are at a 10-year high. Print's tangibility and durability, its credibility and trust, can set it apart from the noisy, cluttered online landscape. Research has shown that readers are more likely to retain information on printed material, leading to higher engagement levels. Many of the traditional print vendors can leverage their own or third-party hardware (including print, visual and display signage technology), services and tools to develop cross-media channel communications. Partnerships and alliances with technology vendors in this space will enable print vendors to participate in both the online and offline customer communications space.

The print industry cannot afford to rest on its laurels and must be mindful of the speed of the dramatic transformation experienced in other industries. Consider how Salesforce, Amazon Web Services and even Uber have rewritten the rules of their markets. Is there potential for a similar disruptive force in the largely closed and proprietary print industry? Disruption may not come from traditional competitors but from those outside the industry. To adapt and thrive, the industry must become more open, expand partnerships outside the industry and continuously innovate. This means creating new products and/or channels and engaging customers, partners and employees in new ways. Ultimately, the question remains: is the print industry ready to disrupt itself?

Read more opinion articles on the print industry at

TARDIS analysis - X-Determinate (and Y, Z and T)

Clive Longbottom

Let me provide you with some data.  100111010001.

There - that's useful isn't it?  OK - it's not; but that's the problem with data.  In itself, it has no value - it only starts to deliver value when it is analysed and contextualised.

There is the oft-used phrase of 'drowning in data', and, to paraphrase Coleridge, for many organisations it is now a case of 'data, data everywhere, nor any stop to think'.  All too often data is collected, but is then not used due to a perceived lack of capability in dealing with it in a satisfactory manner. 

There is a need to deal with the 'V's of big data analysis - you may be dealing with high Volumes of a large Variety of data types, at high Velocity - with the Veracity and Value of the data being suspect.

The question then is - how does your organisation extract the value from that data that enables you to move up the strategic insight pyramid from a sea of data, through to a pool of information, on to a puddle of knowledge that gives the drop of wisdom that creates that strategic insight that adds to the organisation's bottom line?

Effective analysis of that sea of data needs to be centred on knowing when and where an activity occurred.  Having access to this data allows for trend analysis (using the time variable) and also for spatial analysis.  By combining the two, predictive analysis can be brought to bear that can add massive value to an organisation.

Let's consider a retail organisation.  It has collected lots of data about its customers through the use of loyalty scheme cards.  It therefore knows what its customers bought from it - and should also have when these items were bought, and where from.  It should also have enough details about the customers - via their post codes, for example - to be able to position them on a map and assess their socio-economic status as well.

The retailer can therefore look at purchasing cycles of certain foodstuffs - just when do customers start buying strawberries, and do they buy cream at the same time?  This basic analysis can avoid bringing in too many high-price early strawberries before customers want them, and can avoid stocking too many late strawberries when the season falls away - as well as avoiding overstocking with cream.
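
As a hedged illustration of that kind of purchase-cycle analysis, the short pandas sketch below counts weekly strawberry and cream purchases from an invented, loyalty-card-style transaction log; real data would obviously be far richer.

```python
# Sketch: weekly purchase counts for strawberries and cream from a loyalty-card
# style transaction log, to see when the season starts and whether the two
# products move together. The data below is invented for illustration.
import pandas as pd

transactions = pd.DataFrame({
    "date": pd.to_datetime([
        "2015-05-04", "2015-05-11", "2015-05-12", "2015-06-01",
        "2015-06-02", "2015-06-02", "2015-06-08", "2015-06-08",
    ]),
    "product": ["strawberries", "strawberries", "cream", "strawberries",
                "cream", "strawberries", "strawberries", "cream"],
})

weekly = (transactions
          .groupby([pd.Grouper(key="date", freq="W"), "product"])
          .size()
          .unstack(fill_value=0))

print(weekly)        # purchases per product per week
print(weekly.corr()) # do strawberry and cream sales track each other?
```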

The retailer can also create heat maps of where its customers live - and can identify where it would make the most sense to build another outlet - or where closing one would have the least impact on customer loyalty. 

The above spatial analysis is only looking at two dimensions (post codes or other map co-ordinates) along with time.  In many other cases, three dimensions are a necessity.

Consider air traffic control.  It has to know to the split second where all the planes under its control are.  However, just having an X and a Y (along with a T) coordinate to show the plane's position at a specific time on a 2-dimensional system is useless - are these two planes at similar X,Y coordinates at the same T going to crash?  Not if they are 1,000 m apart in the Z dimension.
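
The arithmetic is simple enough to sketch: the hedged example below checks separation across X, Y and Z at a common time T, using invented positions.

```python
# Sketch: two aircraft can share similar X,Y positions at the same time T and
# still be safely separated in Z. Positions are illustrative, in metres.
import math

def separation(p1, p2):
    """Euclidean distance between two positions of equal dimension, in metres."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p1, p2)))

# Same X,Y at the same instant, 1,000 m apart vertically.
plane_a = (52_100.0, 10_400.0, 9_000.0)
plane_b = (52_100.0, 10_400.0, 10_000.0)

print(separation(plane_a, plane_b))            # 1000.0 m - no conflict in 3D
print(separation(plane_a[:2], plane_b[:2]))    # 0.0 m - a 2D view would flag a collision
```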

Similarly, tracking items or entities across time and three dimensions can help in emergency situations.  Taking something as complex as a large oil rig out in the North Sea, plans need to be in place should there be an emergency.  Workers are told where muster points are, where lifeboats and lifebuoys are and so on - but what happens if the emergency prevents people from getting to these resources - or if some people are incapacitated by the emergency?

If employees use wearables with GPS positioning in them, emergency services can more easily identify where people are - and so plan rescue and evacuation more effectively.  With an oil rig being a large three-dimensional space, knowing where every person is exactly within that space is of major importance.  Think of a helicopter as part of the rescue: it only has a defined amount of fuel available, which then defines how long it can stay at the site.  If it has to spend a large part of that time identifying where people are, it is working ineffectively, and people's lives are in danger.  Allow the helicopter aircrew to identify where people are as they fly in, and they can then be far more effective when they do arrive at the site.

So - maybe Dr Who was the first data scientist?  His vehicle - the Time and Relative Dimension In Space (TARDIS) - sums up exactly what is required to analyse data in a way to gain strategic insights.

And with the number of times The Doctor has saved the earth - if it's good enough for him, then it must be good enough for you - surely?

Quocirca has written a report on the importance of 'Dealing with data dimensions', sponsored by Esri UK.  The report can be downloaded for free here.

Text messaging 4 businesses - the rise of the machines

Rob Bamforth

Now that so many people have smartphones, mobile network data plans have become more generous and instant messaging has sprouted everywhere from being embedded in social networks to tools like Skype, WeChat, WhatsApp and Viber, what does the future hold for SMS?

Blossoming since the first festive text greeting in December 1992, this twenty-something-year-old technology is starting to feel its age. Sure, it has been hugely popular, with massive peaks on special days like New Year and Valentine's Day, but usage declined in 2013 for the first time in two decades.

It has been clear for many years with the growth of an open and mobile internet that person to person (P2P) SMS would start to decline eventually, both in volume and revenue, and the industry set about picking up the slack with the use of automated or application generated messages sent to people (A2P).

This has not been plain sailing, and five years ago it looked particularly bad as many exploited the opportunity of sending bulk SMS, often coming in through 'grey' (not fully sanctioned) routes at very low costs, as a way of delivering mobile SPAM messages. This was not a good experience for users and did not really drive significant benefit for the legitimate businesses sending out messages. It might not have had the volume impact on operator networks that email SPAM does on internet service provider networks, but the negative effect on subscribers meant that eventually operators needed to act.

While SMS spam has not entirely disappeared, it is now much diminished as mobile operators have taken much more control of messages on their networks with traffic inspection and filtering. Rogue or grey routes into mobile networks can now be spotted and stopped within a matter of hours not weeks or months, as might have been the case in the past.

This has all been to the overall benefit of the A2P sector, as it has focussed attention on getting value rather than just volume from A2P messaging. Recent studies predict steady, if not stellar, growth of A2P SMS at just over 4% compound annually to the end of the decade. Some of the volume will continue to be the legacy of mobile marketing, with further use of promotional messaging, polls and campaigns that drive awareness in a non-SPAM-like way, but where are the main application growth drivers?

There are several strong use cases, but they need to be treated separately as each has distinct characteristics - not all messages are the same.

Many uses are in the enterprise, where the key in a world of information and email overload is to get critical messages received and understood. A text message generally grabs people's attention more readily than an email and so it becomes a great way to notify changes in time dependent aspects of business processes. These could be alerts when things are going wrong e.g. running out of stock, temperature in fridge rising above a threshold, or might be timely information or notifications of things of interest, such as a forthcoming delivery.
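
A minimal sketch of such a notification rule is shown below; send_sms() is a placeholder for whichever A2P gateway API a business actually uses, and the threshold and phone number are invented.

```python
# Sketch: raise an SMS alert when a monitored value crosses a threshold.
# send_sms() is a stand-in for a real A2P gateway client.
def send_sms(recipient: str, message: str) -> None:
    print(f"SMS to {recipient}: {message}")     # placeholder delivery

FRIDGE_MAX_TEMP_C = 5.0

def check_fridge(reading_c: float, on_call: str = "+440000000000") -> None:
    """Alert the on-call contact if the fridge temperature breaches its limit."""
    if reading_c > FRIDGE_MAX_TEMP_C:
        send_sms(on_call, f"Fridge temperature {reading_c:.1f}C exceeds {FRIDGE_MAX_TEMP_C}C limit")

if __name__ == "__main__":
    check_fridge(4.2)    # within range, no message
    check_fridge(7.8)    # triggers an alert
```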

For some applications, the lack of need for cellular data (3G/4G) means greater coverage potential, as it extends deployments to be able to use 'old' 2G networks, where 'things' can run on minimal energy and modems only power up when they need to send a message. For example, remote street furniture and signage could send text messages when lights fail - too simple to require a microcontroller and internet connection, but a message would be useful to set in train a maintenance visit.

A2P SMS use cases do not have to be one-way. Some logistics companies send simple SMS requests to customers about suitable delivery options and have them confirm or reschedule with a simple response. This concept also works well in areas such as healthcare, where scheduling and shifting appointments can be complex and costly when visits are missed. Other organisations are using SMS responses to automated messages to get instant feedback on service quality or on offers or advertising. These would all be possible to do with email, but the crispness and brevity of SMS lowers the impact on the person responding and allows them to do it there and then with ease.

SMS has also taken a much greater role in supporting security. Its use as an out-of-band channel to deliver one-time passcodes to users wherever they are provides a second factor of authentication when logging in to use enterprise resources or critical external services such as mobile banking. Rather than having to remember to carry special physical tokens, the mobile phone will generally be carried everywhere and has its unique SIM, independent of how a device might be connected to a network via an IP address.

All of these use cases rely on a message getting through, and in a timely manner, wherever the user is located - SMS makes that easy. A2P applications no longer look risky or problematic in the way that bulk SMS once often appeared, and mobile network operators are now seeing A2P as a welcome revenue potential - especially important as SMS has historically formed a large element of operator revenues.

With all the current hype about the IoT, mobility and interconnected everything, something important risks being missed. It is not simply about the connection, but the purpose - that is the message. It is all about the intent to meet a business imperative, so it is vital that the message gets through to the right recipient at the right time and in a way that they can deal with it.

In this regard, with operator support and the right business use cases, SMS still delivers.

Digital customer experience - where do mobile apps fit?

Rob Bamforth

Many organisations are trying to get closer to their customers and see mobile, in particular mobile apps, as a way to engage - but are they always taking the best approach?

The impetus is often somewhat similar to that of putting a business online; everyone else is doing it, and if you are not then you run the risk of missing out or being left behind. However, just doing that quickly becomes insufficient. When websites first appeared they were static brochures; they then added interaction and commerce, although, as many organisations have discovered, even this is not enough and, to retain a worthwhile digital presence, websites require a whole lot more attention.

The same is true when adding a mobile connection to customers. A mobile-enabled website is at least a good first step, but to really get the interaction going this will be a bit limited - hence the interest in mobile apps.

However, mobile apps require a little more pulling power than websites; not only do they have to be downloaded or installed, they need to be retained. This is getting harder to achieve and recent statistics on app retention from analytics and marketing platform provider, Localytics, indicated that roughly a quarter of apps are abandoned after a single use, and only around a third make it to being used more than 10 times.

To avoid app abandonment, mobile apps need to be a bit fitter to survive:

  • fit to the moment - a real world purpose to get you to open the app while mobile
  • fit with the flow - apps must be really easy to use
  • fit to the everyday - apps need to be habit forming

For many organisations, the main route to start down is m-commerce, but mobile users are not necessarily always in the mood to buy; they may rather browse, be entertained or get support. Apps will have more staying power if they nudge and engage rather than blatantly try to sell.

So, many organisations have responded by wrapping mobile application usage around loyalty and a more personal connection through social media. This can again circle quickly back to trying to just increase orders - nothing inherently wrong with that of course - but part of building loyalty is widening the relationship beyond a simple buy/sell.

One way this could be done is to engage with customers as if they were 'part of the team'. The goal here is not necessarily to grow sales (although that might still happen), but to reduce running costs and increase customer satisfaction.

This is akin to the incentive programmes offered by media companies and others, such as the recent 'GoPro Awards' campaign run by the action camera maker, which is planning to invest around $5m per year in rewards to customers who get recognition and prizes for submitting creative content. GoPro could have simply commissioned professional creative companies, but by incentivising customers it is creating an extra reason for them to keep using its products and rewarding those who produce something special.

Customer feedback is important in any sector, even when it appears to be mostly negative, as, if dealt with honestly, it can generate more interest and ultimately loyalty. It could be useful, for example, in public transport, where travellers are quick to jump on their favourite social platform when there are problems, but combining and analysing this information to create a bigger picture would actually be useful for the travel operator and perhaps fellow passengers.

Indeed, being pro-active and asking passengers for feedback - perhaps along the lines of 'Tell us about something that made you smile during your trip today' - could generate even more passenger engagement. While it may attract many sarcastic responses (which in themselves, when used correctly, can show the company as being more human), it could also provide positive responses that can be propagated through the company's social media outlets to give good news while recognising individual customers.

Netatmo, the makers of a stylish online connected weather station, do something similar by pulling together all of their subscribing customers' current readings and presenting them on a global map, which can be accessed along with personal weather station data via a mobile app. By using the distributed remote feeds from all of its subscribers it can create a big picture that in turn feeds greater weather data value back to its customers, rather than just being of value to the organisation itself. The feeds of the many outweigh the feeds of the one?
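
Conceptually, that aggregation is straightforward. The hedged sketch below bins invented station readings into coarse one-degree latitude/longitude cells and averages them - one simple way to turn many private feeds into a shared map layer, though not necessarily Netatmo's actual method.

```python
# Sketch: average temperature readings from many personal weather stations into
# 1-degree latitude/longitude cells for a shared map view. Readings are invented.
import math
from collections import defaultdict

readings = [  # (latitude, longitude, temperature_c)
    (51.5, -0.1, 12.3), (51.6, -0.2, 11.9), (48.8, 2.3, 14.1), (48.9, 2.4, 13.7),
]

cells = defaultdict(list)
for lat, lon, temp in readings:
    cells[(math.floor(lat), math.floor(lon))].append(temp)   # coarse 1-degree grid cell

for cell, temps in sorted(cells.items()):
    print(cell, round(sum(temps) / len(temps), 1))           # cell average for the map
```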

Another example of where this might work is in the hospitality industry, where increasingly the service relationship - good or bad - is more out in the open, due mainly to the explosion of social review sites and services.  Rather than waiting for, and responding to, criticism of services on public review pages, organisations could adopt a more pro-active approach.

This would be a good fit for a mobile app, especially in the increasingly crowded field of 'loyalty'. Rather than offering a cheap discount for harvesting user preferences, encouraging current customers to do something that will reward them and improve services for future users could stimulate more regular usage.

In the hotel sector this might take the form of 'user generated maintenance'. Instead of critical reviews plastered around the social sphere, establishments could offer reward points for 'things we may have missed' or suggestions for improvements. A simple app on a mobile phone can record, locate and document the issue, submit it to the hotel and gain something of value for the user, while at the same time helping the service provider fix a problem - all without it going public!

M-commerce might seem an appealing route to help directly sell goods and services, but the first thing all mobile apps need to do is sell themselves, otherwise all that development effort runs the risk of being abandoned very quickly. Mobile apps need to be very fit for purpose and offering something of value to customers should be a big part of any mobile strategy.

Mobile - shifting from 'enabled' to 'optimised'

Rob Bamforth

Workers have used various tools to access IT on the move for some time, but much of it was simply shifting desktop applications onto laptops and it took the arrival and massive success of smart phones and tablets to really get the mobile application sector moving. This in turn has created mobile app marketplaces and an opportunity for developers. 

Way back in 2008, Quocirca conducted some research amongst mobile app developers for a major mobile platform provider. These developers wanted stable platforms, developer support and market opportunities in which to sell their offerings, and largely over the intervening years the commercial elements of these needs have been met, certainly in the Android and Apple environments.

Many of the technical challenges remain. It is still difficult to keep up with new versions and derivatives of platforms, plus specific mobile app development skills can be hard to find, but the huge take-up of devices and acceptance of use throughout the home and workplace means that there is an expectant user community and commercial opportunities.

But one bar has risen - user expectations - and it is no longer sufficient to 'mobilise' an application to work on mobile devices. The whole mobile user experience needs to be optimised, and this means successfully tackling two elements: the immediate user interface on the device and the end-to-end experience.

At one time it was possible to separate out those aspects that might impinge on consumers from those that affected workers, but with so much crossover from the consumerisation of technology, the boundaries are blurred. Even applications for the workplace need to be written to appeal to the consumer appetites of individuals as well as meet the needs of businesses.

The user experience is significant as it impacts directly on the ability of an individual to do their job as effectively and efficiently as possible. Non-intuitive or even 'different' usage models in user interfaces force people to spend more time trying to learn the style of the application rather than getting the most effective use out of its substance. Some will just give up.

In the mobile context this is exacerbated by the immediate environment surrounding the mobile user: distractions, limited input/output, less comfort (standing or walking) and no time to wait for slow networks or applications. Presenting mobile users with too much or difficult-to-navigate information, expecting complex responses and not pre-filling with known mobile context, e.g. location data, is not going to encourage effective, frequent use.

Mobile users are impatient, often with good reason, and user experiences that mirror, mobilise or 'mobile-enable' the traditional desktop, whilst delivering consistency across different devices, often struggle to optimise for mobile needs. The recent market wavering between native and web interfaces for mobile indicates the dilemma faced by developers. They need to balance the need for consistency and minimised porting effort against delivering a more mobile-optimised user experience.

Current trends seem to be again moving towards favouring native mobile apps, indicating a rise in the desire for optimisation, but this alone is not sufficient for delivering the complete user experience that users increasingly expect, as 'mobile' will also often mean 'remote'.

Taking more control of the user interaction on the device is one thing, but the end-to-end experience is more complex, with the trend towards delivering services to mobile devices often going hand-in-hand with concentrating the service delivery into some mix of remote cloud services. These may be public, private or hybrid and accessed over a mix of service provider networks - cellular or Wi-Fi - of varying coverage, capacity and quality, so controlling and delivering an end-to-end experience requires much more thought and attention. Network and remote application access needs to be streamlined and optimised to ensure the mobile working experience is fully productive. This will mean gaining a greater understanding of the challenges of mobile networks, which will often impact on application design.

Application developers and companies targeting mobile working need to start from the outset with a mobile strategy oriented around users, not just devices or even applications. They need to design for the constraints and advantages of the mobile environment to optimise for mobile working not just in the device - no dependence on mice, keyboards or seats - but also in the challenges faced in accessing the network and services that the device (and therefore user) relies upon.

Smart phones need smart apps and smart networks for mobile employees to be smart workers.

What can we read into Dell's customer event?

Clive Longbottom

Michael Dell, founder & CEO, Dell Inc. (Photo credit: Wikipedia)

Dell's recent DellWorld event in Austin, Texas was impeccably timed such that little of any strategic nature could be discussed.  Coming close on the heels of the 'definitive agreement' for Dell to purchase EMC, all that Michael Dell could really state were the obvious things around matching portfolios, complementary channel routes, greater reach into high end markets and so on.  What he couldn't discuss - not only because Dell cannot do so due to legal constraints, but also as it is unlikely that even Michael or Joe (Tucci - EMC CEO and Chairman) yet know - was the strategy around products and rationalisation of not only these products, but also in the direction the Federation of companies will have to take.

However, while the fundamentals of the deal are hammered out, Dell has to continue moving forward.  Therefore, there were several announcements at the event that were newsworthy and will likely survive through the acquisition.

One of these was the Dell/Microsoft Cloud Platform System Standard box - a Dell-built box that contains a Microsoft OS stack along with several Azure services, enabling organisations to more easily create a hybrid private/public cloud platform for Microsoft workloads. This box can be either purchased (including via flexible terms through its Cloud Flex Play offering), or rented direct from Dell for $9,000 per month.

Next was the new Dell Edge Gateway 5000, following on from the announcement of a far more basic Gateway at last year's event.  This new Gateway takes the same approach as the old one, acting as an internet of things (IoT) aggregator, but has been considerably hardened and improved in what it can provide.  It now supports pretty much any device's output as an input, acting as an inline extract, transform and load (ETL) system to ensure that different data schemas can be simplified.  The Gateway can then analyse the data collected and either disregard it (as being useless), store it (as being possibly interesting), or raise an alarm (as it identifies an issue based on the data).

From Quocirca's point of view, this approach is fundamental to a large IoT deployment - data volumes have to be controlled, otherwise the core network will collapse under the volume of small packets being sent to a central data lake by masses of uncontrolled devices.  The Gateway 5000 also gets around another problem, obviating the need for customers to replace all their existing devices, as it can take data in from proprietary, and even analogue, devices.
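
A hedged sketch of that edge triage pattern - normalise a reading into a common schema, then discard, store or alert - is shown below; the field names and thresholds are invented for illustration and are not the Gateway 5000's actual behaviour.

```python
# Sketch of an edge-gateway triage loop: normalise a reading into a common
# schema, then discard, store or alert on it. Field names and thresholds are
# invented for illustration.
from typing import Optional

def normalise(raw: dict) -> Optional[dict]:
    """Map a device-specific payload onto a common schema; None if unusable."""
    try:
        return {"sensor": raw["id"], "value": float(raw["val"])}
    except (KeyError, ValueError):
        return None

def triage(reading: Optional[dict], alert_above: float = 80.0) -> str:
    if reading is None or reading["value"] < 10.0:
        return "discard"                    # noise or malformed: drop at the edge
    if reading["value"] > alert_above:
        return "alert"                      # raise an alarm immediately
    return "store"                          # possibly interesting: keep locally

if __name__ == "__main__":
    for raw in ({"id": "pump-7", "val": "3"},
                {"id": "pump-7", "val": "55"},
                {"id": "pump-7", "val": "92"}):
        print(raw["val"], "->", triage(normalise(raw)))
```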

Dell is also extending its bespoke services to service providers, via a new group called Dell Datacentre Scalable Services (Dell DSS).  The high-capacity service provider markets (including telecoms and web-scale customers) have different needs to a general commercial customer - they are not bothered about badging of products, but require bespoke designs in many cases.  However, they are also extremely cost sensitive.  Dell believes that it can still make enough profit on deals in this space due to the numbers of bespoke or semi-bespoke designs it has to come up with - it expects that many of the designs will still sell in the tens of thousands to a single customer.

Dell also gave a rousing and staunch defence of its PC markets.  It believes (rightly so, in Quocirca's view) that the PC is not going away any time soon, and also believes (slightly more questionably) that there is still plenty of room for innovation in the space.  While this may be true to an extent, the real innovation will be around ensuring that new access devices are flexible enough for on-device applications as well as web-based apps and services, and that there is a great deal of consistency between on-desk and mobile devices.  While Dell has a decent Windows tablet portfolio, it pulled out of smaller device formats some time back - and this may hurt it in the longer term as others, particularly Lenovo, come through with more end-to-end systems.

However, what everyone wanted to talk about was the EMC deal.  All that can be said here is Quocirca's view on it, based on what knowledge is already in the public domain.  We have previously written about the problems for any company that would acquire EMC - and we still stand by much of what was said at the time.

Tucci created a federation of companies that had built-in 'poison pills' that count against a smooth acquisition.  Joint ventures, such as VCE and Pivotal, will make it difficult for Dell to unpick and embrace them fully into the Dell company itself.  Taking EMC private while leaving VMware public does give some room for manoeuvre - EMC has an over 80% holding in VMware, and Michael Dell has already stated that he would look to offload some of that, providing at least some payback to the investors backing the buy-out of EMC.

Whereas it would make sense to get rid of the idea of a Federation completely, so removing the confusion in many prospects' and customers' minds as to how it all hangs together, Michael Dell did hint at the possibility of retaining such a structure - something that Quocirca would advise strongly against.  Indeed, even as the deal was being announced, Pivotal, EMC and VMware created a new member of the Federation by spinning out a previous acquisition, VirtuStream, to carry much of the overall cloud strategy for the rest of the Federation.  While Dell has eschewed a public cloud system hosted by itself, it has become a cloud aggregator, using systems such as its own acquisition of Boomi to provide integration between partner and other public clouds.  Whether the idea of creating VirtuStream is to enable a quick offload of EMC's own cloud pretensions when the deal closes (mooted for the middle of next year) remains to be seen.

What is clear is that this is a very brave move by Michael Dell.  As Dell itself is only just coming out from a period of quiet after taking itself private, to suddenly take on the largest (by far) tech acquisition, one that will enforce true strategic silence for almost a year, will stretch customers' patience to the limit.  Dell must be thankful that with IBM still reinventing itself around cloud services (via SoftLayer, BlueMix and Watson as a Service), and HP in the throes of pulling itself apart, it should suffer little loss of customers to either of these companies.

However, with storage companies such as Pure Storage, Violin and Kaminario waiting in the wings, along with hyperconverged vendors such as Nutanix and SimpliVity, it could be here that Dell should be focusing.

Dell is in for an interesting nine months - and what comes out the other end should be worth watching.  For Michael Dell's sake, Quocirca hopes that it makes sense to customers and prospects, and that it doesn't create a company that is too large and slow-moving to respond to the incoming, new-kids-on-the-block competition.

Getting the business message across

Rob Bamforth

With so many forms of communication available, is it still necessary to think carefully about medium as well as message?

Sometimes, yes. Electronic communications might all appear to be instant and completely effective, but there are the occasional 'blips'.

The odd "didn't get your text in time" or ''didn't you get my email?" probably does not matter that much for personal communication, but for business, the problems can be much more serious - missing orders, payments or critical alerts can have disastrous consequences.

On the face of it, all business communication is about sending a message or opening a conversation between two or more participants. But all of these activities have a purpose (apart perhaps from some of those agenda-less meetings that drag on all morning, have no actions or outcomes, and afflict many large organisations!). The purpose is inherent in the business process and may imply many things that will influence what form of communication should be used - intent, importance, timeliness, criticality, relevance - in order to achieve a desired business outcome.

For all conversations, from informal ones over the phone, ad hoc videoconferences and team gatherings right up to formal meetings, the outcome can (and should) be clear. As long as there is some sort of implicit or explicit agenda, along with sufficient interaction leading to a raising of the mutual level of understanding, the results can be seen, heard and, if required, minuted.

Sending business messages - whether using electronic data interchange (EDI) or personal email, text or otherwise - can be much more problematic partly due to the lack of feedback. Was it definitely sent, or received, and by the correct recipient audience? But the content is also more important. Was it securely delivered, recorded for audit or suitable compliance longevity, and checked against the leaking of private information to the wrong places?

With highly connected businesses and their supply chains increasingly becoming digital, messages are sent automatically as instructions and alerts, which have important consequences if not delivered to the right place at the right time or even in the right format.

This can throw up some surprising 'blasts from the past'.

Research Quocirca conducted in 2008 suggested that 80% of employees had fax numbers on their business cards, only slightly behind those having a mobile number. Many might have been surprised it was so high back then, but surely now fax has gone the way of the Telex?

Not at all, and the reasons might be surprising. When a twenty-something recently started a new role in a UK Internet of Things (IoT) business, he tweeted how shocked he was to send a message using an apparently archaic medium - fax. It turned out that the industry sector his customers occupied still generally expected to receive faxes.

A conversation with business messaging specialist, Retarus, revealed that not only is this not unusual, but in some areas it is seeing increasing use of fax. Some of this is apparent at a regional level; in Europe fax is generally being replaced by other messaging systems, but perhaps surprisingly fax usage is still very strong in the US and actually growing in Asia.

One reason perhaps for fax longevity is the growth in the so-called 'sharing economy', where an internet order is placed with a centralised service, which is then assigned to a local service provider for delivery. It might look 'uber' new wave, but it is not a new business model, having been around since 1910 with the arrival of Interflora and messages being telegraphed between florists.

Today, a similar approach is being taken by online platforms for ordering food from takeaways found via centralised service platforms. So, from a mobile app, linked to a central ordering platform, the message goes out via fax to the local organisation delivering the service - food, flowers etc. Not everyone has the wherewithal to accept these disseminated requests in a fully electronic manner: this is where the venerable fax machine provides a simple, trusted and cheap mechanism for the recipient to receive them.
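
A minimal sketch of that 'pick the channel the recipient can actually receive' decision is shown below; the provider directory and send functions are placeholders rather than Retarus APIs.

```python
# Sketch: dispatch an order over whichever channel the local provider supports,
# falling back to fax. The send_* functions are placeholders, not a real
# messaging API.
def send_fax(number: str, body: str) -> None:
    print(f"FAX {number}: {body}")

def send_email(address: str, body: str) -> None:
    print(f"EMAIL {address}: {body}")

PROVIDERS = {  # invented provider directory
    "florist-42": {"email": "orders@example.com"},
    "takeaway-7": {"fax": "+44000000000"},
}

def dispatch_order(provider_id: str, order_text: str) -> None:
    """Prefer email where the recipient has it; otherwise fall back to fax."""
    contact = PROVIDERS[provider_id]
    if "email" in contact:
        send_email(contact["email"], order_text)
    elif "fax" in contact:
        send_fax(contact["fax"], order_text)

if __name__ == "__main__":
    dispatch_order("florist-42", "12 red roses, deliver Friday")
    dispatch_order("takeaway-7", "2x margherita, collect 19:30")
```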

These messages have to get through, and media should be chosen for effectiveness in meeting the business purpose, not for using the latest cool technology. Guaranteeing the message gets delivered is a service all of its own, and this is where companies like Retarus fit in. Telecoms networks enable connectivity, but organisations and especially their value chains are diverse, dispersed and global. Ensuring communication is secure, efficient and effective requires some thought and effort to avoid awkward questions like "didn't you get my order?" or "why wasn't I alerted as soon as it failed?". Businesses need to make sure they are getting the right strategic message, not just the facts.

VMworld 2015 in Barcelona: Taking Risk is the Lowest-Risk Corporate Strategy According to VMware CEO

Bernt Ostergaard
Competition today is about speed
The message to corporate IT shops from VMworld 2015 in Barcelona was that VMware intends to unify the hybrid cloud environment to allow any device to access any app on any public or private cloud - securely. The message to business was that the competition between incumbents and start-ups is being replaced by the competition between the fast and the slow - irrespective of company size. If a company is risk-averse, it's on the road to extinction - an acquisition target for faster-moving competitors. VMware's view (and hope) is that cloud will play a pivotal role in that respect.

Cloud services today store about 10% of global data, with the rest in on-prem data centres. VMware CEO Pat Gelsinger estimated that cloud will account for 50% of stored data by 2030. This indicates a long growth curve ahead for the whole cloud business - one of which VMware wants to grab a large share. However, VMware is up against OpenStack as well as its many variants (HP Helion, Cisco's cloud offering, even IBM's SoftLayer OpenStack offering). As the on-prem incumbent, it now has to walk the walk as well as talk the talk - and it may well not be the incumbent that wins out overall.

The new VMware Stack
The undisputed market leader in virtualisation, and aspiring cloud software leader, put on an impressive show for the 10,000+ participants (with 150 press and analysts jointly designated as 'influencers') at this year's VMworld event in Barcelona. Just six weeks after VMworld in the US, the company still managed rapid-fire announcements of new products and partnerships, from projects to develop secure points-of-presence on a guest device to ensure integrity and protect critical data, through to a new security architecture focusing on the security of people, apps and data. The presentations by corporate management were technical and couched in acronyms that put even hardened analysts on the back foot.

What does the VMware Software Defined Data Centre look like these days?
VMware's SDDC, its version of a software-defined data centre, essentially has four layers:

The announcements from the US VMworld focused on the Unified Hybrid Cloud to bridge public and private clouds, with EVO:Rail SDDC and vSAN 6.1 in the data centre. On the development front, announcements focused on vSphere Integrated Containers and the Photon developer platform. The main security announcement centred on Identity Manager and Project A2 for universal app delivery and device management. The Barcelona announcements then added the management-layer updates with:
- vRealize Automation 7.0 delivering unified blueprints for integrated multi-level application environments, 
- vRealize Business 7.0 for costing and business planning, and 
- vRealize Operations 6.1 to deliver heterogeneous support and improve the customer experience with an event broker solution, especially in the upgrade process. Intel uses the new vRealize Operations to manage IoT edge gateways that collect and analyse data and raise alerts, for example in vending machines.

The Dell-EMC acquisition
Annual vendor events rarely take place right on top of momentous changes to the company's environment. So what greeted arriving participants at this year's VMworld event in Barcelona was game-changing - though in exactly what direction remains 'up in the clouds'.

Will Dell, with its announced intent to acquire EMC (the 81% owner of VMware), make life harder for VMware and its core hardware-agnostic approach? Well, Michael Dell was on the big screen at the opening keynote, emphasising that VMware will remain an independent, publicly traded company and that its continued profitability represents an important financial contribution to paying off the sizeable ($67bn) debt incurred by Dell. On top of that, EMC's 81% ownership of VMware will effectively be halved, as half of the EMC stock will become publicly tradable (so-called tracking stock) owned by EMC investors.

CEO Gelsinger believes that getting the global Dell sales force economically incentivised to sell VMware products along with their hardware sales will generate additional sales of $1bn within two years. He also mentioned that their acquisition of Pivotal a few years ago was not going to stop Pivotal management from opting for an IPO exit from the VMware family, if they decided that was in the best interest of the company.

However, as I made the rounds of the VMworld exhibitor area in Barcelona, there were tensions - one carrier customer noted that it is delaying its VMware investments to see how the acquisition pans out before going ahead with its unified hybrid cloud service plans. Another partner expressed concern about changes in the relationship between HP and VMware if Dell now becomes the preferred server platform. So we may see some existing VMware business partnerships deteriorate or go away. I would certainly expect Lenovo and HP to expand relations with alternative virtualisation options such as KVM and Citrix Xen where possible.

So, it seems most likely that there will be no attempt to subvert or cramp VMware's business plans or product range. And for the record, Dell is already a very significant VMware distributor - at least in Europe. Cooperation between Dell-EMC and VMware will certainly be strengthened initially in areas where there are already working relationships. In other areas, such as network management and security, VMware will continue to compete with Dell - unless of course Dell opts to exit business areas that conflict with VMware. This is highly unlikely, as Dell needs to build on its hyperconverged systems based around FX2. It is far more likely to slowly move away from VMware, allowing it to be more independent - starting by getting rid of any investment in VCE, the EMC/VMware/Cisco joint venture.

The future, whichever way it pans out for VMware, is looking pretty good.  It will either be embraced more closely by Dell, while Dell has to leave it to continue with existing partnerships with Dell's competitors, or it will be a completely independent company that can invest in its own strategy.  Quocirca believes that this new future for VMware will be a high growth one: Dell's future is slightly less apparent.

Capacity planning - aiming for the sky, but hitting the cloud ceiling?

Clive Longbottom | No Comments
| More

At a recent round table event hosted by Sumerian (a provider of IT capacity planning tools), discussions took place around what role capacity planning should be playing in an organisation.

Sumerian had commissioned some research to see how capacity planning was perceived.  Some headline stats are worth mentioning: whereas 55% of businesses say they are already using some form of capacity planning tooling, for 45% this is in reality simply an Excel spreadsheet.  56% of respondents felt that capacity planning would become increasingly important for them in the coming year - but 40% perceived a shortage of skills and 36% stated that they had a gap in their planning capabilities.

Such findings drove the discussion - why should anyone be bothered about capacity planning; what does capacity planning mean for an organisation's IT platform; and where are the skills going to come from to provide a suitable system to manage planning to the level the business will demand?

Starting with the first issue, Quocirca's own research over the years has shown that utilisation rates of servers in a physical, one-workload-per-server (or cluster) environment rarely average above 5-10%.  Storage utilisation rarely gets above 30%.  Network utilisation often gets above 80% due to poor data management, leading to sawtoothing and data collisions.  Such low utilisation rates in servers and storage, combined with poor overall performance due to overwhelmed networks, are becoming apparent to the business - just why should it pay for systems that are 90% underutilised yet underperforming due to network issues?  Why can't IT get more out of the systems?

On the second point - what does it mean for IT platforms - it was felt that the complexity of a modern platform, with its mix of physical and virtualised environments along with private and public cloud, means it is becoming impossible to plan effectively for highly flexible existing workloads, never mind for when new workloads are implemented.

This then led on to the third issue - are the skills required human or can technology replace them?  The general impression was that it will be a mix of both, but that the main 'grunt work' will have to be automated to as great an extent as possible.

For cloud to fulfil its promise, it will have to deal with multiple workloads on one flexible and elastic virtualised platform that mixes compute, storage and network capabilities.  Each aspect of this mix will be dependent on the others - for example, a storage issue may be 'cured' by throwing more storage capacity at it, but this can then cause a network issue, so the actual end-to-end performance problem is not fully solved.

Tools will be needed that can rapidly learn how business workloads operate; create patterns of usage; predict future usage and advise accordingly - or can take immediate action to prevent problems from occurring in the first place.
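
As a very rough illustration of the 'learn, predict and advise' part of that idea (the utilisation figures, the 80% threshold and the simple linear trend below are illustrative assumptions, not a description of any particular vendor's tool), a minimal sketch might look like this:

```python
# Minimal capacity-forecasting sketch: fit a linear trend to historical
# utilisation samples and estimate when a capacity threshold will be hit.
# All numbers here are illustrative assumptions.

def weeks_until_threshold(utilisation_history, threshold=0.8):
    """Return estimated weeks until average utilisation crosses `threshold`,
    based on a simple least-squares linear trend (None if no growth)."""
    n = len(utilisation_history)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(utilisation_history) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, utilisation_history))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var                      # utilisation growth per week
    intercept = mean_y - slope * mean_x
    if slope <= 0:
        return None                        # flat or shrinking demand
    crossing_week = (threshold - intercept) / slope
    return max(0.0, crossing_week - (n - 1))

# Example: weekly average CPU utilisation of a shared cluster (illustrative data)
history = [0.41, 0.44, 0.47, 0.49, 0.53, 0.55, 0.58]
weeks = weeks_until_threshold(history)
if weeks is not None:
    print(f"Capacity threshold likely reached in about {weeks:.1f} weeks - plan now")
else:
    print("No growth trend detected - no additional capacity needed yet")
```

Real tooling would of course work across compute, storage and network together and cope with seasonality, but even this crude trend-fitting shows the difference between reacting to a problem and advising on it in advance.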

Behind this, there will be a need for business architects who can listen to what the business needs and come up with a range of possible solutions that they have investigated, fully understanding what extra capacity is required.  These architects can then report back to the business in terms of costs, risks and value for the business to make the final decision on which solution fits best with the business' own risk strategy.  Only then should the IT department implement the solution.

The truth is that an effective IT platform - one that is responsive to current business needs and can support future ones - requires capacity planning.  Otherwise, the organisation will be faced with supporting a platform that is heavily over-engineered, resulting in excess licensing, consuming more energy than is necessary (not just in powering servers, storage and networks, but also in cooling them) and wasting space in the private or co-location data centre, which also costs money. Even with this over-engineering, a basic misunderstanding of the contextual relationships between the various components of an IT platform is still likely to leave the business badly supported.

Another finding from the research was that many organisations that go into public cloud computing to save money do not find those savings.  From Quocirca's point of view, this is not surprising - we always advise that any change is made for the benefits it provides to the business, not the cost savings it is meant to deliver.  However, in this case, the higher than expected costs could well be down to the lack of effective capacity planning, leading to over-provisioning of resources in the cloud 'just in case'. It is important for organisations to ensure that any agreement with a public cloud provider is based on correct capacity planning up front, followed by the use of cloud elasticity to provide additional resources when required.  By using good capacity planning tools, the need for considerable 'overage' charges (where excess resource is required on a regular basis) can be avoided.

Again, good capacity planning tools can ensure that the right amount of cloud resource is planned for and built upon as required - delivering the cost savings being sought.

HP: Putting Print Security on the CISO Agenda

Louella Fernandes | No Comments
| More

Amidst a rapidly evolving threat landscape, where malware and exploits continue to proliferate, endpoint security often fails to adequately protect networked printer and multifunction printer (MFP) devices. With its new enhanced LaserJet enterprise printer range, announced on 22 September 2015, HP is demonstrating its serious commitment to closing the print security gap. 

In today's increasingly mobile and interconnected digital enterprise, cyberattacks are ever more sophisticated, designed to inflict maximum damage on an organisation's systems and networks. The loss of sensitive information - be it personal or financial - can have huge repercussions, both financial and legal, not to mention the impact on brand reputation. According to the Ponemon Institute, the average consolidated total cost of a data breach is $3.79 million. Meanwhile, Quocirca's recent enterprise managed print services (MPS) study revealed that over 70% of organisations have suffered at least one data breach as the result of unsecured printing. Yet printing remains an overlooked area on the Chief Information Security Officer (CISO) agenda. While focus is given to protecting traditional IT endpoints such as laptops, PCs and mobile devices, ignoring printers as a vital endpoint in an overall information security plan can leave an organisation exposed and vulnerable.

The print security gap
So why is it important to secure these supposedly "peripheral" devices? Today's MFPs are advanced and intelligent document processing hubs which print, copy, scan and email. Information resides on hard disks and in memory, and with most MFPs now running advanced web servers, these devices are exposed to the same risks as any PC. At a basic level, there is the opportunity for uncollected sensitive or confidential information to be picked up from output trays - accidentally or maliciously - by the wrong recipient. Fortunately there is a range of simple tools that enable user authentication (either via a smartcard or user PIN) to ensure print jobs are only released to authorised users. But at a deeper level, networked printers and MFPs need to be protected at the firmware and network level. Without adequate protection, the web server on an MFP can be exploited and compromised, providing open access to an enterprise's network. Indeed, it is not specifically the data on an MFP that may be targeted; the device is an entry point to the wider network.

HP's security enhanced enterprise LaserJet products
HP's recent announcements aim to address these vulnerabilities and demonstrate a significant advancement in printer security. It boldly claims its new HP LaserJet Enterprise 500-series printers are "the world's most secure printers" because they support a strong set of default security features and settings but, perhaps most importantly, include advanced embedded security capabilities specific to HP devices. These include:

  • HP Sure Start. To prevent an attack at the point of start-up, HP is implementing BIOS-level security with HP Sure Start. This applies the same BIOS security protecting HP's Elite line of PCs since 2013 to new HP LaserJet Enterprise printers. In the event of a compromised BIOS, a hardware protected "golden copy" of the BIOS is loaded to self-heal the device to a secure state.
  • Whitelisting. This ensures that only HP authentic code and firmware can be installed and loaded onto devices.
  • Run-time Intrusion Detection. This protects the printer by continuously monitoring memory to identify, detect and highlight potential attacks to Security Information and Event Management (SIEM) tools such as ArcSight. The device will automatically reboot, flushing memory and bringing it back to a safe state. This technology was developed in partnership with Red Balloon Security, a US-based embedded device security company.

Additionally, HP will retrofit legacy devices, allowing customers to benefit from these security features on devices dating back to 2011. According to HP, with a firmware update, all three features can be enabled on HP LaserJet Enterprise printers delivered since April 2015. For HP LaserJet Enterprise printers launched since 2011, two of the features - whitelisting and Run-time Intrusion Detection - can be enabled through an HP FutureSmart service pack update.

Notably, HP is also addressing the needs of enterprises which operate a mixed fleet environment. HP JetAdvantage Security Manager, currently the industry's only policy-based printer security compliance solution, enables IT to establish and maintain security settings such as closing ports, disabling access protocols, auto-erase files and more. When a reboot occurs, the HP Instant-On Security feature will check and reset any impacted settings automatically to bring devices into compliance with the organisation's policy. Quocirca believes this is a real opportunity for HP to set industry standards with integrated print security management, much in the same way HP Web JetAdmin has become a standard tool for enterprise print management.

HP also offers a comprehensive Printing Security Advisory Service, which evaluates an enterprise's current print security position and recommends solutions to address an organisation's print security risk exposure. Indeed, Quocirca is seeing that managed print services customers are most advanced here, often undertaking security assessments which identify vulnerabilities. In fact 90% of organisations using MPS had started or completed a security assessment. Certainly, this is having a positive impact, with Quocirca's research revealing that data loss is much lower amongst those that have conducted an assessment. Almost half of those that had conducted a security assessment indicated no data loss compared to 14% of enterprises that have started the process.


Quocirca Opinion
Print is often an afterthought in the security equation, leaving an organisation's data and networks exposed to unnecessary risk. While all manufacturers offer some form of built-in security features along with third party secure print solutions, there remains the opportunity to educate enterprises on the real risks that unsecured printers and MFPs pose. Consequently enterprises remain uncertain of how to implement a secure print strategy that integrates with a broader information security strategy. Quocirca recommends that enterprise clients consider a managed print service that offers a broad security assessment and addresses the need for a layered approach to security, dependent on the business needs.
HP certainly now has a comprehensive set of hardware, software and services offerings to evaluate and minimise risk exposure for its enterprise clients. This enterprise LaserJet product range introduction will certainly raise awareness of the need to better secure the print environment, and we expect HP's competitors to respond by highlighting their own solutions and services in this area. HP's market dominance positions it well to lead the market and potentially set industry standards.
Chief Information Security Officers (CISOs) need to tighten print security, not only to protect information that resides on printer endpoints but also to recognise that an unsecured printer is a potential gateway to the corporate network. Ultimately, any security strategy is only as strong as its weakest link.

Mobile email innovation? Geronimo!

Rob Bamforth | No Comments
| More

At a casual glance the mobile world of hardware and apps appears fast moving and packed with innovation, but much of what occurs is improvement and refinement. The old IT hardware vendor mantra of "smaller, cheaper, faster, more" is very welcome and can lead to marked change, based on the positive effects of Moore's and Metcalfe's laws of processing power and connectivity respectively, but it's not radical, is it?

Step changes both in hardware and software come from doing something different. They may not always work, but without these valiant attempts to break the mould we end up with more of the same. Consider the humble mobile phone. Once we had bricks, candy bars and clamshells, with a few even odder shapes (remember the Nokia 7600?), and now we have what? Thin rectangular slabs, metal back, glass front with a grid of icon apps - from everyone.

We may be able to thank BlackBerry for the initial step change of creating a viable smartphone with a usable email app, but despite any number of mobile email software companies (many now disappeared) and even the mighty Apple, most of the efforts appear pretty 'samey'. So, why not have some diversity rather than homogeneity?

One could argue 'standardisation' and the lack of need for something different when an adequate version already exists. This line of reasoning popped up in comments following the recent announcement of Geronimo, a radically different approach to mobile email by San Francisco start-up Jumpin Labs. After a brief review of the novel ideas - and Geronimo has a lot of them - the tone of some comments heads towards, "why will people want all this when they already have X or Y?"

Having taken a closer look at Geronimo, and being a 200-a-day email receiver for over 25 years, I ought to remind those making these sorts of observations that email, in common with many other aspects of IT, is far from perfect and perhaps we could all benefit from some occasional radical changes, rather than just polishing.

So what exactly does Geronimo do differently?

First, emails can be viewed horizontally in a date/time line, rather than just vertically. OK, that's only slightly different, but when combined with compressing and expanding each day into a vertical list it makes a full inbox look more manageable. Geronimo adds other controls that simplify dealing with masses of emails and make it a little fun at the same time. Not too much fun - remember, email is work and there is always plenty of it to do...

Second, the content of the email stack can be rearranged to meet personal priorities. Not every email is important and urgent based on its arrival time or sender, so dragging individual emails into a new order can be quite useful. Favourite senders can be given icons, and viewing all emails from a particular sender is a cinch.

Next, non-human or 'robot' emails can be removed from view by a tap on the phone. Assuming there is good SPAM filtering in place, this is moving the BACN (something signed up for, but now overwhelming the inbox) out of the way so the real 'meat' can be seen. Not deleted, just guiding focus.

Also, Geronimo is designed for a mobile phone, so it makes use of other sensors for users to take control. This includes wrist flicks to move between days and emails, and additional touchscreen gestures to gather several items into a bundle and flick them away into a screen corner (previously designated with a particular function, e.g. a particular mail folder). These might seem a bit of a gimmick to some, but they are not mandatory, and users can still use 'traditional' smartphone taps and buttons.

It also has a companion app for the Apple Watch, which goes beyond the basics of current wrist email by sticking with the idea of robots versus humans and filtering to offer human-only notifications, and by offering search and compose on the watch as well as read and reply.

It is an early product so there are still limitations. It only works on the iPhone (and Apple watch), which narrows the audience a bit, plus the first release only handles Gmail accounts, although this will no doubt quickly change. Not all of Geronimo's concepts will appeal to everyone, but many will and it would be great to see more vendors thinking along radical lines every once in a while.

Why bother trying to improve email? Well, many people are overwhelmed by the masses in their inbox - valid, useful, time-wasting or SPAM - and tools that bring back a feeling of control are badly needed. Not only to keep on top of the volume, but also to reduce some of the time spent on email, by being able to handle it and respond more efficiently, especially while on the move. Geronimo may or may not be the ultimate solution, but it has a good go at addressing many email problems in a really interesting and appealing way.

Mobile, it's hardly about standing still is it?

The robots are coming!

Clive Longbottom | No Comments
| More

Two headlines on the BBC website recently caught my eye - "How to robot-proof your career" and "Which jobs will robots steal first?"

The second headline takes the media's clickbait approach of using a strong word like 'steal' - which raises the question of whether robots have already reached the level of intelligence needed to understand the concept of taking something from someone without their permission, which they patently haven't.

What is important about the robotisation of the workforce is not whether your job may be removed from the list of those that humans are needed to fulfil, but what this means for your position within the community as a whole.

There has been a continued evolution of work toward increasing automation.  As an example, in 1801, Joseph Marie Jacquard demonstrated the first automated loom, using a series of punch cards to create patterns in woven fabric.  This enabled much faster, automated weaving to take place with a higher level of consistency in the final product.  In a similar vein, James Hargreaves had developed the Spinning Jenny, enabling multiple spools of yarn to be created by a single person at the same time.

In the modern workplace, we no longer rely on manual telephone exchanges; we have self-service checkouts at supermarkets; we use the web to order goods which are picked by autonomous vehicles from massive warehouses, packed onto (manually driven) lorries that are automatically tracked via GPS devices.

The march of automation is nothing new - all that robotics is doing is moving this along to the next stage.

The impact could be massive.  Many jobs could go: think of the ones that have already gone - secretarial pools, many clerical jobs, the majority of bank tellers and so on. It is relatively easy to automate away a lot of functions currently carried out by humans.  For example, those warehouses mentioned earlier - many now employ only a handful of people as robots take over the stocking and destocking of goods.  Supertankers have crews of a dozen or fewer, as the majority of actions are automated.  Web sites are using robotic avatars to engage with customers.  Some companies are already implementing robotic receptionists.

Scary?  Only if we cling to the concept that a job is necessary.

If robotics could replace 10% of the UK workforce, it would put another 3 million people out of a job.  That is not really a stretch target - if we really wanted to, it would be possible to replace a lot more workers than that through advanced automation and the use of artificial intelligence.

But we all need a job to live, surely?  The economy cannot afford to maintain a mass of 3 million people on benefits - can it?

Why not?  If automation improves productivity - which is the only real reason to use it - then the overall economy improves.  The robots are not paid a wage (although they do need paying for and then maintaining), so those organisations that automate the most will increase productivity fastest.  Moving the government's approach away from targeting 'full employment' to 'full automation' would create a booming economy - one which can then afford to support a nation of people who are not employed in the traditional sense.

At an economic level, to be globally competitive, a country's output has to be cheaper or more niche than another country's.  If the UK remains dependent on manual labour, then it has to be more productive than other countries or accept lower wages. If the UK moves to automated labour, then the removal of labour costs allows it to be more competitive - there is a great deal more control over selling prices, as margins will be higher.

This does not mean that we just create a nation of couch potatoes - no, what we need to create is a nation of people who can all choose what they would like to do, whether that be something more artistic or something vocational.  Encouragement would be required so that people did not just stew, not knowing or caring what they do.  There will always be people who are altruistic leaders: they can galvanise others into doing things, and those people can galvanise others in turn - a continuous virtuous cascade.

The country could have more people carrying out voluntary work to help those who are in the most need - the lonely old; the infirm who cannot get about; the underprivileged who have not had the right chances to date.

People could choose to work together - maybe building affordable houses as a collective; maybe working more on an exchange basis of things like swapping vegetables they have grown themselves for beer that someone else has brewed, or skills in the use of technology for a hand-made wooden bowl.

People could choose to go for an increased amount of lifetime learning - they will have more free time, so can get deeply into areas that interest them.  They can do more activities - sport, walking, sailing, whatever - so creating a healthier population.

Overall, people could become more creative and less task-driven.  There would be less looking down on the unemployed - they could be very productive in their own way.

Sure, there would still be a lot of jobs that required humans.  With more people available and with many parts of the job automated to avoid wasted time, some of these jobs could become shared positions - still paying the same salary as a full-time position - freeing up time for the workers and providing a standard of living higher than that of people who are not 'employed' under any standard contract.

We should not fight the coming of the robots - we should embrace it.  We do need to plan for the future, though: how to deal with the first sets of people whose jobs are replaced, so as to create an environment where they can become productive in the greater scheme of things, and how to deal with those whose jobs are not replaced.  These required workers may feel left out - so they will need to be paid accordingly and, through approaches such as job sharing, provided with more time to themselves to do what they want to do.

This still leaves us with the one major issue - how will the robots view us when they become self-aware?  I think I'll leave that issue for another day...

Enterprise social media - where are they now?

Rob Bamforth | No Comments
| More

After an initial rush towards enterprise adoption of social media and social networks, it seems that for many organisations the idea of enterprise tools to stimulate a collaborative and social approach to the working environment is stuttering in the same way that many early unified communications (UC) solutions once did.

The initial thought process seemed to make some sense. Everyone is being overloaded by email, businesses want their workers to collaborate more, and workers like sharing and communicating with contacts on social media - so give them an enterprise social media tool. Bingo, they will all adopt it!

The supplier side of the industry went along with the idea with the same enthusiasm it showed for UC: buy up companies with the latest staccato social-media-sounding name (has no one done 'Spatter' or 'Splutter' yet?), place it in a 'suite' alongside other communications tools, and then everyone will use everything - right?

No. Most of us have evolved our communication preferences over a long period of time; we are pretty much all set in our ways and need some convincing, encouragement and good reasons to change. It may be true that millennials are more au fait with technology and are more accustomed to using the latest things, but all this means is that the ways they are 'set in' are just more 'current'. Like anyone else, they have preferences and work best when they can choose their favourites.

Furthermore, with so many types of communication, media and social groupings at our disposal, the chances of recipients and instigators all having the same preferences for (or even access to) particular forms of communication diminish. Who you contact and whether they will respond will depend on an increasingly complex negotiation of shared tools and media. It will also depend on favourite devices and current context, as for many people desktops and fixed-line phones have given way to tablets and mobiles.

It gets even worse. You may prefer to hear from your boss over email, your general colleagues on the phone and those you are friendlier with via a shared multi-media portal - and that might vary depending on time of day, subject matter and how things are going generally.

Outside work, this is less important, as individuals can make their own decisions about what technology they want to acquire and to a large extent who they want to communicate with. But anyone wanting to become accepted by a social group, gang or tribe needs to adopt its preferred methods of communicating.

This in theory should be the case at work, but the trouble is that the forms of communication made available are dictated from the leadership down, rather than bottom up from the consensus of the 'workgroup'. It means that there may be a shared common platform, but it might be resented and rebelled against, which impacts on the benefits that most organisations were aiming for in the first place: getting people to better communicate, collaborate and be more effective as a team.

It might also cause extra expense. For example, while many companies thought that calls via direct-dial extensions from fixed desktop phones over site-to-site and country-to-country IP trunks were a great money-saving idea, employees sat at their desks making calls from their mobiles. The mobile was to hand and more convenient, despite the higher cost, as those being called were probably already in the employee's contacts list and so could be dialled without having to remember a number.

Simply having a top-down strategy or investing in some cool new tools, no matter how well integrated they are with current systems and business processes, then throwing them at users and expecting things to stick, won't work. That is a poor approach for any technology, but for social tools in the current communications environment it is a non-starter.

So how could enterprises approach social media tools better?

Take a friendly and informal approach. Adoption is critical: get people engaged and find out their preferences before even investing in any tools, so that from day one there is support, consensus and a set of common building blocks. It will not be perfect and will have to adapt and evolve, so grow the strategy with that in mind.

Just like the BYOD challenge, corporate communications has been overtaken by personal preference and choice. Rather than fight this, as with BYOD the best thing is not to blindly go with the flow, but to harness it. Technology can really help people communicate, but only when they adopt it on their own terms, and the adoption of social tools for the enterprise can benefit from cultivating a 'grass roots' movement, not treating employees like mushrooms...

Managed Print Services and the Circular Economy

Louella Fernandes | No Comments
| More

The circular economy is fast gaining ground as the latest buzzword in sustainability, bringing together emerging practices such as collaborative consumption and traditional concepts such as recycling and remanufacturing.  The circular economy aims to eradicate waste, departing from the linear "take, make and dispose" model and its reliance on infinite natural resources and energy. According to McKinsey, each year around 80% of the $3.2 trillion worth of materials used in consumer goods are not recovered [1].

Through a more effective use of materials, the circular economy envisions a smarter approach to the creation, use and disposal/recycling of products. As well as the obvious environmental benefits, the transition to a circular economy will be driven by the promise of over $1tn in business opportunities, as estimated by the Ellen MacArthur Foundation [2]. This includes material savings, increased productivity and new jobs, and new product and business categories.

International momentum

Some countries are already starting to introduce legislative drivers such as waste prevention targets and incentives around eco-design to promote products that are easier to reuse, remanufacture and disassemble.

China has set up CACE, a government-backed association to encourage circular growth, while Scotland has issued its own circular economy blueprint. In a significant move, the European Commission's circular economy framework, to be released in late 2015, is expected to introduce higher recycling targets and a landfill ban on recyclable materials across all 28 EU member states.

According to weight-based material flow analysis conducted in 2010 by the Waste & Resources Action Programme (WRAP) [3], 19% of the UK economy is already operating in a circular fashion. This relates to the weight of domestic material input (600 million tonnes) entering the economy compared with the amount of material (115 million tonnes) recycled. WRAP predicts that this figure could rise to nearly 27% by 2020, if 137 million tonnes of material were recycled from a direct material input of 510 million tonnes.

Services innovation

Indeed, our relationship with the products and services we purchase could be dramatically changed under a circular economy. This shifts buyers from ownership to favouring access and performance.  By selling the benefits of products as part of an overall solution, instead of the actual products, manufacturers begin to design against different criteria, monetising product longevity through service, upgrade and remanufacturing.

Some product categories are more likely to benefit from being a service-based proposition than others.  A recent Guardian survey found a majority of business owners (66%) felt technology hardware/equipment offered most value as a product-service model, followed by electronic and electrical equipment (56%) and cars, tyres and parts (51%) [4].   Indeed, smart, connected products are expected to transform the next wave of manufacturing. Self-monitoring enables remote control, optimisation and automation. It allows a product's operating characteristics and history to be tracked, and product usage to be better understood. This data has important implications for both product design and after-sales service - enabling proactive and automated service and maintenance.

This approach facilitates a shift to usage-based models, offering the potential to extend the 'pay per use' contracts associated with smartphones to other products such as washing machines or even clothes. Already, Philips, a strong advocate of the circular economy, now sells lighting as a service for its business customers. Customers only pay for the light and Philips takes care of the technology risk and investment. It can also take the equipment back to recycle the materials or upgrade them for reuse.

The next frontier for printer manufacturers

The circular economy approach is nothing new in the print industry, which has long been striving to enhance its sustainability credentials. This includes the manufacturing process, the responsible recycling of ink and toner, and the provision of hardware, software and services that eliminate wasteful paper and energy usage.

From a manufacturer perspective, many are already designing and building products that are part of a value network where reuse and refurbishment, on a product, component and material level, assures continuous re-use of resources. Meanwhile, manufacturers have already developed innovative models to move away from selling printers to selling printing as a service. To support this transition to a services model, most manufacturers now offer managed print services (MPS) as a way to help customers reduce the cost, complexity and risk of an unmanaged print infrastructure. Through a usage model, MPS offers businesses predictable expenses and eliminates capital expenditure whilst reducing operational expenses. In this way, manufacturers retain ownership of their products, and sell their use as a service enabling the optimal use of resources.

Quocirca sees a significant opportunity for MPS in the circular economy model, not only to reduce the environmental impact of the products that a business uses, but also as a way for manufacturers to deliver more innovative products and services to meet the changing needs of the business.

Quocirca recommends the following best practices to drive a more sustainable MPS for the circular economy.

  1. Assess current environmental impact

Begin with assessing energy consumption, paper use, carbon footprint and costs across the printer fleet. Some MPS providers offer environmental or carbon footprint calculators or assessments specifically for this purpose. An assessment should focus on identifying areas where the business can lower its environmental impact and recommend a balanced deployment of hardware and software to decrease usage of energy, paper and consumables. By redesigning the print infrastructure with fewer devices, the fleet is optimised with less hardware that is more energy efficient. MPS can provide further benefits by leveraging best practices through management of change and print policy enforcement. This encourages users to print responsibly, eliminating wasteful paper usage and encouraging better recycling practices.

  2. Save energy

Consider energy-efficient products that meet eco-labelling qualifications, such as ENERGY STAR, EPEAT or Blue Angel. Devices that meet the most recent ENERGY STAR requirements can be up to 40% more energy-efficient than others. Look for printers and MFPs with fast warm-up times and deep-sleep and toner-saving modes. Intelligent print management tools can also ensure the most appropriate device is used for each print job by automatically routing large jobs to lower cost, more energy-efficient printers or MFPs.

  3. Reduce the paper trail

Reducing paper usage is one of the fundamental benefits of how MPS can reduce environmental impact. This can be achieved through better solutions for mobility and security. Using MFPs that allow users to scan documents then store and share them digitally, either on-premise or in the cloud, minimises an inefficient and costly paper trail. Meanwhile simple ways to reduce paper wastage include setting double-sided printing as default or introducing booklet printing. Pull printing or PIN printing saves jobs on a virtual print server until users log in at the print device. This reduces the risks of users forgetting to pick up their documents and reprinting them later or the wrong person picking them up, compromising security and confidentiality.

  4. Encourage good recycling practices

Consider how effective existing approaches to recycling paper, print cartridges and older printing devices are, and set recycling guidelines for these items. Look for providers that offer a take-back program and responsibly recycle returned toner cartridges. For imaging equipment, the Nordic Swan and Blue Angel labels ensure this support is in place. Switching to recycled or sustainably sourced paper can also lead to considerable environmental savings, particularly in terms of carbon emissions.

  5. Measure and manage

Integrated reporting provides enterprise-wide visibility of a print infrastructure's environmental impact, including the amount of paper used, overall energy consumption and carbon footprint. This provides excellent opportunities for continuous improvement. In fact, many manufacturers now offer tools and resources to help organisations quantify the impact of their printer environment and develop plans for improvement.


The circular economy represents a markedly different way of doing business, forcing companies to rethink everything from the way they design and manufacture products to their relationships with customers. Offering customers access to, rather than ownership of printing resources will lead to more sustainable consumption. Some leading print manufacturers are already starting out on this journey and seeing positive results with MPS or other cloud-enabled service offerings.

Rethinking MPS for the circular economy requires a different approach across the value chain: leasing rather than selling products, remanufacturing goods, seeking ways to extend the life of products or their components, and changing the behaviours of end-users. Given the changing consumer, business and government attitudes towards consumption and the environment, the circular economy is poised to help organisations operate more smartly - a transformation from selling boxed products to supplying ongoing services, ensuring a more effective use of raw materials and increasing competitiveness by nurturing relationships with customers rather than relying on a one-way model of selling and buying.



[1] McKinsey & Co, Remaking the industrial economy, Insights & Publications, February 2014

[2] Towards the Circular Economy, published in Davos by the World Economic Forum (WEF), in partnership with the Ellen MacArthur Foundation and McKinsey, January 2014

[3] Waste & Resources Action Programme (WRAP), WRAP's vision for the UK circular economy to 2020, 2010

[4] Circular economy: this is the future for business - interactive. Published by the Guardian in partnership with Philips, December 2014

Intel - the end of speeds and feeds?

Clive Longbottom | No Comments
| More

Every year, Intel holds its tech fest for developers in San Francisco - the Intel Developer Forum, or IDF.  For as long as I can remember, one of the main highlights has been the announcement of the latest 'tick' or 'tock' of its processor evolution, along with a riveting demonstration of how fast the processor is, complete with a speed dial showing the megahertz (yawn).

This year, things were different.  Sure, desktops and laptops were still talked about, but in a different way.  The main discussions were around how the world is changing - and how Intel also has to change to remain valid in the market.  So, CEO Brian Krzanich took to the stage with a range of new demonstrations and messages around the internet of things (IoT), security, mobility and other areas.

As an example, the keynote kicked off with two large inflated beach balls containing positional sensors being punched around the hall by the audience.  These were shown in real time on the projection screen on the stage as two comets that had to hit an image of an asteroid - a simple but effective demonstration of small sensors being used in a 3D environment.  This was made even clearer when Intel placed two of its latest devices, based on its 'Curie' micro system-on-chip (SoC), on a jump bike.  The bike's movements could be replicated against a computer model, providing the rider with performance stats as well as visual feedback - and letting viewers see data on any jump trick performed.

A larger SoC design can be seen in the Intel Edison - more of a development platform for techies, but again, showing how Intel is providing more of a series of platforms rather than just bare-bones processors and chipsets.  Intel's 'next unit of computing' (NUC) takes things one step further with a series of full, very small form factor computer systems.

As well as being a general small form factor device, the NUC forms the basis for further broadening Intel's potential reach.

The first of its offerings built on the NUC uses Intel's Unite software - a communication and collaboration hub that makes meeting rooms easier and more effective to use.  Currently available from HP (with Dell and Lenovo among others expected to offer systems soon), a Unite system provides the capability for different people to connect to projectors via WiDi (Intel's wireless screen sharing protocol), as well as enabling screen sharing and collaboration capabilities among a distributed group.  Unite is also complementary to existing collaboration systems, so can integrate into Microsoft Lync or Polycom audio and videoconferencing systems.  Intel is committed to further upgrades to Unite, creating an intelligent hub for a smart conference room, with controls for lighting, heating and so on.

On the security front, Intel is looking at building security into the chips themselves.  Having recognised that challenge-and-response systems are far too easy to break, Intel is looking at areas such as biometrics.  However, it also realises the perils of holding anyone's biometric details in a manner that could be stolen (if a hacker can steal your fingerprint or retinal 'signature', they can back-engineer a means of feeding it to systems that hold that signature) - and replacing retinas, fingerprints or facial features is not as easy as replacing a stolen password.  Therefore, it will build into future chips the capability for biometric data to be stored in a non-retrievable manner within the chip itself.  A biometric reader will take a reading from the user and send it to the chip; the chip will compare it to what it has stored and then issue a one-time token that is used to access the network and applications. Through this means, biometric data is secured, and users may never have to remember a password again.
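
As a purely illustrative sketch of that flow - this is not Intel's actual design; the template format, matching rule, class names and token scheme below are assumptions made for clarity - the 'compare locally, release only a one-time token' idea can be expressed as:

```python
# Illustrative sketch of a 'match on device, release a one-time token' flow.
# NOT Intel's implementation; the template format, matching rule and token
# scheme are simplifying assumptions used only to show the principle.
import hmac, hashlib, secrets, time

class SecureElement:
    """Holds the enrolled biometric template and a device key; neither ever
    leaves this object - callers only ever see a short-lived token."""
    def __init__(self, enrolled_template):
        self._template = enrolled_template          # stays inside the 'chip'
        self._device_key = secrets.token_bytes(32)  # never exported

    def _match(self, reading, tolerance=0.15):
        # Toy matcher: compare feature vectors element by element.
        diffs = [abs(a - b) for a, b in zip(self._template, reading)]
        return max(diffs) <= tolerance

    def authenticate(self, reading):
        """Return a one-time token if the reading matches, else None."""
        if not self._match(reading):
            return None
        nonce = secrets.token_hex(16)
        msg = f"{nonce}:{int(time.time())}".encode()
        tag = hmac.new(self._device_key, msg, hashlib.sha256).hexdigest()
        return {"nonce": nonce, "tag": tag}          # used once, then discarded

# Usage: the raw template is enrolled once; afterwards only tokens are seen.
element = SecureElement(enrolled_template=[0.62, 0.81, 0.44, 0.90])
token = element.authenticate([0.60, 0.83, 0.45, 0.88])   # fresh sensor reading
print("access granted" if token else "access denied")
```

The point of the pattern is that only the short-lived token ever leaves the secure element; the stored template and the key it is protected with never do.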

This then brings us back to the thorny question of desktops and laptops.  Intel still sells a lot of processors for these devices, but in the age of web-based applications served from the cloud, the need for high-powered access devices has been shrinking.  Gone are the days when organisations pretty much had to update their machines every three years to provide the power to deal with bloated desktop applications.  With cloud and virtual desktops, the refresh cycle has been extending - many organisations are now looking at five- or even seven-year refreshes, with a few just looking to 'replace when dead'.

To drive refresh, Intel needs new messaging for its PC manufacturer partners.  This is where security comes into play.  The security-on-chip approach is not backwards compatible, so organisations that wish to become more secure and move away from the username/password paradigm will need to move to the newer processors.

However, this still only presents a one-off refresh: unless Intel can continue to bring new innovations to the processor itself, then it will continue to see the role of desktops and laptops decrease as mobile devices take more market share.

So, Intel also needs to play in the mobility sector.  Here, it talked about its role in 5G - the rolling up of all the previous mobile connectivity standards into a single, flexible platform.  The idea is that systems will be capable of moving from one technology to another for connectivity, and that intelligence will be used to ensure that data uses the best transport for its needs - for example, ensuring that low-latency, high-bandwidth traffic such as video goes over 4G, whereas email could go over 2G.  Pulling all of this together will require intelligence in the silicon and software, which is where Intel plays best.
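
A toy sketch of that kind of policy decision - the bearer characteristics and thresholds below are invented purely for illustration and are not taken from any 5G specification - might look like this:

```python
# Toy transport-selection sketch: pick the 'best' available bearer for a given
# traffic profile. Bearer figures and thresholds are illustrative assumptions.

BEARERS = {
    "2G": {"bandwidth_mbps": 0.1, "latency_ms": 500},
    "3G": {"bandwidth_mbps": 5,   "latency_ms": 120},
    "4G": {"bandwidth_mbps": 50,  "latency_ms": 40},
}

def choose_bearer(needs_mbps, max_latency_ms, available=BEARERS):
    """Return the least capable bearer that still meets the traffic's needs,
    so demanding traffic gets 4G while email can happily drop to 2G."""
    candidates = [
        (props["bandwidth_mbps"], name)
        for name, props in available.items()
        if props["bandwidth_mbps"] >= needs_mbps
        and props["latency_ms"] <= max_latency_ms
    ]
    return min(candidates)[1] if candidates else None

print(choose_bearer(needs_mbps=8,    max_latency_ms=50))    # video -> 4G
print(choose_bearer(needs_mbps=0.05, max_latency_ms=1000))  # email -> 2G
```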

Overall, IDF 2015 was a good event - lots of interesting examples of where Intel is and can play.  The devil is in the detail, though, and Intel will need to compete with not only its standard foes (AMD, ARM and co), but also those it is bringing into greater play with its software and mobile offerings (the likes of Polycom, Infineon, TI, Motorola, etc.).

For Intel, it will all be about how well it can create new messaging and how fast it can get this out to its partners, prospects and customers.  Oh dear - it is all about speeds and feeds after all...

Matching the Internet of Things to the pace of the business

Rob Bamforth | No Comments
| More

I must be a fan of smart connected things - I am sitting here with two wrist-wearable devices in a house equipped with thirteen wireless thermostats and an environmental (temperature, humidity, CO2) monitoring system. However, even with all this data collection, an Internet of Things (IoT) poster-child application that works out the lifestyles of those in the household and adapts the heating to suit would be a total WOMBAT (waste of money, brains and time).

Why? Systems engineering - frequency response and the feedback loop.

The house's heating 'system' has much more lag time than the connected IT/IoT technology would expect. Thermal mass, trickle underfloor heating and ventilation heat recovery make for a steady-state heating system, not one optimised by high-frequency energy trading algorithms. The monitoring is there for infrequent anomaly detection (and reassurance), not minute-by-minute variation and endless adjustments.

The same concepts can be applied to business systems. Some are indeed high frequency, with tight feedback loops that can, with little or no damping or shock absorption, be both very flexible and highly volatile. For example, the Eurofighter Typhoon aircraft, with its inherent instability, can only be flown with masses of data being collected, analysed and fed back in real time to make pin-point corrections that keep control. Another example is the vast connected banking and financial sector, where there is feedback but no over-arching central control, so the systems occasionally either do not respond quickly enough or go into a kind of destructive, volatile resonance.

Most business systems are not this highly strung. However, there is still a frequency response - a measure of the outputs in response to inputs that characterises the dynamics of the 'system', i.e. the business processes. Getting to grips with this is key to understanding the impact of change, or what happens when things go wrong. This means processes need to be well understood - measured and benchmarked.

In the 'old days', we might have called these "time and motion" studies: progress chasers with stopwatches and clipboards measuring the minutiae of the activities of those working on a given task. A problem was that workers (often rightly) thought they were being individually blamed for any out-of-the-ordinary changes or inefficiency in the process, when in reality other (unmeasured) things were often at fault. This approach did not necessarily measure the things that mattered, only things that were easy to measure - a constant failing of many benchmarking systems, even today.

Fast-forward to the 1990s, and a similar approach tried to implement improvements through major upheavals under a pragmatic guise - business process re-engineering (BPR). A good idea in principle, especially in bringing a closer relationship between resources such as IT and business processes, but unfortunately many organisations ditched the engineering principles and took a more simplistic route, using BPR as a pretext to reduce staff numbers. BPR became synonymous with 'downsizing'.

Through the IoT there is now an opportunity to pick up on some of the important BPR principles, especially those with respect to measurement, having suitable resources to support the process and monitoring for on-going continuous improvement (or unanticipated failures). With a more holistic approach to monitoring, organisations can properly understand the behaviour and frequency response of a system or process by capturing a large and varied number of measurements in real time, and then be able to analyse all the data and take steps to make improvements.

Which brings us to the feedback loop. The mistake that technologists often make is to assume that, since automating part of a process appears to make things a little more efficient, fully automating it must make it completely efficient.

While automating and streamlining can help improve efficiency, they can also introduce risks if the automation is out of step with the behaviour of the system and its frequency response. This leads to wasting money on systems that do not have the ability to respond quickly or alternatively, destructive (resonant) behaviour in those that respond too fast.
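
A back-of-the-envelope simulation makes the point (the process model and gain values are illustrative assumptions, not a model of any real system): the same slow process settles nicely under a gentle correction, but swings ever more wildly when the correction is too aggressive for its response time.

```python
# Illustrative feedback-loop sketch: a slow first-order process under a
# proportional controller. A modest gain settles; an aggressive gain, out of
# step with the process's own response time, oscillates. Numbers are invented.

def simulate(gain, steps=30, target=100.0, lag=0.2):
    """Return the trajectory of a laggy process chasing `target`.
    `lag` is the fraction of the commanded correction applied each step."""
    value, history = 0.0, []
    for _ in range(steps):
        correction = gain * (target - value)   # controller output
        value += lag * correction              # process only moves slowly
        history.append(round(value, 1))
    return history

print("gentle gain 1.0 :", simulate(gain=1.0)[-5:])   # settles close to 100
print("aggressive 12.0 :", simulate(gain=12.0)[-5:])  # oscillates with growing swings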

It might seem cool and sexy to go after a futuristic strategy of fully automated systems, but the IoT has many practical, tactical benefits simply by holding a digital mirror up to the real world - and a good first step that many organisations would benefit from is to use it for benchmarking, analysis and incremental improvements.

IT service continuity costs - not for the faint hearted?

Clive Longbottom | No Comments
| More

IT service continuity - an overly ambitious quest that is pretty laughable for all but those with pockets as deep as the high-rolling financial industries?  Is it possible for an organisation to aim for an IT system that is always available, without it costing more than the organisation's revenues?

I believe that we are getting closer - but maybe we're not quite there yet.

To understand what total IT service continuity needs, it is necessary to understand the dependencies involved.

Firstly, there is the hardware - without a hardware layer, nothing else above it can run.  The hardware consists of servers, storage and networking equipment, and may also include specialised appliances such as firewalls.  Then, there is a set of software layers, from hypervisors through operating systems and application servers to applications and functional services themselves.

For total IT service continuity, everything has to be guaranteed to stay running - no matter what happens.  Pretty unlikely, eh?

This is where the business comes in.  Although you are looking at IT continuity, the board has to consider business continuity.  IT is only one part of this - but it is a growing part, as more and more of an organisation's processes are facilitated by IT. The business has to decide what is of primary importance to it - and what isn't so important.

For example, keeping the main retail web site running for a pure eCommerce company is pretty much essential, whereas maintaining an email server may not be quite so important.  For a financial services company, keeping those parts of the IT platform that keep the applications and data to do with customer accounts running will be pretty important, whereas a file server for internal documents may not be.

Now, we have a starting point.  The business has set down its priorities - IT can now see if it is possible to provide full continuity for these services.

If a mission critical application is still running in a physical instance on a single server, you have no chance.  This is a disaster waiting to happen.  The very least that needs doing is moving to a clustered environment to provide resilience if one server goes down.  The same goes for storage - data must be mirrored or at least run over a redundant array, preferably based on erasure coding but at a minimum RAID 1 or RAID 5 (RAID 0 striping on its own provides no redundancy at all). Network paths also need redundancy - so dual network interface cards (NICs) should also be used.

Is this enough?  Not really.  You have put in place a base level of availability that can cope with the failure of a single critical item - a server, a disk drive or a NIC can fail, and continuity will still be there.  But how about a general electricity failure in the data centre?  Is your uninterruptible power supply (UPS) up to supporting all those mission critical workloads - and is the auxiliary generator up to running such loads for an extended period if necessary?  And what happens if the UPS or generator itself fails - are they configured in a redundant manner as well?
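The reasoning behind all this duplication comes down to simple arithmetic, sketched below in Python with purely illustrative availability figures (not vendor claims): redundant components in parallel only fail when all of them fail, while a chain of dependencies fails as soon as any one link does.

def parallel(*avail):
    """Availability of redundant components: the set fails only if all fail."""
    failure = 1.0
    for a in avail:
        failure *= (1.0 - a)
    return 1.0 - failure

def series(*avail):
    """Availability of a chain of dependencies: any single failure breaks it."""
    total = 1.0
    for a in avail:
        total *= a
    return total

# Illustrative figures only
servers = parallel(0.99, 0.99)      # clustered servers
storage = parallel(0.995, 0.995)    # mirrored array
network = parallel(0.99, 0.99)      # dual NICs / paths
power   = parallel(0.999, 0.98)     # UPS plus auxiliary generator

print(f"overall service availability: {series(servers, storage, network, power):.5f}")

Each layer on its own looks impressive, but the overall figure is only as good as the product of them all - which is why every layer, including power, needs its own redundancy.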

Let's go up a step and use virtualisation as a platform: rather than a simple physical layer, put in a hypervisor and go virtual.  Do this across all the resources we have - servers, storage and network - and a greater level of availability is there for us. The failure of any single item should have very little impact on the overall platform - provided that it has been architected correctly.  To get that architecture optimised, it really should be cloud.  Why? Because a true cloud provides flexibility and elasticity of resources - the failure of a physical system on which a virtual workload depends can be rapidly (and, hopefully, automatically) dealt with by applying more resource taken from a less critical workload.  Support all of this with modular UPSs and generators, and systems availability (and therefore business continuity) is climbing.
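That 'borrow from the less critical workload' behaviour is simple enough to sketch. The workload names, priorities and capacity figures below are invented for the example; real cloud platforms wrap far more sophisticated scheduling around the same idea:

def rebalance(workloads, lost_capacity):
    """On a host failure, reclaim capacity from the lowest-priority
    workloads (priority 1 = most critical) until the shortfall is covered."""
    needed = lost_capacity
    for w in sorted(workloads, key=lambda w: w["priority"], reverse=True):
        if needed <= 0:
            break
        give = min(w["allocated"], needed)
        w["allocated"] -= give
        needed -= give
    return workloads, needed  # needed > 0 means even this was not enough

workloads = [
    {"name": "ecommerce-site", "priority": 1, "allocated": 40},
    {"name": "email",          "priority": 2, "allocated": 20},
    {"name": "batch-reports",  "priority": 3, "allocated": 30},
]
after, shortfall = rebalance(workloads, lost_capacity=25)
print(after, "remaining shortfall:", shortfall)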

Getting better - but still not there.  Why?  Well - an application can crash due to poor coding: memory leaks, a sudden trip down a badly coded path that has never been exercised before, whatever.  Even on a cloud platform, such a crash will leave you with no availability - unless you are using virtual machines (VMs).  A VM image holds a copy of the working application on disk or in memory, and so can be spun up rapidly to get back to a working state.

Even better are containers - these can hold more than just the application, or less.  A container can be everything that a service requires above the hypervisor, or it can be just a single function that sits on top of a virtualised IT platform. Either way, containers can be brought up and live again very rapidly, working against mirrored data as necessary.

Wonderful.  However, the kids on the back seat are still yelling "are we there yet" - and the answer has to be "no".

What happens if your datacentre is flooded, or there is a fire, an earthquake or some other disaster that takes out the datacentre?  All that hard work carried out to give high availability comes tumbling down - there is zero continuity.

Now we need to start looking at remote mirroring - and this is what has tended to scare off too many organisations in the past.  Let's assume that we have decided that cloud is the way to go, with container-based applications and functions.  We know that the data, being live, cannot be containerised, so that needs to be mirrored on a live, as-synchronous-as-possible basis.  Yes, this has an expense against it - it is down to the business to decide if it can carry that expense, or carry the risk of losing continuity. Bear in mind that redundancy of network connections will also be required.

With mirrored data, it then comes down to whether the business has demanded immediate continuity, or whether a few minutes of downtime is acceptable.  If immediate, then 'hot' spinning images of the applications and servers will be required, with graceful failover from the stricken site to the remote one.  This is expensive - so it may not be what is actually required.

Storing containers on disk is cheap - they take up no resources other than a little storage.  Spinning them up in a cloud-based environment can be very quick - a matter of a few minutes.  Therefore, if the business is happy with a short break, this is the affordable approach to IT service continuity: mirrored live data, with 'cold' containers stored in the same location, and an agreement with the service provider that, when a disaster happens, they will spin the containers up against the mirrored data to provide an operating backup site.
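As a very rough sketch of what 'spin the containers up against the mirrored data' might look like, here is a short Python example using the Docker SDK. The image names, volume paths and the recovery list are all hypothetical, and a real service provider would wrap this in its own orchestration tooling:

import docker  # Docker SDK for Python

# Hypothetical recovery plan: which images to run and where the mirrored data sits
RECOVERY_PLAN = [
    {"image": "registry.example.com/shop/web:1.4",    "name": "shop-web",
     "data": "/mnt/mirror/shop"},
    {"image": "registry.example.com/shop/orders:2.1", "name": "shop-orders",
     "data": "/mnt/mirror/orders"},
]

def invoke_dr(plan):
    client = docker.from_env()
    for svc in plan:
        # Start the 'cold' container image against the mirrored data volume
        client.containers.run(
            svc["image"],
            name=svc["name"],
            detach=True,
            restart_policy={"Name": "on-failure"},
            volumes={svc["data"]: {"bind": "/data", "mode": "rw"}},
        )

if __name__ == "__main__":
    invoke_dr(RECOVERY_PLAN)

The cost while nothing is wrong is little more than the stored images and the mirrored data; the price of that economy is the few minutes it takes to bring everything up.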

For the majority, this will be ideal - some will still need full systems availability for maximum business continuity.  For us mere mortals, a few minutes of downtime will often be enough - or at least, downtime for most of our systems, with maybe one or two mission critical systems being run as 'hot' services to keep everyone happy.
