Connected to the mobile world, but not to the room?

Rob Bamforth

Mobile devices provide a mix of connectivity and compute power on the move for most employees, and with the huge upsurge in adoption of tablets and smartphones this personal power can be used anywhere, even when working remotely from colleagues. However, it is still important in most working environments to get together, share information, make decisions and assign actions - i.e. have meetings.


Meetings might be vital, but too much of a good thing is a problem, and meeting places in many organisations are in short supply, so meetings need to be effective and efficient.


The spread of digital visual technologies into meeting rooms has kicked out the old overhead projector and in some cases created some fantastically equipped conference room facilities with telepresence and high definition video. But the average meeting space has at best a flat screen monitor (or projector) with a connection for video input and an ethernet connection to the network.


With luck there might be instructions and adaptor cables, but with so many options - presenters expect to use their preferred tablet or smartphone as well as laptops - there will be problems. "Please talk amongst yourselves for a few minutes" is all too often blurted out at the beginning of a meeting or presentation, and across a group of waiting attendees, those few minutes cost dearly.


It should not be like this, and it ought to be possible to easily harness the technology that participants bring to the meeting to make it more engaging, collaborative and ultimately, effective.


First, wireless video connection. Pretty much all potential presentation devices, whether laptop, tablet or smartphone, will have wi-fi, so it makes sense to get rid of all the messing around trying to remember to bring or find cables and adaptors and use wireless instead.


Next, simple, secure sharing. One person presenting is fine at times, but participants need to be engaged in meetings and many will have content to share too. Seamlessly switching between presenters and even having several providing content at the same time would be valuable, so too would ensuring that participants have content made available to them after the meeting.


Finally, the outside world. Making an external connection to remote participants can add further delays to any meeting, often until outside help can be brought in. If remote attendees can connect and share content using the same secure mechanism as participants in the room, the experience is much more conducive to effective working and collaboration.


While these involve audio visual (AV) technology as well as IT, replacing existing expensive AV equipment just to get wireless connectivity or remote conferencing is an unwarranted expense, and unworkable if solutions are tied to particular AV equipment. Much of the problem is in the IT space - wireless, simple sign on and remote connectivity - and ultimately someone is going to have to make everything work and manage it all.


It will fall to IT to take the management lead, so in order to converge AV with IT, a universal 'hub' option that could be applied in any meeting space would seem a much better bet. This is the approach Intel has offered with its Unite technology. Initially developed for internal use to help Intel's own meetings go more smoothly, the software has now been released for device manufacturers to build AV integration hub 'appliances'. Based on Intel vPro CPUs, the appliances that have already appeared are sufficiently low cost to be added to an existing meeting space for the cost of a small laptop.


While a powerful processor might seem overkill, it makes the box easy to manage as a regular device on the network and gives it the capability to go much further, such as adding video conferencing or tools to automate some of the remaining drudgery of meetings, perhaps by recording audio and transcribing it for meeting minutes.


Meetings might not be the most enjoyable part of working life, but they perform a vital function in the daily flow of information. High end IT and AV can deliver stunning results for some, such as telepresence, at a cost, but simple technology appliances that can remove some of the everyday pain from work should be applicable pretty much anywhere.


For more thoughts on how to use technology to make everyday meetings more effective, check out Quocirca's latest free to download report "Smarter, connected meeting spaces".


Blue Coat, from surf control to riding the security wave

Bob Tarzey
The IT security sector is good at producing start-ups that soon disappear, seemingly without trace. Of course, this is not a bad thing. Whilst it is true that some wither on the venture capital vine, many others get acquired as the good idea they came up with (or imitated) gets recognised as a necessity and absorbed into the security mainstream.

In this way whole genres of security products have more or less disappeared. For example, most spam-filtering, SSL VPN and data loss prevention (DLP) vendors have been absorbed by the security giants - Symantec, Intel Security (née McAfee) and Trend Micro - or by other large IT vendors that deal in security, such as HP, IBM and Microsoft.

However, there is another path: for a small niche vendor to grow itself into a broad-play vendor. The aforementioned three security giants all came from an anti-virus background. Another vendor that is now challenging them started out addressing the need to make our online business lives fast, then safe, and has now joined the security mainstream - Blue Coat.

Blue Coat was founded in the mid-1990s as CacheFlow, its initial aim being to improve web performance. However, it soon saw the need to address web security as well and changed its name to the more generic Blue Coat as it added content filtering and instant messaging security to its portfolio. By 2003 it had 250 employees and turnover of around $150M.

Thirteen years on, in 2016, Blue Coat has 1,600 employees. The company last reported revenues, of around $500M, in 2010, before being bought by private equity firm Thoma Bravo in 2011 for $1.3B. Thoma Bravo has now sold Blue Coat on to Bain Capital for $2.4B. A nice mark-up, but one that reflects a lot of acquisition investment that has taken Blue Coat even further from its origins to become a vendor that can address broad enterprise security requirements.

The acquisition trail for Blue Coat started well before 2011. Entera, Packeteer and NetCache all enhanced its network performance capabilities, whilst Ositis (anti-virus), Cerberian (URL filtering) and Permeo (SSL VPN) added to the security portfolio.

The Thoma Bravo takeover saw the pace increase:
  • Crossbeam (firewall aggregation) gives Blue Coat a platform to build on both its enterprise and service provider customer base.
  • Netronome, which Blue Coat says is now its fastest growing product line, enables the inspection of encrypted web traffic, addressing the growing problem of hidden malware coming in and stolen data being egressed.
  • Solera is a full packet capture reporter; whatever we may think of service providers being required to store data long term, if regulators say they must, Blue Coat can now assist with the job. 
  • Norman Shark added malware analytics.

Since the Bain Capital takeover there have already been two more. First, Elastica, one of the early cloud access security brokers (CASBs). This gives Blue Coat the capability to monitor and audit the use of online applications - a growing need with the rise of shadow IT, as lines-of-business and employees increasingly subscribe to on-demand services without direct reference to the IT function. The other is Perspecsys, for the encryption and tokenisation of cloud data.

Through all this Blue Coat has also been investing in supporting capabilities; this includes building a threat intelligence network and the integration of all the various acquisitions. Blue Coat's delivery has typically been via on-premise appliances which still suits many of its customers, especially service providers. However, Elastica is a cloud-based service, capabilities from which will be integrated into its appliances and Blue Coat says the future looks increasingly hybrid.

As a private company Blue Coat does not disclose revenues. However, with all the acquisitions and value added over the last five years, it must be close to joining the rare club of IT security vendors with revenues in excess of $1B. Blue Coat has become, and should remain, a major player in the IT security sector, provided it does not lose its way bringing all these acquisitions together into a set of coherent offerings.

The rapid morphing of the 'datacentre'

Clive Longbottom

On-premise. Co-location. Public cloud. These terms are being bandied about as if they are 'either/or', whereas the reality is that many organisations will end up with a mix of at least two of them.

This is why it is becoming important to plan for a hybrid datacentre - one where your organisation has different levels of ownership over different parts of its total data centre environment.  It is time to move away from the concept of the singularity of the 'datacentre' and look more at how this hybrid platform works together.

This makes decision making far more strategic.  If the organisation has reached a point where its existing data centre is up for review, it may make more sense to move to a co-location facility, which then enables greater flexibility in growth or shrinkage as the organisation's technology needs change - and indeed, as the technology itself continues to change.

However, although co-location takes away the issues of ownership of the facility and all that it brings with it (the need to manage power distribution, backup, auxiliary power, cooling and so on), it still leaves the ownership and management of the IT equipment within the co-location facility to the end-user organisation.

This can, in itself, be expensive.  Few resellers offer an OpEx model to acquire hardware (although capital agreements with the likes of BNP Paribas Leasing Solutions can turn a CapEx project into more of an OpEx-style one), so with co-location organisations still have to find the upfront money to purchase hardware.  Then there are the costs of licences and maintenance for all parts of the software stack - as well as the human costs for applying updates and patches to the software and for fixing any equipment that fails.  However, where an organisation believes that it can provide a better platform for its workloads than a third party platform provider can, colocation is a great way to gain better overall security, scalability, performance and flexibility.

If ownership of the stack offers no particular benefit, this is where public cloud comes in. Infrastructure as a service (IaaS) gets rid of the CapEx issues around hardware; platform as a service (PaaS) goes further, removing the costs of licensing and maintaining the operating system and certain other parts of the stack; software as a service (SaaS) gets rid of all these costs, rolling everything up into one subscription cost.
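To make that ownership split a little more concrete, the rough sketch below shows which layers of the stack the customer still has to buy, patch and maintain under each model. It is purely illustrative: the layer names and groupings are simplifications, not a formal taxonomy.

```python
# Illustrative only: a rough sketch of who manages which layer of the stack
# under each deployment model discussed above.

LAYERS = ["facility", "hardware", "operating system", "middleware", "application"]

MODELS = {
    "on-premise":  {"customer": LAYERS,       "provider": []},
    "co-location": {"customer": LAYERS[1:],   "provider": ["facility"]},
    "IaaS":        {"customer": LAYERS[2:],   "provider": LAYERS[:2]},
    "PaaS":        {"customer": ["application"], "provider": LAYERS[:4]},
    "SaaS":        {"customer": [],           "provider": LAYERS},
}

def customer_managed(model):
    """Return the layers the customer still has to buy, patch and maintain."""
    return MODELS[model]["customer"]

if __name__ == "__main__":
    for model in MODELS:
        print(f"{model:>11}: customer manages {customer_managed(model) or 'nothing'}")
```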

None of these choices is a silver bullet, however.  For whatever reasons - whether solid and logically thought through, or more visceral - many organisations will end up with some workloads where they want more ownership of the platform, alongside other workloads where they just want as easy a way to deploy as possible.

Planning for a hybrid platform brings in the need to look well beyond the facilities themselves - how are high availability and business continuity going to be achieved?  How is security going to be implemented and managed?  How is data sovereignty impacted?  How is the end-user experience to be optimised, monitored and maintained?

The world of the datacentre is going through a period of rapid change not seen since the hegemony of the mainframe was broken by the emergence of distributed computing.  Few can see beyond an event horizon of just the next couple of years - virtualisation and cloud are still showing how far a datacentre needs to shrink; other changes in storage, networking and computing power may yet have further impacts on how future datacentres will have to change.

At this year's Data Centre World, attendees will be able to engage with vendors and customers involved in this journey.

The rise and rise of public cloud services

Bob Tarzey

There is no reason why the crowds arriving at Cloud Expo Europe in April 2016 should not be more enthusiastic than ever about the offerings on show. Back in 2013, a Quocirca research report, The Adoption of Cloud-based Services (sponsored by CA Inc.), looked at the attitude of European organisations to the use of public cloud services. There were two extremes; among the UK respondents, 17% were enthusiasts who could not get enough cloud whilst 23% were proactive avoiders. Between these end points were those who saw cloud as complementary to in-house IT or who evaluated it on a case-by-case basis.

In 2015 Quocirca published another UK-focussed research project, From NO to KNOW: The Secure use of Cloud-Based Services (sponsored by Digital Guardian), which asked the same question. How things have changed! The proportion of enthusiasts had risen to 32% whilst the avoiders had fallen to just 10%. Those using cloud services on a case-by-case basis had risen too, from 17% to 23%, whilst those who regarded them as supplementary fell from 43% to 35%.

The questionnaire used in the research did not use the terms avoiders and enthusiasts but more nuanced language. What we dubbed enthusiasts during the analysis had actually agreed with the statement "we make use of cloud-based services whenever we can, seeing such services as the future for much of our IT requirement". Avoiders on the other hand felt either "we avoid cloud-based services" or "we proactively block the use of all cloud-based services". So the research should be a reasonable barometer for changing attitudes rather than a vote on buzzwords.

The 2015 report went on to look at the benefits associated with positive attitudes to cloud, such as ease of interaction with outsiders, especially consumers, and support for complex information supply chains. It also showed that confidence in the use of cloud-based services was underpinned by confidence in data security.

The research looked at the extent to which respondents' organisations had invested in 20 different security measures. Enthusiasts were more likely than average to have invested in all of them, with policy-based access rights and next generation firewalls topping the list. However, avoiders were no laggards; they were more likely than average to have invested in certain technologies too: data loss prevention (DLP) topped their list, followed by a range of end-point controls, as they sought to lock down their users' cloud activity.

Supplementary users of cloud services took a laissez-faire approach; they were less likely than average to have invested in nearly all 20 security measures. Case-by-case users tended to have thought things through, and were more likely than average to have in place a range of end-point security measures and to be using secure proxies for cloud access.

What is the message here? The direction of travel is clear; there is increasing confidence in, and recognition of the benefits of, cloud-based services. However, for many it is one step at a time, with initial adoption limited to specific use cases. The security measures to enable all this, as the 2015 report's title points out, are not to block the use of cloud services by saying NO but to be able to control them by being in a position to KNOW what is going on.

There are two underlying drivers that can't be ignored by IT departments, whose role is to be a facilitator of the business rather than a constraint on it. First, many digital natives (those born around 1980 or later) are now entering business management roles, bringing positive attitudes to cloud services with them. Second, regardless of what IT departments think, lines-of-business are recognising the benefits of cloud services and seeking them out - so-called shadow IT, which is delivering all sorts of business benefits. Why would any organisation want to avoid that?

The Compliance Oriented Architecture - are we there yet?

Clive Longbottom
Over a decade ago, Quocirca looked at the means then being used to secure data, and decided that there was something fundamentally wrong. The concept of relying solely on network edge protection, along with internal network and application defences, misses the point. It has always been the data that matters - in fact, not really even the data, but the information and intellectual property that data represents.

To our minds, enterprise content management (ECM) has not lived up to expectations around information security: it only dealt with a very small subset of information; it was far too expensive; and it has not evolved to support modern collaboration mechanisms. It is also easy to circumvent its use, and far too easy for information assets to escape from within its sphere of control.

As the need for decentralised collaboration grew and cloud computing offered new ways of sharing information, the problem became more complex. It became harder to define the network edge as the value chain of contractors, consultants, suppliers, customers and prospects grew, and harder to ensure that the new silos of data and information being held in places such as Dropbox, Box and other cloud-based data stores were secure. In contrast to the problems with ECM, however, the problem with cloud-based information sharing systems was in trying to stop individuals from using them: usage has grown, and in many cases the organisation is oblivious to these new data stores.

Sure, these silos have evolved to provide greater levels of security - but they are self-contained, with any such security being based primarily around encrypting files at the application or email level, or managing documents/files as long as they remain within the secure cloud repository or local secure 'container' (the encapsulation of a file in a proprietary manner to apply security on that file) on the host. 

The problem with just using application- or email-based encryption is that if the passcode created by the user is not strong, it can be cracked. Keys also have to be distributed to each person that needs access to the data - and such sharing is difficult and insecure in itself. Each key created has to be managed by the owning organisation (even where key management tools are in place), which presents another problem when keys are lost and have to be recovered. Worse, any data outside the central repository is now out there forever - once received and unlocked, it can be forwarded by email and modified, leaving uncontrolled copies of itself all over the place.

The same applies to the use of containers to try to track and monitor how data is being dealt with. It is difficult, outside of a full digital/information rights management (DRM/IRM) platform, to track data across a full value chain of suppliers and customers - and it is expensive. Using containerised defences within a system still has drawbacks: the security only works across those using the same system or cloud container. Once a file leaves the container, the data is in the clear for anyone to do whatever they wish with (as described above).

To try and address the problem, Quocirca came up with an idea we called a compliance oriented architecture, or a COA. The idea was to provide security directly to data, such that it was secure no matter where it was within or outside of a value chain. At the time, the best we could come up with to create a COA was a mix of encryption, data leak prevention (DLP) and DRM. We accepted that this would be expensive - and reasonably easy for individuals to work around. 

Since then, we have seen many technical products that have gone some way towards information security, yet none, to our mind, has hit the spot of the COA.

Now, we wonder whether FinalCode has come up with the closest system yet. 

When Quocirca first spoke with FinalCode, although we liked the approach, we had worries over its interface and overall usability. We liked the technical approach - but felt that individuals may not have enough understanding of its value and operation to actually use it. With its latest release, FinalCode 5, Quocirca believes that the company has managed to come up with a system that offers the best COA approach to date. 

What does FinalCode do? It acts as a secure proxy between an information asset and the individual. Either directly through its own graphical interface or through its application programming interface (API), documents can be secured as close to source as possible - with policy being enforced by the OS and through the application being used (e.g. Microsoft Office, CAD applications, etc.) in most cases. So the sender and recipients work in the applications they are accustomed to.

Once the document to be shared is put through FinalCode, the FinalCode system encrypts it with a one-time code, and manages keys as necessary. The information creator (or a corporate policy) applies rules around how the information can be used - and by whom. Joe may have read, edit and email forward capabilities; Jane may only have read. When the document reaches them, they first have to download a very small FinalCode client (a one-time activity). From there on, everything is automated - they do not have to know any keys, and they will be informed at every step what they can do. 

So, if Jane tries to forward on the document, she will be informed that she is not allowed to do this. If she tries to cut and paste any content from the document into another one, she will be prevented from doing so.

It makes no odds where Jane is - she could be within the same organisation as the originator, a trusted partner in the value chain, or an accidental inclusion on an email list. All the actions she can take are controlled by the file originator or a corporate policy. Should Jane have received the file by accident, she won't be able to do anything with it, as her name will not be in the list created by the originator for access to the content of the file itself. If a trusted person leaves the company they work for, then the files they have access to can be remotely deleted by the originator. It also means that the document can be stored anywhere and distributed in any way - as FinalCode's capabilities are not container based, files can be used in whatever workflow suits the user or business; secured files can be output to a local disk, a network share or a cloud service, and all the restrictions and functionality are maintained.

Other functions include setting the number of times a document can be opened, including a visible or invisible watermark on documents and allowing recipients access to a file for a set time period only. 
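To illustrate the kind of per-recipient controls described above, here is a minimal sketch in Python. It is purely illustrative: the class names, fields and defaults are hypothetical and do not represent FinalCode's actual API or data model.

```python
# A minimal sketch of per-recipient file policy of the kind described above.
# Purely illustrative - not FinalCode's API; all names here are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class Grant:
    rights: set            # e.g. {"read"} or {"read", "edit", "forward"}
    opens_left: int = 10   # how many times the file may be opened
    expires: datetime = field(default_factory=lambda: datetime.now() + timedelta(days=30))
    watermark: bool = True

@dataclass
class Policy:
    owner: str
    grants: dict           # recipient e-mail -> Grant

    def can(self, user: str, action: str) -> bool:
        """Check whether a recipient may perform an action right now."""
        grant = self.grants.get(user)
        if grant is None:                   # accidental recipient: no access at all
            return False
        if datetime.now() > grant.expires:  # time-limited access has lapsed
            return False
        if action == "open":
            if grant.opens_left <= 0:
                return False
            grant.opens_left -= 1
            return True
        return action in grant.rights       # read / edit / forward etc.

policy = Policy(owner="originator@example.com", grants={
    "joe@example.com":  Grant(rights={"read", "edit", "forward"}),
    "jane@example.com": Grant(rights={"read"}),
})

print(policy.can("jane@example.com", "forward"))  # False - Jane may only read
print(policy.can("eve@example.com", "open"))      # False - not on the list at all
```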

This is all managed without FinalCode 'owning' the data at all. Although FinalCode operates as a cloud service, it is really only operating as a key management and functional control mechanism. As far as it is concerned, all information is just ones and zeros; it never actually sees the data in the clear. Encryption is carried out at the originating client; decryption is carried out at the receiving client; and the receiving client obtains the usage permissions, which are all maintained by the FinalCode server.

With pricing based on low-cost subscriptions, FinalCode is a system that can be rolled out to pretty much everyone within an organisation, providing this high level of COA. There will be problems for FinalCode - there always are for vendors. It is, as yet, still not a well-known name. It also runs the risk of being confused with the likes of Dropbox and Box. However, with the right messaging, FinalCode can deal with the second problem (indeed, it should be able to work well alongside such cloud stores) - and as its usage grows, its name should spread organically.

So, when the business asks from the back seat whether they are there yet in their seemingly endless journey to a COA, IT can now honestly respond with an "almost there, yes". (Note: since writing this article, another company, Vera, has come to Quocirca's attention that looks similar. We will be investigating...)






The great transition from constrained certainty to good-enough risk

Bernt Ostergaard

For much of the past 150 years of telecoms history, customers have had little influence on the evolution of the Wide Area Network (WAN). Corporate WAN strategies centered on adapting to what was available. There were certain bedrocks of stability and authority that you did not question - the only question was: how much of it could you afford?

The upside has been global availability; the downside has been high cost and sub-optimal utilisation. With the enormous expansion of available bandwidth and the shift from proprietary hardware and software to software defined networking, this is about to change - giving enterprise users the chance to simultaneously lower telecom costs and expand available capacity.

The PTT monopolies

When it came to WAN service providers, choices emerged in the early 1980s. Until then the national telco infrastructure was built and operated by the national Post, Telegraph & Telephone (PTT) monopoly, and its pricing was not cost based, but rather determined by how much revenue it, as a public service, wanted to raise. International services relied on bilateral service and tariff agreements between individual PTTs.

For the business executives it meant predictability. Telecoms was a purely technical and engineering issue. Communications was a punitive cost item on the company books - like it or lump it!

The 1980s changed all that. The first challenge came from MCI Communications in the US, which had built a privately owned microwave link between Chicago and St. Louis to compete with the incumbent's long-distance service. The antitrust action that followed led to the break-up of the American Telephone & Telegraph (AT&T) monopoly in 1984, the emergence of the regional Bell companies and the rapid decentralisation of telco infrastructure ownership.

This event forced European governments to gradually open their markets to competition from the mid-1980s, with the final competitive push coming from international mobile telephony in the early 1990s. With the emergence of Global System for Mobile Communications (GSM) technology and standards, European governments decided that each country should have at least three contending mobile service providers - the leading one was invariably the national telco, the second was most often a neighbouring PTT, but the next ones were the real challengers: Vodafone, Orange and Virgin in the UK, Bouygues and Vivendi in France, Comcast, CenturyLink and Level 3 in the US, PCCW in Hong Kong and KDDI in Japan, to name a few front-runners.

How has choice changed business thinking?

Executives now weigh the business value of expensive, guaranteed-quality connectivity against cheaper and more flexible best-effort connectivity. With a 1000-fold increase in available bandwidth, connectivity not only becomes very price competitive, it also shifts from being a cost item to becoming a possible revenue generator for the enterprise. The CTO is no longer in the dog house, responsible for 'major costs to the company balance sheet', but an entrepreneur able to create new opportunities for the lines of business.

As corporate data users, we have become more comfortable in the 2000s with good-enough connectivity, and less concerned by scare stories about the dire consequences of business comms disruptions, such as:

  • Banks going out of business if their systems go offline for a few hours or days,
  • ERP systems being unable to work in less than five-nines uptime environments,
  • Hackers threatening company survival when trade secrets and customer details are stolen,
  • Professional communications being unable to rely on Skype or other consumer video conferencing services due to the lack of any quality of service (QoS) guarantees.

These scares are all being disproved by everyday company resilience, and by user acceptance of minor service disruptions.

Fundamental voice telephony has taken us on the same technical journey, from clear, uninterrupted, echo-free analogue telephony in the 1970s to mobile calls today that vary significantly in call quality and come with frequent degradations, interruptions and cut-offs.

There is simply more upside in today's digital communication scene! We have achieved mobility, lower cost, greater bandwidth, media diversity and much higher levels of video, voice and data integration. So our perception of risk has changed - we are less risk averse, having experienced the advantages of digital integration.

What do SDN and SD-WAN mean for telcos, vendors and the enterprise?

However, the infrastructure telcos are still here, and the telco equipment vendors still sell networking equipment with proprietary software that doesn't easily communicate with competitors' equipment.

This is what the shift to Software Defined Networking (SDN) aims to address, by defining network functions in software that can be installed on multi-purpose hardware. Essentially, by separating software from a specific hardware platform, it becomes much easier, quicker and cheaper for telcos and service providers to renew and improve their network infrastructure. Vendors that are slow to adapt risk becoming low-margin commodity vendors if they do not own the software or, worse, going out of business if commodity x86 hardware or cloud platforms can be used instead.

However, enterprise users must still subscribe to a wide range of distinct infrastructure services encompassing mobile and fixed-line voice, video and data, using a limited subset of equipment to maintain compatibility; this does not allow them to pool all their network capacity into a single virtual connection.

This is what Software Defined Wide Area Networking (SD-WAN) aims to address, by concatenating all the network access channels and managing them as a single virtual channel for any kind of WAN traffic. SD-WAN components such as encryption, path control, overlay networks and subscription-based pricing are well known, but it is the addition of orchestrated delivery and unified management capabilities that is now driving SD-WAN momentum.
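To make the path-control idea more concrete, here is a toy Python sketch of steering traffic classes over a pool of links. The link metrics, application classes and thresholds are invented for illustration; a real SD-WAN controller measures link health continuously and applies far richer policy.

```python
# A toy sketch of SD-WAN path control: several underlay links are pooled and
# traffic is steered per application class. All figures below are made up.

LINKS = {
    "mpls":      {"latency_ms": 20, "loss_pct": 0.1, "cost_per_gb": 5.0},
    "broadband": {"latency_ms": 35, "loss_pct": 0.8, "cost_per_gb": 0.5},
    "lte":       {"latency_ms": 60, "loss_pct": 1.5, "cost_per_gb": 8.0},
}

POLICIES = {
    # application class -> requirements the chosen link must satisfy
    "voice":  {"max_latency_ms": 30,  "max_loss_pct": 0.5},
    "email":  {"max_latency_ms": 200, "max_loss_pct": 5.0},
    "backup": {"max_latency_ms": 500, "max_loss_pct": 5.0},
}

def pick_link(app):
    """Choose the cheapest link that still meets the application's policy."""
    policy = POLICIES[app]
    candidates = [
        name for name, m in LINKS.items()
        if m["latency_ms"] <= policy["max_latency_ms"]
        and m["loss_pct"] <= policy["max_loss_pct"]
    ]
    if not candidates:
        raise RuntimeError(f"no link currently meets the policy for {app}")
    return min(candidates, key=lambda name: LINKS[name]["cost_per_gb"])

for app in POLICIES:
    print(app, "->", pick_link(app))
```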

Intel Stepping Outside - If it computes, it connects

Bernt Ostergaard

Intel wants to portray itself as much more than a chip company leading a declining PC market. Intel's direct computing revenues shrank to 60% of total revenues in 2015. Its hardware data centre business with IoT elements and memory technology contributed 30% of revenue. Software & services and network products contribute the remaining 10%. Network related activities are the fastest growing part of Intel's business, and will continue to lead in the 2016-2020 timeframe.

The core infrastructure

The Intel contribution to NFV/SDN has focused on enabling and optimizing infrastructure with data insights such as predictive analytics, in-memory analytics and predictive simulation, exemplified by:

  • Data Plane Development Kit (DPDK) - a set of data plane libraries and network interface card (NIC) drivers that provide a programming framework for fast network packet processing on general purpose processors.
  • Open Network Platform (ONP) Switch and Server Reference Architecture, which provides hardware optimized for a comprehensive stack of the latest open source software releases, as a validated template to enable rapid development. The Reference Architecture includes architecture specs, test reports, optimization scripts and support for Intel network interfaces from 1GbE to 40GbE.
  • High Performance Computing (HPC) with the Intel framework, where Intel is working to create a standard open solution.

Intel has also launched a number of Software Defined Infrastructure (SDI) initiatives focused on compute/network/storage orchestration:

  • Intel's SNAP project is an open-source telemetry framework that collects and exposes data about the infrastructure underlying a cloud environment. For now, SNAP is its own open-source project on GitHub, but in the future it could land under the auspices of a larger open-source group.
  • Intel Rack Scale architecture aims to increase data centre flexibility, enable greater utilization of assets, and drive down the total cost of ownership for IT infrastructure. Trial systems will launch with 6 OEM partners in 2016.
  • Secure Compliant Hybrid Cloud. Together with VMware and HyTrust, Intel is developing cloud integrity technology that allows an IT department to control where a virtual workload can land, to comply with safe harbour or national data storage and retention legislation.
  • NFV deployments, working with eight European telcos.

Defining 5G Mobility

Certainly, the momentous 2015 shift, in which mobile data traffic overtook fixed-line traffic, speaks volumes about the importance of developing mobile technology capabilities. 2015 saw the emergence of 4G LTE Advanced services, and 5G hovers on the 2016-2020 horizon, with technical trials already planned for 2016 and precursor transmissions planned for the 2018 Winter Olympic Games in South Korea and the 2020 Summer Olympic Games in Japan.

Intel believes that smartphones will continue their 40% YoY growth from 2016 to 2020, and cites GSMA figures indicating that mobile broadband connections will grow from 2.2bn in 2013 to 5.69bn in 2020 - representing 15% YoY growth. Intel is investing significant resources in the 5G standards process, primarily in IEEE, 3GPP, Alliance for Power, and the One M2M Wireless Broadband Alliance.

What does 5G entail?

  • The 5th generation mobile infrastructure must address 100X capacity needs.
  • The network must provide very low latency and very high reliability; Intel expects 5-10 IoT devices for every PC, laptop, tablet and handset.
  • The network must be able to handle extreme device density (Intel expects 200bn Internet-connected devices by 2020; Ericsson is more conservative, expecting 50bn devices).
  • It must meet end-to-end performance requirements going beyond the user interface to include the application, because data traffic growth will depend on available apps, notably for Big Data and IoT.

Intel sees its sweet spot as being in distributed processing, both on the device and in the network.

The World Radiocommunication Conference (WRC) has allocated spectrum for 5G services in unlicensed bands in the 2-6 GHz range to business. The 6-30 GHz and 30-100 GHz frequencies have been allocated to licensed 5G services. High-frequency bands will typically be used for very dense deployments, for which one can expect much more dynamic traffic variations. Intel provides ray tracing for this higher frequency channel communication. The first 5G component and mobile trials will begin in 2016, with Intel involved in trial work with Verizon, DoCoMo and SingTel.

Security

Intel, with its McAfee acquisition, is in an interesting position to develop multi-factor authentication, e.g. OS login, VPN login, biometric ID and walk-away locking. With its hardware platform, Intel can do away with serial identification processes and instead evolve simultaneous automated processes using a management platform in the chip - one which is inaccessible to anyone but the authorised user. Stolen biometric data is difficult to erase, so relying on it alone may be a weakness. Instead, the IT department can specify the minimum policy requirements to adhere to (in accordance with company governance policies), and the user is then free to define additional login parameters.

This is of course especially relevant for all mobile devices, notably phones and tablets. Intel is making this a part of its 2016 6th generation vPro core firmware, and synched with its WLAN, authentication middleware and authentication phone apps.
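As a rough illustration of that policy split (and only an illustration - the factor names and logic below are hypothetical, not Intel's implementation), the idea is simply that login succeeds only when every IT-mandated factor, plus any the user has added, is present:

```python
# Illustrative sketch: IT sets a minimum set of authentication factors and the
# user may add more but never fewer. Factor names here are invented.

IT_MINIMUM_POLICY = {"device_pin", "proximity_phone"}   # required by governance
USER_EXTRA_FACTORS = {"fingerprint"}                    # optional, user-chosen

def login_allowed(presented_factors):
    """Allow login only if every mandated and user-added factor is present."""
    required = IT_MINIMUM_POLICY | USER_EXTRA_FACTORS
    return required.issubset(presented_factors)

print(login_allowed({"device_pin", "proximity_phone", "fingerprint"}))  # True
print(login_allowed({"device_pin", "fingerprint"}))                     # False - phone missing
```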

Stacking up against the competition

With a presence in so many markets, Intel of course faces intense competition from:

  • Network infrastructure vendors like Ericsson, NSN, Huawei and Cisco,
  • Mobile chip designers like ARM and Qualcomm,
  • Security vendors like Symantec, CheckPoint and FireEye, and
  • Storage vendors like EMC and HP.

Intel has decisively moved beyond its chip-based comfort zone in recognition of where market growth is shifting to. It has a unique position in its chip technology business, and the financial strength, production capacity and skills to support such market shifts. With Brian Krzanich at the helm, the behemoth is even tugging at the grass roots with its support of the Maker movement, and seems willing to risk its Chipzilla dominance to stay ahead.

Intel Stepping Outside - with its eye on the clouds

Bernt Ostergaard

Intel continues to surf the digital computing jet stream, providing the engine power to drive computing across global markets. At the recent European Analyst Event in London, speaker after speaker affirmed that founder Gordon Moore's law still holds sway over the company and its product plans. But Intel is now clearly stepping out from under the covers of desktops and laptops.

Intel's client computing business revenues have dropped from 75% of total turnover in 2010 to 60% today, giving way to Intel's activities in the data centre, in IoT elements, memory and software & services, and in networks (which alone contributed 10% of Intel's 2015 revenues).

Cloud computing

Cloud computing today is driven by people accessing apps; tomorrow it will be driven by things communicating with things, the so-called Internet of Things (IoT). That change will combine data centres, memory and the tiered connectivity required by IoT. Intel is particularly interested in two layers:

  • The Super-7 cloud giants: four in the US (Facebook, Google, AWS, Microsoft) and three in China (Alibaba, Tencent, Baidu)
  • The Next-50 cloud providers (many of them in Europe), which are growing fast to reach a 50/50 balance with the Super-7 in 2016

Intel is committing more resources to Europe and Asia because cloud growth is faster outside the US. 66% of cloud usage is consumer based, and 33% is business, which is split between conversion of existing on-prem DC to cloud, and new usages complementing existing on-site usage. Business usage is growing faster than consumer cloud usage, and Intel believes that hybrid cloud deployments are the future over the next 5-10 years, with big opportunities for IoT in Europe in automation, energy and manufacturing, with telcos providing the back-end support.

Integrated Designs

The Intel Data Centre business was boosted by the $16.7bn (£11.2bn) deal to acquire programmable chip maker Altera, bringing in fast field programmable gate array (FPGA) technology that is a better alternative to ASICs for a larger number of higher-volume applications, thus helping to accelerate the virtuous cycle between the data centre and IoT. On the memory side, Optane SSDs and DIMMs based on 3D XPoint building blocks are defining the near future for high performance storage and memory. This opens up a whole new tier of persistent server memory. 2016 will provide fascinating views on how that is adopted by business and public sector users.


3D XPoint faster-than-Flash media technology

Flies in the ointment

Of course not everything is rosy in the Intel camp. Intel's market expectations have been somewhat over-optimistic since 2009:

  • Growth in markets that were supposed to drive expansion has disappointed.
  • Smartphones have become the first purchase, rather than tablets and ultra-thin laptops.
  • The positive China market stats are simply misleading; every company selling into China reports drops in sales (except Apple).
  • The strong dollar makes sales hard, so even when dollar prices drop they still translate into increased prices in foreign markets. While PCs saw continuous price drops between 2000 and 2007, and notebook prices dropped from 2000 to 2010, prices in local currencies have gone up again since 2013.
  • While sales in established markets are stable, emerging markets' consumer take-up has disappointed, dropping by 35% from 2010 to 2016.
  • Replacement cycles are slowing down, and battery life still means phablets cannot supplant tablets.

So after dominating the PC space since 1981, Intel is facing the decline of the 'fat desktop' and shifting its attention to cloud services. Intel intends to roll with the punches, extending the functionality of its hardware and software components while minimising channel-to-market conflicts by staying away from the device and server product space. It remains to be seen whether it has the management acumen to steer this new course.

At least 1 in 5 European enterprises lose data through targeted cyber attacks

Bob Tarzey

TalkTalk, J D Wetherspoon, Carphone Warehouse etc.: there has been much reporting of cyber-attacks targeted at UK and European businesses during 2015. Those organisations that have not been hit will be breathing a collective sigh of relief and, let's face it, it could have been any of them.

 

Recently published Quocirca research shows that the overwhelming majority of European businesses now recognise the reality of targeted attacks: 24% say it is inevitable they will be hit at some point and most of the rest are concerned; just 6% are complacent, thinking the problem of targeted attacks exaggerated or admitting a lack of awareness.

 

The high profile stories are not the only reason for concern. Much of it must be put down to bitter experience. The research behind Quocirca's new report, "The trouble at your door", which was sponsored by Trend Micro, surveyed 600 European enterprises. Of these, 369 admitted they had been targeted, with 251 saying the attackers had been successful. In 133 cases this led to a data loss (64 said it was a lot of data) and 94 said they had suffered serious or significant reputational damage.

 

In other words, beyond the headlines about individual attacks, quantitative data shows that European enterprises have at least a 1 in 5 chance of losing data through a targeted cyber-attack (133 of the 600 surveyed, around 22%). With the final details of the EU General Data Protection Regulation (GDPR) agreed this week, bringing fines of up to 4% of global turnover, this is not good news. Grimmer still, the reality is probably worse; around half of the 231 who did not think they had been targeted were not really sure, and many of them may have lost data too but simply did not know about it.

 

The majority think the most likely attackers will be cybercriminals, rather than hacktivists, nation states or other commercial organisations and the target is mostly payment card and/or personal data. However, there is a positive angle; cybercriminals mostly target an organisation because its defences are weak, not because there is some specific malicious intent. In other words, if your organisation's defences are hard to overcome, the criminals may just move on to one of your rivals.

 

So, as we head into a New Year, with the onslaught of targeted cybercrime showing no sign of abating, what can be done? The research shows that certain measures reduce the likelihood of an attack being successful and limit the seriousness of the outcome when the inevitable does happen.

 

For example, those with technology in place to detect previously unseen malware were more likely to discover attacks underway and/or report that such attacks only had a minor impact. This is because they were able to detect attacks in hours rather than days or weeks. Even when attacks succeed other measures can make a difference; those with breach response plans in place were considerably less likely to report reputational damage.

 

Of course, certain vendors will tell you that their technology will stop all attacks upfront, whilst others will tell you this is not possible and that you must protect all your data at source using their product. No organisation can afford to take such claims at face value; a prudent defence strategy must include a range of before, during and after measures. Quocirca's report looks in more detail at the effectiveness of such measures, as well as listing the worst 40 attacks recorded during the research.

 

Quocirca's report - The trouble at your door - was sponsored by Trend Micro and is free to download at this link:

https://resources.trendmicro.com/cyberattacks?_ga=1.86817944.1634337558.1450108759

Neustar to securely guide things

Bob Tarzey
The growing number of network attached devices that constitute the Internet of Things is a big opportunity for a little-known vendor called Neustar to increase its stake in the European IT security market. This will happen thanks to Neustar's heritage in the provision of real-time information services.

Neustar developed this core expertise providing a mundane but essential service in the US market, running a contract for the US Number Portability Administration Center (NPAC) covering both landline and mobile telephone services. Neustar has acted as a neutral broker between telephony service providers, so that when a customer chooses to move from one to another, their number can be traced and moved to the new provider. The "neutral" is where the "Neu" in Neustar comes from.

The contract is to be terminated; however, Neustar believes this heritage will serve it well across all the other services it has acquired and developed, and will also enable it to further expand its horizons beyond the USA. Eighteen years playing a central role in the US telephony market has given Neustar two key assets. First, experience of building and running infrastructure to support high-volume, real-time information services. Second, Neustar has accumulated a large database of basic information about US consumers.

This information is personal and cannot be divulged directly; however this has not stopped Neustar from considering how it could develop chargeable side-line services. It realised that whilst it could not divulge certain information, it could use its necessarily accurate data to verify the quality of other companies' data - "yes, your information agrees with ours", or "no, your data is out-of-date". 

This has led Neustar into the broader provision of real-time marketing information services. For example, in 2010 Neustar acquired Quova, an IP geolocation vendor that reports in real-time the likely city in which the user of a given IP address is located (Neustar claims 99.1% accuracy via a PwC audit), and in November 2015 it acquired MarketShare (a digital marketing measurement vendor).

In IP geolocation we can start to see how Neustar has become more involved in IT security. Knowing the likely origin of an IP address is useful for online sales and marketing and for controlling the use of online services (for example, the BBC used Neustar to limit access to its iPlayer), but also for helping to identify suspicious online behaviour - "what is that device doing in that location?"

Neustar's ability to provide fast real-time information services also took it into the domain name services (DNS) business. It provides authoritative domain name services - for example, it runs the .NET domain - but also, through its UltraDNS service, runs the DNS requirements of large enterprise customers. To provide further value and help customers improve the performance of websites and online applications, Neustar has also developed a line of web performance management services, the need for which is underlined by a research report published by Quocirca (and sponsored by Neustar) earlier in 2015 - Online domain maturity.

DNS has also led Neustar further into the security market. DNS infrastructure is vulnerable to volumetric distributed denial of service (DDoS) attacks. Using hardware from Arbor (the leader in the field), Neustar built out its own DDoS protection capability. It soon realised that the scale of the infrastructure it was building could be used to protect other organisations, and in 2011 it launched its SiteProtect service. It has some weighty competitors, including the leading provider of content distribution services, Akamai, which acquired Prolexic, another provider of online DDoS protection services, in 2013.

Put all this together and you can see why Neustar sees the growth of the Internet of Things (IoT) as an exciting opportunity. It is important to know that a thing is what it purports to be and that it is operating from an expected location. As many things communicate directly with each other (machine-to-machine/M2M), there is a need for such verification to be done very fast: in real-time. Neustar's DNS, geolocation and a new registry of things all have a role to play here.
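As a simplified illustration of that kind of real-time check - not Neustar's actual registry or geolocation service; the device IDs, IP addresses and cities below are invented stand-ins - the logic might look something like this:

```python
# A simplified sketch: is this device known, and is it talking from somewhere
# we expect? The registry and the geolocation lookup are hypothetical stand-ins.

DEVICE_REGISTRY = {
    # device id -> cities it is expected to operate from
    "meter-0042":  {"London", "Reading"},
    "sensor-0917": {"Manchester"},
}

def geolocate(ip_address):
    """Stand-in for an IP geolocation lookup returning a likely city."""
    fake_geo_db = {"203.0.113.10": "London", "198.51.100.7": "Oslo"}
    return fake_geo_db.get(ip_address, "unknown")

def verify(device_id, ip_address):
    """The device must be in the registry and appear in an expected location."""
    expected = DEVICE_REGISTRY.get(device_id)
    if expected is None:
        return False                      # unknown thing: reject outright
    return geolocate(ip_address) in expected

print(verify("meter-0042", "203.0.113.10"))   # True
print(verify("meter-0042", "198.51.100.7"))   # False - wrong part of the world
```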

Furthermore, new Quocirca research to be published next week - The many guises of the Internet of Things - shows that one of the greatest concerns about the Internet of Things is the increased attack surface it will expose, which will need protecting from, amongst other threats, denial of service attacks. The new research identifies many benefits that UK businesses expect the IoT to herald. It also underlines the need for real-time information and protection services to ensure the reliable performance of IoT applications. A chance for Neustar to shine bright in Europe as its light in the USA fades a little.







Store Wars - The Purse Awakens

Rob Bamforth

In the history of mobile payments, it seems that much has been promised and many technologies and innovations have appeared, but a universal payment system still seems far, far away. Yes, we have PayPal (former member of the trade federation with eBay) for online payments, but the in-store digital payment experience is complicated and patchy.


The reality for most people (if we ignore digital currencies such as Bitcoin) is that any digital transaction will be linked to a bank account. A payment instrument is required to take funds from the account - whether in advance to load up another token, as an immediate debit, or as credit for the account holder, with a delay in taking the funds.


Plastic cards issued by the banking empires provide debit and credit instruments, and increasingly debit cards are contactless EMV cards (Europay, MasterCard, Visa). Pre-loaded tokens are also typically plastic cards, not generally issued by a bank, but by a rebel alliance of non-financial organisations. These work well in closed systems, for example where employees use them for cashless payments inside their organisation, say in vending machines or staff canteens, or public closed systems such as transport ticketing and payments, like the Oyster card.


Plastic cards have long been the payment instrument of choice, but perhaps these are not the final devices we should be looking for, and we should move along to something new? 


Once mobile phones became one of the three universally carried items (keys, wallet/purse, phone), there was an opportunity for mobile operators to step in. But despite some regional successes in mobile payments and transfers (e.g. Vodafone's M-Pesa), simple-to-make payments over SMS and the potential of NFC (near field communication - another contactless mechanism), many mobile operator options still jar-jar. The problem is not really one of technology, but of driving universal retailer and user adoption.


However, the next wave of mobile payment solutions has already dropped out of hyperspace - Google's Android Pay, Apple Pay and Samsung Pay - and unlike the operators' offerings these span multiple geographies, as well as different devices.


They also bring something else that will offset any lingering disturbing lack of faith in mobile payments - user desire. This emanates not only from the shiny devices which have style appeal beyond their functionality, but also from their ability to deliver a market and appetite for applications and media content. Users are comfortable with spending their republic dataries (or local equivalent economic currency units) on buying apps and in-app purchases.


However, while this familiarity has led to much needed confidence in the security of making payments, the transition into mobile payments for other goods in physical locations has not been entirely seamless. Thus far, Google, despite being first with both NFC-phones and its Wallet in 2011, started off with a pretty poor user experience and has not been sufficiently aggressive at promoting its technology - this is critical for widespread adoption as any payment mechanism has to be available whenever a user might want the convenience of using it.


Despite being a bit later adding a payment mechanism, Apple has had a Wallet (originally badly named Passbook) in place since 2012. Recognition that wallets hold receipts, tickets and other items of value apart from payment instruments is important to user acceptance, but so too are numbers, and with contactless payment alongside the wallet integrated into phones from the iPhone 6 range onwards, Apple looks set to ensure the other players get a bad feeling.


It might seem that businesses looking to receive mobile payments still have many options, but Apple has put a lot of effort into security, and critically privacy, for purchasers. So much so that banks are very enthusiastic, in some cases paying Apple for transactions due to the reduction in the cost to the bank of anti-fraud provisions. This appeal to the banks is positive for user adoption as well, and will make it a more interesting proposition for retailers.


Google is sure to fight back now that its Wallet's second iteration, Android Pay, has started to roll out, and Samsung picked up some interesting technology (Magnetic Secure Transmission - MST) when it acquired LoopPay, which massively improves its potential retail reach. MST means that phones with Samsung Pay will work on any payment terminal that accepts regular swiped cards, as well as NFC.


Ensuring that each payment system is broadly accepted is also a problem.  At this stage of the market (r)evolution, this broad acceptance is more about getting retailers to have the right equipment in place for NFC payments.  However, what would fragment the market would be to have the old Visa/Mastercard, American Express, Diners Club battles - with Diners Club losing out big time, and American Express only being accepted in certain places.


It is in the best interests of the Republic and the Federation to ensure that there is fair play for pay - there is enough in the way of transactions out there for the main battle to still be around the overall device and its capabilities, rather than solely around the means of retail transaction payment.


However, as well as having the contactless technology in place with retailers and in the hands of consumers via their phones, the banks need to be part of the rebel alliances. In this regard, Samsung is lagging Google, but Apple has come up very fast with its approach to privacy. 1983, when the first commercial mobile phone (the Motorola DynaTAC 8000X) and Return of the Jedi appeared, was a long time ago, but it now looks like mobile payments will finally become a force to be reckoned with.


Containing the application security problem

Bob Tarzey
The benefit of containers for the easy, efficient and portable deployment of applications has become more and more apparent in recent years, especially where application development and delivery is continuous through a DevOps-style process. The trend has been helped by the availability of open source container implementations, the best known of which is Docker, as well as proprietary ones such as VMware's ThinApp.

Whereas a virtual machine has all the features required of a full physical device, containers can be limited to just those needed for the application itself; for example, if a given application has no need to store and retrieve local data then no disk i/o functions need be included. This makes container-deployed applications portable, compact and efficient; many application containers can be deployed inside a single VM. There are also two big implications for application security.

The first is all about how secure the DevOps process and container-based applications are in the first place. Such development often relies on a series of pre-built software layers (or components), about which the developer may know little; their security cannot go unchecked. Furthermore, when deployment is continuous, security checks must be too. This has led to the rise of new security tools focussed purely on container security. One start-up in this area that has been attracting recent attention is Twistlock.

Twistlock's Container Security Suite has two components. First there is image hygiene, the checking of container-based applications before they go live; for example scanning for known vulnerabilities, the use of unsafe components and controlling policy regarding deployments. Twistlock has announced a partnership with Sonatype, a software supply chain company focussed on the security and traceability of the open source and other components that end up in containerised applications. Black Duck is another software supply chain vendor providing similar capabilities.
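
To make the idea of image hygiene concrete, the toy Python sketch below checks a container image's component manifest against a list of known-bad versions before allowing deployment. It is purely illustrative - the package names, versions and policy are assumptions, and real tools such as Twistlock or Sonatype work against full vulnerability feeds rather than a hard-coded dictionary.

    # Illustrative sketch only: check a container image's component manifest
    # against a (hypothetical) feed of known-vulnerable versions before deployment.
    from typing import Dict, List, Tuple

    # Hypothetical vulnerability feed: package -> set of known-bad versions
    KNOWN_BAD: Dict[str, set] = {
        "openssl": {"1.0.1f"},   # e.g. a Heartbleed-era build
        "bash": {"4.3.0"},       # e.g. a Shellshock-era build
    }

    def image_hygiene_check(components: List[Tuple[str, str]]) -> List[str]:
        """Return a list of policy violations for the given (package, version) pairs."""
        violations = []
        for package, version in components:
            if version in KNOWN_BAD.get(package, set()):
                violations.append(f"{package} {version} has known vulnerabilities")
        return violations

    if __name__ == "__main__":
        manifest = [("openssl", "1.0.1f"), ("nginx", "1.9.6")]
        for problem in image_hygiene_check(manifest):
            print("BLOCK DEPLOYMENT:", problem)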

Second is run time protection, detecting misconfigurations and potential compromises and, if necessary, preventing a container from being launched in the first place or killing active ones that are considered to have become unsafe. Twistlock has just announced a partnership with Google, where its suite is being closely integrated with the Google Cloud Container Engine. As Google pointed out to Quocirca, problems with deployed applications are inevitable; you have to have checks in place.

Of course, there is nothing new about checking software code for vulnerabilities before deployment and scanning applications for problems after deployment. Broader software security specialists such as Veracode, White Hat, HP Fortify and IBM AppScan have been doing so for years, using the terms SAST and DAST (static/dynamic application security testing) for pre- and post-deployment checking respectively. However, they will need to catch up with the agility of those that have set out to protect the emerging requirements of dynamic DevOps and containerised approaches. Twistlock and its ilk are potentially ripe acquisition targets as venture investors look for a return.

The second implication for security is that containerised applications can themselves improve software safety through their limited access to resources. If all an application needs to do is crunch numbers, then give it disk i/o but no network access; that way it can read data from disk but not exfiltrate it, even when compromised.
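
As a rough illustration of that least-privilege idea, the sketch below uses the Docker SDK for Python to launch a hypothetical number-crunching image with its network disabled, its root filesystem read-only and all Linux capabilities dropped; the image name and volume paths are placeholders, and the exact keyword arguments may vary between SDK versions.

    # Sketch using the Docker SDK for Python (docker-py); flags may differ by version.
    import docker

    client = docker.from_env()

    # Run a hypothetical number-crunching image with disk access but no network:
    # it can read its input volume, yet has no route to exfiltrate data if compromised.
    container = client.containers.run(
        "example/number-cruncher:latest",   # hypothetical image name
        network_disabled=True,              # no network interfaces at all
        read_only=True,                     # root filesystem mounted read-only
        cap_drop=["ALL"],                   # drop all Linux capabilities
        volumes={"/data/input": {"bind": "/input", "mode": "ro"}},
        detach=True,
    )
    print(container.status)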

Some have taken this approach to extremes; for example, two vendors, Bromium and Invincea, use a container-like approach to protect user end points. Bromium isolates every task on a Microsoft Windows device so that, for example, a newly opened web page cannot write a drive-by payload to disk, as it is not given the access it needs to do so. Some may question the overheads of doing this, but it certainly increases security. Menlo Security claims to keep such overheads down by containing just higher-risk activities such as opening emails and web pages. Another vendor, Spikes Security, focusses just on web browsing; its approach is to contain pages on a proxy server before sending on clean content.

Containerisation looks like it is here to stay, helping to enable continuous, agile software development. This throws up security challenges but also helps solve some of them.

FireEye - Ninja of Incident Response

Bernt Ostergaard | No Comments
| More

When a bank is attacked by armed robbers intent on stealing money, public sympathy is with the bank. When the same robbers return over the Internet to steal the bank's customer information, public sentiment turns against the bank. Similarly, network security companies providing DDoS protection or encryption services to corporate sites are deemed good, but when their services are employed to protect contentious sites, they are deemed a menace. A security posture must therefore be accompanied by ready-laid plans for disaster recovery and for re-establishing customer trust.

Despite spending some $30bn annually on Incident Response (IR), every major corporation and government institution worldwide has been hit by severe and successful hacker attacks stealing money, customer information, intellectual property, strategic data and more. Companies incur significant financial losses (estimated this year by one analyst firm to be around 1.6% of annual revenues), along with dents in customer trust and lost competitive edge. Only one-third of these breaches are discovered by the victimised company itself, and perpetrators have on average been inside the victim organisation for more than six months before discovery.

This is the market that FireEye addresses: rapid discovery and prevention of attacks and efficient cleansing of corporate sites once an attack has been discovered. FireEye emphasises that IR is not just about discovering and preventing attacks, it's also about preparing a company to handle breaches and thus minimising the damage in the aftermath of a successful attack.

FireEye - An American in Europe

At its first European analyst briefing event in London, FireEye's EMEA management team and key technologists provided an in-depth view into the murky underworld of Internet crime, and the tools and procedures FireEye uses to protect its 3,500 customers worldwide. It was one of those presentations where you are not provided with a copy of the slides, and analysts are reminded repeatedly not to mention company names or customer details.

FireEye is today a red-hot Internet security company entrusted with IR security management in thousands of global corporates and scores of national governments. The company is loaded with cash, talent, products and services underpinning its 400% growth curve from 2012-15. The hockey stick really took off after the company acquired Mandiant at the end of 2013. But the company remains a well-kept secret in the European corporate world.

The FireEye global platform, distributed across 8 regional SOCs (Security Operations Centres), inspects 50 billion objects daily, with some notable successes.

FireEye claims to have discovered 16 of the latest 22 zero-day exploits - more than all the other IR providers together. The SOCs and the company's 300 analysts look for interesting and unusual activities on the wire, using very aggressive algorithms to identify them. In that huge 'haystack' of data, the platform picks out any piece of hay that even remotely resembles a needle, and then determines whether this needle (among the many other needles found in the haystack) is potentially harmful to customers.

The FireEye 20:20 Focus

FireEye builds its business on three pillars:

  • Technology - primarily its analytical engine, which inspects gigabytes of network traffic using virtual machine introspection (VMI), the analysis mechanism developed by company founder Ashar Aziz back in 2004.
  • Intelligence Gathering - from its incident handling, its far-flung sensor net and close contacts with customer CIRTs (Computer Incident Response Teams).
  • Security Expertise - 10 years' experience with APT attacks.


Classic on-premise antimalware software must reliably block malware upon entry; if one piece is missed, on-premise defences can be deactivated. With VMI, on the other hand, the antimalware runs on a host outside of the customer data centre, and so cannot be deactivated by malware that has subverted the customer system.

What are the challenges facing FireEye today?

Lack of corporate presence - especially in Europe - is the top business expansion priority. Being a relatively young player in the IT security space, FireEye opted in its early years to concentrate on the US government sector. Governments remain the largest vertical for the company today, with 77 governments on its client list.

Better funding after 2009 and the Mandiant acquisition helped FireEye to expand globally and open up more corporate business.

But the company still lags in corporate IR recognition behind the major hardware vendors (HP, IBM, Cisco, Dell and Fujitsu), global telcos (Verizon, Telefonica, BT, T-Systems), major system integrators (Cap Gemini, CGI, Atos, TCS) and a security vendor (Symantec).

One obvious step is to build more alliances with leading service providers (telcos and SIs) that have strong corporate ties and are willing to launch 'FireEye Inside' security services.

Another option is to put more emphasis on its FireEye-as-a-Service offering, which allows FireEye to sell services to the mid-market corporate segment that predominates in Europe.

Then there are the FireEye product names that are completely anonymous: ETP, NX, HX, FaaS, TAP - neither exciting nor meaningful - unless you are already on the inside. FireEye needs to emerge from its own shell.

But of course asking a ninja to step into the glare of wider public recognition requires careful consideration.

A simpler online life: trusted use of your social identity

Bob Tarzey | No Comments
| More
We are being increasingly asked to use our established social identities as a source of trust for communicating with a range of online services. Is this a good idea, is it safe and, if we go ahead, which ones should we choose to use? This needs consideration from both the consumer and the supplier side.

It is a good idea for consumers if it makes our online lives both more convenient and more secure. Having a smaller number of identities with stronger authentication meets those objectives. Using an established social identity can achieve this, if we trust it. To some extent this will depend on what service we want to access; for example, you may find it more acceptable to use PayPal, which you already trust with financial information, to identify yourself to an online retailer, than Twitter.

A consideration should be whether the social source of identity in question has some form of strong authentication. Google, Yahoo, Microsoft, Twitter, PayPal, LinkedIn and Facebook, for example, all do, enabling you to restrict the use of your identity relating to their service to certain pre-authorised devices. This is usually enabled using one-time keys delivered by SMS (text message), a separate, independent channel of communication. The trouble is that this is usually an optional layer of security which many do not switch on, perhaps due to perceived inconvenience. If we are going to use certain social identities for more than just accessing the social network in question, we should get used to this concept.
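
For illustration, a minimal sketch of the SMS one-time-key pattern might look like the Python below; the send_sms function is a stand-in for whatever gateway a provider actually uses, and the six-digit code and five-minute expiry are assumptions.

    # Minimal sketch of SMS-delivered one-time codes; send_sms() is a placeholder
    # for a real SMS gateway call.
    import secrets
    import time

    _pending = {}  # user_id -> (code, expiry timestamp)

    def send_sms(phone_number: str, message: str) -> None:
        print(f"[SMS to {phone_number}] {message}")  # stand-in for a real gateway

    def issue_code(user_id: str, phone_number: str, ttl_seconds: int = 300) -> None:
        code = f"{secrets.randbelow(1_000_000):06d}"          # 6-digit random code
        _pending[user_id] = (code, time.time() + ttl_seconds)
        send_sms(phone_number, f"Your login code is {code}")

    def verify_code(user_id: str, submitted: str) -> bool:
        code, expiry = _pending.pop(user_id, (None, 0))
        return code is not None and time.time() < expiry and secrets.compare_digest(code, submitted)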

With regard to online suppliers, Quocirca published a research report in June 2015 (Getting to know you, sponsored by Ping Identity) which included data on the perceived trust of various social networks as a source of identity. First, it should be said that, overall, compared to other sources of identity, social identities are the least trusted. Just 12% considered them highly trustworthy, compared to 23% for identities provided from government databases and 16% for those from telco service providers. Conversely, 53%, 22% and 32% respectively considered these sources untrustworthy. Clearly, those proposing social identity as a source of trust have some way to go to overcome the reticence.

That said, there is huge variation in the level of trust placed in the different potential providers of social identity, and it varies depending on whether the organisation involved is consumer-facing or dealing mainly with business users (non-consumer-facing). For all, the top three most trusted were clear: Microsoft, PayPal and Amazon. For Microsoft this can perhaps be put down to long-standing familiarity and the fact that it is already the most highly trusted source of identities in business through Active Directory. For PayPal and Amazon it will be because they already deal with consumers' money and have established the trust of business over time to do so.

Other social identities are less trusted by businesses. Consumer-facing organisations find Facebook more acceptable than their non-consumer-facing counterparts; with LinkedIn it is the other way around. Google is equally trusted by both, Yahoo less so, and Twitter is trusted by few.

To make use of social identities, businesses need to do two things. First, they need to ensure their customers and prospects have a degree of choice. In most cases it will not make business sense to dictate that all users must have a Facebook account or be on Twitter before they can transact with you. The good news is that vendors of social identity broking products sort all this out for you, and even offer local account creation if a consumer does not want to use social login. The three best known are probably Janrain, Gigya and LoginRadius. The consumer chooses the identity they want to use, the broker authenticates it and passes credentials on to the provider of the services to be accessed. Janrain told Quocirca that its Social Login product inherits any strong authentication that users have put in place.
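
The underlying plumbing is typically a redirect-based flow such as OAuth 2.0. The hedged Python sketch below, using the requests-oauthlib library, shows the shape of that flow from the relying party's side; the provider URLs, client credentials and scopes are placeholders, not any particular broker's API.

    # Sketch of the redirect-based flow a social login typically uses (OAuth 2.0
    # authorization code); endpoints and credentials here are placeholders.
    from requests_oauthlib import OAuth2Session

    CLIENT_ID = "your-app-id"                                   # placeholder
    CLIENT_SECRET = "your-app-secret"                           # placeholder
    AUTHORIZE_URL = "https://social.example/oauth/authorize"    # hypothetical provider
    TOKEN_URL = "https://social.example/oauth/token"
    REDIRECT_URI = "https://yourservice.example/callback"

    # Step 1: send the consumer to the identity provider they chose
    oauth = OAuth2Session(CLIENT_ID, redirect_uri=REDIRECT_URI, scope=["profile", "email"])
    authorization_url, state = oauth.authorization_url(AUTHORIZE_URL)
    print("Redirect the user to:", authorization_url)

    # Step 2: after the provider redirects back, exchange the code for a token
    # (callback_url is the full URL the browser returned to, including ?code=...&state=...)
    def complete_login(callback_url: str) -> dict:
        token = oauth.fetch_token(
            TOKEN_URL,
            client_secret=CLIENT_SECRET,
            authorization_response=callback_url,
        )
        return token  # the relying party now has proof of the social identity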

The provider must then decide what resources are to be made available. For example, a standard customer dealing with a travel agent may need access to separate systems for airline and hotel booking; an executive customer may also get access to a concierge service. This can all be decided by a single-sign-on system (SSO), an area where Quocirca's research shows many are now investing in new capabilities.

In particular they are turning to on-demand services (software-as-a-service/SaaS). Consumer-facing organisations are especially likely to be doing so. Providers include Ping Identity (the sponsor of Quocirca's research), Intermedia, Okta, OneLogin and Symplified. IBM, CA, Symantec and Dell all now have SaaS-based SSO systems as well.

For many, the broader success of a re-vamped identity and access management (IAM) strategy will be the ability to manage these social identities alongside other sources of identity, for example those of employees and partners. There will be some common resources, and therefore policies that cover different groups, and the need for common governance and compliance reporting across all; that requires federated identity management. Many of the leading identity vendors now support federated identity management, and some, such as Radiant Logic, specialise in it.

Furthermore, both on-premise and cloud-based IAM requirements may need linking. Many vendors offer on-premise and cloud-based product pairs; for example Ping Identity's PingFederate and PingOne, IBM's Security Access Manager and its Cloud Identity Service, CA Identity Manager and CA Identity Manager-SaaS, and SailPoint's IdentityIQ and IdentityNow.

As the range of end-users that organisations need to deal with continues to expand, so do the options for sourcing and authenticating their identities, and the products available to enable and manage them.

Quocirca's report, Getting to know you, is free to download HERE


Digital Disruption: Future Opportunities for the Print Industry

Louella Fernandes | No Comments
| More

Printer vendors are having to make major changes to their business models to sustain their leadership and relevance. Long term survival will depend on their ability to adapt to market disruption through both innovation and building relationships with complementary product or technology providers.

The print industry, like many industries, is on the brink of significant change. Market disruption, characterised by intense competition, more demanding customers and a constantly shifting technological landscape, is threatening the legacy hardware-driven business. As the print industry struggles with declining print volumes, hardware commoditisation, lower margins and sustaining growth, vendors are increasingly re-examining the structure of their businesses and looking for ways to deliver better financial performance.

As such, the industry is poised for a wave of acquisition and restructuring as vendors look to adapt to new market demands and shed assets that no longer meet strategic needs. Lexmark and Xerox are the latest to declare that they are exploring strategic options. While hardware companies have typically relied on earnings growth to deliver shareholder value, shrinking legacy hardware markets have seen revenue falter, leading to acquisitions in the software and services space.

Lexmark's bid to expand its enterprise software presence began with its acquisition of Perceptive Software in 2010. It has since made 13 software related acquisitions, the most recent being Kofax for $1Bn. Meanwhile Xerox acquired ACS in 2010 to build its business process outsourcing service capabilities. This has paid off for Xerox, with services now accounting for 57% of its revenue. Speculation is now rife as to whether Lexmark's hardware business will be acquired, or if and when Xerox will split into two separate technology and services businesses - in a similar way to HP's split into HP Inc. and Hewlett Packard Enterprise.

Whatever the outcome, the market is undoubtedly set for consolidation.  All vendors are navigating the same path and trying to understand where the new markets lie - the cloud, mobile, big data and the Internet of Things. While some vendors, such as HP and Ricoh, are working to commercialise their 3D printing technology, this is for now a relatively nascent market.

The shifting business landscape may be daunting, but there are some key opportunities for print manufacturers to maintain or even enhance their competitive positions:

  • Adapting to the "as-a-service economy". The consumer preference for services over products and subscriptions over purchases is permeating into the business market. This is driven by increasing customer demand for the flexibility to take advantage of new technologies. With an as-a-service model, customers are not burdened by significant upgrade costs and can more accurately estimate the ongoing cost of access to technology. Managed Print Services (MPS) is already an established service model in the market, offering a lucrative recurring services revenue stream along with increased customer retention long after the printer hardware sale. While the MPS market is relatively mature in the enterprise space, there are further opportunities to tap into the largely under-penetrated SMB market. For the channel, digital services around printer device diagnostics and predictive/preventative maintenance have significant untapped potential. MPS vendors should drive further innovation in their engagements around cloud delivery, security and mobility. These are key enablers not only for the as-a-service economy but also for digital transformation.
  • Driving the digital transformation journey. Despite talk of its demise, paper remains a key element of the connected and collaborative office workplace and still plays a critical role in the business processes of many organisations. However, paper bottlenecks can hinder business productivity and efficiency. Print vendors are uniquely positioned to connect the paper and digital worlds and are developing stronger expertise in workflow solutions and services. In many cases, leveraging investments in Smart MFPs, which have evolved to become sophisticated document processing platforms, provides vendors with an opportunity to maximise the value of their hardware offerings. Vendors need to change legacy perceptions of their brand and be accepted as a trusted partner in the enterprise digitisation journey. Business process optimisation and workflow capabilities will become a key point of differentiation for vendors in the industry, requiring a balanced hardware, software and service portfolio.
  • Exploiting the Internet of Things (IoT). All printers are things, and the connected Smart MFP is part of the IoT landscape. Vendors can exploit the enormous amount of data generated to monitor actual customer product and service usage. This data enables manufacturers to deliver better service performance through predictive data analytics (think proactive service and supplies replenishment - a simple sketch of this follows the list below), and by collecting information about customer usage of products or services, vendors can improve product design and accelerate innovation. Developing strategic partnerships with open technology vendors can also pave the way for seamless integration of printers/MFPs with mobile devices and drive the development of a broader mobile solutions and services ecosystem.
  • Expanding high-value print services. Notably, this year some online brands, such as Net-a-Porter and Airbnb, have expanded into print. In fact, print launches among independent publishers are at a 10-year high. Print's tangibility and durability, its credibility and trust, can set it apart from the noisy, cluttered online landscape. Research has shown that readers are more likely to retain information on printed material, leading to higher engagement levels. Many of the traditional print vendors can leverage their own or third-party hardware (including print, visual and display signage technology), services and tools to develop cross-media channel communications. Partnerships and alliances with technology vendors in this space will enable print vendors to participate in both the online and offline customer communications space.
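
As a simple illustration of the telemetry-driven replenishment mentioned above, the Python sketch below flags devices whose toner is likely to run out within a supplier's lead time; the field names, coverage figure and thresholds are assumptions for illustration only.

    # Illustrative only: flag devices for proactive supplies replenishment from
    # simple telemetry readings; field names and thresholds are assumptions.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class PrinterReading:
        device_id: str
        toner_pct: float          # remaining toner, 0-100
        pages_per_day: float      # recent average usage

    def needs_replenishment(r: PrinterReading, pages_per_pct: float = 50.0, lead_days: int = 7) -> bool:
        """Order toner if the device is likely to run out within the supplier's lead time."""
        pages_left = r.toner_pct * pages_per_pct
        days_left = pages_left / max(r.pages_per_day, 1.0)
        return days_left <= lead_days

    fleet: List[PrinterReading] = [
        PrinterReading("MFP-0112", toner_pct=6.0, pages_per_day=220.0),
        PrinterReading("MFP-0347", toner_pct=58.0, pages_per_day=40.0),
    ]
    to_order = [r.device_id for r in fleet if needs_replenishment(r)]
    print("Ship toner to:", to_order)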

The print industry cannot afford to rest on its laurels and must be mindful of the speed and dramatic transformation experienced in other industries. Consider how Salesforce, Amazon Web Services and even Uber have rewritten the rules of their markets. Is there potential for a similar disruptive force in the largely closed and proprietary print industry? Disruption may not come from traditional competitors but from those outside the industry. To adapt and thrive the industry must become more open, expand partnerships outside the industry and continuously innovate. This means creating new products and/or channels and engaging customers, partners and employees in new ways. Ultimately the question remains, is the print industry ready to disrupt itself?

Read more opinion articles on the print industry at www.louellafernandes.com







TARDIS analysis - X-Determinate (and Y, Z and T)

Clive Longbottom | No Comments
| More

Let me provide you with some data.  100111010001.

There - that's useful, isn't it?  OK - it's not; but that's the problem with data.  In itself, it has no value - it only starts to deliver value when it is analysed and contextualised.

There is the oft-used phrase of 'drowning in data', and, to paraphrase Coleridge, for many organisations it is now a case of 'data, data everywhere, nor any stop to think'.  All too often data is collected, but is then not used due to a perceived lack of capability in dealing with it in a satisfactory manner. 

There is a need to deal with the 'V's of big data analysis - you may be dealing with high Volumes of a large Variety of data types, at high Velocity - with the Veracity and Value of the data being suspect.

The question then is: how does your organisation extract the value from that data, enabling you to move up the strategic insight pyramid - from a sea of data, through a pool of information, to a puddle of knowledge, and on to the drop of wisdom that creates the strategic insight that adds to the organisation's bottom line?

Effective analysis of that sea of data needs to be centred on knowing when and where an activity occurred.  Having access to this data allows for trend analysis (using the time variable) and also for spatial analysis.  By combining the two, predictive analysis can be brought to bear that can add massive value to an organisation.

Let's consider a retail organisation.  It has collected lots of data about its customers through the use of loyalty scheme cards.  It therefore knows what its customers bought from it - and should also know when these items were bought, and where from.  It should also have enough details about the customers - via their post codes, for example - to be able to position them on a map and assess their socio-economic status as well.

The retailer can therefore look at purchasing cycles of certain foodstuffs - just when do customers start buying strawberries, and do they buy cream at the same time?  This basic analysis can avoid bringing in too many high-price early strawberries before customers want them, and can avoid stocking too many late strawberries when the season falls away - as well as avoiding overstocking on cream.
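
A rough sketch of that kind of analysis, in Python, might simply count strawberry purchases by week and measure how often cream appears in the same basket; the data layout here is assumed purely for illustration.

    # Rough sketch: when do strawberry purchases start, and how often is cream
    # in the same basket? The transaction layout is assumed for illustration.
    from collections import Counter
    from datetime import date

    # (basket_id, purchase_date, item)
    transactions = [
        (1, date(2015, 5, 28), "strawberries"), (1, date(2015, 5, 28), "cream"),
        (2, date(2015, 6, 2), "strawberries"),
        (3, date(2015, 6, 2), "strawberries"), (3, date(2015, 6, 2), "cream"),
    ]

    # Weekly volume of strawberry purchases - shows when the season really starts
    weekly = Counter(d.isocalendar()[1] for _, d, item in transactions if item == "strawberries")
    print("Strawberry purchases by ISO week:", dict(weekly))

    # Attachment rate: how many strawberry baskets also contain cream
    baskets = {}
    for basket_id, _, item in transactions:
        baskets.setdefault(basket_id, set()).add(item)
    with_straw = [b for b in baskets.values() if "strawberries" in b]
    rate = sum("cream" in b for b in with_straw) / len(with_straw)
    print(f"Cream bought alongside strawberries in {rate:.0%} of baskets")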

The retailer can also create heat maps of where its customers live - and can identify where it would make the most sense to build another outlet - or where closing one would have the least impact on customer loyalty. 

The above spatial analysis is only looking at two dimensions (post codes or other map co-ordinates) along with time.  In many other cases, three dimensions are a necessity.

Consider air traffic control.  It has to know to the split second where all the planes under its control are.  However, just having an X and a Y (along with a T) coordinate to show the plane's position at a specific time on a 2-dimensional system is useless - are these two planes at similar X,Y coordinates at the same T going to crash?  Not if they are 1,000 m apart in the Z dimension.
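
The arithmetic is straightforward: include the Z (altitude) dimension in the separation calculation. A minimal Python sketch, with an illustrative threshold rather than any real air traffic control rule, might look like this:

    # Simple sketch: two aircraft at similar X,Y at the same time are only a
    # concern if their full 3D separation (including altitude Z) is too small.
    import math

    def separation_3d(p1, p2):
        """Euclidean distance between two (x, y, z) positions in metres."""
        return math.dist(p1, p2)

    # Same X,Y footprint, but 1,000 m apart vertically
    plane_a = (52_000.0, 18_500.0, 9_000.0)    # x, y, z in metres
    plane_b = (52_050.0, 18_520.0, 10_000.0)

    MIN_SEPARATION_M = 300.0   # illustrative threshold, not a real ATC rule
    d = separation_3d(plane_a, plane_b)
    print(f"Separation: {d:.0f} m ->", "conflict" if d < MIN_SEPARATION_M else "safe")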

Similarly, tracking items or entities across time and three dimensions can help in emergency situations.  Take something as complex as a large oil rig out in the North Sea: plans need to be in place should there be an emergency.  Workers are told where muster points are, where lifeboats and lifebuoys are and so on - but what happens if the emergency prevents people from getting to these resources, or if some people are incapacitated by the emergency?

If employees use wearables with GPS positioning in them, emergency services can more easily identify where people are - and so plan rescue and evacuation more effectively.  With an oil rig being a large three-dimensional space, knowing where every person is exactly within that space is of major importance.  Think of a helicopter as part of the rescue: it only has a defined amount of fuel available, which then defines how long it can stay at the site.  If it has to spend a large part of that time identifying where people are, it is working ineffectively, and people's lives are in danger.  Allow the helicopter aircrew to identify where people are as they fly in, and they can then be far more effective when they do arrive at the site.

So - maybe Dr Who was the first data scientist?  His vehicle - the Time and Relative Dimension In Space (TARDIS) - sums up exactly what is required to analyse data in a way to gain strategic insights.

And with the number of times The Doctor has saved the earth - if it's good enough for him, then it must be good enough for you - surely?

Quocirca has written a report on the importance of 'Dealing with data dimensions', sponsored by Esri UK.  The report can be downloaded for free here.


Text messaging 4 businesses - the rise of the machines

Rob Bamforth | No Comments
| More

Now that so many people have smartphones, mobile network data plans have become more generous and instant messaging has sprouted everywhere - from being embedded in social networks to tools like Skype, WeChat, WhatsApp and Viber - what does the future hold for SMS?


Blossoming since the first festive text greeting in December 1992, this twenty-something-year-old technology is starting to feel its age. Sure, it has been hugely popular, with massive peaks on special days like New Year and Valentine's Day, but usage declined in 2013 for the first time in two decades.


It has been clear for many years with the growth of an open and mobile internet that person to person (P2P) SMS would start to decline eventually, both in volume and revenue, and the industry set about picking up the slack with the use of automated or application generated messages sent to people (A2P).


This has not been plain sailing, and five years ago it looked particularly bad as many exploited the opportunity of sending bulk SMS, often coming in through 'grey' (not fully sanctioned) routes at very low cost, as a way of delivering mobile SPAM messages. This was not a good experience for users and did not really drive significant benefit for the legitimate businesses sending out messages. It might not have had the volume impact on operator networks that email SPAM does on internet service provider networks, but the negative effect on subscribers meant that eventually operators needed to act.


While SMS spam has not entirely disappeared, it is now much diminished as mobile operators have taken much more control of messages on their networks with traffic inspection and filtering. Rogue or grey routes into mobile networks can now be spotted and stopped within a matter of hours, not weeks or months as might have been the case in the past.


This has all been to the overall benefit of the A2P sector, as it has focussed attention on getting value rather than just volume from A2P messaging. Recent studies predict steady, if not stellar, growth for A2P SMS of just over 4% compound annual growth to the end of the decade. Some of the volume will continue to be the legacy of mobile marketing, with further use of promotional messaging, polls and campaigns that drive awareness in a non-SPAM-like way, but where are the main application growth drivers?


There are several strong use cases, but they need to be treated separately as each has distinct characteristics - not all messages are the same.


Many uses are in the enterprise, where the key, in a world of information and email overload, is to get critical messages received and understood. A text message generally grabs people's attention more readily than an email, so it becomes a great way to notify changes in time-dependent aspects of business processes. These could be alerts when things are going wrong - e.g. stock running out, or the temperature in a fridge rising above a threshold - or might be timely information or notifications of things of interest, such as a forthcoming delivery.
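
A hedged sketch of such an alert in Python might look like the following; the gateway URL, payload format and phone number are hypothetical placeholders rather than any particular A2P provider's API.

    # Sketch of an A2P alert: raise an SMS when a monitored value crosses a
    # threshold; the gateway URL and payload format are hypothetical.
    import requests

    SMS_GATEWAY_URL = "https://sms-gateway.example/api/send"   # hypothetical endpoint
    API_KEY = "replace-me"                                     # placeholder credential

    def send_alert(msisdn: str, text: str) -> None:
        requests.post(
            SMS_GATEWAY_URL,
            json={"to": msisdn, "message": text},
            headers={"Authorization": f"Bearer {API_KEY}"},
            timeout=10,
        )

    def check_fridge(temp_c: float, threshold_c: float = 8.0) -> None:
        if temp_c > threshold_c:
            send_alert("+447700900123", f"ALERT: fridge at {temp_c:.1f}C (limit {threshold_c}C)")

    check_fridge(9.4)   # would trigger one concise, attention-grabbing text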


For some applications, the lack of need for cellular data (3G/4G) means greater coverage potential, as it extends deployments to 'old' 2G networks, where 'things' can run on minimal energy and modems only power up when they need to send a message. For example, remote street furniture and signage could send text messages when lights fail - too simple to require a microcontroller and internet connection, but a message would be useful to set in train a maintenance visit.


A2P SMS use cases do not have to be one-way. Some logistics companies send simple SMS requests to customers about suitable delivery options and have them confirm or reschedule with a simple response. This concept also works well in areas such as healthcare, where scheduling and shifting appointments can be complex and costly when visits are missed. Other organisations are using SMS responses to automated messages to get instant feedback on service quality or on offers or advertising. These would all be possible to do with email, but the crispness and brevity of SMS lowers the impact on the person responding and allows them to do it there and then with ease.


SMS has also taken a much greater role in supporting security. Its use as an out-of-band channel to deliver one-time passcodes to users wherever they are provides a second factor of authentication when logging in to use enterprise resources or critical external services such as mobile banking. Rather than having to remember to carry special physical tokens, the mobile phone will generally be carried everywhere and has its unique SIM, independent of how a device might be connected to a network via an IP address.


All of these use cases rely on a message getting through, and in a timely manner, wherever the user is located - SMS makes that easy. A2P applications no longer look risky or problematic in the way that bulk SMS once often appeared, and mobile network operators are now seeing A2P as a welcome revenue potential - especially important as SMS has historically formed a large element of operator revenues.


With all the current hype about the IoT, mobility and interconnected everything, something important risks being missed. It is not simply about the connection, but the purpose - that is the message. It is all about the intent to meet a business imperative, so it is vital that the message gets through to the right recipient at the right time and in a way that they can deal with it.


In this regard, with operator support and the right business use cases, SMS still delivers.


Digital customer experience - where do mobile apps fit?

Rob Bamforth | No Comments
| More

Many organisations are trying to get closer to their customers and see mobile, in particular mobile apps, as a way to engage - but are they always taking the best approach?


The impetus is often somewhat similar to that of putting a business online: everyone else is doing it, and if you are not then you run the risk of missing out or being left behind. However, just doing that quickly becomes insufficient. When websites first appeared they were static brochures; they then added interaction and commerce, although as many organisations have discovered even this is not enough, and to retain a worthwhile digital presence websites require a whole load more attention.


The same is true when adding a mobile connection to customers. A mobile-enabled website is at least a good first step, but to really get the interaction going this will be a bit limited - hence the interest in mobile apps.


However, mobile apps require a little more pulling power than websites; not only do they have to be downloaded and installed, they need to be retained. This is getting harder to achieve: recent statistics on app retention from analytics and marketing platform provider Localytics indicated that roughly a quarter of apps are abandoned after a single use, and only around a third make it to being used more than 10 times.


To avoid app abandonment, mobile apps need to be a bit fitter to survive:


  • fit to the moment - a real world purpose to get you to open the app while mobile
  • fit with the flow - apps must be really easy to use
  • fit to the everyday - apps need to be habit forming


For many organisations, the main route to start down is m-commerce, but mobile users are not always in the mood to buy; often they want to browse, be entertained or get support. Apps will have more staying power if they nudge and engage rather than blatantly try to sell.


So, many organisations have responded by wrapping mobile application usage around loyalty, and a more personal connection through social media. This can again circle quickly back to just trying to increase orders - nothing inherently wrong with that, of course - but part of building loyalty is widening the relationship beyond a simple buy/sell.


One way this could be done is to engage with customers as if they were 'part of the team'. The goal here is not necessarily to grow sales (although that might still happen), but to reduce running costs and increase customer satisfaction.


This is akin to the incentive programmes offered by media companies and others, such as the recent 'GoPro Awards' campaign from the action camera maker, which is planning to invest around $5m per year rewarding customers, who get recognition and prizes for submitting creative content. GoPro could have simply commissioned professional creative companies, but by incentivising customers it is creating an extra reason for them to keep using its products and rewarding those who produce something special.


Customer feedback is important in any sector, even when it appears to be mostly negative, as, if dealt with honestly, it can generate more interest and ultimately loyalty. It could be useful, for example, in public transport, where travellers are quick to jump on their favourite social platform when there are problems; combining and analysing this information to create a bigger picture would actually be useful for the travel operator and perhaps fellow passengers.


Indeed, being pro-active and asking passengers for feedback - perhaps along the lines of 'Tell us about something that made you smile during your trip today' - could generate even more passenger engagement.  While it may attract many sarcastic responses (which in themselves, when used correctly, can show the company as being more human), it could also provide positive responses that can be propagated through the company's social media outlets to give good news while recognising individual customers.


Netatmo, the maker of a stylish online connected weather station, does something similar by pulling together all of its subscribing customers' current readings and presenting them on a global map, which can be accessed along with personal weather station data via a mobile app. By using the distributed remote feeds from all of its subscribers, it can create a big picture that in turn feeds greater weather data value back to its customers, rather than just being of value to the organisation itself. The feeds of the many outweigh the feeds of the one?


Another example of where this might work is in the hospitality industry, where increasingly the service relationship - good or bad - is more out in the open, due mainly to the explosion of social review sites and services.  Rather than waiting for, and responding to, criticism of services on public review pages, organisations could adopt a more pro-active approach.


This would be a good fit for a mobile app, especially in the increasingly crowded field of 'loyalty'. Rather than offering a cheap discount for harvesting user preferences, encouraging current customers to do something that will reward them and improve services for future users could stimulate more regular usage.


In the hotel sector this might take the form of 'user generated maintenance'. Instead of critical reviews plastered around the social sphere, establishments could offer reward points for 'things we may have missed' or suggestions for improvements. A simple app on a mobile phone can record, locate and document the issue, submit it to the hotel and gain something of value for the user, while at the same time helping the service provider fix a problem - all without it going public!


M-commerce might seem an appealing route to help directly sell goods and services, but the first thing all mobile apps need to do is sell themselves, otherwise all that development effort runs the risk of being abandoned very quickly. Mobile apps need to be very fit for purpose and offering something of value to customers should be a big part of any mobile strategy.


Mobile - shifting from 'enabled' to 'optimised'

Rob Bamforth | No Comments
| More

Workers have used various tools to access IT on the move for some time, but much of it was simply shifting desktop applications onto laptops, and it took the arrival and massive success of smart phones and tablets to really get the mobile application sector moving. This in turn has created mobile app marketplaces and an opportunity for developers.


Way back in 2008, Quocirca conducted some research amongst mobile app developers for a major mobile platform provider. These developers wanted stable platforms, developer support and market opportunities in which to sell their offerings, and largely over the intervening years the commercial elements of these needs have been met, certainly in the Android and Apple environments.


Many of the technical challenges remain. It is still difficult to keep up with new versions and derivatives of platforms, plus specific mobile app development skills can be hard to find, but the huge take up of devices and acceptance of use throughout the home and workplace means that there is an expectant user community and commercial opportunities.


But one bar has risen - user expectations - and it is no longer sufficient to 'mobilise' an application to work on mobile devices. The whole mobile user experience needs to be optimised, and this means successfully tackling two elements: the immediate user interface on the device and the end-to-end experience.


At one time it was possible to separate out those aspects that might impinge on consumers from those that affected workers, but with so much crossover from the consumerisation of technology, the boundaries are blurred. Even applications for the workplace need to be written to appeal to the consumer appetites of individuals as well as meet the needs of businesses.


The user experience is significant as it impacts directly on the ability of an individual to do their job as effectively and efficiently as possible. Non-intuitive or even 'different' usage models in user interfaces force people to spend more time trying to learn the style of the application rather than getting the most effective use out of its substance. Some will just give up.


In the mobile context this is exacerbated by the immediate environment surrounding the mobile user: distractions, limited input/output, less comfort (standing or walking) and no time to wait for slow networks or applications. Presenting mobile users with too much or difficult-to-navigate information, expecting complex responses and not pre-filling with known mobile context (e.g. location data) is not going to encourage effective, frequent use.


Mobile users are impatient, often with good reason, and user experiences that mirror, mobilise or 'mobile-enable' the traditional desktop, whilst delivering consistency across different devices, often struggle to optimise for mobile needs. The recent market wavering between native and web interfaces for mobile indicates the dilemma faced by developers: they need to balance the need for consistency and minimal porting effort against delivering a more mobile-optimised user experience.


Current trends seem to be again moving towards favouring native mobile apps, indicating a rise in the desire for optimisation, but this alone is not sufficient for delivering the complete user experience that users increasingly expect, as 'mobile' will also often mean 'remote'.


Taking more control of the user interaction on the device is one thing, but the end-to-end experience is more complex, with the trend towards delivering services to mobile devices often going hand-in-hand with concentrating service delivery into some mix of remote cloud services. These may be public, private or hybrid, and accessed over a mix of service provider networks - cellular or Wi-Fi - of varying coverage, capacity and quality, so controlling and delivering an end-to-end experience requires much more thought and attention. Network and remote application access needs to be streamlined and optimised to ensure the mobile working experience is fully productive. This will mean gaining a greater understanding of the challenges of mobile networks, which will often impact on application design.


Application developers and companies targeting mobile working need to start from the outset with a mobile strategy oriented around users, not just devices or even applications. They need to design for the constraints and advantages of the mobile environment to optimise for mobile working not just in the device - no dependence on mice, keyboards or seats - but also in the challenges faced in accessing the network and services that the device (and therefore user) relies upon.


Smart phones need smart apps and smart networks for mobile employees to be smart workers.


What can we read into Dell's customer event?

Clive Longbottom | No Comments
| More

Michael Dell, founder & CEO, Dell Inc. (Photo credit: Wikipedia)

Dell's recent DellWorld event in Austin, Texas was impeccably timed such that little of any strategic nature could be discussed.  Coming close on the heels of the 'definitive agreement' for Dell to purchase EMC, all that Michael Dell could really state were the obvious things around matching portfolios, complementary channel routes, greater reach into high end markets and so on.  What he couldn't discuss - not only because Dell cannot do so due to legal constraints, but also as it is unlikely that even Michael or Joe (Tucci - EMC CEO and Chairman) yet know - was the strategy around products and rationalisation of not only these products, but also in the direction the Federation of companies will have to take.

However, while the fundamentals of the deal are hammered out, Dell has to continue moving forward.  Therefore, there were several announcements at the event that were newsworthy and will likely survive through the acquisition.

One of these was the Dell/Microsoft Cloud Platform System Standard box - a Dell-built box that contains a Microsoft OS stack along with several Azure services, enabling organisations to more easily create a hybrid private/public cloud platform for Microsoft workloads. This box can be either purchased (including via flexible terms through its Cloud Flex Play offering), or rented direct from Dell for $9,000 per month.

Next was the new Dell Edge Gateway 5000, following on from the announcement of a far more basic Gateway at last year's event.  This new Gateway takes the same approach as the old one as an internet of things (IoT) aggregator, but has been considerably hardened and improved in what it can provide.  It now supports pretty much any device's output as an input, acting as an inline extract, transform and load (ETL) system to ensure that different data schemas can be simplified.  The Gateway can then analyse the data collected and either disregard it (as being useless), store it (as being possibly interesting), or raise an alarm (as it identifies an issue based on the data).

From Quocirca's point of view, this approach is fundamental to a large IoT deployment - data volumes have to be controlled, otherwise the core network will collapse under the volume of small packets being sent to a central data lake by masses of uncontrolled devices.  The Gateway 5000 also gets around another problem, obviating the need for customers to replace all their existing devices, as it can take data in from proprietary, and even analogue, devices.
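
A toy Python sketch of that discard/store/alert logic is shown below; the schema mappings and thresholds are assumptions for illustration and not how the Gateway 5000 is actually implemented.

    # Illustrative edge logic in the spirit of the Gateway's discard/store/alert
    # behaviour; the schema normalisation and thresholds are assumptions.
    from typing import Optional

    def normalise(raw: dict) -> Optional[dict]:
        """Map differing device schemas onto one shape: {'sensor': str, 'value': float}."""
        if "temp_c" in raw:
            return {"sensor": raw.get("id", "unknown"), "value": float(raw["temp_c"])}
        if "temperature" in raw:
            return {"sensor": raw.get("device", "unknown"), "value": float(raw["temperature"])}
        return None   # unrecognised schema

    def decide(reading: dict, low: float = 2.0, high: float = 8.0) -> str:
        """Discard unremarkable readings, store borderline ones, alert on breaches."""
        v = reading["value"]
        if v < low or v > high:
            return "alert"       # send upstream immediately
        if abs(v - (low + high) / 2) > 2.0:
            return "store"       # possibly interesting, keep locally
        return "discard"         # routine; never hits the core network

    for raw in [{"id": "fridge-7", "temp_c": 9.1}, {"device": "fridge-2", "temperature": 5.0}]:
        r = normalise(raw)
        if r:
            print(r["sensor"], "->", decide(r))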

Dell is also extending its bespoke services to service providers, via a new group called Dell Datacentre Scalable Services (Dell DSS).  The high-capacity service provider markets (including telecoms and web-scale customers) have different needs to a general commercial customer - they are not bothered about the badging of products, but require bespoke designs in many cases.  However, they are also extremely cost sensitive.  Dell believes that it can still make enough profit on deals in this space, despite the number of bespoke or semi-bespoke designs it has to come up with - it expects that many of the designs will still sell in the tens of thousands of units to a single customer.

Dell also gave a rousing and staunch defence of its PC markets.  It believes (rightly so, in Quocirca's view) that the PC is not going away any time soon, and also believes (slightly more questionably) that there is still plenty of room for innovation in the space.  While this may be true to an extent, the real innovation will be around ensuring that new access devices are flexible enough for on-device applications as well as web-based apps and services, and that there is a great deal of consistency between on-desk and mobile devices.  While Dell has a decent Windows tablet portfolio, it pulled out of smaller device formats some time back - and this may hurt it in the longer term as others, particularly Lenovo, come through with more end-to-end systems.

However, what everyone wanted to talk about was the EMC deal.  All that can be said here is Quocirca's view on it, based on what knowledge is already in the public domain.  We have previously written about the problems for any company that would acquire EMC - and we still stand by much of what was said at the time.

Tucci created a federation of companies with built-in 'poison pills' that count against a smooth acquisition.  Joint ventures such as VCE and Pivotal will make it difficult for Dell to unpick them and embrace them fully into the Dell company itself.  Taking EMC private while leaving VMware public does give some room for manoeuvre - EMC has an over-80% holding in VMware, and Michael Dell has already stated that he would look to offload some of that, providing at least some payback to the investors backing the buy-out of EMC.

Whereas it would make sense to get rid of the idea of a Federation completely, so removing the confusion in many prospects' and customers' minds as to how it all hangs together, Michael Dell did hint at the possibility of retaining such a structure - something that Quocirca would advise strongly against.  Indeed, even as the deal was being announced, Pivotal, EMC and VMware created a new member of the Federation by spinning out a previous acquisition, Virtustream, to carry much of the overall cloud strategy for the rest of the Federation.  While Dell has eschewed a public cloud system hosted by itself, it has become a cloud aggregator, using systems such as its own acquisition, Boomi, to provide integration between partner and other public clouds.  Whether the idea of creating Virtustream is to enable a quick offload of EMC's own cloud pretensions when the deal closes (mooted for the middle of next year) remains to be seen.

What is clear is that this is a very brave move by Michael Dell.  As Dell itself is only just coming out from a period of quiet after taking itself private, to suddenly take on the largest (by far) tech acquisition, one that will enforce true strategic silence for almost a year, will stretch customers' patience to the limit.  Dell must be thankful that, with IBM still reinventing itself around cloud services (via SoftLayer, Bluemix and Watson as a Service) and HP in the throes of pulling itself apart, it should suffer little loss of customers to either of these companies.

However, with storage companies such as Pure Storage, Violin and Kaminario waiting in the wings, along with hyperconverged vendors such as Nutanix and SimpliVity, it could be here that Dell should be focusing.

Dell is in for an interesting nine months - what comes out the other end should be worth watching.  For Michael Dell's sake, Quocirca hopes that it makes sense to customers and prospects, and that it doesn't create a company that is too large and slow-moving to respond to the incoming, new-kids-on-the-block competition.

