Intel - the end of speeds and feeds?

Clive Longbottom

Every year, Intel holds its tech fest for developers in San Francisco - the Intel Developer Forum, or IDF.  For as long as I can remember, one of the main highlights has been the announcement of the latest 'tick' or 'tock' of its processor evolution, along with a riveting demonstration of how fast the new processor is, complete with a speed dial showing its clock speed (yawn).

This year, things were different.  Sure, desktops and laptops were still talked about, but in a different way.  The main discussions were around how the world is changing - and how Intel also has to change to remain relevant in the market.  So, CEO Brian Krzanich took to the stage with a range of new demonstrations and messages around the internet of things (IoT), security, mobility and other areas.

As an example, the keynote kicked off with two large inflated beach balls containing positional sensors being punched around the hall by the audience.  These were shown in real time on the projection screen on the stage as two comets that had to hit an image of an asteroid - a simple but effective demonstration of small sensors being used in a 3D environment.  This was made even clearer when Intel placed two of its latest devices, based on its 'Curie' micro system on chip (SoC), on a jump bike.  The bike's movements could be replicated against a computer model, providing the rider with performance stats as well as visual feedback - and allowing viewers to see data on any jump trick performed.

A larger SoC design can be seen in the Intel Edison - more a development platform for techies, but again showing how Intel is providing a series of platforms rather than just bare-bones processors and chipsets.  Intel's 'next unit of computing' (NUC) takes things one step further with a series of full, very small form factor computer systems.

As well as being a general small form factor device, the NUC forms the basis for further broadening Intel's potential reach.

The first of these offerings built on the NUC uses Intel's Unite software - a communication and collaboration hub that makes meeting rooms easier and more effective to use.  Currently available from HP (with Dell and Lenovo amongst others expected to offer systems soon), a Unite system allows different people to connect to projectors via WiDi (Intel's wireless screen sharing protocol), as well as enabling screen sharing and collaboration amongst a distributed group.  Unite is also complementary to existing collaboration systems, so it can integrate into Microsoft Lync or Polycom audio and videoconferencing systems.  Intel is committed to further upgrades to Unite, creating an intelligent hub for a smart conference room, with controls for lighting, heating and so on.

On the security front, Intel is looking at building security into the chips themselves.  Having recognised that challenge and response systems are far too easy to break, Intel is looking at areas such as biometrics.  However, it has also realised the perils of holding anyone's biometric details in a manner that could be stolen (if a hacker can steal your fingerprint or retinal 'signature', they can reverse-engineer a means of feeding this to systems that hold that signature) - replacing retinas, fingerprints or facial features is not as easy as replacing a stolen password.  Therefore, it will be building into future chips the capability for biometric data to be stored in a non-retrievable manner within the chip itself.  A biometric reader will take a reading from the user and send this to the chip.  The chip will compare it to what it has stored, and will then issue a one-time token that is used to access the network and applications.  Through this means, biometric data is secured, and users may never have to remember a password again.
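
To make that flow concrete, here is a minimal sketch of the sequence described above: the reader passes a reading to the chip, the chip matches it against its stored reference and, on success, issues a short-lived one-time token. The class, the exact-match comparison and the token handling are illustrative assumptions - real match-on-chip schemes use fuzzy biometric matching and hardware-protected storage rather than anything like this.

```python
# Illustrative sketch only: a 'secure element' that never releases the enrolled
# biometric reference, and instead returns a short-lived one-time token on a match.
# Real silicon uses fuzzy template matching and hardware key storage, not hashes.
import hashlib
import hmac
import secrets
import time

class SecureElement:
    def __init__(self):
        self._salt = None
        self._reference = None          # stays inside the chip, never read out
        self._issued_tokens = {}

    def enrol(self, biometric_template: bytes) -> None:
        # Store a salted digest rather than the raw template.
        self._salt = secrets.token_bytes(16)
        self._reference = hashlib.sha256(self._salt + biometric_template).digest()

    def request_token(self, biometric_reading: bytes):
        candidate = hashlib.sha256(self._salt + biometric_reading).digest()
        if not hmac.compare_digest(candidate, self._reference):
            return None                  # no match: no token, nothing leaves the chip
        token = secrets.token_hex(16)
        self._issued_tokens[token] = time.time() + 60   # valid for 60 seconds
        return token

    def validate(self, token: str) -> bool:
        expiry = self._issued_tokens.pop(token, 0)       # one-time use
        return time.time() < expiry

# A relying service checks the token instead of ever seeing the biometric data.
chip = SecureElement()
chip.enrol(b"user-fingerprint-template")
token = chip.request_token(b"user-fingerprint-template")
print("network access granted" if token and chip.validate(token) else "denied")
```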

This then brings us back to the thorny question of desktops and laptops.  Intel still sells a lot of processors for these devices, but in the age of web-based applications served from the cloud, the need for high-powered access devices has been shrinking.  Gone are the days when organisations pretty much had to update their machines every three years to provide the power to deal with bloated desktop applications.  With cloud and virtual desktops, the refresh cycle has been extending - many organisations are now looking at five or even seven year cycles, with a few simply replacing machines when they die.

To drive refresh, Intel needs new messaging for its PC manufacturer partners.  This is where security comes into play.  The security-on-chip approach is not backwards compatible: organisations that wish to become more secure and move away from the username/password paradigm will need to move to the newer processors.

However, this still only prompts a one-off refresh: unless Intel can continue to bring new innovations to the processor itself, it will continue to see the role of desktops and laptops decrease as mobile devices take more market share.

So, Intel also needs to play in the mobility sector.  Here, it talked about its role in 5G - the rolling up of all the previous mobile connectivity standards into a single, flexible platform.  The idea is that systems will be capable of moving from one technology to another for connectivity, and that intelligence will be used to ensure that data uses the best transport for its needs - for example, ensuring that low-latency, high-bandwidth traffic, such as video, goes over 4G, whereas email could go over 2G.  Pulling all of this together will require intelligence in the silicon and software - which is where Intel plays best.
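
As a toy illustration of the kind of policy logic being described, the sketch below picks a transport for a traffic class based on its latency and bandwidth needs. The traffic classes, transports and figures are made up for the example; a real 5G scheduler would be considerably more sophisticated.

```python
# Toy policy: choose the 'best' available transport for a traffic class based on its
# latency and bandwidth needs. Classes, transports and figures are illustrative only.
TRANSPORTS = {
    # name: (typical latency in ms, typical bandwidth in Mbit/s)
    "4G": (50, 20.0),
    "3G": (120, 2.0),
    "2G": (500, 0.1),
}

TRAFFIC_CLASSES = {
    # name: (maximum tolerable latency in ms, minimum bandwidth in Mbit/s)
    "video-call": (100, 2.0),
    "email-sync": (2000, 0.05),
}

def pick_transport(traffic_class, available):
    max_latency, min_bandwidth = TRAFFIC_CLASSES[traffic_class]
    candidates = [
        name for name in available
        if TRANSPORTS[name][0] <= max_latency and TRANSPORTS[name][1] >= min_bandwidth
    ]
    # Prefer the 'cheapest' adequate transport: here, the one with least bandwidth headroom.
    return min(candidates, key=lambda name: TRANSPORTS[name][1], default=None)

print(pick_transport("video-call", ["2G", "3G", "4G"]))   # -> 4G
print(pick_transport("email-sync", ["2G", "3G", "4G"]))   # -> 2G
```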

Overall, IDF 2015 was a good event - lots of interesting examples of where Intel is and can play.  The devil is in the detail, though, and Intel will need to compete with not only its standard foes (AMD, ARM and co), but also those it is bringing into greater play with its software and mobile offerings (the likes of Polycom, Infineon, TI, Motorola, etc.).

For Intel, it will all be about how well it can create new messaging and how fast it can get this out to its partners, prospects and customers.  Oh dear - it is all about speeds and feeds after all...

Matching the Internet of Things to the pace of the business

Rob Bamforth

I must be a fan of smart connected things - sitting here with two wrist-worn wearable devices in a house equipped with thirteen wireless thermostats and an environmental (temperature, humidity, CO2) monitoring system. However, even with all this data collection, an Internet of Things (IoT) poster-child application that works out the lifestyles of those in the household and adapts the heating to suit would be a total WOMBAT (waste of money, brains and time).


Why? Systems engineering - frequency response and the feedback loop.


The house's heating 'system' has much more lag time than the connected IT/IoT technology would expect. Thermal mass, trickle under floor heating and ventilation heat recovery systems mean a steady state heating system, not one optimised by high frequency energy trading algorithms. The monitoring is there for infrequent anomaly detection (and re-assurance), not minute by minute variation and endless adjustments.


The same concepts can be applied to business systems. Some are indeed high frequency, with tight feedback loops that can, with little or no damping or shock absorption, be both very flexible and highly volatile. For example, the Eurofighter Typhoon aircraft, with its inherent instability, can only be kept under control by masses of data being collected, analysed and fed back in real time to make pin-point corrections. Another example is the vast connected banking and financial sector, where there is feedback, but with no over-arching central control the systems occasionally either do not respond quickly enough or go into a kind of destructive volatile resonance.


Most business systems are not this highly strung. However, there is still a frequency response - a measure of how outputs respond to inputs - that characterises the dynamics of the 'system', i.e. the business processes. Getting to grips with this is key to understanding the impact of change or what happens when things go wrong. This means processes need to be well understood - measured and benchmarked.


In the 'old days', we might have called these "time and motion" studies: progress chasers with stopwatches and clipboards measuring the minutiae of the activities of those working on a given task. A problem was that workers (often rightly) felt they were being individually blamed for any out-of-the-ordinary variation or inefficiency in the process, when in reality other (unmeasured) things were often at fault. This approach did not necessarily measure the things that mattered, only the things that were easy to measure - a constant failing of many benchmarking systems, even today.


Fast-forward to the 1990s and a similar approach tried to implement improvements through major upheavals under a pragmatic guise - business process re-engineering (BPR). A good idea in principle, especially in bringing a closer relationship between resources such as IT and business processes, but unfortunately many organisations ditched the engineering principles and took a more simplistic route, using BPR as a pretext to reduce staff numbers. BPR became synonymous with 'downsizing'.


Through the IoT there is now an opportunity to pick up on some of the important BPR principles, especially those with respect to measurement, having suitable resources to support the process and monitoring for on-going continuous improvement (or unanticipated failures). With a more holistic approach to monitoring, organisations can properly understand the behaviour and frequency response of a system or process by capturing a large and varied number of measurements in real time, and then be able to analyse all the data and take steps to make improvements.


Which brings us to the feedback loop. The mistake that technologists often make is to assume that because automating part of a process appears to make things a little more efficient, fully automating it must make it completely efficient.


While automating and streamlining can help improve efficiency, they can also introduce risks if the automation is out of step with the behaviour of the system and its frequency response. This leads to money being wasted on systems that cannot respond quickly enough or, alternatively, to destructive (resonant) behaviour in those that respond too fast.
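
A simple simulation makes the point. In the sketch below, a slow, lagging system (think underfloor heating) is driven by a proportional controller: a modest gain settles close to the target, while an over-aggressive gain - automation out of step with the system's frequency response - oscillates and diverges. The model and numbers are illustrative only.

```python
# Toy model: a controller that reacts harder than the system's lag can absorb
# produces growing oscillation (resonance-like behaviour); a gentler one settles.

def simulate(gain, steps=40, target=21.0, lag=0.1):
    """First-order 'slow' system (e.g. underfloor heating) under proportional control."""
    temperature = 15.0
    history = []
    for _ in range(steps):
        error = target - temperature
        heat_input = gain * error                                  # controller reacts to the error
        temperature += lag * (heat_input - (temperature - 15.0))   # slow thermal response
        history.append(round(temperature, 2))
    return history

print("modest gain    :", simulate(gain=2.0)[-5:])   # settles (with a proportional offset)
print("aggressive gain:", simulate(gain=25.0)[-5:])  # oscillates with ever-growing amplitude
```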


It might seem cool and sexy to go after a futuristic strategy of fully automated systems, but the IoT has many practical, tactical benefits simply by holding a digital mirror up to the real world. A good first step that many organisations would benefit from is to use it for benchmarking, analysis and incremental improvement.


IT service continuity costs - not for the faint-hearted?

Clive Longbottom

IT service continuity - an overly ambitious quest that is pretty laughable for all but those with pockets as deep as the high-rolling financial industries?  Is it possible for an organisation to aim for an IT system that is always available, without it costing more than the organisation's revenues?

I believe that we are getting closer - but maybe we're not quite there yet.

To understand what total IT service continuity needs, it is necessary to understand the dependencies involved.

Firstly, there is the hardware - without a hardware layer, nothing else above it can run.  The hardware consists of servers, storage and networking equipment, and may also include specialised appliances such as firewalls.  Then, there is a set of software layers, from hypervisors through operating systems and application servers to applications and functional services themselves.

For total IT service continuity, everything has to be guaranteed to stay running - no matter what happens.  Pretty unlikely, eh?

This is where the business comes in.  Although you are looking at IT continuity, the board has to consider business continuity.  IT is only one part of this - but it is a growing part, as more and more of an organisation's processes are facilitated by IT. The business has to decide what is of primary importance to it - and what isn't so important.

For example, keeping the main retail web site running for a pure eCommerce company is pretty much essential, whereas maintaining an email server may not be quite so important.  For a financial services company, keeping those parts of the IT platform that keep the applications and data to do with customer accounts running will be pretty important, whereas a file server for internal documents may not be.

Now, we have a starting point.  The business has set down its priorities - IT can now see if it is possible to provide full continuity for these services.

If a mission critical application is still running as a physical instance on a single server, you have no chance.  This is a disaster waiting to happen.  The very least that needs doing is moving to a clustered environment to provide resilience if one server goes down.  The same goes for storage - data must be mirrored, or at least run over a genuinely redundant array (preferably based on erasure code redundancy, but at least RAID 5 or 6 - RAID 0 striping alone provides no redundancy at all).  Network paths also need redundancy - so dual network interface cards (NICs) should also be used.

Is this enough?  Not really.  You have put in place a base level of availability that can cope with a single critical item failing - a server, a disk drive or a NIC can fail, and continuity will still be there.  But what about a general electricity failure in the data centre?  Is your uninterruptable power supply (UPS) up to supporting all those mission critical workloads - and is the auxiliary generator up to running such loads for an extended period of time if necessary?  What happens if the UPS or generator fails - are they configured in a redundant manner as well?

Let's go up a step and use virtualisation as the platform rather than a simple physical layer: put in a hypervisor and go virtual.  Do this across all the resources we have - servers, storage and network - and a greater level of availability is there for us.  The failure of any single item should have very little impact on the overall platform - provided that it has been architected correctly.  To get that architecture optimised, it really should be cloud.  Why? Because a true cloud provides flexibility and elasticity of resources - the failure of a physical system on which a virtual workload depends can be rapidly (and, hopefully, automatically) dealt with by applying more resource from a less critical workload.  Support all of this with modular UPSs and generators, and systems availability (and therefore business continuity) is climbing.

Getting better - but still not there.  Why?  Well, an application can crash due to poor coding - memory leaks, a sudden trip down a badly coded path that has never been exercised before, whatever.  Even on a cloud platform, such a crash will leave you with no availability - unless you are using virtual machine (VM) images.  A VM image contains a copy of the working application that can be held on disk or in memory, and so can be spun up rapidly to get back to a working state.

Even better are containers - these can hold more than just the application, or less.  A container can be everything that is required by a service above the hypervisor, or it can be just a function that sits on top of a virtualised IT platform.  Again, these can be brought back up and live very rapidly, working against mirrored data as necessary.

Wonderful.  However, the kids on the back seat are still yelling "are we there yet" - and the answer has to be "no".

What happens if your datacentre is flooded, or there is a fire, an earthquake or some other disaster that takes out the datacentre?  All that hard work carried out to give high availability comes tumbling down - there is zero continuity.

Now we need to start looking at remote mirroring - and this is what has tended to scare off too many organisations in the past.  Let's assume that we have decided that cloud is the way to go, with container-based applications and functions.  We know that the data, being live, cannot be containerised, so that needs to be mirrored on a live, as-synchronous-as-possible basis.  Yes, this has an expense against it - it is down to the business to decide if it can carry that expense, or carry the risk of losing continuity. Bear in mind that redundancy of network connections will also be required.

With mirrored data, it then comes down to whether the business has demanded immediate continuity, or whether a few minutes of downtime is OK.  If immediate, then 'hot' spinning images of the applications and servers will be required, with graceful failover from the stricken site to the remote site.  This is expensive - so it may not be what is actually required.

Storing containers on disk is cheap - they take up no resources other than a little storage.  Spinning them up in a cloud-based environment can be very quick - a matter of a few minutes.  Therefore, if the business is happy with a short break, this is the affordable IT service continuity approach - mirrored live data, with 'cold' containers stored in the same location, and an agreement with the service provider that when a disaster happens, they will spin the containers up and point them at the mirrored data to provide an operating backup site.
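
As a minimal sketch of that 'cold container' recovery step, the code below checks a primary site health endpoint and, if it is down, starts pre-staged containers at the recovery site pointed at the mirrored data. The image names, URL and paths are hypothetical placeholders, not a real provider's API.

```python
# Sketch only: if the primary site stops responding, spin up cold containers at the
# recovery site against the mirrored data volume. Names, URLs and paths are hypothetical.
import subprocess
import urllib.request

PRIMARY_HEALTH_URL = "https://primary.example.com/health"    # hypothetical endpoint
COLD_CONTAINERS = [
    ("orders-app", "registry.example.com/orders:stable"),    # hypothetical images
    ("billing-app", "registry.example.com/billing:stable"),
]
MIRRORED_DATA = "/mnt/mirrored-data"                          # replicated volume at the DR site

def primary_is_up(timeout=5) -> bool:
    try:
        with urllib.request.urlopen(PRIMARY_HEALTH_URL, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def fail_over():
    for name, image in COLD_CONTAINERS:
        # Start each cold container with the mirrored data mounted in.
        subprocess.run(
            ["docker", "run", "-d", "--name", name,
             "-v", f"{MIRRORED_DATA}/{name}:/data", image],
            check=True,
        )

if __name__ == "__main__":
    if not primary_is_up():
        fail_over()
```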

For the majority, this will be ideal - some will still need full systems availability for maximum business continuity.  For us mere mortals, a few minutes of downtime will often be enough - or at least, downtime for most of our systems, with maybe one or two mission critical systems being run as 'hot' services to keep everyone happy.

The growing role of cloud in unified communications

Rob Bamforth

As communications systems evolved from the analogue, 'plain old telephone system' (POTS) into the digital world we were promised 'pretty awesome new stuff' (PANS), and part of that was 'unified communications' (UC). The reality has been somewhat harder to deliver, so why have communications proved so difficult to unify?


Well, what is it that makes UC tick, and how does it really add value to an organisation?  Is it everything being carried over IP (XoIP), shared presence showing who is available, lower networking costs, high definition video, or the presence of the 800-pound gorilla, Microsoft, with its logo next to Lync (or now, with a rather more confused brand image, 'Skype for Business')?


Probably none of the above.


Industry giants like Microsoft, and to a lesser extent Cisco, have each given the emerging UC market a major boost or two in their time, and there is a plethora of players, from digital media-driven UC specialists to purveyors of PBXs with a telecoms legacy, all addressing the market with worthy solutions. Integrators have gamely endeavoured to stick together these many disparate platforms, and the frenzied staccato of messages from social platforms has been embraced too.


Yet still it all feels less than fully 'unified' communications from the individual's perspective, and this is critical, not only because good communication infrastructure is important to any organisation, but because getting real value from communications hinges on the attitudes of the individual. Communication can be described as "raising the level of mutual understanding" and no matter what tools are used, it is people that make it happen.


And where is it that the people are? They are 'mobile' and they have their own preferences.


In the early years of UC the emphasis was on unifying networks - the plumbing - which is why many early adopters were sold on the promise of communication unification but actually bought into free phone calls over IP. Not that there was anything wrong with that in principle, but it really did not get far enough into where the real value was being pitched - getting disparate people to work better together, in true collaboration.


Mobile devices started to be added into the UC mix as additional end-points in UC solutions to redirect calls to, but this missed the point by again focusing on the technology, not the person. It is the people who are mobile and although the first mobile communications device was a phone, individuals now use a multiplicity of devices and media to communicate; some personally owned, some corporately supplied, some large, some small, some formal, some social.


Unifying all this is a bigger problem than fixing the plumbing or the endpoint, and it is not solved by appending the word 'collaboration' onto the end of 'unified communications' either.


The solution ultimately lies in the cloud and in allowing people to choose which devices and media work best for them.


Instead of convergence in the wiring, PBX or end-user device, the integration of communications services needs to take place in shared infrastructure, just like it did with the plain old telephone system.


There has been a surge in unified communications solutions delivered as a service (UCaaS), with companies such as 8x8, RingCentral, ShoreTel and Mitel leading the charge. It might only be a quarter to a third of the total market, but this is the area that is growing the fastest.


Rapid growth in adoption tends to indicate that individuals are comfortable and like it, and the key to this for communications is ubiquity - the IT industry's adaptation of the Martini principle: anyone, anytime, anywhere, on anything. If those involved can continue to focus on the user and the user experience - delivering control of, and access to, all forms of communication in a consistent manner across all devices, and seamlessly incorporating real-time services like video and telephony, for example through universal browser-based technology (WebRTC) - then the next stage in the evolution of unified communications could finally deliver pretty awesome new stuff.


Get it simple, useful and effective for individuals to choose what they use 'everywhere', and they will deliver the long-awaited benefits of closer, co-operative working that help make their employing organisation more efficient and productive.


The rise of digital natives and their enthusiasm for cloud services

Bob Tarzey

Two years ago Quocirca published a research report, The Adoption of Cloud-based Services (sponsored by CA Inc.), which looked at the attitudes European organisations had to the use of public cloud platforms. The research showed two extremes: among the UK respondents, 17% could not get enough cloud (we dubbed them enthusiasts), whilst another 23% were proactive avoiders. In late 2014 Quocirca ran another UK-focussed research project, From NO to KNOW: The Secure Use of Cloud-Based Services (sponsored by Digital Guardian), which asked the same question. How things have changed!

The latest report shows the proportion of enthusiasts has risen to 32%, whilst the proportion of avoiders has fallen to just 10%. Changes were observed between these extremes as well. Those that evaluate cloud services on a case-by-case basis had risen from 17% to 23%, whilst those who regard them as supplementary to in-house IT had fallen from 43% to 35%. These two groupings, not really distinguished in the first report, turned out to be more interesting than expected when we looked at the nitty-gritty of the approaches taken to security.

To be clear, what Quocirca likes to think are snappy terms, such as avoiders and enthusiasts, are applied during the analysis. The research questionnaire used drier and more nuanced language, for example enthusiasts actually agreed with the statement "we make use of cloud-based services whenever we can, seeing such services as the future for much of our IT requirement" rather than the avoiders who agree either "we avoid cloud-based services" or "we proactively block the use of all cloud-based services". So the research is a reasonable barometer for changing attitudes rather than a vote on buzzwords.

The report goes on to look at a range of benefits that are associated with positive attitudes to cloud, such as ease of interaction with outsiders (especially direct connections with consumers) and support for complex information supply chains (the subject of a second report, Weak Links, and this blog post). It also showed that confidence in the use of cloud-based services was underpinned by confidence in data security (the subject of a first report, Room for improvement, and this blog post). Both reports were also sponsored by Digital Guardian.

The research looked at the extent to which respondents' organisations had invested in 20 different security capabilities, broadly grouped into data, end-point and cloud data security measures. Enthusiasts were more likely than average to have invested in all 20, with policy-based access rights (based on device, location and so on) and next generation firewalls topping the list. However, avoiders were no laggards; they were more likely than average to have invested in certain technologies too - data loss prevention (DLP) topped their list, followed by a range of end-point controls, as they seek to lock down their users' cloud activity.

There was clear differentiation in the middle ground too. Supplementary users of cloud services took a laissez-faire approach; they were less likely than average to have invested in nearly all 20 security measures. Case-by-case users had thought things through more and were more likely than average to have in place a range of end-point security measures and to be using secure proxies for cloud access.

What does all this tell us? First, that the direction of travel is clear; there is increasing confidence in, and enthusiasm for, cloud-based services. Second, that for many it is one step at a time, with more and more turning to cloud services for specific use cases. The security measures are being put in place to enable all this, often, as the new report's title points out, not to block the use of cloud services by saying NO but to be able to control them by being in a position to KNOW what is going on.

This reflects two realities that no organisation can ignore. First, digital natives (those born around 1980 or later) have been rising as a proportion of the workforce over the last two decades and many are now in management positions; they bring positive attitudes to cloud services with them. Second, regardless of what IT departments think, digitally native or otherwise, users and their lines-of-business recognise the benefits of cloud services and will seek them out. This so-called shadow IT is delivering all sorts of benefits to businesses. Why would any organisation want to avoid that?

The new Sage - does it know its onions?

Clive Longbottom

The herb sage has the Latin name Salvia officinalis - Salvia coming from the Latin for 'well' or 'unharmed'.

Sage, the UK-based company famous for its accounting software, seems to have been living up to its name.  Even with a raft of new online competitors, Sage's revenues have been relatively steady, as have profits (barring 2013, when a swathe of non-core assets was disposed of, including Interact - Sage's ill-fated incursion into full customer relationship management (CRM) software through ACT! and SalesLogix).

However, Sage has got itself a name for being rather conservative in product development, and also for pretty bad customer service.  The time has come for Sage to stop resting on its laurels, counting on its dominance among accountancy practices to maintain revenues, and to face up to a changing world.

So, what can Sage do?

I was at Sage's recent Sage Summit in New Orleans, where new (as of November 2014) CEO Stephen Kelly promised that Sage would from now on be putting the customer at the centre of everything.  He even went so far as to state that the company's share price was something he didn't pay much attention to (close to an illegal statement for a CEO). His belief is that happy customers lead to a vibrant company that will drive shareholder value - rather than focusing on shareholder value and finding that you have an unhappy and failing business underneath it.

Sage has been relatively slow to the software as a service (SaaS) model. Others, such as Xero and Zoho, have fully embraced the SaaS model - indeed to the exclusion of an on-premise capability.  Sage has realised that although a large portion of its own customers are conservative and want to stay with an on-premise capability, others are looking to SaaS - and Sage has had to be discounted as an option.

Sure, in 2013, Sage launched Sage One as a SaaS offering - but this was for very small companies.  This could compete with bookkeeping SaaS offerings, but was not really up to competing with the more functional accounting SaaS products.  Now, Kelly is changing Sage's approach.

Alongside Sage 100 and 300 come 100C and 300C - cloud versions of the standard systems, which have themselves been updated to use HTML5 interfaces to provide commonality between the on-premise and SaaS versions along with greater mobile device support.  Customers can therefore choose when to move to the cloud at their own speed, with the same underlying accounting engine meaning that data transfer is seamless.  Sage 50 remains as a small business, desktop option for those who are happy with a native PC application.

Alongside this comes Sage Live (renamed, somewhat confusingly, from Sage Life) - a SaaS only offering that is based on the Salesforce1 cloud platform.  Launched in the US, Sage Live will be arriving in the UK in the coming weeks and will then roll out elsewhere, starting with other English-speaking countries.  There is no need to be a Salesforce user to subscribe to Sage Live - but if you are, then full integration is possible, so that Sage users have access to the Salesforce data from within Sage, and Salesforce users have access to the Sage data within Salesforce.

Sage also has an ERP offering - Sage ERP X3.  Kelly made a bold statement - he is declaring the end of ERP.  Not the product - just the term.  His view is that ERP has become a constraint within businesses: most ERP packages have become too big and unwieldy to help a dynamic organisation.  The newly named, SaaS-based Sage X3 aims to support such dynamic organisations - it is far more of a process engine than an ERP system - and includes many of the accountancy capabilities that you would get in Sage Live.

This is all fine - but it does lead to some problems around where each product positions itself.  The stated aim of giving customers the choice between on-premise and SaaS works for Sage 100 and 300 - it doesn't work for Sage One, Sage 50, Sage Live or Sage X3.  Sage Live will grow to include a degree of the functionality that Sage X3 has - the functional differences will need to be messaged succinctly and solidly to ensure that users go for the right product.  Sage Live and Sage X3 are, by the nature of the platforms they are written for, completely different beasts at a code level - porting 'goodness' from one platform to the other is not easy, nor will data transfer be easy should a customer need to move from one platform to another.

One other point is that Sage wants to become the voice for small businesses.  Starting in the UK, it has launched a petition (http://bit.ly/payin30days) to try to get large organisations to pay smaller organisations' invoices within 30 days.  This is all well and good, but the UK already has the Institute of Directors, the Chambers of Commerce and the Federation of Small Businesses, along with many more groups nominally lobbying for the 'small guy'. Sage may be seen as more of a company with a commercial agenda than an independent voice in this area.

Similarly, if it wants to be a major voice and provider to the small and medium business sector, then along with getting rid of the term ERP, maybe it needs to downplay the use of Accounting as well.  If Sage moves to being far more of a process-based vendor, then it can focus on providing a financial 'engine' to SMBs.  Here, it can be a solid system of record for everything financially related, and through effective technical partnerships, can make sure that other business functions (for example, information management, business analytics and intelligence) all access and utilise the Sage engine.

In reality, this seems to be where Sage is heading: partnerships are being put in place, and Sage has created several sets of APIs (in itself, a problem - the API sets need to be brought down to a single set) so that different systems of engagement (user interfaces) can plug directly into Sage's system of record.

It is, as yet, too early to say if Sage is biting off more than it can chew: this will only become apparent in 12-18 months' time.  However, it is good to see that under Kelly's leadership, Sage is ceasing to be such a conservative company.  Maybe it will come out of this new direction, 'well'. Salvia, Sage!

Confidence in data security part 2 - Weak links - the info supply chain

Bob Tarzey

A previous blog post, Room for improvement, showed that organisations which invest in user education, advanced technologies and the ability to co-ordinate both security policy and incident response improve their confidence in data security. All well and good, but what does this do for an organisation other than help prove it is able to meet various regulatory requirements?

The research reports behind this series of blogs also looked at the impact confidence in security had on information supply chains. Manufacturers and retailers in particular have extensive physical supply chains to move goods around. However, all organisations now share data with external users across public networks through information supply chains and all can exploit these better if they improve their confidence in information security.

The complexity of these information supply chains varies. They are more complex when overlaying an extensive physical supply chain, in larger organisations and when the individuals and organisations involved cover a broad geographic area. More complexity provides more motivation to invest in the measures that improve confidence in data security. The investments made vary depending on the types of data involved.

In retail, distribution and transport, payment card data is by far the greatest concern. To protect this there is more likely than average to be investment in next generation firewalls (that help deal with the PCI DSS requirement to secure applications), policy based access rights to cloud resources and locking down user end-points, for example through configuration change controls and mobile app management.

Financial services firms also worry about payment card data; however, personally identifiable data (PID) comes a close second. To protect PID similar technologies are favoured; however, the degree to which a given organisation is more likely than average to invest is considerably higher than it is for payment card data. The reason for this is that the security of payment card data can be outsourced to payment gateway providers, whilst the ultimate responsibility for PID always remains with the data controller (the business that owns the data) regardless of where it is stored.

When it comes to intellectual property (a big concern for manufacturers) data loss prevention (DLP) and digital rights management (DRM) are high on the list of technologies that are more likely than average to be deployed. Even higher on the list are ways to monitor user behaviour in the cloud and on end points.

Effective information security is not just about ticking boxes to meet the expectations of regulators (although that is necessary), it is about providing the confidence to safely share information far and wide through the increasingly complex information supply chains that enable business processes. Those which fail to do this will lose the confidence of customers and partners. Losing that will probably damage your business faster than any regulator can.

Quocirca's report, Weak Links, was sponsored by Digital Guardian (a supplier of data protection products) and is free to download at the following link: http://quocirca.com/content/weak-links-strengthening-information-supply-chain


Do simple single function consumer wearable devices have a future or are they just a passing fad?

Rob Bamforth

The current market driver for wearables appears to be fitness applications and there are plenty of fitness devices already on the market with new entrants popping up all the time. Some started life as simple digital pedometers, but by adding elevation, geographic position and heart rate as well as more sophisticated sensors, they can capture more useful information. To gain real insight, as well as monetising the data in some way, a cloud-based service needs to be added to supplement the physical device; free to start with, or always free for simpler tasks, and then perhaps paid for as perceived value increases.


It is a common model, applied elsewhere in IT, and in the physical world approximating to the popular 'razor and blades' marketing approach.  However, longer term, wearable digital fitness risks becoming a disposable fad in a similar way to gym membership. Many join up after a particular event - New Year resolution, birthday, etc. - but only stay the course a few months.


Yes, of course a number will persevere, but are the sweaty worlds of working out or the massed ranks of hikers the best places for a top-end, high-tech gadget with a monthly subscription to aerobics-as-a-service? If only a simple device is required - a digital pedometer plus - then it is much harder to see many value-added apps following, or a growing ecosystem of additional functions providing any meaningful payback. In fact, some early device providers, such as Nike with its Fuelband, have already stumbled and fallen by the wayside.


Health, rather than fitness, makes much more sense, as there is a greater opportunity for added-value applications. Fitness and health should go hand in hand, but individuals are generally more attuned to the benefits of reducing their risks than to the opportunity of 'adding value' by increasing fitness - the instinct for survival perhaps being greater than the urge to overcome natural laziness. Devices and services that monitor health and provide actionable advice may have more monetisable longevity. After all, people pay premiums for faster access to (private) healthcare, 'healthy' superfoods and organic produce, so the model of paying to be healthier is well understood and well funded.


Completing the feedback loop by monitoring and feeding data back to a central hub adds value to the health sector by allowing for better targeting, applying resources and understanding results. So, as well as the health benefit, the individuals involved might also see lower personal costs especially if health monitoring is tied to overall costs associated with the person's health etc. As with other forms of detailed data gathering, the insurance industry, through life and medical insurance, should recognise the benefits of this insight and incorporate it into honing its offerings. Where the individuals' health cover or insurance is paid for by their employers (essentially monitoring to reduce human 'maintenance' costs) there may also be cost saving incentives for employers as well as health, fitness and even financial incentives for employees.


From a society perspective there are also huge potential benefits for supporting the vulnerable, unwell or those at risk - not only for the social good, but also saving costs, in only needing to send first responders or other care workers out when there is a physical need (notwithstanding the needs for social contact - but this can still be scheduled for care workers with different skill sets). However, there is a risk that these benefits, while hugely worthy, are difficult to attribute and so may end up being ancillary rather than primary drivers for adoption. So there has to be a clear personal incentive to wear - being safe, being cared for or being financially rewarded.


Simple consumer wearables may have an important future in health and fitness, but they need more than cool tech, sensors, cloud services and long battery lives. They need to engage both the wearer and service providers, financially as well as through 'cool marketing', to successfully fuel the surrounding ecosystem and to ensure they continue to be regularly worn and used.


It might be that in the longer term, smart fabrics will increasingly incorporate sensors, providing pervasive monitoring without inconveniencing the wearer in any way, or other routinely carried devices, such as mobile phones or emerging ones like smartwatches, will capture sufficient data. In the meantime, there is a significant simple wearable opportunity on the wrist, but the value has to be clear to all parties in order to have staying power.

Securing the Internet of Things - time for another look at Public Key Infrastructure (PKI)?

Bob Tarzey

The Internet of Things (IoT) is a broad area that is attracting much discussion. Wikipedia starts its IoT definition as follows: 'the network of physical objects or "things" embedded with electronics, software, sensors, and connectivity to enable objects to exchange data with the manufacturer, operator and/or other connected devices....' Such capabilities are nothing new. However, three things are happening that drive the current discussion:

  • The increasing tendency to open previously closed networks of things to the public internet for ease of management and increased value through greater connectivity.
  • The ever decreasing cost and size of embedded chip sets make it easy to IP-enable all sorts of devices, from consumer gadgets to industrial probes and sensors, and attach them to standardised networks.
  • This connectivity leaves IoT deployments open to attack and the volume of devices makes the resulting attack surface potentially huge. It is here that the renewed case for Public Key Infrastructure (PKI) is being made.

PKI vendors provide management capabilities for the issuing and revoking of digital certificates which ensure secure communications across public networks.  This is not the same as publicly trusted certificate authorities (CAs) which issue SSL certificates, for example to secure online retail and banking, although some PKI vendors do this too.

When the widespread use of the internet took off in the mid-90s there were expected to be rich pickings from PKI. However, PKI adoption was slower than expected, as many found the complexity and cost hard to justify. The main PKI platform providers that are still around today include Entrust Datacard, EMC's RSA and Verizon (which ended up with the PKI assets of Baltimore, which reached stellar levels on the FTSE in 2000).

So what's new - why might PKI help today's businesses reap the benefits of the IoT whilst minimising the risk? First it is necessary to understand why the IoT is vulnerable and why it might be targeted. Network connectivity is of course a prime reason, although closed networks are not immune (the initial Stuxnet attack on the Iranian centrifuges was on a closed system).

Second, as it stands, many things, such as probes, sensors, cameras and medical devices, are more vulnerable than traditional computing end-points such as servers, PCs and smartphones, because they run embedded software (firmware) that has been developed without online security in mind. As well as software flaws, this can include really basic stuff such as back-end management interfaces with default credentials and non-encrypted communications. Worse still, regimes for updating the firmware are often non-existent and the software on devices can quickly become out of date. This is a particular problem as older things become connected, since their firmware may have been left unchanged for years.

Third, the identity of things, and of the entities trying to communicate with them, is often not authenticated to the same level as would be the case for traditional users and their devices. Solving this third problem would go some way to solving the first two. One of the challenges with identity is that things often communicate directly with other things or with back-end servers (machine-to-machine or M2M), so traditional methods of authentication, such as passwords, biometrics and tokens, cannot be used; hello again, PKI!

So, why would the bad guys seek to exploit these weaknesses? Hackers are always seeking weak points for initial entry to networks, enabling them to pose as trusted insiders and move sideways (the well publicised 2014 attack on Target Corporation's payment systems was initiated via a cooling system maintenance application). IoT deployments may also be targeted in their own right to damage business processes for some reason. Furthermore, the very volume of things makes them attractive for recruitment to botnets, which can be used to perpetrate other sorts of attacks. This has already happened; for example, in 2014 Akamai reported a malware kit named Spike that enabled 'routers, smart thermostats, smart dryers and other devices' to be recruited to botnets and used to launch DDoS attacks.

Whatever the vulnerabilities and motives for attack, the risks can be minimised by authenticating the identity of any given entity trying to communicate with a thing. An entity can be another thing or a computer with or without a human operator. Once machines start communicating directly with each other there is no human latency to slow communications down which can create huge amounts of network chatter. This is a problem in its own right and, when it comes to security, it makes it hard to find potential attack traffic. A single human controlled attack in amongst a high volume of legitimate M2M traffic could easily go unnoticed; all the more reason to authenticate each and every communication.

In principle that is easy through the use of digital certificates. Equip all things with a private key and issue public keys to any other device that has a valid reason to access it (a minimal sketch of the idea follows the list below). In practice, there are three related problems to overcome:

  • Volume: the number of individual things has the potential to be so vast that issuing public keys to each and every one becomes impractical. This issue can be dealt with by using a layered approach to the way things are deployed, providing private keys to hubs that control subnets of things.
  • Key life-cycle management: even if volume is kept under control a management structure is still needed to understand what public and private keys have been issued, when they need renewing and revoking them when necessary. PKI is a way of managing this.
  • Cost: the keys and the PKI infrastructure have a cost that needs to be affordable within the overall value proposition for any set of IoT applications a given organisation is intending to roll out.
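
As promised above, here is a minimal sketch of the underlying idea using the Python cryptography package: a 'thing' signs its readings with a private key, and a hub that has been given the corresponding public key verifies them before trusting the data. A real deployment would use certificates issued, renewed and revoked through a PKI rather than raw keys handed around by hand.

```python
# Minimal illustration of key-based authentication for M2M traffic (not a full PKI):
# the 'thing' signs what it sends with its private key; the receiving hub verifies
# the signature with the thing's public key before trusting the reading.
# Requires the third-party 'cryptography' package.
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

# Key pair generated for (or by) the thing at enrolment time.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()   # distributed to the hub, ideally via a PKI

reading = b'{"sensor": "thermostat-07", "value": 19.5}'
signature = private_key.sign(reading, padding.PKCS1v15(), hashes.SHA256())

# On the hub: verify before acting on the data.
try:
    public_key.verify(signature, reading, padding.PKCS1v15(), hashes.SHA256())
    print("reading accepted")
except InvalidSignature:
    print("reading rejected - signature does not match")
```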

The security risks associated with the IoT are worth overcoming to reap the benefits. PKI was first developed to secure communications between user devices and servers over public networks, so the IoT seems an obvious extension of this use case. The options for deploying IoT applications, using PKI and the viability of vendors such as Entrust Datacard, EMC/RSA and Verizon to achieve this will be the subject of a second article. 

Half of large enterprises plan to increase spending on managed print services

Louella Fernandes

The managed print services market continues to gain momentum as enterprises seek to tackle escalating print costs and drive greater business efficiency. The market is relatively buoyant with 51% of organisations (either already using or planning to use MPS) indicating they plan to increase expenditure on MPS over the next year.  The market is characterised by a tightly packed group of leaders: Xerox, HP, Ricoh, Lexmark and Canon.

Quocirca estimates that almost 50% of large enterprises (over 1,000 employees) are now using some form of MPS, with stronger prevalence in very large enterprises. Overall, a further 20% are planning to use MPS within the next year, reflecting the growing maturity of the market. Whilst broader workflow solutions are proving to be a significant differentiator, service delivery remains a key MPS market driver. Quocirca believes that continued investment to drive enhanced service performance through predictive analytics, and a focus on consistent delivery through integrated back-end platforms, is ultimately what sets the leading providers apart.

On average, organisations have been using MPS for 3 years with an average of 23 locations and 6 countries covered by an MPS contract. The majority (64%) are in the second phase of their engagements - having optimised their fleet and now implementing document workflow tools. Overall, 70% operate a multivendor fleet managed by a single MPS provider (mixed fleet) reflecting the need for strong multivendor support capabilities at the outset of any MPS contract. However, almost 80% indicate that they intend to consolidate on a single brand. Operating a standardised fleet offers a range of efficiency benefits - for both IT management and end-users. Clearly those MPS providers that are able to offer the broadest hardware portfolio are best positioned to address the diverse printing and imaging needs of large enterprises.

 

A fully outsourced approach pays dividends

Currently, the majority of respondents indicated that they use a hybrid MPS approach, retaining some print management tasks in-house. However, the fully outsourced approach is the one paying the most dividends. Overall, 90% of those using a fully outsourced service are satisfied or very satisfied with the management and performance of their print infrastructure, compared to 68% of those taking a hybrid approach. In fact, while overall respondents reported an average saving of 26% on the cost of printing over the past year through using MPS, it is those using a fully outsourced approach that report the highest savings. Almost 40% of this segment indicates savings of over 30% compared to 24% of those using a hybrid approach.

 

Certainly, with a fully outsourced approach, organisations can achieve significant cost efficiencies, and the scale and experience of the MPS provider can go beyond what is available from internal resources. It can also enable an organisation to drive innovation and change by freeing up internal IT staff to work on development projects that are more closely aligned to achieving business objectives. Whilst a hybrid approach can help an organisation retain some level of in-house control, it requires robust governance to ensure efficiency and consistent service level quality.

 

Security and cost are top drivers for MPS

Overall, for the first time, security has risen to the top of the agenda, with 75% indicating that this was an important or very important driver.  Document security was rated highest by professional services and financial sector respondents, with government, despite its heavy reliance on printing, giving it the lowest priority.  Unsurprisingly, cost remains a top driver - particularly amongst organisations with more than 3,000 employees and those in the professional services and financial sectors.  Clearly, despite many of these organisations transitioning to digital processes, the cost of printing is still a key challenge, which they are looking to mitigate through MPS. Service quality follows closely behind; improving service levels through better governance, SLA quality, reporting and analytics is now a key differentiator for the top MPS providers.

MPS is successfully tackling paper to digital workflow

Overall, 72% of respondents indicated that they have some paper free processes and are planning more. For those already using MPS, this rose to 74% compared to 57% that have yet to start MPS. So how well is MPS faring when it comes to helping organisations transition to digital workflows?

A key consideration is the smart multi-function printer (MFP), which when effectively utilised is the foundation to bridging the paper and digital gap. With sophisticated document capture and routing capabilities, these devices can integrate directly with enterprise content management (ECM) and other systems such as enterprise resource planning (ERP). So, for instance, paper invoices or expense receipts can be scanned and routed directly to an accounts application from the MFP interface panel.

Quocirca's survey revealed that 37% of organisations have a well-defined strategy that maximises the benefits of smart MFPs, with 50% indicating that they understand the value of smart MFPs and are starting to exploit them. Notably, 46% of MPS users have a well-defined MFP strategy compared to just 14% of those that are yet to begin their MPS engagement.

What is differentiating the leaders in the market?

Whilst broader workflow solutions are proving to be a significant differentiator, service delivery remains a key MPS market driver. Leading MPS providers are those making continuous investment to drive enhanced service performance through predictive analytics and a focus on consistent delivery through integrated back-end platforms. There is no room for complacency - the market is increasingly competitive and, as MPS providers jostle for position, it will not only be their future strategy to improve business processes that is key to retaining existing customers, but also their ability to maintain a secure, reliable and cost-efficient print infrastructure.

 

Read the summary Quocirca MPS Market Landscape 2015 for more information.

Confidence in data security part 1 - Room for improvement

Bob Tarzey

It will not come as a surprise to many that UK enterprises could do more to improve their confidence in data security. Security managers know this, but often struggle to get the funds they need. The first of three 2015 Quocirca research reports quantifies the business benefits of a range of measures that build confidence in data security, helping to make the case for investment.

Only 29% of the organisations say they are very confident about data security. This rises to 52% in financial services but drops to just 16% in retail, distribution and transport. However, it is not just the industry sector that a given organisation is in that affects confidence levels. The levels of user knowledge about data security, the types of technologies deployed and the ability to co-ordinate policy all vary significantly and each can make a big difference.

To start with, those organisations that had educated their employees to be very knowledgeable about data protection measures were three to four times as likely to be very confident about data security as those that had not. When it comes to security technology, widely deployed capabilities such as email and web content filtering make little difference; these have become hygiene factors that most now have in place. Countering advanced threats to data requires more state-of-the-art technology.

For example, those organisations that have deployed data loss prevention (DLP) are also three to four times as likely to be very confident about data security compared to those that have not. Specific measures for securing data sharing in the cloud, such as the use of secure proxies, secure links and the ability to profile users and devices, have a similar impact. Certain end-point security measures more than double the proportion saying they are very confident.

Threats can come from within and without the organisation. Being able to understand who is doing what with data, and co-ordinating the response accordingly, makes a big difference too. A highly co-ordinated capability to respond to insider threats more than doubles the number saying they are very confident; a co-ordinated response to criminal hackers triples the figure.

Put it all together and the results are startling. None of those doing poorly at all of the above are very confident about data security; 30% of them say they are not confident at all. At the other end of the spectrum, all of those that do a good job of educating users and co-ordinating responses alongside deploying certain advanced technologies are very confident (63%) or somewhat confident (37%) about data security.

Interestingly, those organisations lying at these two extremes do share one characteristic: they have a similar average number of security suppliers and repositories for defining security policy. However, with the laggards this is simply because they have not deployed much in the first place; with the leaders it is because they have rationalised and streamlined their approach to data security. They get a number of benefits from doing this, for example stronger information supply chains and insight into what their users are up to in the cloud; these will be the subject of two further blog posts.

No business can duck these issues, so the laggards should take a leaf out of the leaders' book and get real about data security. Quocirca's report, Room for improvement, was sponsored by Digital Guardian (a supplier of data protection products) and is free to download at the following link: http://quocirca.com/content/room-improvement-building-confidence-data-security

Making the most of data: making data more open

Clive Longbottom | No Comments
| More

A while back, I published an article on the use of open data sets, covering some of the things that were already being done and what may be possible in the near- to mid-term.  After this was published, David Patterson of KnowNow Information (KnowNow) got in touch to ask to meet up to discuss what his company has been up to in this space.

I met up with David at IBM's Hursley Laboratories in Hampshire. As a member of IBM's Global Entrepreneur Programme, KnowNow has access to IBM's skills and capabilities, helping to maximise KnowNow's capabilities in how it deals with data.

With access to IBM's Bluemix and Node-RED technologies, KnowNow can focus on its own deep domain expertise - managing datasets to ensure that its customers get the end results they need.  Indeed, KnowNow has a viewpoint that is quite refreshing - it wants to promulgate what it is doing to as many people and companies as possible: it does not want to be proprietary. It wants to be able to monetise its domain expertise in how it understands, analyses and feeds information back as the customer needs it - not from dealing with data in any 'hidden' manner.

Part of this is undoubtedly based on KnowNow's relationship with the Open Data Institute (ODI).  The ODI is pushing for as many datasets to be made available via open APIs as possible. KnowNow's approach has been to utilise as much 'free' data as possible - and therefore, it doesn't believe that it can charge for the data itself - only for the results.

So, what has KnowNow been up to?  The reason that David wanted to meet up was primarily to show me what KnowNow has been doing around flood monitoring and event prediction.  With a system aimed at the Environment Agency and DEFRA in the UK, the idea was to be able to run predictive simulations of where resources would be required should a flood event occur.  These resources could be anything from improved signage to the presence of the fire brigade, ambulance service or army, based on available open data sets, including near real time data on rainfall, river levels and weather forecasts set against geospatial information such as 2D and 3D map data.  An example of an event in this case could be where a ford is likely to reach a level at which a car could get washed away - the simple provision of a 'no entry' sign would prevent this.  The use of the open data sets enables KnowNow to predict such events to a good degree of accuracy.  It is easy to see how such a model could be used in areas such as brush or forest fires, drought and other weather events.
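As a minimal sketch of the kind of rule that could sit behind such a prediction, the snippet below polls an open river-level feed and flags when a ford should be signed off. The API base URL, station ID and threshold are illustrative assumptions, not KnowNow's actual implementation.

```python
# Illustrative only: the endpoint, station ID and threshold are assumptions,
# not KnowNow's actual system.
import requests

FLOOD_API = "https://environment.data.gov.uk/flood-monitoring"  # assumed open data endpoint
STATION_ID = "1029TH"          # hypothetical monitoring station upstream of the ford
FORD_DANGER_LEVEL_M = 0.9      # hypothetical river level at which the ford becomes unsafe

def latest_level(station_id: str) -> float:
    """Fetch the most recent water-level reading (in metres) for a station."""
    resp = requests.get(f"{FLOOD_API}/id/stations/{station_id}/measures", timeout=10)
    resp.raise_for_status()
    measures = resp.json()["items"]
    return measures[0]["latestReading"]["value"]

if __name__ == "__main__":
    level = latest_level(STATION_ID)
    if level >= FORD_DANGER_LEVEL_M:
        print(f"{level:.2f}m >= {FORD_DANGER_LEVEL_M}m: deploy 'no entry' signage at the ford")
    else:
        print(f"{level:.2f}m: ford currently passable")
```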

However, KnowNow is finding it hard to persuade councils and central government to pay for the service - essentially, government is reactive rather than proactive, so KnowNow may need to wait for the winter and for floods to happen before the government purse is opened.  There has been more immediate interest from insurance companies - they can see value from this type of approach in setting premiums and in dealing with any fallout after an event.

KnowNow is also looking at other areas - and this is where its existing play in the open data market is key. The rise of the internet of things (IoT) could have a big impact on areas such as healthcare.  Take as an example a person who, for whatever reason, is still capable of living alone but requires a degree of oversight.  At the moment, this will be carried out via timed visits from care workers - and it is apparent that the system is overstretched.  Instead, create an intelligent environment around the person.  Have they opened their medicine container when they should?  Have they opened the front door at all in the past hours or days?  Are they moving around? Has a sharp movement been detected as they move around, which could denote a fall?  Have they used the kettle or the shower, opened the fridge door, watched TV or listened to the radio?  Monitoring all of these can enable analysis that identifies events where targeted interventions make sense.  Where a person has not spoken to someone for a while, a person can be sent round to have a chat.  Where a meal hasn't been eaten, someone can be sent to prepare one.  A fall? Send a first responder.
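To make the idea concrete, here is a deliberately simple sketch of the sort of rule-based checks described above. The sensor names, time windows and thresholds are hypothetical, chosen only to illustrate the pattern of turning raw events into targeted interventions.

```python
# Hypothetical sketch: sensor names, thresholds and time windows are illustrative,
# not a description of any real deployment.
from datetime import datetime, timedelta

def check_wellbeing(events, now):
    """events: list of dicts such as {'sensor': 'motion', 'time': datetime, 'accel_g': 1.1}"""
    alerts = []

    def last_event(sensor):
        times = [e["time"] for e in events if e["sensor"] == sensor]
        return max(times) if times else None

    # No movement for several hours -> send someone round for a chat and a check
    moved = last_event("motion")
    if moved is None or now - moved > timedelta(hours=6):
        alerts.append("No movement for 6+ hours: send a visitor")

    # Medicine container not opened today -> prompt, call or visit
    meds = last_event("medicine_box")
    if meds is None or meds.date() != now.date():
        alerts.append("Medicine not taken today: follow up")

    # A sharp acceleration spike from a wearable could denote a fall
    if any(e["sensor"] == "wearable" and e.get("accel_g", 0) > 3.0 for e in events):
        alerts.append("Possible fall: dispatch a first responder")

    return alerts

# Example: only a sharp wearable reading in the recent event stream
now = datetime.now()
print(check_wellbeing([{"sensor": "wearable", "time": now, "accel_g": 4.2}], now))
```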

Such targeted interventions can ensure that a person gets the help that they require, when they require it.  It can also help optimise use of the NHS' scarce resources.

Even with a more concentrated care environment, such as a care home, such an approach can help in optimising the care a person receives.  Heart rates, breathing, temperature can all be monitored.  Even areas such as the condition of adult nappies could be monitored, sending alerts when these need changing.  This again frees up the care staff to concentrate on the more human aspects of the job - talking and interacting with each person, rather than checking up on them.

But, all this needs some form of better standardisation around how data is held and transferred in such environments. Where near real time intervention is needed, any transformations of data can just slow things down.  What KnowNow would like to see is more of an agreement around how datasets and APIs are created and managed - to make them more open, more available, more usable.  This would be in the best interests of everyone involved - the IoT can only be effective where data is easily moved around and analysed.

I found the discussions with David very interesting - to me, KnowNow is one of just a few companies at the forefront of dealing with data in a manner that is suitable to the IoT.  It is apparent that data formats and APIs will be a sticking point for a truly effective IoT - it is incumbent on all players in the market to ensure that data is easily and freely available from their devices.

Is your identity and access management fit for purpose?

Bob Tarzey | No Comments
| More

In the old days, identity and access management (IAM) was a mainly internal affair; employees accessing applications, all safely behind a firewall. OK, perhaps the odd remote user, but they tunnelled in using a VPN and, to all intents and purposes, they were brought inside the firewall. Those days are long gone.

 

Today the applications can be anywhere and the users can come from anywhere. Quocirca research (Masters of Machines II, June 2015) shows almost 75% of organisations are now using cloud-based software-as-a-service (SaaS) applications with a similar number using infrastructure or platform-as-a-service (IaaS/PaaS) to deploy applications that run in 3rd party data centres. As for the users, as another recent Quocirca research report shows (Getting to know you, June 2015), they can be anywhere too.

 

It is not just the rise in the number of employees working remotely, but the fact that applications are opened up to outsiders. Whether it is better managing supply chains through sharing applications with partners and suppliers, managing distribution online or transacting directly with consumers, almost all organisations are interacting with external users beyond their firewall.

 

Furthermore, this is not a small scale opening up to a discrete set of users; the numbers involved are big. The average European enterprise is dealing with approaching a quarter of a million registered external users. For organisations that are dealing with consumers, such as financial services and transport organisations, the numbers are even higher. Dealing with this complete reconfiguration of the way IT applications are managed and accessed has required a re-think of IAM.

 

The "Getting to know you" research shows that only 20% of organisations think their current IAM systems are fit for purpose. IAM covers a range of capabilities including user provisioning, compliance reporting and single-sign-on. There is also an increasing requirement for federated identity management, which is the bringing together of identities from multiple sources and apply a common policy. For the majority, the primary source of identity for employees remains Microsoft Active Directory but this is now supplemented by a range of other sources for external users. These include partner directories, government databases, lists from telco service providers, member lists of professional bodies and, especially when it comes to consumers, social media.

 

The trouble is that many IAM systems were designed to deal with the old way of doing things. They were often purchased as part of a software stack from a vendor like Oracle, CA or IBM. Many organisations are now struggling to adapt these legacy IAM systems for the new use cases. As with any legacy system, wholesale replacement is often impractical if not impossible. The result is that new IAM suppliers are being introduced and integrated with the old.

 

The average organisation has at least two IAM suppliers; the number is higher when stack-based IAM is being adapted to deal with external users. The second IAM system is likely to be a SaaS system, designed for provisioning users from a wide range of identity sources to other cloud applications. IAM systems are becoming hybridised: legacy IAM for internal users and some older relationships (such as those with contractors) is integrated with cloud-based management for remote workers and for users from partners, business customers and consumers. 39% of the respondents to the "Getting to know you" research are taking a hybrid approach to federating identities and 53% are doing so for single sign-on, a particularly effective way of handling access to cloud-based resources for internal and external users. Both numbers rise for consumer-facing organisations.

 

A small number of organisations, around 10%, have moved entirely over to a SaaS-based IAM system such as Ping Identity's PingOne, Intermedia's AppID (from its SaaS ID acquisition), Okta, OneLogin or Symplified. Traditional stack-IAM vendors are updating their products; for example, CA SiteMinder, Symantec's SAM and IBM via its 2014 acquisition of Lighthouse Security. Other cloud service providers, such as Salesforce, have entered the IAM market, in its case by working with the open source provider ForgeRock.

 

The last decade has seen a revolution in the IAM market. The old guard will attempt to keep up with the upstarts. However, it seems that simply being an incumbent IAM supplier is not enough, so in order to keep up there is likely to be more acquisition and consolidation.

 

Simple security in the mobile 'jungle'

Rob Bamforth | No Comments
| More

Last month's 8th annual IT Security Analyst & CISO (chief info security officer) Forum organised and hosted by Eskenzi PR brought together a fascinating combination of those in charge of securing household names in the insurance, banking, accounting, pharmaceuticals and media verticals and a rich vein of vendors offering their security wares.


Shortly after, the tempo and feel of the event was well documented in my colleague Bob Tarzey's event report, in his blog "From 'no' to 'know'", which explored the highly pragmatic idea of not blocking users, but understanding what they are doing - and why.


This is especially important in the 'mobile' context, where the edge of the network is no longer a beige box running one operating system sat on the desk, but a plethora of pocket-able, smart, highly connected and increasingly wearable devices used by pretty much everyone and anyone. Each comes not only with a diversity of operating systems and huge ecosystems of apps, but also the personal preferences and idiosyncrasies of each user.


Finding enterprise tools that span and control devices, data, apps and ultimately the person using them is increasingly challenging - the problem could be characterised as no longer simply 'herding cats', but 'juggling lions'.


Images from popular culture suggest that lion tamers used to manage with a whip and a chair - essentially letting the lion loose, but keeping it within the keen eyes of the tamer, with a bit of fear from the potential of the whip and a prod from the chair in the right direction - so could IT security learn from this approach?


Many of the vendors at the Forum offered keen eyes to detect threats and problems, including vendors RiskIQ, Tenable and OpenDNS as well as others offering tools to whip applications, users and policies into shape such as Veracode, PulseSecure and Illumio.


However, one particular vendor caught my eye from a mobile perspective - Duo Security with its simple approach to two-factor authentication.


Humans are generally the weakest element in security, in IT just as everywhere else. If it's counter-intuitive (their perception, not yours), slow or just 'a bit difficult', it will not be used or not used properly. Even the most loyal employees will find ways round cumbersome tools that impede them in addressing the task at hand.


Duo addresses this by making it simple for a user to authenticate; online, offline, while mobile or just over a landline. This can be accomplished by one touch on an app on the screen of a favourite mobile device, an SMS via a mobile phone if there is no internet available, or an automated voice call to a phone. Not to leave anyone or more 'old-fashioned' circumstances out, Duo also supports hardware devices via display tokens or the YubiKey USB device.
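As a rough sketch of the fallback flow just described - push approval first, then an SMS one-time code, then an automated voice call - consider the following. This is not Duo's actual API; the helper functions and transports are purely illustrative.

```python
# Not Duo's API: a generic illustration of push -> SMS -> voice fallback.
import random

def second_factor(user, has_app, has_data, has_phone):
    """Try the most convenient factor first, falling back as connectivity allows."""
    if has_app and has_data:
        return push_approval(user)             # one touch on the app's approve button
    if has_phone:
        code = f"{random.randint(0, 999999):06d}"
        send_sms(user, code)                   # works when there is no internet access
        return input("Enter the code sent by SMS: ").strip() == code
    return voice_call_approval(user)           # landline or 'old-fashioned' fallback

def push_approval(user):
    # A real service would wait for the user to tap 'Approve' in the mobile app
    return input(f"[{user}] Approve the push notification? (y/n): ").lower() == "y"

def send_sms(user, code):
    print(f"(pretend SMS to {user}: your one-time code is {code})")

def voice_call_approval(user):
    return input(f"[{user}] Press 1 on the automated call to approve: ").strip() == "1"

if __name__ == "__main__":
    print("Authenticated" if second_factor("alice", True, True, True) else "Denied")
```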


The enterprise using Duo's authentication service can keep control of the branding so that users recognise it as their own and since users can self enrol and configure authentication methods based on their preferences, they are engaged from the outset. Offered by monthly subscription per user, and geared to different levels of functionality for different sizes of business or requirements, this is simple authentication as a service.


Security can often be seen by users as painful to endure, which makes it difficult to get buy-in and easy for security to become obstructive - and that does not really help with the core intention: improving security. With so much user choice and preference being exerted, it is far better to use tools that fit with 'lifestyles' as well as prodding security in the right direction. Here is a potential lion tamer's wooden stool: simple to use, works anywhere and, perhaps even better, the 'lions' can self-enrol.


Masters of Machines II - lifting the fog of ignorance in IT management

Bob Tarzey | No Comments
| More

In 2014 Quocirca published a research report looking at the value European organisations were deriving from operational intelligence. Now, in June 2015, there is a sequel: Masters of Machines II. Both reports were sponsored by Splunk, a provider of an operational intelligence platform. The new report is freely available to download at the given link and Quocirca will be presenting the results at a webinar on July 16th 2015 (free registration HERE).

 

The research looks at the changing priorities of European IT managers. In particular how cost-related concerns have dropped away with improving economic conditions in most European countries, whilst concerns around the customer experience, data chaos, inflexible IT monitoring and, in particular, IT security, have all risen.

 

The research goes on to look at how effective operational intelligence is at addressing some of these concerns as well as two other issues. The first of these is increasing IT complexity as more and more cloud-based resources are used to supplement in-house IT, thereby creating hybridised platforms. Second is the role operational intelligence plays in supporting commercial activities, especially the cross channel customer experience (that is the mixed use of web sites, mobile apps, social media, email, voice and so on by individual consumers to communicate with product and service suppliers).

 

Effective operational intelligence requires the comprehensive collection of machine data and tools that are capable of consolidating, processing and analysing it. The research looks at the ability European organisations have to do all this through the use of an operational intelligence index, which was used in both the 2014 and 2015 reports. The index covers 12 capabilities, from the most basic ("capture, store, and search machine data") to the most advanced: "provide the business views from machine data analysis that drive real-time decision-making and innovation (customer insights, marketing insights, usage insights, product-centric insights)".

 

In nearly all areas there is a strong positive correlation between operational intelligence capability and the ability to address various IT and commercial management challenges. The exception is IT security, where concerns increase with better operational intelligence. The conclusion here is dark: only once deep enough insight is gained do organisations really see the scale of the security challenge. Some with little insight may exist in a state of blissful ignorance; however, that will not last. It is better to know the movements of your foes than to have them emerge at an unexpected time and place out of the fog of ignorance.







Securing joined-up government: the UK's Public Service Network (PSN)

Bob Tarzey | No Comments
| More

A common mantra of the New Labour administration that governed the UK from 1997 to 2007 (when the 'new' was all but dropped with the departure of Tony Blair), was that Britain must have more joined-up government. An initiative was kicked-off in 2007 to make this a digital reality with the launch of the UK Public Sector Network (PSN, since relabelled the Public Service Network).

 

Back then digital reform, data sharing, sustainability and multi-agency working were all top of mind. However, an effective PSN also makes it easier for smaller suppliers to participate in the public sector market place, an issue which interested the Coalition government that replaced Labour in 2010 and its recent Conservative successor. This saw the government focus shift to public sector spending cuts and a desire to break up mega technology and communications contracts into smaller chunks.

 

In short, the PSN is a dedicated high performance internet for the UK government, a standardised network of networks, provided by large service providers such as BT, Virgin Media, Vodafone and Level 3 Communications and a host of smaller companies keen to get in on the action. The PSN architecture is similar to the internet but separated from it, with performance guarantees. Separate, but not isolated - how else could citizens be served?

 

Information sharing via the PSN is controlled; the aim is to be open when appropriate but secure when necessary. One objective is to reduce the reported instances of data leaks. According to the UK Information Commissioner's Office (ICO) Data Breach Trends, in the last financial year there were 35 reported breaches for central government and 233 for local government, the latter only being beaten by healthcare with an atrocious 747. That made government organisations responsible for about 15% of all incidents (excluding health, education and law enforcement).

 

An organisation wanting to access the PSN must pass the PSN Code of Connection (CoCo), an information assurance mechanism that aims to ensure all the various member organisations can have an agreed level of trust through common levels of security.

 

Advice on compliance is laid out by the government on the PSN web site and advice is also available from Innopsis, a trade association for communications, network and application suppliers to the UK public sector. Innopsis was previously known as Public Service Network GB (PSNGB). It helps its members understand and deal with the complexities of the public sector ICT market, especially with regard to use of the PSN.

 

The PSN rules include making sure the end-points that attach to the network are compliant, which means they must be managed in some way (i.e. ad hoc bring-your-own-device is not allowed). Example controls include: ensuring software is patched to the latest levels, preventing the execution of unauthorised software, deploying anti-malware and using encryption on remote and mobile devices. A PSN member organisation can have unmanaged devices on its own network, but these must be clearly and securely separated from the CoCo-compliant part of the network.
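As a hypothetical illustration of that kind of end-point check - not the actual CoCo criteria - a simple compliance gate might look something like this:

```python
# Hypothetical only: the attribute names below are illustrative, not the real CoCo checklist.
REQUIRED = {
    "patched_to_latest": True,        # software patched to the latest levels
    "unauthorised_software": False,   # no unapproved executables present
    "anti_malware_running": True,
    "disk_encrypted": True,           # expected for remote and mobile devices
}

def coco_compliant(device):
    """Return (compliant, list of failed checks) for a reported device state."""
    failures = [check for check, expected in REQUIRED.items() if device.get(check) != expected]
    return (not failures, failures)

laptop = {"patched_to_latest": True, "unauthorised_software": False,
          "anti_malware_running": True, "disk_encrypted": False}

ok, failures = coco_compliant(laptop)
print("Allow onto the managed segment" if ok else f"Keep off the compliant segment: {failures}")
```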

 

Innopsis was represented by its chairman, Phil Gibson, on a panel facilitated by Quocirca at InfoSec in June 2015, which looked at secure network access in the UK public sector. Also on the panel was the ICT Security Manager for the NHS South East Commissioning Support Unit (CSU), talking about a project to roll out the Sussex Next Generation Community of Interest Network (NG-COIN); one of four Linked WANs that South East CSU manages.

NHS organisations currently use another dedicated network called N3; however, this is being replaced by a PSN for healthcare, which is to be labelled the Health and Social Care Network (HSCN).  The Sussex NG-COIN involved 30,000 end user devices across 230 sites with anything from 1 to 5,000 users; many of the sites required public network access. There are 15 different organisations using the NG-COIN with varying security requirements and thousands of applications containing sensitive clinical information.

 

The old COIN relied on an ageing and ineffective intrusion prevention system (IPS). With NG-COIN this was replaced by a network access control (NAC) system. The cost difference to the 15 user organisations was absorbed as a security line item cost, which they were already accustomed to.

 

ForeScout's CounterACT NAC system was selected in 2013. It proved to be fast to deploy: 95% of the network was being monitored within one week. It was compatible with all the legacy networking equipment from a range of vendors including Cisco, HP and 3Com (now owned by HP). The system provided the flexibility to define policies by device type, site owner, user type and so on, and was integrated with the existing wireless solution to provide authenticated guest access.

 

CounterACT also fulfilled reporting requirements, providing complete information about access and usage across the whole network - what, where, when and who - from a single console. It also provided the ability to automatically block access for non-compliant devices or limit access based on usage policies.

 

These are all issues that any organisation needs to be able to address before attaching to the UK PSN. NAC provided Sussex NHS with a way to ensure controlled and compliant use of its network - an example that any organisation wanting to attach to the UK PSN could follow.

IBM labs: people having fun while changing the future?

Clive Longbottom | No Comments
| More

As an industry analyst, I go to plenty of technology events.  Some are big, aimed at a specific vendor's customers and prospects.  Others are targeted just at analysts.  Many are full of marketing - and all too often, not a lot else.

However, once in a while, I get a real buzz at an event.  Such a one was a recent visit to IBM's Zurich Labs to talk directly with the scientists there working on various aspects of storage and other technologies.  These sorts of events tend to be pretty well stripped of the marketing veneer around the actual work, and as much of what is going on is at the forefront of science, it also forces us to think more deeply about what we are seeing.

Starting with a general overview, IBM then dived deep into flash storage.  The main problem with flash storage is that it has a relatively low endurance, as each cell of memory can only be written to a given number of times before it no longer works.  IBM has developed a system which can take low-cost consumer-level solid state drives (SSDs) and elevate their performance and endurance to meet enterprise requirements. The team has demonstrated 4.6x more endurance with a specific low-cost SSD model. The impact on flash-storage economics using such an approach will be pretty massive.

However, IBM has to look beyond current technologies, and so is already researching what could take over from flash memory.  We were given the first demonstration to non-IBM people of phase change memory (PCM). To date, read/write electronic storage has been carried out on magnetic media (tape, hard disks) or flash-based media. Read-only storage has also included the likes of CDs and other optical media, where a laser beam is used to change the state of a layer of material from one phase to another (switching between amorphous and crystalline states).  For read-only use, this is fine: once the change is made, it can be left and has a high level of stability.  Read/write optical disks need to be able to apply heat to return the layer of material to its base state - and this is where the problems have been when looking at moving the technology through to a more dynamic memory use.

PCM requires that the chosen material can be changed from one state to another very rapidly, and back again.  It also needs to be stable, and needs to be able to store data over a long period of time.  Whereas in optical memory a laser beam is used, in memory the change has to be carried out through the use of an electrical current. However, there is also a problem called drift. Here, the resistance of the amorphous state rises over time according to a power law, and this makes the use of PCM in a multi-level cell configuration (needed to provide enough memory density) a major problem.
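To give a feel for the drift problem, the commonly cited model is R(t) = R0 (t/t0)^v, where the drift exponent v is larger for more amorphous states. The sketch below uses illustrative resistance levels and exponents (not IBM's figures) to show how the stored levels spread apart at different rates over time, which is what eventually defeats fixed read thresholds.

```python
# Illustrative values only: resistance levels and drift exponents are assumptions,
# not IBM's measurements. Drift model: R(t) = R0 * (t / t0) ** nu
def resistance(r0, t, nu, t0=1.0):
    return r0 * (t / t0) ** nu

# Four nominal levels (2 bits per cell); the more amorphous states drift faster
levels = [(1e3, 0.001), (1e4, 0.02), (1e5, 0.06), (1e6, 0.10)]  # (R0 in ohms, nu)

for seconds in (1, 3600, 86400 * 30):   # 1 second, 1 hour, ~1 month after programming
    drifted = [resistance(r0, seconds, nu) for r0, nu in levels]
    print(f"t = {seconds:>8}s: " + ", ".join(f"{r:,.0f} ohm" for r in drifted))
# Because each level shifts at a different rate, read thresholds fixed at programming
# time start to misclassify cells - hence the need for drift-tolerant detection.
```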

IBM demonstrated how far it has got by writing a JPEG image to the memory and then reading it back.  In a raw form, the picture was heavily corrupted.  However, using intelligent technology developed by IBM, it has created a means of more clearly delineating the levels between the states within the material it is using.  With that system in place, the image recovered from the memory was near perfect.

Why bother?  Multi-Level Cell (MLC) Flash tops out at around 3,000 read/write cycles, but PCM can endure at least 10 million. PCM is also faster than flash and cheaper than RAM: creating a PCM memory system gets closer still to being able to manage large data sets in real time - which many organisations will be willing to pay for.

Next was what is being called a "datacentre in a box" (a term I have heard so many times before that I winced when I heard it).  However, on this occasion, it may be closer to being realistic than before.  Rather than just trying to increase densities to the highest point, IBM is taking a new architectural approach, using server-on-chip Power architecture systems on a board about the same size as a dual in-line memory module (DIMM), as used in modern PCs.  These modules can run a full version of Linux, and can be packed into a 2U unit using a novel form of water cooling.  Instead of the cooling being directly on the chip, a copper laminate sheet is laid across the top of the CPU, with the ends of the sheet clamped into large copper bus bars at each end.  These bus bars also carry the power required for the systems, so meeting two needs in one design.  The aim is for 128 of these modules to be held in a single 2U rack mount chassis, consuming less than 6kW in power.  The heat can also be scavenged and used elsewhere when hot water cooling is used. Although "hot water cooling" may sound weird, the core temperature of the CPU only has to be kept below 80°C, so the water used to cool the CPUs is passed through an external heat exchanger, where its temperature drops just enough to keep the CPU below 80°C before being pumped back round to the CPU.  The heat is high grade and can be used for space heating or heating water for use in, for example, washing facilities - so saving on a building's overall energy bill.
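A quick back-of-envelope check, using only the figures quoted above, shows why the density claim is interesting:

```python
# Using only the quoted figures: 128 Linux-capable server modules in 2U, under 6kW in total.
modules, max_power_w = 128, 6000
print(f"~{max_power_w / modules:.0f} W per server module")   # roughly 47 W each
# For comparison, a conventional 2U server typically houses far fewer servers,
# each drawing several hundred watts.
```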

We also saw IBM's quiet rooms.  No - not some Google-esque area where its employees could grab a few moments of sleep, but rooms specifically built to create as ideal a place for nanometer technology experimentation as possible.  By "quiet", IBM is not just looking at how much audible noise is in the room.  Sure, there are anechoic chambers elsewhere which have less noise.  IBM wanted areas where the noise of electromagnetic forces, of movement, of temperature and humidity could be minimised to such an extent that when the scientists want to hit a molecule with an electron beam from a metre away, they know it will happen.

These rooms were designed not by an architect or a civil engineer.  It took one of IBM's physicists to look at what he really needed from such an environment, and then to come up with a proto-design and talk to others about whether this would be feasible.  Being told that it wouldn't be, he went ahead anyway.  These rooms are unique on the planet at this time - and put IBM at the forefront of nanometer research.

These areas we were shown, along with others, raised lots of questions and created discussions that were interesting.  As we all agreed as we left the facility - if only all analyst events could be like that.

From "no" to "know": a report from the Eskenzi CISO forum

Bob Tarzey | No Comments
| More

This year's Eskenzi PR annual IT Security Analyst & CISO (chief info security officer) Forum was the 8th such event and attracted the security leaders of some of the largest UK organisations. Household names from insurance, banking, accounting, pharmaceuticals and media were all represented, as well as a large service provider and one true 21st century born-in-the-cloud business.

 

Whilst media outlets are never going to see all issues to do with IT security in the same ways as insurers ("journalists have to act in anomalous ways compared to users in role-based organisations" said one), there was consensus in many areas.

 

All accepted the reality of bring-your-own-device (BYOD), however it is managed and implemented. Shadow IT was recognised as a widespread issue, but one to be managed, not banished. The mood was well summarised by a comment from one CISO - "we have to move from NO to KNOW"; that is, do not block users from trying to do their jobs, but do make sure you have sufficient insight into their activity. A good analogy offered up by another was of a newly built US university campus surrounded by newly laid lawns with no footpaths. Only after a year, when the students had made clear the most-trodden routes, were hard paths laid. Within reason, IT security can be managed in the same way - to suit users.

 

There was some disagreement about how news of software vulnerabilities and exploits should be reported in the press: is it better that some high profile cases raise awareness amongst management, or does over-reporting lead to complacency? Denial-of-service (DoS) attacks were recognised as a ubiquitous problem; not to be accepted but controlled. Perhaps the greatest consensus was reached about the need to deal with privileged user access.  One CISO observed that if the use of privilege internally is well managed it goes a long way towards mitigating external threats as well; hackers invariably seek out privileges to perpetrate their attacks.

 

The two day event, which as well as CISOs included industry analysts (such as Quocirca) and a host of other IT security professionals, was sponsored by a dozen or so IT security vendors. So what message was there for them from the attendees?

 

Clearly Wallix, a supplier of privileged user management tools, would have gone away with a renewed sense of mission to limit the powers of internal users and unwanted visitors. As would Duo Security, whose two-factor authentication, through the use of one-time keys on mobile devices, would also help keep unauthorised outsiders at bay.

 

Of course hackers will do all they can to find weaknesses in your applications and infrastructure; all the more reason to scan software code for vulnerabilities with services from Veracode both before and after deployment. Nevertheless vulnerabilities will always exist, so when a new one is made known, Tenable Security can scan your systems to find where the dodgy components are installed and highlight the riskiest deployments for priority fixing.

 

Should hackers and/or malware find their way onto the CISOs' systems, new technology from Illumio enables the mapping of inter-workload traffic, including between virtual machines running on the same platform. Anomalous traffic can be identified, reported and blocked - it is a common tactic of hackers and malware to gain a foothold on one server and then attempt to move sideways. Hopefully, such traffic would not include anything related to DoS attacks, which could be blocked by services from Verisign or from other such providers that may base their prevention on DoS hardware appliances from Corero.

 

Enabling users to safely use the web is key to saying YES and remaining safe. OpenDNS, amongst other things, protects users wherever they are from perilous web sites and other threats. RiskIQ eliminates the unknown greyness that can prevail in such matters by classifying any web resource as either known or rogue. Venafi says that monitoring the use of SSL keys and cleansing systems of them acts like an immune system for the internet. Meanwhile Pulse Secure (a 2014 spinoff from Juniper Networks) combines its mature SSL-VPN technology with network access control (NAC) to provide end point monitoring way out in the cloud. It also has newly acquired technology called Mobile Spaces to enable BYOD through the creation of local mobile containers on Android or iPhone devices.

 

Impressive claims from all the vendors, however, one CISO was keen to remind suppliers; "do not over-promise and under-deliver". His peers all nodded in agreement.

 

Li-Fi fantastic - Quocirca's report from Infosec 2015

Bob Tarzey | No Comments
| More

As with any trade show, Infosec (Europe's biggest IT security bash) can get a bit mind-numbing, with one vendor after another going on about the big issues of the day - advanced threat detection, threat intelligence networks, the dangers of the internet of things and so on. They all have a different take on these topics, but they all talk the same language, so it can be hard to see the wood for the trees.

 

It is, therefore, refreshing when you come across something completely different. So it was as I wandered among the small booths on the upper floor of Olympia. These are reserved for new innovators with their smaller marketing budgets (as well as a few old hands, who made last minute decisions to take cheap exhibition space!)

 

"Do you want to see something really amazing" I was asked, as I walked past the tiny stand of Edinburgh-based pureLiFi. Too rude to refuse, I agreed. "Light does not go through walls" I am told (and agreed), so it is a more secure way to transmit data than WiFi. I can't argue with that. So, I am shown a streaming video being transmitted direct to a device from a light above the stand, the stream can be stopped by simply by the intervention of a hand. "Line of sight only" I say, true, but the device is then moved across the stand to another light source, where, state aware, the streaming continues. Actually, Li-Fi is not a new concept, there is Wikipedia page on the subject and the Li-Fi Consortium was founded in 2011. However, pureLiFi seems to be the first to attempt to commercialise it.

 

pureLiFi was not alone in coming to Infosec with a product that is not entirely about security but sees the show as a good place to promote itself by alluding to security specifics. Some IT industry old hands were to be seen at Infosec 2015 for the first time. For example, Perforce Software, which provides a tool for managing software development teams, was promoting its recently announced intellectual property (i.e. software) protection capabilities. Another was Bomgar, a tool for accessing and managing remote user devices that now has something to say about the secure use of privilege.

 

Many of the vendors might be majoring on advanced threats, but their actual or potential customers at Infosec often took the conversation elsewhere. Several moaned to Quocirca that they could still not get some of their senior managers to take security training seriously. This is a real problem, as recent Quocirca research, sponsored by Digital Guardian (an exhibitor at Infosec) shows; knowledge about data security at all levels in an organisation has a big role to play in improving security confidence.

 

PhishMe, another exhibitor, had something to say about this too; it runs in-company campaigns to raise awareness of email and web risks. It now includes immediate micro-training modules (one minute or less) for any employee that finds themselves taken in by a test email scam. It hopes even the most red-faced business manager will take the time to view these.

 

The overall size of Infosec 2015, compared to when the show started 20 years ago, is bewildering. And that is without some of the biggest names taking high-profile stand space; there was little sign of Symantec, Intel Security (aka McAfee), Microsoft, HP or IBM. However, no visitor should have gone away without new insight and ideas; the global stars of information security dominated the central space, including Trend Micro, FireEye, Palo Alto Networks and ForeScout. They were joined by other innovators from around the world - from China and Australia and every corner of Europe and the UK. Infosec Europe highlights not just the challenge of IT security but the central role security now plays in every aspect of IT.

 

Is the IoT actually becoming workable?

Clive Longbottom | No Comments
| More

In 2013, I wrote a piece (http://blog.silver-peak.com/hub-a-dub-dub-3-things-in-a-tub) discussing the issues that the internet of things (IoT) would bring to the fore, including how the mass chattiness of traffic created by the devices could bring a network to its knees.  In the piece, I recommended that a hub and spoke approach be used to control this.

The idea is that a host of relatively unintelligent devices would all be subservient to a local hub.  That local hub would control security and would also be the first level filter for data, ensuring that only data that needed to move toward the main centre of the network did so, removing the chattiness at as early a stage as possible.

The approach would then be hierarchical - multiple hubs would be subservient to more intelligent hubs and so on until a big data analytical centre hub would be there to carry out pattern recognition and complex event processing to optimise the value of the IoT.
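A minimal sketch of that first-level filtering might look like the following: the local hub forwards only readings that have changed meaningfully, absorbing the routine chatter before it ever reaches the wider network. The threshold, device IDs and the upstream call are illustrative assumptions, not a description of any vendor's gateway.

```python
# Illustrative sketch of hub-level filtering; threshold, device IDs and the upstream
# call are assumptions.
import json, random, time

REPORT_DELTA_C = 0.5     # only forward temperature changes of at least half a degree
last_forwarded = {}

def forward_upstream(payload):
    """Stand-in for sending data on to the parent hub or central analytics tier."""
    print("-> upstream:", json.dumps(payload))

def handle_reading(device_id, temperature_c):
    previous = last_forwarded.get(device_id)
    if previous is None or abs(temperature_c - previous) >= REPORT_DELTA_C:
        forward_upstream({"device": device_id, "temp_c": round(temperature_c, 2),
                          "ts": time.time()})
        last_forwarded[device_id] = temperature_c
    # otherwise the reading is absorbed at the hub and never touches the wider network

for _ in range(10):
    handle_reading("sensor-42", 21.0 + random.uniform(-1.0, 1.0))
```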

At the time, I wrote this from a theoretical point of view - no vendor seemed to be looking at the IoT in this way, and I was very worried that the IoT could find itself discredited at an early stage as those who stood to gain the most from a well architected IoT ran up against the issues of an anything-connected-to-anything network.

So at a recent event, it was refreshing to see that Dell has taken a hub approach to the IoT.  Dell announced that it has set up a dedicated IoT unit, and the availability of its first product, an Intel-based IoT "gateway", using dual-core processors in a small and hardened form factor.  These devices can then be placed within any environment where IoT devices are creating data, and can act as an intelligent collection and filtering point.

Dell is actively partnering with companies that are in the IoT device space.  One such company is KMC Controls, which is looking to use Dell's IoT Gateway as a means of enabling it to continue to provide low-cost building monitoring and automation devices while using the centralised standardised data management and security of the IoT Gateway.

Dell's first IoT Gateway is a generic device coming in at under $500 that users can utilise in current projects or as a device for an IoT proof of concept (PoC).  It can run many flavours of Linux or Microsoft's specialised Windows IoT natively, allowing IoT applications and functions to be layered on top of the box.  Dell has also teamed up with ThingWorx (a division of PTC) to help customers create and deploy IoT applications that will give them additional capabilities in achieving their business aims.

As time progresses, Dell will be bringing out more targeted IoT Gateways, with specific operating systems and specific code to deal with defined IoT scenarios.  This will help IoT device vendors and the channel to more easily position and sell their offerings.

Overall, this is a good move by Dell and points toward a maturation of thinking in the market.  Whether other vendors step up to the mark is yet to be seen.  However, it will be in everyone's - including Dell's - interests for a standardised hub and spoke IoT architecture to be adopted.  This will avoid the IoT getting a bad name as poor architectures bring networks to their knees, and will also accelerate the actual adoption of real, useful IoT.
