Making the most of data: making data more open

Clive Longbottom

A while back, I published an article on the use of open data sets, covering some of the things that were already being done and what may be possible in the near- to mid-term.  After this was published, David Patterson of KnowNow Information (KnowNow) got in touch to ask to meet up to discuss what his company has been up to in this space.

I met up with David at IBM's Hursley Laboratories in Hampshire. As a member of IBM's Global Entrepreneur Programme, KnowNow has access to IBM's skills and technologies, helping it to maximise how it deals with data.

With access to IBM's Bluemix and Node-RED technologies, KnowNow can focus on its own deep domain expertise - managing datasets to ensure that its customers get the end results they need.  Indeed, KnowNow has a viewpoint that is quite refreshing - it wants to promulgate what it is doing to as many people and companies as possible: it does not want to be proprietary. It wants to be able to monetise its domain expertise in how it understands, analyses and feeds information back as the customer needs it - not from dealing with data in any 'hidden' manner.

Part of this is undoubtedly based on KnowNow's relationship with the Open Data Institute (ODI).  The ODI is pushing for as many datasets to be made available via open APIs as possible. KnowNow's approach has been to utilise as much 'free' data as possible - and therefore, it doesn't believe that it can charge for the data itself - only for the results.

So, what has KnowNow been up to?  The reason that David wanted to meet up was primarily to show me what KnowNow has been doing around flood monitoring and event prediction.  With a system aimed at the Environment Agency and DEFRA in the UK, the idea was to be able to run predictive simulations of where resources would be required in the case of a flood event occurring.  These resources could be anything from improved signage to the presence of the fire brigade, ambulance service or army, based on available open data sets, including near real time data on rainfall, river levels and weather forecasts, set against geospatial information such as 2D and 3D map information.  An example of an event in this case could be a ford reaching a level at which a car could be washed away - the simple provision of a 'no entry' sign would prevent this.  The use of open data sets enables KnowNow to predict such events to a good degree of accuracy.  It is easy to see how such a model could be used in areas such as brush or forest fires, drought and other weather events.
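
As a rough illustration of the kind of open-data check involved - the gauging station ID and danger threshold below are invented, and the endpoint follows my understanding of the Environment Agency's public flood-monitoring API rather than KnowNow's actual system:

```python
# Minimal sketch: poll an open river-level feed and flag when a ford is
# likely to become dangerous. The endpoint follows the Environment Agency's
# public flood-monitoring API; the station ID and threshold are illustrative.
import requests

EA_API = "https://environment.data.gov.uk/flood-monitoring"
STATION = "1491TH"        # hypothetical gauging station near the ford
DANGER_LEVEL_M = 0.9      # assumed depth at which a car could be washed away

def latest_level(station: str) -> float:
    """Fetch the most recent river-level reading (metres) for a station."""
    url = f"{EA_API}/id/stations/{station}/measures"
    measures = requests.get(url, timeout=10).json()["items"]
    # Each measure carries its latest reading inline.
    return float(measures[0]["latestReading"]["value"])

def check_ford() -> None:
    level = latest_level(STATION)
    if level >= DANGER_LEVEL_M:
        print(f"ALERT: river at {level:.2f} m - deploy 'road closed' signage")
    else:
        print(f"OK: river at {level:.2f} m")

if __name__ == "__main__":
    check_ford()
```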

However, KnowNow is finding councils and central government hard to persuade to pay for the service - essentially, government is reactive rather than proactive, so KnowNow may need to wait for winter and for floods to happen before the government purse is opened.  There has been more immediate interest from insurance companies - they can see value in this type of approach for setting premiums and for dealing with any fallout after an event.

KnowNow is also looking at other areas - and this is where its existing play in the open data market is key. The rise of the internet of things (IoT) could have a big impact on areas such as healthcare.  Take as an example a person who, for whatever reason, is still capable of living alone but requires a degree of oversight.  At the moment, this will be carried out via timed visits from care workers - and it is apparent that the system is overstretched.  Instead, create an intelligent environment around the person.  Have they opened their medicine container when they should?  Have they opened the front door at all in the past hours or days?  Are they moving around? Has a sharp movement been detected as they move around, which could denote a fall?  Have they used the kettle or the shower, opened the fridge door, watched TV or listened to the radio?  Monitoring all of these enables analysis that can identify events where targeted interventions make sense.  Where a person has not spoken to someone for a while, a person can be sent round for a chat.  Where a meal hasn't been had, someone can be sent to prepare one.  A fall? Send a first responder.
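
A minimal sketch of how such rule-based analysis might map sensor events to targeted interventions - the sensor names, time windows and actions are all illustrative, not KnowNow's system:

```python
# Sketch of rule-based intervention triggers over home-care sensor events.
# Sensor names, time windows and intervention labels are all illustrative.
from datetime import datetime, timedelta

def interventions(events: list[dict], now: datetime) -> list[str]:
    """Map recent sensor events to suggested interventions."""
    actions = []
    # Latest event time per sensor (later events overwrite earlier ones).
    last = {e["sensor"]: e["time"] for e in sorted(events, key=lambda e: e["time"])}

    # Medicine box not opened in the last day -> prompt a call or visit.
    if now - last.get("medicine_box", datetime.min) > timedelta(hours=24):
        actions.append("check medication has been taken")

    # No kettle, fridge or front-door activity for a day -> welfare visit.
    if all(now - last.get(s, datetime.min) > timedelta(hours=24)
           for s in ("kettle", "fridge_door", "front_door")):
        actions.append("send someone round for a chat and a meal")

    # Sharp movement spike from a wearable -> possible fall.
    if any(e["sensor"] == "accelerometer" and e.get("value", 0) > 3.0
           and now - e["time"] < timedelta(minutes=5) for e in events):
        actions.append("dispatch a first responder (possible fall)")

    return actions
```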

Such targeted interventions can ensure that a person gets the help that they require, when they require it.  It can also help optimise use of the NHS' scarce resources.

Even with a more concentrated care environment, such as a care home, such an approach can help in optimising the care a person receives.  Heart rates, breathing, temperature can all be monitored.  Even areas such as the condition of adult nappies could be monitored, sending alerts when these need changing.  This again frees up the care staff to concentrate on the more human aspects of the job - talking and interacting with each person, rather than checking up on them.

But, all this needs some form of better standardisation around how data is held and transferred in such environments. Where near real time intervention is needed, any transformations of data can just slow things down.  What KnowNow would like to see is more of an agreement around how datasets and APIs are created and managed - to make them more open, more available, more usable.  This would be in the best interests of everyone involved - the IoT can only be effective where data is easily moved around and analysed.

I found the discussions with David very interesting - to me, KnowNow is one of just a few companies at the forefront of dealing with data in a manner that is suitable to the IoT.  It is apparent that data formats and APIs will be a sticking point for a truly effective IoT - it is incumbent on all players in the market to ensure that data is easily and freely available from their devices.

Is your identity and access management fit for purpose?

Bob Tarzey

In the old days, identity and access management (IAM) was a mainly internal affair; employees accessing applications, all safely behind a firewall. OK, perhaps the odd remote user, but they tunnelled in using a VPN and, to all intents and purposes, they were brought inside the firewall. Those days are long gone.

 

Today the applications can be anywhere and the users can come from anywhere. Quocirca research (Masters of Machines II, June 2015) shows almost 75% of organisations are now using cloud-based software-as-a-service (SaaS) applications with a similar number using infrastructure or platform-as-a-service (IaaS/PaaS) to deploy applications that run in 3rd party data centres. As for the users, as another recent Quocirca research report shows (Getting to know you, June 2015), they can be anywhere too.

 

It is not just the rise in the number of employees working remotely, but the fact that applications are opened up to outsiders. Whether it is better managing supply chains through sharing applications with partners and suppliers, managing distribution online or transacting directly with consumers, almost all organisations are interacting with external users beyond their firewall.

 

Furthermore, this is not a small-scale opening up to a discrete set of users; the numbers involved are big. The average European enterprise is dealing with approaching a quarter of a million registered external users. For organisations that deal with consumers, such as financial services and transport organisations, the numbers are even higher. Dealing with this complete reconfiguration of the way IT applications are managed and accessed has required a rethink of IAM.

 

The "Getting to know you" research shows that only 20% of organisations think their current IAM systems are fit for purpose. IAM covers a range of capabilities including user provisioning, compliance reporting and single-sign-on. There is also an increasing requirement for federated identity management, which is the bringing together of identities from multiple sources and apply a common policy. For the majority, the primary source of identity for employees remains Microsoft Active Directory but this is now supplemented by a range of other sources for external users. These include partner directories, government databases, lists from telco service providers, member lists of professional bodies and, especially when it comes to consumers, social media.

 

The trouble is that many IAM systems were designed to deal with the old way of doing things. They were often purchased as part of a software stack from a vendor such as Oracle, CA or IBM. Many organisations are now struggling to adapt these legacy IAM systems to the new use cases. As with any legacy system, wholesale replacement is often impractical, if not impossible. The result is that new IAM suppliers are being introduced and integrated with the old.

 

The average organisation has at least two IAM suppliers; the number is higher when stack-based IAM is being adapted to deal with external users. The second IAM system is likely to be a SaaS system, designed for provisioning users from a wide range of identity sources to other cloud applications. IAM systems are becoming hybridised: legacy IAM for internal users and some older relationships (such as those with contractors), integrated with cloud-based management for remote workers and users from partners, business customers and consumers. 39% of the respondents to the "Getting to know you" research are taking a hybrid approach to federating identities and 53% are doing so for single sign-on, a particularly effective way of handling access to cloud-based resources for internal and external users. Both numbers rise for consumer-facing organisations.

 

A small number of organisations, around 10%, have moved entirely over to a SaaS-based IAM system such as Ping Identity's PingOne, Intermedia's AppID (from its SaaS ID acquisition), Okta, OneLogin or Symplified. Traditional stack-IAM vendors are updating their products; for example, CA SiteMinder, Symantec's SAM and IBM via its 2014 acquisition of Lighthouse Security. Other cloud service providers, such as Salesforce, have entered the IAM market, in its case by working with the open source provider ForgeRock.

 

The last decade has seen a revolution in the IAM market. The old guard will attempt to keep up with the upstarts. However, it seems that simply being an incumbent IAM supplier is not enough, so in order to keep up there is likely to be more acquisition and consolidation.

 

Simple security in the mobile 'jungle'

Rob Bamforth

Last month's 8th annual IT Security Analyst & CISO (chief info security officer) Forum organised and hosted by Eskenzi PR brought together a fascinating combination of those in charge of securing household names in the insurance, banking, accounting, pharmaceuticals and media verticals and a rich vein of vendors offering their security wares.


Shortly after, the tempo and feel of the event was well documented in my colleague Bob Tarzey's event report, in his blog "From 'no' to 'know'", which explored the highly pragmatic idea of not blocking users, but understanding what they are doing - and why.


This is especially important in the 'mobile' context, where the edge of the network is no longer a beige box running one operating system sat on the desk, but a plethora of pocket-able, smart, highly connected and increasingly wearable devices used by pretty much everyone and anyone. Each comes not only with a diversity of operating systems and huge ecosystems of apps, but also the personal preferences and idiosyncrasies of each user.


Finding enterprise tools that span and control devices, data, apps and ultimately the person using them is increasingly challenging - the problem could be characterised as no longer simply 'herding cats', but 'juggling lions'.


Images from popular culture suggest that lion tamers used to manage with a whip and a chair - essentially letting the lion loose, but keeping it within the keen eyes of the tamer, with a bit of fear from the potential of the whip and a prod from the chair in the right direction - so could IT security learn from this approach?


Many of the vendors at the Forum offered keen eyes to detect threats and problems, including vendors RiskIQ, Tenable and OpenDNS as well as others offering tools to whip applications, users and policies into shape such as Veracode, PulseSecure and Illumio.


However, one particular vendor caught my eye from a mobile perspective - Duo Security with its simple approach to two-factor authentication.


Humans are generally the weakest element in security, in IT just as everywhere else. If a tool is counter-intuitive (their perception, not yours), slow or just 'a bit difficult', it will not be used, or not used properly. Even the most loyal employees will find ways round cumbersome tools that impede them in addressing the task at hand.


Duo addresses this by making it simple for a user to authenticate; online, offline, while mobile or just over a landline. This can be accomplished by one touch on an app on the screen of a favourite mobile device, an SMS via a mobile phone if there is no internet available, or an automated voice call to a phone. Not to leave anyone or more 'old-fashioned' circumstances out, Duo also supports hardware devices via display tokens or the YubiKey USB device.
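
Duo's own service and APIs are not shown here; as a rough illustration of the kind of app-generated second factor being described, here is a minimal time-based one-time password (TOTP) check built on the standard library, following the RFC 6238 approach. The shared secret is made up:

```python
# Generic time-based one-time password (TOTP, RFC 6238) sketch - not Duo's
# actual API, just an illustration of an app-generated second factor.
import base64, hashlib, hmac, struct, time
from typing import Optional

def totp(secret_b32: str, at: Optional[float] = None, digits: int = 6,
         step: int = 30) -> str:
    """Derive the one-time code for the current 30-second window."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((at or time.time()) // step)
    msg = struct.pack(">Q", counter)
    mac = hmac.new(key, msg, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

def verify(secret_b32: str, submitted: str) -> bool:
    """Accept the code for the current window or the one either side of it."""
    now = time.time()
    return any(hmac.compare_digest(totp(secret_b32, now + drift * 30), submitted)
               for drift in (-1, 0, 1))

# Example with a made-up shared secret:
SECRET = "JBSWY3DPEHPK3PXP"
print(totp(SECRET), verify(SECRET, totp(SECRET)))
```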


The enterprise using Duo's authentication service can keep control of the branding so that users recognise it as their own and since users can self enrol and configure authentication methods based on their preferences, they are engaged from the outset. Offered by monthly subscription per user, and geared to different levels of functionality for different sizes of business or requirements, this is simple authentication as a service.


Users can often see security as painful to endure, which makes it difficult to get buy-in and easy for security to be obstructive - neither of which helps with the core intention: improving security. With so much user choice and preference being exerted, it is far better to use tools that fit with 'lifestyles' as well as prodding security in the right direction. Here is a potential lion tamer's wooden stool: simple to use, works anywhere and, perhaps even better, the 'lions' can self-enrol.


Masters of Machines II - lifting the fog of ignorance in IT management

Bob Tarzey

In 2014, Quocirca published a research report looking at the value European organisations were deriving from operational intelligence. Now, in June 2015, there is a sequel: Masters of Machines II. Both reports were sponsored by Splunk, a provider of an operational intelligence platform. The new report is freely available to download, and Quocirca will be presenting the results at a webinar on July 16th 2015 (registration is free).

 

The research looks at the changing priorities of European IT managers. In particular how cost-related concerns have dropped away with improving economic conditions in most European countries, whilst concerns around the customer experience, data chaos, inflexible IT monitoring and, in particular, IT security, have all risen.

 

The research goes on to look at how effective operational intelligence is at addressing some of these concerns as well as two other issues. The first of these is increasing IT complexity as more and more cloud-based resources are used to supplement in-house IT, thereby creating hybridised platforms. Second is the role operational intelligence plays in supporting commercial activities, especially the cross channel customer experience (that is the mixed use of web sites, mobile apps, social media, email, voice and so on by individual consumers to communicate with product and service suppliers).

 

Effective operational intelligence requires the comprehensive collection of machine data and tools that are capable of consolidating, processing and analysing it. The research looks at the ability European organisations have to do all this through the use of an operational intelligence index, which was used in both the 2014 and 2015 reports. The index covers 12 capabilities, from the most basic ("capture, store, and search machine data") to the most advanced ("provide the business views from machine data analysis that drive real-time decision-making and innovation - customer insights, marketing insights, usage insights, product-centric insights").

 

In nearly all areas there is a strong positive correlation between operational intelligence capability and the ability to address various IT and commercial management challenges. The exception is IT security, where concerns increase with better operational intelligence. The conclusion here is dark: only once deep enough insight is gained do organisations really see the scale of the security challenge. Some with little insight may exist in a state of blissful ignorance; however, that will not last. It is better to know the movements of your foes than to have them emerge at an unexpected time and place out of the fog of ignorance.

Securing joined-up government: the UK's Public Service Network (PSN)

Bob Tarzey

A common mantra of the New Labour administration that governed the UK from 1997 to 2007 (when the 'new' was all but dropped with the departure of Tony Blair) was that Britain must have more joined-up government. An initiative was kicked off in 2007 to make this a digital reality with the launch of the UK Public Sector Network (PSN, since relabelled the Public Service Network).

 

Back then, digital reform, data sharing, sustainability and multi-agency working were all top of mind. However, an effective PSN also makes it easier for smaller suppliers to participate in the public sector marketplace, an issue which interested the Coalition government that replaced Labour in 2010 and its recent Conservative successor. This saw the government focus shift to public sector spending cuts and a desire to break up mega technology and communications contracts into smaller chunks.

 

In short, the PSN is a dedicated high-performance internet for the UK government: a standardised network of networks, provided by large service providers such as BT, Virgin Media, Vodafone and Level 3 Communications and a host of smaller companies keen to get in on the action. The PSN architecture is similar to the internet but separated from it, with performance guarantees. Separate, but not isolated - how else could citizens be served?

 

Information sharing via the PSN is controlled; the aim is to be open when appropriate but secure when necessary. One objective is to reduce the reported instances of data leaks. According to the UK Information Commissioner's Office (ICO) Data Breach Trends, in the last financial year there were 35 reported breaches for central government and 233 for local government, the latter only being beaten by healthcare with an atrocious 747. That made government organisations responsible for about 15% of all incidents (excluding health, education and law enforcement).

 

An organisation wanting to access the PSN must pass the PSN Code of Connection (CoCo), an information assurance mechanism that aims to ensure all the various member organisations can have an agreed level of trust through common levels of security.

 

Advice on compliance is laid out by the government on the PSN web site and advice is also available from Innopsis, a trade association for communications, network and application suppliers to the UK public sector. Innopsis was previously known as Public Service Network GB (PSNGB). Innopsis helps its members understand and deal with the complexities of the public sector ICT market, especially with regard to use of the UK's Public Service Network (PSN).

 

The PSN rules include making sure the end points that attach to the network are compliant, which means they must be managed in some way (i.e. ad hoc bring-your-own-device is not allowed). Example controls include: ensuring software is patched to the latest levels, preventing the execution of unauthorised software, deploying anti-malware and using encryption on remote and mobile devices. A PSN member organisation can have unmanaged devices on its own network, but these must be clearly and securely separated from the CoCo-compliant part of the network.
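
As a rough illustration of what such a compliance check might look like in practice - the control names and device fields here are illustrative, not the PSN's actual CoCo specification:

```python
# Sketch: evaluate a device against CoCo-style endpoint controls before it
# is allowed onto the managed part of the network. Fields are illustrative.
REQUIRED_CONTROLS = {
    "patched_to_latest":    lambda d: d.get("patch_level") == d.get("latest_patch"),
    "no_unauthorised_sw":   lambda d: not d.get("unauthorised_software", []),
    "anti_malware_running": lambda d: d.get("anti_malware", False),
    "encrypted_if_mobile":  lambda d: (not d.get("mobile")) or d.get("disk_encrypted", False),
}

def compliance_report(device: dict) -> dict:
    """Return pass/fail per control plus an overall verdict."""
    results = {name: check(device) for name, check in REQUIRED_CONTROLS.items()}
    results["compliant"] = all(results.values())
    return results

laptop = {"patch_level": "2015-06", "latest_patch": "2015-07",
          "unauthorised_software": [], "anti_malware": True,
          "mobile": True, "disk_encrypted": True}
print(compliance_report(laptop))   # fails on patching -> not compliant
```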

 

Innopsis was represented by its chairman, Phil Gibson, on a panel facilitated by Quocirca at InfoSec in June 2015, which looked at secure network access in the UK public sector. Also on the panel was the ICT Security Manager for the NHS South East Commissioning Support Unit (CSU), talking about a project to roll out the Sussex Next Generation Community of Interest Network (NG-COIN), one of four Linked WANs that South East CSU manages.

NHS organisations currently use another dedicated network called N3; however, this is being replaced by a PSN for healthcare, which is to be labelled the Health and Social Care Network (HSCN).  The Sussex NG-COIN involved 30,000 end user devices across 230 sites with anything from 1 to 5,000 users; many of the sites required public network access. There are 15 different organisations using the NG-COIN with varying security requirements and thousands of applications containing sensitive clinical information.

 

The old COIN relied on an ageing and ineffective intrusion prevention system (IPS). With NG-COIN this was replaced by a network access control (NAC) system. The cost difference to the 15 user organisations was absorbed as a security line item cost, which they were already accustomed to.

 

ForeScout's CounterACT NAC system was selected in 2013. It proved to be fast to deploy: 95% of the network was being monitored within one week. It was compatible with all the legacy networking equipment from a range of vendors, including Cisco, HP and 3Com (now owned by HP). The system provided the flexibility to define policies by device type, site owner, user type and so on, and was integrated with the existing wireless solution to provide authenticated guest access.

 

CounterACT also fulfilled reporting requirements, providing complete information about access and usage across the whole network - what, where, when and who - from a single console. It also provided the ability to automatically block non-compliant devices or limit access based on usage policies.
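
To make the idea concrete, here is a hedged sketch of NAC-style policy evaluation - not ForeScout's product or API, just an illustration of deciding access by device type, site, user type and compliance state, with invented labels:

```python
# Generic NAC-style policy sketch (not ForeScout's API): decide what access
# a device gets based on type, site, user type and compliance state.
def access_decision(device: dict) -> str:
    if not device.get("compliant", False):
        return "block"                     # quarantine non-compliant devices
    if device.get("user_type") == "guest":
        return "guest-vlan"                # authenticated guest access only
    if device.get("device_type") == "clinical" and device.get("site") == "hospital":
        return "clinical-vlan"             # sensitive clinical applications
    return "standard"

devices = [
    {"device_type": "laptop", "user_type": "staff", "site": "clinic", "compliant": True},
    {"device_type": "tablet", "user_type": "guest", "site": "hospital", "compliant": True},
    {"device_type": "laptop", "user_type": "staff", "site": "clinic", "compliant": False},
]
for d in devices:
    print(d["device_type"], d["user_type"], "->", access_decision(d))
```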

 

These are all issues that any organisation needs to be able to address before attaching to the UK PSN. NAC provided Sussex NHS with a way to ensure controlled and compliant use of its network - an example that any organisation wanting to attach to the PSN could follow.







IBM labs: people having fun while changing the future?

Clive Longbottom

As an industry analyst, I go to plenty of technology events.  Some are big, aimed at a specific vendor's customers and prospects.  Others are targeted just at analysts.  Many are full of marketing - and all too often, not a lot else.

However, once in a while, I get a real buzz at an event.  Such a one was a recent visit to IBM's Zurich Labs to talk directly with scientists there working on various aspects of storage and other technologies.  These sorts of events tend to be stripped of the marketing veneer around the actual work and, as much of what is going on is at the forefront of science, they also force us to think more deeply about what we are seeing.

Starting with a general overview, IBM then dived deep into flash storage.  The main problem with flash storage is that it has a relatively low endurance, as each cell of memory can only be written to a given number of times before it no longer works.  IBM has developed a system which can take low-cost consumer-level solid state drives (SSDs) and elevate their performance and endurance to meet enterprise requirements. The team has demonstrated 4.6x more endurance with a specific low-cost SSD model. The impact on flash-storage economics using such an approach will be pretty massive.

However, IBM has to look beyond current technologies, and so is already researching what could take over from flash memory.  We were given the first demonstration to non-IBM people of phase change memory (PCM). To date, read/write electronic storage has been carried out on magnetic media (tape, hard disks) or flash-based media. Read-only storage has also included the likes of CDs and other optical media, where a laser beam is used to change the state of a layer of material from one phase to another (switching between amorphous and crystalline states).  For read-only use, this is fine: once the change is made, it can be left alone and has a high level of stability.  Read/write optical disks need to be able to apply heat to return the layer of material to its base state - and this is where the problems have been when looking at moving the technology through to a more dynamic memory use.

PCM requires that the chosen material can be changed from one state to another very rapidly, and back again.  It also needs to be stable, and needs to be able to store data over a long period of time.  Whereas optical memory uses a laser beam, in memory the change has to be made through the use of an electric current. However, there is also a problem called drift: the resistance of the amorphous state rises according to a power law, and this makes the use of PCM in a multi-level cell configuration (needed to provide enough memory density) a major problem.
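
That drift behaviour is commonly written in the PCM literature as follows, where R_0 is the resistance at a reference time t_0 and ν is the drift exponent; reported values for the amorphous phase are typically of the order of 0.1, while the crystalline phase drifts far less. These magnitudes are indicative rather than taken from IBM's results:

```latex
R(t) = R_0 \left( \frac{t}{t_0} \right)^{\nu}
```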

IBM demonstrated how far it has got by writing a JPEG image to the memory and then reading it back.  In raw form, the picture was heavily corrupted.  However, IBM has developed a means of more clearly delineating the levels between the states within the material it is using.  Using that system, the image recovered from the memory was near perfect.

Why bother?  Multi-Level Cell (MLC) Flash tops out at around 3,000 read/write cycles, but PCM can endure at least 10 million. PCM is also faster than flash and cheaper than RAM: creating a PCM memory system gets closer still to being able to manage large data sets in real time - which many organisations will be willing to pay for.
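
To put those cycle counts into context, here is a rough lifetime calculation; the capacity, daily write volume and write amplification figures are assumptions for illustration, not measured values:

```python
# Rough lifetime arithmetic from the quoted endurance figures. Capacity,
# daily write volume and write amplification are assumed, not measured.
def lifetime_years(capacity_gb: float, cycles: int,
                   daily_writes_gb: float, write_amp: float = 2.0) -> float:
    total_writes_gb = capacity_gb * cycles / write_amp
    return total_writes_gb / daily_writes_gb / 365

# 1 TB device, 500 GB written per day:
print(f"MLC flash (~3,000 cycles): {lifetime_years(1000, 3_000, 500):.0f} years")
print(f"PCM (>=10M cycles):        {lifetime_years(1000, 10_000_000, 500):.0f} years")
```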

Next was what is being called a "datacentre in a box" (a term I have heard so many times before that I winced when I heard it).  However, on this occasion, it may be closer to being realistic than before.  Rather than just trying to increase densities to the highest point, IBM is taking a new architectural approach, using server-on-chip Power architecture systems on a board about the same size as a dual in-line memory module (DIMM), as used in modern PCs.  These modules can run a full version of Linux, and can be packed into a 2U unit using a novel form of water cooling.  Instead of the cooling being directly on the chip, a copper laminate sheet is laid across the top of the CPU, with the ends of the sheet clamped into large copper bus bars at each end.  These bus bars also carry the power required for the systems, so meeting two needs in one design.  The aim is for 128 of these modules to be held in a single 2U rack mount chassis, consuming less than 6kW in power.  The heat can also be scavenged and used elsewhere when hot water cooling is used. Although "hot water cooling" may sound weird, the core temperature of the CPU only has to be kept below 80°C, so the water used to cool the CPUs is passed through an external heat exchanger, where its temperature drops just enough to keep the CPU below 80°C before being pumped back round to the CPU.  The heat extracted is high grade and can be used for space heating or for heating water for use in, for example, washing facilities - so saving on a building's overall energy bill.

We also saw IBM's quiet rooms.  No - not some Google-esque area where its employees can grab a few moments of sleep, but rooms specifically built to create as ideal a place for nanometre-scale experimentation as possible.  By "quiet", IBM is not just looking at how much audible noise is in the room.  Sure, there are anechoic chambers elsewhere which have less noise.  IBM wanted areas where the noise of electromagnetic forces, of movement, of temperature and humidity could be minimised to such an extent that when they want to hit a molecule with an electron beam from a metre away, they know it will happen.

These rooms were designed not by an architect or a civil engineer.  It took one of IBM's physicists to look at what he really needed from such an environment, and then to come up with a proto-design and talk to others about whether it would be feasible.  Being told that it wouldn't be, he went ahead anyway.  These rooms are unique on the planet at this time - and put IBM at the forefront of nanometre-scale research.

The areas we were shown, along with others, raised lots of questions and created interesting discussions.  As we all agreed as we left the facility - if only all analyst events could be like that.

From "no" to "know": a report from the Eskenzi CISO forum

Bob Tarzey

This year's Eskenzi PR annual IT Security Analyst & CISO (chief info security officer) Forum was the 8th such event and attracted the security leaders of some of the largest UK organisations. Household names from insurance, banking, accounting, pharmaceuticals and media were all represented, as well as a large service provider and one true 21st century born-in-the-cloud business.

 

Whilst media outlets are never going to see all issues to do with IT security in the same ways as insurers ("journalists have to act in anomalous ways compared to users in role-based organisations" said one), there was consensus in many areas.

 

All accepted the reality of bring-your-own-device (BYOD), however it is managed and implemented. Shadow IT was recognised as a widespread issue, but one to be managed, not banished. The mood was well summarised by a comment from one CISO - "we have to move from NO to KNOW"; that is, do not block users from trying to do their jobs, but do make sure you have sufficient insight into their activity. A good analogy offered up by another was of a newly built US university campus that was surrounded by newly laid lawns with no footpaths. Only after a year, when the students had made clear the most trodden routes, were hard paths laid. Within reason, IT security can be managed in the same way - to suit users.

 

There was some disagreement about how news of software vulnerabilities and exploits should be reported in the press; is it better that some high-profile cases raise awareness amongst management, or does over-reporting lead to complacency? Denial-of-service (DoS) attacks were recognised as a ubiquitous problem - not to be accepted, but controlled. Perhaps the greatest consensus was reached about the need to deal with privileged user access.  One CISO observed that if the use of privilege internally is well managed, it goes a long way towards mitigating external threats as well; hackers invariably seek out privileges to perpetrate their attacks.

 

The two-day event, which as well as CISOs included industry analysts (such as Quocirca) and a host of other IT security professionals, was sponsored by a dozen or so IT security vendors. So what message was there for them from the attendees?

 

Clearly Wallix, a supplier of privileged user management tools, would have gone away with a renewed sense of mission to limit the powers of internal users and unwanted visitors. As would Duo Security, whose two-factor authentication, through the use of one-time keys on mobile devices, would also help keep unauthorised outsiders at bay.

 

Of course hackers will do all they can to find weaknesses in your applications and infrastructure; all the more reason to scan software code for vulnerabilities with services from Veracode both before and after deployment. Nevertheless vulnerabilities will always exist, so when a new one is made known, Tenable Security can scan your systems to find where the dodgy components are installed and highlight the riskiest deployments for priority fixing.

 

Should hackers and/or malware find their way onto the CISO's systems, new technology from Illumio enables the mapping of inter-workload traffic, including between virtual machines running on the same platform. Anomalous traffic can be identified, reported and blocked - it is a common tactic of hackers and malware to gain a foothold on one server and attempt to move sideways. Hopefully, such traffic would not include anything related to DoS attacks, which could be blocked by services from Verisign or from other such providers that may base their prevention on DoS hardware appliances from Corero.

 

Enabling users to safely use the web is key to saying YES and remaining safe. OpenDNS, amongst other things, protects users wherever they are from perilous websites and other threats. RiskIQ eliminates the unknown greyness that can prevail in such matters by classifying any web resource as either known or rogue. Venafi says that its monitoring of SSL key use, and its cleansing of systems, acts like an immune system for the internet. Meanwhile, Pulse Secure (a 2014 spinoff from Juniper Networks) combines its mature SSL-VPN technology with network access control (NAC) to provide endpoint monitoring way out in the cloud. It also has newly acquired technology called Mobile Spaces to enable BYOD through the creation of local mobile containers on Android or iPhone devices.

 

Impressive claims from all the vendors; however, one CISO was keen to remind suppliers: "do not over-promise and under-deliver". His peers all nodded in agreement.

 

Li-Fi fantastic - Quocirca's report from Infosec 2015

Bob Tarzey

As with any trade show, Infosec (Europe's biggest IT security bash) can get a bit mind-numbing, with one vendor after another going on about the big issues of the day - advanced threat detection, threat intelligence networks, the dangers of the internet of things and so on. They all have a different take on these topics, but they all talk the same language, so it can be hard to see the wood for the trees.

 

It is, therefore, refreshing when you come across something completely different. So it was as I wandered among the small booths on the upper floor of Olympia. These are reserved for new innovators with their smaller marketing budgets (as well as a few old hands, who made last minute decisions to take cheap exhibition space!)

 

"Do you want to see something really amazing" I was asked, as I walked past the tiny stand of Edinburgh-based pureLiFi. Too rude to refuse, I agreed. "Light does not go through walls" I am told (and agreed), so it is a more secure way to transmit data than WiFi. I can't argue with that. So, I am shown a streaming video being transmitted direct to a device from a light above the stand, the stream can be stopped by simply by the intervention of a hand. "Line of sight only" I say, true, but the device is then moved across the stand to another light source, where, state aware, the streaming continues. Actually, Li-Fi is not a new concept, there is Wikipedia page on the subject and the Li-Fi Consortium was founded in 2011. However, pureLiFi seems to be the first to attempt to commercialise it.

 

pureLiFi was not alone in coming to Infosec with a product that is not entirely about security but sees the show as a good place to promote its wares by alluding to security specifics. Some IT industry old hands were to be seen at Infosec 2015 for the first time. For example, Perforce Software, whose tools manage software development teams, was promoting its recently announced intellectual property (i.e. software) protection capabilities. Another was Bomgar, a tool for accessing and managing remote user devices that now has something to say about the secure use of privilege.

 

Many of the vendors might be majoring on advanced threats, but their actual or potential customers at Infosec often took the conversation elsewhere. Several moaned to Quocirca that they could still not get some of their senior managers to take security training seriously. This is a real problem: as recent Quocirca research, sponsored by Digital Guardian (an exhibitor at Infosec), shows, knowledge about data security at all levels in an organisation has a big role to play in improving security confidence.

 

PhishMe, another exhibitor, had something to say about this too; it runs in-company campaigns to raise awareness of email and web risks. It now includes immediate micro-training modules (one minute or less) for any employee that finds themselves taken in by a test email scam. It hopes even the most red-faced business manager will take the time to view these.

 

The overall size of Infosec 2015, compared to when the show started 20 years ago, is bewildering. And that is without some of the biggest names taking high-profile stand space; there was little sign of Symantec, Intel Security (aka McAfee), Microsoft, HP or IBM. However, no visitor should have gone away without new insight and ideas. The global stars of information security dominated the central space, including Trend Micro, FireEye, Palo Alto Networks and ForeScout. They were joined by other innovators from around the world - from China, Australia and every corner of Europe and the UK. Infosec Europe highlights not just the challenge of IT security but the central role security now plays in every aspect of IT.

 

Is the IoT actually becoming workable?

Clive Longbottom

In 2013, I wrote a piece (http://blog.silver-peak.com/hub-a-dub-dub-3-things-in-a-tub) discussing the issues that the internet of things (IoT) would bring to the fore, including how the mass chattiness of traffic created by the devices could bring a network to its knees.  In the piece, I recommended that a hub and spoke approach be used to control this.

The idea is that a host of relatively unintelligent devices would all be subservient to a local hub.  That local hub would control security and would also be the first-level filter for data, ensuring that only data that needed to move further toward the main centre of the network did so, removing the chattiness at as early a stage as possible.

The approach would then be hierarchical - multiple hubs would be subservient to more intelligent hubs and so on until a big data analytical centre hub would be there to carry out pattern recognition and complex event processing to optimise the value of the IoT.
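
A minimal sketch of that first-level filtering role, assuming a simple "only forward readings that change significantly" rule and invented thresholds:

```python
# Sketch of a local IoT hub doing first-level filtering: only readings that
# change significantly are forwarded up the hierarchy. Thresholds are illustrative.
class Hub:
    def __init__(self, upstream, threshold: float = 0.5):
        self.upstream = upstream          # next hub (or analytics centre)
        self.threshold = threshold
        self.last_seen = {}               # sensor_id -> last forwarded value

    def ingest(self, sensor_id: str, value: float) -> None:
        previous = self.last_seen.get(sensor_id)
        if previous is None or abs(value - previous) >= self.threshold:
            self.last_seen[sensor_id] = value
            self.upstream(sensor_id, value)   # forward only meaningful changes
        # otherwise drop the reading here and keep the network quiet

def analytics_centre(sensor_id: str, value: float) -> None:
    print(f"central: {sensor_id} -> {value}")

hub = Hub(upstream=analytics_centre, threshold=0.5)
for reading in (20.0, 20.1, 20.2, 21.0, 21.1):
    hub.ingest("temp-sensor-1", reading)   # only 20.0 and 21.0 are forwarded
```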

At the time, I wrote this from a theoretical point of view - no vendor seemed to be looking at the IoT in this way, and I was very worried that the IoT could find itself discredited at an early stage as those who stood to gain the most from a well architected IoT ran up against the issues of an anything-connected-to-anything network.

So at a recent event, it was refreshing to see that Dell has taken a hub approach to the IoT.  Dell announced that it has set up a dedicated IoT unit, and the availability of its first product, an Intel-based IoT "gateway", using dual-core processors in a small and hardened form factor.  These devices can then be placed within any environment where IoT devices are creating data, and can act as an intelligent collection and filtering point.

Dell is actively partnering with companies that are in the IoT device space.  One such company is KMC Controls, which is looking to use Dell's IoT Gateway as a means of enabling it to continue to provide low-cost building monitoring and automation devices while using the centralised standardised data management and security of the IoT Gateway.

Dell's first IoT Gateway is a generic device coming in at under $500 that users can utilise in current projects or in an IoT proof of concept (PoC).  It can run many flavours of Linux or Microsoft's specialised Windows IoT natively, allowing IoT applications and functions to be layered on top of the box.  Dell has also teamed up with ThingWorx (a division of PTC) to help customers create and deploy IoT applications that will give them additional capabilities in achieving their business aims.

As time progresses, Dell will be bringing out more targeted IoT Gateways, with specific operating systems and specific code to deal with defined IoT scenarios.  This will help IoT device vendors and the channel to more easily position and sell their offerings.

Overall, this is a good move by Dell and points toward a maturation of thinking in the market.  Whether other vendors step up to the mark is yet to be seen.  However, it will be in everyone's - including Dell's - interests for a standardised hub and spoke IoT architecture to be adopted.  This will avoid the IoT getting a bad name as poor architectures bring networks to their knees, and will also accelerate the actual adoption of real, useful IoT.

Where Next for CRM?

Clive Longbottom

Remember when all the sales field force had their own contact management software - the likes of ACT!, Frontrange and so on? Remember the problems that meant for the business, as sales people left the company and took all the information with them, making handover impossible? Remember how these tried to evolve into sales force automation (SFA) packages, with sales people supposedly inputting data into a shared environment - but rarely doing so?

Remember how customer relationship management (CRM) systems began to allow the business to see the interactions between themselves and the customers, and how this led to SFA being embraced by CRM to get the upstream activities of sales people included in the details?

Remember how salesforce.com started off as an SFA vendor, but rapidly re-positioned itself as a CRM vendor?

Remember how difficult it has been to see the real return on investment from any of this? I bet you do.

The problem has been that the way that most SFA/CRM systems have been implemented has been as a simple system of record: an employee (whether it be a sales person or a contact centre agent) puts information into a system that can then be looked at by others. This is not helping the business in its main aim - which is to optimise the process of selling goods or services to new and existing customers at a profitable margin. Combined with this is that the system of engagement - the way that the user interacts with the system of record - has not been conducive to use. Sales people see it as getting in the way: they avoid it wherever they can, so minimising the value of the overall system.

This is how CRM now has to evolve. The sales person needs more of a framework to work within; one that provides actionable advice as they move along the sales journey in an intuitive and easy to use manner. They need a way of being able to see not just what has gone on before with a prospect or customer, but also need to fully understand whether they should continue with a possible sale - or cut loose and move on.

There are plenty of books available that deal with "Sales 101", showing how a simple sales pipeline works. However, these tend to be pretty simplistic, working against probabilities of deals being made based on the qualitative feeling of the sales person involved. A gung-ho salesperson may well have all of their prospects as 80% probabilities; a more pragmatic one may have theirs spread from 10-80%.

Given this, just how can the business then look at the realistic situation and make a sensible analysis on how well the sales force is operating and what the likelihood is of sales revenue through the next quarter or year?
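
One hedged sketch of what a more level playing field could look like: weight each deal by its claimed probability, then calibrate each salesperson's estimates against their historical win rate. All the figures are invented:

```python
# Sketch: expected pipeline value, with each salesperson's self-reported
# probabilities calibrated against their historical win rate. Figures invented.
pipeline = [
    {"rep": "gung-ho",   "value": 100_000, "claimed_prob": 0.8},
    {"rep": "gung-ho",   "value":  50_000, "claimed_prob": 0.8},
    {"rep": "pragmatic", "value":  80_000, "claimed_prob": 0.3},
]

# Historical ratio of actual wins to claimed probability, per rep.
calibration = {"gung-ho": 0.5, "pragmatic": 1.0}

def expected_value(deals: list[dict]) -> float:
    return sum(d["value"] * d["claimed_prob"] * calibration[d["rep"]]
               for d in deals)

print(f"Raw weighted pipeline:       {sum(d['value'] * d['claimed_prob'] for d in pipeline):,.0f}")
print(f"Calibrated expected revenue: {expected_value(pipeline):,.0f}")
```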

A solid system of record is still needed: salesforce.com has proven itself to be a suitable engine here. While salesforce.com has shown sales people the 'What to do' of Sales, what has been missing is the 'How to do' of Sales. The systems of engagement are not currently sufficiently intuitive or valuable to the different roles involved in the overall process to be used effectively, and this needs to be addressed. In the case of the sales person, he or she needs a system that they see as adding distinct value and that is so easy to use that they see it as better to use the system than not to. A system that, dependent on the sales person's input, provides contextual advice enabling better-informed decisions on the next steps to take makes the system work for everyone. And it needs to be a system that works with them - on the device that they choose, at a time and place that fits with their work pattern.

The individual is working against a more complete data set - not only the sales pipeline, but information from other parts of the business that are inputting information from existing customers; information that is coming in from other feeds, such as social media, business information systems (e.g. Dun and Bradstreet, Equifax, Experian). This data and insight can ensure that the sales force is working on a level playing field, normalising each person's approach so that the probability of a sale can be better judged - and can be compared across a mixed set of sales people.

For sales enablement, they get to pull together the two constituents of their world that seem to be fighting against each other - the individuals in the sales force fighting for their commissions and the rest of the business trying to optimise bottom line performance. The sales force get to work effectively; the data they create is being logged centrally in a manner where not only sales managers and sales directors can monitor and measure performance, but also where the business can look at what is happening. This is not so that they can wield the big stick, but so that they can work with sales to ensure that they are fully supported.
For example, if it is obvious that a campaign, product or service is not working, then marketing or product development needs to know this. The next generation of CRM has to enable this.

What is this next generation CRM, then? It is the bringing together of all the requisite data required by the business in order for an effective iterative product development-campaign management-sales cycle to be created. This requires better tools so that the sales force can become a peer provider and user of the data involved. These tools need to encapsulate advice that is pertinent and contextually aligned to the position the sales person is in.

What it shouldn't involve is users having to move from their existing salesforce.com system of record to a new system.

This blog first appeared on the SalesMethods site.






BIM + IoT + GIS = well, something interesting.

Clive Longbottom

I have just returned from EsriUK's user event in London.  Some 2,500 users, channel and Esri people were comparing ideas and showing what they were doing with Esri's GIS tools - and the place did seem to be buzzing.

To my mind, the world is moving towards a bigger need for spatially intelligent tools.  However, for many, such a need is still nascent - there is still little understanding of what a fully intelligent GIS tool can do within a business.  To many, Esri (and its main competitor, PBBI's MapInfo) are just mapping tools - maybe with greater detail than Google or Bing Maps with better overlays and so on, but nothing much more than that. That perception is present in those who have heard of Esri or MapInfo - but also too many organisations that could make use of the tools are still unaware of these companies.

So, what can GIS do for a business?  In the retail space, it can be used to optimise logistics; to decide where a new distribution warehouse or a new outlet should be positioned; or to decide when seasonal perishable goods should be delivered to different shops around the country to minimise waste and maximise sales.  In utilities, it can be used to log assets and to help in planning such things as where to dig trenches for laying pipes and cables, ensuring that existing underground items are not impacted.  In education, it can be used as an educational tool to help students learn more about the world around them.

All of these are common use cases and have been around for some time.  But GIS is also seeing other technologies come through that will play to its strengths.

Consider building information modelling (BIM) systems.  Here, vendors such as Arup, Autodesk and Bentley provide systems that are asset-focussed in dealing with how buildings are managed.  For example, a BIM system will contain details of all the items that make up a specific building: depending on the system, this could be at a high level (covering things like the chairs, tables and other fixed and non-fixed items placed within a building) through to highly granular systems that cover the components that went into constructing the building in the first place - the type of concrete, the position of lintels, the type of glass used and so on.

BIM systems have historically been pretty much self-contained.  Items that were moved from one building to another, broken and disposed of, or replaced as part of standard maintenance procedures have had to be manually input into the system.  The internet of things (IoT) may be in a good position to help change this, though.

Some time back, there was the thought that radio frequency identification (RFID) tags would be used to track assets, but a lack of true standardisation, along with cost, meant that except within certain areas this did not take off.  Now, as the IoT starts to take off, low-cost sensors and monitors that are visible directly through existing networks will become possible. IoT tags can be attached to pretty much anything and monitored through the right systems - already, disposable temperature sensors are being dropped into concrete to optimise how it is poured, and the military are looking at using IoT devices in battlefield situations in a massive way.  BIM will be an obvious use for these - but it will also require spatial context.  Where, in both a 2D and 3D world, do these assets actually exist?  In real time, can they be plotted in a meaningful way?

And the assets may not just be the inanimate objects of chairs, tables, computers, vending machines and so on.  People are just as important when looking at the security needs of today's highly dynamic environments.

Esri demonstrated an example of this at the event.  Replicating the Harry Potter "Marauder's Map", with the footprints of people at Hogwarts shown on a 2D map in real time, was a pretty nifty demonstration.  However, taking this to the next level and twisting the map to one side so that it became 3D showed the power of GIS.

Whereas the 2D map showed that two people were close together, the 3D map showed that they were several storeys apart in the building.
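
The difference is easy to show in numbers: two people a couple of metres apart on the floor plan can be a dozen metres apart once the vertical axis is included (the coordinates below are invented):

```python
# Two people who look close on a 2D floor plan can be storeys apart in 3D.
# Coordinates (metres) are invented; z is height above ground level.
from math import dist   # Python 3.8+: Euclidean distance between points

person_a = (12.0, 30.0, 0.0)    # ground floor
person_b = (13.5, 31.0, 12.0)   # fourth floor, almost the same spot in plan

print(f"2D separation: {dist(person_a[:2], person_b[:2]):.1f} m")   # ~1.8 m
print(f"3D separation: {dist(person_a, person_b):.1f} m")           # ~12.1 m
```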

Now let's take this a little further.  A personal example was when I had to take my wife to hospital to have a broken ankle seen to.  She was obviously not that mobile - yet the fracture clinic was some way from the entrances.  When we checked in at reception, we were told that we needed to go to the X-Ray clinic first - a distance away.  We got to the X-Ray clinic, and were sent back to get paperwork that the fracture clinic had neglected to give us.  We bounced backward and forward between various departments for a couple of hours.  Equipment that could have been optimised in its usage was left unused; time was wasted - the process was a bit of a mess.

It could have been so different through the smart use of technology. Track our phones using a GIS system and overlay this onto a BIM and then analyse the results to see where the process involved is broken.  Then, use a BIM project planning tool, such as Asta Powerproject BIM to run a project that optimises the whole process.

In discussion with EsriUK's managing director, Stuart Bonthrone, we explored how GIS is misunderstood.  Where GIS is really now aiming is as a data aggregator: combining its own data sets with data from other systems, such as BIM, and then acting as a full business intelligence analysis tool, ensuring that the visualisation of the results is carried out in a flexible and effective manner to suit the needs of as many users as possible.

It's a long way from mapping - but it is a potentially massive market; and a very exciting one.

A report from Splunk Live 2015: the real world use of machine data

Bob Tarzey

Given the chance to address customers, partners, staff and the media en masse, any company likes to lay out its vision. This was certainly true when Splunk's CEO Godfrey Sullivan spoke to an audience of almost 600 at Splunk Live in London on May 13th 2015. Vision is all well and good, but only if it chimes with the problems faced by customers and prospects. In a well-orchestrated event, there was plenty of evidence that Splunk's customers endorsed and benefited from initiatives being undertaken by the vendor.

 

In a nutshell, Splunk turns all the data churned out by computing infrastructure, applications and security systems into operational intelligence, aiding both IT and security management. The volume of this machine data has increased a lot as infrastructure has been extended to include cloud services, the number of users has increased as online applications are opened up to outsiders, the layers of security have increased and the internet of things has taken shape. Splunk says it has moved from the static review of machine data to dynamic big data analytics; i.e. more insight from more data, with the capability to respond in real time.
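
As a simple illustration of what turning machine data into operational intelligence means in practice - this is not Splunk's own query language, just a hedged sketch with an illustrative log format and alert threshold:

```python
# Sketch: derive a simple operational metric (error rate per host) from raw
# machine data. The log format and alert threshold are illustrative; a real
# platform would do this continuously and at far greater scale.
import re
from collections import Counter

LINE = re.compile(r"(?P<host>\S+) \S+ \S+ \[.*?\] \"[^\"]*\" (?P<status>\d{3})")

def error_rates(log_lines):
    total, errors = Counter(), Counter()
    for line in log_lines:
        m = LINE.match(line)
        if not m:
            continue                       # skip lines that don't parse
        total[m["host"]] += 1
        if m["status"].startswith("5"):
            errors[m["host"]] += 1
    return {h: errors[h] / total[h] for h in total}

sample = [
    'web01 - - [13/May/2015:10:00:01] "GET / HTTP/1.1" 200',
    'web01 - - [13/May/2015:10:00:02] "GET /basket HTTP/1.1" 500',
    'web02 - - [13/May/2015:10:00:03] "GET / HTTP/1.1" 200',
]
for host, rate in error_rates(sample).items():
    flag = "ALERT" if rate > 0.2 else "ok"
    print(f"{host}: {rate:.0%} errors ({flag})")
```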

 

As the landscape Splunk is collecting data from has changed, the tools it provides need to evolve too. Two initiatives help with this. First, along with many others in the industry, Splunk has moved to DevOps, enabling agile development and making new features available as soon as possible. This applies to its core Splunk Enterprise product and is native to the way the new Splunk Cloud service is delivered. Splunk has also extended its reach, with Splunk Light for smaller businesses and Hunk, which enables its tools to be used directly against Hadoop big data clusters. Second, it encourages customers to develop and share their own applications, testing and certifying the most popular ones for download from its app store.

 

So, what do Splunk's customers think? The vendor is not shy to talk about them; big European names came up in presentations again and again including John Lewis, Tesco, the NHS, Sky and VW, the last of these using Splunk to help manage its connected cars program, a true internet-of-things challenge. Three customers presented during the morning sessions, with more time being given to them than Splunk's own spokespeople. Their testaments underlined the reality of the vision outlined by the CEO.

 

First up was Paddy Power, a familiar name in the UK and, to many who gamble online, a service provider. It has all the IT challenges of a 21st-century online company: thousands of virtual machines, a mainly mobile customer base and huge spikes in demand - for example, up to 12,000 bets to process per minute during the recent Grand National steeplechase. Splunk helps address all sorts of worries about performance and security. Perhaps most interesting was Paddy Power's approach to development - agile DevOps - mirroring Splunk's own need to get new innovations to customers as soon as possible. The company's use of Splunk was initially in the area of security, but now it is providing business insight to senior execs via smartphones through analysis of machine data.

 

Next was Ticketmaster, another quintessential online operator, with thousands of virtual machines and over a quarter of a billion registered users. It experiences huge peaks in demand when tickets for popular events first become available; sales can top $1M a minute! Application failure is expensive and unacceptable. In Ticketmaster's own words "life was not pleasant before Splunk!" Initially Splunk was used for incident investigation, forensics, security/compliance reporting and monitoring known threats. In line with Splunk's own vision Ticketmaster has moved on to real time advanced threat and fraud detection and monitoring the insider threat.

 

Finally there was CERT-EU, not an end-user organisation but one providing security intelligence to a community of sixty-plus opt-in European Union institutions. Here, in partnership with a range of IT security vendors, including Splunk, CERT-EU monitors threats in real time across all its members and is therefore able to provide much broader protection than any individual organisation could do for itself. Whilst this includes crime detection, nation-state and terrorist activity are now an ever-present threat for government bodies and a target of CERT-EU's monitoring.


In 2014, Quocirca worked with Splunk to get a better idea of the extent to which EU-based businesses recognised the value of machine data and were able to collect and analyse it. The results were published in a free report, Masters of Machines. A new report, to be published in June 2015, will look at how similar businesses are using operational intelligence derived from machine data to manage IT complexity, improve the cross-channel customer experience (or omni-channel, as some call it) and tighten security. Some are as advanced as Paddy Power, Ticketmaster and CERT-EU, but the research shows that, for the majority, machine data is a free resource that they are yet to fully exploit.

The changing face of project management

Clive Longbottom | 1 Comment
| More

Historically, a project has been defined as a time-bounded set of tasks.  That is, a project has a start date, a desired set of deliverables and an end date, along with human and cash resources that need to be allocated to it to make it happen.

So easy to describe, so difficult to do.  A whole market has grown up around project management, from the likes of Microsoft Project through to high-end project portfolio management tools such as Artemis, Oracle Primavera or CA Clarity.  There are also more targeted packages, such as Deltek (aimed more at civil engineering) or AVEVA in the heavy engineering sector.

However, with continuous delivery of incremental improvements being required by organisations and more people across different skills and levels needing to be included in "projects", it is time for project management software to change.

The way companies work has changed, and this is forcing a redefinition of what is meant by a project.  In many cases, the definition given above no longer applies: a project may be an ongoing sequence of tasks with less of a defined end point.  Instead, there may well be a series of desired outcomes spaced out along an evolving timeline.  Project members will join and leave as the project goes along, in an increasingly ad-hoc manner.  The success of various tasks along the project timeline will define what the next steps are - and whether a project should continue or be brought to a halt.  Some projects will still work to deadlines - many will have review points that act as decision-making points, but the project itself may have no direct end point.

For example, consider something like sales enablement.  A company could create a whole set of disparate projects, overseen by someone viewing how the salesforce, product development, marketing, logistics and so on are working, trying to pull together different projects and approaches to create a desired end.  Or, an ongoing project could be put in place that leads to continuous reviews, where the success or otherwise of a campaign can be measured against how many items sales have sold, with a feedback loop in place so that product development can change what it provides to sales to meet the customers' needs.

High-end project management tools can't do this easily. Users generally need to understand the language of a project manager - Gantt charts, critical paths and so on.  They need to come out of the systems that they are used to working with and use specific project management tools.  All of this counts against broad use of project management - yet this may be about to change.

I have been talking with a couple of interesting project management vendors; both have some things in common, yet each has its own way of operating.

Firstly, Clarizen is looking to make project management far easier to use and understand.  It provides all the standard views and tools that dedicated project managers want - Gantt charts, resource management tools, overall portfolio management and so on - but also brings in social collaboration.  Users can choose to see projects in terms of tasks, Gantt charts, critical paths and so on if they so wish, or can see a simple timeline with progress markers showing how complete different tasks within the project are.  A full, audited record of everything to do with a project is maintained in a manner that makes it easy for users to see exactly what is happening - not just at a "task 50% complete" level, but also at a "this person has severe issues around this area for these reasons, and here is what others think" level.  Through the use of Twitter-like hashtags and "@" identifiers, Clarizen makes it easy for people to track activities and people throughout an organisation's work.

As such, it can replace those horrendous email trails that many people involved in projects have had to deal with: trails that are not integrated into the project management software itself, and so lead to issues when decisions are being made at project management meetings. Indeed, it can also include outsiders, who can be sent static or dynamic views of what is happening in a project with granular security controls.

The way that Clarizen is being used by some of its clients shows how it is enabling users to manage their day-to-day lives.  Many users are seeing Clarizen not as a project management tool, but as a life management tool.

The other company I have talked with is InLoox.  It also realises that collaboration is part and parcel of how an organisation operates these days, and that the best way to get people to use project management is to embed it within an environment with which they are already familiar.  Therefore, InLoox made the decision not to be a stand-alone project management tool, instead embedding itself into the tool that the vast majority of people are using on a day-to-day basis - email.  InLoox is an Outlook-native system - projects are created, managed and updated from within Outlook; notifications are made through email messages that take the recipient straight through to the InLoox project environment.  Again, InLoox retains a full record of social comments and interactions around a project - including all emails and attachments that are associated with the project.

By taking old-style project management and making it more user-centric and easy to use, Clarizen and InLoox stand a far better chance of democratising project management.  However, it will need either a change in perception from the prospective user base as to what project management means to them, or some heavy marketing from the vendors to create a new belief within prospective users that these solutions will change the way they work for the better.

The rise and rise of bad bots - part 2 - beyond web-scraping

Bob Tarzey | No Comments
| More

Anyone who listened to Aleks Krotoski's 5 short programmes on Radio 4 titled Codes that Changed the World will have been reminded that applications written in COBOL, despite dating from the late 1950s, remain in widespread use. Although organisations are reliant on these applications, they are often impossible to change as the original developers are long gone and the documentation is poor. With the advent of Windows and then web browsers, there was a need to re-present the output of old COBOL applications. This led to the birth of screen-scraping, the reading of output intended for dumb terminals and repurposing it for alternative user interfaces.

The concepts of screen-scraping have been reborn in the 21st Century as web-scraping. Web scrapers are bots that scan web sites for information, when necessary manipulating i/o to get what they need. This is not necessarily a bad activity; price comparison sites rely on the technique - an airline or hotel, for example, wants its pricing information shared in the hope that its services will appear on as many sites as possible. However, there are also less desirable applications of web-scraping, such as competitive intelligence. So, how do you tell good bots from bad?
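
To make the technique concrete, here is a minimal sketch of the sort of scraper a price comparison site might run; the URL and the price pattern are hypothetical, and this illustrates the general idea rather than any particular site's or vendor's implementation.

    # A minimal web-scraping sketch using the third-party 'requests' library.
    # The URL and the price pattern are purely illustrative assumptions.
    import re
    import requests

    URL = "https://flights.example.com/fares?route=LHR-DUB"  # hypothetical page

    response = requests.get(URL, headers={"User-Agent": "example-price-bot/1.0"})
    response.raise_for_status()

    # Pull out anything that looks like a sterling price, e.g. £123.45
    prices = [float(p) for p in re.findall(r"£(\d+(?:\.\d{2})?)", response.text)]

    if prices:
        print(f"Lowest fare found: £{min(prices):.2f}")
    else:
        print("No fares found on the page")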

This was the original business of Distil Networks. It developed technology that can be deployed as an on-premise appliance or invoked as a cloud service, enabling bots to be identified and policy to be defined about what they can or cannot do. So, if you sell airline tickets, it can recognise bots from approved price comparison sites, but block those that are from competitors or are simply unknown.

Distil does this by developing signatures that allow good bots to be whitelisted (i.e. allowed). It recognises bots in the first place by checking for the lack of a web browser (and therefore a real user) and challenging suspects with CAPTCHAs (Completely Automated Public Turing test to tell Computers and Humans Apart). It has plans to extend this to APIs (application programming interfaces) that are embedded in the native apps increasingly being used to access online resources from mobile devices.
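
Distil's signature matching is of course far more sophisticated than this, but the basic decision it automates - allow known good bots, challenge or block the rest - can be sketched in a few lines; the signatures and rules below are invented purely for illustration.

    # A deliberately naive sketch of allowlisting good bots. The signatures and
    # decision rules are invented for illustration - real products such as
    # Distil's use far richer signals than a user-agent string.
    GOOD_BOT_SIGNATURES = {
        "googlebot": "allow",            # search engine crawler
        "approved-price-bot": "allow",   # e.g. a sanctioned comparison site
    }

    def classify_request(user_agent: str, has_browser_fingerprint: bool) -> str:
        """Return 'allow', 'challenge' (e.g. with a CAPTCHA) or 'block'."""
        ua = user_agent.lower()
        for signature, action in GOOD_BOT_SIGNATURES.items():
            if signature in ua:
                return action
        if not has_browser_fingerprint:
            # No evidence of a real browser behind the request: treat it as a
            # suspected bot and challenge it before blocking outright.
            return "challenge"
        return "allow"

    print(classify_request("Mozilla/5.0 (compatible; Googlebot/2.1)", False))  # allow
    print(classify_request("python-requests/2.31", False))                     # challenge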

With the ability to recognise and block bots, Distil Networks has realised it can also block other unwanted attention being received by its customers. For example:

  • Brute force logins are perpetrated using bots; these can be identified and blocked, and if necessary challenged with a CAPTCHA (a rough sketch of this kind of rate-based detection follows after this list)
  • Man-in-the-middle (MITM) attacks, where a user's communication with a resource is interfered with, often involve bots; these too can be detected and blocked
  • Online ad fraud/click fraud relies on bots clicking many times, mimicking user interest and potentially costing advertisers dearly; such activity can be identified and blocked
  • Bot-based vulnerability scanners can be limited to authorised products and services, blocking others that are being used by hackers to find weaknesses in target systems, giving resource owners back the initiative in the race to patch or exploit
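
As a rough illustration of the first item above, brute-force detection often comes down to spotting too many failed logins from one source in a short window; the sketch below uses arbitrary, assumed thresholds and is not a description of Distil's actual approach.

    # A minimal sketch of rate-based brute-force detection: flag any source IP
    # with too many failed logins inside a sliding window. The threshold and
    # window size are arbitrary, assumed values.
    import time
    from collections import defaultdict, deque

    WINDOW_SECONDS = 60
    MAX_FAILURES = 10

    failures = defaultdict(deque)  # source IP -> timestamps of failed logins

    def record_failed_login(source_ip, now=None):
        """Record a failed login; return True if the IP should be challenged or blocked."""
        now = time.time() if now is None else now
        window = failures[source_ip]
        window.append(now)
        # Drop failures that have aged out of the sliding window
        while window and now - window[0] > WINDOW_SECONDS:
            window.popleft()
        return len(window) > MAX_FAILURES

    # Simulate a burst of failed logins from a single address
    for i in range(12):
        suspicious = record_failed_login("203.0.113.7", now=1000.0 + i)
    print(suspicious)  # True - a candidate for a CAPTCHA challenge or a block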

Distil charges by the volume of page requests, so, for example, if you were worried about ad fraud and a botnet was used to generate millions of clicks, then costs could spiral out of control. The answer to that is to use DDoS controls that can detect volume attacks (as discussed in part 1 of this blog post) in conjunction with Distil's bot detection and blocking capability.

Distil seems to be onto something. It has received $13M in VC funding so far, and has an impressive and growing list of customers. Unlike many security vendors, it seems happy to name its customers; perhaps just knowing such protection is in place will encourage the bad guys to move on? In the UK, these include EasyJet and Yell.com. Distil is set to make life harder for bad bots - as ever, there will surely be a fight back from the dark side.

The rise and rise of bad bots - part 1 - little DDoS

Bob Tarzey | No Comments
| More

Many will be familiar with the term bot, short for web robot. Bots are essential for the effective operation of the web: web-crawlers are a type of bot, automatically trawling sites looking for updates and making sure search engines know about new content. To this end, web site owners need to allow access to bots, but they can (and should) lay down rules. The standard here is to have a file associated with any web server, called robots.txt, that the owners of good bots should read and adhere to.
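
The robots.txt convention is simple enough that Python's standard library can read it directly; the sketch below shows how a well-behaved bot might check the rules before crawling, with the site, paths and user agent as placeholder assumptions.

    # How a well-behaved bot might honour robots.txt, using only Python's
    # standard library. The site URL, paths and user agent are placeholders.
    from urllib.robotparser import RobotFileParser

    robots = RobotFileParser("https://www.example.com/robots.txt")
    robots.read()  # fetch and parse the site's rules

    # Check whether this bot is allowed to crawl a couple of example paths
    for path in ("https://www.example.com/", "https://www.example.com/private/"):
        allowed = robots.can_fetch("example-crawler/1.0", path)
        print(path, "->", "allowed" if allowed else "disallowed")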

However, not all bots are good; bad bots can just ignore the rules! Most will also have heard of botnets - arrays of compromised users' devices and/or servers that have illicit background tasks running to send spam or generate high volumes of traffic that can bring web servers to their knees through DDoS (distributed denial of service) attacks. A Quocirca research report, Online Domain Maturity, published in 2014 and sponsored by Neustar (a provider of DDoS mitigation and web site protection/performance services), shows that the majority of organisations say they have either permanent or emergency DDoS protection in place, especially if they rely on websites to interact with consumers.

However, Neustar's own March 2015 EMEA DDoS Attacks and Protection Report shows that in many cases organisations are still relying on intrusion prevention systems (IPS) or firewalls rather than custom DDoS protection. The report, which is based on interviews with 250 IT managers, shows that 7-10% of organisations believe they are being attacked at least once a week. Other research suggests the situation may actually be much worse than this, but IT managers are simply not aware of it.

Corero (another DDoS protection vendor) shows in its Q4 2014 DDoS Trends and Analysis report, which uses actual data regarding observed attacks, that 73% of attacks last less than 5 minutes. Corero says these are specifically designed to be short-lived and go unnoticed. This is a fine-tuning of the so-called distraction attack.

Arbor (yet another DDoS protection vendor) finds distraction to be the motivation for about 19-20% of attacks in its 2014 Worldwide Infrastructure Security Report. However, as with Neustar, this is based on what IT managers know, not what they do not know. The low-level, sub-saturation DDoS attacks reported by Corero are designed to go unnoticed, but disrupt IPS and firewalls for just long enough to perpetrate a more insidious targeted attack before anything has been noticed. Typically, it takes an IT security team many minutes to observe and respond to a DDoS attack, especially if it is relying on an IPS. That might sound fast, but in network time it is eons; attackers can easily insert their actual attack during the short minutes of the distraction.

So there is plenty of reason to put DDoS protection in place (other vendors include Akamai/Prolexic, Radware and DOSarrest). However, that is not the end of the bot story. Cyber-criminals are increasingly using bots to perpetrate a whole series of other attacks. This story starts with another, sometimes legitimate and positive, activity of bots - web-scraping: the subject of a follow-on blog, The rise and rise of bad bots - part 2 - beyond web-scraping.

Smartwatch - wearable companion or independent app platform?

Rob Bamforth | No Comments
| More

Wearable technologies have started to pop up in a variety of form factors from a wide variety of sources, but the wrist has become a clear favourite as an acceptable and accessible location. It is familiar and convenient, and yet something worn there can be unobtrusively masked by a well-styled or tailored cuff.


Many wearable devices, however, have been less elegant and, perhaps understandably, a lot of early ones are quite clumsy. Some styles and functionality are a bit too 'cyberpunk' and have led to backlash and ridicule - face-worn devices such as Google Glass, for example. Some are plagued with laughably short battery lives (did anybody think to talk to the Swiss about self-winding watches?) and most are parasitically reliant on an external host to feed off for communications.


Largely, this means that many wearable devices are simply 'IoT on the wrist'. They can identify, track and monitor, and with a small screen or nearby smartphone can alert or inform the user. For many, this means simple but potentially useful applications tracking health and lifestyle or, for the overly socially connected, simply tracking, tagging and broadcasting 'life'.


Stepping beyond health and fitness means more application and device sophistication, which is the point at which Apple has decided to move further into the wearable market (small iPods did have clips, so arguably have been wearable for some time) with the much-heralded Apple Watch.


Apple's huge strength in the mobile sector with both iPhone and iPad has been driven specifically by its app store. Sure, the devices are beautifully styled and have cool cachet and appeal, but, as many would point out, they are more expensive than most and there's lots of innovation elsewhere. However, Apple followed others in understanding that the key to the success of any platform, as Microsoft, Sun and others have found in the past, is fuelling the virtuous circle of user adoption, developer engagement and app availability (and repeating).


Apple would surely like to replicate this iPhone/iPad success with its smartwatch, which already looks - still prior to mass release - like a very desirable, functional and app-friendly platform. It will no doubt do very well as a lifestyle accessory for the rich (with the high-end variants) and the aspirational (with the colourful Swatch-alike styles), but there is one element of mobile functionality even this device is missing - independence.


Connectivity to the wider network will still rely on the proximity, availability and battery life of another device, which may be a drawback in certain applications - especially, perhaps, those with unencumbered, unobtrusive and unfettered business in mind: field, logistics and healthcare workers needing hands-free access to IT, or shop-floor sales staff not wanting to hide behind screens.


This is one area where Samsung has taken an interesting direction with its Gear S smartwatch, which has its own SIM-card slot and therefore can have an independent mobile connection. The big upside of this cellular connectivity is that, unlike all other smartwatches, which are companion devices to mobile phones (although thankfully they can tell the time on their own), this is a fully capable wearable application platform with a connection that is no longer dependent on another device.


It might be based on an operating system considered unusual in some parts of the world (Tizen), but it does open up opportunities for developers to provide wearable applications of value - this could be very significant for apps for the enterprise, if not for consumers. Many workplace tasks require IT support, but the devices used to access it can over-encumber the user; an independent wearable device removes this headache.


Wearable devices are still in their infancy, but already a variety of interesting usage models are emerging. It may look a little futuristic, but applications can be deployed on wearable devices today and make a real difference to both an organisation and the wearer. Any business engaging in evaluating mobile IT needs to seriously consider how wearable devices could have a positive impact - it's no longer only phones that have to be smart.


Wearable devices - now a reality for the workplace

Rob Bamforth | No Comments
| More

There has been a lot of hype about wearable technologies, mostly focusing on smart glasses, watches and health wristbands for consumers, but it is likely that there will be far more compelling reasons to use wearable devices in the enterprise. Here, wearable devices, perhaps often in combination with sensors and other interactions from the internet of things (IoT), will provide greater context and granularity of information and control.


The overall reasons that should drive this are less to do with fad and fashion and much more oriented around efficiency and effectiveness for the organisation. That is not to say that employees will not gain some advantage. Safety, security and wellbeing are all potential benefits for wearers of smart devices, and these might be recognised as such by employees if presented correctly - or indeed if the employees' consumer experiences motivate them to buy in to the style and innovation.


At a recent event hosted by TBS Mobility and Samsung, several very interesting case studies emerged that demonstrated a combination of the innovation and clear business value from wearable device use. These could be effectively applied in many other sectors and organisations, but there are some challenges to address.


The primary reason for wearable devices is to gain access to IT resources without encumbering the user or getting in the way of the task in hand. So many other items of technology involve varying degrees of significant physical commitment - sitting down to use a desktop or laptop, two hands to use a tablet while standing, and even cradling a smartphone requires a hand and at least one eye or ear.


Something worn on the wrist, accessed by a glance, tap or spoken word, not only fits a Dick Tracy wish-list, it also frees up hands, is out of sight and allows the user to be 'footloose'. In challenging environments this means the user no longer has to be as concerned about losing the device, or even being seen to be carrying something of value that might be stolen.


Tasks that require both hands, looking customers in the eye rather than via the back of a screen or getting help without being noticed are all much simpler. The case studies on the day of the aforementioned event featuring wearable applications already in use - by snow clearance workers at Heathrow airport, shop floor staff in Dixons and lone workers in public sector housing - highlighted that this is not about glitz and glamour, but getting a job done easily and well.


Another related reason to wear is that these devices can be unobtrusive and operate quietly in the background. They can be used to identify, track and monitor automatically without intervention from the user. The information gathered can be analysed on an individual or aggregated level to provide insights into operational processes, or potentially more valuably, used in real time to effect improvements based on changing circumstances as they happen.


Both of these reasons to wear depend on something quite critical - the wearable device must always be worn, otherwise it cannot perform its expected function. While it may seem pretty obvious that smartphones and tablets need to be remembered and carried in order to be used, their actual use is intermittent rather than constant. While there may be an expectation of an alert or incoming message at any moment, most applications - e.g. email, messaging, browsing - are dipped into periodically and are not in constant use.


Applications designed to make use of wearable devices, on the other hand, expect to be worn and in use for the entire period of operation. The problem is that employees may forget or, more worryingly, may lack the incentive, or might even be hostile to the idea of being constantly 'plugged in' and potentially personally monitored. Or, in the case of recording technologies such as Google Glass, others may be hostile to being recorded.


This highlights that the benefits of using wearable devices need to be felt and understood by both employer and employee. For the organisation, the benefits should be easily quantifiable as efficiency, but for the individual the value might be indirectly related to the job - e.g. increased safety for lone workers, or eliminating boring or laborious tasks - or the whole concept of wearing devices could be offered by employers on the basis of incentives.


There has been some initial research conducted in this area, especially around promoting health and lifestyle benefits. Employees with devices might get reduced insurance premiums, offers of free gym membership or other incentives based on levels of activity. If that all sounds too Orwellian, some of the research found that employees would be quite happy to be bribed with offers of extra days' leave, a straightforward bonus or shorter working hours as payback for wearing a device that tracks performance and productivity.


Whatever the incentive - financial or personal wellbeing - the principle is the same: technology that doesn't get in the way, gathers insights and is devastatingly simple to use can and should bring value to both individual and organisation. There may be some fine-tuning of form factors, a shakeout of frivolous chaff from the worthwhile wheat, and some application development to do, but most organisations should no longer just be thinking 'mobile', but also 'wearable' when considering how employees, customers and partners interact with IT.


Delivery by drone or dead duck?

Rob Bamforth | No Comments
| More

Tiny autonomous delivery drones dropping off everything from your book orders to pizza? There have been trials in many parts of the world by serious names - Amazon, DHL, Domino's Pizza - but is this a serious post-cyberpunk instant-gratification-by-delivery dream or a dystopian nightmare?


It is easy to think of many 'opportunities' that need to be overcome first for this to 'take off', for example:

  • Noise - one drone is a buzzing irritant you could live with, especially if you've ever owned a radio-controlled aircraft, but dozens constantly flying back and forth - really, you'd be happy with that? OK, it might not be noticeable in busy city centres, but those octo-copter blades aren't exactly silent.
  • Safety - no problem, say the pro camp - there are failsafe options built in, add in a little regulation here and air traffic control there and it should be fine - and after all it will be keeping delivery drivers off the streets, so it should be even safer replacing vans and motorbikes with parcelcopters.


Then there's fog, wind, snow, rain, lightning, malfunction, range, battery life, shotguns, slingshots, birds, humans, traffic, aircraft, overhead power and phone lines, not to mention landing sites, a tiny carrying capacity and huge legal issues.


OK, we're a pretty innovative race and those clever folk in Silicon Valley will surely be able to fix all that stuff, won't they?


Maybe.


But looking at this in a different way, perhaps the problem is less to do with finding a different/faster/cheaper way of delivering things, and more to do with something much more radical - not delivering them at all.


Newspapers (still not dead, by the way) are rarely physically delivered to the door on a paper round anymore - they might be physically picked up, but now they are sold from far more outlets than they used to be, or they are delivered digitally. No need for a drone to replace the paper boy or girl: there are alternatives, and they are cheap and effective.


Many things have headed that way. Music and video? Digital download, not DVD, CD or tape and vinyl (much). Anything digitisable is easier to shift over the wire/fibre and, with 3D printing, that could apply to physical goods too, so maybe delivery won't be necessary at all? Well, there are many things that 3D printing can and does offer, but it's probably not a complete substitute for everything - fresh food, for example, as we've not quite achieved Star Trek replicators.


Far too often, really clever innovations appear yet get applied in an old-fashioned way. They end up trying to fix a problem that isn't there, isn't worth expending any sort of effort on, or will disappear when things turn out differently (as they inevitably do at some point). Many technologies fall into this trap, when really, if they were effectively employed and targeted where they add significant value, they could have great success and make a real difference rather than coming across as PR stunts.


For example, drones could be used to deliver consignments into dangerous locations without putting a delivery person at risk (some might say that's where wars are heading anyway); they could be used to rapidly deliver something vital to a critical situation - a defibrillator to a first responder at a casualty; or to deliver precious cargo - transplant organs, or medicine to an offshore rig, etc.


These are still on the edge of practical and commercial, let alone legal, considerations, but probably much more credible than pizza. Marketing messages have to avoid being too outrageous, otherwise they end up sounding like the boy who cried wolf and no one will believe the sensible ones anymore.


Goodbye "paperless", hello "clueless"

Rob Bamforth | No Comments
| More

Scott Adams' Dilbert cartoons are often too close to workplace reality for comfort. A favourite strip from his book "I'm not anti-business, I'm anti-idiot" (another sentiment many would readily recognise and endorse) has the pointy-haired boss having an emailed document faxed in case the recipient doesn't read his email, and the original copy posted to ensure he receives a 'clean' version.


The cartoon sums up so many business outbound communications problems. Was what was sent received? Was it in a readily useful format? Does the recipient believe it came from the purported sender? Can it be readily shared and worked upon? Which version is 'the master copy'? Was it changed or replaced after it was copied or sent? And finally, why is technology depleting rather than saving precious resources?


Information and data management should be such a simple thing for 'information technology' to deal with, however the weakness that IT often fails to overcome is in the 'wetware peripherals' (people).


Companies and people have preferences as to how information should be communicated. These are generally not 'whims', but relate to existing, and often difficult to change, business processes.


Even in small companies (small and mid-sized enterprises, or SMEs), which may on the face of it be more flexible, change is difficult. It costs time and money and is often a distraction, as there is little scope, capacity or appetite for making anything other than changes that are essential or forced by regulatory bodies. Many processes are cumbersome and require an audit trail, or at least some checks and balances, to ensure mistakes are not overlooked. This is especially important with commercial and financial matters.


So what can a business do to automate and improve its business communications?


Organisations may want to shift everything online or to electronic-only documents when it is convenient and offers a cost saving, but they will be unlikely to be able to force this onto all their business suppliers and customers (unless perhaps they are a high street bank...).


According to recent research conducted by Opinion Way on behalf of communications and logistics provider Neopost, the online-or-paper decision is still a relatively complex one. A survey of 280 UK SMEs discovered that communications media preferences varied, especially between documents used for different purposes.


Just to be sure they get there, invoices and contracts are likely to be sent both by email and by physical mail by half of UK SMEs, but three fifths of SMEs will send purchase orders (POs) only by email. The document most likely to be sent only by post is the pay slip, by two in five SMEs. Type and importance of document are cited as major reasons, along with the recipient's preference.


Much of what is contained in all of these document types is financial and potentially critical information, and yet it is often handled in a haphazard way. Unfortunately, the Dilbert pointy-haired boss with his ad hoc fax, email and post scenario is more common than many would like to think. 45% of those surveyed were concerned about the risk of human error, around a third had trouble keeping track of communications across channels with any given client, and a quarter thought there was a lack of visibility, traceability and security related to outbound documents.


These are all important concerns, but many organisations will try muddling through, hoping the costs of outbound communications mistakes do not cause problems or at least go unnoticed. Despite any environmental or green policies, few SMEs will worry about the overuse of a little bit of extra paper either.


However, the survey did give pause for thought in one area - almost half of SMEs thought they were wasting time on repetitive tasks where outgoing communications were concerned. If the reduction of risks and errors were not good enough reasons for adding a bit of automation to this communications process, surely doing it to save precious time should be?




Doing more with video conferencing

Rob Bamforth | No Comments
| More

While video conferencing is not a new technology, several factors appear to be driving forward the adoption of video as a communications tool, especially for personal use. Whether Skyping with distant relatives, using video in gameplay or creating recordings to share on YouTube, video has become much more broadly accepted and valued. So, just as in other areas of technology, is this consumer appetite for video translating into more widespread business usage? Not really - which begs the question: why?

First, any investment in technology must demonstrate clear benefits. With video conferencing there are generally two that are often quoted - saving travel costs and environmental benefits. The current economic climate means that the second of these is no longer the burning issue it once was, and the former is more difficult to measure than it seems at first glance. The problem is that while travel costs can be measured, the savings made because video conferencing was used instead are harder to quantify.

This can mean that the business incentive for increasing the use of video could be muted from a narrow financial perspective. However, take a wider look at the value proposition and the position changes.

Among existing users of video conferencing, both frequently quoted benefits are recognised, but only travel cost savings are really thought of as important. What is more interesting are the secondary benefits, which all revolve around more effective and efficient collaborative and individual working. These benefits in particular appear to be magnified as the adoption of video becomes more widespread across an organisation and as frequency of its use increases.

Once the financial imperative is understood, the things that are holding adoption back fall into two categories - technology and people.

Many organisations have taken a half-hearted or tentative approach to installing video conferencing systems: they have put them in the boardrooms, where only a select few people have access; they buy different, often incompatible, systems when they do broaden deployments to other areas; or they have been slow to roll out video to regular desktop, laptop and mobile devices. Generally, training comes after the event rather than getting everyone up to speed and familiar at the time of installation. In many cases there are disconnects between the needs of the business, those in IT running the video conferencing setups and the facilities groups that manage the rooms and the workplace.

From the individual's perspective, the technology is perceived as hard to use, unreliable and probably not worth trying without a fair amount of hand-holding. Many are already uncomfortable being 'on camera', something that's often not helped if the primary use of video conferencing is for mass meetings, where the screen enhances a feeling of formality and not even the friendliness of a handshake or a shared cup of coffee is on offer.

No wonder the ones who embrace video the most are those most comfortable with the technology and those in senior positions or with the greatest access. Digital natives, the young group most likely to be very comfortable with video, are much less likely to be embracing video in the workplace - why? It is probably simply because they are typically not in sufficiently senior positions to take advantage. Given the potential productivity benefits, this would seem to be a huge mistake.

Most of the technology challenges surrounding video have a direct impact on the people issues too. Past experiences colour current judgment, and many will have had bad experiences with video conferencing, which knock their confidence.

Despite these negative perceptions, current technology has moved on a great deal. Video and audio quality can be incredibly high, interoperability issues are largely contained, and it can be easy and reliable to set up video calls if the right investment decisions have been taken.

Video can be used on every device from a smartphone to big screens in the boardroom and the formality of call setup in advance should be a thing of the past, making it simpler to encourage more frequent usage and familiarity on all screens. This breadth and increased frequency of use is when the real benefits of productivity and collaboration really kick in, but of course requires funding.

Those with video already will often find a mix of systems, some of which will require upgrading to gain consistency across all their video platforms, and almost all will require further investment in software platforms to ensure that even the smallest mobile devices can be added into the video mix.

Consistency, widespread availability and encouragement to use video regularly will have a real impact on adoption. If this is mixed with induction education to build early familiarity, and further training on how to feel comfortable, relaxed and proficient on camera, both the people and the technology challenges will have been adequately addressed.

It might require some investment in a mix of hardware, some upgrades, software and services, but creating a democratic and pervasive communication culture that is comfortable with using video will pay off.

For some thoughts on how to finance the changes required to address an enterprise video conferencing strategy as a whole, download this free report.
