Before and during targeted attacks - a report from the 2016 Eskenzi PR IT Security Analyst and CISO Forum

Bob Tarzey

A recent Quocirca report, The trouble at your door, sponsored by Trend Micro, looked at the scale of targeted attacks faced by UK and European businesses and the before, during and after measures in place to mitigate such attacks. Trend Micro has plenty of its own products on offer, not least its recently upgraded Hybrid Cloud Security offering. However, last week, Quocirca got a chance to review ideas from some smaller vendors at the annual Eskenzi PR IT Security Analyst and CISO Forum. The 10 vendors that sponsored the forum were focussed mainly on before and during measures.

Targeted attacks often rely on IT infrastructure vulnerabilities. The best way to protect against these is to find and fix them before the attackers do. White Hat Security discussed the latest developments in its static (before deployment) and dynamic (post deployment) software scanning services and how its focus has extended from web-enabled applications to a significant emerging attack vector - mobile apps. This is backed by White Hat's global threat research capability, including a substantial security operations centre (SOC) in Belfast, UK.

Cigital is an IT services company also focussed on software code scanning, mainly using IBM's AppScan. It helps its customers improve the way they develop and deploy software in the first place; as Cigital puts it: "we can do it for you, do it with you or teach you to do it yourself". The company is based in Virginia but has an established UK presence and customer base.

Tripwire provides a broader vulnerability scanning capability looking for known problems across an organisation's IT infrastructure. In 2015 Tripwire was acquired by Belden, the US-based manufacturer of networking, connectivity and cabling products. Belden sees much opportunity in the Internet of Things (IoT) and Tripwire extends vulnerability scanning to the multitude of devices involved.

The continual need to interact with third parties online introduces new risk for most organisations; how can the security of the IT systems and practices of third parties be better evaluated? RiskRecon offers a service for assessing the online presence of third parties, for example looking at how up to date web site software and DNS infrastructure are; poor online practice may point to deeper internal problems. RiskRecon is considering extending its US-only operations to Europe.

UK-based MIRACL provides a commercial distribution of Milagro, a new open-source encryption project of which it is one of the major backers. Milagro is an alternative to public key encryption that relies on identity-based keys, split across a distributed trust authority so that only the identity owner can reassemble them. MIRACL believes IoT will be a key use case, as confidence in the identity of devices is one of the barriers that needs to be overcome.

Illumio provides a set of APIs for embedding security into workloads, thus ensuring security levels are maintained wherever the workload is deployed, for example when moved from in-house to public cloud infrastructure. This moves security away from the fractured IT perimeter into the application itself; for example, enabling deployments on the same virtualised infrastructure to be ring fenced from each other - in effect creating virtual internal firewalls.

FireEye was perhaps the best-known brand at the forum and one of four vendors focussed more on during measures. Its success in recent years has been built on mitigating threats at the network level using sandboxes that test files before they are opened in the user environment. This success has enabled it to expand to offer threat protection on a broad front, including user end-points, email and file stores.

Lastline also mitigates network threats by providing a series of probes that detect bad files and embedded links. Its main development centre is in Cambridge, UK. A key route to market for Lastline is a series of OEM agreements with other security vendors including SonicWALL, Sophos, Barracuda and Mimecast.

UK-based Mimecast was itself a sponsor at the forum. Its on-demand email management services have always had a strong security focus. It has been expanding fastest in the USA and this included a 2015 IPO on NASDAQ. Mimecast has also been focussing on new capabilities to detect highly targeted spear phishing and supporting the growing use amongst its customers of Microsoft Office 365 and Google Apps. 

Last but not least, Corero is a specialist in DDoS mitigation. In a mirror image of Mimecast, it is US-based but listed on the UK's Alternative Investment Market (AIM). Its appliances are mainly focussed on protecting large enterprises and service providers. Its latest technology initiative has been to move DDoS protection inline, enabling immediate detection and blocking of attacks, rather than sampling traffic out of line and diverting it, which means attacks are only blocked after they have started.

Quocirca's research underlines how attackers are getting more sophisticated. The Eskenzi forum provides a snapshot of how the IT security industry is innovating too. There were no vendors present specifically focussed on after measures - responding to successful attacks - even though having such response plans in place is paramount. That said, decreasing the likelihood of being breached with better before and during measures should reduce the need for clearing up after the event.

Your money or your data? Mitigating ransomware with Dropbox

Bob Tarzey

Windows 10 has an unnerving habit of throwing up a screen following certain updates that says "all your files are right where you left them". Quocirca has not been alone in thinking, on first seeing this, that it might be a ransomware message. Microsoft has said it is planning to change the alert following user complaints.

 

Real ransomware does just as the Windows message says; it leaves your files in place, but encrypts them, demanding a ransom (usually payable in anonymous bitcoins) for the decryption keys. Ransomware is usually distributed via dodgy email attachments or web links with cash demands that are low enough so that users who are caught out will see coughing-up as the easiest way forward. Consumers are particularly vulnerable, along with smaller business users who lack the protection of enterprise IT security. However, in the age of BYOD and remote working, users from larger organisations are not immune.

 

Ransomware is usually sent out en masse, randomly and many times. So, traditional signature-based anti-virus products become familiar with common versions and provide protection for those that use them. In response, criminals tweak ransomware to make it look new and avoid file-based signature detection. To counter this, anti-virus products from vendors such as Trend Micro (which has built in specific ransomware protection) detect modified ransomware by looking for suspicious behaviours, such as the sequential accessing of many files and key exchange mechanisms with the command and control servers used by would-be extorters.
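
To illustrate the principle of behavioural detection - a simplified sketch only, not how Trend Micro or any other vendor actually implements it - the following Python snippet uses the open-source watchdog library to monitor a folder and raise an alert when an unusually large number of distinct files is modified within a short window. The folder path and thresholds are placeholders.

    import time
    from collections import deque
    from watchdog.observers import Observer
    from watchdog.events import FileSystemEventHandler

    WINDOW_SECONDS = 10       # illustrative sliding window
    MAX_DISTINCT_FILES = 50   # illustrative threshold for "suspicious" activity

    class BurstDetector(FileSystemEventHandler):
        def __init__(self):
            self.events = deque()   # (timestamp, path) pairs seen recently

        def on_modified(self, event):
            if event.is_directory:
                return
            now = time.time()
            self.events.append((now, event.src_path))
            # Drop events that have fallen out of the sliding window
            while self.events and now - self.events[0][0] > WINDOW_SECONDS:
                self.events.popleft()
            distinct = {path for _, path in self.events}
            if len(distinct) > MAX_DISTINCT_FILES:
                print(f"ALERT: {len(distinct)} files modified in {WINDOW_SECONDS}s - possible ransomware")

    observer = Observer()
    observer.schedule(BurstDetector(), path="/home/user/Documents", recursive=True)
    observer.start()
    try:
        while True:
            time.sleep(1)
    finally:
        observer.stop()
        observer.join()

A real product combines many such signals (file entropy, known extensions, network behaviour) rather than a single crude counter, but the principle is the same.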

 

Avoiding infection in the first place is the best course of action. However, should the worst happen, there is of course another sure way to protect your data from ransomware, one that has been around since electronic storage was invented - data backup. Simple: if a device is encrypted by ransomware, clean it up or replace it and restore the data from backup. Data loss will be limited to changes made since the last recovery point, and if you are using a cloud storage service to continuously back up your data the loss should be minimal. Or should it?

 

The trouble with online cloud storage services is that they appear as just another drive on a given device; this makes them easy for an authorised user to access. Unfortunately, that is also true for the ransomware, which has to achieve authorised access before it can execute. So, following an infection, data will likely be encrypted both locally and on the cloud storage system, which just sees the encryption of each file as another user-driven update. So is this back to square one with all data lost? Not quite.

 

Whilst cloud storage services from vendors such as Dropbox and Google are not designed to mitigate the problem of ransomware per se, the fact that they provide versioning still enables recovery of files from a previous state. As a Dropbox user, Quocirca took a closer look at how its users could respond to a ransomware infection.

 

Dropbox is certainly diligent about keeping previous versions of files; by default, it goes back about a month, keeping hundreds of versions of regularly used files if necessary. The user, and therefore the ransomware, cannot see previous versions without issuing a specific request to the Dropbox server. Following an infection, every file will have an immediate previous version that is untouched by the ransomware. Good news: clean up or replace the device and restore your files. However, this may take some time!

 

With the standard Dropbox service each file has to be retrieved in turn. Dropbox does provide a service for customers who get hit by ransomware to retrieve entire directory trees, and its API provides file-level access and version history that programmers and other software applications can use to automate the process.
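
As a rough sketch of how such automation might look - not a production recovery tool - the Dropbox Python SDK exposes calls to list a file's revisions and restore an earlier one. The snippet below walks a folder and rolls each file back to the newest revision saved before a given infection time; the access token, folder and timestamp are placeholders.

    from datetime import datetime
    import dropbox

    dbx = dropbox.Dropbox("ACCESS_TOKEN")            # placeholder OAuth token
    infected_after = datetime(2016, 3, 1, 9, 0)      # when the infection is thought to have started

    def restore_file(path):
        # Ask the server for the file's revision history (invisible to the ransomware)
        revisions = dbx.files_list_revisions(path, limit=100).entries
        clean = [r for r in revisions if r.server_modified < infected_after]
        if clean:
            best = max(clean, key=lambda r: r.server_modified)   # newest pre-infection version
            dbx.files_restore(path, best.rev)
            print(f"Restored {path} to revision {best.rev}")

    def restore_folder(folder=""):
        result = dbx.files_list_folder(folder, recursive=True)
        while True:
            for entry in result.entries:
                if isinstance(entry, dropbox.files.FileMetadata):
                    restore_file(entry.path_lower)
            if not result.has_more:
                break
            result = dbx.files_list_folder_continue(result.cursor)

    restore_folder()

In practice rate limits, shared folders and very large accounts complicate matters; the point is simply that server-side version history makes automated recovery feasible.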

 

This is certainly a better position to be in than having no backup at all, and continuous copying to the cloud remains one of the surest ways of protecting data against user device loss, theft or failure. Ultimately Dropbox is protecting its customers from a potential ransomware infection, and anyone relying on a similar continuous cloud backup service should check how their provider operates.

 

It also underlines the benefit of having a secondary backup process, for example to a detachable disk drive. This would save having to contact a third party for help when all your files have been encrypted by ransomware. The bulk of a file system can quickly be copied back to a cleaned or new device and just the most recent files recovered from the cloud. However, if you do that, then remember to actually detach the drive; otherwise, just like your cloud storage, it will appear as just another device and ransomware will be able to set about its evil work on your secondary backup as well.

Integrating IoT into the business

Rob Bamforth

The Internet of Things is the latest tech sector 'must have', with some even claiming organisations need to have a CIoTO (Chief IoT Officer). However, this is unnecessary and completely misses the point. The IoT is not separate or entirely new (being in essence an open and scaled up evolution of SCADA - supervisory control and data acquisition), but it is something that has to be integrated into the wider set of business needs.


As an example of potential, take a look at something going on in the hospitality sector and how innovative technologies and IoT can be incorporated.


At one time booking hotels was a hit and miss affair conducted pretty much over the phone or via a visit to a third party travel agent. Now we have online direct booking, price comparison aggregators that search widely and the online brokering or 'sharing economy' with things like Airbnb.


Technology is starting to make an impact, but queuing for check-in at a desk is still more widespread than receptionists wandering around with tablets, or directing guests to self-serve on touch screen kiosks. In-room use of phones, internet access and TV (with varying amounts of paid-for movies) has become commonplace, but controlling the lighting and air-conditioning is still a case of hunting for the right switch or dial, clicking and hoping - unless you are staying in really expensive places.


'Things' appear to be changing and a small number of (still low budget) Premier Inn locations are now 'Hub Hotels'. These have kiosks for checking in and creating a room card key, and while it's still possible to interact with the room facilities through old-fashioned switches or the TV remote control, there is an app. It works on Android, Apple's iOS and yes, even on a smart watch. This app wirelessly connects guests to 'Things' in their rooms and allows them remote control and information access.


This might only be a first step, but it pulls together several aspects of innovative IT; wearable, mobile, IoT, touch interface, self-service and the inevitable cloud of services to back it up. However, rather than focussing in on any particular one of these technologies, the focus is quite rightly on customer experience - it is all about hospitality after all.


There is also the impact on running the business: efficiency, cost savings and therefore improved competitiveness in what is a crowded sector. It has an impact on staff too, and although efficiencies such as self-serve check-in give an opportunity to reduce staff numbers, they also allow progressive-thinking organisations to enable their staff to do more, and act more as welcoming hosts rather than performing an administration role.


It is interesting that a budget hotel would focus on improving service and competitive differentiation rather than just technology and cost cutting, but this is where the IoT can have a far greater impact than the futuristic scenarios often played out in the PowerPoint decks of IoT vendors.


Organisations looking to benefit from the IoT opportunity need to think a bit differently about what they are doing, and the answer is not to start with a CIoTO, but with more integrated business thinking which can encompass a range of technologies that gather more data, permit more remote control, and are managed centrally.


Some steps any organisation looking into IoT could take would then be:


  • Prioritise understanding of the core business processes that are resource intensive or time consuming.
  • Identify intelligence gaps - what real-time information is available on how those processes are faring, and how might they be better understood.
  • Find out where third parties (in particular customers) feel they do not have sufficient control or information.
  • Take a holistic approach to understand the whole range of internal systems that would be impacted by the two previous steps - more information, more remote control - and plan a strategy that encompasses this bigger picture and abstracts overall control.
  • Implement an element of the strategy to test out a) if customers like it, b) if the organisation can manage and cope with all the extra information generated, and ultimately c) if it makes a difference to the business.
  • Refine and repeat


Too much focus on exotic applications and devices, or a perceived need for specialist or segregated roles, undermines the reality of the benefits that might be available. The IoT, perhaps even more than many other technology advances, needs to be 'embedded' - not simply in devices, but in business processes, and that is where the effort needs to go first, not into the technology.


With its interesting line-up of speakers and exhibitors, those investigating IoT for business might want to head along to ExCel in April for Smart IoT London, which is open for registration here.


Email is not dying - and it can serve a very valuable purpose

Clive Longbottom

In too many situations, a discussion can end up as an argument about what was originally communicated between two parties.  You know the sort of thing: "I only bought this because you promised that"; "You never told me that when we discussed it"; "Go on, then - prove that you said that".

Many of the mis-selling cases that have cost financial institutions dearly have hinged on such a need to prove what was originally agreed.  In many cases, the lack of proof from the seller's side has been the problem - as long as the customer can claim that they were promised something, or not told about something, the law will tend to come down on their side.

Likewise, credit card 'friendly fraud' is an increasing issue.  With electronically provided goods (e.g. games, video, music), the buyer can claim that they never received the goods and claim back payment from the credit card issuer.  The seller is then out of pocket: the goods have actually been delivered, but not provably so, and the card issuer adds further charges for dealing with the case.

Organisations have tried various ways of dealing with such problems - all of which have been fraught with issues.  Anything that depends on the recipient of an electronic communication to take an action - for example, responding to an email, clicking on a button on a web site or whatever - has been shown to fail the legal test of standing up in court in the long run.

However, one very simple tool is still there, even with the media and other commentators having long predicted its demise.  Email still rules as the one standardised means of exchanging electronic information between two parties.  It makes no odds what email client the recipient uses or what system the sender uses; this strong standardisation, and email's record as a proven communications medium, makes it a great starting point for evidential communications.

However, when it comes to legal proof of communication, depending on delivery and read receipts no longer works - it is far too easy for users to disable these (indeed, many email clients disable them by default).  The actual content of an email cannot be counted on as being sacrosanct - it is just a container of text, and that text can be edited by the recipient or the sender. 

Where legal proof of the actual delivered content is required, a different approach is required.

By placing a proxy between the sender and recipient, an email can be 'captured'.  The proxy takes the form of an intermediary to which emails are sent automatically before being forwarded on to the original recipient.  The sender still addresses the email in the normal manner - the proxy can be a completely transparent step in the process.  The message can then be captured as immutable content through the creation of a final-form PDF file, timestamped and stored either by the sender or by the proxy.  The message continues on its way to the recipient, who will be unaware that this action has been taken.

If necessary, any responses can also go via the proxy, creating an evidentiary trail of what was communicated between the two parties. 
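
To make the mechanism concrete, here is a minimal sketch of such a relay in Python using the open-source aiosmtpd library. It is an illustration of the proxy idea only, not how eEvidence or any commercial service implements it: a real service would render a final-form PDF and apply a trusted third-party timestamp, whereas this simply archives the raw message with a SHA-256 digest before forwarding it. The upstream server name and ports are placeholders.

    import hashlib
    import os
    import smtplib
    import time
    from aiosmtpd.controller import Controller

    UPSTREAM_HOST, UPSTREAM_PORT = "smtp.example.com", 25   # placeholder outbound mail server

    class EvidenceRelay:
        async def handle_DATA(self, server, session, envelope):
            raw = envelope.content                        # the message exactly as submitted
            digest = hashlib.sha256(raw).hexdigest()      # fingerprint of the delivered content
            stamp = time.strftime("%Y%m%dT%H%M%SZ", time.gmtime())
            os.makedirs("evidence", exist_ok=True)
            # Store a timestamped copy before forwarding; a commercial service would create
            # a final-form PDF and apply a trusted timestamp at this point
            with open(f"evidence/{stamp}-{digest[:12]}.eml", "wb") as archive:
                archive.write(raw)
            # Forward the message, unchanged, to the original recipients
            with smtplib.SMTP(UPSTREAM_HOST, UPSTREAM_PORT) as upstream:
                upstream.sendmail(envelope.mail_from, envelope.rcpt_tos, raw)
            return "250 Message accepted for delivery"

    controller = Controller(EvidenceRelay(), hostname="0.0.0.0", port=1025)
    controller.start()          # senders point their outbound SMTP at this proxy
    input("Evidence relay running; press Enter to stop\n")
    controller.stop()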

Such records then meet the requirements of legal admissibility - if it ever came to the point where a complaint had to go to court, these records meet the general needs of any court world-wide.  However, the general idea is to avoid court wherever possible.

By being able to find, recover and provide a communication to a recipient along with proof that this is exactly what was sent and agreed to, most issues around proof of information communicated and delivered can be stopped quickly and cost effectively at an early stage.

Legal proof of communications is not the only value of such proxies.  Consider the movement of intellectual property across a value chain of suppliers, customers and other necessary third parties.  Details of something pre-patent have to be communicated to a third party.  By sending them via a proxy, an evidentiary copy is created and timestamped, ensuring that the sender's rights to the intellectual property are recorded and maintained.

The use cases are many - travel companies can show what was agreed when a travel package was booked; financial companies proving that terms and conditions were supplied to a customer and agreed by them; electronic goods retailers showing that a message was sent and delivered to a recipient on a certain day and that the recipient then did click and download the product sold to them.

Quocirca has just published a report on the subject, commissioned by eEvidence.  The report, "The myth of email as proof of communication", is available for free download here.

Repelling targeted attacks in the cloud

Bob Tarzey

In a previous blog post, 'The rise and rise of public cloud services', Quocirca pointed out that the crowds heading for Cloud Security Expo in April 2016 should be more enthusiastic than ever given the growing use of cloud-based services. The blog looked at the measures organisations can take to enable the use of cloud services whilst ensuring their data is reasonably protected; knowing what users are up to in the cloud rather than just saying no.

 

However, there is another side to the cloud coin. For many businesses, adopting cloud services will actually be a way of ensuring better protection of data, for example from the growing number of targeted cyber-attacks. A recent Quocirca research report, 'The trouble at your door', sponsored by Trend Micro, shows that the greatest concern about such attacks is that they will be used by cybercriminals to steal personal data.

 

The scale of the problem is certainly worrying. Of the 600 European businesses surveyed, 62% knew they had been targeted (many of the others were unsure) and for 42%, at least one recent attack had been successful.  One in five had lost data as a result; for one in ten it was a lot of data, or a devastating amount. One in six said a targeted attack had caused reputational damage to their business.

 

So how can cloud services reduce the risk? For a start, the greatest concern regarding how IT infrastructure might be attacked is the exploitation of software vulnerabilities. End-user organisations and cloud service providers alike face the flaws that inevitably arise from the software development process. The difference is that many businesses are poor at identifying and tardy in fixing such vulnerabilities, whilst for cloud service providers, their raison d'être means they must have rigorous processes in place for scanning and patching their infrastructure.

 

Second, when it comes to regulated data, cloud service providers make sure they are able to tick all the check boxes and more. After personal data, the data type of greatest concern is payment card data - a very specific type of personal data. Many cloud service providers will already have implemented the relevant controls for the PCI-DSS standard that must be adhered to when storing payment card data (or of course you could simply outsource collections to a cloud-based payment services provider). They will also adhere to other data security standards such as ISO27001. Cloud service providers cannot afford to claim adherence and then fall short.

 

If infrastructure security and regulatory compliance are not enough, think of the physical security that surrounds the cloud service providers' data centres. And of course, it goes beyond security to resilience and availability through backup power supplies and multiple data connections.

 

No organisation can fully outsource the responsibility for caring for its data, but most can do a lot to make sure it is better protected and for many a move to a cloud service provider will be a step in the right direction. Quocirca has often posed the question, "think of a data theft that has happened because an organisation was using cloud-based rather than on-premise infrastructure": no examples have been forthcoming. Sure, data has been stolen from cloud data stores and cloud deployed applications, but these are usually the fault of the customer, for example a compromised identity or faulty application software deployed on to a more robust cloud platform.

 

Targeted cyberattacks are not going to go away; in fact, all the evidence suggests they will continue to increase in number and sophistication. The good news is that cybercriminals will seek out the most vulnerable targets, and if your infrastructure proves too hard to penetrate they will move on to the next target. A cloud service provider may give your organisation the edge that ensures this is the case.







What can be done with that old data centre?

Clive Longbottom

Data centres used to be built with the knowledge that they could, with a degree of reworking, be used for 25 years or more.  Now, it is a brave person who would hazard a guess as to how long a brand new data centre would be fit for purpose without an extensive refit.

Why?  Power densities have been rapidly increasing, requiring different distribution models.  Cooling has changed from the standardised computer room air conditioning (CRAC) model to a range of approaches including free air, swamp, Kyoto Wheel and hot running, while also moving from full-volume cooling to highly targeted cooling of contained rows or racks.

The basic approach to IT has changed too - physical, one-application-per-box has been superseded by the more abstract virtualisation. This in turn is being superseded by private clouds, often interoperating with public infrastructure and platform as a service (I/PaaS) systems, which are also continuously challenged by software as a service (SaaS).

Even at the business level, the demands have changed.  The economic meltdown in 2008 led to most organisations realising that many of their business processes were far too static and slow to change.  Businesses are therefore placing more pressure on IT teams to ensure that the IT platform can respond to provide support for more flexible processes and, indeed, to provide what individual employees are now used to in their consumer world - continuous delivery of incremental functional improvements.

What does this mean for the data centre, then?  Quocirca believes that it would be a very brave or foolish (no - just a foolish) organisation that embarked on building itself a new general data centre now.

Organisations must start to prioritise their workloads and plan as to when and how these are renewed, replaced or relocated.  If a workload is to be renewed, is it better replaced with SaaS, or relocated onto I/PaaS?  If it is supporting the business to the right extent, would it be better placed in a private cloud in a colocation facility, or hived off to I/PaaS?

Each approach has its merits - and its problems.  What is clear is that the problem will continue to be a dynamic one, and that organisations must plan for continuous change.

Tools will be required to intelligently monitor workloads and move them and their data to the right part of the overall platform as necessary.  This 'necessary' may be defined by price, performance and/or availability - but it has to be automated as much as possible so as to provide the right levels of support to the business. 

Therefore, the tools chosen must be able to deal with future predictions - when is it likely that a workload will run out of resources; what will be the best way to avoid such issues; what impact could this have on users?
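
As a minimal sketch of the kind of forecasting such tools perform - assuming nothing more than a simple linear trend, whereas real capacity planning tools model seasonality, bursts and confidence intervals - the following Python function estimates how long a workload has before a resource pool is exhausted. The sample figures are illustrative only.

    def days_until_exhaustion(samples, capacity):
        """Estimate days until a resource hits capacity from (day, usage) samples,
        using a simple least-squares linear trend. Illustrative only."""
        n = len(samples)
        sx = sum(d for d, _ in samples)
        sy = sum(u for _, u in samples)
        sxx = sum(d * d for d, _ in samples)
        sxy = sum(d * u for d, u in samples)
        slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
        intercept = (sy - slope * sx) / n
        if slope <= 0:
            return None                      # usage flat or falling - no exhaustion forecast
        return (capacity - intercept) / slope - samples[-1][0]

    # e.g. daily storage usage in TB over the last week, against a 100 TB pool
    usage = [(1, 62.0), (2, 63.5), (3, 65.0), (4, 66.0), (5, 68.0), (6, 69.0), (7, 71.0)]
    print(round(days_until_exhaustion(usage, capacity=100), 1), "days of headroom left")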

These tools need to be able to move things rapidly and seamlessly - this will require the use of application containers and advanced data management systems.  End-to-end performance monitoring will also be key, along with root cause identification, as finger pointing between the different parties responsible for the extended platform has to be avoided at all costs.

If it becomes apparent that the data centre that you own is changing massively, what can you do with the facility?  Downsizing is an option - but can be costly.  A smaller data centre could leave you with space that could be repurposed for office or other business usage - but this only works if the conversion can be carried out effectively.  New walls will be required that run from real floor to real ceiling - otherwise you could end up trying to cool down office workers while trying to keep the IT equipment cool at the same time.

Overall security needs to be fully maintained - is converting a part of the data centre to general office space a physical security issue?  It may make sense to turn it into space for the IT department - or it may just not be economical.

A data centre facility is constructed to do one job: support the IT systems.  If it finds itself with a much smaller amount of IT to deal with, you could find that replacing UPS, auxiliary power and cooling systems is just too expensive.  In this case, colocation makes much better sense - which leaves you with the nuclear option - an empty data centre that needs repurposing.

Repurposing a data centre is probably a good business decision.  It could be cost-effectively converted into office space - unlike where only part of it is converted, a full conversion can avoid many of the pitfalls of trying to run a data centre and an office in the same facility.  If all else fails, that data centre is valuable real estate.  If the business cannot make direct use of it, a decommissioned data centre could be a suitable addition to the organisation's bottom line through selling it off. 

In April, DataCentreWorld will be held at the Excel Centre in London, where there will be much to discuss around the future of the data centre itself.  Registration for the event can be found here.

The 'software defined mainframe' - smoke and mirrors, or reality?

Clive Longbottom

It is a truth universally acknowledged that the mainframe's future stopped in 1981 when the PC was invented.  The trouble was that no-one told IBM, nor did they tell those who continued to buy mainframes or mainframe applications and continued to employ coders to keep putting things on the platform.


This 'dead' platform has continued to grow, and IBM has put a lot of money into continuing to modernise the platform through adding 'Specialty Engines' (zIIPs and zAAPs), as well as porting Linux onto it.

However, there are many reasons why users would want to move workloads from the mainframe onto an alternative platform.

For some, it is purely a matter of freeing up MIPS to enable the mainframe to better serve the needs of workloads that they would prefer to keep on the mainframe.  For others, it is to move the workload from what is seen as a high-cost platform to a cheaper, more commoditised one.  For others, it is a case of wanting to gain the easier horizontal scaling models of a distributed platform.

Whatever the reason, there have been problems in moving the workloads over.  Bought applications tend to be very platform specific.  The mainframe is the bastion of hand-built code, and quite a lot of this has been running on the platform for 10 years or more.  In many cases, the original code has been lost, or the code has been modified and no change logs exist.  In other cases, the code will have been recompiled to make the most out of new compiler engines - but only the old compiler logs have been kept. 

Porting code from the mainframe to an alternative platform is fraught with danger.  As an example, let's consider a highly regulated, mainframe-heavy vertical such as financial services.  Re-writing an application from COBOL to, say, C# is bad enough - but the amount of regression testing needed to ensure that the new application does exactly what the old one did makes the task uneconomical. 

What if the application could be taken and converted in a bit-for-bit approach, so that the existing business logic and the technical manner in which it runs could be captured and moved onto a new platform?

This is where a company just coming out of stealth mode is aiming to help.  LzLabs has developed a platform that can take an existing mainframe application and create a 'container' that can then be run in a distributed environment.  It still does exactly what the mainframe application did: it does not change the way the data is stored or accessed (EBCDIC data remains EBCDIC, for example).
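
To illustrate what 'EBCDIC stays EBCDIC' means in practice - a generic illustration using Python's standard cp037 (EBCDIC) codec, not a view into how LzLabs actually handles data - a byte-for-byte move preserves the record exactly; it only becomes gibberish if something along the way wrongly assumes ASCII:

    # A record encoded in EBCDIC (code page 037), as it might sit in a mainframe dataset
    record = "ACME LTD  0000123400GBP".encode("cp037")

    # A byte-for-byte move leaves the record untouched - no conversion, no re-interpretation
    moved = bytes(record)
    assert moved == record

    # Decoded with the wrong assumption (e.g. Latin-1), the same bytes are gibberish...
    print(moved.decode("latin-1"))
    # ...but read as the EBCDIC it always was, the business data is intact
    print(moved.decode("cp037"))            # ACME LTD  0000123400GBP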

It has a small proof-of-concept box that it can make available to those running mainframe apps, where they can see how it works and try out some of their own applications.  This box, based on an Intel NUC running an i7 CPU, is smaller than a hardback book, but can run workloads as if it were a reasonably sized mainframe.  It is not aimed at being a mainframe replacement itself, obviously, but it provides a great experimentation and demonstration platform.

On top of just being able to move the workload from the mainframe to a distributed platform, those who choose to engage with LzLabs will then gain a higher degree of future-proofing.  Not because the mainframe is dying (it isn't), but because they will be able to ride the wave of Moore's Law and the growth in distributed computing power, whereas the improvements in mainframe power have tended to be a bit more linear.

The overall approach, which LzLabs has chosen to call the 'software defined mainframe' (SDM) makes a great deal of sense.  For those who are looking to optimise their usage of the mainframe through to those looking to move off the mainframe completely, a chat with LzLabs could be well worth it.

Of course, life will not be easy for LzLabs.  It has to rapidly prove that its product not only manages to work, but that it works every time; that it is unbreakable; that its performance is at least as good as the platform users will be moving away from.  It will come up against the fierce loyalty of mainframe idealists.  It will come up, at some point, against the might of IBM itself.  It needs to be able to position its product and services in terms that both the business and IT can understand and appreciate.  It needs to find the right channels to work with to ensure that it can hit what is a relatively contained and known market at the right time with the right messages.

The devil is in the detail, but this SDM approach looks good.  It will not be a mainframe killer - but it will allow mainframe users to be more flexible in how they utilise the platform.

A managed services model for the collaborative workplace

Louella Fernandes

Embracing the digital workplace

An information management service (IMS) is emerging as a key approach to enable digital transformation. Providers of such services are building on their traditional print heritage and evolving their offerings beyond the traditional managed print service (MPS). IMS offers a comprehensive range of services and solutions to manage enterprise-wide information - both paper and digital - to drive improved productivity, lower costs and better employee engagement. 

Digital collaboration technology has revolutionised how we communicate and live our lives. Mobility has eliminated the boundaries of location and time. Now it is easy to communicate, collaborate and share information rapidly with others, no matter their location or time zone. As consumer technology permeates the workplace, employees expect to engage in new and higher levels of collaboration and information sharing at work. This is pushing organisations to accelerate digital transformation and gain better control and management of information throughout the organisation.  As such, more organisations are increasing investment in digital workplace tools - such as connectivity solutions, collaboration and information management - to build their digital future.  

The information management challenge

The digital workplace must enable effective information collaboration amongst employees, partners and customers. Information is the lifeblood of any organisation, whether it is email, documents, web sites, transactions or other forms of knowledge, and it continues to grow. Yet although many organisations are building an information-sharing workplace, all too often information resides in inaccessible silos and is not easy to access, share or collaborate on. In many cases, information still resides in paper format, which hampers productivity and makes effective collaboration and faster decision making more difficult. 

Ensuring that employees, customers and partners have access to information at the right time, in the right place and from the right device requires a holistic approach to information capture, distribution and output. This means the use of digital workflow tools - such as document capture, for instance through a multifunction device (MFD), video conferencing, digital documents, cloud sharing portals and multichannel communications. Certainly, those using such solutions are reaping the benefits. A forthcoming Quocirca study on digitisation trends reveals that those organisations that have already adopted digital workplace tools are reporting multiple business benefits. These include increased productivity, faster business processes, improved customer satisfaction, faster decision making and increased profit - far beyond those that have yet to invest in such tools.


A new approach to information management

An information management service (IMS) provides a compelling solution for businesses that want to use a single strategic partner for areas such as information capture, collaboration and output management, with all the benefits of a traditional managed print services (MPS) model. Typically it comprises the following three key elements in order to take a broad view of the information lifecycle across an organisation. 

  • Phase 1: Information capture. Information may be captured at the point of origin through multifunction printers (MFPs), business scanners or mobile devices. For instance, paper invoices or expense receipts can be scanned and routed directly to an accounts application through an MFP interface panel. 

  • Phase 2: Information management and collaboration. This may include solutions for cloud document management and sharing portals (such as Dropbox, Box, OneDrive or a privately hosted platform). This enables employees to access, share and collaborate on content, while maintaining high levels of control, privacy and protection. Most advanced MFPs offer this capability direct from the user panel.  Better collaboration can also be achieved through more effective visual tools - such as interactive meeting rooms, which employ interactive displays where information can be shared and annotated on the screen.

  • Phase 3: Information output. The need to create and deliver content that is personal, relevant and timely is paramount in today's fast-moving digital era. Through the use of advanced customer communications management solutions, content can be created and distributed by an organisation in a multitude of formats. This ensures customers receive personalised content through their channel of choice - be it mobile, email or even paper. Meanwhile digital signage is becoming more prevalent in businesses of all sizes as a lower cost and dynamic alternative to static printed communications. In addition to traditional functions like meeting room presentation, businesses are using digital displays for signage such as conference room identification and scheduling, dynamic branding and information display and communications.  

Conclusion

An integrated managed services approach enables organisations to take a broader approach to integrating and managing paper and digital information.  Quocirca research has revealed that those who have moved beyond MPS and implemented information workflow tools are most confident in their overall information management strategy. 

Organisations should capitalise on this significant opportunity to improve productivity, customer satisfaction and employee engagement. Harnessing the emerging digital tools for collaboration - such as cloud document capture and sharing, and interactive visual communications - is fundamental to the success of digital transformation. 


Read Quocirca's report on The Collaborative and Connected Workplace

Akamai takes on Distil Networks in bot control

Bob Tarzey

In April 2015, Quocirca wrote about the problem of bad bots (web robots) that are causing more and more trouble for IT security teams (The rise and rise of bad bots - part 2 - beyond web-scraping). Bad bots are used for all sorts of activities, including brute force login attempts, online ad fraud (creating false clicks), co-ordinating man-in-the-middle attacks, scanning for IT vulnerabilities that attackers can exploit and, clustered as botnets, perpetrating denial of service attacks.

 

Blocking such activity is desirable, but not if it also blocks good bots, such as web crawlers (that keep search engines up to date), web scrapers that populate price comparison and content aggregation web sites, and the bots of legitimate vulnerability scanning and pen testing services from vendors such as Qualys, Rapid7 and WhiteHat.

 

Distil Networks has emerged as a thought leader in the space, with appliances and web services to identify bot-like activity and add bots to black lists (blocked) or white lists (allowed). The service also recognises that whether a bot is good or bad may depend on the target organisation: some news sites may welcome aggregator bots, others may not, and policy can be set accordingly.
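
The basic policy idea can be sketched in a few lines of Python - a simplified illustration, not Distil's (or Akamai's) implementation, which relies on far richer behavioural and reputation signals. It combines site-specific allow/block lists with the forward-confirmed reverse DNS check Google documents for verifying that a visitor claiming to be Googlebot really is one; the agent lists are placeholders.

    import socket

    ALLOWED_AGENTS = ("Googlebot", "bingbot")        # good bots this site welcomes (illustrative)
    BLOCKED_AGENTS = ("scrapy", "python-requests")   # unwanted automation (illustrative)

    def is_genuine_googlebot(ip):
        """Forward-confirmed reverse DNS check for Google's crawler."""
        try:
            host = socket.gethostbyaddr(ip)[0]
        except socket.herror:
            return False
        if not host.endswith((".googlebot.com", ".google.com")):
            return False
        forward_ips = {info[4][0] for info in socket.getaddrinfo(host, None)}
        return ip in forward_ips

    def decide(user_agent, ip):
        if any(bad.lower() in user_agent.lower() for bad in BLOCKED_AGENTS):
            return "block"
        if "Googlebot" in user_agent:
            # Impostors routinely claim to be Googlebot, so verify before allowing
            return "allow" if is_genuine_googlebot(ip) else "block"
        if any(good in user_agent for good in ALLOWED_AGENTS):
            return "allow"
        return "challenge"   # unknown automation: rate-limit, CAPTCHA or score further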

 

As of February 2016 Distil has a formidable new competitor. The web content distribution and security vendor Akamai has released a new service called Bot Manager, which is linked to Akamai's Client Reputation Service (released in 2015) that helps to detect bots and assess their behaviour in real-time.

 

Akamai accepts that it aims to capitalise on the market opened up by Distil and others. Akamai will of course hope to make fast progress into the bot protection market through the loyalty of its large customer base, many of whom will see the benefit of adding closely linked bot protection to other Akamai services, including its Prolexic DDoS mitigation and Kona web site protection.

 

Akamai says it has already identified 1,300 good bots. It was also keen to point out that it believes it has taken responding to bots to a new level. This includes:

  • Silent denial, where a bot (and its owner) does not know it has been blocked

  • Serving alternate content (such as sending competitors false pricing information)

  • Limiting the activity of good bots to certain times to limit impact on performance for real users

  • Prioritising good bots for different partners

  • Slowing down aggressive bots (be they good or bad)

 

The control of Bot Manager and how it responds is down to individual customers, which can take action on different groups of bots based on either Akamai's or the customer's classification. They can take this to extremes; for example, if your organisation wanted to stop its content being searchable by Google you could block its web crawler.

 

Distil and Akamai do not have the market to themselves; other bot protection products and services include Shape Security's Botwall and the anti-web-scraping service ShieldSquare. Bot blocking capabilities are also built into Imperva's Incapsula DDoS services and F5's Application Security Manager. Bad bots and good bots alike are going to have to work harder and harder to get the access they need to carry out their work.

 

How's the innovation of your digital transformation going?

Clive Longbottom
I want to take a holistic look at a current paradigm shift, running a couple of ideas up your flag pole to see if you salute them.  I may be pushing the envelope, but I trust that moving you out of your comfort zone will be seen as empowering.

Hopefully, you cringed at least twice during that introduction. The world is full of meaningless clichés, and there are two more that seem to be permeating the IT world that I would like to consign to Room 101.

Firstly, in 2003, IBM carried out some research that showed that CEOs were heavily into 'innovation'.  The term was not defined to the survey respondents, and my belief was (and still is) that the use of the term is like asking someone "Do you believe in good?"  Who is going to say "No"?

To define innovation, we need to go all the way back to its Latin roots.  It comes from 'in novare' - to make new.  Innovation is all about making changes, finding a different way to do something that you are already doing.

So - how's it going in your organisation?  Over the past week, how many new ways of buying paperclips have you found?  How many new ways of writing code and patching systems; of purchasing applications and in paying your staff have you introduced?  

If you are eternally innovating, then your organisation will be in eternal change - a chaotic set up that is unsustainable and unsurvivable.

There are three ways that an organisation can change: improvement, innovation and invention. Improvement is doing what the organisation already does, but with less cost, less risk and more productivity.  Innovation introduces the means to do something in a different way (which may mean incremental extra costs while the new way of doing things beds in), and then there is the biggie - invention: bringing in a new product or service that the organisation has never offered before.

Balancing these three areas is key - for many organisations, a focus on improvement may have far greater payback than trying to be truly innovative.  For organisations in verticals such as pharmaceuticals, invention is far more important than improvement or innovation: the race to develop the 25 or so new molecular or chemical entities (NMEs/NCEs) cannot be won through just innovating existing processes.

Further, throwing technology at all of this is not the way to do it either.  And this brings me to my next term that needs deep investigation - 'digital transformation'.  Reading the technology media and much analyst output would have you believe that your organisation will die next week if it isn't going through some form of digital transformation.

However, what does this mean?  Replacing existing technical platforms with another - such as a move from client server to cloud?  Moving from on-premise applications to software as a service (SaaS)?  Sticking two fingers up at your boss as they ask you to implement a strategic digital transformation project?

As is often the case, this terminology is an attempt to place technology at the centre of the organisation. A determination not to let IT be relegated to a position of a facilitator to the business is self-serving and can actually be harmful to the business itself.

Back in the days when customer relationship management (CRM) and enterprise resource planning (ERP) were all the rage, the number of companies that I saw who pretty much stated that they had 'done' CRM because they had bought Siebel, or 'done' ERP because they had bought SAP, was frightening.

No strategic changes in business processes had been planned for or implemented - many of the companies just implemented the software and then changed their business processes to meet the way that the software worked.  Unsurprisingly, they then wondered why they struggled.

It is important to remember what technology is there for - it is there simply to enable an organisation to better carry out its business.  The secret to a successful organisation is not in the technology it chooses and implements - it is in how well it chooses and implements technology to support its changing needs; in how the business can flexibly modify or replace its processes to meet market forces and so be successful in what it does.  If it can do this with an abacus, baked bean cans and pieces of wet string, then so be it - you do not need to be seen as the uber-nerd who introduced a scale-out supercomputer with a multi-lambda optical global network.

So, fine - if you want to tick off two terms on your cliché bingo card, then make sure that you are innovative in your digital transformation.  Just make sure that you sit down with the business and understand what it needs in that balance of improvement, innovation and invention, and provide it with the IT platform that enables that to happen for as long as possible.

Just think - providing the organisation with a technical platform that actually supports it: one that is flexible enough to embrace the future and enable rapid change.  That would be truly innovative.







There's money in improving WAN connectivity to the likes of AWS, Azure and Google

Bernt Ostergaard

WAN capacity pains are a hot topic this year

The global WAN infrastructure capacity debate will be a feature of the upcoming Cloud Expo Europe event in April (www.cloudexpoeurope.com). And hopefully the debate will also explore the financial advantages to be gained by infrastructure providers from closer cooperation with cloud service providers.

This year's Mobile World Congress (MWC) also highlighted many of the network capacity challenges - mostly from the carrier perspective. This led Mark Zuckerberg to remark that he may need to use laser transmissions from light aircraft and some 10,000 old-school hotspots to open up access to Facebook in India. Both parties - infrastructure and content providers - agree that:

  • 2016 will see fast-growing take-up of cloud services and a shift from private cloud to hybrid managed and public cloud computing. Enterprises will shift massively from private-cloud-only deployments to hybrid private-public cloud solutions.

  • Fewer and bigger cloud SPs. The mega-players like AWS, Google and Azure, with global infrastructures, will take a bigger slice of the market.

  • We can expect 30% annual growth in Internet traffic volumes - more so on the mobile side

  • Commoditisation, standardisation and virtualisation spell a richer communication environment to accommodate the many varieties of cloud services.

'If it computes, it connects - and more so every day'

To keep abreast of cloud computing - where scalable and elastic IT-enabled capabilities are delivered "as a service" using Internet technologies - telco infrastructure providers must co-operate more and share resources with cloud service providers. They share a common interest in developing open, modular and expandable network architectures to keep up with customer demands for more bandwidth and ubiquitous network access.

However, the carrier infrastructure providers' litany of pains is well known:

  • The net neutrality regulations imposed on them invalidate attempts to charge specifically for additional bandwidth or higher quality connections across the Internet.

  • The huge growth in video communication across the Internet has strained their infrastructure everywhere, and the investments in last-mile broadband connections must rely on customers buying quad-service bundles.

  • The emergence of software-defined WAN routing, which provides channel bonding across fixed-line, cellular and Internet connections, reduces the demand for enterprise quality-of-service products such as MPLS.

A New Deal between carriers and cloud services

The way forward requires a rethink in the infrastructure provider community. Carriers must understand what the cloud providers are trying to deliver, and help them get there. The storage and computing revenues that many cloud providers rely on require secure and responsive WANs, and that means faster roll-out of high-speed infrastructure - not just on the heavy traffic routes, but also widespread, high-speed 3G and 4G LTE mobile access. Many of these global cloud SPs are willing to invest in the design and roll-out of extended WAN networks.

Improving core WAN performance

Improving Internet transport capabilities and security is coming up against the fundamental Internet Border Gateway Protocol (BGP), which is running out of steam. Fast-expanding network traffic volumes mean longer maintenance windows are required to update routing tables. Errors in this updating process may lead to route hijacking, when a hacker uses BGP to announce illegitimate routes directing users to bogus web pages. BGP was never designed with security in mind. BGP routing table overflow has caused sudden outages in services such as eBay, Facebook, LinkedIn and Comcast. Software Defined Networking (SDN) will certainly alleviate some of these issues when the SDN controllers can read and revalidate routing policies as fast as changes happen and then configure routers in real time.
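
The core of detecting a hijacked route is origin validation: does the AS announcing a prefix actually hold it? The Python sketch below is a toy version of that idea (the principle behind RPKI-style route monitoring), not a drop-in BGP security fix; the prefix-to-AS register and the AS numbers are hypothetical.

    import ipaddress

    # Hypothetical register of prefixes and the origin AS entitled to announce them
    EXPECTED_ORIGIN = {
        ipaddress.ip_network("203.0.113.0/24"): 64500,
        ipaddress.ip_network("198.51.100.0/22"): 64501,
    }

    def classify(announced_prefix, origin_asn):
        """Flag announcements whose origin AS does not match the registered holder,
        including more-specific prefixes carved out of a registered block."""
        prefix = ipaddress.ip_network(announced_prefix)
        for registered, asn in EXPECTED_ORIGIN.items():
            if prefix == registered or prefix.subnet_of(registered):
                return "valid" if origin_asn == asn else "possible hijack"
        return "unknown - not covered by the register"

    print(classify("203.0.113.0/24", 64500))   # valid
    print(classify("203.0.113.0/25", 64666))   # possible hijack (more-specific from the wrong AS)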

Monetising traffic analytics

Better real-time network management will not only improve the cloud experience; it may also improve infrastructure economics. The ability to use big data analytics to identify anomalies in BGP routing behaviour will also allow infrastructure operators to monetise the user traffic behaviour they log but seldom use. Thousands of analytics companies have sprung up around the globe analysing users' web site behaviour. However, the digital service providers are also very interested in understanding and adjusting their access network performance in real time. Faster SDN infrastructure build-outs will improve traffic flows and cloud performance. That could in turn become a win-win for carriers and the cloud providers.

Simplifying meeting room management - the AV/IT challenge

Rob Bamforth

Can't get the projector or video conferencing working? Cables, adaptors or remotes missing? It is a common problem and it might seem like these issues get in the way of progress in most meetings, but spare a thought for those having to manage this environment. Being the over-stretched technical expert called in just to find a cable or click a button for those unwilling or unable to follow an instruction sheet (which was probably tidied away by the previous night's cleaners...) can be a bit of a thankless task. 


Things were simpler a decade or two ago when audio-visual (AV) meant finding a spare bulb for the overhead projector or an acetate sheet for the printer - and it was straightforward to ask someone designated as office manager. Then along came laptops and cheaper, reliable projectors, and pretty much anyone who needed to could press a key to switch to an external display.


Today, both AV and IT have progressed enormously and, along the way, things have become much more complicated.


Low cost flat panel displays can be placed pretty much anywhere that workers might congregate, and almost anybody is now capable of having a device they would like to present from: no longer just laptops, but also tablets and smartphones. The options for cables and connectors have therefore proliferated - chances are adaptors will be forgotten, mislaid or lost in a vacuum cleaner somewhere.


Smart wireless connectivity would be a great answer, and suppliers of professional AV systems have come up with several options, but with too much variety, and purchasing decisions often made in facilities or workplace resources departments, the consistency and ease of manageability (especially remotely) are often missing.


Many rooms and AV systems already are, or will have to be, connected to the network. Screen sharing, remote participants, unified communications and video conferencing tools are becoming more widely deployed as organisations seek the holy grail of productive collaboration, and individuals are increasingly accepting of being on camera and sharing data with colleagues. However, again there are many options, and users quickly lose confidence after a bad experience.


Keeping meeting room technology under control and working effectively is increasingly a complex IT task, with a bit of asset management thrown in, but few organisations would be looking to put in more people just to support it, despite user frustrations from un-integrated or unusable expensive screens and equipment.


One answer might come from the approach Intel has taken with its Unite collaboration technology. The software can be incorporated by hardware companies into an AV 'hub' which allows simple and secured connection from either in-room or remote participants - wirelessly or over the network, so AV cables and adaptors should be a thing of the past.


Helping participants make meetings more productive is one thing, but because the hardware used in the Unite hub has to be based on Intel's vPro range of processors, remote management and security are built in from the start. Whilst this level of performance might seem like overkill for what is on the face of it an AV collaboration hub, it does open up some very interesting opportunities.


Unite is already capable of supporting sharing and collaboration in meetings, but Intel has made the platform flexible so that its functionality can be extended. This means further integration of IT and AV capabilities, offering a more unified communication and meeting room experience by combining with conferencing systems and incorporating room booking or other facilities management needs.


A powerful in-room hub also offers the opportunity to extend to incorporate practical applications of the Internet of Things (IoT). These could include managing in-room controls - lighting, environmental controls, blackout blinds etc. - but also tagging and tracking valuable assets like projection systems, screens and cameras or ones easily lost such as remote controls. This might be useful for security, but also could be used to check temperature, system health etc. for proactive maintenance monitoring.


Many organisations are adding ever more sophisticated AV technology to open 'huddle' spaces as well as conventional meeting rooms, and keeping on top of managing it all, with even fewer resources and more demanding users, is an increasing challenge that often falls to IT managers. They need something to integrate the diverse needs of AV, IT and facilities management and help address the problem. For more thoughts on how to make meeting rooms smarter and better connected, download this free to access Quocirca report.


Is the outlook cloudy for the IoT?

Bob Tarzey | No Comments
| More

In a recent Quocirca research report, The many guises of the IoT (sponsored by Neustar), 37% of the UK-based enterprises surveyed said the IoT was already having a major impact on their organisation and a further 45% expected there would be an impact soon. The remaining 18% were sceptical about the whole IoT thing.

The numbers reported in another 2015 Quocirca survey of UK enterprises regarding attitudes to public cloud services (From NO to KNOW, sponsored by Digital Guardian) were along similar lines: 32% were enthusiasts, 58% had various cloud initiatives and 10% said they were avoiding such services.

Whilst the two data sets cannot be correlated as they involved different sets of respondents, at the very least there must be a bit of overlap between the organisations that are enthusiastic about the IoT and those that feel the same about public cloud services. However, Quocirca expects there is a strong alignment between the two as organisations that seek to exploit the latest innovations tend to do so on a broad front.

That said, the teams involved within an individual organisation will be different. Those looking at the IoT, as Quocirca's research shows, will be looking to improve existing processes and introduce new ones for managing supply chains, controlling infrastructure and so on. Those looking at cloud will be seeking new ways to deliver IT to their organisation or, perhaps, responding to initiatives taken elsewhere in the business (i.e. managing shadow IT).

So, is there any overlap between these teams? Should the IoT team be heading to events like Cloud Expo Europe in April 2016 to seek inspiration? The answer is surely yes. To build IoT applications requires many of the things public cloud platforms can offer. The top concerns for those steaming ahead with IoT deployments, identified in Quocirca's research, are that networks will be overwhelmed by data and that they will be unable to analyse all the data collected. Both are scalability issues that can be addressed with public cloud platforms.

For any IoT application, there will be a need for the large scale and often long term storage of data and the need for, sometimes intermittent, processing power to analyse it. Cloud service providers can provide both the storage and flexible computing capacity to support this. Furthermore, more than a third of the organisations Quocirca surveyed already expect to roll IoT applications out on a national scale; using a cloud platform to process the data can also mean using the provider's secure wide area networks to transmit it.

It is not surprising, then, that most cloud service providers now have IoT offerings. These include Microsoft's Azure IoT Suite, which comes with preconfigured options for deploying IoT end points (sensors and so on) and gathering data from them. AWS IoT offers a similar capability to connect devices and securely collect and process data. The Google Cloud Platform provides "the tools to scale connections, gather and make sense of [IoT] data".
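
To make that concrete, the sketch below shows the kind of thing 'connecting and securely collecting data' involves in practice: an IoT end point pushing sensor readings to a cloud-hosted broker over MQTT, the lightweight publish/subscribe protocol these platforms commonly support. It is a minimal illustration only, written against the open source paho-mqtt 1.x client; the broker address, topic and credentials are hypothetical placeholders rather than any particular provider's configuration.

# Minimal sketch: an IoT end point publishing readings to a cloud-hosted
# MQTT broker. Assumes the paho-mqtt 1.x client API; the endpoint, topic
# and credentials below are illustrative placeholders only.
import json
import ssl
import time

import paho.mqtt.client as mqtt

BROKER_HOST = "iot.example-cloud.com"   # hypothetical broker endpoint
BROKER_PORT = 8883                      # TLS port commonly used for MQTT
TOPIC = "sensors/site-1/temperature"    # hypothetical topic

client = mqtt.Client(client_id="sensor-001")
client.username_pw_set("device-user", "device-password")  # placeholder credentials
client.tls_set(cert_reqs=ssl.CERT_REQUIRED)                # encrypt data in transit
client.connect(BROKER_HOST, BROKER_PORT)
client.loop_start()

def read_temperature():
    return 21.5  # stand-in for a real sensor read

try:
    while True:
        payload = json.dumps({"ts": time.time(), "celsius": read_temperature()})
        client.publish(TOPIC, payload, qos=1)  # at-least-once delivery
        time.sleep(60)                         # one reading per minute
finally:
    client.loop_stop()
    client.disconnect()

The point of the example is scale: once readings arrive at the provider, the same platform can fan them out to storage and analytics services without the enterprise having to provision any of that capacity itself.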

That is just what three of the biggest public cloud service providers are up to with the IoT. There will be many more offerings from other providers; of particular relevance may be providers in the UK with strong local networks, such as Virgin Media and BT, both of which have IoT initiatives. Who knows what else may be discovered by those with IoT ambitions at shows such as Cloud Expo Europe, where there will be ready access to innovative vendors and informative conference presentations. It is perhaps no coincidence that it is co-located with its sister show, Smart IoT London.

The challenge of transatlantic data security

Bob Tarzey | No Comments
| More

US companies that operate in the European Union (EU) need to understand what drives European organisations when it comes to data protection. This applies both to commercial organisations that want to trade in Europe and to IT suppliers that need to ensure the messaging around their products and services resonates with local concerns.

A recent Quocirca report, The trouble at your door; Targeted cyber-attacks in the UK and Europe (sponsored by Trend Micro), shows the scale of cybercrime in Europe. Of 600 organisations surveyed, 369 said they had definitely been the target of a cyber-attack during the previous 12 months. For 251 these attacks had been successful, 133 had had data stolen (or were unsure if it had been stolen), 54 said it was a significant amount of data and 94 reported serious reputational damage. The reality is almost certainly worse; many of the remainder were uncertain if they had been a victim or not. Cybercriminals are the top concern for European businesses, above hacktivists, industrial espionage and nation state attackers.

This shows that European businesses have plenty to worry about with regard to data security - even before the added complications of the seemingly ever-changing EU data protection laws. The new EU General Data Protection Regulation (GDPR) is looming and seems likely to come into force in early 2018. The good news for any business trading in Europe is that the GDPR provides a standard way of dealing with personal data in all EU states (the current Data Protection Directive only provides guidance, from which many EU states deviate). The bad news is the new stringencies that come with the regulation: fines of up to €20M or 4% of a non-compliant organisation's worldwide annual turnover (whichever is greater), requirements to report breaches 'without undue delay' and the 'right to erasure' (often referred to as the 'right to be forgotten').
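
As a purely arithmetical illustration of how that cap works (the regulation applies whichever of the two figures is the greater), the hypothetical calculation below shows the maximum exposure for two differently sized organisations.

# Hypothetical illustration of the GDPR fine cap described above:
# up to EUR 20M or 4% of worldwide annual turnover, whichever is greater.
def max_gdpr_fine(annual_turnover_eur: float) -> float:
    return max(20_000_000, 0.04 * annual_turnover_eur)

print(max_gdpr_fine(300_000_000))    # EUR 20M applies (4% would only be 12M)
print(max_gdpr_fine(2_000_000_000))  # 4% applies: EUR 80M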

Given the scale of crime and the pressure to protect customer privacy, it is not surprising that protecting customers' personal data is the highest priority in Europe, more so than payment card data (the processing of which can be outsourced) and intellectual property (which is less regulated). US businesses trading in Europe need to adapt their processes to take account of the new regulation and the changing Safe Harbour arrangements in place between the EU and USA following a successful 2015 court challenge to the status quo.

The attack vectors of greatest concern for European organisations are exploited software vulnerabilities and compromised user identities. Protection against these threats is reflected in the measures put in place to help prevent targeted cyber-attacks in the first place and to stop them once in progress. User identities can be protected by improved awareness around safe email and web use whilst infrastructure can be protected through software scanning and update regimes, all of which top the list of deployed security measures.

Addressing concerns about secure infrastructure should play well for US cloud service providers that get across the message that their platforms are more likely to be kept up to date, have vulnerabilities fixed at an early stage and generally will be better managed than is the case with much in-house infrastructure. The higher up the stack the cloud service goes, the better, so these benefits apply more to application level software-as-a-service (SaaS) than more basic infrastructure-as-a-service (IaaS). The caveat is that with new doubts about Safe Harbour, US providers really need to put in place European infrastructure to satisfy data protection concerns, a move many are now making.

All this said, European businesses know that sooner or later they will have to deal with a first, or for many another, successful breach of their systems and a potential data loss. So assistance with after measures will also go down well. Malware clean-up technology tops the list of deployed measures, but the need to identify compromised systems, data and users is also understood. Of course, all of these should be in place to assist with the execution of breach response plans, which should also include processes for informing compromised data subjects and data regulators, as well as plans for good media relations. Less than half of European businesses have such a plan in place, but there is a willingness to implement them, perhaps with some help and advice from those with the skills and services to offer.

The volume of trade between the US and EU is huge, especially when it comes to technology. Talks to establish the Transatlantic Trade and Investment Partnership (TTIP) should make it even easier for US companies to trade with those countries that remain in the EU (the UK may leave following an in/out vote later in 2016). TTIP will provide common trading rules on both sides of the North Atlantic, but it will not change the need for US companies to be savvy about local EU data protection concerns.

Connected to the mobile world, but not to the room?

Rob Bamforth | No Comments
| More

Mobile devices provide a mix of connectivity and compute power on the move for most employees, and with the huge upsurge in adoption of tablets and smartphones this personal power can be used anywhere, away from other colleagues. However, in most working environments it is still important to get together, share information, make decisions and assign actions - i.e. have meetings.


Meetings might be vital, but too much of a good thing is a problem, and meeting places in many organisations are in short supply, so meetings need to be effective and efficient.


The spread of digital visual technologies into meeting rooms has kicked out the old overhead projector and in some cases created some fantastically equipped conference room facilities with telepresence and high definition video. But the average meeting space has at best a flat screen monitor (or projector) with a connection for video input and an ethernet connection to the network.


With luck there might be instructions and adaptor cables, but with so many options - presenters expect to use their preferred tablet or smartphone as well as laptops - there will be problems. "Please talk amongst yourselves for a few minutes" is all too often blurted out at the beginning of a meeting or presentation, and across a group of waiting attendees, those few minutes cost dearly.


It should not be like this, and it ought to be possible to easily harness the technology that participants bring to the meeting to make it more engaging, collaborative and ultimately, effective.


First, wireless video connection. Pretty much all potential presentation devices, whether laptop, tablet or smartphone, will have wi-fi, so it makes sense to get rid of all the messing around trying to remember to bring or find cables and adaptors and use wireless instead.


Next, simple, secure sharing. One person presenting is fine at times, but participants need to be engaged in meetings and many will have content to share too. Seamlessly switching between presenters and even having several providing content at the same time would be valuable, so too would ensuring that participants have content made available to them after the meeting.


Finally, the outside world. Making an external connection to remote participants can add further delays to any meeting, often until outside help can be brought in. If remote attendees can connect and share content using the same secure mechanism as participants in the room, the experience is much more conducive to effective working and collaboration.


While these involve audio visual (AV) technology as well as IT, replacing existing expensive AV equipment just to get wireless connectivity or remote conferencing is an unwarranted expense, and not workable if solutions only work with particular AV equipment. Much of the problem is in the IT space - wireless, simple sign on and remote connectivity - and ultimately someone is going to have to make everything work and manage it all.


It will fall to IT to take the management lead, so in order to converge AV with IT, a universal 'hub' that could be applied in any meeting space would seem a much better bet. This is the approach Intel has offered with its Unite technology. Initially developed for internal use to help its own meetings go more smoothly, Intel has released the software for device manufacturers to build AV integration hub 'appliances'. Based on Intel's vPro CPUs, the appliances that have already appeared are sufficiently low cost to be added to an existing meeting space for the price of a small laptop.


While a powerful processor might seem overkill, it makes the box easy to manage as a regular device on the network and has the capability to go much further, such as adding video conferencing or tools to automate some of the remaining drudgery from meetings, perhaps by audio recording and transcribing for meeting minutes.


Meetings might not be the most enjoyable part of working life, but they perform a vital function in the daily flow of information. High end IT and AV solutions can deliver stunning solutions for some, such as telepresence, at a cost, but simple technology appliances that can remove some of the everyday pain from work should be applicable pretty much anywhere.


For more thoughts on how to use technology to make everyday meetings more effective, check out Quocirca's latest free to download report "Smarter, connected meeting spaces".

Blue Coat, from surf control to riding the security wave

Bob Tarzey | No Comments
| More
The IT security sector is good at producing start-ups that soon disappear, seemingly without trace. Of course, this is not a bad thing. Whilst it is true some wither on the venture capital vine, many others get acquired as the good idea they came up with (or imitated) is recognised as a necessity and then absorbed into the security mainstream through acquisition.

In this way whole genres of security products have more or less disappeared. For example, most spam-filtering, SSL VPN and data loss prevention (DLP) vendors have been absorbed by the security giants - Symantec, Intel Security (née McAfee) and Trend Micro - or by other large IT vendors that deal in security: HP, IBM, Microsoft and so on.

However, there is another path: for a small niche vendor to grow into a broad-play vendor itself. The aforementioned three security giants all came from an anti-virus background. Another vendor that is now challenging them started out addressing the need to make our online business lives fast, then safe, and has now joined the security mainstream - Blue Coat.

Blue Coat was founded in the mid-1990s as CacheFlow, its initial aim being to improve web performance. However, it soon saw the need to address web security as well and changed its name to the more generic Blue Coat as it added content filtering and instant messaging security to its portfolio. By 2003 it had 250 employees and turnover of around $150M.

13 years on, in 2016, Blue Coat has 1,600 employees. The company last reported revenues in 2010, of around $500M, before being bought by private equity firm Thoma Bravo in 2011 for $1.3B. Thoma Bravo has now sold Blue Coat on to Bain Capital for $2.4B. A nice mark-up, but one that reflects a lot of acquisition investment that has taken Blue Coat even further from its origins to become a vendor that can address broad enterprise security requirements.

The acquisition trail for Blue Coat started well before 2011. Entera, Packeteer and NetCache all enhanced its network performance capabilities, whilst Ositis (anti-virus), Cerberian (URL filtering) and Permeo (SSL VPN) added to the security portfolio.

The Thoma Bravo takeover saw the pace increase:
  • Crossbeam (firewall aggregation) gives Blue Coat a platform to build on both its enterprise and service provider customer base.
  • Netronome, which Blue Coat says is now its fastest growing product line, enables the inspection of encrypted web traffic, addressing the growing problem of hidden malware coming in and stolen data being egressed.
  • Solera is a full packet capture reporter; whatever we may think of service providers being required to store data long term, if regulators say they must, Blue Coat can now assist with the job. 
  • Norman Shark added malware analytics capabilities.

Since the Bain Capital takeover there have already been two more. First, Elastica, one of the early cloud access security brokers (CASBs). This gives Blue Coat the capability to monitor and audit the use of online applications, a growing need with the rise of shadow IT as lines of business and employees increasingly subscribe to on-demand services without direct reference to the IT function. The other is Perspecsys, for the encryption and tokenisation of cloud data.

Through all this Blue Coat has also been investing in supporting capabilities; this includes building a threat intelligence network and the integration of all the various acquisitions. Blue Coat's delivery has typically been via on-premise appliances, which still suit many of its customers, especially service providers. However, Elastica is a cloud-based service, capabilities from which will be integrated into its appliances, and Blue Coat says the future looks increasingly hybrid.

As a private company Blue Coat does not disclose revenues. However, with all the acquisitions and value added over the past five years, it must be close to joining the rare club of IT security vendors with revenues in excess of $1B. Blue Coat has become, and should remain, a major player in the IT security sector, providing it does not lose its way bringing all these acquisitions together into a set of coherent offerings.

The rapid morphing of the 'datacentre'

Clive Longbottom | No Comments
| More

On-premise. Co-location. Public cloud. These terms are being bandied about as if they are 'either/or', whereas the reality is that many organisations will end up with a mix of at least two of them.

This is why it is becoming important to plan for a hybrid datacentre - one where your organisation has different levels of ownership over different parts of its total data centre environment.  It is time to move away from the concept of the singularity of the 'datacentre' and look more at how this hybrid platform works together.

This makes decision making far more strategic.  If the organisation has reached a point where its existing data centre is up for review, it may make more sense to move to a co-location facility, which then enables greater flexibility in growth or shrinkage as the organisation's technology needs change - and indeed, as the technology itself continues to change.

However, although co-location takes away the issues of ownership of the facility and all that it brings with it (the need to manage power distribution, backup, auxiliary power, cooling and so on), it still leaves the ownership and management of the IT equipment within the co-location facility to the end-user organisation.

This can, in itself, be expensive.  Few resellers offer an OpEx model to acquire hardware (although capital agreements with the likes of BNP Paribas Leasing Solutions can turn a CapEx project into more of an OpEx-style one), so with co-location organisations still have to find the upfront money to purchase hardware.  Then there are the costs of licences and maintenance for all parts of the software stack - as well as the human costs for applying updates and patches to the software and for fixing any equipment that fails.  However, where an organisation believes that it can provide a better platform for its workloads than a third party platform provider can, colocation is a great way to gain better overall security, scalability, performance and flexibility.

If ownership of the stack offers no particular benefit, this is where public cloud comes in. Infrastructure as a service (IaaS) gets rid of the CapEx issues around hardware; platform as a service (PaaS) goes further, removing the costs of licensing and maintaining the operating system and certain other parts of the stack; software as a service (SaaS) gets rid of all these costs, rolling everything up into one subscription cost.

None of these choices is a silver bullet, however. For whatever reasons - whether solid and logically thought through, or more visceral - many organisations will end up with some workloads where they want more ownership of the platform, alongside other workloads where they just want as easy a way to deploy as possible.

Planning for a hybrid platform brings in the need to look well beyond the facilities themselves - how are high availability and business continuity going to be achieved? How is security going to be implemented and managed? How is data sovereignty impacted? How is the end-user experience to be optimised, monitored and maintained?

The world of the datacentre is going through a period of rapid change not seen since the hegemony of the mainframe was broken by the emergence of distributed computing.  Few can see beyond an event horizon of just the next couple of years - virtualisation and cloud are still showing how a datacentre needs to shrink; other changes in storage, networking and computing power may yet have further impacts in how future datacentres will have to change.

At this year's Data Centre World, attendees will be able to engage with vendors and customers involved in this journey.

The rise and rise of public cloud services

Bob Tarzey | No Comments
| More

There is no reason why the crowds arriving at Cloud Expo Europe in April 2016 should not be more enthusiastic than ever about the offerings on show. Back in 2013, a Quocirca research report, The Adoption of Cloud-based Services (sponsored by CA Inc.), looked at the attitude of European organisations to the use of public cloud services. There were two extremes; among the UK respondents, 17% were enthusiasts who could not get enough cloud whilst 23% were proactive avoiders. Between these end points were those who saw cloud as complementary to in-house IT or evaluated it on a case-by-case basis.

In 2015 Quocirca published another UK-focussed research project, From NO to KNOW: The Secure use of Cloud-Based Services, (sponsored by Digital Guardian) which asked the same question. How things have changed! The proportion of enthusiasts had risen to 32% whilst the avoiders had fallen to just 10%. Those using cloud services on a case-by-case basis had risen too, from 17% to 23%, whilst those who regarded them as supplementary fell from 43% to 35%.

The questionnaire used in the research did not use the terms avoiders and enthusiasts but more nuanced language. What we dubbed enthusiasts during the analysis had actually agreed with the statement "we make use of cloud-based services whenever we can, seeing such services as the future for much of our IT requirement". Avoiders on the other hand felt either "we avoid cloud-based services" or "we proactively block the use of all cloud-based services". So the research should be a reasonable barometer for changing attitudes rather than a vote on buzzwords.

The 2015 report went on to look at the benefits associated with positive attitudes to cloud, such as ease of interaction with outsiders, especially consumers, and the support for complex information supply chains. It also showed that confidence in the use of cloud-based services was underpinned by confidence in data security.

The research looked at the extent to which respondents' organisations had invested in 20 different security measures. Enthusiasts were more likely than average to have invested in all of them, with policy based access rights and next generation firewalls topping the list. However, avoiders were no laggards; they were more likely than average to have invested in certain technologies too, data loss prevention (DLP) topped their list followed by a range of end-point controls as they sought to lock down their users' cloud activity.

Supplementary users of cloud services took a laissez-faire approach; they were less likely than average to have invested in nearly all 20 security measures. Case-by-case users tended to have thought things through, and were more likely than average to have in place a range of end-point security measures and to be using secure proxies for cloud access.

What is the message here? The direction of travel is clear; there is increasing confidence in, and recognition of the benefits of, cloud-based services. However, for many it is one step at a time, with initial adoption limited to specific use cases. The security measures to enable all this, as the 2015 report's title points out, are not there to block the use of cloud services by saying NO, but to control them by being in a position to KNOW what is going on.

There are two underlying drivers that cannot be ignored by IT departments, whose role is to be a facilitator of the business rather than a constraint on it. First, many digital natives (those born around 1980 or later) are now entering business management roles, bringing positive attitudes to cloud services with them. Second, regardless of what IT departments think, lines of business are recognising the benefits of cloud services and seeking them out - so-called shadow IT, which is delivering all sorts of business benefits. Why would any organisation want to avoid that?

The Compliance Oriented Architecture - are we there yet?

Clive Longbottom | No Comments
| More
Over a decade ago, Quocirca looked at the means then being used to secure data and decided that there was something fundamentally wrong. The concept of relying solely on network edge protection, along with internal network and application defences, misses the point. It has always been the data that matters - in fact, not really even the data, but the information and intellectual property that the data represents.

To our minds, enterprise content management (ECM) has not lived up to expectations around information security: it dealt with only a very small subset of information; it was far too expensive; and it has not evolved to support modern collaboration mechanisms. It is also easy to circumvent its use, and far too easy for information assets to escape from within its sphere of control.

As the need for decentralised collaboration grew and cloud computing offered new ways of sharing information, the problem became more complex. It became harder to define the network edge as the value chain of contractors, consultants, suppliers, customers and prospects grew, and harder to ensure that the new silos of data and information being held in places such as Dropbox, Box and other cloud-based data stores were secure. In contrast to ECM, the problem with cloud-based information sharing systems has been in trying to stop individuals from using them: usage has grown and, in many cases, the organisation is oblivious to these new data stores.

Sure, these silos have evolved to provide greater levels of security - but they are self-contained, with any such security being based primarily around encrypting files at the application or email level, or managing documents/files as long as they remain within the secure cloud repository or local secure 'container' (the encapsulation of a file in a proprietary manner to apply security on that file) on the host. 

The problem with just using application- or email-based encryption is that if the passcode created by the user is not strong, it can be cracked. Keys also have to be distributed to each person that needs access to the data - and such sharing is difficult and insecure in itself. Each key created has to be managed by the owning organisation (even where key management tools are in place), which presents another problem when keys are lost and have to be recovered. Moreover, any data outside the central repository is out there forever - once received and unlocked, it can be forwarded by email or modified, leaving uncontrolled copies of itself all over the place.

The same goes for the use of containers to try to track and monitor how data is being dealt with. It is difficult, outside of a full digital/information rights management (DRM/IRM) platform, to track data across a full value chain of suppliers and customers - and it is expensive. Using containerised defences within a system still has drawbacks: the security only works across those using the same system or cloud container. Once a file leaves the container, the data is in the clear for anyone to do whatever they wish with (as described above).

To try and address the problem, Quocirca came up with an idea we called a compliance oriented architecture, or a COA. The idea was to provide security directly to data, such that it was secure no matter where it was within or outside of a value chain. At the time, the best we could come up with to create a COA was a mix of encryption, data leak prevention (DLP) and DRM. We accepted that this would be expensive - and reasonably easy for individuals to work around. 

Since then, we have seen many technical products that have gone some way towards information security, yet none, to our mind, has hit the spot of the COA.

Now, we wonder whether FinalCode has come up with the closest system yet. 

When Quocirca first spoke with FinalCode, although we liked the approach, we had worries over its interface and overall usability. We liked the technical approach - but felt that individuals may not have enough understanding of its value and operation to actually use it. With its latest release, FinalCode 5, Quocirca believes that the company has managed to come up with a system that offers the best COA approach to date. 

What does FinalCode do? It acts as a secure proxy between an information asset and the individual. Either directly through its own graphical interface or through the use of its application program interface (API), documents can be secured as close to source as possible - with policy being enforced by the OS and through the application being used (e.g. Microsoft Office, CAD applications, etc) in most cases. So the sender and recipients work in the application they are accustomed to. 

Once the document to be shared is put through FinalCode, the FinalCode system encrypts it with a one-time code, and manages keys as necessary. The information creator (or a corporate policy) applies rules around how the information can be used - and by whom. Joe may have read, edit and email forward capabilities; Jane may only have read. When the document reaches them, they first have to download a very small FinalCode client (a one-time activity). From there on, everything is automated - they do not have to know any keys, and they will be informed at every step what they can do. 

So, if Jane tries to forward on the document, she will be informed that she is not allowed to do this. If she tries to cut and paste any content from the document to another one, she will be prevented. 

It makes no odds where Jane is - she can be within the same organisation as the originator, a trusted partner in the value chain, or an accidental inclusion on an email list. All the actions that she can take are controlled by the file originator or a corporate policy. Should Jane have received the file by accident, she won't be able to do anything with it, as her name will not be in the list created by the originator to gain access to the content of the file. If a trusted person leaves the company they work for, the files they have access to can be remotely deleted by the originator. It also means that the document can be stored anywhere and distributed in any way - as FinalCode's capabilities are not container based, files can be used in whatever workflow suits the user or business; secured files can be output to a local disk, a network share or a cloud service - and all the restrictions and functionality are maintained.

Other functions include setting the number of times a document can be opened, including a visible or invisible watermark on documents and allowing recipients access to a file for a set time period only. 

This is all managed without FinalCode 'owning' the data at all. Although FinalCode operates as a cloud service, it really only operates as a key management and functional control mechanism. As far as it is concerned, all information is just ones and zeros; it never actually sees the data in the clear. Encryption is carried out at the originating client; decryption is carried out at the receiving client; and the receiving client obtains the usage permissions, which are maintained by the FinalCode server.
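
To make that general pattern clearer - a one-time key generated at the originating client, with keys and per-user permissions held by a central service that never sees the content - here is a minimal sketch using the open source Python cryptography library. It is emphatically not FinalCode's implementation; the class and function names are hypothetical, and in a real system the usage rights returned would be enforced by the receiving client application.

# Minimal sketch of the general pattern: encrypt each file with its own
# one-time key, store the key and the per-user permissions centrally, and
# release the key only to named recipients. Not FinalCode's implementation;
# all names here are hypothetical.
from cryptography.fernet import Fernet

class KeyService:
    """Stand-in for a central key and permission server."""
    def __init__(self):
        self._store = {}  # file_id -> (key, permissions)

    def register(self, file_id, key, permissions):
        self._store[file_id] = (key, permissions)

    def request_key(self, file_id, user):
        key, permissions = self._store[file_id]
        if user not in permissions:
            raise PermissionError(f"{user} has no access to {file_id}")
        return key, permissions[user]

def protect(plaintext: bytes, file_id: str, permissions: dict, service: KeyService) -> bytes:
    key = Fernet.generate_key()             # one-time key per file
    service.register(file_id, key, permissions)
    return Fernet(key).encrypt(plaintext)   # ciphertext can be stored or sent anywhere

def open_file(ciphertext: bytes, file_id: str, user: str, service: KeyService):
    key, rights = service.request_key(file_id, user)  # fails for unlisted users
    return Fernet(key).decrypt(ciphertext), rights

service = KeyService()
blob = protect(b"confidential design notes", "doc-42",
               {"joe": {"read", "edit", "forward"}, "jane": {"read"}}, service)
content, rights = open_file(blob, "doc-42", "jane", service)
print(rights)  # {'read'} - the receiving client would enforce these rights locally

Because the ciphertext itself carries no usable key, it can sit in Dropbox, on a network share or in an email attachment; access is granted or revoked centrally without touching the copies in circulation.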

With pricing being based on low-cost subscriptions, FinalCode is a system that can be rolled out pretty much to everyone within an organisation, providing this high level of a COA. There will be problems for FinalCode - there always are for vendors. It is, as yet, still not a well-known name. It also runs the risk of being confused with the likes of Dropbox and Box. However, with the right messaging, FinalCode can deal with the second problem (indeed, it should be able to work well alongside such cloud stores) - and as its usage grows, its name should spread organically. 

So, when the business asks from the back seat as to whether they are there yet in their seemingly endless journey to a COA, IT can now honestly respond with an "almost there, yes". (Note: since writing this article, another company, Vera, has come to Quocirca's attention that looks similar. We will be investigating...)

The great transition from constrained certainty to good-enough risk

Bernt Ostergaard | No Comments
| More

For much of the past 150 years of telecoms history, customers have had little influence on the evolution of the Wide Area Network (WAN). Corporate WAN strategies centred on adapting to what was available. There were certain bedrocks of stability and authority that you did not question - the only question was: how much of it could you afford?

The upside has been global availability; the downside has been high cost and sub-optimal utilisation. With the enormous expansion of available bandwidth and the shift from proprietary hardware and software to software-defined networking, this is about to change - giving enterprise users the chance to simultaneously lower telecom costs and expand available capacity.

The PTT monopolies

When it came to WAN service providers, choices emerged in the early 1980s. Until then the national telco infrastructure was built and operated by the national Post, Telegraph & Telephone (PTT) monopoly, and its pricing was not cost based but rather determined by how much revenue it, as a public service, wanted to charge. International services relied on bilateral service and tariff agreements between individual PTTs.

For business executives it meant predictability. Telecoms was a purely technical and engineering issue. Communications was a punitive cost item on the company books - like it or lump it!

The 1980s changed all that. The first challenge came from MCI Communications in the US, which had built a privately owned microwave link in competition with the incumbent. This led to the break-up of the American Telephone & Telegraph (AT&T) monopoly, agreed in 1982, the emergence of the regional Bell companies and the rapid decentralisation of telco infrastructure ownership.

This event forced European governments to gradually open their markets to competition from the mid-1980s, with the final competitive push coming from international mobile telephony in the early 1990s. With the emergence of the Global System for Mobile Communications (GSM) technology and standards, European governments decided that each country should have at least three contending mobile service providers - the leading one was invariably the national telco, the second was most often a neighbouring PTT, but the next ones were the real challengers: Vodafone, Orange and Virgin in the UK, Bouygues and Vivendi in France, Comcast, CenturyLink and Level 3 in the US, PCCW in Hong Kong and KDDI in Japan, to name a few front-runners.

How has choice changed business thinking?

Executives now weigh the business value of expensive, guaranteed-quality connectivity against cheaper and more flexible best-effort connectivity. With a 1,000-fold increase in available bandwidth, connectivity not only becomes very price competitive, it also shifts from being a cost item to being a possible revenue generator for the enterprise. The CTO is no longer in the dog house, responsible for 'major costs to the company balance sheet', but an entrepreneur able to create new opportunities for the lines of business.

As corporate data users, we have become more comfortable in the 2000s with good-enough connectivity, and less concerned by scare stories about the dire consequences of business comms disruptions, such as:
  • Banks going out of business if their systems go offline for a few hours or days
  • ERP systems being unable to work in less than five-nines uptime environments
  • Hackers threatening company survival when trade secrets and customer details are stolen
  • Professional communications being unable to rely on Skype or other consumer video conferencing services due to the lack of any quality of service (QoS) guarantees
These scares are all being disproved by everyday company resilience, and by user acceptance of minor service disruptions.

Voice telephony has taken us on the same technical journey: from clear, uninterrupted, echo-free analogue QoS telephony in the 1970s to mobile calls today that vary significantly in quality, with frequent degradations, interruptions and cut-offs.

There is simply more upside in today's digital communication scene! We have achieved mobility, lower cost, greater bandwidth, media diversity and much higher levels of video, voice and data integration. So our perception of risk has changed - we are less risk averse, having experienced the advantages of digital integration.

What do SDN and SD-WAN mean for telcos, vendors and the enterprise?

However, the infrastructure telcos are still here, and the telco equipment vendors still sell networking equipment with proprietary software that does not easily communicate with competitors' equipment.

This is what the shift to Software Defined Networking (SDN) aims to address, by defining network functions in software that can be installed on multi-purpose hardware. Essentially, by separating software from a specific hardware platform, it becomes much easier, quicker and cheaper for telcos and service providers to renew and improve their network infrastructure. Vendors that are slow to adapt risk becoming low-margin commodity vendors if they do not own the software or, worse, going out of business if commodity x86 hardware or cloud platforms can be used instead.

However, enterprise users must still subscribe to a wide range of distinct infrastructure services encompassing mobile and fixed-line voice, video and data, using a limited subset of equipment to maintain compatibility - an approach that does not allow them to pool all their network capacity into a single virtual connection.

This is what Software Defined Wide Area Networking (SD-WAN) aims to address, by concatenating all the network access channels and managing them as a single virtual channel for any kind of WAN traffic. SD-WAN components such as encryption, path control, overlay networks and subscription-based pricing are well known, but it is orchestrated delivery and unified management capabilities that are now driving the SD-WAN momentum.
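
As a rough illustration of the path-control element, the sketch below treats several underlying links as a single pool and steers each traffic class to the link that currently meets its quality policy at the lowest cost. The link names, metrics and thresholds are hypothetical; real SD-WAN products apply such policies continuously through orchestrated, centrally managed controllers rather than a script like this.

# Illustrative sketch of SD-WAN path control: several physical links are
# treated as one virtual channel, and each application class is steered to
# the link that satisfies its quality policy at the lowest cost.
# All link names, measurements and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class Link:
    name: str
    latency_ms: float
    loss_pct: float
    cost_per_gb: float

LINKS = [
    Link("mpls-primary", latency_ms=18, loss_pct=0.01, cost_per_gb=2.50),
    Link("broadband-vpn", latency_ms=35, loss_pct=0.30, cost_per_gb=0.10),
    Link("lte-backup", latency_ms=60, loss_pct=1.20, cost_per_gb=5.00),
]

# Per-application policy: maximum tolerable latency and packet loss.
POLICY = {
    "voip":   {"max_latency_ms": 30,  "max_loss_pct": 0.1},
    "erp":    {"max_latency_ms": 80,  "max_loss_pct": 0.5},
    "backup": {"max_latency_ms": 500, "max_loss_pct": 2.0},
}

def pick_link(app: str, links=LINKS) -> Link:
    policy = POLICY[app]
    # Keep only links that satisfy the application's quality constraints...
    candidates = [l for l in links
                  if l.latency_ms <= policy["max_latency_ms"]
                  and l.loss_pct <= policy["max_loss_pct"]]
    # ...then prefer the cheapest of those; fall back to the lowest-latency link.
    if candidates:
        return min(candidates, key=lambda l: l.cost_per_gb)
    return min(links, key=lambda l: l.latency_ms)

for app in POLICY:
    print(app, "->", pick_link(app).name)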
