Data loss prevention (DLP) systems, encryption, internet monitoring tools and other restrictive controls are failing to deliver total security, with a growing number of data breaches linked to insiders.
But how can organisations increase security without affecting productivity or encroaching on employees’ right to privacy?
The challenge is an important one to tackle, with insider-related fraud up 43% in 2012, according to the latest report from the UK’s fraud prevention service Cifas, and 14% of all data breaches linked to insiders, according to The Verizon 2013 Data Breach Investigations Report.
A more recent study by storage and information management firm Iron Mountain revealed that 8% of UK employees said that if they were treated badly by an employer, they would take revenge by stealing confidential or sensitive information.
Why is the insider threat increasing?
In recent years, companies with highly sensitive data have done a fairly good job of securing the network perimeter with firewalls and intrusion prevention systems, which has pushed attackers into looking for insiders to help them bypass these controls.
Accessing data within an organisation on a regular basis is much easier with the help of an insider, and security firms are reporting an increase in the number of data breach incidents that can be linked to an insider who has been coerced by the hackers into co-operating.
Why does this require a change in approach to security?
Banks have traditionally approached security by locking down systems and restricting access to the internet because data security is essential to their core business.
They also commonly deploy data leakage prevention (DLP) tools to restrict what types of data can be accessed and sent out of the organisation, data encryption tools and other restrictive controls.
But banks are realising that data is still leaking, and at the same time that locking down systems and restricting access is stifling innovation that is vital to creating competitive advantage.
Banks, as well as media companies and telcos, are leading the way in adopting a “trust, but verify” model of security to balance data protection with employee privacy, according to Mohan Koo, managing director of Dtex Systems.
Under this approach, employees are given the access and flexibility they need to be innovative and creative, while protective monitoring systems are deployed to verify that they are not engaging in any risky or malicious behaviour.
“This is enabling banking institutions in the UK to reverse out of their restrictive lockdown approach to security,” Koo told Computer Weekly.
Why are DLP tools and internet security tools failing?
The main reason for DLP tools failing to halt data loss is that they have not been configured properly for the specific requirements of the organisations that have deployed them, said Koo.
Often, suppliers of DLP tools recommend the same configuration settings to all customers, he said, even though organisations do not have the same risks.
“The only way to understand and to see where data loss is really taking place is to run an audit that looks at how users interact with all of the files across the network to identify where the DLP systems are working and where data is being transferred without DLP systems picking it up,” said Koo.
Protective monitoring also reveals that in many data breach cases, employees are using internet searches to find ways of bypassing the most common internet security tools used by their employers, and then sharing those techniques with their peers.
By identifying the gaps, organisations are able to reconfigure their DLP tools and internet security systems to make them more effective and keep up to date with new and emerging behaviour trends of their employees, he said.
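The gap audit Koo describes amounts to comparing everything that actually left the network against what the DLP system flagged. A minimal sketch, with purely illustrative file names:

```python
# Hypothetical audit data: all data transfers observed by protective
# monitoring versus the subset the DLP system actually flagged.
observed_transfers = {"report.xlsx", "clients.csv", "design.pdf", "notes.txt"}
dlp_flagged = {"report.xlsx", "design.pdf"}

# Transfers the DLP missed -- candidates for reconfiguration
gaps = observed_transfers - dlp_flagged
print(sorted(gaps))  # → ['clients.csv', 'notes.txt']
```

In practice the "observed" side would come from an agent-based audit of how users interact with files across the network, but the reconciliation step is essentially this set difference.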
What other sectors are changing their approach to security?
Telecommunications companies have been second only to financial institutions in their restrictive approach to security, but they too are looking for ways to improve data security while being less intrusive on employee access to internal systems and online resources.
Protective monitoring provides a way for telcos to remain in control of key customer retention data, as required by the Data Protection Act, for example, without being too restrictive and without infringing employees’ privacy.
Conversely, media organisations are turning to this approach to improve information security while preserving the freedom they have traditionally allowed their employees, because creativity and innovation are core to their business.
“Media organisations, which generally have few to no security controls in place, are beginning to recognise the importance of information security, especially after the high-profile attacks in May on Western media organisations by the Syrian Electronic Army,” said Koo.
Other sectors that are moving towards the “trust, but verify” approach to security using protective monitoring are energy and retail.
But how does protective monitoring avoid privacy concerns?
The key to approval of this approach by the Information Commissioner’s Office (ICO), the UK’s privacy watchdog, is that all data is anonymised.
The systems create event logs based on three classes of general high-level activity, and not on content. These classes of activity relate to internet browsing, applications and data handling.
When a series of events comes together to exceed a set severity level, an alert is raised and sent to the security incident management team or to a particular individual, depending on the nature and severity of the alert.
The systems can pick up risky behaviour in these areas, raising alerts for example when non-IT or security employees use hacking or vulnerability assessment tools or when IT or security employees are using such tools in suspicious ways.
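The alerting model described above can be sketched as follows. The event classes match the article; the severity scale, threshold value and pseudonymous user tokens are assumptions for illustration:

```python
from dataclasses import dataclass

# Hypothetical event model: protective monitoring logs anonymised events
# in three high-level activity classes, not content, each with a severity.
@dataclass
class Event:
    user_token: str      # pseudonymous identifier, not a real name
    activity_class: str  # "browsing", "application" or "data_handling"
    severity: int        # assumed scale: 1 (low) to 10 (high)

ALERT_THRESHOLD = 15     # assumed cumulative severity trigger

def check_alerts(events):
    """Sum severity per pseudonymous user; alert when a series of
    events together exceeds the set severity level."""
    totals = {}
    for e in events:
        totals[e.user_token] = totals.get(e.user_token, 0) + e.severity
    return [(token, score) for token, score in totals.items()
            if score >= ALERT_THRESHOLD]

events = [
    Event("u-42", "browsing", 6),       # e.g. searching for bypass techniques
    Event("u-42", "data_handling", 7),  # e.g. bulk copy to removable media
    Event("u-42", "application", 4),    # e.g. running a scanning tool
    Event("u-07", "browsing", 2),
]
print(check_alerts(events))  # → [('u-42', 17)]
```

Note that the alert carries only the pseudonymous token; linking it to a named employee is a separate, gated step.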
“It is surprising how often insiders responsible for malicious activity are members of the IT or security teams,” said Koo.
Protective monitoring systems also typically look at the creation, storage, copying, printing, transferring and renaming of files to ensure there is a full audit trail of any file on a system.
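A per-file audit trail of this kind can be sketched as an ordered log of lifecycle operations. This is a minimal illustration, not any vendor's actual data model:

```python
from datetime import datetime, timezone

# Minimal sketch of a per-file audit trail, assuming a monitoring agent
# reports lifecycle operations (create, copy, print, transfer, rename).
class FileAuditTrail:
    def __init__(self):
        self._trail = {}  # file id -> ordered list of (timestamp, op, detail)

    def record(self, file_id, operation, detail=""):
        stamp = datetime.now(timezone.utc).isoformat()
        self._trail.setdefault(file_id, []).append((stamp, operation, detail))

    def history(self, file_id):
        """Full ordered history of one file, for investigators."""
        return self._trail.get(file_id, [])

audit = FileAuditTrail()
audit.record("doc-123", "create")
audit.record("doc-123", "rename", "q3-forecast.xlsx -> notes.txt")
audit.record("doc-123", "copy", "to removable media")
for stamp, op, detail in audit.history("doc-123"):
    print(op, detail)
```

The point of the ordered log is that a suspicious final event, such as a copy to removable media, can be traced back through every earlier operation on the same file.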
It is only when risky behaviour is identified and data security is at risk that authorised administrators of the protective monitoring systems are able to link that behaviour to an individual as part of a formal investigation.
Whenever alerts are raised, the protective monitoring systems will generate a report that details everything that happened in the run-up to that alert being sent and what happens subsequently.
The ICO requires very strong authentication controls around who is allowed to access the data that can identify which employees are involved in risky or suspicious behaviour.
“But this is done only when a high-risk activity has been referred for a forensic investigation,” said Koo.
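One common way to implement this separation is keyed pseudonymisation: everyday monitoring logs carry only one-way tokens, and re-identification is gated behind explicit authorisation. A sketch under those assumptions (the key storage and authorisation check are illustrative, not a description of Dtex's product):

```python
import hashlib
import hmac

# Assumption: a secret key held only by authorised administrators,
# e.g. in a vault or HSM in a real deployment.
SECRET_KEY = b"held-by-authorised-admins-only"

def pseudonymise(employee_id: str) -> str:
    """One-way token used in everyday monitoring logs and alerts."""
    digest = hmac.new(SECRET_KEY, employee_id.encode(), hashlib.sha256)
    return digest.hexdigest()[:12]

class IdentityVault:
    """Token -> identity mapping, released only for formal investigations."""
    def __init__(self):
        self._mapping = {}

    def enrol(self, employee_id):
        self._mapping[pseudonymise(employee_id)] = employee_id

    def reidentify(self, token, investigator_authorised: bool):
        if not investigator_authorised:
            raise PermissionError("re-identification requires formal authorisation")
        return self._mapping[token]

vault = IdentityVault()
vault.enrol("emp-1001")
token = pseudonymise("emp-1001")
```

Day to day, analysts see only tokens; the vault lookup happens only once a high-risk activity has been referred for forensic investigation.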
Fraud and investigation teams then go to work under the supervision of a very senior representative of the organisation.
But protective monitoring is not only about attributing blame; it can also be used as a training tool to raise user awareness and change behaviour by warning users about potentially risky actions.
According to Dtex Systems, only about 5% of risky behaviour by insiders is intentional, which means the remaining 95% is without malicious intent and, in theory, could be eliminated through user education.