Until a few years ago, effective perimeter security was widely considered enough to protect the enterprise. Data resided in the organisation's own datacentres, third parties worked on-site, and the general belief was that the organisation would never be hacked. But things change – quickly.
Today, most organisations operate on the basis of “when” they will be hacked, rather than “if”. Processes operate in cloud and mobile applications, while the enterprise network is accessed 24/7 by multiple third parties, as well as employees and partners on mobile devices and their own computers.
Clearly, this calls for a change in approach. But although it is tempting to double down and ensure the confidentiality, availability and integrity of all data no matter what it is or where it is stored, processed or transmitted, this is neither practical nor desirable.
Instead, when determining how best to protect valuable enterprise data in an era when perimeters have become so porous that there is no way to adequately protect them, a risk-based response is critical.
This should focus on classifying the data itself based on its sensitivity. Significant investment in security countermeasures may not be necessary for relatively low-value data, such as standalone names, but component lists or personal employee details clearly require a more robust level of security.
The first step, therefore, for any organisation is to identify and document what data it has and where it exists to create an in-depth master “data dictionary”.
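As a rough sketch of what a single record in such a data dictionary might capture, the structure below shows one plausible shape; the field names and example values are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass

@dataclass
class DataDictionaryEntry:
    """One record in a master data dictionary (illustrative schema)."""
    name: str             # what the data set is, e.g. "employee_master"
    owner: str            # the accountable data owner
    location: str         # system or store where the data lives
    classification: str   # sensitivity level, e.g. "confidential"
    retention_years: int  # how long the data must be kept

# Example entry; all values are hypothetical
entry = DataDictionaryEntry(
    name="employee_master",
    owner="HR Director",
    location="hr-db.internal",
    classification="confidential",
    retention_years=7,
)
```

In practice the dictionary would live in a catalogue tool or database rather than code, but recording owner, location, classification and retention together is the essential point.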
Data identification and ownership
All data types need to be identified, from master data for processes (such as HR, finance and supplier relationships) to the templates and policies used to run the organisation. Ownership of this data should also be established to determine the accountability required to ensure that appropriate protective controls can be defined, for example when approval is needed to replicate it to another system, transfer it to a third party or incorporate it into written contracts.
One of the biggest challenges when identifying and protecting data is the tendency for employees to save files in arbitrary locations, such as file servers or SharePoint sites, for ease of use. This results in non-standard version control and multiple copies of data, often with no clear ownership because the creators have moved to different roles.
Identifying key information in these satellite stores and appointing owners takes time. Files then need to be purged, potentially by deploying tools that identify duplicate files and remove those already logged in the master data dictionary, with a focus on only retaining data that furthers critical business processes or is legally required.
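The duplicate-identification step typically works by comparing content hashes: files with identical contents produce identical digests regardless of name or location. A minimal sketch, assuming a local directory tree to scan:

```python
import hashlib
from collections import defaultdict
from pathlib import Path

def find_duplicates(root: str) -> dict[str, list[Path]]:
    """Group files under `root` by the SHA-256 hash of their contents,
    returning only the hashes that appear more than once."""
    by_hash: dict[str, list[Path]] = defaultdict(list)
    for path in Path(root).rglob("*"):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            by_hash[digest].append(path)
    return {h: paths for h, paths in by_hash.items() if len(paths) > 1}
```

Commercial discovery tools add scale, incremental scanning and reporting, but the underlying comparison is essentially this.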
Policies and guidelines
As data is discovered and documented, the organisation needs to work with interested parties, such as data owners and legal teams, to define how information is classified and handled, along with data retention policies.
Four levels of data sensitivity are generally considered sufficient: public, restricted (internal use only), confidential (limited internal use) and classified (need-to-know only). Any more than this creates overly complex tiers of security that are impractical to work with from a business perspective.
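Because the four levels form a strict ordering, they translate naturally into an ordered type that policy checks can compare against. The sketch below is one way to encode them; the helper function and its sharing rule are illustrative assumptions.

```python
from enum import IntEnum

class Sensitivity(IntEnum):
    """The four classification tiers, ordered from least to most sensitive."""
    PUBLIC = 0        # freely shareable
    RESTRICTED = 1    # internal use only
    CONFIDENTIAL = 2  # limited internal use
    CLASSIFIED = 3    # need-to-know only

def may_share_externally(level: Sensitivity) -> bool:
    """Hypothetical policy check: only public data leaves the organisation."""
    return level == Sensitivity.PUBLIC
```

Using an ordered enum means rules such as "encrypt everything at confidential or above" become simple comparisons rather than lists of special cases.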
Clear guidelines about what data is the most sensitive, how information should be shared internally and externally, and at what point it can be deleted, should be disseminated to employees and partners. This minimises the chances of information being shared or stored inappropriately (resulting in a data breach) or deleted when it is required for legal or regulatory purposes.
At the same time, data-handling guidelines and controls need to align with how the organisation operates. Tools and applications that store and transmit information need to be protected adequately, but straightforward to access. Making it more difficult for employees to do their jobs increases the likelihood that they will bypass controls altogether, returning to unstructured data stores and ultimately raising the risk of a breach.
Technology that supports the processes outlined above is continually being developed. Encryption protects sensitive or highly confidential data while it is stored. Data loss prevention technology should be used to track where data is being sent and stored, strong access controls ensure that only approved individuals can access systems and information, and audited data logs capture their activity.
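The pairing of access control with audit logging can be sketched in a few lines: every access decision is both enforced and recorded. The user-to-clearance mapping and logger names below are illustrative assumptions, not a real product's API.

```python
import logging

# Audit trail: in production this would feed a SIEM, not stdout
logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("data-access")

# Hypothetical mapping of users to the classification levels they may read
CLEARANCES = {
    "alice": {"public", "restricted", "confidential"},
    "bob": {"public"},
}

def can_access(user: str, classification: str) -> bool:
    """Check the user's clearance and record the attempt for later audit."""
    allowed = classification in CLEARANCES.get(user, set())
    audit_log.info("user=%s classification=%s allowed=%s",
                   user, classification, allowed)
    return allowed
```

The key design point is that the log entry is written whether access is granted or denied, so the audit trail captures attempted as well as successful access.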
Security vs operational efficiency
Having cleaned and classified its data, the organisation’s path to securing it will be more straightforward. But there is no binary solution – the right response will depend on many factors and will be individual to each enterprise.
While limiting access may provide an answer, overly restrictive policies can be seen as roadblocks – and lead to workarounds by employees who need to fulfil the objectives of their role. In finding a balance between security and operational efficiency, it is important to be as flexible and open as possible, reinforced with clear communication about why security is everyone's concern.