
Security Think Tank: Steps to a solid data privacy practice

Petra Wenham of the BCS shares her expertise on building, or rebuilding, a solid business data privacy practice in a post-Covid-19 world

Your company has survived the Covid-19 lockdowns; administrative staff and employees who can work from home have got used to it and, in the main, really do not want to return to the office. Alongside that, your company has seen an opportunity to reduce office space and so save money. Your IT staff did sterling work upgrading and rebuilding the IT infrastructure to better support remote working, which probably included adapting the company infrastructure to use more cloud-based resources.

The company is now approaching a new normal and, to better support this, it decided it needed new blood at board level to address internet-based marketing and sales, and any data privacy issues that might arise. Addressing these requirements meant hiring a couple of non-executive directors (NEDs) – one with expertise in exploiting the internet and social media for marketing, sales and product support purposes, and the other with experience in information assurance, information security, the General Data Protection Regulation (GDPR) and IT risk analysis.

The newly hired marketing NED started by initiating a review of what the company has done in the past – what went well and what didn’t. In parallel, an exercise began to assess how the company’s products measure up against the competition, how competitors market their products, and how they run their advertising campaigns. These reviews will, in all probability, lead to company-wide changes that will have an impact on the IT infrastructure.

The other NED, the infosec NED, started by asking: “Is there a complete and up-to-date inventory of all the data held or processed by the company?” The answers typically ranged from “no” to “it’s up to the individual departments”. There are very few companies or organisations whose leaders could put their hand on their heart and answer “yes” to that question.

The reason why that question was asked, and indeed asked first, is that if a company does not know the totality of what data it has, the value of the data and how it is used and stored, then it is very difficult, if not impossible, to secure and control that data effectively.

Preparing to develop or update a data inventory

Various data types will need to be identified, such as those pertaining to HR or finance, but in identifying these different types of data, be warned that an overly granular approach will make the data more difficult to control effectively over time, whereas an approach that is not granular enough will not address data privacy properly.

Each data type must have only one data owner, and the job of that data owner is to identify, by policy and procedure, who or what process can access their data and for what purpose (create, read only, read/write/copy, process, archive or delete). It is likely that in a large organisation, the data owner will devolve the day-to-day control of their data to specific, known others. In small and medium-sized enterprises (SMEs), that day-to-day control would likely remain with the data owners themselves.
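The data-owner model above – one owner per data type, granting named principals a defined set of access purposes – can be sketched in code. This is a minimal illustration, not a prescribed implementation; all names (`DataType`, `Access`, `permit`, the HR example) are hypothetical:

```python
from dataclasses import dataclass, field
from enum import Flag, auto

class Access(Flag):
    """The access purposes listed above that a data owner can grant."""
    CREATE = auto()
    READ = auto()
    WRITE = auto()
    COPY = auto()
    PROCESS = auto()
    ARCHIVE = auto()
    DELETE = auto()

@dataclass
class DataType:
    """A data type with exactly one owner and per-principal grants."""
    name: str
    owner: str
    grants: dict[str, Access] = field(default_factory=dict)

    def permit(self, principal: str, access: Access) -> None:
        """The owner (or their delegate) records what a principal may do."""
        self.grants[principal] = access

    def is_allowed(self, principal: str, access: Access) -> bool:
        """Check a requested purpose against the recorded grant."""
        return access in self.grants.get(principal, Access(0))

# Example: the HR data owner grants a payroll process read and process
# rights only -- it can neither copy nor delete HR data.
hr = DataType(name="HR", owner="head-of-hr")
hr.permit("payroll-service", Access.READ | Access.PROCESS)
```

The single-owner rule falls out of the structure: each `DataType` carries exactly one `owner` field, so accountability for every grant is unambiguous.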

The data inventory

Once the result of that data inventory question is known, it is highly likely that a full “drains up” review will be initiated, leading to the development of a new or revised data inventory, which should:

  • Be kept up to date.
  • Be properly identified by type (finance, sales, HR, etc).
  • Have an appropriate data owner for each data type.
  • Identify the value of the data type, such as public, company internal, company private, personal and personal sensitive, and so on.
  • Identify data archive-by and destroy-by dates.
  • Identify who or what process can access and use each data type.
  • Identify any access restrictions, such as internal-only access, time-of-day restrictions, whether two-factor authentication (2FA or MFA) is required, and so on.
  • Identify all locations where data of each data type is stored or held, alongside a method of identifying the version of the data. This must include where data has been downloaded to individual PCs, copied to CD/DVD disks or USB memory sticks, and archive storage.
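The fields listed above map naturally onto a per-data-type inventory record. The sketch below shows one possible shape, assuming hypothetical field names (`InventoryEntry`, `classification`, `locations` and so on) chosen for illustration:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class InventoryEntry:
    """One entry in the data inventory; field names are illustrative."""
    data_type: str          # finance, sales, HR, etc
    owner: str              # exactly one data owner per type
    classification: str     # public, company internal, personal sensitive...
    archive_by: date        # archive-by date for this data type
    destroy_by: date        # destroy-by date for this data type
    authorised_principals: list[str] = field(default_factory=list)
    access_restrictions: list[str] = field(default_factory=list)  # e.g. "internal-only", "2FA"
    locations: dict[str, str] = field(default_factory=dict)       # location -> data version held there

# Example entry -- includes the often-forgotten locations such as
# individual PCs, removable media and archive storage.
entry = InventoryEntry(
    data_type="HR",
    owner="head-of-hr",
    classification="personal sensitive",
    archive_by=date(2026, 4, 1),
    destroy_by=date(2033, 4, 1),
    authorised_principals=["payroll-service", "hr-team"],
    access_restrictions=["internal-only", "2FA"],
    locations={"hr-fileserver": "v3", "archive-store": "v2", "hr-manager-laptop": "v3"},
)
```

Recording a version against every location is what makes stray copies visible: any location still holding an old version, or a version at all after the destroy-by date, is a finding for the review.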

A revised data inventory is available – what next?

Once we have that up-to-date data inventory, how is it going to help address and optimise data privacy? In and of itself, the inventory is only one of the tools, but a very necessary one in developing a plan that leads to a more secure infrastructure.

An essential adjunct to the data inventory is the set of policies associated with user credential creation and ongoing maintenance. These policies must define when reviews are carried out to establish whether an account should still be able to access the data (and for what purpose) or whether it is a stale account, and how stale accounts are handled (deleted, or deactivated with a defined period between deactivation and deletion).

Other inputs to securing the infrastructure will include the future plans and strategic direction of the company and individual departments. These inputs, in conjunction with the data inventory, will allow the identification of the technical security requirements for each data type – for example, authentication and authorisation settings controlled via Active Directory (or equivalent), separate physical storage, or firewalled dedicated storage.

For example, data from different company divisions is likely to need to be segregated from other divisions’ data, and some data that is deemed sensitive or secret will need to be protected to a higher level than the rest.

Accessing data will require a user or process to be in a specific organisational unit and group and to have the appropriate authorisation level. Additional access restrictions could also be applied, such as time of day and whether 2FA is in use – for example, a user accessing data from a remote location might be given a restricted view of the data compared with in-office access, unless the access occurs during business hours and 2FA is used. These decisions would depend on a risk assessment of each data type against various IT architectures and the company’s overall risk appetite.
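The remote-versus-office example above amounts to a small decision rule combining origin, time of day and 2FA. A minimal sketch, assuming a 09:00–17:00 business-hours window and the hypothetical function name `access_view`:

```python
def access_view(origin: str, hour: int, used_2fa: bool) -> str:
    """
    Illustrative access decision following the risk model above:
    in-office users get the full view; remote users get a restricted
    view unless the request arrives in business hours AND with 2FA.
    """
    if origin == "office":
        return "full"
    in_business_hours = 9 <= hour < 17  # assumed 09:00-17:00 window
    if in_business_hours and used_2fa:
        return "full"
    return "restricted"
```

In practice, rules of this kind would live in the directory service or an access-management layer rather than application code, but the shape of the decision – origin, then conditions that relax the restriction – is the same.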

In infrastructure terms, general departmental data could typically be segregated by limiting access by organisational unit and/or group settings in Active Directory (AD or equivalent), although in some cases departmental data might need to be held in physically separate data stores. What can be done to any data can be controlled by authorisation role settings in AD.

Where sensitive and secret data is concerned, access control will be subject to these same settings, but additionally access would be limited to specifically authorised people or groups of people, and potentially to specific IP addresses. It is also likely to require the data to be segregated from other data by physical means. The reason is that the storage medium, at end-of-life or in failure mode, must be destroyed to a higher level than storage media used for non-sensitive data.

Key takeaways

  • You need to know where data is being stored and used, because if you do not know, you cannot control it.
  • The data owner is key in identifying and controlling who or what process can access and use the data.
  • Understanding the value of data and understanding how different security techniques can protect data is key to developing a risk assessment and, ultimately, the chosen security architecture.
  • User and process access controls must be based on a strict “need to know” basis. Just because a person is a senior manager does not mean they need access to every file or data item within their company, organisational unit or department.
  • Access controls should ideally take into account a user’s or process’s point of origin and possibly the time of day. 2FA for users is a valuable way to enhance network security and data privacy by significantly strengthening control over access to a company’s infrastructure.
  • Sensitive and secret information must be held separately from other data, ideally in a separate physical store. Access to this type of data must also be restricted not just to a department, but to appropriately authorised users or groups of users within a department. Additionally, an authorised point of origin might be required, such as known IP addresses.

Finally, don’t forget the basics:

  • The IT infrastructure must be fully documented, including, but not limited to, any and all outsourced services, licences, and building layouts (computer rooms, wiring closets, etc).
  • All external access points to the infrastructure (via the public internet and third parties) must be adequately firewalled with demilitarised zones with proxy-type devices providing an isolation layer between internal company processes and the outside world.

You will also need to ensure that:

  • All software (and firmware) is up to date.
  • Security patches are applied in a timely way.
  • Anti-virus and malware ingress prevention tools are in place, operational and maintained.
  • There is a security monitoring process in place and it is being utilised.
  • There is a regular programme of IT security health checks and external penetration testing in place.
  • IT and security staff are part of an ongoing continuing professional development programme.
  • A company-wide security awareness programme is in place and maintained.
