Most modern-day security controls are geared up to detect unauthorised access and data exfiltration. What they do not appear to be very good at is detecting and preventing unauthorised file system or database changes. After all, these systems are in a constant state of flux.
Benign changes are business as usual, and spotting a malicious change is like finding a needle in a haystack. Even more so when authorised employees drop a few bogus figures into a system during the working day, then wait months or even years before realising some illicit gain.
Rogue trader scandals go back decades, and each time the conclusion has been the same: traders were allowed to do pretty much anything they liked, unchecked. Appropriate governance would have saved UBS more than $2bn in 2011, and Société Générale €4.9bn in 2008. These are not insignificant losses, and they were caused by staff acting beyond their authority, without detection.
Roll on 2017. While banks have invested heavily to ensure traders can no longer act without authority, the rest of industry has failed to catch up.
Let’s take software houses. It is trivial for a developer to change a few lines of an application’s code; that is what developers do. But how do managerial staff who cannot read code know whether those changes are legitimate?
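One way to make that question mechanically checkable is to refuse any code change that cannot be traced to an approved change ticket. The sketch below is purely illustrative: the `CHG-` ticket format and the `APPROVED_TICKETS` register are assumptions, standing in for whatever change-management system an organisation actually runs.

```python
import re

# Hypothetical register of approved change tickets. In practice this would
# be queried from a change-management system, not hard-coded.
APPROVED_TICKETS = {"CHG-1041", "CHG-1042"}

TICKET_PATTERN = re.compile(r"\bCHG-\d+\b")

def commit_is_authorised(commit_message: str) -> bool:
    """A commit is authorised only if it cites at least one approved ticket."""
    tickets = TICKET_PATTERN.findall(commit_message)
    return any(t in APPROVED_TICKETS for t in tickets)

print(commit_is_authorised("CHG-1041: fix rounding in VAT calculation"))  # True
print(commit_is_authorised("tidy up interest calculation"))               # False
```

A check like this, wired into a server-side repository hook, does not tell a non-technical manager what a change does, but it does guarantee that every change is linked to something a change approval board has seen.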
This year, when renewing 2-sec’s professional indemnity insurance, a clause cropped up requiring us to carry out DBS and background checks on all software developers, in addition to the staff who look after our war chest. If it is on an underwriter’s radar, they are seeing a rise in claims relating to unauthorised code changes. And if companies have outsourced development a few thousand miles to the east, there is often no reliable way to carry out a criminal records check.
The job of database administrators is to make changes to databases. Again, how do we know these changes are legitimate? What if the database contains a few million credit card numbers and the database administrator prepares a stored procedure that dumps them all to their USB stick?
In all these cases, as with rogue traders, appropriate governance and oversight is key. Limit who can access data. Software developers and database administrators do not need to see sensitive data. They should be aware it is there but, through effective encryption and key management, should not be able to see it.
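One practical way to achieve that separation is tokenisation, a close relative of the encryption-and-key-management approach above: the application database the DBA administers holds only opaque tokens, while the real values live in a separately controlled vault. A minimal sketch, with a plain dictionary standing in for what would really be an access-controlled vault service:

```python
import secrets

# Token vault mapping opaque tokens back to real values. In production this
# would live in a separately controlled service, with access restricted to
# the data owner -- not in the database the DBA administers.
_vault = {}

def tokenise(card_number: str) -> str:
    """Store the real card number out of band and return an opaque token."""
    token = "tok_" + secrets.token_hex(8)
    _vault[token] = card_number
    return token

def detokenise(token: str) -> str:
    """Only callers with vault access can recover the real value."""
    return _vault[token]

token = tokenise("4111111111111111")
print(token)              # e.g. tok_3f9c... -- all the DBA ever sees
print(detokenise(token))  # the real number, available only via the vault
```

With this split in place, a rogue stored procedure that dumps the whole table yields nothing but tokens.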
Ensure data owners take full responsibility for knowing where sensitive data is, at all times, and monitor it for change. If data changes, the change should be traceable to an appropriate change control system. If it is not logged and approved by a change approval board, it is an unauthorised change, and the incident response plan must be invoked.
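Detecting unapproved change can be as simple as hashing a canonical snapshot of the sensitive data set and comparing it against a trusted baseline, then checking any difference against the change log. A hypothetical sketch (the `approved_changes` log and the `CHG-` reference are invented for illustration):

```python
import hashlib
import json

def snapshot(records):
    """Hash a canonical serialisation of the sensitive data set."""
    canonical = json.dumps(records, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

# Hypothetical change-control log of approved change references.
approved_changes = {"CHG-2001"}

# Baseline taken when the data was last known good.
baseline = snapshot({"acct-1": 1000, "acct-2": 2500})

def check(records, change_ref):
    """Classify the current state against the baseline and the change log."""
    if snapshot(records) == baseline:
        return "no change"
    if change_ref in approved_changes:
        return "authorised change"
    return "UNAUTHORISED change - invoke incident response"

print(check({"acct-1": 1000, "acct-2": 2500}, None))    # no change
print(check({"acct-1": 999999, "acct-2": 2500}, None))  # UNAUTHORISED change - invoke incident response
```

The same pattern underpins commercial file integrity monitoring tools: the hard part is not the hashing, but keeping the baseline and the change log under the data owner’s control rather than the administrator’s.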
Adding governance and oversight to an operation invariably adds cost, and operations take longer to carry out. But that is no excuse: if you take an ad hoc, agile approach to your operations, you do so at your own risk.