A good data breach incident response plan looks like one that has never been used. By that I mean it has been created and tested but never had to be called into use because the preparation, education and testing involved in good security has been so effective.
Realistically, though, I wanted to put together some thoughts to show the kind of things an organisation needs to be thinking about when it comes to developing a good quality plan and embedding a security culture that minimises the likelihood of the plan being activated.
My experience of data breaches is that sometimes the near-misses go unnoticed. These are small security failings that should be taken as fair warning that there is a problem in the organisational culture that could eventually lead to a serious incident. Instead, priority and spending are reserved for bigger threats that may have a greater impact, but are far less likely to occur.
The more we focus on preventing large events, the more chance there is of one actually happening. This may sound absurd but it is a real result of the ‘aggregation effect’. The aggregation effect refers to there being many more minor security infractions within an organisation than large security events, with an accumulation of increasingly frequent small incidents ultimately leading to a major event.
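As a rough illustration of the aggregation effect, consider some hypothetical numbers: many minor infractions, each individually unlikely to escalate, can collectively make a breach more likely than the rare direct attack everyone is guarding against. The figures below are assumptions for illustration only, not research data.

```python
# Illustrative sketch of the 'aggregation effect' with assumed numbers.
minor_per_year = 500        # assumed count of minor infractions per year
p_minor_escalates = 0.002   # assumed chance any one escalates into a breach
p_major_direct = 0.05       # assumed chance of a direct major event per year

# Probability that at least one minor infraction escalates in a year
p_at_least_one = 1 - (1 - p_minor_escalates) ** minor_per_year

print(f"Direct major event:          {p_major_direct:.1%}")
print(f"Escalation from minor events: {p_at_least_one:.1%}")
```

With these assumed figures, the accumulated minor infractions are roughly an order of magnitude more likely to produce a breach than the direct major event, which is the point of the aggregation effect.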
When small incidents go unchallenged – or even unnoticed – they become the accepted culture. So, the first time a door to a file room is propped open for the sake of convenience, the security policy is bypassed. If this goes unchallenged, it will happen again because “Fred” does not see the importance of putting his PIN code into the door entry system for the file room. This mindset cascades, with more and more people believing it to be acceptable behaviour. Before you know it, propping the door open is the norm within the business, offering an opportunity for files to be removed by unauthorised staff, altered and copied, and a major security breach could occur.
Just because the policy is that entry to the file room is via PIN code, with the door locked again when someone leaves the room, it is dangerous to assume that this is exactly what happens every time.
Another common example of how a seemingly innocuous act can generate the potential for a major security event relates to encrypted laptops.
“Jim” never logs his laptop off before closing it and dropping it into his bag. It takes too long, and he will be working on it when he gets home anyway. Not only does he ignore security protocol but, more importantly, the laptop’s encryption offers no protection while he is logged on, leaving the sensitive data held on it accessible. The problem is compounded because Jim assumes that, because nothing bad happened the first time he forgot to log his laptop off, it is fine to carry on doing so.
This is how a culture of sloppy security practice or lack of focus on the small but frequent incidents can become a fast track to a major incident. Chances are that this behaviour is not an isolated instance and others may be doing it too, so the risk of a major information breach increases.
Data from the Ponemon Institute suggests that almost a third of breaches are reported by customers, but only 19% by employees. The same research says that more than half of employees carry sensitive or highly sensitive data on mobile devices, and around 10% carry out activities on those devices that are subject to data protection laws. These are security failings that become cultural and, added together, increase the chance of a large-scale or business-critical breach. Hopefully, this highlights the importance of education when it comes to building an effective plan.
How to construct your incident response plan
So back to the question, ‘What does a good data breach incident response plan look like?’
The plan should have the stages of preparing, identifying, assessing, containing, investigating, resolving and, finally, learning.
Hopefully you will have gathered from what you have just read that not enough time, effort and resource is going into the preparation and identification parts of plan development.
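The seven stages could be tracked with something as simple as a stage record per incident. The sketch below is an assumed illustration in Python: the stage names come from the plan above, but the `Incident` structure and the linear `advance` transition are hypothetical, not a prescribed tool.

```python
from dataclasses import dataclass, field

# Stage names taken from the plan; everything else is an assumed sketch.
STAGES = ("prepare", "identify", "assess", "contain",
          "investigate", "resolve", "learn")

@dataclass
class Incident:
    summary: str
    stage: str = "identify"   # a live incident starts at identification
    history: list = field(default_factory=list)

    def advance(self) -> str:
        """Move to the next stage, keeping a trail of stages passed through."""
        i = STAGES.index(self.stage)
        if i + 1 < len(STAGES):
            self.history.append(self.stage)
            self.stage = STAGES[i + 1]
        return self.stage
```

A near-miss like the logged-on laptop would enter at `identify` and, ideally, be resolved long before `contain` is ever needed; the `history` trail is what feeds the learning stage at the end.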
Training and embedding the security policy is all part of the preparation phase. However, statistics show that far too many organisations still vastly under-fund and under-resource awareness and education. If we continue with our earlier example of the still logged-on laptop, the preparation stage would have covered things such as ensuring the laptop is encrypted, training Jim to always log his laptop off before taking it out of the office, and only removing it from the office when necessary. In other words, solid policy, procedure and education will ensure the employee knows the correct actions to take.
To take the identify stage with the laptop example again, a colleague might notice that the laptop was still logged on and notify Jim or the appropriate person. This would be flagged as a near-miss and a risk requiring Jim to be reminded of the policy. Such behaviour has to be recognised and flagged every time to stop it becoming the culture. “Culture eats strategy for breakfast,” as Peter Drucker, widely considered the father of modern management, once said.
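Recording and counting near-misses is what makes repeated behaviour visible rather than invisible. A minimal sketch of such a register might look like the following; the class name and the escalation threshold are assumptions for illustration.

```python
from collections import Counter

# Hypothetical near-miss register: count repeated infractions per policy
# so recurring behaviour is surfaced before it becomes the culture.
class NearMissRegister:
    def __init__(self, escalate_after: int = 3):   # assumed threshold
        self.counts = Counter()
        self.escalate_after = escalate_after

    def record(self, policy: str) -> bool:
        """Log one near-miss; return True once it needs escalation."""
        self.counts[policy] += 1
        return self.counts[policy] >= self.escalate_after
```

The design point is simply that each near-miss is recorded, not judged in isolation: the third propped-open door or logged-on laptop looks very different from the first once the repeats are counted.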
Skimping on these two stages enables the risk of more serious incidents to grow, as sloppy or insecure behaviour will proliferate if unchecked. Eventually one of these incidents may get through and you end up with what could be a business-closing data breach. The eBay breach is a reasonable example of how it can go wrong.
Hundreds of thousands of phishing emails are sent each day to organisations and businesses of all sizes; serious data breaches do not occur at anything like the same rate. But all it takes is one staff member who does not realise they have received a phishing or spear phishing email and accidentally allows the payload to be deployed. Inadvertently, they have enabled what could turn into a major breach.
So even though major breaches are relatively rare, phishing is ubiquitous and seemingly relentless. However, dealing with the occurrence of phishing emails is much easier than trying to contain and resolve a major breach.
Being able to accurately assess the scope and scale of the incident is vital. If it has gone beyond a near-miss and an actual breach has occurred, then you need to understand precisely what assets are affected and gather as much information as possible before moving onto the containment stage of the plan. The investigation stage needs to establish not only where the vulnerability was but what resources are going to be required (such as legal or forensic support) to move into resolution.
Finally, the learning stage covers all the key indicators revealed throughout implementation of the plan. It is as important as the preparation stage because it actively informs that part of the plan and enables improvements to be made.
Put it to the test
Testing any plan is a vital part of its effectiveness and relevance. I have frequently seen the whole plan being tested – a huge undertaking for most organisations. The danger here is that the focus can end up being on large-scale and disastrous events, which tend to be less frequent, and the smaller yet more frequent events can be overlooked and untested. This can be magnified when you add in the ‘marking your own homework’ approach; sometimes an independent set of eyes will find flaws and vulnerabilities that someone very close to the plan or organisation/department might not notice.
An example of how a flaw like this might happen is the file room door example I mentioned earlier. Perhaps this is a patient or client file room containing sensitive and possibly valuable paper-based data. This has to be protected too and it is dangerous to assume that all staff would know not to prop the door open or have a Post-it note with the door entry code stuck to the wall. These seem like obvious things, but when a plan is being tested, if the focus is on preventing a major hack then it is little things like this that can drift by unless someone with a fresh approach can spot these potential failures in policy and procedure.
In closing, I would say that a fully rounded view of all the factors mentioned here has to be part of any data breach incident response plan. The key is to put measures in place that limit the need for its full deployment in your organisation. If you do have to use it, make sure you glean every possible lesson from it, and get those small incidents and near-misses covered, recorded and acted on, because they are the frequent enablers of much larger events. Get outside help to check and test your plan to ensure no assumptions become vulnerabilities.
Mike Gillespie is director of cyber research and security at The Security Institute