First up, there has to be a security incident plan. At ISSA UK we see plenty of organisations with no incident response plan in place at all, which means they could find themselves over-exposed when a cyber security incident does happen.
Without a proper incident response plan in place, what happens to such organisations is any or all of the following:
Gaffe 1: Bad PR
The wrong people in the organisation push out the wrong message to the media. In particular, I see many ill-thought-out responses from CEOs of breached organisations who think they know best. I cringe at statements like “the hackers used advanced techniques and simply were too clever for us” – it has immediate legal implications, is a public admission of liability, and does not paint the picture of a responsible organisation to its customers.
Gaffe 2: Poor recovery times
Without a plan in place, organisations take longer to recover, period. Whether this is minutes, hours or days, who knows? However, if there is an incident response plan in place, then recovery time can be tested. Recovery times can be baselined and an organisation can confidently operate on a 99%+ uptime basis. I often find that organisations without an incident response plan do not have a business continuity plan either – or at least one that works.
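To make the baselining point concrete: an uptime target translates directly into an annual downtime budget, and your tested recovery times either fit inside that budget or they don't. A minimal illustrative sketch – the targets shown are examples, not recommendations:

```python
# Convert an uptime target into the annual downtime budget it leaves.
# Illustrative figures only -- pick targets that suit your own business case.

MINUTES_PER_YEAR = 365 * 24 * 60


def downtime_budget_minutes(uptime_pct: float) -> float:
    """Annual downtime allowed, in minutes, for a given uptime percentage."""
    return MINUTES_PER_YEAR * (100.0 - uptime_pct) / 100.0


for target in (99.0, 99.9, 99.99):
    budget = downtime_budget_minutes(target)
    print(f"{target}% uptime -> {budget:,.0f} minutes/year downtime budget")
```

At 99% uptime the budget is roughly 5,256 minutes a year – about three and a half days – which is exactly the kind of figure a tested, baselined recovery time lets you commit to with confidence.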
Gaffe 3: The wrong people are assigned the task
For the IT professional who loves fixing problems, firefighting and taking a reactive approach, creating an incident response plan and testing it is excruciatingly boring and time-consuming. What’s more, their firefighting workload will distract them from formulating the right level of response in the first place. You need the right process-driven individual to help put this together. A business analyst would be a better choice than a security analyst, for example.
Gaffe 4: Incident plans purchased off the shelf
Incredibly, there is a huge market for off-the-shelf policy packs, and suppliers are making a killing. Companies can just download a complete information security governance framework and policy pack, do a find and replace, and voilà – they’re PCI DSS or ISO 27001-compliant. As a seasoned PCI DSS QSA and auditor, I can easily tell where most policy packs come from. Some companies even leave the vendor’s name on the policy pack, as some sort of warranty that they have bought the pack on a commercial basis, so it MUST be good.
Doing it properly
Hopefully that gives you an idea of what not to do, and why incident and disaster response planning is absolutely critical in any business. Putting a plan together is not a quick task. All businesses are different, but my recommendation would be to loosely follow these steps:
- Create an incident response team, available 24/7, to co-ordinate any cyber security or business continuity incident.
- Train them. Get them used to the idea that incidents cost the company money, and why a consistent response is a must. If you can’t get at least one board member on the incident response team, then it’s not going to work. If the board needs convincing, give me a call!
- Carry out business process analysis, and identify critical areas within your business. What sort of data security incident should trigger a response? Which systems, if down, would cause the company problems? What systems containing data should be monitored for signs of attack?
- Put together a one-page response plan for all employees for when the shit hits the fan. Train them. Tell them why an effective and consistent response is so important.
- Build specific incident response processes, defining how you want your staff to recover systems in the event of outage.
- Build specific cyber security incident response processes, defining how you want your staff to contain incidents and recover compromised systems. Teach them how to image a system for later forensic analysis.
- Once plans and processes have been released, test them. In practice. Pull plugs out. Install test malware on critical servers and see what happens. If that’s going to cause problems, then there’s immediate justification for a pre-production or test infrastructure so this can be done in a controlled manner. The point is, if you want to put a serious incident/disaster response programme in place, then realistic tests are a must. Don’t wait for a few blown power supplies or hackers to test things for you. If you’ve not looked at virtualisation yet, do so now – it’s disaster recovery in a box.
- Fire everyone who thinks a reactive, fire-fighting approach is best for your business.
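The imaging step above can be sketched in a few lines. This is a simplified illustration of the principle only – a byte-for-byte copy plus a cryptographic hash, so the image’s integrity can be demonstrated later – and no substitute for proper forensic tooling or a hardware write blocker. The paths are placeholders:

```python
import hashlib


def image_device(source: str, dest: str, chunk_size: int = 1 << 20) -> str:
    """Copy a device or file byte-for-byte to `dest`, returning the SHA-256
    digest of the data so the image can be verified against the original."""
    sha = hashlib.sha256()
    with open(source, "rb") as src, open(dest, "wb") as out:
        while True:
            chunk = src.read(chunk_size)
            if not chunk:
                break
            sha.update(chunk)  # hash exactly what is written out
            out.write(chunk)
    return sha.hexdigest()


# Example with placeholder paths -- record the digest in your incident log:
# digest = image_device("/dev/sdb", "/evidence/sdb.img")
```

Recording the digest at acquisition time is what lets you show, months later, that the evidence analysed is the evidence collected.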
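Those pull-the-plug tests are only worth running if you time them. A minimal sketch of a recovery-time probe: poll a health check until the service comes back and record the elapsed time, then compare it against the baseline in your plan. Here `is_healthy` is a stand-in for whatever check suits your environment (an HTTP health endpoint, a database ping, and so on):

```python
import time


def time_to_recovery(is_healthy, timeout_s: float = 300.0,
                     interval_s: float = 1.0) -> float:
    """Poll `is_healthy()` until it returns True and return elapsed seconds.

    Raises TimeoutError if the service does not recover within `timeout_s`.
    """
    start = time.monotonic()
    while time.monotonic() - start < timeout_s:
        if is_healthy():
            return time.monotonic() - start
        time.sleep(interval_s)
    raise TimeoutError(f"service did not recover within {timeout_s} seconds")
```

Run it during every exercise and the numbers stop being guesswork: you get a measured recovery time per system that can be baselined and tracked.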
There is plenty of formal guidance around incident response – we have ISO 27001, PCI DSS, NIST, SANS et al – it’s all just guidance. It is not meant for cutting and pasting into your own incident response plans, although it will definitely give you food for thought and cover pretty much every eventuality. Read them and do your own research.
If you are stuck, then hire in expert advice. I cannot stress enough the importance of getting these plans right.