I was reading David Lacey’s latest blog entry with some interest. One of the challenges I’m currently faced with is presenting an achievable and realistic set of objectives against which my personal performance for 2007 can be judged. But this matters for more than just my own assessment: information security can often appear to be a big money pit with little return on investment, and without a useful set of metrics against which to measure success, how else can there be any judgement on success or otherwise?

Of course, from a web product perspective we could just measure success in terms of the number of reported incidents. That’s all fine and dandy, but if we have no incidents, does that mean we have good security? Nope. It probably means that we’ve just been lucky. Instead, let’s judge success against an assessed security standard: set your standards (first define what it is that you are protecting against and define your mitigating controls), then find out the degree to which you have controls in place sufficient to achieve the required standard. You now have a standard, and you also have something against which success can be measured, because it’s possible to develop a strategy based around improving the overall assessment scores.
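To make the idea concrete, the assess-and-score approach above could be sketched in code. This is a minimal, hypothetical illustration: the control names, the 0–3 maturity scale, and the scoring formula are my own assumptions for the example, not taken from any real assessment programme.

```python
# Hypothetical sketch: score a product against a security standard.
# Each control records (name, required maturity, assessed maturity);
# the names and the 0-3 scale are illustrative assumptions only.

controls = [
    ("access control",          3, 3),
    ("input validation",        3, 2),
    ("logging and monitoring",  2, 1),
    ("patch management",        2, 2),
]

def assessment_score(controls):
    """Percentage of the required maturity actually achieved."""
    required = sum(req for _, req, _ in controls)
    achieved = sum(min(req, got) for _, req, got in controls)
    return round(100 * achieved / required, 1)

def gaps(controls):
    """Controls falling short of the standard, biggest shortfall first."""
    short = [(name, req - got) for name, req, got in controls if got < req]
    return sorted(short, key=lambda pair: -pair[1])

print(f"Assessment score: {assessment_score(controls)}%")
for name, shortfall in gaps(controls):
    print(f"  gap: {name} (short by {shortfall})")
```

The single percentage is the metric you would track from one assessment to the next, while the gap list tells you where the improvement strategy should focus.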
This isn’t a programme I can claim any credit for, as it was in place within my own organisation long before I came along. However, it has become more sophisticated over time, and the results of assessments have become useful and measurable security metrics.
Anyway, just a short blog entry today as I’m in the final stages of putting together a workshop session on product risk. As always, leave your comments on this subject and any of the others that I’ve raised, whether you agree with me or otherwise.