I keep reading defeatist talk. The latest is from a chap called James Lewis, a cybersecurity expert at the Washington DC-based Center for Strategic and International Studies, who has been claiming that businesses should "stop worrying about preventing intruders getting into their computer networks, and concentrate instead on minimising the damage they cause when they do".
It would be a very black day for cyber security if businesses stopped worrying about intrusions. Let's face it: the reason we have so many is that we don't try hard enough to stop them. The attackers are fast, smart and agile, and our defences are sloppy, dumb and slow to react. The DC man is right to point this out, but the answer is to beef them up, not let the security managers off the hook.
Valuable intellectual property can be safeguarded by not storing it on networks. We don't do enough of this. Intruders can be stopped or quickly detected by state-of-the-art defences, though these are rarely deployed effectively even in large enterprises. Admittedly, some intelligence services have the capability to bypass any defence, but such attacks are selectively mounted and should not be a reason for a wholesale abandonment of confidence in preventative measures.
The "dwell time" of a sophisticated APT intrusion is the serious new metric, though there is no mention of it in the international standard on this subject, ISO 27004, which is perhaps where it all goes wrong. The modern CISO is bogged down in hundreds of pages of paper nonsense which stops them applying common sense and judgement. The target should be to reduce the dwell time from several years to less than a day.
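The metric itself is trivially simple to compute, which makes its absence from the standards all the more striking. A minimal sketch, using hypothetical incident dates rather than any real breach data:

```python
from datetime import datetime

def dwell_time_days(compromise: str, detection: str) -> int:
    """Days an intruder went undetected: first evidence of compromise to detection."""
    fmt = "%Y-%m-%d"
    return (datetime.strptime(detection, fmt) - datetime.strptime(compromise, fmt)).days

# Hypothetical incident records: (date of first compromise, date detected)
incidents = [("2012-03-01", "2014-06-15"),   # multi-year dwell, all too typical
             ("2014-01-10", "2014-01-11")]   # the one-day target

for start, end in incidents:
    print(dwell_time_days(start, end))
```

Tracking this one number over successive incidents would tell a board more about detection capability than most compliance reports.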
Zero days should be the target. But then that would be bordering on prevention...
I'm finally back blogging after a delightful summer break. Surprisingly, not a lot has changed in the cyber security world. Big security breaches have been thin on the ground. And most have resulted from predictable human failings or greed, rather than technical weaknesses. There have been few recent reports of dangerous APTs, except perhaps for an inevitable attack on Apple users, many of whom may have naively assumed they were immune from such threats.
Anyone who understands the motives of attackers and the vulnerability of our critical infrastructure will know that professional attacks have not gone away. They are just much harder to detect. There is clearly much more to come, especially given a steeply increasing terrorist threat.
I sense however that we are some years from a major disaster, though I expect it will occur well before we are able to implement effective countermeasures. That's because the most significant failing of the security community is in responding quickly to new threats. There are one or two exceptions of course, generally in areas where business sets stretch targets for security developers.
The mobile world is one such area. A few days ago I attended the excellent, annual exhibition at the Royal Holloway University Smart Card Centre. There were some first-class presentations, especially the talk by Dr. Klaus Vedder, a real expert in this field, who convinced me that mobile devices are the focus of the fastest-moving developments in cyber security. Product developers race to bring new technologies to market in record time. And they need to be sufficiently secure for the marketplace.
In sharp contrast, the presentations on government cryptographic development reflected a legacy of lethargy, underpinned by outrageous demands from a bygone age. New products require a minimum five-year timescale, and must be designed to be secure for 20 years and to protect data for 30 years. Such assumptions reflect an absence of business pressure for stretch targets.
Security processes are slow because nobody in business cares sufficiently to whip them into shape. Society should demand better than this to safeguard our critical intellectual assets.
Last night a friend sent me an email drawing attention to the UK Government's new cyber security scheme. This one is called "Cyber Essentials". So what's new? And what does it offer?
The answer is very little. It contains no new advice or controls. It's incomplete and insufficient. And it's not mandated by regulators. In fact it's nothing more than a restructuring of advice already covered by more important standards.
It's unfortunate that governments and institutes insist on publishing their own versions of standards at a time when many enterprises are forced to address specific ones. The most widely enforced standard at present is the Payment Card Industry Data Security Standard (PCI DSS). But this important standard is not even mentioned in the Cyber Essentials guide.
The unfortunate truth is that cyber security standards are a nightmare for enterprises of all sizes. Big companies are required to provide annual evidence of the existence of hundreds of control requirements. Small retailers are forced to employ expensive consultants to translate technical standards into action.
It's not advice we need, but consistency. In a world awash with standards, where tick-box compliance has replaced security, what matters is structure more than content. This perhaps explains why Cyber Essentials contains an appendix mapping the new standard onto several others. Unfortunately it doesn't cover the 220 controls in the PCI DSS, so it's of no use to the millions of retailers out there.
There's no benefit in having all the right words, but not necessarily in the right order. Any framework is a means to an end, not an end in itself. If that end is to complete a questionnaire, then the questionnaire structure is the sequence you require. If it's to design a compliance workflow system, you need a framework structured around organisational responsibilities. If it's just for use as a reference document, you simply need a good index.
There are more than a dozen ways of structuring a security standard. I know because I experimented with all of them when drafting the original BSI Code of Practice back in 1993. You can do it around process, services, life cycles, technology, job function, subject areas, etc. Or you can simply pluck headings out of the air, as many standards do.
The COBIT 5 standard is structured around organizational processes. The ITIL standard around IT services. ISO 27000 was originally structured around ten "natural subject areas" as might be encountered in enterprise security manuals. The ISF Standard of Good Practice is structured around six areas of IT Security responsibility, mapped onto several dozen individual topics. In contrast ISO management systems tend to follow a "Plan, Do, Check, Act" life cycle.
Other standards are more arbitrary. The PCI DSS follows an unusual structure of twelve broad control requirements grouped into six overall headings, which collectively define more than two hundred individual, prescriptive requirements. A further complication in navigating PCI DSS requirements is the fact that the standard is also enforced through a "Prioritized Approach" which sets out the controls in a completely different order, reflecting the urgency of their implementation.
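The practical consequence is that the same control set has to be navigated in two different orders: the sequence of the standard's contents list, and the sequence of the prioritised implementation view. A minimal sketch of the problem, using invented control identifiers and ranks rather than the actual PCI DSS milestones:

```python
# Hypothetical subset of controls. Each carries its position in the
# standard's contents order and a separate urgency rank from a
# "prioritised approach" style view (both values invented for illustration).
controls = [
    {"id": "Req 1.1",  "standard_order": 1, "priority": 3},
    {"id": "Req 3.4",  "standard_order": 2, "priority": 1},
    {"id": "Req 12.1", "standard_order": 3, "priority": 2},
]

# The order an auditor reads the document in...
by_document = sorted(controls, key=lambda c: c["standard_order"])

# ...versus the order a security manager should implement them in.
by_urgency = sorted(controls, key=lambda c: c["priority"])

print([c["id"] for c in by_document])
print([c["id"] for c in by_urgency])
```

Every additional framework a CISO must map onto adds another sort key to this list, which is precisely the navigation burden described above.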
Further security standards published by governments and specialist circles such as The Cloud Security Alliance have only added to the navigation challenge facing CISOs. The Cyber Essentials standard adds a tad more confusion by adopting a new structure of five subject areas pointing to "Ten Steps to Cyber Security". Will the madness ever end?
Several weeks ago an Australian friend of mine sent me a delightful note pointing out how recent events and media reporting had confirmed some controversial points I had made last year in the Australian press.
There is now growing evidence that compliance does not guarantee security, though the reverse can sometimes be true. For many years I have been lecturing on the difference between real security and compliance. Most security professionals instinctively get it. But the distinction is not addressed adequately in training courses or acknowledged by institutes, so the practice remains riddled with misconceptions about the roles and effectiveness of security and compliance.
The reason we have compliance is because people do not willingly spend time or money on security. Business has no appetite for spending money to dodge risks that have yet to materialise. And there is no guaranteed return on investment for security. It's a leap of faith, the type of thing that finance managers hate. Without compliance there would be little or no security in today's more demanding commercial environment.
But a compliance programme cannot make an enterprise secure. On the one hand it's designed to improve matters, so one could argue it's better than nothing. On the other hand it can be counter-productive as it diverts scarce resources from addressing more immediate, specific risks. (This is a debate I regularly have with Professor Fred Piper.) In the absence of a major incident, however, without compliance nothing would get done. So we need it, and we would demand it if it was not there.
Compliance can make a difference but it's painfully slow and expensive. The PCI DSS standard comes in for lots of stick. But without it, the level of payment card fraud would be higher. It might not be perfect or efficient but it motivates a lot of security improvement in an area that has traditionally been dangerously open to compromise.
It would be nice to think that good security would guarantee compliance. Unfortunately that's not correct either. Regulators and auditors require a large number of small boxes to be ticked and an unreasonable amount of processes, paperwork and evidence to support security claims. Smart, slick operators do not survive audits. Compliance rewards bureaucratic security managers.
If you take a look in any leading financial enterprise today you are likely to find hundreds of security professionals being driven by thousands of auditors of varying kinds. Twenty years ago these functions were a tiny fraction of their size today. Yet security has not visibly improved. Ninety percent of the work is focused on developing content-free processes, counting assets, assessing risks, writing policies that go unread, measuring last year's performance or generating evidence that a control is in place. Very little work is focused on implementing real countermeasures.
Efficient and effective security will only happen following three things. Firstly, a great big incident or liability that scares directors into spending money on countermeasures that actually work. Secondly, an understanding by the security profession of the root causes of incidents and the approaches needed to eliminate them. And thirdly, the recognition that large-scale culture changes are possible if top management is sufficiently motivated.
Some supporting evidence for these claims can be found in the history of industrial safety. In the early part of the last century many production methods were unacceptably dangerous, especially in the United States. It took many decades to drive through change, but by the end of the century safety was transformed and embedded across manufacturing industries. Some of this was driven by compliance but the largest cultural changes were directed by executive boards and shaped by an understanding of the root causes of incidents, the nature of an effective safety culture, and a genuine recognition that safety is everybody's responsibility. In the security profession we are a long, long way from achieving that goal.
For the past decade the real enemy of security practitioners has not been the hackers and malware that threaten our systems but the numerous best practices, compliance demands and audit actions that take up all of the time and resources of the security function.
Security standards and frameworks add to the burden of security managers by insisting that evidence of governance, assessments and controls are presented according to a structure laid down by standards authorities, many of whom might have little sharp-end experience.
And so we have the latest distraction: a "Framework for Improving Critical Infrastructure Cybersecurity" published by the National Institute of Standards and Technology, which appears to contain not a single new control, technique or technology, but one that merely restructures existing controls and guidance according to a new contents list.
Anyone who truly understands the rare art of designing models and architecture will appreciate that the top levels of any model are shaped purely for political or cosmetic purposes. They add little real value to the purpose or content of the guidance.
And of course there is an unlimited number of ways of structuring a set of controls. It can be done by lifecycle, process, technology, organisation, etc. Ideally the structure should be based on the purpose of the framework, as it is primarily a means to an end, not an end in itself. Unfortunately this rarely happens.
The original set of baseline controls designed by Donn Parker in the 1980s contained several different contents lists, reflecting different needs. When drafting the original BS7799 we decided to have a single structure. Having presented over a dozen different structures to the BS7799 team, we all agreed unanimously to base it on "natural subject areas", i.e. the structure most of us had already adopted for our own security manuals.
There's nothing wrong of course in experimenting with new structures. But these should only be accepted when there is clear, added value. Otherwise it's a case, as Eric Morecambe might say, of using all the right words but not necessarily in the right order.
It's not often that an institute decides that its mission has been accomplished, declares success and steps down. But that's what the Jericho Forum has done after a decade of evangelising the message of de-perimeterization.
Originally a private club of CISOs meeting to exchange views on security architecture, the forum quickly became a highly influential user/vendor circle of leading international experts, publishing guiding principles and commandments on how to develop secure information systems for an open networked environment.
Ten years ago this was revolutionary thinking. Today it's generally accepted that enterprise systems and data need to be hardened to mitigate the threats presented by shared networks. It's time therefore to move on to new security challenges.
The forum was officially dissolved at a meeting of the Open Group in London on Monday. The founding fathers (myself included) were presented with plaques commemorating our contribution. Fittingly the meeting was hosted at Central Hall Westminster, the location of the first meeting of the United Nations General Assembly in 1946.
Anthony Freed has now published the final article in his series on the true background of BS7799 on his Tripwire blog. There are real lessons to be learned from these postings. I hope that students of regulatory compliance will take note.
My apologies for radio silence on this blog. It's been due to an exceptionally busy workload coupled with an extended holiday. I'm now back with lots of views about what's going on and what's going wrong with cyber security.
Over the last month I've been concerned about the press coverage of the Snowden case. Privacy advocates and journalists have lauded his efforts, often with little understanding of the consequences for national security. I was surprised, for example, to read that Bruce Schneier had nailed his colours firmly to the Guardian mast and was advocating large-scale whistleblowing.
There are strong arguments from both the security and privacy sides. We clearly need a more informed public debate about both the dangers and the benefits of large-scale communications surveillance. We're now seeing the beginnings of a reaction from senior figures in the intelligence community suggesting that serious damage to national security has been done. Pundits and journalists cannot assess or dismiss such damage without evidence of their own. And there seems to be little if any evidence of government misuse of intercepted data. So who is right?
I've always taken the view that the security professional should be above the political debate. I care about national security as well as citizen privacy. And society seems equally divided on the importance of both. These requirements need to be carefully balanced. A public debate is well overdue. Unfortunately the Snowden revelations have gone further than is necessary to provoke a debate. And they have not delivered evidence that access to citizen data is being misused.
One day there might be terrorist groups with access to weapons of mass destruction. When that transpires we will be grateful to agencies that can prevent such attacks through smart data mining. That's not of course to say that controls to prevent potential abuse of intelligence data should be ignored. But continuous releases of details of interception methods and platforms can only serve to undermine the high ground claimed by the whistleblowers.