There is a certain fallacy in the world of cyber security. It has been there since day one and continues to thrive today. The myth is simply that security controls work, when in the main, they don’t.
For too long, security teams have lived the lie that what they have delivered has been effective, but so often they approach it from a viewpoint divorced from the customers they affect. To be fair to most security teams, they are generally blissfully unaware of the inefficiencies of their controls – or wilfully ignorant of them.
This is admittedly a very sweeping statement, but headline after headline about data breaches tends to argue the point. And let’s not be shy here – these are major corporations with systemic failures when it comes to protecting their crown jewels. Something doesn’t sit right.
How can this be? Spending on IT security is at an all-time high. The volume of security offerings to cover every possible facet of security is unparalleled. The technological possibilities for mitigating risk know no bounds. We have more experts than ever before. And, of course, these days we have big data and artificial intelligence (AI) to solve all our ills in the battle against superhuman adversaries with incredibly sophisticated attacks.
Is that reality, though? Are organisations spending wisely when it comes to security? Are organisations doing the right things or papering over the cracks? In reality, it is the latter – and here’s why.
Let’s start with strategy – the overarching mission. How many organisations have such a thing? A few. How many are built through business engagement? Even fewer. Security strategy is generally written from a position of prejudice and as a means of gaining budget to mature the organisation’s posture.
For a strategy to be sound, it should be preceded by a warts-and-all look at the effectiveness and maturity of the as-is position and a clear line of sight of where it needs to get to. This requires a deep understanding of the business within which security operates, alongside measuring the effects of the myriad security jigsaw pieces across the organisation.
This almost never happens. If it did, security teams would recognise that investment needs to be made primarily and almost solely on fixing the crap that is already there.
How can this be? Well, let’s go through some of the jigsaw pieces that just about every organisation will have in its security picture.
Policy – we all have policy. If you work in government, you will have more policy than you can shake a stick at, and in other organisations or industries, hopefully less so. However, almost every policy is the equivalent of the Ten Commandments: thou shalt not commit adultery; thou shalt not share thy password.
Exceedingly rarely will you see any explanation as to why it is a bad thing to do, or rather a risk-altering thing to do. Nor will you ever see an explanation of the alternative for the customer – in other words, what they should do to achieve the same goal without sharing their password.
In this case, delegate access mechanisms are an option – but, of course, those mechanisms are outside of the security team’s control, which, in turn, means the security team has a dependency on another team, probably IT. To make it beneficial for the customer (user) to adhere to the policy, the alternative to sharing their password must be very simple, easy and slick. And it must be promoted so that the user is aware of what they can do that has less of an effect on risk than sharing their password.
The trouble here is that policy is written very much from a position of prejudice by security people for security people. If we are honest with ourselves and maybe engaged with our customer base, we would also learn that hardly anyone actually reads the policies, which are generally far too long and in the wrong tone – and even fewer people actually understand them.
If your policy is not read or understood, there is little point in having one. Much the same as operating procedures – there is what the policy or procedure says, and then there is the reality of what people do. People share passwords and more. Deal with it.
Maybe something that could help here would be raising security awareness with our customers. That would be a great idea. Most organisations do this, which is good. However, what most organisations actually do is a once-a-year mandatory computer-based training exercise, which consists of the user clicking next, next, next, next, next and then answering 10 questions that, if they get them wrong, suggest they should not be allowed shoes with laces.
You may laugh at this and then sigh because it is exactly what you do in your organisation. It is so common, it is ridiculous. It is also ridiculous because it has zero positive effect – it is a complete and utter waste of time and money. Security awareness isn’t, but this approach is. You are simply ticking a box, as is the user who is doing their mandatory security training as well as their diversity, health and safety, and other annual box-ticking exercises.
That’s not a great start. It’s OK though – we’ve got some technical controls. Oh yes, we’ve got firewalls. In fact, we’ve got dual-pair firewalls, from different suppliers. And when we installed them, we blocked all unnecessary ports and protocols by default. We’ve got it nailed. Fab!
Then this minor thing called business change happens, wherein the business, those little rapscallions, decide to make a change. A new process, a new technology, a new partner – it matters not. As part of that change, we need to add a rule to the firewall to allow connectivity. Without it, the change will fail.
It goes through change control, though – good old ITIL – so it is still fine. Except, of course, it doesn’t really tell us whether that change to the firewall changes our risk profile in any way.
Now, of course, that is just one change and businesses make many changes rather regularly. And hey, before you know it, your firewall that had four rules on it now has 4,000. Your firewall has gone from being an effective control to effectively heating your datacentre.
Ask yourself: when was the last time you looked at your firewall rules? Let’s make it easy: when was the last time you looked at the rules just on your external firewalls? Let’s not bother asking if you changed them, because that is highly unlikely to have happened. If WannaCry told us anything, it’s that external firewalls are, shall we say, sub-optimal. Have you looked at them since that stark warning?
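If auditing those rules feels daunting, even a crude first pass helps. The sketch below is a minimal illustration in Python, assuming an entirely hypothetical CSV export format with per-rule hit counters – adapt it to whatever your firewall actually produces. It flags rules that have never matched and rules that allow any-to-any traffic, both prime candidates for review:

```python
import csv

def audit_rules(path):
    """Flag firewall rules worth reviewing: never matched, or overly broad.

    Assumes a CSV export with columns: name, src, dst, port, hit_count.
    (The export format is hypothetical -- adapt to your firewall's output.)
    """
    stale, broad = [], []
    with open(path, newline="") as f:
        for rule in csv.DictReader(f):
            if int(rule["hit_count"]) == 0:
                stale.append(rule["name"])   # never matched: candidate for removal
            if rule["src"] == "any" and rule["dst"] == "any":
                broad.append(rule["name"])   # any/any: candidate for tightening
    return stale, broad
```

Zero-hit rules are not automatically safe to delete – a rule supporting an annual process may legitimately sit idle – but they are where a review should start.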
Intrusion detection and prevention
It is still OK, though, because we have intrusion detection systems (IDS) and intrusion prevention systems (IPS). Happy days. The slight issue here is that it really, really helps if you have a vague idea what protocols and ports are in use across your network. It also kind of helps if your internal, genuine traffic does not look anomalous enough to trigger the IDS. It is also rather beneficial if you have the faintest clue what assets are on your network – but more of that later. And, of course, do you ever look at the alerts?
Let’s presuppose that you do monitor the alerts. As a rough estimate, what percentage are false positives? In the high 90s, by any chance? Mainly because of the issues above, and because we have spent years simply building stuff and plugging it into other stuff – most of that plugging done as simply as possible, rather than how it should have been done in an ideal world.
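Measuring that noise is straightforward once alerts are triaged. A minimal sketch, assuming alerts have already been labelled true or false positive during triage (the data shape here is purely illustrative): signatures that dominate the volume and are overwhelmingly false positives are tuning candidates – fix the rule or the traffic, rather than switching the rule off.

```python
from collections import Counter

def noisiest_signatures(alerts, top=5):
    """Rank alert signatures by volume, with their false-positive rate.

    `alerts` is a list of (signature, verdict) pairs, where verdict is
    "true_positive" or "false_positive" after triage (illustrative shape).
    """
    totals = Counter(sig for sig, _ in alerts)
    fps = Counter(sig for sig, verdict in alerts if verdict == "false_positive")
    ranked = sorted(totals, key=totals.get, reverse=True)[:top]
    # (signature, total alerts, false-positive rate) for the loudest signatures
    return [(sig, totals[sig], fps[sig] / totals[sig]) for sig in ranked]
```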
So what do we do with all of these false positives? Do we investigate the cause and influence change to reduce the noise by getting assets to talk to each other in a better way? No, we just turn that perfectly genuine rule off. That’ll sort it.
What about IPS? Almost nobody turns it on in “prevent” mode. Because, frankly, it will stop lots of genuine business traffic and be turned off again rather quickly. Security then gets a kicking from the business and loses credibility.
Now all of this is not security’s fault. IT has a lot to answer for in terms of network configuration, and so on. You really must work together if you want to make effective change, or even understand what is there today.
Here are two further considerations – assets and users. What’s the accuracy of asset inventories? Maybe 60% if you’re lucky. Users? Maybe slightly better. There is a massive problem in that most organisations do not know how many users, accounts or actual people they have. Nor do they have anything like an accurate view of how many assets there are, their locations, their health, and so on.
Without any semblance of reality here, you are going to struggle big time. What privileges do those users have? Do they need them in their current role? Do we do move, add, change well when it comes to access permissions? What about leavers? Consultants and contractors? Suppliers? What about admins – how many, where, who, and do they have internet access?
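Reconciling those sources does not need exotic tooling – basic set arithmetic over three feeds will surface most of the horror. A sketch, assuming illustrative data structures for the HR record, the directory and the asset register:

```python
def reconcile(hr_people, directory_accounts, owner_by_asset):
    """Cross-check three inventories that rarely agree.

    hr_people: set of person IDs from HR (ground truth for who exists)
    directory_accounts: dict of account -> person ID, from the directory
    owner_by_asset: dict of asset ID -> person ID, from the asset register
    All three structures are illustrative assumptions about your sources.
    """
    account_owners = set(directory_accounts.values())
    # Accounts owned by nobody HR knows about -- leavers, ghosts, test accounts
    orphan_accounts = {a for a, p in directory_accounts.items() if p not in hr_people}
    # Real people with no account at all -- or accounts under another name
    people_without_accounts = hr_people - account_owners
    # Assets assigned to people who no longer exist
    orphan_assets = {a for a, p in owner_by_asset.items() if p not in hr_people}
    return orphan_accounts, people_without_accounts, orphan_assets
```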
But at least we have individual (maybe) accounts for everyone, so that’s still a control, right? Well, yes, only if you log and maybe look at it every once in a while. Because, you know, users share passwords because it is easier than delegate access. So without ever checking, you’ll never see the dual logins from the same user on different machines. Or of course they’ll let their colleague sit at their desk and use their machine.
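That particular signal is cheap to look for. A rough sketch, assuming login events of the form (timestamp, account, host) – whatever your auth logs actually provide will need mapping into this shape:

```python
from datetime import timedelta

def concurrent_sessions(events, window=timedelta(minutes=30)):
    """Flag accounts apparently active on two hosts at once.

    `events` is a list of (timestamp, account, host) login events --
    an illustrative stand-in for your real auth logs. Two logins by the
    same account on different hosts within `window` are a crude signal
    of password sharing (or something worse).
    """
    suspects = set()
    last_seen = {}  # account -> (timestamp, host) of most recent login
    for ts, account, host in sorted(events):
        last = last_seen.get(account)
        if last and last[1] != host and ts - last[0] <= window:
            suspects.add(account)
        last_seen[account] = (ts, host)
    return suspects
```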
It’s the same with assets. Simply put, how do you know what your vulnerabilities are if you don’t know what assets you have, let alone their health and status? It kind of makes vulnerability management or patching a tad hard.
It is still OK, though, because we have antivirus (AV) literally everywhere. Let’s not get into which one and the ins and outs of different AV approaches, but simply ask: how often do you update the agents, and how many fail to update? Oh hang on, you don’t know how many assets you have, which does make this tricky, but you do update AV every day. Good. However, no doubt there are several assets that do not update every day for one reason or another.
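Even without a perfect asset inventory, the AV console’s own reporting can be mined for stragglers. A minimal sketch, assuming a hypothetical feed mapping each known asset to the time of its last successful signature update:

```python
from datetime import timedelta

def stale_av_agents(last_update, now, max_age=timedelta(days=1)):
    """Return assets whose AV signatures haven't updated within max_age.

    `last_update` maps asset ID -> datetime of the agent's last successful
    signature update, or None if the agent has never reported. The feed
    is a hypothetical export from your AV console.
    """
    return sorted(
        asset for asset, ts in last_update.items()
        if ts is None or now - ts > max_age
    )
```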
OK, but despite all of this, you have a security operations centre (SOC). So you still maintain you’re in a good position because you’ve got a SOC – eyes on glass, coiled like a spring, ready to respond to the slightest noise. If only.
Apart from not knowing what assets, users, ports or protocols are in use on the network (or networks), you’re now logging all this stuff and sticking it in a big security information and event management (SIEM) engine – effectively collecting and mashing together a noise akin to Saturday evening at Glastonbury.
It is just noise. Your SOC analysts will be surfing through false positive after false positive, and suffering huge bouts of alert fatigue. Chasing ghosts and generally not adding huge swathes of value. You’ll probably just deal with known alerts rather than actually look for abnormalities because everything looks abnormal, and establishing a baseline of normal is nigh on impossible. Most of those known alerts will be controls doing their job, such as blocking bad emails, or false positives.
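If a baseline ever does become feasible, per-host statistics beat one global threshold. A sketch of the idea, with illustrative data shapes – and note it only means anything once the underlying noise has been tamed enough for a host’s history to represent “normal”:

```python
import statistics

def anomalous_hosts(hourly_counts, threshold=3.0):
    """Flag hosts whose latest hourly event count deviates from their own history.

    `hourly_counts` maps host -> list of event counts per hour, oldest
    first (illustrative shape). A z-score against the host's own baseline
    is a crude but per-host notion of "abnormal".
    """
    flagged = []
    for host, counts in hourly_counts.items():
        history, latest = counts[:-1], counts[-1]
        if len(history) < 2:
            continue  # not enough history to call anything a baseline
        mean = statistics.mean(history)
        sd = statistics.stdev(history)
        if sd and abs(latest - mean) / sd > threshold:
            flagged.append(host)
    return flagged
```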
This isn’t painting a pretty picture, let’s be honest – mainly because it isn’t a pretty picture. What about things like data? You know, like if you have a clue where your data is? OK, you’ve got databases that you know about, but do you know where all of it is? Quite a bit will be on people’s personal devices after they sent it home because it is easier to work there without these stupid security things getting in the way.
Even your databases – they are probably encrypted, which is awesome. But what happens when a legitimate asset (user or device) asks a legitimate question of that database? Does it reply? And is the reply encrypted? What if that asset were malicious?
What about risk and the fact that almost nobody does risk management in the true form? You know – the continual loop of measurement, planning and action.
Most organisations deal with theoretical risk (a one-time assessment) and notional controls that “mitigate” the risks found. And then the parameters that make up each risk change, as they have a habit of doing, and nobody notices or reacts because they have no idea how to measure said parameters and act accordingly.
Sound familiar? How do you go about measuring each parameter of your security risks? Threat actor/source, threat, exploit, vulnerability/weakness, likelihood, impact, and so on. Do you measure them on an ongoing basis in the context of your organisation? Probably not. But you do risk, right?
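The loop itself is not complicated – the hard part is feeding it honest, up-to-date measurements. A sketch using the classic likelihood-times-impact scoring (all field names are illustrative):

```python
def risk_score(likelihood, impact):
    """Classic likelihood x impact, each on a 1-5 scale."""
    return likelihood * impact

def reassess(register, measurements):
    """Re-score each risk from fresh measurements instead of last year's guess.

    `register` maps risk ID -> dict with 'likelihood' and 'impact'.
    `measurements` maps risk ID -> updated parameter values (e.g. when a
    new exploit is published, likelihood goes up). Field names are
    illustrative -- the point is the loop: measure, re-score, act.
    """
    changed = {}
    for risk_id, params in register.items():
        params.update(measurements.get(risk_id, {}))
        new = risk_score(params["likelihood"], params["impact"])
        if new != params.get("score"):
            changed[risk_id] = new  # risks whose score moved need a decision
        params["score"] = new
    return changed
```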
Still here? If you recognise any of these things within your organisation, you need to focus there and not on some next-generation panacea, big data or AI solution because it won’t work. If you don’t recognise any of these things, then you’re not looking hard enough, or you are in the 0.1% that do the basics well.
The reason so many organisations suffer breaches is simply down to a failure in doing the very basics of security. It doesn’t matter how much security technology you buy, you will fail. It is time to get back to basics.