There are several positive trends in cyber defence strategies that organisations should be adopting to once again manage cyber risks to acceptable levels, says Stephen Bonner, cyber risk partner at Deloitte.
“First, more security-mature organisations are using intelligence-based red teaming to help answer questions about how secure they really are,” he told Computer Weekly.
Red teaming is the practice of rigorously challenging plans, policies, systems and assumptions by adopting an adversarial approach. A red team may be a contracted external party or an internal group that uses strategies to encourage an outsider perspective.
“By accurately simulating a plausible adversary as a partial surprise against the defending team, you get a realistic measure, not of individual point controls, but of the coordination and correlation across those components, to see how well the organisation actually responds,” said Bonner.
“Penetration testing tended to be bright people turning up and telling you that you are stupid, but this sense of having a sparring partner helps you get better. Following on from red teaming is the purple team concept of melding attack and defence together to make the defenders better, rather than scoring points off them by finding problems that they probably already knew about, but did not have the support to fix.
“That is an interesting and positive change that has been taken up by regulators around the world, and I am seeing a lot of boards using it as a shorthand way of getting their heads around this problem and understanding it. It is less about failing a test and more about what we have learned, and how much better we now are because we have these insights and better understand the risk we face.”
Another positive approach by organisations that have realised that adversaries will always find a way in is to segregate or segment corporate IT environments so that problems in one part of it do not spread to other parts, said Bonner.
“We also see the idea of using containers at a data level and a network level and within the design of system architectures,” he said. “Understanding that there are domains within your environment, and that this gives you opportunities to identify things crossing between those domains and apply controls at those points, can be applied effectively within internal networks.”
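The idea of identifying traffic that crosses between domains and applying controls at those points can be sketched as a default-deny policy check. This is a minimal illustration, not any particular product's API; the zone names, ports and the `ALLOWED_FLOWS` table are all hypothetical:

```python
# Hedged sketch of domain-crossing controls: flows between zones are
# denied unless explicitly whitelisted. All names here are illustrative.

# (source zone, destination zone) -> set of permitted ports
ALLOWED_FLOWS = {
    ("web", "app"): {443},
    ("app", "db"): {5432},
}

def flow_permitted(src_zone: str, dst_zone: str, port: int) -> bool:
    """Return True only if this src->dst flow on this port is whitelisted."""
    return port in ALLOWED_FLOWS.get((src_zone, dst_zone), set())

# With a default-deny table, the web tier can never reach the database
# directly, even if an attacker compromises a web host:
assert flow_permitted("web", "app", 443)
assert not flow_permitted("web", "db", 5432)
```

The key design point is the default: anything not listed is blocked, which is the inverse of the "open and connected" state Bonner describes later.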
Along similar lines, Bonner said he has seen a lot of work on managing privileged access. “The problem in organisations that don’t have good control over privileged access is that once an attacker gets privileged credentials, they can start stepping across these boundaries and making the recovery much harder,” he said.
“So, both from protecting against the insider threat and external and just getting operational efficiency, moving credentials into better stores rather than individuals having passwords and continual access has made a huge difference in organisations that have gone down this route.”
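Moving credentials into better stores typically means replacing standing passwords with short-lived, audited check-outs from a broker. The sketch below is illustrative only, assuming a simple in-memory store; real products (privileged access management tools, secrets vaults) add approval workflows, session recording and rotation:

```python
# Hedged sketch of a credential broker: privileged access is issued as a
# one-time, short-lived token rather than a password an individual holds.
# The CredentialVault class and its storage are hypothetical, not a real API.
import secrets
import time

class CredentialVault:
    def __init__(self, ttl_seconds: int = 900):
        self.ttl = ttl_seconds
        self._leases = {}  # token -> (privileged account, expiry time)

    def checkout(self, user: str, account: str) -> str:
        """Issue a one-time token for a privileged account, with an audit trail."""
        token = secrets.token_hex(16)
        self._leases[token] = (account, time.time() + self.ttl)
        print(f"AUDIT: {user} checked out {account}")
        return token

    def redeem(self, token: str):
        """Exchange the token exactly once, and only while it is still valid."""
        account, expiry = self._leases.pop(token, (None, 0.0))
        return account if account and time.time() < expiry else None
```

Because the token is removed on first redemption and expires on its own, a stolen credential has a much shorter useful life than a static password, and every issuance leaves a record.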
One of the biggest problems in recent times has been the lack of segregation or segmentation of corporate networks, said Bonner. This has meant that once attackers have breached the network perimeter, they have been able to access most parts of the network unhindered, as the 2014 cyber attack on Sony Pictures demonstrated. Historically, however, corporate networks have been flat because of business pressures and the associated connectivity and time considerations.
“The problem is that every time you let a connection through that is business-critical, everyone is happy, but every time you block one, everyone is unhappy,” he said. “As a result, the default state is ‘open and connected’ and so it takes time, effort and energy to move up from that state.
“The challenge is that the moment you take your eye off it, it starts to decay back into that state. It is not that people have done a bad thing to end up in this place, but that it takes constant effort and energy to move out of that state into a more protective one.”
The good news, said Bonner, is that cloud and infrastructure as code offer real hope of the technical change that will enable companies to take control, finally gain a clear picture of what the IT environment is, what it is designed for and what it is doing, and thereby make segregation easier.
Another challenge in recent years has been that a culture of security is not yet embedded in many organisations, but this is changing as firms wake up to the advantages of baking security and privacy by design into their processes, with fintechs [financial technology companies] among the leaders, said Bonner.
“Increasingly, fintechs use security as a feature rather than a secondary thing that they will add later when customers demand it,” he said. “Security has become part of the discussion, and you see them talking of their success at identifying fraud. In the fintech industry, there is much more of that mindset of protecting things by default. There is not quite that sense yet in other industries, but hopefully it will spread.”
Increasing investment in recovery capabilities is another positive trend, said Bonner. “We are seeing a lot of investment coming from the realisation that recovery from an outage is critically important and that you can’t guarantee that a dedicated attacker will not be capable of causing an outage,” he said.
“I don’t think anyone can claim they have built a system that is robust in the face of continuous, skilled attack, but we are now seeing efforts in terms of recovery. Companies are starting to think about what they need to store outside of their environment that will enable them to rebuild.
“They are also thinking about what they need to do to protect critical assets so that they are safe, even against a relatively sophisticated attack.”
In the heyday of mainframe computing, Bonner said teams had a “battlebox” with all the data, systems and contacts they needed. “It is easy to say we haven’t made any progress, but what is fascinating is that until these more advanced threats came along, we had made so much progress in building IT systems that were actually resilient and capable so people didn’t have to worry about catastrophic failures,” he said.
“We moved from mainframes going down on a regular basis to distributed systems with failover, active/active mirroring and RAID. All that stuff actually worked, to the point where we could become relaxed about catastrophic threats, until skilled and highly motivated adversaries came along.
“We tend to look at the failures and miss the successes. After all, we have built a globally connected financial system running over the internet that allows millions of users around the world to buy things online in a much safer way than they could in the physical world before. I will take that as a win.”
Noting that there are some things that were fixed so well that they stopped being a worry, Bonner said that in the face of a more advanced threat, information security professionals have to dig into the past to find the way forward.
“Catastrophes across the globally connected environment haven’t happened because of active work by a lot of people coming together in communities,” he said. “But that doesn’t mean we should put up our feet, because there is definitely more to be done. Some problems persist, but a huge amount of improvement has been made that has forced attackers to up their game and work a lot harder.”
Strides have also been made in sharing security knowledge, said Bonner, but a lot more work and improvement is needed in this area. “As new people connect to this environment, we need to get better in educating them,” he added.
Another major challenge
Keeping software security updates, or patches, up to date has been another major challenge for organisations, and in recent years failures in this area have led to data breaches as attackers continually exploit known software vulnerabilities for which patches already exist. But here again, the future is encouraging, said Bonner.
Cloud will not only help to ensure that things are configured correctly by default and allow CISOs to focus less on infrastructure, but also help to deal with the challenge of patching, he said.
“This problem of patching is all about the fact that we’ve built something unique and special, and people are concerned about what will happen if we put changes into it,” said Bonner. “But in the cloud world, I can just replace it. So when it comes to patching, organisations can have their proper environment running and, next to it, spin up a copy with patches.
“Then they can move 1% of the traffic across to ensure that it is working fine, then 10% to load test it, and then 100%. Wait a couple of hours to ensure it is working fine, then throw away the old one. You don’t need to rebuild. You simply delete it and drop in a new one that you know is in a known good state.
“Once that mindset is established, it empowers organisations to do agile, rapid movement. You can fully patch every device and environment within hours. In the legacy world, that is a rolling two-month patch cycle, but in the cloud, it can be achieved very quickly. And you have a brilliant fall-back, because the other one is there for long enough to prove it is working.
“In the legacy world, you put a patch into a system that you have nurtured, and if it falls over, you have nothing to go back to. You are recovering from tape from last night’s backup, which is eight hours to recovery.”
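The staged cut-over Bonner describes can be sketched as a canary rollout: the patched copy runs beside the live one, traffic shifts in steps (1%, 10%, 100%), and the old environment stays as a fall-back until the new one has proved itself. The routing rule and health check below are illustrative placeholders, not a real load-balancer API:

```python
# Hedged sketch of the 1% -> 10% -> 100% cut-over to a patched copy,
# with the old environment kept as a fall-back at every stage.
# Environment names and the health check are hypothetical.

def route(request_id: int, canary_weight: float) -> str:
    """Deterministically send a fixed fraction of traffic to the patched copy."""
    return "patched" if (request_id % 100) < canary_weight * 100 else "live"

def rollout(requests, healthy) -> str:
    """Advance the traffic split only while the patched copy stays healthy."""
    for weight in (0.01, 0.10, 1.0):
        served = [route(r, weight) for r in requests]
        if not healthy(served):
            # The old environment is still running, so falling back is instant.
            return "rolled back to live"
    return "patched copy promoted; old environment retired"
```

If the patched copy misbehaves at any stage, traffic simply stays on (or returns to) the untouched live environment, which is the “brilliant fall-back” the quote refers to; only after full promotion is the old copy deleted.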
There are fundamental technologies that, if embraced properly, start to solve some of the really fundamental problems, such as knowing what an organisation is running and how it is configured, said Bonner.
“If we can keep stuff in a known good state and rebuild it rapidly, that gives us huge extra capability. I see a number of CISOs embracing that and building these capabilities in, so there is hope.”