Technology design needs to change so that cyber security is easier to carry out, because people, as users of technology, are key, according to a panel of security industry representatives.
“We risk a C1-level national cyber security incident in the next few years if we do not put some science and data behind cyber security and start to demystify it,” said Ian Levy, technical director at the UK’s National Cyber Security Centre (NCSC).
“I think we can stop [a C1-level incident] from happening, but the trajectory I see at the moment around how cyber security is talked about and how people put militaristic analogies around it that make people think they can’t defend themselves, is really dangerous,” he told the Symantec Crystal Ball roundtable in London.
For this reason, Levy said the NCSC wants to publish data and evidence to ensure that people really understand how to do risk management properly. “Because in the end, cyber security is just risk management, which is not fundamentally different to HR, legal or financial risk management.”
Levy also believes that the way technology tends to be designed currently makes impossible security demands on people.
As a result, he said security professionals have spent the past 25 years saying people are the weakest link. “But this is stupid,” he said. “People cannot be the weakest link [because] they are people who do jobs, and they are people who create value in their organisations.
“What this tells us is that the technical systems are not built for people. Techies build systems for techies, not normal people,” said Levy. “We [the NCSC] have started saying people are the strongest link. If you can leverage your people better, they can be the first and last line of defence in an organisation. This is the change that needs to take place. We need to stop blaming the users and start making the systems usable.”
Also focusing on the importance of people in cyber security, Jessica Barker, co-founder and socio-technical lead at cyber security consultancy RedactedFirm, said that in the face of fake news, it is important to ensure people are taught to think critically.
“If we look at the past year or so, we have had insights into the impact of fake news, or what I would call mass social engineering.
“We have become familiar with the idea of social engineering in cyber security, but think about it on this larger scale: how people with a vested interest can make up stories, or can steal particular targeted information and leak it in a selected way, to influence how people feel on a large scale,” she said.
Barker said that while this idea of changing hearts and minds is not new, the way people consume and share news stories has changed, with people typically consuming news through smartphones and tending to trust their social network more than traditionally authoritative sources.
“And this has caught us off guard. About 50% of the US public regard Facebook as their main source of news, which we really didn’t see coming. Although research suggests that only 4% of those people say they trust that news, it also suggests that it still influences the way they feel and how they see the world, but they may not be conscious of it, which is quite a scary prospect,” she said.
Fake news to target corporations
Barker predicts that while we have seen fake news targeted mainly at politics, in coming years there will be an increasing incidence of fake news targeting corporations and key individuals.
“We are seeing attempts to use technology to solve this problem, and while computers have a role in scaling up human capabilities to deal with this, what we really need to focus on is how we can build critical thought and encourage people who create and consume news to check and verify stories,” she said.
This is also something that cannot be solved by regulation alone, and content providers or publishers need to be careful in how they try to deal with it, because by flagging something as fake news, they could inadvertently promote its further dissemination.
“We have a collective responsibility to address this problem, so while regulation has a role to play, education is also important: encouraging everyone to be more considered about what they share, and teaching them to think critically and to distinguish between fact and opinion, so that they do not believe everything they read,” said Barker.
While agreeing with Ian Levy that people are “the strongest tool in a toolbox”, Peter Wood, CEO at cyber security consultancy First Base Technologies, said there is still a lot of opportunity for social engineering, especially with attackers routinely stealing legitimate credentials to impersonate people in authority and trick others into cooperating with them.
“Stealing legitimate credentials enables attackers not only to socially engineer people, but also to get into corporate systems to locate, identify, exfiltrate, damage and even manipulate data easily,” he said.
A new order of automated attacks
In terms of crystal ball gazing, Wood said although it is difficult to predict what attackers will do next, from what we have seen in the past, it seems likely that in the future organisations and individuals will be faced with a new order of automated attacks.
“Right now, the cyber criminal has automated the bog-standard attacks. We have seen that within 30 seconds of connecting a brand new computer to the internet, it is pinged, it is port scanned within 45 seconds, and if it has patches missing, it is pwned in two minutes, which must be automation,” he said.
In future, Wood said, that automation capability could be extended to the entire hacking process, using the same big data analytics that marketers use to target consumers to identify soft-target organisations, and soft-target individuals within those organisations, to get a foothold in the corporate network.
“A future challenge could be trying to identify fake from real callers if the AI [artificial intelligence] and voice simulation is good enough, and within the next five years we could see fully automated attacks right up to the exfiltration of the data being offered as a commoditised criminal service to target individuals and organisations,” he said.
Getting ahead of cyber attackers
Darren Thomson, chief technology officer and vice-president for technology services at security firm Symantec, said he shared the positive view that all is not lost, and there is a chance for organisations to get ahead of cyber attackers despite the increasing security challenges of an ever-more connected world.
Looking to the future, he predicts that non-expert technology users, a group that includes most people, are the “secret sauce” in the cyber defence systems that need to be developed.
“We have done a bad job in the security industry of training people the right way, and we have been kidding ourselves that those people care to be trained,” he said. “As a result, we are at a point in history where we are more reliant on non-experts making good IT decisions than ever before, with people being required to make decisions, for example, in response to messages about security certificates when they have no way of understanding what the message means.”
Technologies collaborating to spot ransomware
Thomson predicts a coming together of technologies such as AI and non-expert people, where systems could ask people at scale across an organisation whether they are knowingly encrypting their backups, creating an analytic that could indicate a ransomware attack is in progress.
“This combination could lead to ‘left of bang’, where ‘bang’ is the breach on a linear timeline. Currently, the [security] industry is obsessed with right of bang. When bad happens, we need to respond as quickly as possible to detect, respond and recover. But we need to get ‘left of bang’ if we are going to win this battle; we need to be able to predict.
“And my prediction is that [in future] incident response is going to fundamentally mean a different thing. The ‘incident’ will be the mere possibility of a breach. The ‘incident’ will have something to do with the energy that we are detecting that could lead to a breach. But we can only gather that insight by mashing together technologies like AI and the masses of non-experts,” he said.
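The article does not describe any concrete implementation, but the idea Thomson sketches, aggregating many simple yes/no answers from non-expert users into an organisation-wide indicator, can be illustrated with a minimal example. Everything here (the function name, the 5% threshold, the report format) is a hypothetical sketch, not Symantec's method:

```python
# Hypothetical sketch: ask users at scale "did you knowingly start
# encrypting these files?" and flag a possible ransomware outbreak
# when the share of unexplained encryption events crosses a threshold.

def ransomware_indicator(reports, threshold=0.05):
    """reports: list of (user_id, knowingly_encrypted) tuples.

    Returns True when the fraction of users reporting encryption
    they did NOT initiate reaches the threshold (assumed 5% here).
    """
    if not reports:
        return False
    unexplained = sum(1 for _, knowing in reports if not knowing)
    return unexplained / len(reports) >= threshold

# Usage: 3 of 40 users report encryption they did not initiate (7.5%).
reports = [(f"user{i}", True) for i in range(37)] + [
    ("user37", False), ("user38", False), ("user39", False)]
print(ransomware_indicator(reports))  # True: 3/40 = 7.5% >= 5%
```

In practice such a signal would be one input among many (file-entropy changes, backup anomalies), but the sketch shows the "left of bang" intent: a probabilistic early warning rather than post-breach detection.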
As a result, Thomson said the security industry needs to be populated with a different kind of person. “We need more security development teams that are made up of sociologists and psychologists who know how to engage the masses of non-technical users and how they will respond.
“For example, Symantec is doing some work around what can be done to a user interface to make it uglier or less attractive as the risk level increases to provide visual cues and discourage users from interacting with risky websites,” he said. “We need to think about how to make people care.”