
The Security Interviews: Red gets automated

We speak to Jack Stockdale, CTO of Darktrace, about Cambridge’s strong data analytics and artificial intelligence links and the role of AI in cyber security

Jack Stockdale joined Darktrace as its chief technology officer when the company began in 2013. He is responsible for overseeing the development of Bayesian mathematical models and artificial intelligence (AI) algorithms that underpin the company’s AI-based threat detection system.

What is interesting from the discussion with Stockdale is the strong data science community that has been built in and around the University of Cambridge. Describing his education and the switch to computer science, Stockdale says: “I went to university to study particle physics in the late 1990s. When Google came on the scene, I switched to computer science, which really grabbed me. I later moved to Cambridge as a coder.”

Asked about Cambridge’s strong association with data science and AI, Stockdale says the late professor Bill Fitzgerald, who was head of Cambridge University’s signal processing laboratory, tutored a number of people in Bayesian inference, the technique Darktrace uses in its intelligent threat management system. Bayesian inference is also widely applied in image restoration, medical imaging, bioinformatics, data mining and data classification.
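At its core, Bayesian inference updates a belief as evidence arrives. The sketch below is a toy illustration of that idea applied to threat scoring, with invented probabilities; it is not Darktrace's actual model:

```python
# Toy Bayesian update for network threat scoring (hypothetical numbers,
# not Darktrace's model): given a prior probability that a device is
# compromised and the likelihood of an observed event under each
# hypothesis, Bayes' rule yields the posterior probability.

def bayes_update(prior, likelihood_if_threat, likelihood_if_benign):
    """Return P(threat | evidence) via Bayes' rule."""
    evidence = likelihood_if_threat * prior + likelihood_if_benign * (1 - prior)
    return likelihood_if_threat * prior / evidence

# Start with a low prior; an unusual outbound connection is far more
# likely under the "compromised" hypothesis, so the posterior rises.
p = 0.01
p = bayes_update(p, likelihood_if_threat=0.6, likelihood_if_benign=0.05)
print(round(p, 3))  # → 0.108
```

Each new observation can feed the posterior back in as the next prior, which is what makes the approach well suited to streams of network events.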

“It is about dealing with computation in the same way as the brain,” says Stockdale, adding that those who choose to study this area tend to have a link to the university, and to the signal processing lab at Cambridge. “These are key to having the guts to do things very differently,” he says.

According to Stockdale, although the community of people developing such AI systems is not huge, “I see a long line of clever ideas, from self-driving cars and fraud detection to our approach to solving modern cyber threats – these are similar problems, due to the way computers interact with the real world”.

Along with Darktrace, several prominent companies have emerged from the concentration of AI and advanced data analytics businesses in Cambridge. In fact, there is a strong link between Stockdale and the business empire of Autonomy founder Mike Lynch, who set up the Cambridge angel investor Invoke Capital.

From 2012 to 2013, Stockdale served as chief architect at Invoke Capital, which seed-funded Darktrace. Between 2006 and 2011, he worked at online video company blinkx, which spun out of Autonomy in 2007. Before that, Stockdale worked as a technical director at Autonomy from 2002 to 2006. “Darktrace is my third startup working with very large datasets,” he says.

Stockdale and his development team in Cambridge were recognised for their outstanding contribution to engineering by the Royal Academy of Engineering MacRobert Innovation Award Committee in 2017 and again in 2019.

Discussing the rationale behind Darktrace, Stockdale says: “We get to a point where regular cyber attacks are a board-level problem. We have a long-term vision. Darktrace uses self-learning AI to respond to cyber threats.”

The idea is to understand the business looking for protection against cyber threats, he says. “We look at the customer and the business. We truly believe you can’t protect yourself unless you understand yourself.”

While cyber security tends to focus on a reactive approach to tackling new attacks as and when they happen, the cyber security industry needs to be more proactive, says Stockdale. “You need to get into the mindset of an attacker, learn your enemy with a new model of AI using attack path modelling.”
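Attack path modelling treats the estate as a graph of reachability and asks which routes an attacker could take to a critical asset. The sketch below is a minimal illustration of that idea; the graph, asset names and breadth-first enumeration are invented for this example, not Darktrace's implementation:

```python
# Toy attack path modelling: represent the estate as a directed graph of
# "can reach" relationships and enumerate the simple paths an attacker
# could take from an entry point to a critical asset.
from collections import deque

def attack_paths(graph, entry, target):
    """Breadth-first enumeration of cycle-free paths from entry to target."""
    paths, queue = [], deque([[entry]])
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == target:
            paths.append(path)
            continue
        for nxt in graph.get(node, []):
            if nxt not in path:  # skip nodes already on this path (no cycles)
                queue.append(path + [nxt])
    return paths

# Hypothetical estate: a phished laptop can reach a file server directly
# or via an internal wiki; the file server can reach the domain controller.
estate = {
    "phished-laptop": ["file-server", "wiki"],
    "wiki": ["file-server"],
    "file-server": ["domain-controller"],
}
for p in attack_paths(estate, "phished-laptop", "domain-controller"):
    print(" -> ".join(p))
```

Ranking such paths by likelihood or impact tells a defender which links to break first, which is the proactive, attacker's-eye view Stockdale describes.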

In effect, Darktrace uses machine learning modelling to get into the mindset of attackers before an attack occurs. Going forward, Stockdale says: “We think this will be a key part of cyber security. Most AI is supervised, built on models based on human supervision. But think about a self-healing system. We build self-optimising and self-reinforcing loops.”

The technology at Darktrace uses unsupervised, self-learning AI, says Stockdale. “Rather than a system that knows everything, we’ve built a system that has the capacity to learn. It builds a very bespoke model of the IT environment, without any human tuning.”
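The general idea behind such unsupervised “pattern of life” modelling can be sketched very simply: learn a per-device baseline from observation alone, then flag measurements that deviate strongly from it. The example below is an illustration of that general principle under invented data, not Darktrace's proprietary approach:

```python
# Minimal unsupervised anomaly detection sketch: build a statistical
# baseline of a device's behaviour from past observations, with no
# labels or human tuning, then flag large deviations from it.
from statistics import mean, stdev

class Baseline:
    def __init__(self, observations):
        self.mu = mean(observations)
        self.sigma = stdev(observations)

    def is_anomalous(self, value, threshold=3.0):
        """Flag values more than `threshold` standard deviations from the mean."""
        return abs(value - self.mu) > threshold * self.sigma

# Daily outbound megabytes for one device (made-up figures).
history = [102, 98, 110, 95, 105, 99, 101, 97]
model = Baseline(history)
print(model.is_anomalous(104))  # within the learned normal range → False
print(model.is_anomalous(900))  # exfiltration-sized spike → True
```

Real systems model many correlated signals per device rather than one, but the property Stockdale describes is the same: the model is bespoke to each environment because it is learned from that environment's own traffic.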

As computers are increasingly interconnected with the real world, Stockdale believes the opportunities for cyber attack become ubiquitous. “In cyber security, you need to win every single time,” he says. “You need to have a single piece of technology to protect them all and you need to make the defender’s job easy.”

He believes cyber security is a “human-level problem”, but also feels that there is an arms race. “You absolutely need to get machine versus machine in cyber space,” he says.

For Stockdale, AI presents an unfair advantage. “We need to use AI on defence,” he says.

“Attacks are ongoing all the time. Most are weakness indicators,” he adds. “We have AI analysts taking the job of stressed human beings who can’t manage all the red alerts.”

Moving forward, Stockdale believes the role of IT security professionals will be elevated to where they can focus on how to mitigate the next-level attack and how they spend the budget. “While zero-day attacks have a massive impact, most attacks are much more bespoke,” he warns. “There are threats that never exist in the real world, and you can’t predict them.” 

Many organisations operate a “red team” to deal with the most serious cyber threats, and Stockdale believes this team will increasingly need to run simulated attacks. “When there are limited resources, use an AI red team,” he says – which is what Darktrace wants to achieve.
