I missed the opening of this year’s Infosecurity Europe as I was speaking in Zurich. I did however catch the end, though there was little to fire my attention. The theme was dated, the slogans on stands (e.g. “security re-imagined”) were unrealistic, and the talks were far from original. The exhibition however was much bigger and even more crowded. As usual, the conference was essentially a huge networking event, as well as a chance to seek out what might be new in cyber security.
Just about everyone in security attends at least one day of Infosecurity. I bumped into dozens of old acquaintances and met lots of new people, ranging from IT researchers to behavioral psychologists. This conference seems to attract a more diverse set of people than other big security conferences.
Little innovation was on show though there is much happening behind the scenes. For me, the underpinning trend is the continuing growth in the use of artificial intelligence (AI) in security products. Such technology is becoming mainstream. It has its advantages and shortcomings.
Things have certainly changed. Fifteen years ago, when I was promoting the use of AI, it was a dirty word in many academic circles. The Professor running Microsoft’s research labs in Cambridge told me he binned anything he received on the subject. Yet today Cambridge is the home of the most hyped security product in this space: Darktrace, a learning system inspired by the human immune system.
Clearly someone has been paying attention to my long-promoted advice that security technologies need to steal ideas from nature, especially the human immune system. Back in 1999 I sponsored a three-year project to develop a fraud detection system based on the human immune system. The technology worked to an extent, but was a long way from being ready for business deployment.
There are huge challenges in developing AI systems. We don’t fully understand the human immune system, and we can’t keep up with the accelerating changes going on across a modern, global enterprise. I always imagined that perfecting such technology would be a long haul. Professor Stephanie Forrest at the University of New Mexico, for example, has been trying to develop intrusion detection systems based on this approach for two decades.
Perhaps we just needed Mike Lynch’s magical Bayesian logic. Certainly something has accelerated the maturity of the technology which now appears to be ready for prime time.
But be warned. False positives might be acceptable in a research, intelligence or relatively small environment. In a large enterprise however they can be time consuming to process and deadly if you ignore them. We’ve all heard about the CISO who lost his job after not acting on an intrusion alert.
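The scale problem here is essentially the base-rate fallacy, and a quick back-of-the-envelope calculation makes the point. The sketch below uses entirely hypothetical numbers (event volume, intrusion rate, detector accuracy) purely for illustration; they are not drawn from any real product or deployment.

```python
# Illustrative only: why false positives swamp analysts at enterprise scale.
# All figures below are hypothetical assumptions, not vendor data.

def alert_precision(events_per_day, intrusion_rate, tpr, fpr):
    """Fraction of alerts that correspond to real intrusions (Bayes' rule)."""
    intrusions = events_per_day * intrusion_rate
    benign = events_per_day - intrusions
    true_alerts = intrusions * tpr    # real intrusions correctly flagged
    false_alerts = benign * fpr       # benign events wrongly flagged
    return true_alerts / (true_alerts + false_alerts)

# A detector with a 99% detection rate and only a 1% false-positive rate,
# watching a million events a day where 1 in 100,000 is a real intrusion:
p = alert_precision(1_000_000, 1e-5, tpr=0.99, fpr=0.01)
print(f"{p:.2%}")  # roughly 0.10% -- about 10,000 alerts a day, ~10 genuine
```

Even a detector that sounds excellent on paper produces thousands of false alarms for every genuine incident, which is exactly why alerts get ignored.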
As I’ve pointed out for the past fifteen years, the future of security will be probabilistic rather than deterministic. But it’s a slow change. Don’t expect instant results.