According to Darpa, the world’s growing dependence on computer systems demands smart, autonomous security systems.
The CGC, dubbed the world’s first automated network defence tournament, aims to push the state of the art in automatic program analysis to find ways to fix bugs faster than humans can.
Competing automated systems are given several compiled programs and asked to find inputs which crash the target programs, and to generate new versions secured against those crashes.
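The crash-finding half of that task can be pictured with a toy fuzzing loop. The snippet below is a minimal sketch, not any competitor's actual system: `target_program` is a hypothetical stand-in for a compiled challenge binary, with a deliberate bug that a random fuzzer can trip over.

```python
import random

# Hypothetical stand-in for a compiled challenge binary: a "parser" that
# crashes (raises) on a specific malformed input, mirroring the memory
# errors CGC systems hunt for.
def target_program(data: bytes) -> None:
    if len(data) > 4 and data[0] == 0xFF:
        raise RuntimeError("simulated crash: unchecked header length")

def fuzz(target, trials=10_000, seed=0):
    """Throw random inputs at the target; return the first crashing one."""
    rng = random.Random(seed)
    for _ in range(trials):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 16)))
        try:
            target(data)
        except Exception:
            return data  # a crashing input was found
    return None

crasher = fuzz(target_program)
print(crasher is not None)
```

A real CGC entrant would then go further, producing a patched binary that rejects the crashing input class while preserving normal behaviour.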
Mayhem and Murphy
Although final results are yet to be confirmed, the winning team was from US security firm ForAllSecure, which will receive $2m in prize money to continue developing its technology.
According to the company, its cloud-based bug-finding system uses the Mayhem symbolic execution engine and a directed fuzzer called Murphy, named after a team member’s pet cat.
The two components communicate through a PostgreSQL database, sharing test cases they find “interesting”, based on the coverage they achieve, ForAllSecure said in a blog post.
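The "interesting, based on coverage" criterion can be sketched as a coverage-gated corpus: a test case is worth sharing only if it exercises code no earlier case reached. This is an illustrative sketch only, with an in-memory class standing in for the PostgreSQL database the real components use.

```python
# Illustrative sketch of coverage-based test-case sharing: a case is kept
# (and made visible to the other engine) only if its coverage adds
# something new to the shared pool.
class SharedCorpus:
    def __init__(self):
        self.seen_edges = set()   # union of coverage across all stored cases
        self.cases = []           # test cases judged "interesting"

    def submit(self, data, edges):
        """Store `data` only if its coverage `edges` reaches new code."""
        new = set(edges) - self.seen_edges
        if new:
            self.seen_edges |= new
            self.cases.append(data)
            return True           # interesting: share with the other engine
        return False              # redundant: discard

corpus = SharedCorpus()
print(corpus.submit(b"A", {(0, 1), (1, 2)}))   # True  - all coverage is new
print(corpus.submit(b"B", {(1, 2)}))           # False - nothing new
print(corpus.submit(b"C", {(1, 2), (2, 3)}))   # True  - edge (2, 3) is new
```

In this scheme the fuzzer and the symbolic executor each benefit from the other's strengths: inputs the fuzzer stumbles on can seed symbolic exploration, and vice versa.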
The second prize of $1m went to the team behind a program called Xandra, which was created by security experts from the University of Virginia and European firm GrammaTech. The third prize of $750,000 went to the Mechanical Phish team from the University of California, Santa Barbara.
SparkCognition’s DeepArmor is aimed at protecting networks from new, never-before-seen cyber security threats by combining AI techniques such as neural networks, heuristics, data science and natural language processing with antivirus scanning to find and remove malicious files.
With most security professionals no longer trusting traditional antivirus systems, SparkCognition and a growing number of other security technology firms see AI as the next logical step, underpinning systems that can identify, analyse, learn, anticipate and adjust to cyber security threats.
DeepArmor is designed to examine every file and identify any components that are suspicious or malicious. These individually analysed components are then run through continuously evolving groups of neural networks to find potentially malicious patterns.
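The pipeline described above can be illustrated in miniature: extract per-file features, then push them through a classifier to get a maliciousness score. The features, weights and threshold below are invented for illustration (DeepArmor’s actual models are proprietary), and a single sigmoid unit stands in for the “groups of neural networks”.

```python
import math

def byte_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte; packed or encrypted payloads score high."""
    if not data:
        return 0.0
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

def score(data: bytes) -> float:
    """Combine hand-picked features with one sigmoid unit (a toy stand-in
    for a neural network); weights here are illustrative, not trained."""
    features = [
        byte_entropy(data) / 8.0,               # normalised entropy in [0, 1]
        float(b"CreateRemoteThread" in data),   # a suspicious API string
    ]
    weights, bias = [2.0, 4.0], -3.0
    z = sum(w * f for w, f in zip(weights, features)) + bias
    return 1.0 / (1.0 + math.exp(-z))           # probability-like score

print(score(b"hello world") < 0.5)                        # True: looks benign
print(score(b"\x90" * 8 + b"CreateRemoteThread") > 0.5)   # True: looks suspect
```

A production system would train such weights on large labelled corpora and retrain continuously, which is what lets it flag variants it has never seen before.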
“We are using cognitive algorithms to constantly learn new malware behaviours and recognise how polymorphic files may try to attack in the future,” said Keith Moore, senior product manager at SparkCognition.
According to Moore, this approach is necessary in the face of potentially devastating zero-day threats, which often confound and evade existing tools.
According to UK information security startup Darktrace, cyber security will in future be largely automated and driven by AI.
The company aims to be a leader in the move to this new era of information security, and is already working on the next phase of its self-learning security system to enable automatic defence.
“We believe we are the only ones at the moment who focus only on learning from the behaviours of people and systems within the business rather than on algorithms that look for known types of attacks,” Darktrace co-founder and director of technology Dave Palmer told Computer Weekly in a recent interview.
“We believe in a continuous security approach because there will always be risks, and organisations need to have the capability to deal with them and bring that risk down to a manageable level all the time – rather than having a rollercoaster situation,” he said.
Before the end of this year, Darktrace plans to release its Antigena technology. Antigena is designed to replicate the function of human antibodies, which identify and neutralise bacteria and viruses, by neutralising cyber threats automatically without human intervention.
Darktrace is researching how information security teams respond to situations, with a view to enabling the system not only to learn what they do, but also to predict what they will do and then use that information to offer better support.
Read more about artificial intelligence
- Socially aware general-purpose artificial intelligence in the form of a dog could be the ideal form factor to take over the world.
- The UK government has announced plans to allow driverless cars to be used on public roads from early next year.
- A computer program has made history by passing the artificial intelligence test set by computer science pioneer Alan Turing.
- Smart systems like IBM’s Watson, autonomous vehicles and a growing army of robots are quietly making more and more decisions every day.
“This is the kind of thing that really interests us, and is the kind of envelope-pushing, self-learning, machine-learning, AI-type stuff that we really want to get into,” said Palmer.
“An entirely AI security operations centre is not an unreasonable objective for us to have as researchers, and is certainly one of our goals, especially considering how quickly technology is moving in areas such as self-driving cars, which not long ago were considered to be pure fiction.”