Security software and privacy specialist Avast and Borsetta, a firm specialising in AI software-defined secure computing, have joined Intel to launch a new AI security research project designed to advance and develop technologies that bring privacy and trust to decentralised artificial intelligence (AI).
The Private AI Collaborative Research Institute was first mooted by Intel’s University Research and Collaboration Office earlier in 2020, and the project is now formally kicking off with the announcement of nine research projects that it will fund around the world.
It is dedicated to encouraging and supporting research that will solve real-world challenges for society, with an ethical approach to AI development at its core. The initial focus of the project will be on addressing five challenges resulting from current centralised approaches to AI.
The partners believe that by decentralising AI and moving AI analytics to the network edge, they can liberate data from silos, protect privacy and security, and maintain efficiency.
Richard Uhlig, Intel Labs vice-president and director, and Intel senior fellow, said: “AI will continue to have a transformational impact on many industries, and is poised to become a life-changing force in the healthcare, automotive, cyber security, financial and technology industries.
“That said, research into responsible, secure and private AI is crucial for its true potential to be realised. The Private AI Collaborative Research Institute will be committed to advancing technologies using ethical principles that put people first and keep individuals safe and secure.
“We invited Avast and Borsetta to join us on our mission to identify the true impact of AI on the world around us. We are excited to have them on board to mitigate potential downsides and dangers of AI.”
Avast’s CTO Michal Pechoucek added: “With our skilled AI research team, and Avast AI and Cybersecurity Laboratory located on campus at the Czech Technical University, we are already witnessing the great results from our scientific research into the intersection of AI, machine learning and cyber security.
“Industry and academic collaboration is key to tackle the big issues of our time, including ethical and responsible AI. As AI continues to grow in strength and scope, we have reached a point where action is necessary, not just talk. We are delighted to be joining forces with Intel and Borsetta to unlock AI’s full potential for keeping individuals and their data secure.”
Borsetta CEO Pamela Norton said she strongly believed in driving privacy-preserving frameworks to support a future, AI-empowered world.
“The mission of the Private AI Collaborative Institute is aligned with our vision for future-proof security where data is provably protected with edge computing services that can be trusted,” she said. “Trust will be the currency of the future, and we need to design AI embedded edge systems with trust, transparency and security while advancing the human-driven values they were intended to reflect.”
The initial research projects will be conducted at Carnegie Mellon University, the University of California San Diego and the University of Southern California in the US, the Universities of Toronto and Waterloo in Canada, the Technical University of Darmstadt in Germany, the Université Catholique de Louvain in Belgium, and the National University of Singapore.