“In like a lion, out like a lamb” is an accurate description of 2023. It was a challenging start to the year with multiple bank failures, ongoing tech sector layoffs, and the emergence of corporate austerity measures. However, by the spring of 2023, inflation had eased; GDP growth had returned; and innovation had started to offer organisations renewed optimism. Indeed, 2023 saw increased investment in what we’re calling the biggest technology and business change of our lifetime – generative AI (genAI).
In 2024, there will be a continued focus on innovation as more businesses embrace rapid experimentation and launch new genAI initiatives. However, it will be crucial for organisations to safely transition from experimentation to implementation of new AI-based technologies. According to Forrester’s 2023 data, 53% of AI decision-makers whose organisations have made policy changes regarding genAI are evolving their AI governance programmes to support AI use cases. As such, security, risk, and privacy leaders will need to balance the speed of innovation with governance and accountability beyond regulatory mandates amidst a backdrop of interconnected risks.
Insecure AI code and OpenAI
Unfortunately, as a result of this increased focus on genAI, 2024 is likely to see at least three data breaches publicly blamed on AI-generated code. Developers increasingly rely on AI development assistants – known as TuringBots – to generate code and boost productivity. Whilst many organisations will responsibly scan that code for security flaws, we will also see overconfident developers take a more trusting stance and assume AI-generated code is secure.
Forrester also predicts that in 2024, an app using ChatGPT will be fined for its handling of PII. We have seen regulators focus their lens on genAI, with OpenAI under heavy scrutiny from regulators – and for good reason. In Europe, we are already seeing an ongoing OpenAI investigation in Italy, whilst lawyers in Poland are dealing with a new lawsuit for several potential GDPR violations. The European Data Protection Board has also launched a task force to coordinate enforcement actions against OpenAI’s ChatGPT.
The problem is that, while OpenAI may have the resources to defend itself against regulators, many third-party apps running on ChatGPT do not. Furthermore, some of these apps bear an even greater risk of fines than OpenAI itself: they inherit risks from their third-party tech provider but lack the financial and technical resources needed to mitigate them. Companies will consequently need to identify apps that increase their risk exposure and double down on third-party risk management.
Zero-trust explosion and human error
According to Forrester’s predictions, we will also see roles with zero-trust titles double across public and private sectors in 2024. At the time of writing, there were 81 zero-trust positions advertised on LinkedIn in the US, six in the UK, and one in Singapore. However, the combination of an increase in zero-trust mandates and executive orders in the US, and zero-trust finally going mainstream in APAC and EMEA, will lead to a growing need for cyber security roles dedicated to zero-trust architecture, engineering, governance, strategy, and leadership. As a result, not only will the number of roles double in each region over the next year, but these roles will start to emerge in countries such as Australia and India.
Human error will also play a prominent part in 2024 data breaches. Human error – an employee or user unwittingly enabling a data breach through privilege misuse, use of stolen credentials, or social engineering – has always been a challenge for security and risk professionals, with many global and local breach reports estimating that it accounts for 74% of breaches. However, Forrester predicts that this will rise to 90% of data breaches in 2024, primarily due to the rise of genAI and the prevalence of communication channels that make social engineering attacks simpler and faster. This increase will also expose the limits of what is often touted as a silver bullet for human-enabled breaches – security awareness and training. In 2024, we will therefore see more CISOs move to adaptive human protection, taking a data-driven approach to behaviour change and adding human risk to security risk management.
Cyber insurance: vendors will matter
Finally, in 2024, Forrester expects two security technology vendors to be designated as red flags by insurance carriers. After years of a data deficit, cyber insurance providers now possess valuable insights from security services, tech partnerships and insurance claims, and these will start to shape how carriers review and process claims. We will likely see insurers become more prescriptive about secure practices and the specific security technologies used by policyholders. It will no longer be sufficient simply to have a security solution in place; using vendors deemed risky by insurers will lead to increased premiums, additional scrutiny, new vulnerability management requirements, and possibly denial of coverage unless policyholders switch to a less risky option.
Alla Valente is a senior analyst at Forrester. She specialises in governance, risk and compliance, third-party risk management, contract lifecycle management and supply chain risk.