How GenAI is breaking traditional cyber security awareness tactics
With threat actors exploiting the growing use of generative AI tools and the prevalence of shadow AI, organisations must strengthen their security programmes and culture to manage the rising risk
The rapid adoption of generative AI (GenAI) is exposing weaknesses in traditional cyber security awareness efforts, as employees frequently use unsanctioned GenAI tools in the workplace and unknowingly put sensitive company information at risk.
The integration of GenAI tools into daily workflows outpaces existing security controls, while threat actors are exploiting the same technology to sharpen their campaigns. This combination is creating risks that many cyber security programmes were not designed to manage.
A recent Gartner survey found that over 57% of employees use personal GenAI accounts for work, and 33% admit to inputting sensitive work information into public or unapproved GenAI tools. The challenge extends to devices as well: 36% either download or use unapproved GenAI tools on their work machines. These behaviours significantly increase the risk of cyber incidents and regulatory non-compliance.
At the same time, threat actors are leveraging GenAI to launch highly sophisticated deepfake, phishing and social engineering attacks. Gartner research shows 35% of organisations have faced deepfake incidents and 84% of cyber security leaders have seen phishing attacks become more advanced in recent years. The number of AI-powered malicious emails has also doubled over the past two years, making detection increasingly difficult for employees.
These trends carry real business consequences if left unaddressed. Privacy and intellectual property risks can escalate into costly incidents with long-term reputational damage, ultimately affecting broader business outcomes.
Organisations urgently need to strengthen their security behaviour and culture programmes to foster vigilance and drive behavioural change among employees. This will help staff manage their interactions with GenAI, with a focus on how AI tools are used at every level.
Set rules for responsible GenAI use
Many employees are already using AI tools for routine tasks, which means guidance must focus on how to handle sensitive data, intellectual property and data privacy. It should emphasise principles such as data minimisation, so employees understand what information can and can’t be entered into GenAI environments.
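As a concrete illustration of data minimisation in practice, the sketch below shows a simple pre-submission check that flags data types an organisation might rule out of public GenAI tools. The patterns and labels are illustrative assumptions rather than a prescribed control; a production programme would typically rely on a maintained data loss prevention (DLP) or classification service instead of a hard-coded list.

import re

# Hypothetical data types an organisation might classify as
# "never enter into a public GenAI tool". Illustrative only.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "internal project codename": re.compile(r"\bPROJECT-[A-Z]{3,}\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the kinds of flagged data found in a draft GenAI prompt."""
    return [label for label, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

draft = "Summarise PROJECT-ATLAS financials and email jane.doe@example.com"
findings = check_prompt(draft)
if findings:
    print("Blocked: remove", ", ".join(findings), "before submitting.")
else:
    print("Prompt contains no flagged data types.")

Run against the sample draft, the check flags the email address and the project codename, giving the employee a clear, teachable moment before any data leaves the organisation.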
To help teams co-manage risks while enabling agile adoption, organisations must clarify ownership across the GenAI adoption cycle – who is responsible, accountable, consulted and informed (RACI) for each activity.
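To illustrate what such an ownership mapping can look like, the short sketch below records hypothetical RACI assignments for a few GenAI adoption activities. The activities and role names are assumptions for demonstration only; each organisation would populate its own matrix.

# Hypothetical RACI assignments for stages of a GenAI adoption cycle.
# Role names and activities are illustrative, not prescriptive.
RACI = {
    "Approve new GenAI tool": {
        "R": "IT procurement", "A": "CISO",
        "C": "Legal", "I": "All staff"},
    "Define acceptable use policy": {
        "R": "Security team", "A": "CIO",
        "C": "Data privacy", "I": "Line managers"},
    "Monitor shadow AI usage": {
        "R": "Security ops", "A": "CISO",
        "C": "HR", "I": "Executives"},
}

for activity, roles in RACI.items():
    assigned = ", ".join(f"{k}: {v}" for k, v in roles.items())
    print(f"{activity} -> {assigned}")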
In addition, avoid creating entirely new policies until existing data governance, privacy and acceptable use policies have been fully leveraged and adapted for GenAI. New or adapted policies should be consistent with the organisation’s code of conduct and corporate values to prevent confusion and misalignment during implementation.
Engage senior leadership
Senior executives must be engaged in risk decisions early and proactively to address the operational impacts of GenAI-driven attacks and policy violations. With leadership aligned, organisations can establish a more consistent approach to AI risk.
This involves building a robust governance framework, which reinforces clear usage policies, manages secure AI development and ensures regulatory compliance.
Failing to secure senior leadership buy-in for GenAI governance and behaviour change initiatives can undermine efforts to operationalise effective risk management.
Strengthen employee defences
Security behaviour and culture programmes must include AI-specific risk education, deepfake scenarios and advanced attack simulations.
Encourage employees to validate unusual requests and to understand that brief operational delays may be necessary when additional security verification is required. Streamlined reporting processes and incentives are also essential so employees can quickly raise concerns about suspicious AI interactions.
Don’t rely solely on generic awareness training. Instead, regularly update training content to address emerging GenAI-enabled social engineering tactics and deepfake scenarios.
Embed secure daily practices
Promote AI literacy and transparency across the organisation so employees can adopt AI tools safely and report unusual AI outputs. It is also important to stress the need for human oversight of all generated outputs to catch incorrect content.
To sustain behaviour change and foster a security-conscious culture around GenAI, ongoing training and reinforcement are required, not just policy updates or tool restrictions.
Richard Addiscott is vice-president analyst at Gartner, focused on cyber security risk management. He will be speaking at the upcoming Gartner Security & Risk Management Summit in Sydney, 16-17 March 2026.
