
Rise of the cyber clones: When seeing isn’t believing

It is frighteningly easy to clone someone else's identity using readily available artificial intelligence tools

Armed with little more than a Flipper Zero, a few apps that specialise in deepfakes and a desire to cause mischief, I recently proved how easy it is to realistically impersonate any person on the planet.

I can be the hottest actor in Hollywood, without leaving my home town of Bournemouth. I can be Jason Bourne without scaling any buildings, or I can be the Wolf of Wall Street, sadly without ever dating Margot Robbie.

How? Welcome to the age of AI.

Today we live in a world where anything is possible through the creativity and intuition of AI and deepfakes.

We can clone influential figures and spread misinformation.

We can break into offices, pretend to be the CEO and take a selfie from the boardroom, which is exactly what I did recently.

I was on a mission to prove that AI, teamed with malicious intentions, has opened an entirely new and dangerous realm in the cyber domain, offering criminals limitless potential to carry out Mission Impossible-style scams.

But to prove my point, I needed a willing candidate.

Despite having already exhausted many of my friends in my research-driven hacking endeavours, I found a willing participant in Jason Gault, the founder of TeamJobs.

The challenge was set: I was to break into Jason’s offices and get past all his layers of security without anyone suspecting a thing.

Jason was a perfect candidate for my experiment. He has thousands of followers on LinkedIn, great offices in Dorset, and he was naïve enough to believe what I was proposing was impossible.

Spoiler alert – he was wrong.

In the first stage of my experiment, I was able to steal Jason’s office RFID card in a meeting, which I knew would grant me access to the building.

I did this using a little device called the Flipper Zero. A Flipper Zero can read, store and emulate almost any signal it meets, from car key fobs to hotel keys, office badges and contactless cards. It’s currently sold on Amazon for under £180, which is small change considering the money-making potential it offers criminals.
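For the technically curious, here is a rough idea of why that cloning is so trivial. The sketch below is mine, not anything from the Flipper’s firmware: it encodes and decodes the published 64-bit frame format of an EM4100-style card, one of the simple low-frequency badge types such devices can read and replay, using a made-up card ID. Note what is absent: no challenge-response, no encryption, just a fixed number that any reader within range can capture once and replay forever.

```python
# Illustrative sketch, not Flipper firmware: the 64-bit frame of an
# EM4100-style low-frequency badge, a common format for simple office
# RFID cards. The card ID below is fictional.

HEADER = [1] * 9  # nine leading '1' bits mark the start of a frame

def encode_em4100(card_id):
    """Build a 64-bit EM4100 frame for a 40-bit card ID."""
    nibbles = [(card_id >> (4 * (9 - i))) & 0xF for i in range(10)]
    bits = HEADER[:]
    for nib in nibbles:                   # 10 rows: 4 data bits...
        data = [(nib >> (3 - j)) & 1 for j in range(4)]
        bits += data + [sum(data) % 2]    # ...plus one even-parity bit
    for col in range(4):                  # 4 column-parity bits
        bits.append(sum((n >> (3 - col)) & 1 for n in nibbles) % 2)
    return bits + [0]                     # final stop bit

def decode_em4100(bits):
    """Recover the 40-bit card ID from a captured 64-bit frame."""
    assert len(bits) == 64 and bits[:9] == HEADER and bits[63] == 0
    value = 0
    for row in range(10):
        chunk = bits[9 + row * 5 : 14 + row * 5]
        data, parity = chunk[:4], chunk[4]
        assert sum(data) % 2 == parity, f"row {row} parity error"
        for b in data:
            value = (value << 1) | b
    return value  # typically an 8-bit facility code + 32-bit card number

if __name__ == "__main__":
    frame = encode_em4100(0x1234567890)   # a fictional badge ID
    print(hex(decode_em4100(frame)))      # -> 0x1234567890
```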

Once the RFID card was in my possession, I then had to get past the CCTV cameras and the building’s security guards. This again proved straightforward using a tool called Swapface, which I used to change my face so that anyone watching the CCTV feed wouldn’t recognise me.
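Swapface is a commercial app, so the code below isn’t its internals, but the core ingredient of any real-time face swap is cheap per-frame face localisation. This hedged sketch uses OpenCV’s stock Haar cascade and simply pixelates whatever face it finds in a webcam feed; a real swap tool would instead warp a target identity onto landmarks found inside that same box.

```python
# Illustrative sketch (not Swapface): per-frame face localisation, the
# first step of any real-time face-swap pipeline. Requires the
# opencv-python package and a webcam.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        face = frame[y:y + h, x:x + w]
        # Downscale then upscale to pixelate; a swap tool would instead
        # render a target identity into this region every frame.
        small = cv2.resize(face, (16, 16), interpolation=cv2.INTER_LINEAR)
        frame[y:y + h, x:x + w] = cv2.resize(
            small, (w, h), interpolation=cv2.INTER_NEAREST)
    cv2.imshow("per-frame face localisation", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```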

Once I passed the CCTV cameras, I was safely in the building and made my way to the boardroom, where I managed to trick an unsuspecting employee of Jason’s into taking a picture of me. It was that simple. Jason and I were both shocked at how well it worked.

But this was just the first part of the experiment. In the next stage, we decided to put my skills to the test on LinkedIn to see if I could dupe Jason’s 2,000-plus followers.

Once again, it was a piece of cake.  

Using HeyGen AI, I posted a video of Jason on LinkedIn announcing a spoof cycle ride from the UK to Australia. Jason is a huge cycling fan, so it wasn’t hugely unbelievable to his followers, but when the video was watched by more than 4,000 people and liked and commented on by hundreds, I knew I had succeeded.

In the end, after a frantic call from Jason’s CFO wondering where he was going to be for the next six months, we were forced to take the video down early, but the experiment did highlight the power of deepfakes in spreading misinformation.


These were just experiments that I carried out using widely available apps, but imagine what a determined adversary could do.

Imagine if a criminal cloned a CEO and tricked employees into actioning an urgent banking transfer, or posted a video on LinkedIn announcing a fake charity fundraiser?

When people see a face they recognise, they trust it without question. But in the digital world, we no longer have this luxury.

People are taught to be cautious about emails from unknown sources, but what about requests that come from a recognised face? Advances in deepfakes and AI mean we might not always be able to trust these either.

This will warrant an entirely new realm of security awareness training, in which computer users are taught how to detect a deepfake, watching with laser-sharp eyes for tell-tale signs such as inaccurate lip syncing or odd visual artefacts.
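Some of that checking can even be automated. As a toy illustration, emphatically not a production detector, the sketch below uses OpenCV to score a video clip by the frame-to-frame jitter of the detected face box, one crude tell of a cheaply pasted-in synthetic face. The heuristic and its interpretation are my own assumptions, not an established method.

```python
# Toy heuristic, not a real deepfake detector: a genuine face tends to
# move smoothly between frames, while some cheap swaps jitter where the
# synthetic face is rendered back in. Scores a clip by the spread of
# face-centre movement. Requires the opencv-python package.
import sys
import statistics
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_jitter_score(path, max_frames=300):
    cap = cv2.VideoCapture(path)
    centres, deltas = [], []
    while len(centres) < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, 1.1, 5)
        if len(faces) != 1:
            continue  # only score frames with exactly one clear face
        x, y, w, h = faces[0]
        c = (x + w / 2, y + h / 2)
        if centres:
            px, py = centres[-1]
            deltas.append(((c[0] - px) ** 2 + (c[1] - py) ** 2) ** 0.5)
        centres.append(c)
    cap.release()
    return statistics.pstdev(deltas) if len(deltas) > 1 else 0.0

if __name__ == "__main__":
    score = face_jitter_score(sys.argv[1])
    print(f"face-motion jitter: {score:.2f} px "
          "(higher = choppier; compare against known-real footage)")
```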

These challenges are on the horizon and it is vital we prepare for them today.

At DTX London this week I hosted a session discussing the Rise of Cyber Clones and how AI is aiding criminals in ways never thought possible.

I walked the audience through my recent hack-capades and explained exactly how I was able to spoof Jason Gault's identity, break into his offices and fool many of his followers on LinkedIn, all in a few easy steps.

The session was designed to educate, surprise and remind everyone that seeing isn’t always believing in the digital world.

Stay safe out there.

Jake Moore is global cyber security advisor at ESET. A former Dorset police officer specialising in digital forensics and cyber crime, Jake transitioned into the private sector in 2018 and now helps guide customers through their security journeys. He can also be found conducting security research and analysis and enjoys exploring creative ethical hacking opportunities, often using AI. He is a frequent speaker and commentator on impactful cyber security stories.

