
Navigating artificial intelligence: Red flags to watch out for

Lou Steinberg, founder of cyber security research lab CTM Insights, flags up the risks of the growing use of AI and explains what organisations can do to tame the technology for good

Lou Steinberg, founder and managing partner of CTM Insights, a cyber security research lab and incubator, doesn’t watch movies about artificial intelligence (AI) because he believes what he sees in real life is enough.

Steinberg has also worn other hats, including a six-year tenure as chief technology officer of TD Ameritrade, where he was responsible for technology innovation, platform architecture, engineering, operations, risk management and cyber security.

He has worked with US government officials on cyber issues as well. Recently, after a White House meeting with tech leaders about AI, Steinberg spoke about the benefits and downsides of having AI provide advice and complete tasks.

Businesses with agendas, for example, might try to skew training data to get people to buy their cars, stay in their hotels, or eat at their restaurants. Hackers may also change training data to advise people to buy stocks that are being sold at inflated prices. They may even teach AI to write software with built-in security issues, he contended.

In an interview with Computer Weekly, Steinberg drilled down into these red flags and what organisations can do to mitigate the risks of the growing use of AI.

What would you say are the top three things we should really be worried about right now when it comes to AI?

Steinberg: My short- to medium-term concerns with AI are in three main areas. First, AI- and machine learning-powered chatbots and decision support tools will return inaccurate results that are misconstrued as accurate, because they are trained on untrustworthy data and lack traceability.

Second, the lack of traceability means we don’t know why AI gives the answers it gives – though Google is taking an interesting approach by providing links to supporting documentation that a user can assess for credibility.

Third, attempts to slow the progress of AI, while well-meaning, will slow the pace of innovation in Western nations even as countries like China continue to advance. While there have been examples of internationally respected bans on research, such as human cloning, AI advancement is not likely to be slowed globally.

How soon can bad actors jail-break AI? And what would that mean for society? Can AI developers pre-empt such dangers?

People have already gotten past guardrails built into tools like ChatGPT through prompt engineering. For example, a chatbot might refuse to generate code that is obviously malware, but will happily generate individual functions, one at a time, that can then be combined into malware. Jail-breaking of AI is already happening today, and will continue as both the guardrails and the attacks grow in sophistication.

The ability to attack poorly protected training data and bias the outcome is an even larger concern. Combined with the lack of traceability, that leaves us with a system that has no feedback loop to self-correct.
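
To illustrate the kind of poisoning he describes, here is a minimal, invented sketch: a small fraction of flipped "buy" labels in the training data is enough to bias a toy stock-recommendation model. The dataset, threshold and labels are all made up for the example, not drawn from the interview.

```python
# Toy sketch of training-data poisoning; all data and labels are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Feature: how overvalued a stock is. Honest rule: "buy" only undervalued stocks.
overvaluation = rng.uniform(-1.0, 1.0, size=1000)
labels = (overvaluation < 0).astype(int)

# Attacker flips ~5% of the labels, marking heavily overvalued stocks as "buy".
poisoned = labels.copy()
poisoned[np.where(overvaluation > 0.5)[0][:50]] = 1

X = overvaluation.reshape(-1, 1)
clean_model = LogisticRegression().fit(X, labels)
bad_model = LogisticRegression().fit(X, poisoned)

# The poisoned model assigns a far higher "buy" probability to an overvalued stock.
stock = np.array([[0.9]])
print("clean    P(buy):", clean_model.predict_proba(stock)[0, 1])
print("poisoned P(buy):", bad_model.predict_proba(stock)[0, 1])
```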

When will we get past the black box problem of AI?

Great question. As I said, Google appears to be trying to reinforce answers with pointers to supporting data. That helps, though I would rather see a chain of steps that led to a decision. Transparency and traceability are key.
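
As a rough illustration of the pattern Steinberg is pointing at, the sketch below returns an answer together with pointers to the source passages that support it, so a reader can judge credibility for themselves. It is our own toy example, not Google's implementation; the corpus, URLs and scoring are invented.

```python
# Toy sketch of answers returned with supporting citations; corpus and scoring are invented.
from dataclasses import dataclass

@dataclass
class Passage:
    source_url: str
    text: str

CORPUS = [
    Passage("https://example.org/quarterly-filing", "Quarterly revenue rose 4% year on year."),
    Passage("https://example.org/press-release", "The company opened two new data centres."),
]

def answer_with_citations(question: str, corpus: list) -> dict:
    # Naive relevance score: count words shared between the question and a passage.
    def score(p: Passage) -> int:
        return len(set(question.lower().split()) & set(p.text.lower().split()))

    supporting = sorted(corpus, key=score, reverse=True)[:2]
    return {
        "answer": supporting[0].text,                     # stand-in for a generated answer
        "citations": [p.source_url for p in supporting],  # pointers a user can check
    }

print(answer_with_citations("How did revenue change this year?", CORPUS))
```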

Who can exploit AI the most? Governments? Big tech? Hackers?

All of the above can and will exploit AI to analyse data, support decision-making and synthesise new outputs. Whether that exploitation is good or bad for society comes down to the use cases.

For a tech company, the aim will be commercial advantage, ranging from selling you products to detecting fraud to personalising medicine and medical diagnoses. Businesses will also tap cost savings by replacing humans with AI, whether that means writing movie scripts, driving a delivery truck, developing software, or letting passengers board an airplane with facial recognition in place of a boarding pass.

Many hackers are also profit-seeking, and will try to steal money by guessing bank account passwords or replicating a person’s voice and likeness to scam others. Just look at recent examples of realistic, synthesised voices being used to trick people into believing a loved one has been kidnapped.

Autonomous killer robots out of science fiction are certainly a concern with some nation states and terrorist groups, but governments and some companies also sit on huge amounts of data that would benefit from improved pattern detection. Expect governments to analyse and interpret data to better manage everything from public health to air traffic congestion. AI will also allow personalised decision-making at scale: agencies like the US Internal Revenue Service will look for fraud, while authoritarian governments will increase their ability to conduct surveillance.

What advice would you give to AI developers? As an incubator, does CTM Insights have any special lens here?

There are so many dimensions of protection needed. Training data must be curated and protected from malicious tampering. The ability to synthetically recreate a real person’s voice and likeness will cause fraud and reputational damage to skyrocket. We need to solve this problem before we can no longer trust what we see or hear, like fake phone calls, fake videos of people appearing to commit crimes and fake investor conferences.

Similarly, the ability to realistically edit images and evade detection will create cases where even real images, like your medical scans, are untrustworthy. CTM has technology to isolate untrustworthy portions of data and images, without requiring everything to be thrown out. We are working on a new way to detect synthetic deepfakes.
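
The general idea behind isolating untrustworthy regions can be sketched with ordinary cryptographic hashes: record a hash for each tile of an image when it is captured, then re-check the tiles later and flag only the ones that no longer match. This is our own toy illustration of the concept, not CTM Insights' actual technology, and the tile size is an arbitrary choice.

```python
# Toy illustration of region-level integrity checks on an image (not CTM's method).
import hashlib
import numpy as np

TILE = 64  # tile size in pixels, chosen arbitrarily for the example

def tile_hashes(image: np.ndarray) -> dict:
    """Hash each TILE x TILE region so tampering can be localised later."""
    return {
        (y, x): hashlib.sha256(image[y:y + TILE, x:x + TILE].tobytes()).hexdigest()
        for y in range(0, image.shape[0], TILE)
        for x in range(0, image.shape[1], TILE)
    }

def untrusted_tiles(image: np.ndarray, reference: dict) -> list:
    """Return the coordinates of tiles whose hashes no longer match the reference."""
    return [pos for pos, h in tile_hashes(image).items() if reference.get(pos) != h]

# Usage: hashes recorded when a scan is taken, checked again when it is read.
scan = np.zeros((256, 256), dtype=np.uint8)
reference = tile_hashes(scan)

tampered = scan.copy()
tampered[10:20, 10:20] = 255              # simulate an edited region
print("untrusted tiles:", untrusted_tiles(tampered, reference))   # only (0, 0) is flagged
```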

Is synthetic data a good thing or a bad thing if we want to create safer AI?

Synthetic data is mostly a good thing, and we can use it to help create curated training data. The challenge is that attackers can do the same thing.

Will singularity and artificial general intelligence (AGI) be a utopia or a dystopia?

I’m an optimist. While most major technology advances can be used to do harm, AI has the ability to eliminate a huge amount of work done by people but still create the value of that work. If the benefits are shared across society, and not concentrated, society will gain broadly.

For example, one of the most common jobs in the US is driving a delivery truck. If autonomous vehicles replace those jobs, society still gets the benefit of having things delivered. If all that does is raise profit margins at delivery companies, the laid-off drivers will bear the cost. But if some of the benefit is used to help those ex-drivers do something else, like construction, then society also benefits by getting new buildings.

Data poisoning, adversarial AI, co-evolution of good guys and bad guys – how serious have these issues become?

Co-evolution of AI and adversarial AI has already started. There is debate as to the level of data poisoning out there today, because many attacks aren’t made public. I’d say they are all in their infancy. I’m worried about what happens when they grow up.

If you were to create an algorithm that’s watertight on security, what broad areas would you be careful about?

The system would have traceability built in from the start. The inputs would be carefully curated and protected. The outputs would be signed and have authorised use built in. Today, we focus far too much on identity and authentication of people, and not enough on whether those people actually authorised specific actions.
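
A minimal sketch of that last point, signing outputs and binding them to an authorised use, might look like the following. The key handling, field names and uses are assumptions made for this example, not a description of any particular system.

```python
# Minimal sketch: sign an AI system's output and bind it to an authorised use.
import hashlib
import hmac
import json

SECRET_KEY = b"replace-with-a-managed-key"   # in practice, issued by a key management system

def sign_output(payload: dict, authorised_use: str) -> dict:
    record = {"payload": payload, "authorised_use": authorised_use}
    body = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET_KEY, body, hashlib.sha256).hexdigest()
    return record

def verify_output(record: dict, intended_use: str) -> bool:
    body = json.dumps({k: v for k, v in record.items() if k != "signature"},
                      sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(record.get("signature", ""), expected) \
        and record.get("authorised_use") == intended_use

signed = sign_output({"decision": "approve"}, authorised_use="loan-screening")
print(verify_output(signed, intended_use="loan-screening"))   # True: signed and authorised
print(verify_output(signed, intended_use="marketing"))        # False: not authorised for this use
```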

Have you seen any evidence of AI-driven or assisted attacks?

Yes. Deepfake videos of Elon Musk and others have been used in financial scams, and a faked video of Ukraine’s President Zelensky telling his troops to surrender was used in a disinformation campaign. Synthesised voices of real people have been used in fake kidnapping scams, and fake CEO voices on phone calls have asked employees to transfer money to a fraudster’s account. AI is also being used by attackers to exploit vulnerabilities and breach networks and systems.

What’s your favourite Black Mirror episode or movie about AI that feels like a premonition?

I try to not watch stuff that might scare me – real life is enough!
