Time to take control of AI?


At the start of May, Chegg, the US edtech firm, saw its share price crash after CEO Dan Rosensweig said during the company’s Q1 2023 earnings call that the company had been impacted by ChatGPT. According to the FT, Chegg is the first publicly listed company to reveal concerns over how the use of foundational AI models may adversely impact a business model.

Another news item that may well influence bosses is the Samsung memo, seen by Bloomberg, which bans employees from uploading data to generative AI systems like ChatGPT and Bard after developers in the company’s semiconductor business inadvertently uploaded its source code into ChatGPT. 

Numerous surveys show that business leaders are keen to adopt advanced AI technology to help them deliver business improvements, such as better customer experience. It is too early to tell, but is there a correlation between the use of foundational AI models like ChatGPT and commercial success?

Some business leaders may be worried, as is the case with Chegg, that advancements in AI will disrupt the market and erode their value proposition. Others are concerned that training large language models requires huge datasets, which is costly and demands vast amounts of computational power.

There are huge ethical, privacy and intellectual property issues that need to be addressed. In an extensive interview with the New York Times, the godfather of AI, Geoffrey Hinton, discussed why he quit his job at Google and his concerns about the progress of AI and its ability to create fake information. Apple co-founder Steve Wozniak also revealed his concerns in an interview with the BBC.

Regulators are playing catch-up. In a blog post referencing Ava, the robot in the sci-fi thriller Ex Machina, Michael Atleson, an attorney in the FTC’s Division of Advertising Practices, said that businesses are starting to deploy generative AI tools to influence people’s beliefs, emotions and behaviour.

In the UK, the Competition and Markets Authority (CMA) has now begun its own review of foundational AI models. The review focuses on competition in the markets for foundation models, the opportunities and risks as the technology is deployed, and how to support competition and protect consumers as the models develop.

Sarah Cardell, Chief Executive of the CMA, said: “It’s crucial that the potential benefits of this transformative technology are readily accessible to UK businesses and consumers while people remain protected from issues like false or misleading information. Our goal is to help this new, rapidly scaling technology develop in ways that ensure open, competitive markets and effective consumer protection.”

One can’t help but feel that AI is advancing at such a rate that Ava is already here, albeit in software, and that no one can really predict what’s coming next.
