Societal implications of enabling a better version of ourselves

In the Garden of Eden that is artificial intelligence (AI), the forbidden fruit appears to be AI that is subtly better at being human than we are.

For all its potential to improve society, the mass adoption of technology has left many people with a more meaningful relationship with their smart devices and the digital world than with the real friends they once met in real places to share conversation, warmth and compassion.

Today, AI is getting better at providing the instant gratification that many in society seek.

AI companions are built on large language models, which are fine-tuned through reinforcement learning from human feedback (RLHF). An article posted in January on the Ada Lovelace Institute website points out that this training technique tends to produce AI models that favour sycophantic responses, because human feedback rewards agreeableness to the detriment of truth.
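
To make that mechanism concrete, here is a deliberately simplified Python sketch of preference-based reward learning, not any vendor's actual pipeline: all replies, scores and weights are invented for illustration. If the simulated human raters weight agreeableness above truthfulness, the learned reward ranks the flattering reply highest, and a model optimised against that reward becomes sycophantic.

```python
import random

random.seed(0)

# Toy candidate replies, each with hidden "truthfulness" and "agreeableness"
# scores. These values are made up purely to illustrate the dynamic.
responses = [
    {"text": "You're right, great idea!",     "truth": 0.2, "agree": 0.9},
    {"text": "That could work, with caveats.", "truth": 0.6, "agree": 0.6},
    {"text": "Honestly, that plan has flaws.", "truth": 0.9, "agree": 0.2},
]

def human_prefers(a, b):
    """Simulated annotator who weights agreeableness far above truth."""
    score = lambda r: 0.8 * r["agree"] + 0.2 * r["truth"]
    return score(a) > score(b)

# Learn a per-response reward from pairwise comparisons: the preferred
# reply's reward nudges up, the other's nudges down.
reward = {r["text"]: 0.0 for r in responses}
for _ in range(1000):
    a, b = random.sample(responses, 2)
    winner, loser = (a, b) if human_prefers(a, b) else (b, a)
    reward[winner["text"]] += 0.01
    reward[loser["text"]] -= 0.01

# "Policy" step: the model learns to emit the highest-reward reply.
best = max(responses, key=lambda r: reward[r["text"]])
print("Learned rewards:", {k: round(v, 2) for k, v in reward.items()})
print("Model's favourite reply:", best["text"])  # the sycophantic one wins
```

Running this prints the flattering reply as the winner. The point is structural rather than about any one system: whatever the raters systematically prefer, the reward model encodes and the fine-tuned model amplifies.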

“While generally regarded as a bug in other types of AI assistants, companies developing AI companions explicitly amplify this tendency, as they are eager to satisfy users’ desire for their companion to be non-judgemental,” said independent AI policy researcher Jamie Bernardi, the article’s author.

Consider Ava in Ex Machina: an AI that manipulates human emotions. Unlike a real-world human relationship, an AI companion is ever present and ever ready to offer a carefully orchestrated response, delivering the dopamine hit that encourages the user to keep talking to it. In a world where these interactions are being monetised, there is a growing market for AI companions, which means money will be ploughed into advancing the technology.

Given the pace of technological development, AI chiefs are getting worried.

Mustafa Suleyman, CEO of Microsoft AI and former head of applied AI at DeepMind, says he is kept up at night thinking about “seemingly conscious artificial intelligence” (SCAI).

SCAI will not arise by accident, Suleyman argues. Rather, he expects developers to package existing techniques so fluidly that, collectively, they give the impression of a conscious system. Such an AI would not only speak naturally and appear to empathise with the people it engages with, but would also have some form of persistent memory. It would remember things, seem knowledgeable, and appear to have a sense of self and intrinsic motivation, Suleyman noted in a recent article.

There is a moral imperative to establish guardrails for such AI systems. In a world of increasing loneliness, society as a whole needs to be involved in deciding how these systems evolve.