In the science fiction drama series Humans, Laura feels displaced by Anita, a humanoid robot – known as a synth – bought by her husband to help with the household chores.
Imbued with human-like intelligence, Anita – and the rogue code in her that powers her alter ego Mia – does a great job of being a nanny to the kids and washing the dishes. But unlike her fellow synths, she has a sense of consciousness, to the point of being manipulative at times.
Synths may seem like pure fiction, but they don’t appear too far-fetched, given the rate at which artificial intelligence (AI) is advancing.
Consider Erica, the well-known human-like autonomous android who, in a recent documentary produced by The Guardian, says she (or it?) wants to explore the outside world.
Erica even cracks robot jokes and desires to move her arms and legs – which her creators at ATR Institute in Kyoto say will happen someday.
The thing is, Erica’s main creator Hiroshi Ishiguro built her not only for the oft-cited purpose of automating mundane tasks for humans, but also to understand what makes us human, including what it means to interact with others, be creative and have a personality.
Assuming Ishiguro gets his way in representing humanity in robots, how should we interact with humanoids? What impact would they have on society, besides making us more efficient?
Should we treat them as humans, partnering us on factory floors and trading floors? Will they succumb to failings such as greed that have plagued mankind for thousands of years? Can we trust them to take care of our children?
Over the weekend, I had a conversation with my wife about whether she would trust a humanoid like Anita to take care of our son, like a nanny or domestic helper would. She couldn’t give me a clear answer, although I suspect she is wary of the bond that may form between the humanoid and our son – like how Laura felt in Humans.
And how should we interact with them? Do we bark out orders because they are machines and hence there is no need for pleasantries? A friend recently recalled an executive who was criticised by his business associates for the curt instructions he sent to his personal assistant (PA), who was copied in his emails. The thing is, that executive’s PA is an AI programme, not a human being.
Today, discussions on human-machine interactions are mostly centred on feedback loops that enable us to correct the mistakes of machines, like how we would with a child. The social, cultural and psychological impact of AI is often not discussed enough, primarily because most of the rhetoric on AI today is shaped by technologists and business people, not anthropologists.
I believe that impact will be intertwined with how we view a robot, which may not be a person or a machine after all. As Dylan Glas, a robotics researcher at ATR, puts it, a robot may be a new ontological category that we cannot yet describe. In Japanese tradition, robots could even have souls, as all things do.
Lest you think we still have time to think things through, or that humanoids like Erica will never see the light of day, remember that the creators of Star Trek conceived the idea of a tablet computer decades before it became an everyday device. Before you know it, the Ericas and Anitas could well be in our midst. The question is, how will we cope with them?