Robots and artificial intelligence (AI) are beginning to move away from purely transactional exchanges and towards interacting with humans in a social manner – something developers need to be prepared for, according to Cynthia Breazeal, founder and chief scientist of Jibo.
Breazeal told the Women’s Forum at the 2017 CA World event in Las Vegas that she was first inspired by the Star Wars droid R2-D2, and realised there would be space in people’s lives for “social robotics” in the future, where AI would not only be smart and useful, but would also have personality.
But while studying social interaction and robotics for her thesis, Breazeal found autonomous technologies were focused on taking people out of the equation, rather than on integrating into people’s daily lives.
“If robots are going to come into our human lives, it’s not about interacting with stuff – it’s about interacting with people,” she said.
Robots for human benefit
The discussion around AI and robotics is currently focused on what jobs are at risk of being automated in the near future, but some believe that rather than replace job roles, robots will work alongside humans.
AI is expected to become collaborative and interpersonal, able to read and interpret people’s expressions and emotions. “It’s about recognising people, and people’s social cues, in order to better understand their emotions and intention,” she said.
There are many humanoid robots in the mainstream public eye, ranging from those in fictional media to real-life AI such as SoftBank’s Pepper, and people have a tendency to anthropomorphise them.
Even when robots aren’t particularly sophisticated, humans will try to read a robot’s behaviour and anticipate what it will do next – the same way they do with people.
Breazeal highlighted that these technologies need to be built in such a way that people can build “trust and rapport” with them, but AI and robotics is often developed to be “roped off and dangerous” such as in manufacturing industries or factories.
“A lot of the discussion around AI has been around how we can make machines more productive and efficient,” she said. But the focus should really be around how AI can help us “live more meaningful lives”.
AI in day-to-day life
AI is now becoming ubiquitous – almost everyone has a mobile device, most of which include voice-activated assistants, and many people have AI devices such as Amazon Alexa in their homes. “We’re seeing people interact with AI as part of daily life,” said Breazeal.
This is creating a generation not just of digital natives, who have never known a world without technology and are adept at using it, but of “AI natives”, who are used to interacting with technology in a particular way.
Breazeal pointed out there has been a decline in empathy among college students, due in part to how technology has changed the way people interact with each other, and how they interact with the technology itself.
“Our brains change and learn through experience. We’re starting to interact more with tech that treats us like tech,” said Breazeal. “If you don’t have an empathetic society, that’s a society we’ll find it difficult to learn and grow in.”
To use AI to make our lives more meaningful, the emphasis needs to be on more emotional interactions with people, rather than on using it to replicate tasks people can already do. For example, robots could be used as interactive learning companions for children, or as a source of empathy and support in healthcare to reduce stress.
To stop the decline in empathy from continuing, Breazeal said AI needs to be developed in such a way that it learns as humans socially teach each other – but not necessarily by making AI and robotics more “humanlike”.
Jibo, the robot developed by Breazeal’s company, aims to move “from the transactional AI we have today – to this relational AI” to help people achieve their daily goals. “It’s kind of like the Disney sidekick brought to life – we’re very used to that paradigm,” she said.
Removing bias from AI
An artificial intelligence that uses machine learning to change and adapt is typically built on deep learning systems trained on pre-labelled data, which helps the AI recognise situations and learn from them.
But these systems are trained on datasets drawn from society, meaning many of them already contain society’s biases.
“We’re living in a time where AI is no longer just a computer science problem,” said Breazeal. “When you’re training AI in these datasets you’re getting more of the same [bias].”
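Breazeal’s point – that training on biased datasets yields “more of the same” – can be sketched in a few lines of Python. Everything below is invented for illustration (the groups, labels and counts are hypothetical, and this toy “model” is far simpler than any real deep learning system): a learner trained on skewed historical decisions simply learns to repeat them.

```python
from collections import Counter

def train(examples):
    """Toy 'model': learn the most common label seen for each group
    in the training data, and nothing else."""
    by_group = {}
    for group, label in examples:
        by_group.setdefault(group, []).append(label)
    return {group: Counter(labels).most_common(1)[0][0]
            for group, labels in by_group.items()}

# Hypothetical historical decisions, skewed by past human bias:
# group A was approved 9 times in 10, group B only 2 times in 10.
biased_history = (
    [("A", "approve")] * 9 + [("A", "reject")] * 1 +
    [("B", "approve")] * 2 + [("B", "reject")] * 8
)

model = train(biased_history)
print(model)  # → {'A': 'approve', 'B': 'reject'}
```

The model has done nothing “wrong” computationally – it has faithfully learned the pattern in its training data. The skew in its outputs comes entirely from the skew in the dataset, which is why the make-up of the data, and of the teams choosing it, matters.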
This is why who develops AI matters: teams need to be diverse, made up of different types of people aiming to solve the problems that matter most to them.
Breazeal developed Jibo with the help of a team at MIT that has, over time, come to be made up mostly of women – something she attributed in part to the project’s aim of creating a more “humanistic view of tech”.
“Who innovates and who builds with AI really matters,” said Breazeal. “A very small subset of people innovate and build with AI. Who [develops AI] is really critical, and that really needs to be made diverse.”