Despite becoming increasingly lifelike in appearance, robots still have terrible body language, writes New Scientist's Colin Barras.
But Bilge Mutlu and colleagues at Carnegie Mellon University in Pittsburgh are changing that with robots that "leak" non-verbal information through eye movements when interacting with humans. This article first appeared on our sister website New Scientist.
The eyes of a robot may not provide a window into its soul, but they can help humans guess the machine's intentions.
Humans constantly give off non-verbal cues and interpret the signals of others – often without realising it, says Mutlu. The trembling hands of a public speaker betray their nerves even before a word is uttered, while poker players leak subtle signs such as eye flickers or twitches that can be used to spot bluffers.
But when faced with a robot, all our interpretive skills are irrelevant: robots leak no information, so it is virtually impossible to read their intentions, which makes them hard to get along with.
Mutlu's team tested strategies to improve robot body language using a guessing game played by a human and a humanoid robot. The robot is programmed to choose one object from around a dozen resting on a table, without making a move to actually pick it up.
The human must work out which object it has mentally selected through a series of yes-or-no questions.
The 26 participants involved in the study took on average 5.5 questions to work out the correct object when the robot simply sat motionless across the table and answered verbally.
In the second trial, the robot answered in exactly the same way, but also swivelled its eyes to glance at its chosen object in the brief pause before answering two of the first three questions. When faced with a robot "leaking" information in this way, the same 26 participants needed fewer questions to identify the correct object – an average of just 5.0, a difference the team found to be statistically significant.
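The article does not spell out the robot's control logic, but the behaviour it describes is simple enough to sketch. Below is a minimal, hypothetical Python sketch of the leak condition – the class name, the glance_at() placeholder and the predicate-style question interface are all assumptions for illustration, not the researchers' actual code:

```python
import random
import time

class GuessingGameRobot:
    """Hypothetical sketch of the gaze-leakage behaviour described above."""

    def __init__(self, objects, leak_gaze=True):
        self.objects = objects
        self.target = random.choice(objects)  # the robot's secret pick
        self.leak_gaze = leak_gaze
        # Glance during two randomly chosen questions out of the first
        # three, matching the condition reported in the study.
        self.leak_turns = set(random.sample(range(3), 2))
        self.turn = 0

    def glance_at(self, obj):
        # Placeholder for a platform-specific eye swivel towards the object.
        print(f"[eyes swivel briefly towards {obj}]")
        time.sleep(0.5)  # the "brief pause" before answering

    def answer(self, question):
        # `question` is a yes/no predicate applied to the secret object,
        # e.g. lambda obj: obj.startswith("b")
        if self.leak_gaze and self.turn in self.leak_turns:
            self.glance_at(self.target)
        self.turn += 1
        return "yes" if question(self.target) else "no"

robot = GuessingGameRobot(["ball", "book", "cup", "pen"])
print(robot.answer(lambda obj: obj.startswith("b")))  # "yes" or "no"
```

Setting leak_gaze=False would reproduce the motionless baseline condition; everything else about the interaction stays the same.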
When the robot in question was the lifelike Geminoid, with its realistic rubbery skin, around three-quarters of participants said they hadn't noticed the short glances. But the fact that their scores improved suggests they subconsciously detected the signals, says Mutlu.
The study suggests that people attribute intentionality and mental states to robots just as they do to humans, he says – although apparently only as long as the robot appears lifelike.
When the same experiment was repeated with Robovie – a less lifelike robot with large glassy eyes – the participants' efficiency at completing the guessing game was the same whether or not Robovie took short glances at the chosen object.
Sylvain Calinon works on human-robot interactions at the Swiss Federal Institute of Technology in Lausanne (EPFL), Switzerland.
"From my experience I would say that some communication cues do not necessarily have to be subtle," says Calinon. Simply giving robots the ability to turn towards a user or nod during a conversation are important for improving the efficiency and quality of human-robot interactions, he says.
Calinon thinks that the subtle cues explored by Mutlu's team could improve the quality of interaction further, although to be most useful the robot would have to be able to "read" the human's body language.
Mutlu presented his work at the Human-Robot Interaction 2009 conference in La Jolla, California, last week.