Humans readily anthropomorphise inanimate objects, but research suggests even babies are surprisingly sensitive to the differences between robots and humans when it comes to social interaction.
The International Robot Exhibition (iREX) in Tokyo, which opened last Thursday, showcases the world’s latest robot technology and applications. Robots are already replacing humans in fields such as manufacturing, military operations and search and rescue. But there is growing demand for robots that will interact with us in more intimate and social ways.
In August Kirobo, the talking robot, was sent to the International Space Station to explore how robots can be used to emotionally support humans in social isolation. Chris Melhuish, director of the Bristol Robotics Laboratory, considers care and companionship for the elderly to be one of the major drivers of robotic development. And robotic toys for children have been growing steadily in popularity.
The success of these new technologies will rest in large part on our tendency to anthropomorphise objects – to think of them as having thoughts, intentions and emotions like humans. But just how gullible are we when it comes to treating robots as social partners?
Certainly we seem to readily anthropomorphise inanimate objects – most adults will have some experience of pleading with their computer not to crash or kicking their apparently uncooperative car. And anthropomorphism is driven more by how objects behave than by how they look. Researchers at the University of Calgary showed that adults presented with a motorised balsa-wood stick with no personifying features nonetheless reported feeling scared, seduced or intimidated by it.
We see faces in clouds, project personalities onto machines and assume intentions in even simple geometric figures. Animation and puppetry rely largely on this hair-trigger bias to treat moving objects as social entities. But most adults do not really believe that these objects are living, thinking beings in the same way that humans and animals are. How do we become sensitive to these sometimes subtle differences?
The early roots of anthropomorphism are evident even in very young infants. One of our first categorical distinctions is between living and non-living entities, but eye-tracking and behavioural studies show that infants from around six months of age attribute thoughts and feelings to simple geometric figures, follow the gaze of robots to novel objects and treat simple automated boxes as having goals and intentions.
However, even very young infants have limited credulity when it comes to robots. In two papers published earlier this year, Yuko Okumura and colleagues at Kyoto University Baby Lab compared how 12-month-olds treat human and robot gaze direction. The robot they used had human-like eyes and hands but was clearly a robot.
Understanding eye gaze as a learning cue is an important social development in infants, underlying their ability to learn labels for objects, how to use objects and how to predict other people’s goals. Previous research has shown that infants will follow the gaze of robots (but not a directional light or pointer) to an object, and researchers have taken this as evidence that infants treat robots as social entities.
However, Okumura’s studies showed significant differences. Only in the human condition do babies subsequently remember the object and imitate actions on it. The authors suggest that although babies are responding to robots in an anthropomorphic manner, at 12 months of age they are already sensitive to differences between human and robot communication as cues to learning.
Older children show similar sensitivity. Kahn and colleagues observed 3- and 5-year-olds interacting with a Sony AIBO robodog and a stuffed toy dog. A number of studies have shown that although young children readily attribute thoughts and feelings to their stuffed toys in imaginary play, they do not really believe that they are living social entities. The robodog, in contrast, had numerous features that might confuse children into thinking it was a living animal, such as self-initiated movement, learning new behaviours and a repertoire of skills including sitting, standing and playing dead.
When interviewed, the majority of children attributed thoughts, feelings and intentions to the robodog. However, children attributed the same mental states to the stuffed toy dog. It’s possible, then, that the anthropomorphism exhibited towards the robodog is a form of make-believe similar to that engaged in with stuffed toys, rather than children genuinely confusing the robot with a social companion.
This was supported by a subsequent study showing that children treat a real dog and the robodog significantly differently – the real dog was attributed with more feelings, thoughts and moral rights and children spent more time touching it or in close proximity to it.
Kahn and his co-authors propose that rather than thinking of robots as being ‘alive’ or ‘not alive’, perhaps children and adults are creating a new mental category that lies somewhere in between. This might explain our spontaneous tendency to project personalities and feelings onto robots while simultaneously being happy to lock them in a cupboard for hours on end or send them into minefields.
The fact that robots are treated as different from humans might prove to be a useful asset. For instance, numerous studies have shown that children with autism are more motivated to learn, quicker to acquire basic social skills and less intimidated in diagnostic situations when faced with a robot counterpart rather than a human one. Dr Karen Guldberg at Birmingham University’s Autism Centre suggests that robots succeed in these contexts precisely because children with autism see them as something other than living things and are therefore not threatened by them.
As the technology advances and robots become more commonplace in our workspaces and homes, it is possible that we will become less sensitive to these differences. What current research is showing, however, is that even babies are more sophisticated at distinguishing between robots and humans than we previously thought. Understanding the psychological roots of our scepticism towards social robots is vitally important in developing robots that can eventually cater to our social requirements.