There may be truth after all to the theory of the uncanny valley. When humans watch robots display emotions, the experience can be deeply discomfiting.
In 1970, Japanese roboticist Masahiro Mori coined the term “uncanny valley”. The theory holds that as robots come to resemble humans more closely, we tend to feel more affinity towards them. But there comes a point where a robot looks almost, yet not quite, perfectly human. This is the uncanny valley, wherein affinity plummets into revulsion.
A new study on this theory explored another facet of how humans may react to near-human robots. The original formulation of the theory focused more on what robots look like. However, will humans still dip into the valley if it's the robot's mind that seems near-human instead?
The researchers designed an experiment to observe how humans react to virtual reality avatars that display human emotions. The avatars, then, would not look almost human; rather, they would interact almost like humans.
In the experiment, the researchers asked four groups of participants to wear a virtual reality headset. Through this headset, participants watched an interaction between a male and a female avatar. There was nothing particularly special about the conversation; it was just small talk. The avatars discussed the weather, shared mild frustrations, and expressed sympathy.
What the participants didn't know was that the researchers gave each group a different description of what was taking place. One group was told that humans controlled the avatars and that the conversation was scripted. Another group was told that humans controlled the avatars, but that the conversation was spontaneous. The third group was told that artificial intelligence controlled the avatars, but that the conversation was scripted. The last group was told that artificial intelligence controlled the avatars and that the conversation was spontaneous.
The first three groups found nothing wrong with the setup. The first two didn't mind the conversation between what they thought were human-controlled avatars, spontaneous or not. The third group also didn't mind the AI interaction, because it was following a script.
The fourth group, however, was another matter entirely. Its participants felt uncomfortable and on edge at the thought that an AI had managed to hold a spontaneous, seemingly natural conversation.
While one may say that social skills are a great thing to develop in artificial intelligence, the experiment wasn't just about social skills. The avatars expressed emotions like frustration and sympathy. They didn't simply make small talk about the weather; they also discussed how they felt about it. The discomfort the participants felt is what the researchers call the “uncanny valley of the mind”. In this instance, it wasn't the physical appearance of the artificial intelligence that caused discomfort. It was the fact that the avatars seemed to have minds akin to those of humans.
It's not yet clear what causes humans to feel this discomfort with near-human artificial life. What is clear is that there may be dips and crevices in the uncanny valley that researchers still need to explore.