Is the “uncanny valley” good for a future metaverse?
TOKYO (Kyodo) – It has been more than five decades since Japanese roboticist Masahiro Mori developed a theory describing the eerie or uncomfortable feeling people experience in response to humanoid robots that closely, but not perfectly, resemble human beings.
Dubbed “the uncanny valley” by Mori in 1970, the phenomenon has stood the test of time, with more recent examples of the dread filtering through the burgeoning fields of artificial intelligence, photorealistic computer animation, virtual reality, augmented reality and increasingly realistic androids.
But what happens on the other side of the valley as the likeness to humans improves? Some researchers fear that as “trusted” virtual humans become indistinguishable from real people, we will be exposed to more manipulation by platform providers. In other words, our responses while we are still in the uncanny valley, unsettling as they may be, could be a good thing – a kind of self-defense mechanism.
Mori, now 94, a professor emeritus at the Tokyo Institute of Technology who retired in 1987, originally plotted his uncanny valley hypothesis on a graph showing an observer’s emotional response against the human likeness of a robot.
He posited that as a robot’s appearance is made more human, affinity for it increases, but only up to a point, beyond which the observer experiences a reaction of revulsion, eeriness or even fear, represented by a dip into the valley.
But as the robot becomes nearly indistinguishable from a real person, positive feelings of empathy similar to those in human-to-human interaction reappear. The bewildering void between “not quite human” and “perfectly human” is the uncanny valley.
With tech companies led by Mark Zuckerberg’s Meta Platforms Inc. touting the creation of a metaverse – seen as the next iteration of the internet, where people can work and socialize in a virtual world – some experts say the uncanny valley graph is equally relevant in immersive environments, including VR and AR.
While we have grown accustomed to interacting with “low-fidelity versions of human faces dating back to the early days of television,” we will have the ability to project photorealistic humans into 3D virtual worlds before the end of this decade, Louis Rosenberg, a 30-year veteran of AR development and CEO of Unanimous AI, recently told Kyodo News in an interview. How will we determine what is real?
“Personally, I think the greatest danger in the metaverse is the prospect of artificial agents driven by an agenda and controlled by AI algorithms that engage us in ‘conversational manipulation’ without us realizing that the ‘person’ we are interacting with is not real,” he said.
In a corporate-controlled metaverse featuring “virtual product placement,” we could easily think we are just having a conversation with someone like us, causing us to drop our defenses. “You will not know what has been manipulated to serve a paying third party’s agenda and what is genuine.”
This is dangerous because “the AI agent that is trying to influence us might have access to a vast database of our personal interests and beliefs, our buying habits, our temperament, etc. So how do we protect ourselves from that? Regulation,” Rosenberg said.
Mori himself has said designers should stop before the first peak of the uncanny valley and not “risk getting close to the other side,” where robots – and now, by extension, AI or AR agents – become indistinguishable from humans.
Applying his theory to the virtual world of the metaverse, Mori recently told Kyodo News, “If the person (in the real world) understands that the space they are in is imaginary, I don’t think that is a problem, even if it is scary.”
But if a person is unable to distinguish reality from a virtual world, that in itself will be a problem, he said, adding that the “bigger problem” would be bad actors abusing the technology for malicious purposes, comparing it to a sharp tool that can be used either as a “dagger” to kill or a “scalpel” to save someone.
In her research, Rachel McDonnell, associate professor of creative technologies at the School of Computer Science and Statistics at Trinity College Dublin, asks whether we should “tread gently through the uncanny valley” with virtual humans.
She says that even though virtual humans have almost reached photorealism, “their conversational skills are still far from a stage where they’re convincing enough to be mistaken for a real human conversational partner.”
A longtime proponent of making virtual humans more realistic, she says the biggest dangers now are “AI-driven video avatars, or deepfake videos, where compelling videos of a person can be created, driven by the movements and words of another.”
But she adds, “Being transparent about how avatars and videos are created will help overcome some of the ethical challenges associated with privacy and misrepresentation.” She gives an example of attaching a watermark to distinguish deepfakes from genuine video content.
Rosenberg, meanwhile, describes various forms of regulation that could keep the metaverse safe, such as notifying users when they are interacting with a virtual character.
“They may all be required to dress in a certain way, indicating that they are not real, or have some other visual cue. But an even more effective method would be to ensure they don’t look quite human in comparison to other users.”
That is, regulation could ensure that virtual humans trigger the “uncanny valley” response deep in our brains, he said. “This is the most effective route because the response within us is visceral and subconscious, which would most effectively protect us from being duped.”
Meta, the social media giant formerly known as Facebook that changed its name to focus on the metaverse, has come under fire in recent years for spreading misinformation, mishandling user data, and using algorithms that end up sowing discord and mistrust on the internet, where users cling to their own “facts.”
On December 9, Meta launched the cartoon-like Horizon Worlds to people 18 and older in the United States and Canada.
Christoph Bartneck, associate professor at the University of Canterbury in New Zealand, says the metaverse, a name taken from Neal Stephenson’s 1992 science fiction novel “Snow Crash,” is not a new concept and, for now, remains fiction.
“It’s a sign of a lack of originality that Facebook has resorted to promising yet another virtual world. It sounds like a gigantic diversionary maneuver to distract our attention from all the bad influence that Facebook and its products have on society,” he declared.
In 2021, Meta announced that it would spend at least $10 billion on its metaverse division to create AR and VR hardware, software and content. Other tech companies, including Microsoft and video game and software developer Epic Games, have jumped on the bandwagon, while Nike Inc. launched Nikeland, featuring virtual sneakers, on the video game platform Roblox.
Unanimous AI’s Rosenberg says making the metaverse seem “uncanny,” that is, not quite real, is easier than we might think. “It turns out that very small changes can make a big difference” in how our perception attributes authenticity to experiences.
Ameca, from British design and manufacturing company Engineered Arts, is described as “the perfect platform for developing interaction between us humans and any metaverse or digital realm.” A newly unveiled AI robot with remarkably human facial expressions, it seems astonished to be “awake” – puzzled and strangely amused.
“In the metaverse, the simplest thing – like the way a virtual character’s eyes move, or hair moves, or even just the speed of their movement (does it take longer to move than a real human?) – is enough to make them appear deeply unreal,” said Rosenberg, adding that regulations should require artificial agents to be distinct from other users, as that would be easy to achieve.
McDonnell, meanwhile, says she is still optimistic that realistic virtual humans will have a positive impact on society in a future metaverse, including benefits such as preserving user privacy in sensitive situations – such as whistleblowers or witnesses testifying in court – and tackling phobias, racial bias, and even conditions such as post-traumatic stress disorder.
“There is huge potential for using virtual humans for good,” she said.
In experiments, her research team found that participants in survival task games “generally trusted” virtual agents that suggested rankings of objects vital to survival in hypothetical crash scenarios, “but small manipulations of the agents’ facial expressions or voices could influence the level of trust,” she said.
The notion of the uncanny valley as a defense mechanism dates back to Mori in 1970, who called it an “instinct of self-preservation” that protects us not from lifeless objects that look different from us, but from things that are “extremely similar, such as corpses and members of proximate species,” noted Karl F. MacDorman, associate professor in the School of Informatics and Computing at Indiana University.
As for Mori, who has said he never intended the uncanny valley to be a rigorous scientific theory but rather a caution to robot designers, his message on the metaverse is straightforward.
“I hope (those) involved in its creation will do something healthy for the happiness of mankind,” Mori said.