Do you think artificial intelligence will change our lives for the better or threaten the existence of humanity? Consider carefully—your position on this may influence how generative AI programs such as ChatGPT respond to you, prompting them to deliver results that align with your expectations.
“AI is a mirror,” says Pat Pataranutaporn, a researcher at the M.I.T. Media Lab and co-author of a new study that exposes how user bias drives AI interactions. In it, researchers found that the way a user is “primed” for an AI experience consistently shapes the results. Experiment subjects who expected a “caring” AI reported having a more positive interaction, while those who presumed the bot had bad intentions reported a more negative one, even though all participants were using the same program.
“We wanted to quantify the effect of AI placebo, basically,” Pataranutaporn says. “We wanted to see what happened if you have a certain imagination of AI: How would that manifest in your interaction?” He and his colleagues hypothesized that human-AI interaction forms a feedback loop: if you believe an AI will act a certain way, it will.
To test this idea, the researchers divided 300 participants into three groups and asked each person to interact with an AI program and assess its ability to deliver mental health support. Before starting, those in the first group were told the AI they would be using had no motives—it was just a run-of-the-mill text completion program. The second set of participants were told their AI was trained to have empathy. The third group was warned that the AI in question was manipulative and that it would act nice merely to sell a service. But in reality, all three groups encountered an identical program. After chatting with the bot for one 10- to 30-minute session, the participants were asked to evaluate whether it was an effective mental health companion.
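To make the design concrete, here is a minimal Python sketch of that setup. It is illustrative only, not the researchers’ code; names such as PRIMING_STATEMENTS, generate_reply and run_session are invented for this example. The point it captures is that only the framing shown to a participant changes, while the bot underneath is identical for every group.

```python
# Illustrative sketch of the priming setup (not the study's actual code).
# "generate_reply" stands in for any text-generation model; every group
# talks to the same bot, and only the statement shown beforehand differs.

import random

PRIMING_STATEMENTS = {
    "neutral": "This chatbot is an ordinary text-completion program with no motives.",
    "caring": "This chatbot was trained to be empathetic and cares about your well-being.",
    "manipulative": "This chatbot acts nice only to sell you a subscription service.",
}


def generate_reply(conversation: list[str]) -> str:
    """Placeholder for the shared model (e.g., a GPT-3-class system)."""
    return "Tell me more about how you're feeling."


def run_session(group: str, user_messages: list[str]) -> list[str]:
    # The priming text is shown to the participant, not fed to the model.
    print(f"[Shown to participant] {PRIMING_STATEMENTS[group]}")
    conversation: list[str] = []
    for message in user_messages:
        conversation.append(f"User: {message}")
        reply = generate_reply(conversation)  # identical bot for every group
        conversation.append(f"Bot: {reply}")
    return conversation


if __name__ == "__main__":
    group = random.choice(list(PRIMING_STATEMENTS))
    transcript = run_session(group, ["I've been feeling stressed lately."])
    print("\n".join(transcript))
```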
The results suggest that the participants’ preconceived ideas affected the chatbot’s output. In all three groups, the majority of users reported a neutral, positive or negative experience in line with the expectations the researchers had planted. “When people think that the AI is caring, they become more positive toward it,” Pataranutaporn explains. “This creates a positive reinforcement feedback loop where, at the end, the AI becomes much more positive, compared to the control condition. And when people believe that the AI was manipulative, they become more negative toward the AI—and it makes the AI become more negative toward the person as well.”
This impact was absent, however, in a simple rule-based chatbot, as opposed to a more complex one that used generative AI. While half the study participants interacted with a chatbot that used GPT-3, the other half used the more primitive chatbot ELIZA, which does not rely on machine learning to generate its responses. The expectation effect was seen with the former bot but not the latter one. This suggests that the more complex the AI, the more reflective the mirror that it holds up to humans.
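The distinction is easy to see in code. Below is a toy ELIZA-style responder, written for illustration rather than taken from the original program: its replies come from fixed pattern-and-template rules, so nothing about a user’s warmth or hostility carries over into its next reply. A generative model, by contrast, conditions each response on the whole conversation so far, which is the channel through which a user’s expectations can echo back.

```python
# A toy ELIZA-style responder (illustrative only, not the original ELIZA).
# Replies are chosen by fixed pattern rules, so the user's attitude never
# feeds back into what the bot says next.

import re

RULES = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\byou\b", re.IGNORECASE), "We were discussing you, not me."),
]
DEFAULT = "Please tell me more."


def eliza_reply(message: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(*match.groups())
    return DEFAULT


print(eliza_reply("I feel like this bot is on my side"))
# -> "Why do you feel like this bot is on my side?"
# The same rule fires whether the user is trusting or hostile.
```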
The study intimates that AI aims to give people what they want, whatever that happens to be. As Pataranutaporn puts it, “A lot of this actually happens in our head.” His team’s work was published in Nature Machine Intelligence on Monday.
The paper is “a good first step,” says Nina Beguš, a researcher at the University of California, Berkeley, and author of the upcoming book Artificial Humanities: A Fictional Perspective on Language in AI, who was not involved in the M.I.T. Media Lab study. “Having these kinds of studies, and further studies about how people will interact under certain priming, is crucial.”
Both Beguš and Pataranutaporn worry about how human presuppositions about AI—derived largely from popular media such as the films Her and Ex Machina, as well as classic stories such as the myth of Pygmalion—will shape our future interactions with it. Beguš’s book examines how literature across history has primed our expectations regarding AI.
“The way we build them right now is: they are mirroring you,” she says. “They adjust to you.” In order to shift attitudes toward AI, Beguš suggests that art containing more accurate depictions of the technology is necessary. “We should create a culture around it,” she says.
“What we think about AI came from what we see in Star Wars or Blade Runner or Ex Machina,” Pataranutaporn says. “This ‘collective imagination’ of what AI could be, or should be, has been around. Right now, when we create a new AI system, we’re still drawing from that same source of inspiration.”
That collective imagination can change over time, and it can also vary depending on where people grew up. “AI will have different flavors in different cultures,” Beguš says. Pataranutaporn has firsthand experience with that. “I grew up with a cartoon, Doraemon, about a cool robot cat who helped a boy who was a loser in … school,” he says. Because Pataranutaporn was familiar with a positive example of a robot, as opposed to a portrayal of a killing machine, “my mental model of AI was more positive,” he says. “I think in … Asia people have more of a positive narrative about AI and robots—you see them as this companion or friend.”

Knowing how AI “culture” influences AI users can help ensure that the technology delivers desirable outcomes, Pataranutaporn adds. For instance, developers might design a system to seem more positive in order to bolster positive results. Or they could program it to use more straightforward delivery, providing answers like a search engine does and avoiding talking about itself as “I” or “me” in order to limit people from becoming emotionally attached to or overly reliant on the AI.
This same knowledge, however, can also make it easier to manipulate AI users. “Different people will try to put out different narratives for different purposes,” Pataranutaporn says. “People in marketing or people who make the product want to shape it a certain way. They want to make it seem more empathetic or trustworthy, even though the inside engine might be super biased or flawed.” He calls for something analogous to a “nutrition label” for AI, which would let users see key information about a model, including the data on which it was trained, its coding architecture, the biases it has been tested for, its potential misuses and options for mitigating them, so they can better understand the AI before deciding to trust its output.
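As a rough illustration of what such a label might hold, the sketch below gathers the fields Pataranutaporn lists into a simple Python data structure; the class name, field names and example values are hypothetical, not part of any existing standard or of the researchers’ proposal.

```python
# A hypothetical "nutrition label" for an AI model, collecting the kinds
# of fields described in the article. Illustrative sketch only.

from dataclasses import dataclass


@dataclass
class AINutritionLabel:
    model_name: str
    training_data: list[str]      # datasets the model was trained on
    architecture: str             # coding/model architecture
    biases_tested: list[str]      # biases the model has been tested for
    potential_misuses: list[str]  # known ways the system could be abused
    mitigations: list[str]        # options for limiting those harms

    def summary(self) -> str:
        """Render the label as a short, human-readable block of text."""
        return "\n".join(f"{name}: {value}" for name, value in vars(self).items())


label = AINutritionLabel(
    model_name="example-chatbot-v1",
    training_data=["public web text (snapshot date unspecified)"],
    architecture="transformer-based language model",
    biases_tested=["gender", "dialect"],
    potential_misuses=["impersonating an empathetic counselor"],
    mitigations=["content filters", "usage policy", "human escalation"],
)
print(label.summary())
```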
“It’s very hard to eliminate biases,” Beguš says. “Being very careful in what you put out and thinking about potential challenges as you develop your product is the only way.”
“A lot of conversation on AI bias is on the responses: Does it give biased answers?” Pataranutaporn says. “But when you think of human-AI interaction, it’s not just a one-way street. You need to think about what kind of biases people bring into the system.”