The uncle who believes 9/11 never happened. The next-door neighbor who thinks Biden stole the 2020 election. The Nieman Lab editor who’s been known to wonder if aliens do exist and the U.S. government is covering them up.
You probably don’t want to be the one to try to convince these people that they’re wrong. But what if an AI chatbot could do it for you? That’s exactly what a group of researchers just did. In their peer-reviewed article “Durably reducing conspiracy beliefs through dialogues with AI,” featured on the cover of Science this week, Thomas Costello of American University, Gordon Pennycook of Cornell, and David Rand of MIT explain how they put 2,190 conspiracy-believing Americans in brief but detailed conversation with the large language model GPT-4 Turbo.
Those conversations worked: They “reduced participants’ belief in their chosen conspiracy theory by 20% on average,” the authors write. The effect “persisted undiminished for at least 2 months” and “was consistently observed across a wide range of conspiracy theories, from classic conspiracies involving the assassination of John F. Kennedy, aliens, and the illuminati, to those pertaining to topical events such as COVID-19 and the 2020 U.S. presidential election, and occurred even for participants whose conspiracy beliefs were deeply entrenched and important to their identities.”
The studies suggest that, contrary to the common belief that people who have gone “down the rabbit hole” are beyond rescue, they can actually be brought back. They also offer a compelling demonstration of “the potential positive impacts of generative AI when deployed responsibly,” the authors write — and “the pressing importance of minimizing opportunities for this technology to be used irresponsibly.”
David Rand, study coauthor and the Erwin H. Schell Professor and professor of management science and brain and cognitive sciences at MIT, told me the research is novel. He’s pretty sure this is the first study that’s used an LLM to reduce conspiracy beliefs — but comparably broad studies haven’t been done with humans in the debunking role, either. “One of the issues is that it’s impossible, from a practical perspective, to find a human expert to debunk the wide variety of conspiracy theories participants believe,” he said.
In one of the studies, each participant:

rated their belief in 15 popular conspiracy theories…completed a distractor task, and were then asked to identify and describe a particular conspiracy theory they believed in (not necessarily one of the 15 rated earlier) as well as providing details about evidence or experiences supporting their belief. In real time, the AI created a summary statement of each participant’s free-text conspiratorial belief description, and each participant was then asked to indicate their belief in the AI summary of their conspiracy statement — providing a pretreatment measure of belief.

Participants, who were “quota matched to the U.S. census on age, gender, race, and ethnicity,” were then randomly assigned to either have a three-round conversation with the AI about their “favored conspiracy belief” or to chat with the AI about a neutral topic.
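To make that setup concrete, here’s a minimal sketch of how a three-round debunking dialogue like this could be driven through the OpenAI API. This is not the researchers’ actual code: the prompt wording, the example belief summary, and the rating values are all illustrative assumptions.

```python
# A minimal sketch of the three-round dialogue loop described above.
# Not the study's actual code: prompt text, belief summary, and rating
# values are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical AI-generated summary of the participant's belief, plus
# their 0-100 pretreatment rating of it.
belief_summary = "The 1969 moon landing was staged in a film studio."
pre_rating = 85

messages = [
    {
        "role": "system",
        "content": (
            f"A participant believes this conspiracy theory (rated {pre_rating}/100): "
            f"{belief_summary} Using facts and evidence, persuade them it is "
            "unlikely to be true."
        ),
    }
]

for _ in range(3):  # the paper describes a three-round conversation
    messages.append({"role": "user", "content": input("Participant: ")})
    response = client.chat.completions.create(
        model="gpt-4-turbo",  # the study used GPT-4 Turbo
        messages=messages,
    )
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    print(f"AI: {reply}")

# Afterward the participant re-rates the same summary statement,
# yielding the posttreatment belief measure.
```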
One of the most fascinating parts of this experiment is that you can read through all of the AI’s thousands of conversations with human participants. They’re here, arranged by conspiracy belief, and filterable by how effective the intervention was (i.e., how much a person’s belief changed from before to after the conversation with GPT-4 Turbo). I loved reading through some of the conversations and put excerpts at the bottom of this post.
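Sorting those transcripts by effectiveness boils down to comparing each participant’s pre- and post-conversation ratings. Here’s a toy sketch of that comparison; the records and field names below are invented, not the published dataset’s actual schema.

```python
# Toy illustration only: these records and field names are invented,
# not the published dataset's schema.
conversations = [
    {"theory": "JFK assassination", "pre": 90, "post": 55},
    {"theory": "Moon landing hoax", "pre": 70, "post": 68},
    {"theory": "2020 election", "pre": 95, "post": 60},
]

# Belief change in points on the 0-100 scale; bigger drops mean the
# intervention was more effective.
for c in sorted(conversations, key=lambda c: c["post"] - c["pre"]):
    print(f"{c['theory']}: {c['pre']} -> {c['post']} ({c['post'] - c['pre']:+d})")
```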
I was really struck by how polite both GPT-4 Turbo and most of the respondents were, and asked Rand if participants somehow see the AI chatbot as being objective in a way another human wouldn’t be. That’s possible, he said, but he noted that “in a not-yet-published followup we explicitly told participants that the AI was going to try to talk them out of believing the conspiracy, or that they were supposed to have a debate with the AI, and it still worked just as well. So telling people the AI isn’t neutral doesn’t undermine the effect.”
People who trust AI less did show a smaller effect in their belief change, Rand said, “but it still works even for people who strongly distrust AI.” In another follow-up study, the researchers had GPT-4 explain structural racism to Republicans. “Although a lot of people accuse the AI of being ‘woke,’ etc.,” he said, “it still works pretty much as well as the conspiracy debunking bot.”
As for the robotic level of politeness? “It is definitely very polite and does a lot of rapport building,” Rand acknowledged — so they tested that in yet another follow-up study: “We tell it not to do that and instead just present the facts, etc., and it still works just as well. So I think the politeness isn’t key, and instead it’s about the facts and evidence. On the flip side, though, I bet it would work less well if it was outright rude.” (That said: He pointed to other studies suggesting that, in misinformation correction, tone doesn’t matter that much.)
Read on for conversation excerpts and read the full paper here.
— Laura Hazard Owen
From the week

An AI chatbot helped Americans who believe in conspiracy theories “exit the rabbit hole”
“It still works even for people who strongly distrust AI.” By Laura Hazard Owen.

Documentary filmmakers publish new AI ethics guidelines. Are news broadcasters next?
The Archival Producers Alliance’s new generative AI guardrails put audience transparency first. By Andrew Deck.

Mobile newsrooms help drive citizen journalism in North Macedonia and beyond
“With each region we visited, the audience from that region grew, and they have continued to follow us to this day.” By Lex Doig.

The California Google deal could leave out news startups and the smallest publishers
“We don’t know whether or how this nonprofit and its fund will operate, and likely won’t for some months (nonprofit governance is many things, but fast is not one of them).” By Sophie Culpepper.

With an expansion on the way, Ken Doctor’s Lookout thinks it has some answers to the local news crisis
After finding success — and a Pulitzer Prize — in Santa Cruz, Lookout aims to replicate its model in Oregon. “All of these playbooks are at least partially written. You sometimes hear people say, ‘Nobody’s figured it out yet.’ But this is all about execution.” By Joshua Benton.

Big tech is painting itself as journalism’s savior. We should tread carefully.
“We set out to explore how big tech’s ‘philanthrocapitalism’ could be reshaping the news industry, focusing on countries in the Global South…Our findings suggest an emerging web of dependency between cash-strapped newsrooms and Silicon Valley’s deep pockets.” By Mathias Felipe de Lima Santos.

Rebooting the Minnesota Star Tribune: A conversation with Steve Grove
“We would like to see at least 25% of our P&L look different in a couple of years than it does now…I don’t think any media company right now can just be banking on subscriptions to save the day.” By Richard Tofel.