“Intuition is not a great guide for this work.”
Here’s an uncomfortable truth about mis- and disinformation: We still don’t know the best ways to combat it.

David Rand is an MIT professor who studies cognitive science, and he conducts experiments to test ways to fight misinformation and disinformation. The bad news: Rand said approaches like emphasizing the source of a piece of information may not be as effective as we think. The good news? Sometimes the things we think won’t work, like crowdsourced fact-checking, in fact show surprising promise.

“Intuition is not a great guide for this work, because psychology is complicated. And the misinformation problem is fundamentally a psychological problem, not a technological problem,” he told us this week.

On Tuesday, Rand and colleagues released a working paper on an experiment testing whether deepfakes pose a special threat because people are more likely to be tricked by video than by text. Well, nope. Rand found there wasn’t a huge difference. “It’s not like video is way more compelling or way more believable than text. It’s like a teeny bit more,” he said.

His research is a reminder that there’s a lot we don’t know about how we process information and media. Here are four assumptions his work gives us reason to question.

1. Political bias prevents people from judging the accuracy of news articles.

Rand found that people who were asked to judge the accuracy of a news article, and given time to do it, were surprisingly effective at spotting false or misleading information. “So just getting people to slow down and think makes them more accurate in particular, and makes them less likely to believe fake news, regardless of whether the fake news aligns with their ideology or not,” he said.

2. Showing people warnings on false articles on social media won’t work.

This is something Facebook is doing, and many people are skeptical. Rand’s work found that people were in fact less likely to believe and share articles with warnings attached to them. But warning labels create another problem: “It makes people believe and share all the other unlabeled false headlines more.” He and colleagues call it the “implied truth effect.” People assume that if some articles carry warnings, those that don’t must be accurate.

3. Crowdsourced fact-checking won’t work.

Rand initially thought “people are just going to be biased and say they trust the things that agree with their ideology.” Then he tested it and found the opposite: Crowdsourced fact-checking could work, because Democrats and Republicans were largely able to judge the accuracy of content from hyperpartisan and fake news sites. He and colleagues are now working with Facebook to design such a system.

4. Highlighting the source of an article on social media helps people determine its credibility.

“We find that it doesn’t actually help at all, more or less,” Rand said. “The reason is: People mostly made their judgments based on the headlines, which were often a reliable cue about the accuracy of the story.”
Got a tip? Email us: fakenewsletter@buzzfeed.com or find us on Twitter: @craigsilverman and @janelytv. Want to communicate with us securely? Here’s how: tips.buzzfeed.com
P.S. If you like this newsletter, help keep our reporting free for all. Support BuzzFeed News by becoming a member here. (Monthly memberships are available worldwide.) 💌 Did a friend forward you this email? Sign up to get The Fake Newsletter in your inbox!