Why do we believe the stuff we read online?
The illusory truth effect, source confusion, and red kangaroos
There’s an episode of the podcast Rabbit Hole where reporter Kevin Roose interviews computer scientist and former YouTube employee Guillaume Chaslot. After Chaslot joined YouTube in 2010, his team made a small tweak to the video recommendation algorithm. Instead of prompting users to watch the videos with the most clicks, the algorithm began prompting them to watch videos similar to what they’d watched before. It was a subtle change, but one that increased watch time by over 50 percent. It also led to a new kind of online experience: one that sucked users into a vortex of repetitive content. An experience that fed users hours of videos that told them the same story, pushed the same ideas, and showed them more, and more, of what they’d seen before.
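To make the tweak concrete, here’s a minimal sketch, in Python, of the difference between the two ranking strategies. This is an illustration only: the video IDs, topics, and click counts are made up, and this is not YouTube’s actual code.

```python
# Toy catalog of videos. All data here is invented for illustration.
videos = {
    "a": {"clicks": 9_000, "topic": "news"},
    "b": {"clicks": 5_000, "topic": "conspiracy"},
    "c": {"clicks": 1_200, "topic": "conspiracy"},
    "d": {"clicks": 800, "topic": "cooking"},
}

def rank_by_clicks(videos):
    """Old strategy: recommend whatever the most people clicked."""
    return sorted(videos, key=lambda v: videos[v]["clicks"], reverse=True)

def rank_by_similarity(videos, watch_history):
    """New strategy: recommend whatever resembles what you watched before."""
    watched_topics = {videos[v]["topic"] for v in watch_history}
    return sorted(
        videos,
        # Videos matching a previously watched topic sort first;
        # clicks only break ties within each group.
        key=lambda v: (videos[v]["topic"] in watched_topics, videos[v]["clicks"]),
        reverse=True,
    )

print(rank_by_clicks(videos))                           # ['a', 'b', 'c', 'd']
print(rank_by_similarity(videos, watch_history=["c"]))  # ['b', 'c', 'a', 'd']
```

Notice what the similarity-based ranking does: watch a single video on a topic, and every other video on that topic floats to the top of your recommendations. That’s the vortex.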
Chaslot describes a bus ride he took after quitting his job at the company. As luck would have it, he found himself sitting next to a man watching—you guessed it—YouTube videos. This went on for hours, each successive recommended video describing a secret plan to kill two billion people. Chaslot engaged the man in conversation, asking what he believed and why.
[The media] are not going to tell you about it, the man said. But if you look on YouTube, you’ll find the truth.
There are so many videos like that. It has to be true.
The illusory truth effect
We hear a lot about online “misinformation,” or incorrect information presented as fact.1 Social media’s large networks and instantaneous sharing create an environment where information, true or not, spreads fast. We’re exposed to an unbelievable quantity of information each day, and some of it is, simply, not true. So why do we believe it?
One theory is what’s called the “illusory truth effect.” The idea is simple: we are more likely to judge something as accurate when we’ve been exposed to it before. Here’s a classic study—not related to the Internet—to illustrate. Researchers took 40 college students and read them a series of statements, some true (e.g., “The thigh bone is the longest bone in the human body”) and some false (e.g., “The capybara is the largest of the marsupials”).2 The students rated how true they believed each statement to be on a scale of 1 to 7. Two weeks later, they heard another series of statements. Some were repeated from the first session, some were not. They did the same thing another two weeks later.
The result? Statements that students had heard in prior sessions were rated as more true, whether those statements were actually true or false. The change wasn’t huge—it would be exceedingly rare for a statement to go from “definitely false” to “definitely true.”
But it was enough to begin sanding down the edges of belief or disbelief, enough to erode, almost imperceptibly, what they knew to be true.
Since that original study, a mounting body of research has explored the illusory truth effect.
One line of this research tests the effects of repetition frequency. In other words, does our tendency to rate a statement as true increase the more times we’ve been exposed to it? It does. According to a 2021 study, our belief in statements like “The gestation period of a giraffe is 425 days”3 increases with repeated exposure. The sharpest increase in truth rating happens after the first exposure. If we see it once, we’re definitely more likely to believe it. But each time we’re exposed to it—up to 27 times in this study—our perception of its accuracy goes up.
Another important step in this research has been to test the limits around plausibility. The statements in the original study were plausibly true—those facts about capybaras and giraffes seemed reasonable enough. Surely, this effect would not hold for totally implausible statements like, say, “The tallest person in the world is 35 feet tall” or “George Washington was born in Beijing, China.”
Right? Wrong.4 According to at least one recent study, the illusory truth effect holds regardless of whether we’re judging if “A prune is a dried plum” (it is) or “The earth is a perfect square” (it is not). “Even highly implausible statements,” the authors conclude, “will become more plausible with enough repetition.”
What about the Internet?
This brings us to the next logical extension of this research. What happens when we see statements like these on the Internet—say, our second cousin claiming that vaccines implant microchips5, or a headline suggesting Keanu Reeves is immortal? The illusory truth effect still holds.6
In a 2017 study, researchers presented participants with a series of headlines—some true, some fake—in the form of Facebook posts. Here are two examples (both fake):7

[Two example headlines, formatted as Facebook posts, appeared here as images.]
Some of the headlines were labeled “Disputed by 3rd Party Fact Checkers,” some were not. In addition, as in prior studies, some of the headlines were repeated across multiple sessions. What did the authors find? You guessed it. Whether true or false, prior exposure to a headline led to increased accuracy ratings. This effect held a full week later and held in the face of the warning labels. The takeaway: the illusory truth effect is powerful and long-lasting, even online.
Repeated exposure to a statement increases our perception of its truth. Over time, this is compounded by a common error in our memories called source confusion (sometimes called “misattribution of memory”). Originally described by Daniel Schacter in his “seven sins of memory,” source confusion occurs when we incorrectly recall the source from which we learned a piece of information. We hear something from a friend, for example, but then later mistakenly believe that we read it in the New York Times.
Here’s how this might all play out on social media. You’re browsing Facebook and, almost outside of conscious awareness, you glance at your great-uncle’s recent post. He shares an article claiming that Justin Bieber is a shape-shifting lizard person. A week later, a friend mentions that they’ve been reading about the Justin-Bieber’s-a-reptile theory. Huh, you think, that doesn’t sound totally implausible (illusory truth effect). Didn’t I read about that somewhere? Must have seen it on the news (source confusion). Another week passes. You’re back on Facebook. You see a headline: “Bieber lizard status: confirmed.” This might actually have some truth to it, you think. I’m going to share it. And so on.
What can we do about it?
Social media plays a powerful role in shaping what we believe to be true or false. If all it takes is exposure to a statement to get the belief wheels turning, the 167 million videos TikTokers watch every minute—and our inability to remember where we learned the information in those videos—are influential. So what can be done to combat our instinctive belief in false information online?
We know from the study above that attaching warning labels is unlikely to be enough. Relying on third-party fact-checkers is problematic, too: they don’t always agree, and the manual process of checking facts simply doesn’t scale.
But recent evidence suggests a solution that sounds too easy to be true: simply ask people to rate the accuracy of an article before they share it. It turns out, this may be a powerful approach for two reasons.
First, aggregating this data would provide platforms (and their algorithms) with information on which articles are likely true, relying on a scalable “wisdom of the crowd” solution instead of individual fact-checkers.
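Here’s a minimal sketch of what that aggregation could look like, assuming a platform collects 1-to-5 accuracy ratings from users before they share. The function names, rating scale, and threshold below are my own illustrative assumptions, not any platform’s real API.

```python
# A minimal sketch of crowd-sourced accuracy scoring.
# Rating scale (1-5) and threshold are illustrative assumptions.
from collections import defaultdict
from statistics import mean
from typing import Optional

# Maps each article ID to the list of accuracy ratings users gave it.
ratings: dict = defaultdict(list)

def record_rating(article_id: str, accuracy: int) -> None:
    """Store one user's 1-5 accuracy rating, collected before they share."""
    ratings[article_id].append(accuracy)

def crowd_score(article_id: str, min_ratings: int = 50) -> Optional[float]:
    """Average the crowd's ratings; withhold judgment on thin data."""
    scores = ratings[article_id]
    if len(scores) < min_ratings:
        return None  # not enough independent raters yet
    return mean(scores)

# A ranking algorithm could then down-rank articles with low crowd
# scores, without a human fact-checker ever touching them.
```

The design choice doing the work is the minimum-rater threshold: the “wisdom of the crowd” only kicks in once enough independent ratings have accumulated, so a handful of motivated users can’t crown an article true or false on their own.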
Second, and perhaps more importantly, asking people to rate the accuracy of an article forces them to slow down. The illusory truth effect, and the snap judgments that follow it, happen in an instant. The evidence suggests that when we take a second to reflect on the accuracy of a statement, we’re better able to discern truth from fiction, less likely to share false information, and less likely to encode false information into our memories simply by virtue of having been exposed to it.
It won’t solve everything, but the simple act of slowing down may be one antidote to misinformation. Sometimes we need a second—just a second—to remind ourselves that just because we saw it online doesn’t mean it’s true.
A quick survey
What did you think of this week’s Techno Sapiens? Your feedback helps me make this better. Thanks!
The Best | Great | Good | Meh | The Worst
Misinformation is a broad term, but generally it means incorrect information that is presented as fact. We often think of this as nefarious, but it doesn’t need to be—like the time I mistakenly told my fourth-grade class that my best friend had lice. Disinformation, on the other hand, is incorrect information that is presented as true in a way that is deliberately deceptive.
Yes, I also wondered what, if not the capybara, is the largest of the marsupials. A quick search revealed that it’s the red kangaroo. This search also revealed that the red kangaroo can weigh up to 200 pounds, hop at 30 to 40 miles per hour, and jump over 10 feet high. This is approximately the height of a school bus. Also, they congregate in numbers up to 1,500 at a time. And apparently, they’re just roaming around Australia? And no one is concerned about this? Aussie techno sapiens: please confirm.
These studies really seem to love using facts about animals, and this one is true. Apparently, the gestation period of giraffes is among the longest of any mammal—between 425 and 465 days, or about 15 months. Also, their babies are between 104 and 220 pounds and 6 feet tall when they’re born. And here I was, feeling pretty good about 9 months and a 7-pounder.
There is, of course, some debate about how far you can go with the implausibility. Here, the authors state the following: “Contrary to many intuitions, our results suggest that belief in all statements is increased by repetition. The observed illusory truth effect is largest for ambiguous items, but this can be explained by the psychometric properties of the task, rather than an underlying psychological mechanism that blocks the impact of repetition for implausible items.” Put simply, once you take a careful look at the stats, it seems that the effect happens pretty consistently across plausible and implausible items. In case you’re curious, here’s the full list of statements used in the study.
In July 2021, an Economist/YouGov poll asked 1,500 U.S. adults: “How likely is it that the following scenario is true? The U.S. government is using the COVID-19 vaccine to microchip the population.” The results suggest that 15% of people think this statement is “probably true” and 5% think it is “definitely true.”
Note that the “fake news” study came out before the implausibility study, and was done by some of the same researchers. In the “fake news” study, the authors concluded that the illusory truth effect did not hold for highly improbable headlines. However, in the later study, they cited new evidence and fancy stats (see above), arguing that their prior theory had been incorrect. Science in action!