cognitive bias


Conspiracy theorists don’t realize they’re on the fringe


Gordon Pennycook: “It might be one of the biggest false consensus effects that’s been observed.”

Credit: Aurich Lawson / Thinkstock

Belief in conspiracy theories is often attributed to some form of motivated reasoning: People want to believe a conspiracy because it reinforces their worldview, for example, or doing so meets some deep psychological need, like wanting to feel unique. However, it might also be driven by overconfidence in their own cognitive abilities, according to a paper published in the Personality and Social Psychology Bulletin. The authors were surprised to discover that not only are conspiracy theorists overconfident, but they also don’t realize their beliefs are on the fringe, overestimating how much other people agree with them by as much as a factor of four.

“I was expecting the overconfidence finding,” co-author Gordon Pennycook, a psychologist at Cornell University, told Ars. “If you’ve talked to someone who believes conspiracies, it’s self-evident. I did not expect them to be so ready to state that people agree with them. I thought that they would overestimate, but I didn’t think that there’d be such a strong sense that they are in the majority. It might be one of the biggest false consensus effects that’s been observed.”

In 2015, Pennycook made headlines when he co-authored a paper demonstrating how certain people interpret “pseudo-profound bullshit” as deep observations. Pennycook et al. were interested in identifying individual differences between those who are susceptible to pseudo-profound BS and those who are not, so they also looked at participants’ conspiracy beliefs, degree of analytical thinking, religious beliefs, and so forth.

They presented several randomly generated statements, containing “profound” buzzwords, that were grammatically correct but made no sense logically, along with a 2014 tweet by Deepak Chopra that met the same criteria. They found that the less skeptical participants were less logical and analytical in their thinking and hence much more likely to consider these nonsensical statements deeply profound. That study was a bit controversial, in part for what was perceived to be its condescending tone, along with questions about its methodology. But it did snag Pennycook et al. a 2016 Ig Nobel Prize.

Last year we reported on another Pennycook study, presenting results from experiments in which an AI chatbot engaged in conversations with people who believed at least one conspiracy theory. That study showed that the AI interaction significantly reduced the strength of those beliefs, even two months later. The secret to its success: the chatbot, with its access to vast amounts of information across an enormous range of topics, could precisely tailor its counterarguments to each individual. “The work overturns a lot of how we thought about conspiracies, that they’re the result of various psychological motives and needs,” Pennycook said at the time.

Miscalibrated from reality

Pennycook has been working on this new overconfidence study since 2018, perplexed by observations indicating that people who believe in conspiracies also seem to have a lot of faith in their cognitive abilities—contradicting prior research finding that conspiracists are generally more intuitive. To investigate, he and his co-authors conducted eight separate studies that involved over 4,000 US adults.

The assigned tasks were designed in such a way that participants’ actual performance and how they perceived their performance were unrelated. For example, in one experiment, they were asked to guess the subject of an image that was largely obscured. The subjects were then asked direct questions about their belief (or lack thereof) concerning several key conspiracy claims: that the Apollo Moon landings were faked, for example, or that Princess Diana’s death wasn’t an accident. Four of the studies focused on testing how subjects perceived others’ beliefs.

The results showed a marked association between subjects’ tendency to be overconfident and belief in conspiracy theories. And while a given conspiracy’s claims were believed by a majority of participants just 12 percent of the time, believers thought they were in the majority 93 percent of the time. This suggests that overconfidence is a primary driver of belief in conspiracies.

It’s not that believers in conspiracy theories are massively overconfident; there is no data on that, because the studies didn’t set out to quantify the degree of overconfidence, per Pennycook. Rather, “They’re overconfident, and they massively overestimate how much people agree with them,” he said.

Ars spoke with Pennycook to learn more.

Ars Technica: Why did you decide to investigate overconfidence as a contributing factor to believing conspiracies?

Gordon Pennycook: There’s a popular sense that people believe conspiracies because they’re dumb and don’t understand anything, they don’t care about the truth, and they’re motivated by believing things that make them feel good. Then there’s the academic side, where that idea molds into a set of theories about how needs and motivations drive belief in conspiracies. It’s not someone falling down the rabbit hole and getting exposed to misinformation or conspiratorial narratives. They’re strolling down: “I like it over here. This appeals to me and makes me feel good.”

Believing things that no one else agrees with makes you feel unique. Then there’s various things I think that are a little more legitimate: People join communities and there’s this sense of belongingness. How that drives core beliefs is different. Someone may stop believing but hang around in the community because they don’t want to lose their friends. Even with religion, people will go to church when they don’t really believe. So we distinguish beliefs from practice.

What we observed is that they do tend to strongly believe these conspiracies despite the fact that there’s counter evidence or a lot of people disagree. What would lead that to happen? It could be their needs and motivations, but it could also be that there’s something about the way that they think where it just doesn’t occur to them that they could be wrong about it. And that’s where overconfidence comes in.

Ars Technica: What makes this particular trait such a powerful driving force?

Gordon Pennycook: Overconfidence is one of the most important core underlying components, because if you’re overconfident, it stops you from really questioning whether the thing that you’re seeing is right or wrong, and whether you might be wrong about it. You have an almost moral purity of complete confidence that the thing you believe is true. You cannot even imagine what it’s like from somebody else’s perspective. You couldn’t imagine a world in which the things that you think are true could be false. Having overconfidence is that buffer that stops you from learning from other people. You end up not just going down the rabbit hole, you’re doing laps down there.

Overconfidence doesn’t have to be learned, parts of it could be genetic. It also doesn’t have to be maladaptive. It’s maladaptive when it comes to beliefs. But you want people to think that they will be successful when starting new businesses. A lot of them will fail, but you need some people in the population to take risks that they wouldn’t take if they were thinking about it in a more rational way. So it can be optimal at a population level, but maybe not at an individual level.

Ars Technica: Is this overconfidence related to the well-known Dunning-Kruger effect?

Gordon Pennycook: It’s because of Dunning-Kruger that we had to develop a new methodology to measure overconfidence, because the people who are the worst at a task are the worst at knowing that they’re the worst at the task. But that’s because the same things that you use to do the task are the things you use to assess how good you are at the task. So if you were to give someone a math test and they’re bad at math, they’ll appear overconfident. But if you give them a test of assessing humor and they’re good at that, they won’t appear overconfident. That’s about the task, not the person.

So we have tasks where people essentially have to guess, and it’s transparent. There’s no reason to think that you’re good at the task. In fact, people who think they’re better at the task are not better at it, they just think they are. They just have this underlying kind of sense that they can do things, they know things, and that’s the kind of thing that we’re trying to capture. It’s not specific to a domain. There are lots of reasons why you could be overconfident in a particular domain. But this is something that’s an actual trait that you carry into situations. So when you’re scrolling online and come up with these ideas about how the world works that don’t make any sense, it must be everybody else that’s wrong, not you.

Ars Technica: Overestimating how many people agree with them seems to be at odds with conspiracy theorists’ desire to be unique.  

Gordon Pennycook: In general, people who believe conspiracies often have contrary beliefs. We’re working with a population where coherence is not to be expected. They say that they’re in the majority, but it’s never a strong majority. They just don’t think that they’re in a minority when it comes to the belief. Take the case of the Sandy Hook conspiracy, where adherents believe it was a false flag operation. In one sample, 8 percent of people thought that this was true. That 8 percent thought 61 percent of people agreed with them.

So they’re way off. They really, really miscalibrated. But they don’t say 90 percent. It’s 60 percent, enough to be special, but not enough to be on the fringe where they actually are. I could have asked them to rank how smart they are relative to others, or how unique they thought their beliefs were, and they would’ve answered high on that. But those are kind of mushy self-concepts. When you ask a specific question that has an objectively correct answer in terms of the percent of people in the sample that agree with you, it’s not close.

Ars Technica: How does one even begin to combat this? Could last year’s AI study point the way?

Gordon Pennycook: The AI debunking effect works better for people who are less overconfident. In those experiments, very detailed, specific debunks had a much bigger effect than people expected. After eight minutes of conversation, a quarter of the people who believed the thing didn’t believe it anymore, but 75 percent still did. That’s a lot. And some of them, not only did they still believe it, they still believed it to the same degree. So no one’s cracked that. Getting any movement at all in the aggregate was a big win.

Here’s the problem. You can’t have a conversation with somebody who doesn’t want to have the conversation. In those studies, we’re paying people, but they still get out what they put into the conversation. If you don’t really respond or engage, then our AI is not going to give you good responses because it doesn’t know what you’re thinking. And if the person is not willing to think. … This is why overconfidence is such an overarching issue. The only alternative is some sort of propagandistic sit-them-downs with their eyes open and try to de-convert them. But you can’t really convert someone who doesn’t want to be converted. So I’m not sure that there is an answer. I think that’s just the way that humans are.

Personality and Social Psychology Bulletin, 2025. DOI: 10.1177/01461672251338358  (About DOIs).


Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.



We’ve outsourced our confirmation biases to search engines

So, the researchers decided to see if they could upend it.

Keeping it general

The simplest way to change the dynamics of this was simply to change the results returned by the search. So, the researchers did a number of experiments where they gave all of the participants the same results, regardless of the search terms they had used. When everybody gets the same results, their opinions after reading them tend to move in the same direction, suggesting that search results can help change people’s opinions.

The researchers also tried giving everyone the results of a broad, neutral search, regardless of the terms they’d entered. This reduced the likelihood that prior beliefs would survive the process of formulating and executing a search. In other words, sidestepping the focused, biased search terms people had entered allowed some participants to see information that could change their minds.

Despite all the swapping, participants continued to rate the search results as relevant. So, providing more general search results even when people were looking for more focused information doesn’t seem to harm people’s perception of the service. In fact, Leung and Urminsky found that the AI version of Bing search would reformulate narrow questions into more general ones.

That said, making this sort of change wouldn’t be without risks. There are a lot of subject areas where a search shouldn’t return a broad range of information—where grabbing a range of ideas would expose people to fringe and false information.

Nevertheless, it can’t hurt to be aware of how we can use search services to reinforce our biases. So, in the words of Leung and Urminsky, “When search engines provide directionally narrow search results in response to users’ directionally narrow search terms, the results will reflect the users’ existing beliefs, instead of promoting belief updating by providing a broad spectrum of related information.”

PNAS, 2025. DOI: 10.1073/pnas.2408175122  (About DOIs).



How to avoid the cognitive hooks and habits that make us vulnerable to cons

Daniel Simons and Christopher Chabris are the authors of Nobody’s Fool: Why We Get Taken In and What We Can Do About It.

Basic Books

There’s rarely time to write about every cool science-y story that comes our way. So this year, we’re once again running a special Twelve Days of Christmas series of posts, highlighting one science story that fell through the cracks in 2023, each day from December 25 through January 5. Today: A conversation with psychologists Daniel Simons and Christopher Chabris on the key habits of thinking and reasoning that may serve us well most of the time, but can make us vulnerable to being fooled.

It’s one of the most famous experiments in psychology. Back in 1999, Daniel Simons and Christopher Chabris conducted an experiment on inattentional blindness. They asked test subjects to watch a short video in which six people—half in white T-shirts, half in black ones—passed basketballs around. The subjects were asked to count the number of passes made by the people in white shirts. Halfway through the video, a person in a gorilla suit walked into the midst of the players and thumped their chest at the camera before strolling off-screen. What surprised the researchers was that fully half the test subjects were so busy counting the number of basketball passes that they never saw the gorilla.

The experiment became a viral sensation—helped by the amusing paper title, “Gorillas in Our Midst“—and snagged Simons and Chabris the 2004 Ig Nobel Psychology Prize. It also became the basis of their bestselling 2010 book, The Invisible Gorilla: How Our Intuitions Deceive Us. Thirteen years later, the two psychologists are back with their latest book, published last July, called Nobody’s Fool: Why We Get Taken In and What We Can Do About It.  Simons and Chabris have penned an entertaining examination of key habits of thinking that usually serve us well but also make us vulnerable to cons and scams. They also offer some practical tools based on cognitive science to help us spot deceptions before being taken in.

“People love reading about cons, yet they keep happening,” Simons told Ars. “Why do they keep happening? What is it those cons are tapping into? Why do we not learn from reading about Theranos? We realized there was a set of cognitive principles that seemed to apply across all of the domains, from cheating in sports and chess to cheating in finance and biotech. That became our organizing theme.”

Ars spoke with Simons and Chabris to learn more.

Ars Technica: I was surprised to learn that people still fall for basic scams like the Nigerian Prince scam. It reminds me of Fox Mulder’s poster on The X-Files: “I want to believe.”

Daniel Simons: The Nigerian Prince scam is an interesting one because it’s been around forever. Its original form was in letters. Most people don’t get fooled by that one. The vast majority of people look at it and say, this thing is written in terrible grammar. It’s a mess. And why would anybody believe that they’re the one to recover this vast fortune? So there are some people who fall for it, but it’s a tiny percentage of people. I think it’s still illustrative because that one is obviously too good to be true for most people, but there’s some small subset of people for whom it’s just good enough. It’s just appealing enough to say, “Oh yeah, maybe I could become rich.”

There was a profile in the New Yorker of a clinical psychologist who fell for it. There are people who, for whatever reason, are either desperate or have the idea that they deserve to inherit a lot of money. But there are a lot of scams that are much less obvious than that one, selecting for the people who are most naive about it. I think the key insight there is that we tend to assume that only gullible people fall for this stuff. That is fundamentally wrong. We all fall for this stuff if it’s framed in the right way.

Christopher Chabris: I don’t think they’re necessarily people who always want to believe. I think it really depends on the situation. Some people might want to believe that they can strike it rich in crypto, but they would never fall for a Nigerian email or, for that matter, they might not fall for a traditional Ponzi scheme because they don’t believe in fiat money or the stock market. Going back to the Invisible Gorilla, one thing we noticed was a lot of people would ask us, “What’s the difference between the people who noticed the gorilla and the people who didn’t notice the gorilla?” The answer is, well, some of them happened to notice it and some of them didn’t. It’s not an IQ or personality test. So in the case of the Nigerian email, there might’ve been something going on in that guy’s life at that moment when he got that email that maybe led him to initially accept the premise as true, even though he knew it seemed kind of weird. Then, he got committed to the idea once he started interacting with these people.


So one of our principles is commitment: the idea that if you accept something as true and you don’t question it anymore, then all kinds of bad decisions and bad outcomes can flow from that. So, if you somehow actually get convinced that these guys in Nigeria are real, that can explain the bad decisions you make after that. I think there’s a lot of unpredictableness about it. We all need to understand how these things work. We might think it sounds crazy and we would never fall for it, but we might if it was a different scam at a different time.
