

Conspiracy theorists don’t realize they’re on the fringe


Gordon Pennycook: “It might be one of the biggest false consensus effects that’s been observed.”

Credit: Aurich Lawson / Thinkstock

Belief in conspiracy theories is often attributed to some form of motivated reasoning: People want to believe a conspiracy because it reinforces their worldview, for example, or doing so meets some deep psychological need, like wanting to feel unique. However, it might also be driven by overconfidence in their own cognitive abilities, according to a paper published in the Personality and Social Psychology Bulletin. The authors were surprised to discover that not only are conspiracy theorists overconfident, they also don’t realize their beliefs are on the fringe, massively overestimating by as much as a factor of four how much other people agree with them.

“I was expecting the overconfidence finding,” co-author Gordon Pennycook, a psychologist at Cornell University, told Ars. “If you’ve talked to someone who believes conspiracies, it’s self-evident. I did not expect them to be so ready to state that people agree with them. I thought that they would overestimate, but I didn’t think that there’d be such a strong sense that they are in the majority. It might be one of the biggest false consensus effects that’s been observed.”

In 2015, Pennycook made headlines when he co-authored a paper demonstrating how certain people interpret “pseudo-profound bullshit” as deep observations. Pennycook et al. were interested in identifying individual differences between those who are susceptible to pseudo-profound BS and those who are not, and so they looked at participants’ conspiracy beliefs, degree of analytical thinking, religious beliefs, and so forth.

They presented several randomly generated statements, containing “profound” buzzwords, that were grammatically correct but made no sense logically, along with a 2014 tweet by Deepak Chopra that met the same criteria. They found that the less skeptical participants were less logical and analytical in their thinking and hence much more likely to consider these nonsensical statements deeply profound. That study was a bit controversial, in part for what was perceived to be its condescending tone, along with questions about its methodology. But it did snag Pennycook et al. a 2016 Ig Nobel Prize.

Last year we reported on another Pennycook study, presenting results from experiments in which an AI chatbot engaged in conversations with people who believed at least one conspiracy theory. That study showed that the AI interaction significantly reduced the strength of those beliefs, even two months later. The secret to its success: the chatbot, with its access to vast amounts of information across an enormous range of topics, could precisely tailor its counterarguments to each individual. “The work overturns a lot of how we thought about conspiracies, that they’re the result of various psychological motives and needs,” Pennycook said at the time.

Miscalibrated from reality

Pennycook has been working on this new overconfidence study since 2018, perplexed by observations indicating that people who believe in conspiracies also seem to have a lot of faith in their cognitive abilities—contradicting prior research finding that conspiracists are generally more intuitive. To investigate, he and his co-authors conducted eight separate studies that involved over 4,000 US adults.

The assigned tasks were designed in such a way that participants’ actual performance and how they perceived their performance were unrelated. For example, in one experiment, participants were asked to guess the subject of an image that was largely obscured. The subjects were then asked direct questions about their belief (or lack thereof) in several key conspiracy claims: that the Apollo Moon landings were faked, for example, or that Princess Diana’s death wasn’t an accident. Four of the studies focused on testing how subjects perceived others’ beliefs.

The results showed a marked association between subjects’ tendency to be overconfident and belief in conspiracy theories. And while a majority of participants believed a given conspiracy’s claims just 12 percent of the time, believers thought they were in the majority 93 percent of the time. This suggests that overconfidence is a primary driver of belief in conspiracies.

It’s not that believers in conspiracy theories are massively overconfident; there is no data on that, because the studies didn’t set out to quantify the degree of overconfidence, per Pennycook. Rather, “They’re overconfident, and they massively overestimate how much people agree with them,” he said.

Ars spoke with Pennycook to learn more.

Ars Technica: Why did you decide to investigate overconfidence as a contributing factor to believing conspiracies?

Gordon Pennycook: There’s a popular sense that people believe conspiracies because they’re dumb and don’t understand anything, they don’t care about the truth, and they’re motivated by believing things that make them feel good. Then there’s the academic side, where that idea molds into a set of theories about how needs and motivations drive belief in conspiracies. It’s not someone falling down the rabbit hole and getting exposed to misinformation or conspiratorial narratives. They’re strolling down: “I like it over here. This appeals to me and makes me feel good.”

Believing things that no one else agrees with makes you feel unique. Then there’s various things I think that are a little more legitimate: People join communities and there’s this sense of belongingness. How that drives core beliefs is different. Someone may stop believing but hang around in the community because they don’t want to lose their friends. Even with religion, people will go to church when they don’t really believe. So we distinguish beliefs from practice.

What we observed is that they do tend to strongly believe these conspiracies despite the fact that there’s counter evidence or a lot of people disagree. What would lead that to happen? It could be their needs and motivations, but it could also be that there’s something about the way that they think where it just doesn’t occur to them that they could be wrong about it. And that’s where overconfidence comes in.

Ars Technica: What makes this particular trait such a powerful driving force?

Gordon Pennycook: Overconfidence is one of the most important core underlying components, because if you’re overconfident, it stops you from really questioning whether the thing that you’re seeing is right or wrong, and whether you might be wrong about it. You have an almost moral purity of complete confidence that the thing you believe is true. You cannot even imagine what it’s like from somebody else’s perspective. You couldn’t imagine a world in which the things that you think are true could be false. Having overconfidence is that buffer that stops you from learning from other people. You end up not just going down the rabbit hole, you’re doing laps down there.

Overconfidence doesn’t have to be learned, parts of it could be genetic. It also doesn’t have to be maladaptive. It’s maladaptive when it comes to beliefs. But you want people to think that they will be successful when starting new businesses. A lot of them will fail, but you need some people in the population to take risks that they wouldn’t take if they were thinking about it in a more rational way. So it can be optimal at a population level, but maybe not at an individual level.

Ars Technica: Is this overconfidence related to the well-known Dunning-Kruger effect?

Gordon Pennycook: It’s because of Dunning-Kruger that we had to develop a new methodology to measure overconfidence, because the people who are the worst at a task are the worst at knowing that they’re the worst at the task. But that’s because the same things that you use to do the task are the things you use to assess how good you are at the task. So if you were to give someone a math test and they’re bad at math, they’ll appear overconfident. But if you give them a test of assessing humor and they’re good at that, they won’t appear overconfident. That’s about the task, not the person.

So we have tasks where people essentially have to guess, and it’s transparent. There’s no reason to think that you’re good at the task. In fact, people who think they’re better at the task are not better at it, they just think they are. They just have this underlying kind of sense that they can do things, they know things, and that’s the kind of thing that we’re trying to capture. It’s not specific to a domain. There are lots of reasons why you could be overconfident in a particular domain. But this is something that’s an actual trait that you carry into situations. So when you’re scrolling online and come up with these ideas about how the world works that don’t make any sense, it must be everybody else that’s wrong, not you.

Ars Technica: Overestimating how many people agree with them seems to be at odds with conspiracy theorists’ desire to be unique.  

Gordon Pennycook: In general, people who believe conspiracies often have contrary beliefs. We’re working with a population where coherence is not to be expected. They say that they’re in the majority, but it’s never a strong majority. They just don’t think that they’re in a minority when it comes to the belief. Take the case of the Sandy Hook conspiracy, where adherents believe it was a false flag operation. In one sample, 8 percent of people thought that this was true. That 8 percent thought 61 percent of people agreed with them.

So they’re way off. They really, really miscalibrated. But they don’t say 90 percent. It’s 60 percent, enough to be special, but not enough to be on the fringe where they actually are. I could have asked them to rank how smart they are relative to others, or how unique they thought their beliefs were, and they would’ve answered high on that. But those are kind of mushy self-concepts. When you ask a specific question that has an objectively correct answer in terms of the percent of people in the sample that agree with you, it’s not close.

Ars Technica: How does one even begin to combat this? Could last year’s AI study point the way?

Gordon Pennycook: The AI debunking effect works better for people who are less overconfident. In those experiments, very detailed, specific debunks had a much bigger effect than people expected. After eight minutes of conversation, a quarter of the people who believed the thing didn’t believe it anymore, but 75 percent still did. That’s a lot. And some of them, not only did they still believe it, they still believed it to the same degree. So no one’s cracked that. Getting any movement at all in the aggregate was a big win.

Here’s the problem. You can’t have a conversation with somebody who doesn’t want to have the conversation. In those studies, we’re paying people, but they still get out what they put into the conversation. If you don’t really respond or engage, then our AI is not going to give you good responses because it doesn’t know what you’re thinking. And if the person is not willing to think. … This is why overconfidence is such an overarching issue. The only alternative is some sort of propagandistic sit-them-downs with their eyes open and try to de-convert them. But you can’t really convert someone who doesn’t want to be converted. So I’m not sure that there is an answer. I think that’s just the way that humans are.

Personality and Social Psychology Bulletin, 2025. DOI: 10.1177/01461672251338358  (About DOIs).


Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.



Fanfic study challenges leading cultural evolution theory


Fanfic community craves familiarity much more than novelty—but reports greater enjoyment from novelty.

Credit: Aurich Lawson | Marvel

It’s widely accepted conventional wisdom that when it comes to creative works—TV shows, films, music, books—consumers crave an optimal balance between novelty and familiarity. What we choose to consume and share with others, in turn, drives cultural evolution.

But what if that conventional wisdom is wrong? An analysis based on data from a massive online fan fiction (fanfic) archive contradicts this so-called “balance theory,” according to a paper published in the journal Humanities and Social Sciences Communications. The fanfic community seems to overwhelmingly prefer more of the same, consistently choosing familiarity over novelty; however, they reported greater overall enjoyment when they took a chance and read something more novel. In short: “Sameness entices, but novelty enchants.”

Strictly speaking, authors have always copied characters and plots from other works (cf. many of William Shakespeare’s plays), although the advent of copyright law complicated matters. Modern fan fiction as we currently think of it arguably emerged with the 1967 publication of the first Star Trek fanzine (Spockanalia), which included spinoff fiction based on the series. Star Trek also spawned the subgenre of slash fiction, when writers began creating stories featuring Kirk and Spock (Kirk/Spock, or K/S) in a romantic (often sexual) relationship.

The advent of the World Wide Web brought fan fiction to the masses, starting with Usenet newsgroups and mailing lists and eventually the development of massive online archives where creators could upload their work to be read and commented upon by readers. The subculture has since exploded; there’s fanfic based on everything from Sherlock Holmes to The X-Files, Buffy the Vampire Slayer, Game of Thrones, the Marvel Cinematic Universe, and Harry Potter. You name it, there’s probably fanfic about it.

There are also many subgenres within fanfic beyond slash, some of them rather weird, like a magical pregnancy (Mpreg) story in which Sherlock Holmes and Watson fall so much in love with each other that one of them becomes magically pregnant. (One suspects Sherlock would not handle morning sickness very well.) Sometimes fanfic even breaks into the cultural mainstream: E.L. James’ bestselling Fifty Shades of Grey started out as fan fiction set in the world of Stephenie Meyer’s Twilight series.

So fanfic is a genuine cultural phenomenon—hence its fascination for Simon DeDeo, a complexity scientist at Carnegie Mellon University and the Santa Fe Institute who studies cultural evolution and the emergence of social hierarchies. (I reported on DeDeo’s work analyzing the archives of London’s Old Bailey in 2014.) While opinion remains split—even among the authors of the original works—as to whether fanfic is a welcome homage to the original works that just might help drive book sales or whether it constitutes a form of copyright infringement, DeDeo enthusiastically embraces the format.

“It’s the dark matter of creativity,” DeDeo told Ars. “I love that it exists. It’s a very non-elitist form. There’s no New York Times bestseller list. It would be hard to name the most famous fan fiction writers. The world building has been done. The characters exist. The plot elements have already been put together. So the bar to entry is lower. Maybe sometime in the 19th century we get a notion of genius and the individual creator, but that’s not really what storytelling has been about for the majority of human history. In that one sense, fan fiction is closer to what we were doing around the campfire.”


Star Trek arguably spawned contemporary fan fiction—including stories imagining Kirk and Spock as romantic partners. Credit: Paramount Pictures

That’s a boon for fanfic writers, most of whom have non-creative day jobs; fanfic provides them with a creative outlet. Every year, when DeDeo asks students in his classes whether they read and/or write fanfic, a significant percentage always raise their hands. (He once asked a woman about why she wrote slash. Her response: “Because no one was writing porn that I wanted to read.”) In fact, that’s how this current study came about. Co-author Elise Jing is one of DeDeo’s former students with a background in both science and the humanities—and she’s also a fanfic connoisseur.

Give them more of the same

Jing thought (and DeDeo concurred) that the fanfic subculture provided an excellent laboratory for studying cultural evolution. “It’s tough to get students to read a book. They write fan fiction voluntarily. This is stuff they care about writing and care about reading. Nobody gets prestige or power in the larger society from writing fan fiction,” said DeDeo. “This is not a top-down model where Hollywood is producing something and then the fans are consuming it. The fans are producing and consuming, so it’s a truly self-contained culture that’s constantly evolving. It’s a pure production-consumption cycle. People read it, they bookmark it, they write comments on it, and all that gives us insight into how it’s being received. If you’re a psychologist, you couldn’t pay to get this kind of data.”

Fanfic is a tightly controlled ecosystem, so it lacks many of the confounding factors that make it so difficult to study mainstream cultural works. Also, the fan fiction community is enormous, so the potential datasets are huge. For this study, the authors relied on data from the online Archive of Our Own (AO3), which boasts nearly 9 million users covering more than 70,000 different fandoms and some 15 million individual works. (Sadly, the site has since shut down access to its data over concerns that the data could be used to train AI.)

According to DeDeo, the idea was to examine the question of cultural evolution on a population level, rather than on the individual level: “How do these individual things agglomerate to produce the culture?”


Strong positive correlation is found between the response variables except for the Kudos-to-hits ratio. Topic novelty is weakly positively correlated with Kudos-to-hits ratio but negatively correlated with the other response variables. Credit: E. Jing et al., 2025

The results were striking. AO3 members overwhelmingly preferred familiarity in their fan fiction, i.e., more of the same. One notable exception was a short story that was both hugely popular and highly novel. Simply titled “I Am Groot,” the story featured the character from Guardians of the Galaxy. The text is just “I am Groot” repeated 40,000 times—a stroke of genius in that this is entirely consistent with the canonical MCU character, whose entire dialogue consists of those words, with meaning conveyed by shifts of tone and context. But such exceptions proved to be very rare.

“We were so stunned that balance theory wasn’t working,” said DeDeo, who credits Jing with the realization that they were dealing with two distinct pieces of the puzzle: how much is being consumed, and how much people like what they consume, i.e., enjoyment. Their analysis revealed, first, that people really don’t want an optimized mix of familiar and new; they want the same thing over and over again, even within the fanfic community. But when people do make the effort to try something new, they tend to enjoy it more than just consuming more of the same.

In short, “We are anti-balance theory,” said DeDeo. “In biology, for example, you make a small variation in the species and you get micro-evolution. In culture, a minor variation is just less likely to be consumed. So it really is a mystery how we evolve at all culturally; it’s not happening by gradual movement. We can see that there’s novelty. We can see that when people encounter novelty, they enjoy it. But we can’t quite make sense of how these two competing effects work out.”

“This is the great paradox,” said DeDeo. “Culture has to be stable. Without long-term stability, there’s no coherent body of work that can even constitute a culture if every year fan fiction totally changes. That inherent cultural conservatism is in some sense a precondition for culture to exist at all.” Yet culture does evolve, even within the fanfic community.

One possible alternative is some kind of punctuated equilibrium model for cultural evolution, in which things remain stable but undergo occasional leaps forward. “One story about how culture evolves is that eventually, the stuff that’s more enjoyable than what people keep re-consuming somehow becomes accessible to the majority of the community,” said DeDeo. “Novelty might act as a gravitational pull on the center and [over time] some new material gets incorporated into the culture.” He draws an analogy to established tech companies like IBM versus startups, most of which die out; but those few that succeed often push the culture substantially forward.

Perhaps there are two distinct groups of people: those who actively seek out new things and those who routinely click on familiar subject matter because even though their enjoyment might be less, it’s not worth overcoming their inertia to try out something new. Perhaps it is those who seek novelty that sow the seeds of eventual shifts in trends.

“Is it that we’re tired? Is it that we’re lazy? Is this a conflict within a human or within a culture?” said DeDeo. “We don’t know because we only get the raw numbers. If we could track an individual reader to see how they moved between these two spaces, that would be really interesting.”

Humanities and Social Sciences Communications, 2025. DOI: 10.1057/s41599-025-05166-3  (About DOIs).




Why incels take the “Blackpill”—and why we should care


“Don’t work for Soyciety”

A growing number of incels are NEET (Not in Education, Employment, or Training). That should concern us all.

The Netflix series Adolescence explores the roots of misogynistic subcultures. Credit: Netflix

The online incel (“involuntary celibate”) subculture is mostly known for its extreme rhetoric, primarily against women, sometimes erupting into violence. But a growing number of self-identified incels are using their ideology as an excuse for not working or studying. This could constitute a kind of coping mechanism to make sense of their failures—not just in romantic relationships but also in education and employment, according to a paper published in the journal Gender, Work & Organization.

Contrary to how it’s often portrayed, the “manosphere,” as it is often called, is not a monolith. Those who embrace the “Redpill” ideology, for example, might insist that women control the “sexual marketplace” and are only interested in ultramasculine “Chads.” They champion self-improvement as a means to make themselves more masculine and successful, and hence (they believe) more attractive to women—or at least better able to manipulate women.

By contrast, the “Blackpilled” incel contingent is generally more nihilistic. These individuals reject the Redpill notion of alpha-male masculinity and the accompanying focus on self-improvement. They believe that dating and social success are entirely determined by one’s looks and/or genetics. Since there is nothing they can do to improve their chances with women or their lot in life, why even bother?

“People have a tendency to lump all these different groups together as the manosphere,” co-author AnnaRose Beckett-Herbert of McGill University told Ars. “One critique I have of the recent Netflix show Adolescence—which was well done overall—is they lump incels in with figures like Andrew Tate, as though it’s all interchangeable. There’s areas of overlap, like extreme misogyny, but there are really important distinctions. We have to be careful to make those distinctions because the kind of intervention or prevention efforts that we might direct towards the Redpill community versus the Blackpill community might be very different.”

Incels constitute a fairly small fraction of the manosphere, but the vast majority of them appear to embrace the Blackpill ideology, per Beckett-Herbert. That nihilistic attitude can extend to any kind of participation in what incels term “Soyciety”—including educational attainment and employment. When that happens, such individuals are best described by the acronym NEET (Not in Education, Employment, or Training).

“It’s not that we have large swaths of young men that are falling into this rabbit hole,” said Beckett-Herbert. “Their ideology is pretty fringe, but we’re seeing the community grow, and we’re seeing the ideology spread. It used to be contained to romantic relationships and sex. Now we’re seeing this broader disengagement from society as a whole. We should all be concerned about that trend.”

The NEET trend is also tied to the broader cultural discourse on how boys and young men are struggling in contemporary society. While prior studies tended to focus on the misogynistic rhetoric and propensity for violence among incels, “I thought that the unemployment lens was interesting because it’s indicative of larger problems,” said Beckett-Herbert. “It’s important to remember that it’s not zero-sum. We can care about the well-being of women and girls and also acknowledge that young men are struggling, too. Those don’t have to be at odds.”

“Lie down and rot”

Beckett-Herbert and her advisor/co-author, McGill University sociologist Eran Shor, chose the incels.is platform as a data source for their study due to its ease of public access and relatively high traffic, with nearly 20,000 members. The pair used Python code to scrape 100 pages, amounting to around 10,000 discussion threads between October and December 2022. A pilot study revealed 10 keywords that appeared most frequently in those threads: “study,” “school,” “NEET,” “job,” “work,” “money,” “career,” “wage,” “employ,” and “rot.” (“They use the phrase ‘lie down and rot’ a lot,” said Beckett-Herbert.)

This allowed Beckett-Herbert and Shor to narrow their sample down to 516 threads with titles containing those keywords. They randomly selected a subset of 171 discussion threads for further study. That analysis yielded four main themes that dominated the discussion threads: political/ideological arguments about being NEET; boundary policing; perceived discrimination; and bullying and marginalization.
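The filtering and sampling steps are simple enough to reproduce in outline. Here is a minimal Python sketch of that part of the pipeline, assuming the thread titles have already been scraped into a list; the authors’ actual code isn’t public, so the function names and sample titles below are hypothetical:

```python
import random

# The ten keywords the pilot study found most frequent in these threads.
KEYWORDS = ["study", "school", "neet", "job", "work",
            "money", "career", "wage", "employ", "rot"]

def filter_threads(titles):
    """Keep only threads whose title contains at least one keyword."""
    return [t for t in titles if any(k in t.lower() for k in KEYWORDS)]

# Hypothetical input; in the study, roughly 10,000 scraped titles went in
# and 516 keyword-matching threads came out.
all_titles = ["NEET and proud", "Gaming thread", "Thoughts on school"]
matching = filter_threads(all_titles)

# The authors then randomly selected 171 of the matching threads for coding.
random.seed(42)  # fixed seed so the sample is reproducible
sample = random.sample(matching, k=min(171, len(matching)))
```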

Roughly one-quarter of the total comments consisted of political or ideological arguments promoting being NEET, with most commenters advocating minimizing one’s contributions to society as much as possible. They suggested going on welfare, for instance, to “take back” from society, or declared they should be exempt from paying any taxes, as “compensation for our suffering.” About 25 percent—a vocal minority—pushed back on glorifying the NEET lifestyle and offered concrete suggestions for self-improvement. (“Go outside and try at least,” one user commented.)

Such pushback often led to boundary policing. Those who do pursue jobs or education run the risk of being dubbed “fakecels” and becoming alienated from the rest of the incel community. (“Don’t work for a society that hates you,” one user commented.) “There’s a lot of social psychological research on groupthink and group polarization that is relevant here,” said Beckett-Herbert. “A lot of these young men may not have friends in their real life. This community is often their one source of social connection. So the incel ideology becomes core to their identity: ‘I’m part of this community, and we don’t work. We are subhumans.'”

There were also frequent laments about being discriminated against for not being attractive (“lookism”), both romantically and professionally, as well as deep resentment of women’s increased presence in the workplace, deemed a threat to men’s own success. “They love to cherry-pick all these findings from psychology research [to support their position],” said Beckett-Herbert. For instance, “There is evidence that men who are short or not conventionally attractive are discriminated against in hiring. But there’s also a lot of evidence suggesting that this actually affects women more. Women who are overweight face a greater bias against them in hiring than men do, for example.”

Beckett-Herbert and Shor also found that about 15 percent of the comments in their sample concerned users’ experiences being harassed or bullied (usually by other men), their mental health challenges (anxiety, depression), and feeling estranged or ostracized at school or work—experiences that cemented their reluctance to work or engage in education or vocational training.

Many of these users also mentioned being autistic, in keeping with prior research showing a relatively high share of people with autism in incel communities. The authors were careful to clarify, however, that most people with autism “are not violent or hateful, nor do they identify as incels or hold explicitly misogynistic views,” they wrote. “Rather, autism, when combined with other mental health issues such as depression, anxiety, and hopelessness, may make young men more vulnerable to incel ideologies.”

There are always caveats. In this case, the study was limited to a single incel forum, which might not be broadly representative of similar discussions on other platforms. And there could be a bit of selection bias at play: not every forum member actively participates in discussion threads (lurkers), and non-NEET incels might be less likely to do so, either because they have less free time or because they don’t wish to be dismissed as “fakecels.” However, Beckett-Herbert and Shor note that their findings are consistent with previous studies suggesting there are a disproportionately large number of NEETs within the incel community.

A pound of prevention

Is effective intervention even possible for members of the incel community, given their online echo chamber? Beckett-Herbert acknowledges that it is very difficult to break through to such people. “De-radicalization is a noble, worthy line of research,” she said. “But the existing evidence from that field of study suggests that prevention is easier and more effective than trying to pull these people out once they’re already in.” Potential strategies might include fostering better digital and media literacy, i.e., teaching kids to be cognizant of the content they’re consuming online. Exposure time is another key issue.

“A lot of these young people don’t have healthy outlets that are not in the digital world,” said Beckett-Herbert. “They come home from school and spend hours and hours online. They’re lonely and isolated from real-world communities and structures. Some of these harmful ideologies might be downstream of these larger root causes. How can we help boys do better in school, feel better prepared for the labor market? How can we help them make more friends? How can we get them involved in real-world activities that will diminish their time spent online? I think that that can go a long way. Just condemning them or banning their spaces—that’s not a good long-term solution.”

While there are multiple well-publicized instances of self-identified incels committing violent acts—most notably Elliot Rodger, who killed six people in 2014—Beckett-Herbert emphasizes not losing sight of incels’ fundamental humanity. “We focus a lot on the misogyny, the potential for violence against women, and that is so important,” she said. “You will not hear me saying we should not focus on that. But we also should note that statistically, an incel is much more likely to commit suicide or be violent towards themselves than they are toward someone else. You can both condemn their ideology and find it abhorrent and also remember that we need to have empathy for these people.”

Many people—women especially—might find that a tall order, and Beckett-Herbert understands that reluctance. “I do understand people’s hesitancy to empathize with them, because it feels like you’re giving credence to their rhetoric,” she said. “But at the end of the day, they are human, and a lot of them are really struggling, marginalized people coming from pretty sad backgrounds. When you peruse their online world, it’s the most horrifying, angering misogyny right next to some of the saddest mental health, suicidal, low self-esteem stuff you’ve ever seen. I think humanizing them and having empathy is going to be foundational to any intervention efforts to reintegrate them. But it’s something I wrestle with a lot.”




New twist on marshmallow test shows power of a promise

There have also been several studies examining the effects of social interdependence and similar social contexts on children’s ability to delay gratification, using variations of the marshmallow test paradigm. For instance, in 2020, a team of German researchers adapted the classic experimental setup using Oreos and vanilla cookies with German and Kenyan schoolchildren, respectively. If both children waited to eat their treat, they received a second cookie as a reward; if one did not wait, neither child received a second cookie. They found that the kids were more likely to delay gratification when they depended on each other, compared to the standard marshmallow test.

An online paradigm

Rebecca Koomen, a psychologist now at the University of Manchester, co-authored the 2020 study as well as this latest one, which sought to build on those findings. Koomen et al. structured their experiments similarly, this time recruiting 66 UK children, ages 5 to 6, as subjects. They focused on how promising a partner not to eat a favorite treat could inspire sufficient trust to delay gratification, compared to the social risk of one or both partners breaking that promise. Any parent could tell you that children of this age are really big on the importance of promises, and science largely concurs; a promise has been shown to enhance interdependent cooperation in this age group.

Koomen and her Manchester colleagues added an extra twist: They conducted their version of the marshmallow test online to test the effectiveness compared to lab-based versions of the experiment. (Prior results from similar online studies have been mixed.) “Given face-to-face testing restrictions during the COVID pandemic, this, to our knowledge, represents the first cooperative marshmallow study to be conducted online, thereby adding to the growing body of literature concerning the validity of remote testing methods,” they wrote.

The type of treat was chosen by each child’s parents, ensuring it was a favorite: chocolate, candy, biscuits, and marshmallows, mostly, although three kids loved potato chips, fruit, and nuts, respectively. Parents were asked to set up the experiment in a quiet room with minimal potential distractions, outfitted with a webcam to monitor the experiment. Each child was shown a video of a “confederate child” who either clearly promised not to eat the treat or more ambiguously suggested they might succumb and eat their treat. (The confederate child refrained from eating the treat in both conditions, although the participant child did not know that.)

New twist on marshmallow test shows power of a promise Read More »


How the language of job postings can attract rule-bending narcissists

Why it matters

Companies write job postings carefully in hopes of attracting the ideal candidate. However, they may unknowingly attract and select narcissistic candidates whose goals and ethics might not align with a company’s values or long-term success. Research shows that narcissistic employees are more likely to behave unethically, potentially leading to legal consequences.

While narcissistic traits can lead to negative outcomes, we aren’t saying that companies should avoid attracting narcissistic applicants altogether. Consider a company hiring a salesperson. A firm can benefit from a salesperson who is persuasive, who “thinks outside the box,” and who is “results-oriented.” In contrast, a company hiring an accountant or compliance officer would likely benefit from someone who “thinks methodically” and “communicates in a straightforward and accurate manner.”

Bending the rules is of particular concern in accounting. A significant amount of research examines how accounting managers sometimes bend rules or massage the numbers to achieve earnings targets. This “earnings management” can misrepresent the company’s true financial position.

In fact, my co-author Nick Seybert is currently working on a paper whose data suggests rule-bender language in accounting job postings predicts rule-bending in financial reporting.

Our current findings shed light on the importance of carefully crafting job posting language. Recruiting professionals may instinctively use rule-bender language to try to attract someone who seems like a good fit. If companies are concerned about hiring narcissists, they may want to clearly communicate their ethical values and needs while crafting a job posting, or avoid rule-bender language entirely.

What still isn’t known

While we find that professional recruiters are using language that attracts narcissists, it is unclear whether this is intentional.

Additionally, we are unsure what really drives rule-bending in a company. Rule-bending could happen due to attracting and hiring more narcissistic candidates, or it could be because of a company’s culture—or a combination of both.

The Research Brief is a short take on interesting academic work.

Jonathan Gay is Assistant Professor of Accountancy at the University of Mississippi.

This article is republished from The Conversation under a Creative Commons license. Read the original article.



Do these dual images say anything about your personality?

There’s little that Internet denizens love more than a snazzy personality test—cat videos, maybe, or perpetual outrage. One trend that has gained popularity over the last several years is personality quizzes based on so-called ambiguous images—in which one sees either a young girl or an old man, for instance, or a skull or a little girl. It’s possible to perceive both images by shifting one’s perspective, but it’s the image one sees first that is said to indicate specific personality traits. According to one such quiz, seeing the young girl first means you are optimistic and a bit impulsive, while seeing the old man first means you are honest, faithful, and goal-oriented.

But is there any actual science to back up the current fad? There is not, according to a paper published in the journal PeerJ, whose authors declare these kinds of personality quizzes to be a new kind of psychological myth. That said, they did find a couple of intriguing, statistically significant correlations they believe warrant further research.

In 1892, a German humor magazine published the earliest known version of the “rabbit-duck illusion,” in which one can see either a rabbit or a duck, depending on one’s perspective—i.e., multistable perception. There have been many more such images produced since then, all of which create ambiguity by exploiting certain peculiarities of the human visual system, such as playing with illusory contours and how we perceive edges.

Such images have long fascinated scientists and philosophers because they seem to represent different ways of seeing. So naturally there is a substantial body of research drawing parallels between such images and various sociological, biological, or psychological characteristics.

For instance, a 2010 study examined BBC archival data on the duck-rabbit illusion from the 1950s and found that men see the duck more often than women, while older people were more likely to see the rabbit. A 2018 study of the “younger-older woman” ambiguous image asked participants to estimate the age of the woman they saw in the image. Participants over 30 gave higher estimates than younger ones. This was confirmed by a 2021 study, although that study also found no correlation between participants’ age and whether they were more likely to see the older or younger woman in the image.



Heroes, villains, and childhood trauma in the MCU and DCU

They also limited their study to Marvel and DC characters depicted in major films, rather than including storylines from spinoff TV series. So Wanda Maximoff/The Scarlet Witch was not included since much of her traumatic backstory appeared in the series WandaVision. Furthermore, “We omitted gathering more characters from comic books in both Marvel and DC universes, due to their inconsistency in character development,” the authors wrote. “Comic book storylines often feature alternative plot lines, character arcs, and multiverse outcomes. The storytelling makes comic book characters highly inconsistent and challenging to score.”

With great power…

They ended up watching 33 films, with a total runtime of 77 hours and 5 minutes. They chose 19 male characters, eight female characters, and one gender-fluid character (Loki) as “subjects” for their study, applying the ACE questionnaire to their childhoods as portrayed in the films.

The results: “We found no statistically significant differences in ACE scores between heroes and villains, Marvel and DC characters, or men and women,” said Jackson. “This means that characters who were portrayed as having difficult childhoods were not more likely to be villains. This study somewhat refutes the idea that villains are a product of their experiences. Based on the films we watched, people chose to be heroes and that was what made the difference—not their experiences.”

Notably, Black Widow had the highest ACE score (eight) and yet still became an Avenger, though the authors acknowledge that the character did some bad things before then and famously wanted to wipe out the “red” in her ledger. She “represents resilience of characters who have experienced trauma,” the authors wrote, as well as demonstrating that “socio-ecological resilience, including access to social relationships and supportive communities, can play a mitigating role in the effect of ACEs.” The Joker, by contrast, scored a six and “wreaked havoc across Gotham City.”



AI chatbots might be better at swaying conspiracy theorists than humans

Out of the rabbit hole

Co-author Gordon Pennycook: “The work overturns a lot of how we thought about conspiracies.”


A woman wearing a sweatshirt for the QAnon conspiracy theory on October 11, 2020, in Ronkonkoma, New York.

Credit: Stephanie Keith | Getty Images

Belief in conspiracy theories is rampant, particularly in the US, where some estimates suggest as much as 50 percent of the population believes in at least one outlandish claim. And those beliefs are notoriously difficult to debunk. Challenge a committed conspiracy theorist with facts and evidence, and they’ll usually just double down—a phenomenon psychologists usually attribute to motivated reasoning, i.e., a biased way of processing information.

A new paper published in the journal Science is challenging that conventional wisdom, however. Experiments in which an AI chatbot engaged in conversations with people who believed at least one conspiracy theory showed that the interaction significantly reduced the strength of those beliefs, even two months later. The secret to its success: the chatbot, with its access to vast amounts of information across an enormous range of topics, could precisely tailor its counterarguments to each individual.

“These are some of the most fascinating results I’ve ever seen,” co-author Gordon Pennycook, a psychologist at Cornell University, said during a media briefing. “The work overturns a lot of how we thought about conspiracies, that they’re the result of various psychological motives and needs. [Participants] were remarkably responsive to evidence. There’s been a lot of ink spilled about being in a post-truth world. It’s really validating to know that evidence does matter. We can act in a more adaptive way using this new technology to get good evidence in front of people that is specifically relevant to what they think, so it’s a much more powerful approach.”

When confronted with facts that challenge a deeply entrenched belief, people will often seek to preserve it rather than update their priors (in Bayesian-speak) in light of the new evidence. So there has been a good deal of pessimism lately about ever reaching those who have plunged deep down the rabbit hole of conspiracy theories, which are notoriously persistent and “pose a serious threat to democratic societies,” per the authors. Pennycook and his fellow co-authors devised an alternative explanation for that stubborn persistence of belief.

Bespoke counter-arguments

The issue is that “conspiracy theories just vary a lot from person to person,” said co-author Thomas Costello, a psychologist at American University who is also affiliated with MIT. “They’re quite heterogeneous. People believe a wide range of them and the specific evidence that people use to support even a single conspiracy may differ from one person to another. So debunking attempts where you try to argue broadly against a conspiracy theory are not going to be effective because people have different versions of that conspiracy in their heads.”

By contrast, an AI chatbot would be able to tailor debunking efforts to those different versions of a conspiracy. So in theory a chatbot might prove more effective in swaying someone from their pet conspiracy theory.

To test their hypothesis, the team conducted a series of experiments with 2,190 participants who believed in one or more conspiracy theories. The participants engaged in several personal “conversations” with a large language model (GPT-4 Turbo) in which they shared their pet conspiracy theory and the evidence they felt supported that belief. The LLM would respond by offering factual and evidence-based counterarguments tailored to the individual participant. GPT-4 Turbo’s responses were professionally fact-checked, which showed that 99.2 percent of the claims it made were true, with just 0.8 percent being labeled misleading, and zero as false.


Screenshot of the chatbot opening page asking questions to prepare for a conversation.

Credit: Thomas H. Costello

Participants first answered a series of open-ended questions about the conspiracy theories they strongly believed and the evidence they relied upon to support those beliefs. The AI then produced a single-sentence summary of each belief, for example, “9/11 was an inside job because X, Y, and Z.” Participants rated the accuracy of that statement in terms of their own beliefs and then filled out a questionnaire about other conspiracies, their attitude toward trusted experts, AI, other people in society, and so forth.

Then it was time for the one-on-one dialogues with the chatbot, which the team programmed to be as persuasive as possible. The chatbot had also been fed the open-ended responses of the participants, which made it better able to tailor its counterarguments to each individual. For example, if someone thought 9/11 was an inside job and cited as evidence the fact that jet fuel doesn’t burn hot enough to melt steel, the chatbot might counter with, say, the NIST report showing that steel loses its strength at much lower temperatures, sufficient to weaken the towers’ structures so that they collapsed. Someone who thought 9/11 was an inside job and cited demolitions as evidence would get a different response tailored to that.
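Mechanically, this kind of tailored debunking is straightforward to sketch. Below is a minimal, illustrative Python loop in the spirit of the study’s setup, using the OpenAI chat API; the study’s actual prompts and code aren’t reproduced here, so the prompt wording and function names are assumptions:

```python
from openai import OpenAI

client = OpenAI()  # assumes an API key is configured in the environment

def make_system_prompt(belief_summary: str, stated_evidence: str) -> str:
    # Hypothetical prompt; the study's real instructions were more elaborate.
    return (
        "You are a persuasive but strictly factual assistant. "
        f"The user believes: {belief_summary} "
        f"Their stated evidence: {stated_evidence} "
        "Respond with accurate, evidence-based counterarguments "
        "tailored specifically to that evidence."
    )

def debunking_turn(history: list, user_message: str) -> str:
    """Send one user message and append the model's tailored reply."""
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-4-turbo",  # the study used GPT-4 Turbo
        messages=history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

# The participant's own open-ended answers seed the system prompt, which is
# what lets the model address each person's specific version of a conspiracy.
history = [{"role": "system", "content": make_system_prompt(
    "9/11 was an inside job.",
    "Jet fuel doesn't burn hot enough to melt steel.")}]
print(debunking_turn(history, "Explain why the towers really fell."))
```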

Participants then answered the same set of questions after their dialogues with the chatbot, which lasted about eight minutes on average. Costello et al. found that these targeted dialogues resulted in a 20 percent decrease in the participants’ misinformed beliefs—a reduction that persisted even two months later when participants were evaluated again.

As Bence Bago (Tilburg University) and Jean-Francois Bonnefon (CNRS, Toulouse, France) noted in an accompanying perspective, this is a substantial effect compared to the 1 to 6 percent drop in beliefs achieved by other interventions. They also deemed the persistence of the effect noteworthy, while cautioning that two months is “insufficient to completely eliminate misinformed conspiracy beliefs.”



Study: Playing Dungeons & Dragons helps autistic players in social interactions

We can be heroes

“I can make a character quite different from how I interact with people in real life.”


Researchers say that Dungeons & Dragons can give autistic players a way to engage in low-risk social interactions.

Since its introduction in the 1970s, Dungeons & Dragons has become one of the most influential tabletop role-playing games (TRPGs) in popular culture, featuring heavily in Stranger Things, for example, and spawning a blockbuster movie released last year. Over the last decade or so, researchers have turned their focus more heavily to the ways in which D&D and other TRPGs can help people with autism form healthy social connections, in part because the gaming environment offers clear rules around social interactions. According to the authors of a new paper published in the journal Autism, D&D helped boost the confidence of players with autism, giving them a strong sense of kinship or belonging, among other benefits.

“There are many myths and misconceptions about autism, with some of the biggest suggesting that those with it aren’t socially motivated, or don’t have any imagination,” said co-author Gray Atherton, a psychologist at the University of Plymouth. “Dungeons & Dragons goes against all that, centering around working together in a team, all of which takes place in a completely imaginary environment. Those taking part in our study saw the game as a breath of fresh air, a chance to take on a different persona and share experiences outside of an often challenging reality. That sense of escapism made them feel incredibly comfortable, and many of them said they were now trying to apply aspects of it in their daily lives.”

Prior research has shown that autistic people are more likely to feel lonely, have smaller social networks, and often experience anxiety in social settings. Their desire for social connection leads many to “mask” their neurodivergent traits in public for fear of being rejected as a result of social gaffes. “I think every autistic person has had multiple instances of social rejection and loss of relationships,” one of the study participants said when Atherton et al. interviewed them about their experiences. “You’ve done something wrong. You don’t know what it is. They don’t tell you, and you find out when you’ve been just, you know, left shunned in relationships, left out…. It’s traumatic.”

TRPGs like D&D can serve as a social lubricant for autistic players, according to a year-long study published earlier this year co-authored by Atherton, because there is less uncertainty around how to behave in-game—unlike the plethora of unwritten social rules that make navigating social settings so anxiety-inducing. Such games immerse players in a fantastical world where they create their characters with unique backstories, strengths, and weaknesses and cooperate with others to complete campaigns. A game master guides the overall campaign, but the game itself evolves according to the various choices different players make throughout.

A critical hit

Small wonder, then, that autistic people tend to make up a higher percentage of TRPG players than of the general populace. For this latest study, Atherton et al. wanted to specifically investigate how autistic players experience D&D when playing in groups with other autistic players. It’s essentially a case study with a small sample size—just eight participants—and qualitative in nature, since the post-play analysis focused on semistructured interviews with each player after the conclusion of the online campaign, the better to highlight their individual voices.

The players were recruited through social media advertisements within the D&D, Reddit, and Discord online communities; all had received an autism diagnosis from a medical professional. They were split into two groups of four players, with one of the researchers (who’s been playing D&D for years) acting as the dungeon master. The online sessions featured the Waterdeep: Dragon Heist campaign, which ran for six weeks, with sessions lasting between two and four hours (including breaks).

Participants spoke repeatedly about the positive benefits they received from playing D&D, providing a friendly environment that helped them relax about social pressures. “When you’re interacting with people over D&D, you’re more likely to understand what’s going on,” one participant said in their study interview. “That’s because the method you’ll use to interact is written out. You can see what you’re meant to do. There’s an actual sort of reference sheet for some social interactions.” That, in turn, helped foster a sense of belonging and kinship with their fellow players.

Participants also reported feeling emotionally invested and close to their characters, with some preferring to separate themselves from their character in order to explore other aspects of their personality or even an entirely new persona, thus broadening their perspectives. “I can make a character quite different from how I interact with people in real-life interactions,” one participant said. “It helps you put yourself in the other person’s perspective because you are technically entering a persona that is your character. You can then try to see how it feels to be in that interaction or in that scenario through another lens.” And some participants said they were able to “rewrite” their own personal stories outside the game by adopting some of their characters’ traits—a psychological phenomenon known as “bleed.”

“Autism comes with several stigmas, and that can lead to people being met with judgment or disdain,” said co-author Liam Cross, also of the University of Plymouth. “We also hear from lots of families who have concerns about whether teenagers with autism are spending too much time playing things like video games. A lot of the time that is because people have a picture in their minds of how a person with autism should behave, but that is based on neurotypical experiences. Our studies have shown that there are everyday games and hobbies that autistic people do not simply enjoy but also gain confidence and other skills from. It might not be the case for everyone with autism, but our work suggests it can enable people to have positive experiences that are worth celebrating.”

Autism, 2024. DOI: 10.1177/13623613241275260 (About DOIs).


the-nature-of-consciousness,-and-how-to-enjoy-it-while-you-can

The nature of consciousness, and how to enjoy it while you can

Remaining aware —

In his new book, Christof Koch views consciousness as a theorist and an aficionado.

A black background with multicolored swirls filling the shape of a human brain.

Unraveling how consciousness arises out of particular configurations of organic matter is a quest that has absorbed scientists and philosophers for ages. Now, with AI systems behaving in strikingly conscious-looking ways, it is more important than ever to get a handle on who and what is capable of experiencing life on a conscious level. As Christof Koch writes in Then I Am Myself the World, “That you are intimately acquainted with the way life feels is a brute fact about the world that cries out for an explanation.” His explanation—bounded by the limits of current research and framed through Koch’s preferred theory of consciousness—is what he eloquently attempts to deliver.

Koch, a physicist, neuroscientist, and former president of the Allen Institute for Brain Science, has spent his career hunting for the seat of consciousness, scouring the brain for physical footprints of subjective experience. It turns out that the posterior hot zone, a region in the back of the neocortex, is intricately connected to self-awareness and experiences of sound, sight, and touch. Dense networks of neocortical neurons in this area connect in a looped configuration; output signals feed back into input neurons, allowing the posterior hot zone to influence its own behavior. And herein, Koch claims, lies the key to consciousness.

In the hot zone

According to integrated information theory (IIT)—which Koch strongly favors over a multitude of contending theories of consciousness—the Rosetta Stone of subjective experience is the ability of a system to influence itself: to use its past state to affect its present state and its present state to influence its future state.

Billions of neurons exist in the cerebellum, but they are wired “with nonoverlapping inputs and outputs … in a feed-forward manner,” writes Koch. He argues that a structure designed in this way, with limited influence over its own future, is not likely to produce consciousness. Similarly, the prefrontal cortex might allow us to perform complex calculations and exhibit advanced reasoning skills, but such traits do not equate to a capacity to experience life. It is the “reverberatory, self-sustaining excitatory loops prevalent in the neocortex,” Koch tells us, that set the stage for subjective experience to arise.

This declaration matches the experimental evidence Koch presents in Chapter 6: Injuries to the cerebellum do not eliminate a person’s awareness of themselves in relation to the outside world. Consciousness remains, even in a person who can no longer move their body with ease. Yet injuries to the posterior hot zone within the neocortex significantly change a person’s perception of auditory, visual, and tactile information, altering what they subjectively experience and how they describe these experiences to themselves and others.

Does this mean that artificial computer systems, wired appropriately, can be conscious? Not necessarily, Koch says. This might one day be possible with the advent of new technology, but we are not there yet. He writes: “The high connectivity [in a human brain] is very different from that found in the central processing unit of any digital computer, where one transistor typically connects to a handful of other transistors.” For the foreseeable future, AI systems will remain unconscious despite appearances to the contrary.

Koch’s eloquent overview of IIT and the melodic ease of his neuroscientific explanations are undeniably compelling, even for die-hard physicalists who flinch at terms like “self-influence.” His impeccably written descriptions are peppered with references to philosophers, writers, musicians, and psychologists—Albert Camus, Viktor Frankl, Richard Wagner, and Lewis Carroll all make appearances, adding richness and relatability to the narrative. For example, as an introduction to phenomenology—the way an experience feels or appears—he aptly quotes Eminem: “I can’t tell you what it really is, I can only tell you what it feels like.”


lawsuit-opens-research-misconduct-report-that-may-get-a-harvard-prof-fired

Lawsuit opens research misconduct report that may get a Harvard prof fired

Harvard’s got a lawsuit on its hands. Credit: Glowimages

Accusations of research misconduct often trigger extensive investigations, typically performed by the institution where the misconduct allegedly took place. These investigations are internal employment matters, and false accusations have the potential to needlessly wreck someone’s career. As a result, most of these investigations are kept completely confidential, even after their completion.

But all the details of a misconduct investigation performed by Harvard University became public this week through an unusual route. The accused professor, Francesca Gino, had filed a multimillion-dollar lawsuit targeting both Harvard and the team of external researchers who had raised the allegations. Harvard submitted its investigator’s report as part of its attempt to have part of the suit dismissed, and the judge overseeing the case made it public.

We covered one of the studies at issue at the time of its publication. It has since been retracted, and we’ll be updating our original coverage accordingly.

Misconduct allegations lead to lawsuit

Gino, currently on administrative leave, had been faculty at Harvard Business School, where she did research on human behavior. One of her more prominent studies (the one we covered) suggested that signing a form before completing it caused people to fill in its contents more accurately than if they filled out the form first and then signed it.

Oddly, for a paper about honesty, it had a number of issues. Some of its original authors had attempted to go back and expand on the paper but found they were unable to replicate the results. That seems to have prompted a group of behavioral researchers who write at the blog Data Colada to look more carefully at the results that didn’t replicate, at which point they found indications that the data was fabricated. That got the paper retracted.

Gino was not implicated in the fabrication of that data. But the paper had drawn the attention of the Data Colada team (Uri Simonsohn, Leif Nelson, and Joe Simmons), who found indications of entirely separate problems in other data from the paper—data that did come from her work. That prompted them to examine additional papers from Gino, and they came up with evidence of potential research fraud in four of them.

Before posting their evidence on the blog, however, the Data Colada team provided it to Harvard, which launched its own investigation. Their posts came out after Harvard’s investigation concluded that Gino’s research had serious issues, and she was placed on administrative leave as the university looked into revoking her tenure. Harvard also alerted the journals that had published the three yet-to-be-retracted papers about the issues.

Things might have ended there, except that Gino filed a defamation lawsuit against Harvard and the Data Colada team, claiming they “worked together to destroy my career and reputation despite admitting they have no evidence proving their allegations.” As part of the $25 million suit, she also accused Harvard of mishandling its investigation and not following proper procedures.
