Meet the 2025 Ig Nobel Prize winners


The annual award ceremony features miniature operas, scientific demos, and the 24/7 lectures.

The Ig Nobel Prizes honor “achievements that first make people laugh and then make them think.” Credit: Aurich Lawson / Getty Images

Does alcohol enhance one’s foreign language fluency? Do West African lizards have a preferred pizza topping? And can painting cows with zebra stripes help repel biting flies? These and other unusual research questions were honored tonight in a virtual ceremony to announce the 2025 recipients of the annual Ig Nobel Prizes. Yes, it’s that time of year again, when the serious and the silly converge—for science.

Established in 1991, the Ig Nobels are a good-natured parody of the Nobel Prizes; they honor “achievements that first make people laugh and then make them think.” The unapologetically campy awards ceremony features miniature operas, scientific demos, and the 24/7 lectures, in which experts must explain their work twice: once in 24 seconds, and again in just seven words.

Acceptance speeches are limited to 60 seconds. And as the motto implies, the research being honored might seem ridiculous at first glance, but that doesn’t mean it’s devoid of scientific merit. In the weeks following the ceremony, the winners will also give free public talks, which will be posted on the Improbable Research website.

Without further ado, here are the winners of the 2025 Ig Nobel Prizes.

Biology

Example of the area of legs and body used to count biting flies on cows.

Credit: Tomoki Kojima et al., 2019

Citation: Tomoki Kojima, Kazato Oishi, Yasushi Matsubara, Yuki Uchiyama, Yoshihiko Fukushima, Naoto Aoki, Say Sato, Tatsuaki Masuda, Junichi Ueda, Hiroyuki Hirooka, and Katsutoshi Kino, for their experiments to learn whether cows painted with zebra-like striping can avoid being bitten by flies.

Any dairy farmer can tell you that biting flies are a pestilent scourge for cattle herds, which is why one so often sees cows throwing their heads, stamping their feet, flicking their tails, and twitching their skin—desperately trying to shake off the nasty creatures. There’s an economic cost as well, since the constant harassment causes cattle to graze and feed less, bed down for shorter periods, and bunch together, which increases heat stress and the risk of injury. The result is lower milk yields from dairy cows and lower beef yields from feedlot cattle.

You know who isn’t much bothered by biting flies? The zebra. Scientists have long debated the function of the zebra’s distinctive black-and-white striped pattern. Is it for camouflage? Confusing potential predators? Or is it to repel those pesky flies? Tomoki Kojima et al. decided to put the latter hypothesis to the test, painting zebra stripes on six pregnant Japanese black cows at the Aichi Agricultural Research Center in Japan. They used water-borne lacquers that washed away after a few days, so the cows could take turns being in three different groups: zebra stripes, just black stripes, or no stripes (as a control).

The results: The zebra stripes significantly decreased both the number of biting flies on the cattle and the animals’ fly-repelling behaviors, compared to cows with black stripes or no stripes. The one exception was skin twitching—perhaps because it is the least energy-intensive of those behaviors. Why does it work? The authors suggest it might have something to do with modulated brightness or polarized light confusing the insects’ motion detection system, which the flies use to control their approach when landing on a surface. But that’s a topic for further study.

Chemistry

Freshly cooked frozen blintzes in a non-stick frying pan coated with Teflon

Credit: Andrevan/CC BY-SA 2.5

Citation: Rotem Naftalovich, Daniel Naftalovich, and Frank Greenway, for experiments to test whether eating Teflon [a form of plastic more formally called “polytetrafluoroethylene”] is a good way to increase food volume and hence satiety without increasing calorie content.

Diet sodas and other zero-calorie drinks are a mainstay of the modern diet, thanks to the development of artificial sweeteners whose molecules can’t be metabolized by the human body. The authors of this paper are intrigued by the notion of zero-calorie foods, which they believe could be achieved by increasing the satisfying volume and mass of food without increasing the calories. And they have just the additive for that purpose: polytetrafluoroethylene (PTFE), more commonly known as Teflon.

Yes, the stuff they use on nonstick cookware. They insist that Teflon is inert, heat-resistant, impervious to stomach acid, tasteless, cost-effective, and available in handy powder form for easy mixing into food. They recommend a ratio of three parts food to one part Teflon powder.
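The arithmetic behind the claim is simple dilution. Here is a minimal sketch, assuming the 3:1 ratio is by mass (the citation doesn’t specify) and that PTFE adds bulk but no metabolizable calories; the function name is my own:

```python
def diluted_calorie_density(kcal_per_gram: float,
                            food_parts: float = 3.0,
                            filler_parts: float = 1.0) -> float:
    """Calorie density after mixing food with a zero-calorie filler.

    Assumes the proposed 3:1 food-to-PTFE ratio is by mass and that
    PTFE contributes mass but no metabolizable calories.
    """
    total_parts = food_parts + filler_parts
    return kcal_per_gram * food_parts / total_parts

# A 4 kcal/g snack drops to 3 kcal/g: the same eaten mass delivers
# 25 percent fewer calories.
print(diluted_calorie_density(4.0))  # 3.0
```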

The authors understand that to the average layperson, this is going to sound like a phenomenally bad idea—no thank you, I would prefer not to have powdered Teflon added to my food. So they spend many paragraphs citing all the scientific studies on the safety of Teflon—it didn’t hurt rats in feeding trials!—as well as the many applications for which it is already being used. These include Teflon-coated stirring rods used in labs and coatings on medical devices like bladder catheters and gynecological implants, as well as the catheters used for in vitro fertilization. And guys, you’ll be happy to know that Teflon doesn’t seem to affect sperm motility or viability. I suspect this will still be a hard sell in the consumer marketplace.

Physics

Cacio e pepe is an iconic pasta dish that is also frustratingly difficult to make

Credit: Simone Frau

Citation: Giacomo Bartolucci, Daniel Maria Busiello, Matteo Ciarchi, Alberto Corticelli, Ivan Di Terlizzi, Fabrizio Olmeda, Davide Revignas, and Vincenzo Maria Schimmenti, for discoveries about the physics of pasta sauce, especially the phase transition that can lead to clumping, which can be a cause of unpleasantness.

“Pasta alla cacio e pepe” is a simple dish: just tonnarelli pasta, pecorino cheese, and pepper. But its simplicity is deceptive. The dish is notoriously challenging to make because the sauce so easily forms unappetizing clumps with a texture more akin to stringy mozzarella than to the smooth, creamy coating it should be. As we reported in April, Italian physicists came to the rescue with a foolproof recipe, based on extensive experiments and published in the journal Physics of Fluids. The trick: use corn starch for the cheese and pepper sauce instead of relying on however much starch leaches into the boiling water as the pasta is cooked.

Traditionally, the chef will extract part of the water and starch solution—which is cooled to a suitable temperature to avoid clumping as the cheese proteins denature—and mix it with the cheese to make the sauce, adding the pepper last, right before serving. But the authors note that temperature is not the only factor that can lead to this dreaded “mozzarella phase.” If one tries to mix cheese and water without any starch, the clumping is more pronounced. There is less clumping with water containing a little starch, like water in which pasta has been cooked. And when one mixes the cheese with pasta water “risottata”—i.e., collected and heated in a pan so enough water evaporates that there is a higher concentration of starch—there is almost no clumping.

The authors found that the correct starch ratio is between 2 and 3 percent of the cheese weight. Below that, you get the clumping phase separation; above it, the sauce becomes stiff and unappetizing as it cools. Pasta water alone contains too little starch. Using pasta water “risottata” may concentrate the starch, but the chef has less control over the precise amount. So the authors recommend simply dissolving 4 grams of powdered potato or corn starch in 40 grams of water, heating it gently until it thickens, and combining that gel with the cheese. They also recommend toasting the black pepper briefly before adding it to the mixture to enhance its flavors and aromas.
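The quantities scale linearly with the amount of cheese, so the recipe is easy to express as a calculation. Here is a minimal sketch; the 2.5 percent midpoint and the helper’s name are my own choices, not the authors’:

```python
def sauce_quantities(cheese_grams: float,
                     starch_fraction: float = 0.025) -> tuple[float, float]:
    """Grams of starch and water for the sauce gel.

    Uses the paper's 2-3 percent starch-to-cheese ratio (midpoint of
    2.5 percent) and the 1:10 starch-to-water proportion implied by the
    article's worked example (4 g starch in 40 g water).
    """
    starch = cheese_grams * starch_fraction
    water = starch * 10
    return round(starch, 1), round(water, 1)

# 160 g of pecorino calls for 4 g of starch dissolved in 40 g of water,
# matching the quantities quoted above.
print(sauce_quantities(160))  # (4.0, 40.0)
```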

Engineering Design

Experimental set-up (a) cardboard enclosure (b) UV-C tube light (c) SMPS

Credit: Vikash Kumar and Sarthak Mittal

Citation: Vikash Kumar and Sarthak Mittal, for analyzing, from an engineering design perspective, “how foul-smelling shoes affects the good experience of using a shoe-rack.”

Shoe odor is a universal problem, even in India, according to the authors of this paper, who hail from Shiv Nadar University (SNU) in Uttar Pradesh. All that heat and humidity means people perspire profusely even during moderate physical activity. Add in a lack of proper ventilation and washing, and shoes become a breeding ground for the odor-causing bacterium Kytococcus sedentarius. Most Indians make use of shoe racks to store their footwear, and the odors can become quite intense in that closed environment.

Yet nobody has really studied the “smelly shoe” problem when it comes to shoe racks. Enter Kumar and Mittal, who conducted a pilot study with the help of 149 first-year SNU students. More than half reported feeling uncomfortable about their own or someone else’s smelly shoes, and 90 percent kept their shoes in a shoe rack. Common methods to combat the odor included washing the shoes and drying them in the sun; using spray deodorant; or sprinkling the shoes with an antibacterial powder. They were unaware of many current odor-combatting products on the market, such as tea tree and coconut oil solutions, thyme oil, or isopropyl alcohol.

Clearly, there is an opportunity to make a killing in the odor-resistant shoe rack market. So naturally Kumar and Mittal decided to design their own version. They opted to use bacteria-killing UV rays (via a UV-C tube light) as their built-in “odor eater,” testing their device on the shoes of several SNU athletes, “which had a very strong noticeable odor.” They concluded that an exposure time of two to three minutes was sufficient to kill the bacteria and get rid of the odor.

Aviation

Wing membranes (patagia) of Townsend's big-eared bat, Corynorhinus townsendii

Credit: Public domain

Citation: Francisco Sánchez, Mariana Melcón, Carmi Korine, and Berry Pinshow, for studying whether ingesting alcohol can impair bats’ ability to fly and also their ability to echolocate.

Nature is rife with naturally occurring ethanol, particularly from ripening fruit, which is in turn consumed by various microorganisms and animal species. There are rare instances of mammals, birds, and even insects consuming ethanol-rich fruit and becoming intoxicated, leaving those creatures more vulnerable to potential predators or more accident-prone due to impaired motor coordination. Sánchez et al. decided to look specifically at the effects of ethanol on Egyptian fruit bats, which have been shown to avoid high-ethanol fruit. The authors wondered if this might be because the bats wanted to avoid becoming inebriated.

They conducted their experiments on adult male fruit bats kept in an outdoor cage that served as a long flight corridor. The bats were given liquid food with varying amounts of ethanol and then released in the corridor, with the authors timing how long it took each bat to fly from one end to the other. A second experiment followed the same basic protocol, but this time the authors recorded the bats’ echolocation calls with an ultrasonic microphone. The results: The bats that received liquid food with the highest ethanol content took longer to fly the length of the corridor, evidence of impaired flight ability. The quality of those bats’ echolocation was also adversely affected, putting them at a higher risk of colliding with obstacles mid-flight.

Psychology

Narcissus (1597–99) by Caravaggio; the man in love with his own reflection

Credit: Public domain

Citation: Marcin Zajenkowski and Gilles Gignac, for investigating what happens when you tell narcissists—or anyone else—that they are intelligent.

Not all narcissists are created equal. There are vulnerable narcissists, who tend to be socially withdrawn, have low self-esteem, and are prone to negative emotions. And then there are grandiose narcissists, who exhibit social boldness and high self-esteem and are more likely to overestimate their own intelligence. The prevailing view is that this overconfidence stems from narcissism. The authors wanted to explore whether the effect might also run in reverse: whether believing one has superior intelligence, thanks to positive external feedback, can induce at least a temporary state of narcissism.

Zajenkowski et al. recruited 361 participants from Poland who were asked to rate their level of intelligence compared to other people; complete the Polish version of the Narcissistic Personality Inventory; and take an IQ test to compare their perceptions of their own intelligence with an objective measurement. The participants were then randomly assigned to one of two groups. One group received positive feedback—telling them they did indeed have a higher IQ than most people—while the other received negative feedback.

The results confirmed most of the researchers’ hypotheses. In general, participants gave lower estimates of their relative intelligence after completing the IQ test, which provided an objective check of sorts. But the type of feedback they received had a measurable impact. Positive feedback enhanced their feelings of uniqueness (a key aspect of grandiose narcissism). Those who received negative feedback rated their own intelligence as being lower, and that negative feedback had a larger effect than positive feedback. The authors concluded that external feedback helped shape the subjects’ perception of their own intelligence, regardless of the accuracy of that feedback.

Nutrition

Rainbow lizards eating ‘four cheese’ pizza at a seaside tourist resort in Togo.

Credit: Daniele Dendi et al., 2022

Citation: Daniele Dendi, Gabriel H. Segniagbeto, Roger Meek, and Luca Luiselli, for studying the extent to which a certain kind of lizard chooses to eat certain kinds of pizza.

Move over, Pizza Rat; here come the Pizza Lizards—rainbow lizards, to be precise, a species common in urban and suburban West Africa. The lizards primarily live off insects and arthropods, but their proximity to humans has led some to develop a more omnivorous approach to their foraging. Bread is a particular favorite. Case in point: One fine sunny day at a Togo seaside resort, the authors noticed a rainbow lizard stealing a tourist’s slice of four-cheese pizza and happily chowing down.

Naturally, they wanted to know if this was an isolated incident or whether the local rainbow lizards routinely feasted on pizza slices. And did the lizards have a preferred topping? Inquiring minds need to know. So they monitored the behavior of nine particular lizards, giving them the choice between a plate of four-cheese pizza and a plate of “four seasons” pizza, spaced about 10 meters apart.

It only took 15 minutes for the lizards to find the pizza and eat it, sometimes fighting over the remaining slices. But they only ate the four-cheese pizza. For the authors, this suggests there might be some form of chemical cues that attract them to the cheesy pizzas, or perhaps it’s easier for them to digest. I’d love to see how the lizards react to the widely derided Canadian bacon and pineapple pizza.

Pediatrics

Pumped breast milk in bottles

Citation: Julie Mennella and Gary Beauchamp, for studying what a nursing baby experiences when the baby’s mother eats garlic.

Mennella and Beauchamp designed their experiment to investigate two questions: whether the consumption of garlic altered the odor of a mother’s breast milk, and if so, whether those changes affected the behavior of nursing infants. (Garlic was chosen because it is known to produce off flavors in dairy cow milk and affect human body odor.) They recruited eight women who were exclusively breastfeeding their infants, taking samples of their breast milk over a period when the participants abstained from eating sulfurous foods (garlic, onion, asparagus), and more samples after the mothers consumed either a garlic capsule or a placebo.

The results: Mothers who ingested the garlic capsules produced milk with a perceptibly more intense odor, as evaluated by several adult panelists brought in to sniff the breast milk samples. The odor peaked in intensity two hours after ingestion and decreased thereafter, which is consistent with prior research on cows that ingested highly odorous feeds. As for the infants, those whose mothers ingested garlic attached to the breast for longer periods and sucked more when the milk smelled like garlic. This could be relevant to ongoing efforts to determine whether sensory experiences during breastfeeding can influence how readily infants accept new foods upon weaning, and perhaps even their later food preferences.

Literature

closeup of a hand with clubbed fingernails

Credit: William B. Bean

Citation: The late Dr. William B. Bean, for persistently recording and analyzing the rate of growth of one of his fingernails over a period of 35 years.

If you’re surprised to see a study on fingernail growth rates under the Literature category, it will all make sense once you read the flowery prose stylings of Dr. Bean. He really did keep detailed records of how fast his fingernails grew for 35 years, claiming in his final report that “the nail provides a slowly moving keratin kymograph that measures age on the inexorable abscissa of time.” He sprinkles his observations with ponderous references to medieval astrology, James Boswell, and Moby Dick, with a dash of curmudgeonly asides bemoaning the sterile modern medical teaching methods that permeate “the teeming mass of hope and pain, technical virtuosity, and depersonalization called a ‘health center.'”

So what did our pedantic doctor discover in those 35 years, not just studying his own nails, but meticulously reviewing all the available scientific literature? Well, for starters, the rate of fingernail growth diminishes as one ages; Bean noted that his growth rates remained steady early on, but “slowed down a trifle” over the last five years of his project. Nails grow faster in children than adults. A warm environment can also accelerate growth, as does biting one’s fingernails—perhaps, he suggests, because the biting stimulates blood flow to the area. And he debunks the folklore of hair and nails growing even after death: it’s just the retraction and contraction of the skin post-mortem that makes it seem like the nails are growing.

Peace

Citation: Fritz Renner, Inge Kersbergen, Matt Field, and Jessica Werthmann, for showing that drinking alcohol sometimes improves a person’s ability to speak in a foreign language.

Alcohol is well-known to have detrimental effects on what’s known in psychological circles as “executive functioning,” impacting things like working memory and inhibitory control. Speaking a foreign language also relies on executive functioning, so one might expect intoxication to impair fluency. Yet there’s a widespread belief among bilingual people that a little bit of alcohol actually improves their fluency in a foreign language. Renner et al. decided to investigate further.

They recruited 50 native German-speaking undergrad psychology students at Maastricht University in the Netherlands who were also fluent in Dutch. They were randomly divided into two groups. One group received an alcoholic drink (vodka with bitter lemon), and the other received water. Each participant consumed enough to be slightly intoxicated after 15 minutes, and then engaged in a discussion in Dutch with a native Dutch speaker. Afterward, they were asked to rate their self-perception of their skill at Dutch, with the Dutch speakers offering independent observer ratings.

The researchers were surprised to find that intoxication improved the participants’ Dutch fluency, based on the independent observer reports. (Self-evaluations were largely unaffected by intoxication levels.) One can’t simply attribute this to so-called “Dutch courage,” i.e., increased confidence associated with intoxication. Rather, the authors suggest that intoxication lowers language anxiety, thereby increasing one’s foreign language proficiency, although further research would be needed to support that hypothesis.


Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.


Conspiracy theorists don’t realize they’re on the fringe


Gordon Pennycook: “It might be one of the biggest false consensus effects that’s been observed.”

Credit: Aurich Lawson / Thinkstock

Belief in conspiracy theories is often attributed to some form of motivated reasoning: People want to believe a conspiracy because it reinforces their worldview, for example, or doing so meets some deep psychological need, like wanting to feel unique. However, it might also be driven by overconfidence in their own cognitive abilities, according to a paper published in the Personality and Social Psychology Bulletin. The authors were surprised to discover that not only are conspiracy theorists overconfident, they also don’t realize their beliefs are on the fringe, massively overestimating—by as much as a factor of four—how much other people agree with them.

“I was expecting the overconfidence finding,” co-author Gordon Pennycook, a psychologist at Cornell University, told Ars. “If you’ve talked to someone who believes conspiracies, it’s self-evident. I did not expect them to be so ready to state that people agree with them. I thought that they would overestimate, but I didn’t think that there’d be such a strong sense that they are in the majority. It might be one of the biggest false consensus effects that’s been observed.”

In 2015, Pennycook made headlines when he co-authored a paper demonstrating how certain people interpret “pseudo-profound bullshit” as deep observations. Pennycook et al. were interested in identifying individual differences between those who are susceptible to pseudo-profound BS and those who are not and thus looked at conspiracy beliefs, their degree of analytical thinking, religious beliefs, and so forth.

They presented participants with several randomly generated statements containing “profound” buzzwords—grammatically correct but logically nonsensical—along with a 2014 tweet by Deepak Chopra that met the same criteria. They found that the less skeptical participants were less logical and analytical in their thinking and hence much more likely to rate these nonsensical statements as deeply profound. That study was a bit controversial, in part for what was perceived to be its condescending tone, along with questions about its methodology. But it did snag Pennycook et al. a 2016 Ig Nobel Prize.

Last year we reported on another Pennycook study, presenting results from experiments in which an AI chatbot engaged in conversations with people who believed at least one conspiracy theory. That study showed that the AI interaction significantly reduced the strength of those beliefs, even two months later. The secret to its success: the chatbot, with its access to vast amounts of information across an enormous range of topics, could precisely tailor its counterarguments to each individual. “The work overturns a lot of how we thought about conspiracies, that they’re the result of various psychological motives and needs,” Pennycook said at the time.

Miscalibrated from reality

Pennycook has been working on this new overconfidence study since 2018, perplexed by observations indicating that people who believe in conspiracies also seem to have a lot of faith in their cognitive abilities—contradicting prior research finding that conspiracists are generally more intuitive. To investigate, he and his co-authors conducted eight separate studies that involved over 4,000 US adults.

The assigned tasks were designed in such a way that participants’ actual performance and how they perceived their performance were unrelated. For example, in one experiment, they were asked to guess the subject of an image that was largely obscured. The subjects were then asked direct questions about their belief (or lack thereof) concerning several key conspiracy claims: the Apollo Moon landings were faked, for example, or that Princess Diana’s death wasn’t an accident. Four of the studies focused on testing how subjects perceived others’ beliefs.

The results showed a marked association between subjects’ tendency to be overconfident and belief in conspiracy theories. And while a majority of participants actually believed a given conspiracy claim just 12 percent of the time, believers thought their view was the majority one 93 percent of the time. This suggests that overconfidence is a primary driver of belief in conspiracies.
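To make those two numbers concrete, here is a minimal sketch of how one might compute them from survey responses. The toy data and column names are hypothetical illustrations, not the study’s actual materials:

```python
import pandas as pd

# Hypothetical responses: whether each person endorses a claim, and their
# estimate of the percentage of people who share their view.
df = pd.DataFrame({
    "claim":     ["moon_hoax", "moon_hoax", "diana", "diana"],
    "believes":  [True, False, True, False],
    "est_agree": [70, 55, 61, 80],
})

# Actual agreement: the share of respondents endorsing each claim.
actual_pct = df.groupby("claim")["believes"].mean() * 100

# False consensus: how often believers think their view is the majority one.
believers = df[df["believes"]]
pct_claiming_majority = (believers["est_agree"] > 50).mean() * 100

print(actual_pct)             # per-claim actual belief rates
print(pct_claiming_majority)  # percent of believers claiming majority status
```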

It’s not that believers in conspiracy theories are massively overconfident; there is no data on that, because the studies didn’t set out to quantify the degree of overconfidence, per Pennycook. Rather, “They’re overconfident, and they massively overestimate how much people agree with them,” he said.

Ars spoke with Pennycook to learn more.

Ars Technica: Why did you decide to investigate overconfidence as a contributing factor to believing conspiracies?

Gordon Pennycook: There’s a popular sense that people believe conspiracies because they’re dumb and don’t understand anything, they don’t care about the truth, and they’re motivated by believing things that make them feel good. Then there’s the academic side, where that idea molds into a set of theories about how needs and motivations drive belief in conspiracies. It’s not someone falling down the rabbit hole and getting exposed to misinformation or conspiratorial narratives. They’re strolling down: “I like it over here. This appeals to me and makes me feel good.”

Believing things that no one else agrees with makes you feel unique. Then there’s various things I think that are a little more legitimate: People join communities and there’s this sense of belongingness. How that drives core beliefs is different. Someone may stop believing but hang around in the community because they don’t want to lose their friends. Even with religion, people will go to church when they don’t really believe. So we distinguish beliefs from practice.

What we observed is that they do tend to strongly believe these conspiracies despite the fact that there’s counter evidence or a lot of people disagree. What would lead that to happen? It could be their needs and motivations, but it could also be that there’s something about the way that they think where it just doesn’t occur to them that they could be wrong about it. And that’s where overconfidence comes in.

Ars Technica: What makes this particular trait such a powerful driving force?

Gordon Pennycook: Overconfidence is one of the most important core underlying components, because if you’re overconfident, it stops you from really questioning whether the thing that you’re seeing is right or wrong, and whether you might be wrong about it. You have an almost moral purity of complete confidence that the thing you believe is true. You cannot even imagine what it’s like from somebody else’s perspective. You couldn’t imagine a world in which the things that you think are true could be false. Having overconfidence is that buffer that stops you from learning from other people. You end up not just going down the rabbit hole, you’re doing laps down there.

Overconfidence doesn’t have to be learned, parts of it could be genetic. It also doesn’t have to be maladaptive. It’s maladaptive when it comes to beliefs. But you want people to think that they will be successful when starting new businesses. A lot of them will fail, but you need some people in the population to take risks that they wouldn’t take if they were thinking about it in a more rational way. So it can be optimal at a population level, but maybe not at an individual level.

Ars Technica: Is this overconfidence related to the well-known Dunning-Kruger effect?

Gordon Pennycook: It’s because of Dunning-Kruger that we had to develop a new methodology to measure overconfidence, because the people who are the worst at a task are the worst at knowing that they’re the worst at the task. But that’s because the same things that you use to do the task are the things you use to assess how good you are at the task. So if you were to give someone a math test and they’re bad at math, they’ll appear overconfident. But if you give them a test of assessing humor and they’re good at that, they won’t appear overconfident. That’s about the task, not the person.

So we have tasks where people essentially have to guess, and it’s transparent. There’s no reason to think that you’re good at the task. In fact, people who think they’re better at the task are not better at it, they just think they are. They just have this underlying kind of sense that they can do things, they know things, and that’s the kind of thing that we’re trying to capture. It’s not specific to a domain. There are lots of reasons why you could be overconfident in a particular domain. But this is something that’s an actual trait that you carry into situations. So when you’re scrolling online and come up with these ideas about how the world works that don’t make any sense, it must be everybody else that’s wrong, not you.

Ars Technica: Overestimating how many people agree with them seems to be at odds with conspiracy theorists’ desire to be unique.  

Gordon Pennycook: In general, people who believe conspiracies often have contrary beliefs. We’re working with a population where coherence is not to be expected. They say that they’re in the majority, but it’s never a strong majority. They just don’t think that they’re in a minority when it comes to the belief. Take the case of the Sandy Hook conspiracy, where adherents believe it was a false flag operation. In one sample, 8 percent of people thought that this was true. That 8 percent thought 61 percent of people agreed with them.

So they’re way off. They really, really miscalibrated. But they don’t say 90 percent. It’s 60 percent, enough to be special, but not enough to be on the fringe where they actually are. I could have asked them to rank how smart they are relative to others, or how unique they thought their beliefs were, and they would’ve answered high on that. But those are kind of mushy self-concepts. When you ask a specific question that has an objectively correct answer in terms of the percent of people in the sample that agree with you, it’s not close.

Ars Technica: How does one even begin to combat this? Could last year’s AI study point the way?

Gordon Pennycook: The AI debunking effect works better for people who are less overconfident. In those experiments, very detailed, specific debunks had a much bigger effect than people expected. After eight minutes of conversation, a quarter of the people who believed the thing didn’t believe it anymore, but 75 percent still did. That’s a lot. And some of them, not only did they still believe it, they still believed it to the same degree. So no one’s cracked that. Getting any movement at all in the aggregate was a big win.

Here’s the problem. You can’t have a conversation with somebody who doesn’t want to have the conversation. In those studies, we’re paying people, but they still get out what they put into the conversation. If you don’t really respond or engage, then our AI is not going to give you good responses because it doesn’t know what you’re thinking. And if the person is not willing to think. … This is why overconfidence is such an overarching issue. The only alternative is some sort of propagandistic sit-them-downs with their eyes open and try to de-convert them. But you can’t really convert someone who doesn’t want to be converted. So I’m not sure that there is an answer. I think that’s just the way that humans are.

Personality and Social Psychology Bulletin, 2025. DOI: 10.1177/01461672251338358  (About DOIs).


Fanfic study challenges leading cultural evolution theory


Fanfic community craves familiarity much more than novelty—but reports greater enjoyment from novelty.

Credit: Aurich Lawson | Marvel

It’s widely accepted conventional wisdom that when it comes to creative works—TV shows, films, music, books—consumers crave an optimal balance between novelty and familiarity. What we choose to consume and share with others, in turn, drives cultural evolution.

But what if that conventional wisdom is wrong? An analysis based on data from a massive online fan fiction (fanfic) archive contradicts this so-called “balance theory,” according to a paper published in the journal Humanities and Social Sciences Communications. The fanfic community seems to overwhelmingly prefer more of the same, consistently choosing familiarity over novelty; however, they reported greater overall enjoyment when they took a chance and read something more novel. In short: “Sameness entices, but novelty enchants.”

Strictly speaking, authors have always copied characters and plots from other works (cf. many of William Shakespeare’s plays), although the advent of copyright law complicated matters. Modern fan fiction as we currently think of it arguably emerged with the 1967 publication of the first Star Trek fanzine (Spockanalia), which included spinoff fiction based on the series. Star Trek also spawned the subgenre of slash fiction, when writers began creating stories featuring Kirk and Spock (Kirk/Spock, or K/S) in a romantic (often sexual) relationship.

The advent of the World Wide Web brought fan fiction to the masses, starting with Usenet newsgroups and mailing lists and eventually the development of massive online archives where creators could upload their work to be read and commented upon by readers. The subculture has since exploded; there’s fanfic based on everything from Sherlock Holmes to The X-Files, Buffy the Vampire Slayer, Game of Thrones, the Marvel Cinematic Universe, and Harry Potter. You name it, there’s probably fanfic about it.

There are also many subgenres within fanfic beyond slash, some of them rather weird, like a magical pregnancy (Mpreg) story in which Sherlock Holmes and Watson fall so much in love with each other that one of them becomes magically pregnant. (One suspects Sherlock would not handle morning sickness very well.) Sometimes fanfic even breaks into the cultural mainstream: E.L. James’ bestselling Fifty Shades of Grey started out as fan fiction set in the world of Stephenie Meyer’s Twilight series.

So fanfic is a genuine cultural phenomenon—hence its fascination for Simon DeDeo, a complexity scientist at Carnegie Mellon University and the Santa Fe Institute who studies cultural evolution and the emergence of social hierarchies. (I reported on DeDeo’s work analyzing the archives of London’s Old Bailey in 2014.) While opinion remains split—even among the authors of the original works—as to whether fanfic is a welcome homage to the original works that just might help drive book sales or whether it constitutes a form of copyright infringement, DeDeo enthusiastically embraces the format.

“It’s the dark matter of creativity,” DeDeo told Ars. “I love that it exists. It’s a very non-elitist form. There’s no New York Times bestseller list. It would be hard to name the most famous fan fiction writers. The world building has been done. The characters exist. The plot elements have already been put together. So the bar to entry is lower. Maybe sometime in the 19th century we get a notion of genius and the individual creator, but that’s not really what storytelling has been about for the majority of human history. In that one sense, fan fiction is closer to what we were doing around the campfire.”

spock lying down in sick bay while kirk holds his hand tenderly at his bedside

Star Trek arguably spawned contemporary fan fiction—including stories imagining Kirk and Spock as romantic partners. Credit: Paramount Pictures

That’s a boon for fanfic writers, most of whom have non-creative day jobs; fanfic provides them with a creative outlet. Every year, when DeDeo asks students in his classes whether they read and/or write fanfic, a significant percentage always raise their hands. (He once asked a woman about why she wrote slash. Her response: “Because no one was writing porn that I wanted to read.”) In fact, that’s how this current study came about. Co-author Elise Jing is one of DeDeo’s former students with a background in both science and the humanities—and she’s also a fanfic connoisseur.

Give them more of the same

Jing thought (and DeDeo concurred) that the fanfic subculture provided an excellent laboratory for studying cultural evolution. “It’s tough to get students to read a book. They write fan fiction voluntarily. This is stuff they care about writing and care about reading. Nobody gets prestige or power in the larger society from writing fan fiction,” said DeDeo. “This is not a top-down model where Hollywood is producing something and then the fans are consuming it. The fans are producing and consuming so it’s a truly self-contained culture that’s constantly evolving. It’s a pure product consumption cycle. People read it, they bookmark it, they write comments on it, and all that gives us insight into how it’s being received. If you’re a psychologist, you couldn’t pay to get this kind of data.”

Fanfic is a tightly controlled ecosystem, so it lacks many of the confounding factors that make it so difficult to study mainstream cultural works. Also, the fan fiction community is enormous, so the potential datasets are huge. For this study, the authors relied on data from the online Archive of Our Own (AO3), which boasts nearly 9 million users covering more than 70,000 different fandoms and some 15 million individual works. (Sadly, the site has since shut down access to its data over concerns about that data being used to train AI.)

According to DeDeo, the idea was to examine the question of cultural evolution on a population level, rather than on the individual level: “How do these individual things agglomerate to produce the culture?”

Strong positive correlation is found between the response variables except for the Kudos-to-hits ratio. Topic novelty is weakly positively correlated with Kudos-to-hits ratio but negatively correlated with the other response variables. Credit: E. Jing et al., 2025

The results were striking. AO3 members overwhelmingly preferred familiarity in their fan fiction, i.e., more of the same. One notable exception was a short story that was both hugely popular and highly novel. Simply titled “I Am Groot,” the story featured the character from Guardians of the Galaxy. The text is just “I am Groot” repeated 40,000 times—a stroke of genius in that this is entirely consistent with the canonical MCU character, whose entire dialogue consists of those words, with meaning conveyed by shifts of tone and context. But such exceptions proved to be very rare.

“We were so stunned that balance theory wasn’t working,” said DeDeo, who credits Jing with the realization that they were dealing with two distinct pieces of the puzzle: how much is being consumed, and how much people like what they consume, i.e., enjoyment. Their analysis revealed, first, that people really don’t want an optimized mix of familiar and new; they want the same thing over and over again, even within the fanfic community. But when people do make the effort to try something new, they tend to enjoy it more than just consuming more of the same.
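That two-part split—how much is consumed versus how much it is enjoyed—is easy to express in code. Here is a minimal sketch with invented numbers; the column names are my assumptions, with the kudos-to-hits ratio standing in for enjoyment as in the paper’s figure:

```python
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical per-work data: hits measure consumption, kudos measure
# endorsements, and novelty is a topic-novelty score for each story.
works = pd.DataFrame({
    "hits":    [12000, 800, 4500, 150, 9000],
    "kudos":   [900, 60, 310, 30, 400],
    "novelty": [0.12, 0.55, 0.20, 0.80, 0.10],
})

# Enjoyment proxy: the fraction of readers who left kudos.
works["kudos_to_hits"] = works["kudos"] / works["hits"]

# Consumption vs. novelty: negative, per the paper
# (familiar stories get read more).
print(spearmanr(works["novelty"], works["hits"]))

# Enjoyment vs. novelty: weakly positive, per the paper's figure.
print(spearmanr(works["novelty"], works["kudos_to_hits"]))
```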

In short, “We are anti-balance theory,” said DeDeo. “In biology, for example, you make a small variation in the species and you get micro-evolution. In culture, a minor variation is just less likely to be consumed. So it really is a mystery how we evolve at all culturally; it’s not happening by gradual movement. We can see that there’s novelty. We can see that when people encounter novelty, they enjoy it. But we can’t quite make sense of how these two competing effects work out.”

“This is the great paradox,” said DeDeo. “Culture has to be stable. Without long-term stability, there’s no coherent body of work that can even constitute a culture if every year fan fiction totally changes. That inherent cultural conservatism is in some sense a precondition for culture to exist at all.” Yet culture does evolve, even within the fanfic community.

One possible alternative is some kind of punctuated equilibrium model for cultural evolution, in which things remain stable but undergo occasional leaps forward. “One story about how culture evolves is that eventually, the stuff that’s more enjoyable than what people keep re-consuming somehow becomes accessible to the majority of the community,” said DeDeo. “Novelty might act as a gravitational pull on the center and [over time] some new material gets incorporated into the culture.” He draws an analogy to established tech companies like IBM versus startups, most of which die out; but those few that succeed often push the culture substantially forward.

Perhaps there are two distinct groups of people: those who actively seek out new things and those who routinely click on familiar subject matter because even though their enjoyment might be less, it’s not worth overcoming their inertia to try out something new. Perhaps it is those who seek novelty that sow the seeds of eventual shifts in trends.

“Is it that we’re tired? Is it that we’re lazy? Is this a conflict within a human or within a culture?” said DeDeo. “We don’t know because we only get the raw numbers. If we could track an individual reader to see how they moved between these two spaces, that would be really interesting.”

Humanities and Social Sciences Communications, 2025. DOI: 10.1057/s41599-025-05166-3  (About DOIs).


Why incels take the “Blackpill”—and why we should care


“Don’t work for Soyciety”

A growing number of incels are NEET (Not in Education, Employment, or Training). That should concern us all.

The Netflix series Adolescence explores the roots of misogynistic subcultures. Credit: Netflix

The online incel (“involuntary celibate”) subculture is mostly known for its extreme rhetoric, primarily against women, sometimes erupting into violence. But a growing number of self-identified incels are using their ideology as an excuse for not working or studying. This could constitute a kind of coping mechanism to make sense of their failures—not just in romantic relationships but also in education and employment, according to a paper published in the journal Gender, Work, & Organization.

Contrary to how it’s often portrayed, the “manosphere,” as it is often called, is not a monolith. Those who embrace the “Redpill” ideology, for example, might insist that women control the “sexual marketplace” and are only interested in ultramasculine “Chads.” They champion self-improvement as a means to make themselves more masculine and successful, and hence (they believe) more attractive to women—or at least better able to manipulate women.

By contrast, the “Blackpilled” incel contingent is generally more nihilistic. These individuals reject the Redpill notion of alpha-male masculinity and the accompanying focus on self-improvement. They believe that dating and social success are entirely determined by one’s looks and/or genetics. Since there is nothing they can do to improve their chances with women or their lot in life, why even bother?

“People have a tendency to lump all these different groups together as the manosphere,” co-author AnnaRose Beckett-Herbert of McGill University told Ars. “One critique I have of the recent Netflix show Adolescence—which was well done overall—is they lump incels in with figures like Andrew Tate, as though it’s all interchangeable. There’s areas of overlap, like extreme misogyny, but there are really important distinctions. We have to be careful to make those distinctions because the kind of intervention or prevention efforts that we might direct towards the Redpill community versus the Blackpill community might be very different.”

Incels constitute a fairly small fraction of the manosphere, but the vast majority of incels appear to embrace the Blackpill ideology, per Beckett-Herbert. That nihilistic attitude can extend to any kind of participation in what incels term “Soyciety”—including educational attainment and employment. When that happens, such individuals are best described by the acronym NEET (Not in Education, Employment, or Training).

“It’s not that we have large swaths of young men that are falling into this rabbit hole,” said Beckett-Herbert. “Their ideology is pretty fringe, but we’re seeing the community grow, and we’re seeing the ideology spread. It used to be contained to romantic relationships and sex. Now we’re seeing this broader disengagement from society as a whole. We should all be concerned about that trend.”

The NEET trend is also tied to the broader cultural discourse on how boys and young men are struggling in contemporary society. While prior studies tended to focus on the misogynistic rhetoric and propensity for violence among incels, “I thought that the unemployment lens was interesting because it’s indicative of larger problems,” said Beckett-Herbert. “It’s important to remember that it’s not zero-sum. We can care about the well-being of women and girls and also acknowledge that young men are struggling, too. Those don’t have to be at odds.”

“Lie down and rot”

Beckett-Herbert and her advisor/co-author, McGill University sociologist Eran Shor, chose the incels.is platform as a data source for their study due to its ease of public access and relatively high traffic, with nearly 20,000 members. The pair used Python code to scrape 100 pages, amounting to around 10,000 discussion threads between October and December 2022. A pilot study revealed 10 keywords that appeared most frequently in those threads: “study,” “school,” “NEET,” “job,” “work,” “money,” “career,” “wage,” “employ,” and “rot.” (“They use the phrase ‘lie down and rot’ a lot,” said Beckett-Herbert.)

This allowed Beckett-Herbert and Shor to narrow their sample down to 516 threads with titles containing those keywords. They randomly selected a subset of 171 discussion threads for further study. That analysis yielded four main themes that dominated the discussion threads: political/ideological arguments about being NEET; boundary policing; perceived discrimination; and bullying and marginalization.
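The paper describes the pipeline in enough detail to sketch the filtering step. Here is a minimal reconstruction, assuming the scraped thread titles are already in a list; the substring matching and function names are my own choices, and the authors may have matched terms differently:

```python
import random

# The ten keywords the pilot study surfaced most frequently.
KEYWORDS = ["study", "school", "neet", "job", "work",
            "money", "career", "wage", "employ", "rot"]

def matches_keyword(title: str) -> bool:
    """True if any keyword appears anywhere in the thread title."""
    lowered = title.lower()
    return any(kw in lowered for kw in KEYWORDS)

def select_sample(titles: list[str], k: int = 171, seed: int = 0) -> list[str]:
    """Filter titles by keyword, then randomly draw k threads for coding."""
    matching = [t for t in titles if matches_keyword(t)]
    random.seed(seed)
    return random.sample(matching, min(k, len(matching)))

# Hypothetical usage with invented titles:
threads = ["No point in wagecucking", "Gymcel progress", "NEET life thread"]
print(select_sample(threads, k=2))
```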

Roughly one-quarter of the total comments consisted of political or ideological arguments promoting being NEET, with most commenters advocating minimizing one’s contributions to society as much as possible. They suggested going on welfare, for instance, to “take back” from society, or declared they should be exempt from paying any taxes, as “compensation for our suffering.” About 25 percent—a vocal minority—pushed back on glorifying the NEET lifestyle and offered concrete suggestions for self-improvement. (“Go outside and try at least,” one user commented.)

Such pushback often led to boundary policing. Those who do pursue jobs or education run the risk of being dubbed “fakecels” and becoming alienated from the rest of the incel community. (“Don’t work for a society that hates you,” one user commented.) “There’s a lot of social psychological research on groupthink and group polarization that is relevant here,” said Beckett-Herbert. “A lot of these young men may not have friends in their real life. This community is often their one source of social connection. So the incel ideology becomes core to their identity: ‘I’m part of this community, and we don’t work. We are subhumans.'”

There were also frequent laments about being discriminated against for not being attractive (“lookism”), both romantically and professionally, as well as deep resentment of women’s increased presence in the workplace, deemed a threat to men’s own success. “They love to cherry-pick all these findings from psychology research [to support their position],” said Beckett-Herbert. For instance, “There is evidence that men who are short or not conventionally attractive are discriminated against in hiring. But there’s also a lot of evidence suggesting that this actually affects women more. Women who are overweight face a greater bias against them in hiring than men do, for example.”

Beckett-Herbert and Shor also found that about 15 percent of the comments in their sample concerned users’ experiences being harassed or bullied (usually by other men), their mental health challenges (anxiety, depression), and feeling estranged or ostracized at school or work—experiences that cemented their reluctance to work or engage in education or vocational training.

Many of these users also mentioned being autistic, in keeping with prior research showing a relatively high share of people with autism in incel communities. The authors were careful to clarify, however, that most people with autism “are not violent or hateful, nor do they identify as incels or hold explicitly misogynistic views,” they wrote. “Rather, autism, when combined with other mental health issues such as depression, anxiety, and hopelessness, may make young men more vulnerable to incel ideologies.”

There are always caveats. In this case, the study was limited to a single incel forum, which might not be broadly representative of similar discussions on other platforms. And there could be a bit of selection bias at play: not every forum member actively participates in discussion threads (many just lurk), and non-NEET incels might be less likely to post, either because they have less free time or because they don’t wish to be dismissed as “fakecels.” However, Beckett-Herbert and Shor note that their findings are consistent with previous studies suggesting there are a disproportionately large number of NEETs within the incel community.

A pound of prevention

Is effective intervention even possible for members of the incel community, given their online echo chamber? Beckett-Herbert acknowledges that it is very difficult to break through to such people. “De-radicalization is a noble, worthy line of research,” she said. “But the existing evidence from that field of study suggests that prevention is easier and more effective than trying to pull these people out once they’re already in.” Potential strategies might include fostering better digital and media literacy, i.e., teaching kids to be cognizant of the content they’re consuming online. Exposure time is another key issue.

“A lot of these young people don’t have healthy outlets that are not in the digital world,” said Beckett-Herbert. “They come home from school and spend hours and hours online. They’re lonely and isolated from real-world communities and structures. Some of these harmful ideologies might be downstream of these larger root causes. How can we help boys do better in school, feel better prepared for the labor market? How can we help them make more friends? How can we get them involved in real-world activities that will diminish their time spent online? I think that that can go a long way. Just condemning them or banning their spaces—that’s not a good long-term solution.”

While there are multiple well-publicized instances of self-identified incels committing violent acts—most notably Elliot Rodger, who killed six people in 2014—Beckett-Herbert emphasizes not losing sight of incels’ fundamental humanity. “We focus a lot on the misogyny, the potential for violence against women, and that is so important,” she said. “You will not hear me saying we should not focus on that. But we also should note that statistically, an incel is much more likely to commit suicide or be violent towards themselves than they are toward someone else. You can both condemn their ideology and find it abhorrent and also remember that we need to have empathy for these people.”

Many people—women especially—might find that a tall order, and Beckett-Herbert understands that reluctance. “I do understand people’s hesitancy to empathize with them, because it feels like you’re giving credence to their rhetoric,” she said. “But at the end of the day, they are human, and a lot of them are really struggling, marginalized people coming from pretty sad backgrounds. When you peruse their online world, it’s the most horrifying, angering misogyny right next to some of the saddest mental health, suicidal, low self-esteem stuff you’ve ever seen. I think humanizing them and having empathy is going to be foundational to any intervention efforts to reintegrate them. But it’s something I wrestle with a lot.”


New twist on marshmallow test shows power of a promise

There have also been several studies examining the effects of social interdependence and similar social contexts on children’s ability to delay gratification, using variations of the marshmallow test paradigm. For instance, in 2020, a team of German researchers adapted the classic experimental setup using Oreos and vanilla cookies with German and Kenyan schoolchildren, respectively. If both children waited to eat their treat, they received a second cookie as a reward; if one did not wait, neither child received a second cookie. They found that the kids were more likely to delay gratification when they depended on each other, compared to the standard marshmallow test.

An online paradigm

Rebecca Koomen, a psychologist now at the University of Manchester, co-authored the 2020 study as well as this latest one, which sought to build on those findings. Koomen et al. structured their experiments similarly, this time recruiting 66 UK children, ages 5 to 6, as subjects. They focused on how promising a partner not to eat a favorite treat could inspire sufficient trust to delay gratification, compared to the social risk of one or both partners breaking that promise. Any parent could tell you that children of this age are really big on the importance of promises, and science largely concurs; a promise has been shown to enhance interdependent cooperation in this age group.

Koomen and her Manchester colleagues added an extra twist: They conducted their version of the marshmallow test online to test the effectiveness compared to lab-based versions of the experiment. (Prior results from similar online studies have been mixed.) “Given face-to-face testing restrictions during the COVID pandemic, this, to our knowledge, represents the first cooperative marshmallow study to be conducted online, thereby adding to the growing body of literature concerning the validity of remote testing methods,” they wrote.

The type of treat was chosen by each child’s parents, ensuring it was a favorite: chocolate, candy, biscuits, and marshmallows, mostly, although three kids loved potato chips, fruit, and nuts, respectively. Parents were asked to set up the experiment in a quiet room with minimal potential distractions, outfitted with a webcam to monitor the experiment. Each child was shown a video of a “confederate child” who either clearly promised not to eat the treat or more ambiguously suggested they might succumb and eat their treat. (The confederate child refrained from eating the treat in both conditions, although the participant child did not know that.)

New twist on marshmallow test shows power of a promise Read More »

how-the-language-of-job-postings-can-attract-rule-bending-narcissists

How the language of job postings can attract rule-bending narcissists

Why it matters

Companies write job postings carefully in hopes of attracting the ideal candidate. However, they may unknowingly attract and select narcissistic candidates whose goals and ethics might not align with a company’s values or long-term success. Research shows that narcissistic employees are more likely to behave unethically, potentially leading to legal consequences.

While narcissistic traits can lead to negative outcomes, we aren’t saying that companies should avoid attracting narcissistic applicants altogether. Consider a company hiring a salesperson. A firm can benefit from a salesperson who is persuasive, who “thinks outside the box,” and who is “results-oriented.” In contrast, a company hiring an accountant or compliance officer would likely benefit from someone who “thinks methodically” and “communicates in a straightforward and accurate manner.”

Bending the rules is of particular concern in accounting. A significant amount of research examines how accounting managers sometimes bend rules or massage the numbers to achieve earnings targets. This “earnings management” can misrepresent the company’s true financial position.

In fact, my co-author Nick Seybert is currently working on a paper whose data suggests that rule-bender language in accounting job postings predicts rule-bending in financial reporting.

Our current findings shed light on the importance of carefully crafting job posting language. Recruiting professionals may instinctively use rule-bender language to try to attract someone who seems like a good fit. If companies are concerned about hiring narcissists, they may want to clearly communicate their ethical values and needs while crafting a job posting, or avoid rule-bender language entirely.

What still isn’t known

While we find that professional recruiters are using language that attracts narcissists, it is unclear whether this is intentional.

Additionally, we are unsure what really drives rule-bending in a company. Rule-bending could happen due to attracting and hiring more narcissistic candidates, or it could be because of a company’s culture—or a combination of both.

The Research Brief is a short take on interesting academic work.

Jonathan Gay is Assistant Professor of Accountancy at the University of Mississippi.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

How the language of job postings can attract rule-bending narcissists Read More »

do-these-dual-images-say-anything-about-your-personality?

Do these dual images say anything about your personality?

There’s little that Internet denizens love more than a snazzy personality test—cat videos, maybe, or perpetual outrage. One trend that has gained popularity over the last several years is personality quizzes based on so-called ambiguous images—in which one sees either a young girl or an old man, for instance, or a skull or a little girl. It’s possible to perceive both images by shifting one’s perspective, but it’s the image one sees first that is said to indicate specific personality traits. According to one such quiz, seeing the young girl first means you are optimistic and a bit impulsive, while seeing the old man first means you are honest, faithful, and goal-oriented.

But is there any actual science to back up the current fad? There is not, according to a paper published in the journal PeerJ, whose authors declare these kinds of personality quizzes to be a new kind of psychological myth. That said, they did find a couple of intriguing, statistically significant correlations they believe warrant further research.

In 1892, a German humor magazine published the earliest known version of the “rabbit-duck illusion,” in which one can see either a rabbit or a duck, depending on one’s perspective—i.e., multistable perception. There have been many more such images produced since then, all of which create ambiguity by exploiting certain peculiarities of the human visual system, such as playing with illusory contours and how we perceive edges.

Such images have long fascinated scientists and philosophers because they seem to represent different ways of seeing. So naturally there is a substantial body of research drawing parallels between such images and various sociological, biological, or psychological characteristics.

For instance, a 2010 study examined BBC archival data on the duck-rabbit illusion from the 1950s and found that men see the duck more often than women, while older people were more likely to see the rabbit. A 2018 study of the “younger-older woman” ambiguous image asked participants to estimate the age of the woman they saw in the image; participants over 30 gave higher estimates than younger ones. This was confirmed by a 2021 study, although that study also found no correlation between participants’ age and whether they were more likely to see the older or younger woman in the image.

Do these dual images say anything about your personality? Read More »

heroes,-villains,-and-childhood-trauma-in-the-mceu-and-dcu

Heroes, villains, and childhood trauma in the MCU and DCU

They also limited their study to Marvel and DC characters depicted in major films, rather than including storylines from spinoff TV series. So Wanda Maximoff/The Scarlet Witch was not included since much of her traumatic backstory appeared in the series WandaVision. Furthermore, “We omitted gathering more characters from comic books in both Marvel and DC universes, due to their inconsistency in character development,” the authors wrote. “Comic book storylines often feature alternative plot lines, character arcs, and multiverse outcomes. The storytelling makes comic book characters highly inconsistent and challenging to score.”

With great power…

They ended up watching 33 films, with a total runtime of 77 hours and 5 minutes. They chose 19 male characters, eight female characters, and one gender-fluid character (Loki) as “subjects” for their study, applying the ACE questionnaire to their childhoods as portrayed in the films.
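
For readers unfamiliar with the instrument, the standard ACE questionnaire simply counts how many of ten categories of childhood adversity a person reports, yielding a score from 0 to 10. Here is a minimal sketch of that scoring in Python; the category labels are our abbreviated paraphrases of the questionnaire's items, not the study's code.

# Sketch of ACE scoring: one point per adversity category reported, 0-10 total.
# Category labels are abbreviated paraphrases of the standard questionnaire.
ACE_CATEGORIES = [
    "physical_abuse", "emotional_abuse", "sexual_abuse",
    "physical_neglect", "emotional_neglect",
    "domestic_violence", "household_substance_abuse",
    "household_mental_illness", "parental_separation",
    "incarcerated_household_member",
]

def ace_score(answers):
    # answers maps each category to True (experienced) or False.
    return sum(bool(answers.get(c, False)) for c in ACE_CATEGORIES)

# A character whose on-screen childhood shows three adversities scores 3.
print(ace_score({"physical_abuse": True, "parental_separation": True,
                 "household_substance_abuse": True}))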

The results: “We found no statistically significant differences between heroes and villains, Marvel and DC characters, or men and women and ACE scores,” said Jackson. “This means that characters who were portrayed as having difficult childhoods were not more likely to be villains. This study somewhat refutes the idea that villains are a product of their experiences. Based on the films we watched, people chose to be heroes and that was what made the difference—not their experiences.”

Notably, Black Widow had the highest ACE score (eight) and yet still became an Avenger, though the authors acknowledge that the character did some bad things before then and famously wanted to wipe out the “red” in her ledger. She “represents resilience of characters who have experienced trauma,” the authors wrote, as well as demonstrating that “socio-ecological resilience, including access to social relationships and supportive communities, can play a mitigating role in the effect of ACEs.” The Joker, by contrast, scored a six and “wreaked havoc across Gotham City.”

Heroes, villains, and childhood trauma in the MCU and DCU Read More »

ai-chatbots-might-be-better-at-swaying-conspiracy-theorists-than-humans

AI chatbots might be better at swaying conspiracy theorists than humans

Out of the rabbit hole —

Co-author Gordon Pennycook: “The work overturns a lot of how we thought about conspiracies.”

A woman wearing a sweatshirt for the QAnon conspiracy theory on October 11, 2020, in Ronkonkoma, New York. Credit: Stephanie Keith / Getty Images

Belief in conspiracy theories is rampant, particularly in the US, where some estimates suggest as much as 50 percent of the population believes in at least one outlandish claim. And those beliefs are notoriously difficult to debunk. Challenge a committed conspiracy theorist with facts and evidence, and they’ll usually just double down—a phenomenon psychologists usually attribute to motivated reasoning, i.e., a biased way of processing information.

A new paper published in the journal Science is challenging that conventional wisdom, however. Experiments in which an AI chatbot engaged in conversations with people who believed at least one conspiracy theory showed that the interaction significantly reduced the strength of those beliefs, even two months later. The secret to its success: the chatbot, with its access to vast amounts of information across an enormous range of topics, could precisely tailor its counterarguments to each individual.

“These are some of the most fascinating results I’ve ever seen,” co-author Gordon Pennycook, a psychologist at Cornell University, said during a media briefing. “The work overturns a lot of how we thought about conspiracies, that they’re the result of various psychological motives and needs. [Participants] were remarkably responsive to evidence. There’s been a lot of ink spilled about being in a post-truth world. It’s really validating to know that evidence does matter. We can act in a more adaptive way using this new technology to get good evidence in front of people that is specifically relevant to what they think, so it’s a much more powerful approach.”

When confronted with facts that challenge a deeply entrenched belief, people will often seek to preserve it rather than update their priors (in Bayesian-speak) in light of the new evidence. So there has been a good deal of pessimism lately about ever reaching those who have plunged deep down the rabbit hole of conspiracy theories, which are notoriously persistent and “pose a serious threat to democratic societies,” per the authors. Pennycook and his fellow co-authors devised an alternative explanation for that stubborn persistence of belief.

Bespoke counter-arguments

The issue is that “conspiracy theories just vary a lot from person to person,” said co-author Thomas Costello, a psychologist at American University who is also affiliated with MIT. “They’re quite heterogeneous. People believe a wide range of them and the specific evidence that people use to support even a single conspiracy may differ from one person to another. So debunking attempts where you try to argue broadly against a conspiracy theory are not going to be effective because people have different versions of that conspiracy in their heads.”

By contrast, an AI chatbot would be able to tailor debunking efforts to those different versions of a conspiracy. So in theory a chatbot might prove more effective in swaying someone from their pet conspiracy theory.

To test their hypothesis, the team conducted a series of experiments with 2,190 participants who believed in one or more conspiracy theories. The participants engaged in several personal “conversations” with a large language model (GPT-4 Turbo) in which they shared their pet conspiracy theory and the evidence they felt supported that belief. The LLM would respond by offering factual and evidence-based counter-arguments tailored to the individual participant. GPT-4 Turbo’s responses were professionally fact-checked, which showed that 99.2 percent of the claims it made were true, with just 0.8 percent being labeled misleading, and zero as false. (You can try your hand at interacting with the debunking chatbot here.)

Screenshot of the chatbot opening page, which asks questions to prepare for a conversation. Credit: Thomas H. Costello

Participants first answered a series of open-ended questions about the conspiracy theories they strongly believed and the evidence they relied upon to support those beliefs. The AI then produced a single-sentence summary of each belief, for example, “9/11 was an inside job because X, Y, and Z.” Participants rated the accuracy of that statement in terms of their own beliefs and then filled out a questionnaire about other conspiracies, their attitude toward trusted experts, AI, other people in society, and so forth.

Then it was time for the one-on-one dialogues with the chatbot, which the team programmed to be as persuasive as possible. The chatbot had also been fed the open-ended responses of the participants, which enabled it to tailor its counter-arguments to each individual. For example, if someone thought 9/11 was an inside job and cited as evidence the fact that jet fuel doesn’t burn hot enough to melt steel, the chatbot might counter with, say, the NIST report showing that steel loses its strength at much lower temperatures, sufficient to weaken the towers’ structures so that they collapsed. Someone who thought 9/11 was an inside job and cited demolitions as evidence would get a different response tailored to that.
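
To make the protocol concrete, here is a minimal sketch in Python of how such a personalized debunking loop could be wired up. This is an illustration under stated assumptions, not the study's published code: the model name, prompt wording, and function names below are ours, and it assumes the openai Python client with an API key set in the environment.

# Hypothetical sketch of a personalized debunking dialogue, not the study's code.
# Assumes the openai Python package (v1 API) and an OPENAI_API_KEY env variable.
from openai import OpenAI

client = OpenAI()

def debunking_dialogue(belief_summary, participant_evidence, turns=3):
    # Seed the model with the participant's own belief summary and evidence,
    # mirroring how the study fed open-ended responses to the chatbot.
    messages = [{
        "role": "system",
        "content": (
            "You are a persuasive, strictly factual assistant. The user believes: "
            f"{belief_summary} Their supporting evidence: {participant_evidence} "
            "Offer evidence-based counter-arguments tailored to that reasoning."
        ),
    }]
    for _ in range(turns):
        messages.append({"role": "user", "content": input("Participant: ")})
        reply = client.chat.completions.create(
            model="gpt-4-turbo",  # stand-in for the GPT-4 Turbo model named in the study
            messages=messages,
        )
        answer = reply.choices[0].message.content
        messages.append({"role": "assistant", "content": answer})
        print("Chatbot:", answer)

debunking_dialogue(
    "9/11 was an inside job because jet fuel cannot melt steel beams.",
    "Jet fuel burns below the melting point of structural steel.",
)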

Participants then answered the same set of questions after their dialogues with the chatbot, which lasted about eight minutes on average. Costello et al. found that these targeted dialogues resulted in a 20 percent decrease in the participants’ misinformed beliefs—a reduction that persisted even two months later when participants were evaluated again.

As Bence Bago (Tilburg University) and Jean-Francois Bonnefon (CNRS, Toulouse, France) noted in an accompanying perspective, this is a substantial effect compared to the 1 to 6 percent drop in beliefs achieved by other interventions. They also deemed the persistence of the effect noteworthy, while cautioning that two months is “insufficient to completely eliminate misinformed conspiracy beliefs.”

AI chatbots might be better at swaying conspiracy theorists than humans Read More »

study:-playing-dungeons-&-dragons-helps-autistic-players-in-social-interactions

Study: Playing Dungeons & Dragons helps autistic players in social interactions

We can be heroes —

“I can make a character quite different from how I interact with people in real life.”

Researchers say that Dungeons & Dragons can give autistic players a way to engage in low-risk social interactions.

Since its introduction in the 1970s, Dungeons & Dragons has become one of the most influential tabletop role-playing games (TRPGs) in popular culture, featuring heavily in Stranger Things, for example, and spawning a blockbuster movie released last year. Over the last decade or so, researchers have turned their focus more heavily to the ways in which D&D and other TRPGs can help people with autism form healthy social connections, in part because the gaming environment offers clear rules around social interactions. According to the authors of a new paper published in the journal Autism, D&D boosted the confidence of autistic players, giving them a strong sense of kinship or belonging, among other benefits.

“There are many myths and misconceptions about autism, with some of the biggest suggesting that those with it aren’t socially motivated, or don’t have any imagination,” said co-author Gray Atherton, a psychologist at the University of Plymouth. “Dungeons & Dragons goes against all that, centering around working together in a team, all of which takes place in a completely imaginary environment. Those taking part in our study saw the game as a breath of fresh air, a chance to take on a different persona and share experiences outside of an often challenging reality. That sense of escapism made them feel incredibly comfortable, and many of them said they were now trying to apply aspects of it in their daily lives.”

Prior research has shown that autistic people are more likely to feel lonely, have smaller social networks, and often experience anxiety in social settings. Their desire for social connection leads many to “mask” their neurodivergent traits in public for fear of being rejected as a result of social gaffes. “I think every autistic person has had multiple instances of social rejection and loss of relationships,” one of the study participants said when Atherton et al. interviewed them about their experiences. “You’ve done something wrong. You don’t know what it is. They don’t tell you, and you find out when you’ve been just, you know, left shunned in relationships, left out…. It’s traumatic.”

TRPGs like D&D can serve as a social lubricant for autistic players, according to a year-long study co-authored by Atherton and published earlier this year, because there is less uncertainty around how to behave in-game—unlike the plethora of unwritten social rules that make navigating social settings so anxiety-inducing. Such games immerse players in a fantastical world where they create their characters with unique backstories, strengths, and weaknesses and cooperate with others to complete campaigns. A game master guides the overall campaign, but the game itself evolves according to the various choices different players make throughout.

A critical hit

Small wonder, then, that autistic people tend to make up a higher percentage of TRPG players than of the general populace. For this latest study, Atherton et al. wanted to specifically investigate how autistic players experience D&D when playing in groups with other autistic players. It’s essentially a case study with a small sample size—just eight participants—and qualitative in nature, since the post-play analysis focused on semistructured interviews with each player after the conclusion of the online campaign, the better to highlight their individual voices.

The players were recruited through social media advertisements within the D&D, Reddit, and Discord online communities; all had received an autism diagnosis from a medical professional. They were split into two groups of four players, with one of the researchers (who’s been playing D&D for years) acting as the dungeon master. The online sessions featured in the study followed the Waterdeep: Dragon Heist campaign, which ran for six weeks, with sessions lasting between two and four hours (including breaks).

Participants spoke repeatedly about the benefits of playing D&D, which provided a friendly environment that helped ease their social anxieties. “When you’re interacting with people over D&D, you’re more likely to understand what’s going on,” one participant said in their study interview. “That’s because the method you’ll use to interact is written out. You can see what you’re meant to do. There’s an actual sort of reference sheet for some social interactions.” That, in turn, helped foster a sense of belonging and kinship with their fellow players.

Participants also reported feeling emotionally invested and close to their characters, with some preferring to separate themselves from their character in order to explore other aspects of their personality or even an entirely new persona, thus broadening their perspectives. “I can make a character quite different from how I interact with people in real-life interactions,” one participant said. “It helps you put yourself in the other person’s perspective because you are technically entering a persona that is your character. You can then try to see how it feels to be in that interaction or in that scenario through another lens.” And some participants said they were able to “rewrite” their own personal stories outside the game by adopting some of their characters’ traits—a psychological phenomenon known as “bleed.”

“Autism comes with several stigmas, and that can lead to people being met with judgment or disdain,” said co-author Liam Cross, also of the University of Plymouth. “We also hear from lots of families who have concerns about whether teenagers with autism are spending too much time playing things like video games. A lot of the time that is because people have a picture in their minds of how a person with autism should behave, but that is based on neurotypical experiences. Our studies have shown that there are everyday games and hobbies that autistic people do not simply enjoy but also gain confidence and other skills from. It might not be the case for everyone with autism, but our work suggests it can enable people to have positive experiences that are worth celebrating.”

Autism, 2024. DOI: 10.1177/13623613241275260  (About DOIs).

Study: Playing Dungeons & Dragons helps autistic players in social interactions Read More »

the-nature-of-consciousness,-and-how-to-enjoy-it-while-you-can

The nature of consciousness, and how to enjoy it while you can

Remaining aware —

In his new book, Christof Koch views consciousness as a theorist and an aficionado.


Unraveling how consciousness arises out of particular configurations of organic matter is a quest that has absorbed scientists and philosophers for ages. Now, with AI systems behaving in strikingly conscious-looking ways, it is more important than ever to get a handle on who and what is capable of experiencing life on a conscious level. As Christof Koch writes in Then I Am Myself the World, “That you are intimately acquainted with the way life feels is a brute fact about the world that cries out for an explanation.” His explanation—bounded by the limits of current research and framed through Koch’s preferred theory of consciousness—is what he eloquently attempts to deliver.

Koch, a physicist, neuroscientist, and former president of the Allen Institute for Brain Science, has spent his career hunting for the seat of consciousness, scouring the brain for physical footprints of subjective experience. It turns out that the posterior hot zone, a region in the back of the neocortex, is intricately connected to self-awareness and experiences of sound, sight, and touch. Dense networks of neocortical neurons in this area connect in a looped configuration; output signals feed back into input neurons, allowing the posterior hot zone to influence its own behavior. And herein, Koch claims, lies the key to consciousness.

In the hot zone

According to integrated information theory (IIT)—which Koch strongly favors over a multitude of contending theories of consciousness—the Rosetta Stone of subjective experience is the ability of a system to influence itself: to use its past state to affect its present state and its present state to influence its future state.

Billions of neurons exist in the cerebellum, but they are wired “with nonoverlapping inputs and outputs … in a feed-forward manner,” writes Koch. He argues that a structure designed in this way, with limited influence over its own future, is not likely to produce consciousness. Similarly, the prefrontal cortex might allow us to perform complex calculations and exhibit advanced reasoning skills, but such traits do not equate to a capacity to experience life. It is the “reverberatory, self-sustaining excitatory loops prevalent in the neocortex,” Koch tells us, that set the stage for subjective experience to arise.
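
As a toy illustration of the architectural contrast Koch draws, and emphatically not a computation of IIT’s integrated-information measure, compare a one-shot feed-forward mapping with a loop in which each state feeds back to shape the next. The sizes and weights in this Python sketch are arbitrary assumptions of ours.

# Toy contrast between feed-forward and recurrent (self-influencing) dynamics.
# Arbitrary weights; this illustrates the structural point only, not IIT's phi.
import numpy as np

rng = np.random.default_rng(0)
W = 0.5 * rng.normal(size=(8, 8))

def feed_forward(x):
    # Cerebellum-like: input maps to output once; no state carries over.
    return np.tanh(W @ x)

def recurrent(x, steps=5):
    # Neocortex-like: output is fed back in, so the present state
    # influences every future state.
    for _ in range(steps):
        x = np.tanh(W @ x)
    return x

x0 = rng.normal(size=8)
print(feed_forward(x0))  # one-shot transformation
print(recurrent(x0))     # history-dependent trajectory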

This declaration matches the experimental evidence Koch presents in Chapter 6: Injuries to the cerebellum do not eliminate a person’s awareness of themselves in relation to the outside world. Consciousness remains, even in a person who can no longer move their body with ease. Yet injuries to the posterior hot zone within the neocortex significantly change a person’s perception of auditory, visual, and tactile information, altering what they subjectively experience and how they describe these experiences to themselves and others.

Does this mean that artificial computer systems, wired appropriately, can be conscious? Not necessarily, Koch says. This might one day be possible with the advent of new technology, but we are not there yet. He writes: “The high connectivity [in a human brain] is very different from that found in the central processing unit of any digital computer, where one transistor typically connects to a handful of other transistors.” For the foreseeable future, AI systems will remain unconscious despite appearances to the contrary.

Koch’s eloquent overview of IIT and the melodic ease of his neuroscientific explanations are undeniably compelling, even for die-hard physicalists who flinch at terms like “self-influence.” His impeccably written descriptions are peppered with references to philosophers, writers, musicians, and psychologists—Albert Camus, Viktor Frankl, Richard Wagner, and Lewis Carroll all make appearances, adding richness and relatability to the narrative. For example, as an introduction to phenomenology—the way an experience feels or appears—he aptly quotes Eminem: “I can’t tell you what it really is, I can only tell you what it feels like.”

The nature of consciousness, and how to enjoy it while you can Read More »