

Analysis: The Trump administration’s assault on climate action


Official actions don’t challenge science, while unofficial docs muddy the waters.

Last week, the Environmental Protection Agency made lots of headlines by rejecting the document that establishes its ability to regulate the greenhouse gases that are warming our climate. While the legal assault on regulations grabbed most of the attention, it was paired with two other actions that targeted other aspects of climate change: the science underlying our current understanding of the dramatic warming the Earth is experiencing, and the renewable energy that represents our best chance of limiting this warming.

Collectively, these actions illuminate the administration’s strategy for dealing with a problem that it would prefer to believe doesn’t exist, despite our extensive documentation of its reality. They also show how the administration is tailoring its approach to different audiences, including the audience of one who is demanding inaction.

When in doubt, make something up

The simplest thing to understand is an action by the Department of the Interior, which handles permitting for energy projects on federal land—including wind and solar, both onshore and off. That has placed the Interior in an awkward position. Wind and solar are now generally the cheapest ways to generate electricity and are currently in the process of a spectacular boom, with solar now accounting for over 80 percent of the newly installed capacity in the US.

Yet, when Trump issued an executive order declaring an energy emergency, wind and solar were notably excluded as potential solutions. Language from Trump and other administration officials has also made it clear that renewable energy is viewed as an impediment to the administration’s pro-fossil fuel agenda.

But shutting down federal permitting for renewable energy with little more than “we don’t like it” as justification could run afoul of rules that forbid government decisions from being “arbitrary and capricious.” This may explain why the government gave up on its attempts to block the ongoing construction of an offshore wind farm in New York waters.

On Friday, the Interior announced that it had settled on a less arbitrary justification for blocking renewable energy on public land: energy density. Given a metric of land use per megawatt, wind and solar are less efficient than nuclear plants we can’t manage to build on time or budget, and therefore “environmentally damaging” and an inefficient use of federal land, according to the new logic. “The Department will now consider proposed energy project’s capacity density when assessing the project’s potential energy benefits to the nation and impacts to the environment and wildlife,” Interior declared.

This is only marginally more reasonable than Interior Secretary Doug Burgum’s apparent inability to recognize that solar power can be stored in batteries. But it has three features that will be recurring themes. There’s at least a token attempt to provide a justification that might survive the inevitable lawsuits, while at the same time providing fodder for the culture war that many in the administration demand. And it avoids directly attacking the science that initially motivated the push toward renewables.

Energy vs. the climate

That’s not to say that climate change isn’t in for attack. It’s just that the attacks are being strategically separated from the decisions that might produce a lawsuit. Last week, the burden of taking on extremely well-understood and supported science fell to the Department of Energy, which released a report on climate “science” to coincide with the EPA’s decision to give up on attempts to regulate greenhouse gases.

For those who have followed public debates over climate change, looking at the author list—John Christy, Judith Curry, Steven Koonin, Ross McKitrick, and Roy Spencer—will give you a very clear picture of what to expect. Spencer is a creationist, raising questions about his ability to evaluate any science free from his personal biases. (He has also said, “My job has helped save our economy from the economic ravages of out-of-control environmental extremism,” so it’s not just biology where he’s got these issues.) McKitrick is an economist who engaged in a multi-year attempt to raise doubt about the prominent “hockey stick” reconstruction of past climates, even as scientists were replicating the results. Etc.

The report is a master class in arbitrary and capricious decision-making applied to science. Sometimes the authors rely on the peer-reviewed literature. Other times they perform their own analysis for this document, in some cases coming up with almost comically random metrics for data. (Example: “We examine occurrences of 5-day deluges as follows. Taking the Pacific coast as an example, a 130-year span contains 26 5-year intervals. At each location we computed the 5-day precipitation totals throughout the year and selected the 26 highest values across the sample.” Why five days? Five-year intervals? Who knows.)
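
To see how arbitrary those choices are, here is a minimal sketch in Python of the kind of calculation the quote describes, run on synthetic precipitation data; the 5-day window and the 26-value cutoff are taken straight from the report’s own numbers, which is exactly the point.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic daily precipitation for one location over a 130-year span.
years = 130
daily_precip = rng.gamma(shape=0.4, scale=5.0, size=years * 365)

# Rolling 5-day totals across the entire record.
window = 5
five_day_totals = np.convolve(daily_precip, np.ones(window), mode="valid")

# The report's metric: 130 years contains 26 five-year intervals, so keep the
# 26 largest 5-day totals in the whole record. Why 5 days, and why one value
# per 5-year interval? The report never says.
n_intervals = years // 5  # 26
top_deluges = np.sort(five_day_totals)[-n_intervals:]

print(f"{n_intervals} largest 5-day deluges (synthetic data):")
print(np.round(top_deluges[-5:], 1))
```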

This is especially striking in a few cases where the authors choose references that were published a few years ago, and thus neatly avoid the dramatic temperature records that have been set over the past couple of years. Similarly, they sometimes use regional measures and sometimes use global ones. They demand long-term data in some contexts, while getting excited about two years of coral growth in the Great Barrier Reef. The authors highlight the fact that US tide gauges don’t show any indication of an acceleration in the rate of sea level rise while ignoring the fact that global satellite measures clearly do.

That’s not to say that there aren’t other problems. There’s some blatant misinformation, like claims that urbanization could be distorting the warming record, a possibility that has already been tested extensively. (Notably, warming is most intense in the sparsely populated Arctic.) There’s also some creative use of language, like referring to the ocean acidification caused by CO2 as “neutralizing ocean alkalinity.”

But the biggest bit of misinformation comes in the introduction, where the secretary of energy, Chris Wright, said of the authors, “I chose them for their rigor, honesty, and willingness to elevate the debate.” There is no reason to choose this group of marginal contrarians except the knowledge that they’d produce a report like this, thus providing a justification for those in the administration who want to believe it’s all a scam.

No science needed

The critical feature of the Department of Energy report is that it contains no policy actions; it’s purely about trying to undercut well-understood climate science. This means the questionable analyses in the report shouldn’t ever end up being tested in court.

That’s in contrast to the decision to withdraw the EPA’s endangerment finding regarding greenhouse gases. There’s quite an extensive history to the endangerment finding, but briefly, it’s the product of a Supreme Court decision (Massachusetts v. EPA), which compelled the EPA to evaluate whether greenhouse gases posed a threat to the US population as defined in the Clean Air Act. Both the Bush and Obama EPAs did so, thus enabling the regulation of greenhouse gases, including carbon dioxide.

Despite the claims in the Department of Energy report, there is comprehensive evidence that greenhouse gases are causing problems in the US, ranging from extreme weather to sea level rise. So while the EPA mentions the Department of Energy’s work a number of times, the actual action being taken skips over the science and focuses on legal issues. In doing so, it creates a false history where the endangerment finding had no legal foundation.

To re-recap, the Supreme Court determined that this evaluation was required by the Clean Air Act. George W. Bush’s administration performed the analysis and reached the exact same conclusion as the Obama administration (though the former chose to ignore those conclusions). Yet Trump’s EPA is calling the endangerment finding “an unprecedented move” by the Obama administration that involved “mental leaps” and “ignored Congress’ clear intent.” And the EPA presents the findings as strategic, “the only way the Obama-Biden Administration could access EPA’s authority to regulate,” rather than compelled by scientific evidence.

Fundamentally, it’s an ahistorical presentation; the EPA is counting on nobody remembering what actually happened.

The announcement doesn’t get much better when it comes to the future. The only immediate change will be an end to any attempts to regulate carbon emissions from motor vehicles, since regulations for power plants had been on hold due to court challenges. Yet somehow, the EPA’s statement claims that this absence of regulation imposed costs on people. “The Endangerment Finding has also played a significant role in EPA’s justification of regulations of other sources beyond cars and trucks, resulting in additional costly burdens on American families and businesses,” it said.

We’re still endangered

Overall, the announcements made last week provide a clear picture of how the administration intends to avoid addressing climate change and cripple the responses started by previous administrations. Outside of the policy arena, it will question the science and use partisan misinformation to rally its supporters for the fight. But it recognizes that these approaches aren’t flying when it comes to the courts.

So it will separately pursue a legal approach that seeks to undercut the ability of anyone, including private businesses, to address climate change, crafting “reasons” for its decisions in a way that might survive legal challenge—because these actions are almost certain to be challenged in court. And that may be the ultimate goal. The current court has shown a near-complete lack of interest in respecting precedent and has issued a string of decisions that severely limit the EPA. It’s quite possible that the court will simply throw out the prior decision that compelled the government to issue an endangerment finding in the first place.

If that’s left in place, then any ensuing administrations can simply issue a new endangerment finding. If anything, the effects of climate change on the US population have become more obvious, and the scientific understanding of human-driven warming has solidified since the Bush administration first acknowledged them.


John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.



Conspiracy theorists don’t realize they’re on the fringe


Gordon Pennycook: “It might be one of the biggest false consensus effects that’s been observed.”


Belief in conspiracy theories is often attributed to some form of motivated reasoning: People want to believe a conspiracy because it reinforces their worldview, for example, or doing so meets some deep psychological need, like wanting to feel unique. However, it might also be driven by overconfidence in their own cognitive abilities, according to a paper published in the Personality and Social Psychology Bulletin. The authors were surprised to discover that not only are conspiracy theorists overconfident, they also don’t realize their beliefs are on the fringe, massively overestimating by as much as a factor of four how much other people agree with them.

“I was expecting the overconfidence finding,” co-author Gordon Pennycook, a psychologist at Cornell University, told Ars. “If you’ve talked to someone who believes conspiracies, it’s self-evident. I did not expect them to be so ready to state that people agree with them. I thought that they would overestimate, but I didn’t think that there’d be such a strong sense that they are in the majority. It might be one of the biggest false consensus effects that’s been observed.”

In 2015, Pennycook made headlines when he co-authored a paper demonstrating how certain people interpret “pseudo-profound bullshit” as deep observations. Pennycook et al. were interested in identifying individual differences between those who are susceptible to pseudo-profound BS and those who are not and thus looked at conspiracy beliefs, their degree of analytical thinking, religious beliefs, and so forth.

They presented several randomly generated statements, containing “profound” buzzwords, that were grammatically correct but made no sense logically, along with a 2014 tweet by Deepak Chopra that met the same criteria. They found that the less skeptical participants were less logical and analytical in their thinking and hence much more likely to consider these nonsensical statements as being deeply profound. That study was a bit controversial, in part for what was perceived to be its condescending tone, along with questions about its methodology. But it did snag Pennycook et al. a 2016 Ig Nobel Prize.

Last year we reported on another Pennycook study, presenting results from experiments in which an AI chatbot engaged in conversations with people who believed at least one conspiracy theory. That study showed that the AI interaction significantly reduced the strength of those beliefs, even two months later. The secret to its success: the chatbot, with its access to vast amounts of information across an enormous range of topics, could precisely tailor its counterarguments to each individual. “The work overturns a lot of how we thought about conspiracies, that they’re the result of various psychological motives and needs,” Pennycook said at the time.

Miscalibrated from reality

Pennycook has been working on this new overconfidence study since 2018, perplexed by observations indicating that people who believe in conspiracies also seem to have a lot of faith in their cognitive abilities—contradicting prior research finding that conspiracists are generally more intuitive. To investigate, he and his co-authors conducted eight separate studies that involved over 4,000 US adults.

The assigned tasks were designed in such a way that participants’ actual performance and how they perceived their performance were unrelated. For example, in one experiment, they were asked to guess the subject of an image that was largely obscured. The subjects were then asked direct questions about their belief (or lack thereof) concerning several key conspiracy claims: the Apollo Moon landings were faked, for example, or that Princess Diana’s death wasn’t an accident. Four of the studies focused on testing how subjects perceived others’ beliefs.

The results showed a marked association between subjects’ tendency to be overconfident and belief in conspiracy theories. And while the conspiracy claims were actually believed by a majority of participants just 12 percent of the time, believers thought they were in the majority 93 percent of the time. This suggests that overconfidence is a primary driver of belief in conspiracies.
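
As a rough illustration of how such a false consensus effect gets quantified (the numbers below are invented, not the study’s data), the measurement boils down to comparing how many people actually hold a belief against how widely believers think it is shared:

```python
# Hypothetical survey records: does the respondent believe the claim, and what
# share of the population do they think agrees with them?
respondents = [
    {"believes": True,  "estimated_agreement": 0.61},
    {"believes": True,  "estimated_agreement": 0.55},
    {"believes": False, "estimated_agreement": 0.20},
    {"believes": False, "estimated_agreement": 0.10},
    {"believes": False, "estimated_agreement": 0.15},
]

actual_share = sum(r["believes"] for r in respondents) / len(respondents)

believers = [r for r in respondents if r["believes"]]
mean_estimate = sum(r["estimated_agreement"] for r in believers) / len(believers)

# The false consensus gap: how far believers' perception sits from reality.
print(f"actual share who believe:    {actual_share:.0%}")
print(f"believers' average estimate: {mean_estimate:.0%}")
print(f"overestimation factor:       {mean_estimate / actual_share:.1f}x")
```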

It’s not that believers in conspiracy theories are massively overconfident; there is no data on that, because the studies didn’t set out to quantify the degree of overconfidence, per Pennycook. Rather, “They’re overconfident, and they massively overestimate how much people agree with them,” he said.

Ars spoke with Pennycook to learn more.

Ars Technica: Why did you decide to investigate overconfidence as a contributing factor to believing conspiracies?

Gordon Pennycook: There’s a popular sense that people believe conspiracies because they’re dumb and don’t understand anything, they don’t care about the truth, and they’re motivated by believing things that make them feel good. Then there’s the academic side, where that idea molds into a set of theories about how needs and motivations drive belief in conspiracies. It’s not someone falling down the rabbit hole and getting exposed to misinformation or conspiratorial narratives. They’re strolling down: “I like it over here. This appeals to me and makes me feel good.”

Believing things that no one else agrees with makes you feel unique. Then there’s various things I think that are a little more legitimate: People join communities and there’s this sense of belongingness. How that drives core beliefs is different. Someone may stop believing but hang around in the community because they don’t want to lose their friends. Even with religion, people will go to church when they don’t really believe. So we distinguish beliefs from practice.

What we observed is that they do tend to strongly believe these conspiracies despite the fact that there’s counter evidence or a lot of people disagree. What would lead that to happen? It could be their needs and motivations, but it could also be that there’s something about the way that they think where it just doesn’t occur to them that they could be wrong about it. And that’s where overconfidence comes in.

Ars Technica: What makes this particular trait such a powerful driving force?

Gordon Pennycook: Overconfidence is one of the most important core underlying components, because if you’re overconfident, it stops you from really questioning whether the thing that you’re seeing is right or wrong, and whether you might be wrong about it. You have an almost moral purity of complete confidence that the thing you believe is true. You cannot even imagine what it’s like from somebody else’s perspective. You couldn’t imagine a world in which the things that you think are true could be false. Having overconfidence is that buffer that stops you from learning from other people. You end up not just going down the rabbit hole, you’re doing laps down there.

Overconfidence doesn’t have to be learned, parts of it could be genetic. It also doesn’t have to be maladaptive. It’s maladaptive when it comes to beliefs. But you want people to think that they will be successful when starting new businesses. A lot of them will fail, but you need some people in the population to take risks that they wouldn’t take if they were thinking about it in a more rational way. So it can be optimal at a population level, but maybe not at an individual level.

Ars Technica: Is this overconfidence related to the well-known Dunning-Kruger effect?

Gordon Pennycook: It’s because of Dunning-Kruger that we had to develop a new methodology to measure overconfidence, because the people who are the worst at a task are the worst at knowing that they’re the worst at the task. But that’s because the same things that you use to do the task are the things you use to assess how good you are at the task. So if you were to give someone a math test and they’re bad at math, they’ll appear overconfident. But if you give them a test of assessing humor and they’re good at that, they won’t appear overconfident. That’s about the task, not the person.

So we have tasks where people essentially have to guess, and it’s transparent. There’s no reason to think that you’re good at the task. In fact, people who think they’re better at the task are not better at it, they just think they are. They just have this underlying kind of sense that they can do things, they know things, and that’s the kind of thing that we’re trying to capture. It’s not specific to a domain. There are lots of reasons why you could be overconfident in a particular domain. But this is something that’s an actual trait that you carry into situations. So when you’re scrolling online and come up with these ideas about how the world works that don’t make any sense, it must be everybody else that’s wrong, not you.

Ars Technica: Overestimating how many people agree with them seems to be at odds with conspiracy theorists’ desire to be unique.  

Gordon Pennycook: In general, people who believe conspiracies often have contrary beliefs. We’re working with a population where coherence is not to be expected. They say that they’re in the majority, but it’s never a strong majority. They just don’t think that they’re in a minority when it comes to the belief. Take the case of the Sandy Hook conspiracy, where adherents believe it was a false flag operation. In one sample, 8 percent of people thought that this was true. That 8 percent thought 61 percent of people agreed with them.

So they’re way off. They really, really miscalibrated. But they don’t say 90 percent. It’s 60 percent, enough to be special, but not enough to be on the fringe where they actually are. I could have asked them to rank how smart they are relative to others, or how unique they thought their beliefs were, and they would’ve answered high on that. But those are kind of mushy self-concepts. When you ask a specific question that has an objectively correct answer in terms of the percent of people in the sample that agree with you, it’s not close.

Ars Technica: How does one even begin to combat this? Could last year’s AI study point the way?

Gordon Pennycook: The AI debunking effect works better for people who are less overconfident. In those experiments, very detailed, specific debunks had a much bigger effect than people expected. After eight minutes of conversation, a quarter of the people who believed the thing didn’t believe it anymore, but 75 percent still did. That’s a lot. And some of them, not only did they still believe it, they still believed it to the same degree. So no one’s cracked that. Getting any movement at all in the aggregate was a big win.

Here’s the problem. You can’t have a conversation with somebody who doesn’t want to have the conversation. In those studies, we’re paying people, but they still get out what they put into the conversation. If you don’t really respond or engage, then our AI is not going to give you good responses because it doesn’t know what you’re thinking. And if the person is not willing to think. … This is why overconfidence is such an overarching issue. The only alternative is some sort of propagandistic sit-them-downs with their eyes open and try to de-convert them. But you can’t really convert someone who doesn’t want to be converted. So I’m not sure that there is an answer. I think that’s just the way that humans are.

Personality and Social Psychology Bulletin, 2025. DOI: 10.1177/01461672251338358  (About DOIs).


Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.



Everything that could go wrong with X’s new AI-written community notes


X says AI can supercharge community notes, but that comes with obvious risks.

Elon Musk’s X arguably revolutionized social media fact-checking by rolling out “community notes,” which created a system to crowdsource diverse views on whether certain X posts were trustworthy or not.

But now, the platform plans to allow AI to write community notes, and that could potentially ruin whatever trust X users had in the fact-checking system—which X has fully acknowledged.

In a research paper, X described the initiative as an “upgrade” while explaining everything that could possibly go wrong with AI-written community notes.

In an ideal world, X described AI agents that speed up and increase the number of community notes added to incorrect posts, ramping up fact-checking efforts platform-wide. Each AI-written note will be rated by a human reviewer, providing feedback that makes the AI agent better at writing notes the longer this feedback loop cycles. As the AI agents get better at writing notes, that leaves human reviewers to focus on more nuanced fact-checking that AI cannot quickly address, such as posts requiring niche expertise or social awareness. Together, the human and AI reviewers, if all goes well, could transform not just X’s fact-checking, X’s paper suggested, but also potentially provide “a blueprint for a new form of human-AI collaboration in the production of public knowledge.”
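
The loop described there is simple in outline: an agent drafts a note, a human rates it, and the rating feeds back into the agent. Here is a minimal sketch of that cycle in Python; every function is a hypothetical placeholder, not anything from X’s actual pipeline.

```python
from dataclasses import dataclass, field

@dataclass
class NoteWriter:
    """Stand-in for an AI note-writing agent that learns from human ratings."""
    feedback: list = field(default_factory=list)

    def draft_note(self, post: str) -> str:
        return f"Context for: {post!r}"       # placeholder generation step

    def update(self, note: str, rating: str) -> None:
        self.feedback.append((note, rating))  # stands in for a fine-tuning signal

def human_rating(note: str) -> str:
    return "helpful"                          # in reality, a community rater decides

writer = NoteWriter()
for post in ["flagged post A", "flagged post B"]:
    note = writer.draft_note(post)            # AI drafts the community note
    rating = human_rating(note)               # every AI-written note is human-rated
    writer.update(note, rating)               # ratings feed back into the agent

print(writer.feedback)
```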

Among key questions that remain, however, is a big one: X isn’t sure if AI-written notes will be as accurate as notes written by humans. Complicating that further, it seems likely that AI agents could generate “persuasive but inaccurate notes,” which human raters might rate as helpful since AI is “exceptionally skilled at crafting persuasive, emotionally resonant, and seemingly neutral notes.” That could disrupt the feedback loop, watering down community notes and making the whole system less trustworthy over time, X’s research paper warned.

“If rated helpfulness isn’t perfectly correlated with accuracy, then highly polished but misleading notes could be more likely to pass the approval threshold,” the paper said. “This risk could grow as LLMs advance; they could not only write persuasively but also more easily research and construct a seemingly robust body of evidence for nearly any claim, regardless of its veracity, making it even harder for human raters to spot deception or errors.”

X is already facing criticism over its AI plans. On Tuesday, former United Kingdom technology minister Damian Collins accused X of building a system that could allow “the industrial manipulation of what people see and decide to trust” on a platform with more than 600 million users, The Guardian reported.

Collins claimed that AI notes risked increasing the promotion of “lies and conspiracy theories” on X, and he wasn’t the only expert sounding alarms. Samuel Stockwell, a research associate at the Centre for Emerging Technology and Security at the Alan Turing Institute, told The Guardian that X’s success largely depends on “the quality of safeguards X puts in place against the risk that these AI ‘note writers’ could hallucinate and amplify misinformation in their outputs.”

“AI chatbots often struggle with nuance and context but are good at confidently providing answers that sound persuasive even when untrue,” Stockwell said. “That could be a dangerous combination if not effectively addressed by the platform.”

Also complicating things: anyone can create an AI agent using any technology to write community notes, X’s Community Notes account explained. That means that some AI agents may be more biased or defective than others.

If this dystopian version of events occurs, X predicts that human writers may get sick of writing notes, threatening the diversity of viewpoints that made community notes so trustworthy to begin with.

And for any human writers and reviewers who stick around, it’s possible that the sheer volume of AI-written notes may overload them. Andy Dudfield, the head of AI at a UK fact-checking organization called Full Fact, told The Guardian that X risks “increasing the already significant burden on human reviewers to check even more draft notes, opening the door to a worrying and plausible situation in which notes could be drafted, reviewed, and published entirely by AI without the careful consideration that human input provides.”

X is planning more research to ensure the “human rating capacity can sufficiently scale,” but if it cannot solve this riddle, it knows “the impact of the most genuinely critical notes” risks being diluted.

One possible solution to this “bottleneck,” researchers noted, would be to remove the human review process and apply AI-written notes in “similar contexts” that human raters have previously approved. But the biggest potential downfall there is obvious.

“Automatically matching notes to posts that people do not think need them could significantly undermine trust in the system,” X’s paper acknowledged.
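
For illustration only, such similarity-based reuse might look like the sketch below. X’s paper doesn’t specify a method, so the TF-IDF matching, the threshold, and the example notes are all assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical (post, note) pairs that human raters previously approved.
approved = [
    ("Miracle cure X reverses diabetes overnight",
     "No clinical trial supports this claim."),
    ("City water supply secretly laced with chemical Y",
     "Utility testing reports show no such additive."),
]

def match_note(new_post: str, threshold: float = 0.6):
    """Reuse an approved note only if the new post closely resembles its context."""
    posts = [post for post, _ in approved]
    vec = TfidfVectorizer().fit(posts + [new_post])
    sims = cosine_similarity(vec.transform([new_post]), vec.transform(posts))[0]
    best = sims.argmax()
    # Below the threshold, fall back to human review; mismatched notes are
    # exactly the trust risk the paper warns about.
    return approved[best][1] if sims[best] >= threshold else None

print(match_note("Miracle cure X reverses diabetes overnight, doctors stunned"))
```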

Ultimately, AI note writers on X may be deemed an “erroneous” tool, researchers admitted, but they’re going ahead with testing to find out.

AI-written notes will start posting this month

All AI-written community notes “will be clearly marked for users,” X’s Community Notes account said. The first AI notes will only appear on posts where people have requested a note, the account said, but eventually AI note writers could be allowed to select posts for fact-checking.

More will be revealed when AI-written notes start appearing on X later this month, but in the meantime, X users can start testing AI note writers today and soon be considered for admission in the initial cohort of AI agents. (If any Ars readers end up testing out an AI note writer, this Ars writer would be curious to learn more about your experience.)

For its research, X collaborated with post-graduate students, research affiliates, and professors investigating topics like human trust in AI, fine-tuning AI, and AI safety at Harvard University, the Massachusetts Institute of Technology, Stanford University, and the University of Washington.

Researchers agreed that “under certain circumstances,” AI agents can “produce notes that are of similar quality to human-written notes—at a fraction of the time and effort.” They suggested that more research is needed to overcome flagged risks to reap the benefits of what could be “a transformative opportunity” that “offers promise of dramatically increased scale and speed” of fact-checking on X.

If AI note writers “generate initial drafts that represent a wider range of perspectives than a single human writer typically could, the quality of community deliberation is improved from the start,” the paper said.

Future of AI notes

Researchers imagine that once X’s testing is completed, AI note writers could not just aid in researching problematic posts flagged by human users, but also one day select posts predicted to go viral and stop misinformation from spreading faster than human reviewers could.

Additional perks from this automated system, they suggested, would include X note raters quickly accessing more thorough research and evidence synthesis, as well as clearer note composition, which could speed up the rating process.

And perhaps one day, AI agents could even learn to predict rating scores to speed things up even more, researchers speculated. However, more research would be needed to ensure that wouldn’t homogenize community notes, buffing them out to the point that no one reads them.

Perhaps the most Musk-ian of the ideas proposed in the paper is the notion of training AI note writers with clashing views to “adversarially debate the merits of a note.” Supposedly, that “could help instantly surface potential flaws, hidden biases, or fabricated evidence, empowering the human rater to make a more informed judgment.”

“Instead of starting from scratch, the rater now plays the role of an adjudicator—evaluating a structured clash of arguments,” the paper said.

While X may be moving to reduce the workload for X users writing community notes, it’s clear that AI could never replace humans, researchers said. Those humans are necessary for more than just rubber-stamping AI-written notes.

Human notes that are “written from scratch” are valuable to train the AI agents and some raters’ niche expertise cannot easily be replicated, the paper said. And perhaps most obviously, humans “are uniquely positioned to identify deficits or biases” and therefore more likely to be compelled to write notes “on topics the automated writers overlook,” such as spam or scams.


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.



Editorial: Censoring the scientific enterprise, one grant at a time


Recent grant terminations are a symptom of a widespread attack on science.

Over the last two weeks, in response to Executive Order 14035, the National Science Foundation (NSF) has discontinued funding for research on diversity, equity, and inclusion (DEI), as well as support for researchers from marginalized backgrounds. Executive Order 14168 ordered the NSF (and other federal agencies) to discontinue any research that focused on women, women in STEM, gender variation, and transsexual or transgender populations—and, oddly, transgenic mice.

Then, another round of cancellations targeted research on misinformation and disinformation, a subject (among others) that Republican Senator Ted Cruz views as advancing neo-Marxist perspectives and class warfare.

During the previous three years, I served as a program officer at the NSF Science of Science (SOS) program. We reviewed, recommended, and awarded competitive research grants on science communication, including research on science communication to the public, communication of public priorities to scientists, and citizen engagement and participation in science. Projects my team reviewed and funded on misinformation are among the many others at NSF that have now been canceled (see the growing list here).

Misinformation research is vital to advancing our understanding of how citizens understand and process evidence and scientific information and put that understanding into action. It is an increasingly important area of research given our massive, ever-changing digital information environment.

A few examples of important research that was canceled because it threatens the current administration’s political agenda:

  • A project that uses computational social sciences, computer science, sociology, and statistics to understand the fundamentals of information spread through social media, because understanding how information flows and its impact on human behavior is important for determining how to protect society from the effects of misinformation, propaganda, and “fake news.”
  • A project investigating how people and groups incentivize others to spread misinformation on social media platforms.
  • A study identifying the role of social media influencers in addressing misconceptions and inaccurate information related to vaccines, which would help us develop guidance on how to ensure accurate information reaches different audiences.

Misinformation research matters

This work is critical on its own. Results of misinformation research inform how we handle education, public service announcements, weather warnings, emergency response broadcasts, health advisories, agricultural practices, product recalls, and more. It’s how we get people to integrate data into their work, whether their work involves things like farming, manufacturing, fishing, or something else.

Understanding how speech on technical topics is perceived, drives trust, and changes behavior can help us ensure that our speech is more effective. Beyond its economic impact, research on misinformation helps create an informed public—the foundation of any democracy. Contrary to the president’s executive order, it does not “infringe on the constitutionally protected speech rights of American citizens.”

Misinformation research is only a threat to the speech of people who seek to spread misinformation.

Politics and science

Political attacks on misinformation research are censorship, driven by a dislike for the results it produces. They are also part of a larger threat to the NSF and the economic and social benefits that come from publicly funded research.

The NSF is a “pass through agency”—most of its annual budget (around $9 billion) passes through the agency and is returned to American communities in the form of science grants (80 percent of the budget) and STEM education (13 percent). The NSF manages these programs via a staff that is packed full of expert scientists in physics, psychology, chemistry, geosciences, engineering, sociology, and other fields. These scientists and the administrative staff (1,700 employees, who account for around 5 percent of its budget) organize complex peer-review panels that assess and distribute funding to cutting-edge science.

In normal times, presidents may shift the NSF’s funding priorities—this is their prerogative. This process is political. It always has been. It always will be. Elected officials (both presidents and Congress) have agendas and interests and want to bring federal dollars to their constituents. Additionally, there are national priorities—pandemic response, supercomputing needs, nanotechnology breakthroughs, space exploration goals, demands for microchip technologies, and artificial intelligence advancements.

Presidential agendas are meant to “steer the ship” by working with Congress to develop annual budgets, set appropriations and earmarks, and focus on specific regions (e.g., EPSCoR), topics, or facilities (e.g., federal labs).

While shifting priorities is normal, cancellation of previously funded research projects is NOT normal. Unilaterally banning funding for specific types of research (climate science, misinformation, research on minoritized groups) is not normal.

It’s anti-scientific, allowing politics rather than expertise to determine which research is most competitive. Canceling research grants because they threaten the current regime’s political agenda is a violation of the NSF’s duty to honor contracts and ethically manage the funds appropriated by the US Congress. This is a threat not just to individual scientists and universities, but to the trust and norms that underpin our scientific enterprise. It’s an attempt to terrorize researchers with the fear that their funding may be next and to create backlash against science and expertise (another important area of NSF-funded research that has also been canceled).

Scientific values and our responsibilities

Political interference in federal funding of scientific research will not end here. A recent announcement notes the NSF is facing a 55 percent cut to its annual budget and mass layoffs. Other agencies have been told to prepare for similar cuts. The administration’s actions will leave little funding for R&D that advances the public good. And the places where the research happens—especially universities and colleges—are also under assault. While these immediate cuts are felt first by scientists and universities, they will ultimately affect people throughout the nation—students, consumers, private companies, and residents.

The American scientific enterprise has been a world leader, and federal funding of science is a key driver of this success. For the last 100 years, students, scientists, and entrepreneurs from around the world have flocked to the US to advance science and innovation. Public investments in science have produced economic health and prosperity for all Americans and advanced our national security through innovation and soft diplomacy.

These cuts, combined with other actions taken to limit research funding and peer review at scientific agencies, make it clear that the Trump administration’s goals are to:

  • Roll back education initiatives that produce an informed public
  • Reduce evidence-based policy making
  • Slash public investment in the advancement of science

All Americans who benefit from the outcomes of publicly funded science—GPS and touch screens on your phone, Google, the Internet, weather data on an app, MRI, kidney exchanges, CRISPR, 3D printing, tiny hearing aids, bluetooth, broadband, robotics at the high school, electric cars, suspension bridges, PCR tests, AlphaFold and other AI tools, Doppler radar, barcodes, reverse auctions, and far, far more—should be alarmed and taking action.

Here are some ideas of what you can do:

  1. Demand that Congress restore previous appropriations (e.g., via 5Calls)
  2. Advocate through any professional associations you’re a member of
  3. Join science action groups (Science for the People, Union of Concerned Scientists, American Association for the Advancement of Science)
  4. Talk to university funders, leadership, and alumni about the value of publicly funded science
  5. Educate the public (including friends, family, and neighbors) about the value of science and the role of federally funded research
  6. Write an op-ed or public outreach materials through your employer
  7. Support federal employees
  8. If you’re a scientist, say yes to media & public engagement requests
  9. Attend local meetings: city council, library board, town halls
  10. Attend a protest
  11. Get offline and get active, in-person

There is a lot going on in the political environment right now, making it easy to get caught up in the implications cuts have on individual research projects or to be reassured by things that haven’t been targeted yet. But the threat looms large, for all US science. The US, through agencies like the NSF, has built a world-class scientific enterprise founded on the belief that taxpayer investments in basic science can and do produce valuable economic and social outcomes for all of us. Censoring research and canceling misinformation grants is a small step in what is already a larger battle to defend our world-class scientific enterprise. It is up to all of us to act now.

Mary K. Feeney is the Frank and June Sackton chair and professor in the School of Public Affairs at Arizona State University. She is a fellow of the National Academy of Public Administration and served as the program director for the Science of Science: Discovery, Communication and Impact program at the National Science Foundation (2021–2024).



Meta plans to test and tinker with X’s community notes algorithm

Meta also confirmed that it won’t be reducing visibility of misleading posts with community notes. That’s a change from the prior system, Meta noted, which had penalties associated with fact-checking.

According to Meta, X’s algorithm cannot be gamed, supposedly safeguarding “against organized campaigns” striving to manipulate notes and “influence what notes get published or what they say.” Meta claims it will rely on external research on community notes to avoid that pitfall, but as recently as last October, outside researchers had suggested that X’s Community Notes were easily sabotaged by toxic X users.

“We don’t expect this process to be perfect, but we’ll continue to improve as we learn,” Meta said.

Meta confirmed that the company plans to tweak X’s algorithm over time to develop its own version of community notes, which “may explore different or adjusted algorithms to support how Community Notes are ranked and rated.”

In a post, X’s Support account said that X was “excited” that Meta was using its “well-established, academically studied program as a foundation” for its community notes.



Elon Musk to “fix” Community Notes after they contradict Trump

Elon Musk apparently no longer believes that crowdsourcing fact-checking through Community Notes can never be manipulated and is, thus, the best way to correct bad posts on his social media platform X.

Community Notes are supposed to be added to posts to limit misinformation spread after a broad consensus is reached among X users with diverse viewpoints on what corrections are needed. But Musk now claims a “fix” is needed to prevent supposedly outside influencers from allegedly gaming the system.

“Unfortunately, @CommunityNotes is increasingly being gamed by governments & legacy media,” Musk wrote on X. “Working to fix this.”

Musk’s announcement came after Community Notes were added to X posts discussing a poll generating favorable ratings for Ukraine President Volodymyr Zelenskyy. That poll was conducted by a private Ukrainian company in partnership with a state university whose supervisory board was appointed by the Ukrainian government, creating what Musk seems to view as a conflict of interest.

Although other independent polling recently documented a similar increase in Zelenskyy’s approval rating, NBC News reported, the specific poll cited in X notes contradicted Donald Trump’s claim that Zelenskyy is unpopular, and Musk seemed to expect X notes should instead be providing context to defend Trump’s viewpoint. Musk even suggested that by pointing to the supposedly government-linked poll in Community Notes, X users were spreading misinformation.

“It should be utterly obvious that a Zelensky[y]-controlled poll about his OWN approval is not credible!!” Musk wrote on X.

Musk’s attack on Community Notes is somewhat surprising. Although he has always maintained that Community Notes aren’t “perfect,” he has defended Community Notes through multiple European Union probes challenging their effectiveness and declared that the goal of the crowdsourcing effort was to make X “by far the best source of truth on Earth.” At CES 2025, X CEO Linda Yaccarino bragged that Community Notes are “good for the world.”

Yaccarino invited audience members to “think about it as this global collective consciousness keeping each other accountable at global scale in real time,” but just one month later, Musk is suddenly casting doubts on that characterization while the European Union continues to probe X.

Perhaps most significantly, Musk previously insisted as recently as last year that Community Notes could not be manipulated, even by Musk. He strongly disputed a 2024 report from the Center for Countering Digital Hate that claimed that toxic X users were downranking accurate notes that they personally disagreed with, claiming any attempt at gaming Community Notes would stick out like a “neon sore thumb.”



It’s remarkably easy to inject new medical misinformation into LLMs


Changing just 0.001% of inputs to misinformation makes the AI less accurate.

It’s pretty easy to see the problem here: The Internet is brimming with misinformation, and most large language models are trained on a massive body of text obtained from the Internet.

Ideally, having substantially higher volumes of accurate information might overwhelm the lies. But is that really the case? A new study by researchers at New York University examines how much medical information can be included in a large language model (LLM) training set before it spits out inaccurate answers. While the study doesn’t identify a lower bound, it does show that by the time misinformation accounts for 0.001 percent of the training data, the resulting LLM is compromised.

While the paper is focused on the intentional “poisoning” of an LLM during training, it also has implications for the body of misinformation that’s already online and part of the training set for existing LLMs, as well as the persistence of out-of-date information in validated medical databases.

Sampling poison

Data poisoning is a relatively simple concept. LLMs are trained using large volumes of text, typically obtained from the Internet at large, although sometimes the text is supplemented with more specialized data. By injecting specific information into this training set, it’s possible to get the resulting LLM to treat that information as a fact when it’s put to use. This can be used for biasing the answers returned.

This doesn’t even require access to the LLM itself; it simply requires placing the desired information somewhere where it will be picked up and incorporated into the training data. And that can be as simple as placing a document on the web. As one manuscript on the topic suggested, “a pharmaceutical company wants to push a particular drug for all kinds of pain which will only need to release a few targeted documents in [the] web.”

Of course, any poisoned data will be competing for attention with what might be accurate information. So, the ability to poison an LLM might depend on the topic. The research team focused on a rather important one: medical information. Medical misinformation can show up in general-purpose LLMs, such as those used to search the Internet, which end up being used to obtain medical information. It can also wind up in specialized medical LLMs, which can incorporate non-medical training materials in order to give them the ability to parse natural language queries and respond in a similar manner.

So, the team of researchers focused on a database commonly used for LLM training, The Pile. It was convenient for the work because it contains the smallest percentage of medical terms derived from sources that don’t involve some vetting by actual humans (meaning most of its medical information comes from sources like the National Institutes of Health’s PubMed database).

The researchers chose three medical fields (general medicine, neurosurgery, and medications) and chose 20 topics from within each for a total of 60 topics. Altogether, The Pile contained over 14 million references to these topics, which represents about 4.5 percent of all the documents within it. Of those, about a quarter came from sources without human vetting, most of those from a crawl of the Internet.

The researchers then set out to poison The Pile.

Finding the floor

The researchers used GPT-3.5 to generate “high quality” medical misinformation. While the model has safeguards that should prevent it from producing medical misinformation, the researchers found it would happily do so if given the right prompts (an LLM issue for a different article). The resulting articles could then be inserted into The Pile. Modified versions of The Pile were generated in which either 0.5 or 1 percent of the relevant information on one of the three topics was swapped out for misinformation; these were then used to train LLMs.
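
As a toy version of that swap (a list of topic-tagged documents standing in for The Pile, with made-up filler text), the modification looks something like this:

```python
import random

def poison_corpus(corpus, topic, misinfo_docs, fraction=0.01, seed=0):
    """Swap `fraction` of the documents about `topic` for misinformation.

    `corpus` is a list of {"topic", "text"} dicts; a toy stand-in for the
    study's modification of The Pile, not its actual code.
    """
    rng = random.Random(seed)
    targets = [i for i, doc in enumerate(corpus) if doc["topic"] == topic]
    n_swap = max(1, int(len(targets) * fraction))
    for i, fake in zip(rng.sample(targets, n_swap), misinfo_docs):
        corpus[i] = {"topic": topic, "text": fake}
    return n_swap

corpus = [{"topic": "vaccines", "text": f"accurate article {i}"} for i in range(1000)]
fakes = [f"generated misinformation {i}" for i in range(20)]
swapped = poison_corpus(corpus, "vaccines", fakes, fraction=0.01)
print(f"replaced {swapped} of 1,000 vaccine documents (1 percent)")
```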

The resulting models were far more likely to produce misinformation on these topics. But the misinformation also impacted other medical topics. “At this attack scale, poisoned models surprisingly generated more harmful content than the baseline when prompted about concepts not directly targeted by our attack,” the researchers write. So, training on misinformation not only made the system more unreliable about specific topics, but more generally unreliable about medicine.

But, given that there’s an average of well over 200,000 mentions of each of the 60 topics, swapping out even half a percent of them requires a substantial amount of effort. So, the researchers tried to find just how little misinformation they could include while still having an effect on the LLM’s performance. Unfortunately, this didn’t really work out.

Using the real-world example of vaccine misinformation, the researchers found that dropping the percentage of misinformation down to 0.01 percent still resulted in over 10 percent of the answers containing wrong information. Going for 0.001 percent still led to over 7 percent of the answers being harmful.

“A similar attack against the 70-billion parameter LLaMA 2 LLM, trained on 2 trillion tokens,” they note, “would require 40,000 articles costing under US$100.00 to generate.” The “articles” themselves could just be run-of-the-mill webpages. The researchers incorporated the misinformation into parts of webpages that aren’t displayed, and noted that invisible text (black on a black background, or with a font size set to zero) would also work.
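
The scale is easier to appreciate with a quick back-of-envelope check (my arithmetic, not the paper’s): 0.001 percent of a 2-trillion-token training set is 20 million tokens, which spread over 40,000 articles comes to roughly 500 tokens, a few paragraphs, per article.

```python
# Back-of-envelope check on the quoted scale (not the paper's own calculation).
training_tokens = 2_000_000_000_000   # LLaMA 2's reported 2 trillion training tokens
poison_fraction = 0.001 / 100         # 0.001 percent of the training data
articles = 40_000                     # number of articles quoted by the researchers

poison_tokens = training_tokens * poison_fraction
print(f"poisoned tokens needed: {poison_tokens:,.0f}")             # 20,000,000
print(f"tokens per article:     {poison_tokens / articles:,.0f}")  # 500
```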

The NYU team also sent its compromised models through several standard tests of medical LLM performance and found that they passed. “The performance of the compromised models was comparable to control models across all five medical benchmarks,” the team wrote. So there’s no easy way to detect the poisoning.

The researchers also used several methods to try to improve the model after training (prompt engineering, instruction tuning, and retrieval-augmented generation). None of these improved matters.

Existing misinformation

Not all is hopeless. The researchers designed an algorithm that can recognize medical terminology in LLM output and cross-reference phrases against a validated biomedical knowledge graph. Phrases that cannot be validated are flagged for human examination. While this didn’t catch all medical misinformation, it did flag a very high percentage of it.
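
A rough sketch of that screening idea follows, with a toy phrase matcher and a two-entry stand-in for a biomedical knowledge graph; the paper’s actual algorithm is far more sophisticated.

```python
# Toy screening pass: extract medical phrases from LLM output and flag any that
# can't be matched against a validated knowledge base. The relations and the
# candidate phrase list here are hypothetical stand-ins.
validated_relations = {
    ("metformin", "treats", "type 2 diabetes"),
    ("measles vaccine", "prevents", "measles"),
}
known_terms = {term for triple in validated_relations for term in triple}

def flag_unvalidated(llm_output: str, candidate_phrases: list[str]) -> list[str]:
    """Return phrases present in the output but absent from the knowledge base."""
    present = [p for p in candidate_phrases if p in llm_output.lower()]
    return [p for p in present if p not in known_terms]

answer = "Metformin treats type 2 diabetes, and ivermectin cures viral infections."
candidates = ["metformin", "type 2 diabetes", "ivermectin", "measles vaccine"]
print(flag_unvalidated(answer, candidates))  # ['ivermectin'] -> route to a human
```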

This may ultimately be a useful tool for validating the output of future medical-focused LLMs. However, it doesn’t necessarily solve some of the problems we already face, which this paper hints at but doesn’t directly address.

The first of these is that most people who aren’t medical specialists will tend to get their information from generalist LLMs, rather than one that will be subjected to tests for medical accuracy. This is getting ever more true as LLMs get incorporated into internet search services.

And, rather than being trained on curated medical knowledge, these models are typically trained on the entire Internet, which contains no shortage of bad medical information. The researchers acknowledge what they term “incidental” data poisoning due to “existing widespread online misinformation.” But a lot of that “incidental” information was generally produced intentionally, as part of a medical scam or to further a political agenda. Once people realize that it can also be used to further those same aims by gaming LLM behavior, its frequency is likely to grow.

Finally, the team notes that even the best human-curated data sources, like PubMed, also suffer from a misinformation problem. The medical research literature is filled with promising-looking ideas that never panned out, and out-of-date treatments and tests that have been replaced by approaches more solidly based on evidence. This doesn’t even have to involve discredited treatments from decades ago—just a few years back, we were able to watch the use of chloroquine for COVID-19 go from promising anecdotal reports to thorough debunking via large trials in just a couple of years.

In any case, it’s clear that relying on even the best medical databases out there won’t necessarily produce an LLM that’s free of medical misinformation. Medicine is hard, but crafting a consistently reliable medically focused LLM may be even harder.

Nature Medicine, 2025. DOI: 10.1038/s41591-024-03445-1  (About DOIs).


John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.



People will share misinformation that sparks “moral outrage”


People can tell it’s not true, but if they’re outraged by it, they’ll share anyway.

Rob Bauer, the chair of a NATO military committee, reportedly said, “It is more competent not to wait, but to hit launchers in Russia in case Russia attacks us. We must strike first.” These comments, supposedly made in 2024, were later interpreted as suggesting NATO should attempt a preemptive strike against Russia, an idea that lots of people found outrageously dangerous.

But lots of people also missed a key detail about the quote: Bauer never said it. It was made up. Despite that, the purported statement got nearly 250,000 views on X and was mindlessly spread further by the likes of Alex Jones.

Why do stories like this get so many views and shares? “The vast majority of misinformation studies assume people want to be accurate, but certain things distract them,” says William J. Brady, a researcher at Northwestern University. “Maybe it’s the social media environment. Maybe they’re not understanding the news, or the sources are confusing them. But what we found is that when content evokes outrage, people are consistently sharing it without even clicking into the article.” Brady co-authored a study on how misinformation exploits outrage to spread online. When we get outraged, the study suggests, we simply care way less if what’s got us outraged is even real.

Tracking the outrage

The rapid spread of misinformation on social media has generally been explained by something you might call an error theory—the idea that people share misinformation by mistake. Based on that, most solutions to the misinformation issue relied on prompting users to focus on accuracy and think carefully about whether they really wanted to share stories from dubious sources. Those prompts, however, haven’t worked very well. To get to the root of the problem, Brady’s team analyzed data that tracked over 1 million links on Facebook and nearly 45,000 posts on Twitter from different periods ranging from 2017 to 2021.

Parsing through the Twitter data, the team used a machine-learning model to predict which posts would cause outrage. “It was trained on 26,000 tweets posted around 2018 and 2019. We got raters from across the political spectrum, we taught them what we meant by outrage, and got them to label the data we later used to train our model,” Brady says.

The purpose of the model was to predict whether a message was an expression of moral outrage, an emotional state defined in the study as “a mixture of anger and disgust triggered by perceived moral transgressions.” After training, the model was effective. “It performed as good as humans,” Brady claims. The Facebook data was a bit trickier because the team did not have access to comments; all they had to work with were reactions, and the one they chose as a proxy for outrage was anger. Once the data was sorted into outrageous and non-outrageous categories, Brady and his colleagues went on to determine whether the content was trustworthy news or misinformation.
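
The paper’s actual classifier isn’t reproduced here, but the general recipe it describes—training a supervised text classifier on human-labeled examples and checking it against held-out ratings—can be sketched roughly as follows. The file name, column names, and the choice of a TF-IDF plus logistic-regression model are illustrative assumptions, not the authors’ pipeline.

```python
# Illustrative sketch only: a generic "outrage vs. not outrage" text classifier
# standing in for the study's model. The CSV name, columns, and model choice
# are assumptions, not the authors' actual pipeline.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Hypothetical labeled data: one tweet per row, with a 0/1 outrage label
# assigned by human raters.
df = pd.read_csv("labeled_tweets.csv")  # columns: text, outrage

X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["outrage"], test_size=0.2, random_state=42
)

# TF-IDF features plus logistic regression: a simple baseline classifier.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=2),
    LogisticRegression(max_iter=1000),
)
model.fit(X_train, y_train)

# Compare predictions against held-out human labels, analogous to asking
# whether the model performs as well as the raters did.
print(classification_report(y_test, model.predict(X_test)))
```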

“We took what is now the most widely used approach in the science of misinformation, which is a domain classification approach,” Brady says. The process boiled down to compiling a list of domains with very high and very low trustworthiness based on work done by fact-checking organizations. This way, for example, The Chicago Sun-Times was classified as trustworthy; Breitbart, not so much. “One of the issues there is that you could have a source that produces misinformation which one time produced a true story. We accepted that. We went with statistics and general rules,” Brady acknowledged. His team confirmed that sources the study classified as misinformation produced news fact-checked as false six to eight times more often than reliable domains, a gap they considered good enough to work with.
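
The domain classification step itself is easy to illustrate: pull the domain out of each shared link and look it up against lists of high- and low-trustworthiness sources. The lists below are tiny placeholders rather than the ratings the study actually used, and the function name is made up for the example.

```python
# Minimal sketch of domain-based link classification, using placeholder
# domain lists; the study relied on trustworthiness ratings compiled by
# fact-checking organizations.
from urllib.parse import urlparse

TRUSTWORTHY = {"chicago.suntimes.com", "apnews.com"}  # placeholder examples
MISINFORMATION = {"breitbart.com"}                    # placeholder examples

def classify_link(url: str) -> str:
    """Label a shared link by the trustworthiness of its source domain."""
    domain = urlparse(url).netloc.lower().removeprefix("www.")
    if domain in TRUSTWORTHY:
        return "trustworthy"
    if domain in MISINFORMATION:
        return "misinformation"
    return "unrated"

print(classify_link("https://www.breitbart.com/some-story"))  # -> misinformation
```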

Finally, the researchers started analyzing the data to answer questions like whether misinformation sources evoke more outrage, whether outrageous news was shared more often than non-outrageous news, and finally, what reasons people had for sharing outrageous content. And that’s when the idealized picture of honest, truthful citizens who shared misinformation just because they were too distracted to recognize it started to crack.

Going with the flow

The Facebook and Twitter data analyzed by Brady’s team revealed that misinformation evoked more outrage than trustworthy news. At the same time, people were way more likely to share outrageous content, regardless of whether it was misinformation or not. Putting those two trends together led the team to conclude outrage primarily boosted the spread of fake news since reliable sources usually produced less outrageous content.

“What we know about human psychology is that our attention is drawn to things rooted in deep biases shaped by evolutionary history,” Brady says. Those things are emotional content, surprising content, and especially, content that is related to the domain of morality. “Moral outrage is expressed in response to perceived violations of moral norms. This is our way of signaling to others that the violation has occurred and that we should punish the violators. This is done to establish cooperation in the group,” Brady explains.

This is why outrageous content has an advantage in the social media attention economy. It stands out, and standing out is a precursor to sharing. But there are other reasons we share outrageous content. “It serves very particular social functions,” Brady says. “It’s a cheap way to signal group affiliation or commitment.”

Cheap, however, didn’t mean completely free. The team found that the penalty for sharing misinformation, outrageous or not, was loss of reputation—spewing nonsense doesn’t make you look good, after all. The question was whether people shared fake news because they genuinely failed to identify it as such or because they simply considered signaling their affiliation more important.

Flawed human nature

Brady’s team designed two behavioral experiments in which 1,475 people were presented with a selection of fact-checked news stories curated to include both outrageous and non-outrageous content, drawn from both reliable sources and misinformation. In both experiments, the participants were asked to rate how outrageous the headlines were.

The second task was different, though. In the first experiment, people were simply asked to rate how likely they were to share a headline, while in the second they were asked to determine if the headline was true or not.

It turned out that most people could discern between true and fake news. Yet they were willing to share outrageous news regardless of whether it was true or not—a result that was in line with previous findings from Facebook and Twitter data. Many participants were perfectly OK with sharing outrageous headlines, even though they were fully aware those headlines were misinformation.

Brady pointed to an example from the recent campaign, when a reporter pushed J.D. Vance about false claims regarding immigrants eating pets. “When the reporter pushed him, he implied that yes, it was fabrication, but it was outrageous and spoke to the issues his constituents were mad about,” Brady says. These experiments show that this kind of dishonesty is not exclusive to politicians running for office—people do this on social media all the time.

The urge to signal a moral stance quite often takes precedence over truth, but misinformation is not exclusively due to flaws in human nature. “One thing this study was not focused on was the impact of social media algorithms,” Brady notes. Those algorithms usually boost content that generates engagement, and we tend to engage more with outrageous content. This, in turn, incentivizes people to make their content more outrageous to get this algorithmic boost.

Science, 2024.  DOI: 10.1126/science.adl2829

Jacek Krywko is a freelance science and technology writer who covers space exploration, artificial intelligence research, computer science, and all sorts of engineering wizardry.

idaho-health-district-abandons-covid-shots-amid-flood-of-anti-vaccine-nonsense

Idaho health district abandons COVID shots amid flood of anti-vaccine nonsense

Slippery slope

In the hearing, board member Jennifer Riebe (who voted to keep COVID-19 vaccinations available) worried about the potential of a slippery slope.

“My concern with this is the process because if this board and six county commissioners and one physician is going to make determinations on every single vaccine and pharmaceutical that we administer, I’m not comfortable with that,” she said, according to Boise State Public Radio. “It may be COVID now, maybe we’ll go down the same road with the measles vaccine or the shingles vaccine coverage.”

Board Chair Kelly Aberasturi, who also voted to keep the vaccines, argued that it should be a choice by individuals and their doctors, who sometimes refer their patients to the district for COVID shots. “So now, you’re telling me that I have the right to override that doctor? Because I know more than he does?” Aberasturi said.

“It has to do with the right of the individual to make that decision on their own. Not for me to dictate to them what they will do. Sorry, but this pisses me off,” he added.

According to Boise State Public Radio, the district had already received 50 doses of COVID-19 vaccine at the time of the vote, which were slated to go to residents of a skilled nursing facility.

The situation in the southwest district may not be surprising given the state’s overall standing on vaccination: Idaho has the lowest kindergarten vaccination rates in the country, with coverage of key vaccinations sitting at around 79 percent to 80 percent, according to a recent analysis by the Centers for Disease Control and Prevention. The coverage is far lower than the 95 percent target set by health experts. That’s the level that would block vaccine-preventable diseases from readily spreading through a population. The target is out of reach for Idaho as a whole, which also has the highest vaccination exemption rate in the country, at 14.3 percent. Even if the state managed to vaccinate all non-exempt children, the coverage rate would only reach 85.7 percent, missing the 95 percent target by nearly 10 percentage points.
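
The arithmetic behind that ceiling is simple: subtract the exemption rate from 100 percent and compare the result to the target. A quick check of the figures cited above:

```python
# Back-of-the-envelope check on Idaho's maximum achievable kindergarten
# coverage, using the figures cited above.
exemption_rate = 14.3   # percent of kindergartners with exemptions
target = 95.0           # percent coverage needed to block spread

max_coverage = round(100.0 - exemption_rate, 1)
shortfall = round(target - max_coverage, 1)
print(max_coverage, shortfall)  # 85.7 9.3 -> "nearly 10 percentage points" short
```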

toxic-x-users-sabotage-community-notes-that-could-derail-disinfo,-report-says

Toxic X users sabotage Community Notes that could derail disinfo, report says


It’s easy for biased users to bury accurate Community Notes, report says.

What’s the point of recruiting hundreds of thousands of X users to fact-check misleading posts before they go viral if those users’ accurate Community Notes are never displayed?

That’s the question the Center for Countering Digital Hate (CCDH) is asking after digging through a million notes in a public X dataset to find out how many misleading claims spreading widely on X about the US election weren’t quickly fact-checked.

In a report, the CCDH flagged 283 misleading X posts fueling election disinformation this year that never displayed a Community Note. Of these, 74 percent had accurate notes proposed but ultimately never displayed—apparently because toxic X users gamed Community Notes to hide information they politically disagree with.

On X, Community Notes are only displayed if a broad spectrum of X users with diverse viewpoints agree that the post is “helpful.” But the CCDH found that it’s seemingly easy to hide an accurate note that challenges a user’s bias by simply refusing to rate it or downranking it into oblivion.
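
That dynamic is easier to see with a toy version of the ranking rule. If a note only counts as helpful when raters across viewpoint groups agree, then a single group withholding ratings or downranking can keep an accurate note hidden. The sketch below is a deliberately simplified model of that consensus requirement, not X’s actual bridging algorithm, and every name in it is made up for illustration.

```python
# Toy model of a cross-viewpoint consensus rule like the one Community Notes
# relies on: a note is shown only if every viewpoint group rates it helpful
# on average. This is NOT X's actual scoring algorithm, just an illustration
# of why polarized rating can bury accurate notes.
from statistics import mean

def note_is_shown(ratings_by_group: dict[str, list[int]],
                  threshold: float = 0.5) -> bool:
    """Require a majority of 'helpful' (1) ratings in every viewpoint group."""
    return all(
        ratings and mean(ratings) >= threshold
        for ratings in ratings_by_group.values()
    )

# An accurate note that one group endorses while the other downrates or ignores it:
ratings = {
    "group_a": [1, 1, 1, 1],  # rates the note helpful
    "group_b": [0, 0, 1],     # mostly rates it unhelpful
}
print(note_is_shown(ratings))  # False: no cross-group consensus, so the note stays hidden
```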

“The problem is that for a Community Note to be shown, it requires consensus, and on polarizing issues, that consensus is rarely reached,” the CCDH’s report said. “As a result, Community Notes fail precisely where they are needed most.”

Among the most-viewed misleading claims where X failed to add accurate notes were posts spreading lies that “welfare offices in 49 states are handing out voter registration applications to illegal aliens,” the Democratic party is importing voters, most states don’t require ID to vote, and both electronic and mail-in voting are “too risky.”

These unchecked claims were viewed by tens of millions of users, the CCDH found.

One false narrative—that Dems import voters—was amplified in a post from Elon Musk that got 51 million views. In the background, proposed notes sought to correct the disinformation by noting that “lawful permanent residents (green card holders)” cannot vote in US elections until they’re granted citizenship after living in the US for five years. But even these seemingly straightforward citations to government resources did not pass muster for users politically motivated to hide the note.

This appears to be a common pattern on X, the CCDH suggested, and Musk is seemingly a multiplier. In July, the CCDH reported that Musk’s misleading posts about the 2024 election in particular were viewed more than a billion times without any notes ever added.

The majority of the misleading claims in the CCDH’s report seemed to come from conservative users. But X also failed to check a claim that Donald Trump “is no longer eligible to run for president and must drop out of the race immediately.” Posts spreading that false claim got 1.4 million views, the CCDH reported, and that content moderation misstep risked dampening Trump’s voter turnout at a time when Musk is campaigning for Trump.

Musk has claimed that while Community Notes will probably never be “perfect,” the fact-checking effort aspires to “be by far the best source of truth on Earth.” The CCDH has alleged that, actually, “most Community Notes are never seen by users, allowing misinformation to spread unchecked.”

Even X’s own numbers on notes seem low

On the Community Notes X account, X acknowledges that “speed is key to notes’ effectiveness—the faster they appear, the more people see them, and the greater effect they have.”

On the day before the CCDH report dropped, X announced that “lightning notes” have been introduced to deliver fact-checks in as little as 15 minutes after a misleading post is written.

“Ludicrously fast? Now reality!” X proclaimed.

Currently, more than 800,000 X users contribute to Community Notes, and with the lightning notes update, X can calculate their scores more quickly. That efficiency, X said, will either spike the number of content removals or reduce the sharing of false or misleading posts.

But while X insists Community Notes are working faster than ever to reduce the spread of harmful content, the number of rapidly noted posts that X reports seems low. On a platform with an estimated 429 million daily active users worldwide, only about 400 notes over the past two weeks were displayed within an hour of a post going live. Notes that took longer than an hour—which the CCDH suggested is the majority whenever the fact-check concerns a controversial topic—added only about 60 more.

In July, an international NGO that monitors human rights abuses and corruption, Global Witness, found 45 “bot-like accounts that collectively produced around 610,000 posts” in a two-month period this summer on X, “amplifying racist and sexualized abuse, conspiracy theories, and climate disinformation” ahead of the UK general election.

Those accounts “posted prolifically during the UK general election,” then moved “to rapidly respond to emerging new topics amplifying divisive content,” including the US presidential race.

The CCDH reported that even when misleading posts get fact-checked, the original posts on average are viewed 13 times more than the note is seen, suggesting the majority of damage is done in the time before the note is posted.

Of course, content moderators are often called out for moving too slowly to remove harmful content, a Bloomberg opinion piece praising Community Notes earlier this year noted. That piece pointed to studies showing that “crowdsourcing worked just as well” as professional fact checkers “when assessing the accuracy of news stories,” concluding that “it may be impossible for any social media company to keep up, which is why it’s important to explore other approaches.”

X has said that it’s “common to see Community Notes appearing days faster than traditional fact checks,” while promising that more changes are coming to get notes ranked as “helpful” more quickly.

X risks becoming an echo chamber, data shows

Data that the market intelligence firm Sensor Tower recently shared with Ars offers a potential clue as to why the CCDH is seeing so many accurate notes that are never voted as “helpful.”

According to Sensor Tower’s estimates, global daily active users on X are down by 28 percent in September 2024, compared to October 2022 when Elon Musk took over Twitter. While many users have fled the platform, those who remained are seemingly more engaged than ever—with global engagement up by 8 percent in the same time period. (Rivals like TikTok and Facebook saw much lower growth, up by 3 and 1 percent, respectively.)

This paints a picture of X at risk of becoming an echo chamber, as loyal users engage more with a platform where misleading posts can easily go unchecked and buried notes can warp discussion in Musk’s “digital town square.”

When Musk initially bought Twitter, one of his earliest moves was to make drastic cuts to the trust and safety teams chiefly responsible for content-moderation decisions. He then expanded the role of Twitter’s Community Notes to substitute for trust and safety team efforts, where before Community Notes was viewed as merely complementary to broader monitoring.

The CCDH says that was a mistake and that the best way to ensure that X is safe for users is to build back X’s trust and safety teams.

“Our social media feeds have no neutral ‘town square’ for rational debate,” the CCDH report said. “In reality, it is messy, complicated, and opaque rules and systems make it impossible for all voices to be heard. Without checks and balances, proper oversight, and well-resourced trust and safety teams in place, X cannot rely on Community Notes to keep X safe.”

More transparency is needed on Community Notes

X and the CCDH have long clashed, with X unsuccessfully suing to seemingly silence the CCDH’s reporting on hate speech on X, which X claimed caused tens of millions in advertising losses. During that legal battle, the CCDH called Musk a “thin-skinned tyrant” who could not tolerate independent research on his platform. And a federal judge agreed that X was clearly suing to “punish” and censor the CCDH, dismissing X’s lawsuit last March.

Since then, the CCDH has resumed its reporting on X. In the most recent report, the CCDH urged that X needed to be more transparent about Community Notes, arguing that “researchers must be able to freely, without intimidation, study how disinformation and unchecked claims spread across platforms.”

The research group also recommended remedies, including continuing to advise that advertisers “evaluate whether their budgets are funding the misleading election claims identified in this report.”

That could lead brands to continue withholding spending on X, which is seemingly already happening. Sensor Tower estimated that “72 out of the top 100 spending US advertisers on X from October 2022 have ceased spending on the platform as of September 2024.” And compared to the first half of 2022, X’s ad revenue from the top 100 advertisers during the first half of 2024 was down 68 percent.

Most drastically, the CCDH recommended that US lawmakers reform Section 230 of the Communications Decency Act “to provide an avenue for accountability” by mandating risk assessments of social media platforms. That would “expose the risk posed by disinformation” and enable lawmakers to “prescribe possible mitigation measures including a comprehensive moderation strategy.”

Globally, the CCDH noted, some regulators have the power to investigate the claims in the CCDH’s report, including the European Commission under the Digital Services Act and the UK’s Ofcom under the Online Safety Act.

“X and social media companies as an industry have been able to avoid taking responsibility,” the CCDH’s report said, with the industry offering only “unreliable self-regulation.” Apps like X “thus invent inadequate systems like Community Notes because there is no legal mechanism to hold them accountable for their harms,” the report warned.

Perhaps Musk will be open to the CCDH’s suggestions. In the past, Musk has said that “suggestions for improving Community Notes are… always… much appreciated.”

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

people-think-they-already-know-everything-they-need-to-make-decisions

People think they already know everything they need to make decisions

The obvious difference was the decisions they made. In the group that had read the article biased in favor of merging the schools, nearly 90 percent favored the merger. In the group that had read the article that was biased by including only information in favor of keeping the schools separate, less than a quarter favored the merger.

The other half of the experimental population wasn’t given the survey immediately. Instead, they were given the article that they hadn’t read—the one that favored the opposite position of the article that they were initially given. You can view this group as doing the same reading as the control group, just doing so successively rather than in a single go. In any case, this group’s responses looked a lot like the control’s, with people roughly evenly split between merger and separation. And they became less confident in their decision.

It’s not too late to change your mind

There is one bit of good news about this. When initially forming hypotheses about the behavior they expected to see, Gehlbach, Robinson, and Fletcher suggested that people would remain committed to their initial opinions even after being exposed to a more complete picture. However, there was no evidence of this sort of stubbornness in these experiments. Instead, once people were given all the potential pros and cons of the options, they acted as if they had that information the whole time.

But that shouldn’t obscure the fact that there’s a strong cognitive bias at play here. “Because people assume they have adequate information, they enter judgment and decision-making processes with less humility and more confidence than they might if they were worrying whether they knew the whole story or not,” Gehlbach, Robinson, and Fletcher wrote.

This is especially problematic in the current media environment. Many outlets have been created with the clear intent of exposing their viewers to only a partial view of the facts—or, in a number of cases, the apparent intent of spreading misinformation. The new work clearly indicates that these efforts can have a powerful effect on beliefs, even if accurate information is available from various sources.

PLOS ONE, 2024. DOI: 10.1371/journal.pone.0310216  (About DOIs).
