
US can’t deport hate speech researcher for protected speech, lawsuit says


On Monday, US officials must explain what steps they took to enforce shocking visa bans.

Imran Ahmed, the founder of the Center for Countering Digital Hate (CCDH), giving evidence to joint committee seeking views on how to improve the draft Online Safety Bill designed to tackle social media abuse. Credit: House of Commons – PA Images / Contributor | PA Images

The biggest thorn in Imran Ahmed’s side used to be Elon Musk, who made the hate speech researcher one of his earliest legal foes during his Twitter takeover.

Now, it’s the Trump administration, which planned to deport Ahmed, a legal permanent resident, just before Christmas. It would then ban him from returning to the United States, where he lives with his wife and young child, both US citizens.

After suing US officials to block any attempted arrest or deportation, Ahmed was quickly granted a temporary restraining order on Christmas Day. Ahmed had successfully argued that he risked irreparable harm without the order, alleging that Trump officials continue “to abuse the immigration system to punish and punitively detain noncitizens for protected speech and silence viewpoints with which it disagrees” and confirming that his speech had been chilled.

US officials are attempting to sanction Ahmed seemingly due to his work as the founder of a British-American non-governmental organization, the Center for Countering Digital Hate (CCDH).

“An egregious act of government censorship”

In a shocking announcement last week, Secretary of State Marco Rubio confirmed that five individuals—described as “radical activists” and leaders of “weaponized NGOs”—would face US visa bans since “their entry, presence, or activities in the United States have potentially serious adverse foreign policy consequences” for the US.

Nobody was named in that release, but Under Secretary for Public Diplomacy Sarah Rogers later identified the targets in an X post she currently has pinned to the top of her feed.

Alongside Ahmed, sanctioned individuals included former European commissioner for the internal market, Thierry Breton; the leader of UK-based Global Disinformation Index (GDI), Clare Melford; and co-leaders of Germany-based HateAid, Anna-Lena von Hodenberg and Josephine Ballon. A GDI spokesperson told The Guardian that the visa bans are “an authoritarian attack on free speech and an egregious act of government censorship.”

While all targets were scrutinized for supporting some of the European Union’s strictest tech regulations, including the Digital Services Act (DSA), Ahmed was further accused of serving as a “key collaborator with the Biden Administration’s effort to weaponize the government against US citizens.” As evidence of Ahmed’s supposed threat to US foreign policy, Rogers cited a CCDH report flagging Robert F. Kennedy, Jr. among the so-called “disinformation dozen” driving the most vaccine hoaxes on social media.

Neither official has made clear exactly what threat these individuals pose if operating from within the US, as opposed to from anywhere else in the world. Echoing Rubio’s press release, Rogers wrote that the sanctions would reinforce a “red line,” supposedly ending “extraterritorial censorship of Americans” by targeting the “censorship-NGO ecosystem.”

For Ahmed’s group, specifically, she pointed to Musk’s failed lawsuit, which accused CCDH of illegally scraping Twitter—supposedly, it offered evidence of extraterritorial censorship. That lawsuit surfaced “leaked documents” allegedly showing that CCDH planned to “kill Twitter” by sharing research that could be used to justify big fines under the DSA or the UK’s Online Safety Act. Following that logic, seemingly any group monitoring misinformation or sharing research that lawmakers weigh when implementing new policies could be maligned as seeking mechanisms to censor platforms.

Notably, CCDH won its legal fight with Musk after a judge mocked X’s legal argument as “vapid” and dismissed the lawsuit as an obvious attempt to punish CCDH for exercising free speech that Musk didn’t like.

In his complaint last week, Ahmed alleged that US officials were similarly encroaching on his First Amendment rights by unconstitutionally wielding immigration law as “a tool to punish noncitizen speakers who express views disfavored by the current administration.”

Both Rubio and Rogers are named as defendants in the suit, as well as Attorney General Pam Bondi, Secretary of Homeland Security Kristi Noem, and Acting Director of US Immigration and Customs Enforcement Todd Lyons. In a loss, officials would potentially not only be forced to vacate Rubio’s actions implementing visa bans, but also possibly stop furthering a larger alleged Trump administration pattern of “targeting noncitizens for removal based on First Amendment protected speech.”

Lawsuit may force Rubio to justify visa bans

For Ahmed, securing the temporary restraining order was urgent, as he was apparently the only target currently located in the US when Rubio’s announcement dropped. In a statement provided to Ars, Ahmed’s attorney, Roberta Kaplan, suggested that the order was granted “so quickly because it is so obvious that Marco Rubio and the other defendants’ actions were blatantly unconstitutional.”

Ahmed founded CCDH in 2019, hoping to “call attention to the enormous problem of digitally driven disinformation and hate online.” According to the suit, he became particularly concerned about antisemitism online while living in the United Kingdom in 2016, having watched “the far-right party, Britain First,” launch “the dangerous conspiracy theory that the EU was attempting to import Muslims and Black people to ‘destroy’ white citizens.” That year, a Member of Parliament and Ahmed’s colleague, Jo Cox, was “shot and stabbed in a brutal politically motivated murder, committed by a man who screamed ‘Britain First’” during the attack. That tragedy motivated Ahmed to start CCDH.

He moved to the US in 2021 and was granted a green card in 2024, starting his family and continuing to lead CCDH efforts monitoring not just Twitter/X, but also Meta platforms, TikTok, and, more recently, AI chatbots. In addition to supporting the DSA and UK’s Online Safety Act, his group has supported US online safety laws and Section 230 reforms intended to protect kids online.

“Mr. Ahmed studies and engages in civic discourse about the content moderation policies of major social media companies in the United States, the United Kingdom, and the European Union,” his lawsuit said. “There is no conceivable foreign policy impact from his speech acts whatsoever.”

In his complaint, Ahmed alleged that Rubio has so far provided no evidence that Ahmed poses such a great threat that he must be removed. He argued that “applicable statutes expressly prohibit removal based on a noncitizen’s ‘past, current, or expected beliefs, statements, or associations.’”

According to DHS guidance from 2021 cited in the suit, “A noncitizen’s exercise of their First Amendment rights … should never be a factor in deciding to take enforcement action.”

To prevent deportation based solely on viewpoints, Rubio was supposed to notify chairs of the House Foreign Affairs, Senate Foreign Relations, and House and Senate Judiciary Committees, to explain what “compelling US foreign policy interest” would be compromised if Ahmed or others targeted with visa bans were to enter the US. But there’s no evidence Rubio took those steps, Ahmed alleged.

“The government has no power to punish Mr. Ahmed for his research, protected speech, and advocacy, and Defendants cannot evade those constitutional limitations by simply claiming that Mr. Ahmed’s presence or activities have ‘potentially serious adverse foreign policy consequences for the United States,’” a press release from his legal team said. “There is no credible argument for Mr. Ahmed’s immigration detention, away from his wife and young child.”

X lawsuit offers clues to Trump officials’ defense

To some critics, it looks like the Trump administration is going after CCDH in order to take up the fight that Musk already lost. In his lawsuit against CCDH, Musk’s X echoed US Senator Josh Hawley (R-Mo.) by suggesting that CCDH was a “foreign dark money group” that allowed “foreign interests” to attempt to “influence American democracy.” It seems likely that US officials will put forward similar arguments in their CCDH fight.

Rogers’ X post offers some clues that the State Department will be mining Musk’s failed litigation to support claims of what it calls a “global censorship-industrial complex.” What she detailed suggested that the Trump administration plans to argue that NGOs like CCDH support strict tech laws, then conduct research bent on using said laws to censor platforms. That logic seems to ignore the reality that NGOs cannot control what laws get passed or enforced, Breton suggested in his first TV interview after his visa ban was announced.

Breton, whom Rogers villainized as the “mastermind” behind the DSA, urged EU officials to do more now to defend their tough tech regulations—which Le Monde noted passed with overwhelming bipartisan support and very little far-right resistance—and to fight the visa bans, Bloomberg reported.

“They cannot force us to change laws that we voted for democratically just to please [US tech companies],” Breton said. “No, we must stand up.”

While EU officials seemingly drag their feet, Ahmed is hoping that a judge will declare all the visa bans that Rubio announced unconstitutional. The temporary restraining order indicates there will be a court hearing Monday at which Ahmed will learn precisely “what steps Defendants have taken to impose visa restrictions and initiate removal proceedings against” him and any others. Until then, Ahmed remains in the dark about why Rubio deemed his presence in the US to have “potentially serious adverse foreign policy consequences.”

Ahmed, who argued that X’s lawsuit sought to chill CCDH’s research and alleged that the US attack seeks to do the same, seems confident that he can beat the visa bans.

“America is a great nation built on laws, with checks and balances to ensure power can never attain the unfettered primacy that leads to tyranny,” Ahmed said. “The law, clear-eyed in understanding right and wrong, will stand in the way of those who seek to silence the truth and empower the bold who stand up to power. I believe in this system, and I am proud to call this country my home. I will not be bullied away from my life’s work of fighting to keep children safe from social media’s harm and stopping antisemitism online. Onward.”


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.


Impeachment articles filed against RFK Jr., claiming abuse of power

“Reckless”

Stevens’ impeachment articles were directly supported by Stand Up for Science, a grassroots political organization advocating for the country’s scientific community.

Colette Delawalla, the group’s founder and CEO, was quoted in Stevens’ press announcement, saying Kennedy’s actions are “negligent and will result in harm and loss of life. He must be impeached and removed.”

In the 13-page impeachment articles filed, Stevens accuses Kennedy of high crimes and misdemeanors, citing a lengthy list of actions Kennedy has taken that have been widely decried by public health, scientific, and medical experts as harmful. Those include gutting funding for research, including cancer, addiction, and mRNA vaccine technology; making the work of the US Department of Health and Human Services less transparent by ending public comment periods for some actions; making false and misleading health statements, particularly about vaccines; firing the entire panel of vaccine advisors for the Centers for Disease Control and Prevention; hiring a slew of his fellow anti-vaccine activists to undermine public health from within the health department in roles for which they are unqualified; and making unilateral changes to federal vaccine recommendations.

“Under his watch, families are less safe and less healthy, people are paying more for care, lifesaving research has been gutted, and vaccines have been restricted,” Stevens said. “His actions are reckless, his leadership is harmful, and his tenure has become a direct threat to our nation’s health and security.”


Believing misinformation is a “win” for some people, even when proven false

Why people endorse misinformation

Our findings highlight the limits of countering misinformation directly, because for some people, literal truth is not the point.

For example, President Donald Trump incorrectly claimed in August 2025 that crime in Washington, DC, was at an all-time high, generating countless fact-checks of his premise and think pieces about his dissociation from reality.

But we believe that to someone with a symbolic mindset, debunkers merely demonstrate that they’re the ones reacting and are therefore weak. The correct information is easily available but is irrelevant to someone who prioritizes a symbolic show of strength. What matters is signaling one isn’t listening and won’t be swayed.

In fact, for symbolic thinkers, nearly any statement should be justifiable. The more outlandish or easily disproved something is, the more powerful one might seem when standing by it. Being an edgelord—a contrarian online provocateur—or outright lying can, in their own odd way, appear “authentic.”

Some people may also view their favorite dissembler’s claims as provocative trolling, but, given the link between this mindset and authoritarianism, they want those far-fetched claims acted on anyway. The deployment of National Guard troops to Washington, for example, can be the desired end goal, even if the offered justification is a transparent farce.

Is this really 5-D chess?

It is possible that symbolic, but not exactly true, beliefs have some downstream benefit, such as serving as negotiation tactics, loyalty tests, or a fake-it-till-you-make-it long game that somehow, eventually, becomes a reality. Political theorist Murray Edelman, known for his work on political symbolism, noted that politicians often prefer scoring symbolic points over delivering results—it’s easier. Leaders can offer symbolism when they have little tangible to provide.

Randy Stein is associate professor of marketing at California State Polytechnic University, Pomona and Abraham Rutchick is professor of psychology at California State University, Northridge.

This article is republished from The Conversation under a Creative Commons license. Read the original article.


The current war on science, and who’s behind it


A vaccine developer and a climate scientist walk into a bar and write a book.

Fighting against the anti-science misinformation can feel like fighting a climate-driven wildfire. Credit: Anadolu

We’re about a quarter of the way through the 21st century.

Summers across the global north are now defined by flash floods, droughts, heat waves, uncontainable wildfires, and intensifying named storms, exactly as predicted by Exxon scientists back in the 1970s. The United States secretary of health and human services advocates against using the most effective tool we have to fight the infectious diseases that have ravaged humanity for millennia. People are eagerly lapping up the misinformation spewed and disseminated by AI chatbots, which are only just getting started.

It is against this backdrop that a climate scientist and a vaccine developer teamed up to write Science Under Siege. It is about as grim as you’d expect.

Michael Mann is a climate scientist at the University of Pennsylvania who, in 1998, developed the notorious hockey stick graph, which demonstrated that global surface temperatures were roughly flat until around the year 1900, when they started rising precipitously (and have not stopped). Peter Hotez is a microbiologist and pediatrician at Baylor College of Medicine whose group developed a low-cost, patent-free COVID-19 vaccine using public funds (i.e., not from a pharmaceutical company) and distributed it to almost a hundred million people in India and Indonesia.

Unlikely crusaders

Neither of them anticipated becoming crusaders for their respective fields—and neither probably anticipated that their respective fields would ever actually need crusaders. But they each have taken on the challenge, and they’ve been rewarded for their trouble with condemnation and harassment from Congress and death threats from the public they are trying to serve. In this book, they hope to take what they’ve learned as scientists and science communicators in our current world and parlay that into a call to arms.

Mann and Hotez have more in common than being pilloried all over the internet. Although they trained in disparate disciplines, their fields are now converging (as if they weren’t each threatening enough on their own). Climate change is altering the habitats, migrations, and reproductive patterns of pathogen-bearing wildlife like bats, mosquitoes, and other insects. It is causing the migration of humans as well. Our increasing proximity to these species in both space and time can increase the opportunities for us to catch diseases from them.

Yet Mann and Hotez insist that a third scourge is even more dangerous than these two combined. In their words:

It is currently impossible for global leaders to take the urgent actions necessary to respond to the climate crisis and pandemic threats because they are thwarted by a common enemy—antiscience—that is politically and ideologically motivated opposition to any science that threatens powerful special interests and their political agendas. Unless we find a way to overcome antiscience, humankind will face its gravest threat yet—the collapse of civilization as we know it.

And they point to an obvious culprit: “There is, unquestionably, a coordinated, concerted attack on science by today’s Republican Party.”

They’ve helpfully characterized “the five principal forces of antiscience” into alliterative groups: (1) plutocrats and their political action committees, (2) petrostates and their politicians and polluters, (3) fake and venal professionals—physicians and professors, (4) propagandists, especially those with podcasts, and (5) the press. The general tactic is that (1) and (2) hire (3) to generate deceitful and inflammatory talking points, which are then disseminated by all-too-willing members of (4) and (5).

There is obviously a lot of overlap among these categories; Elon Musk, Vladimir Putin, Rupert Murdoch, and Donald Trump can all jump between a number of these bins. As such, the ideas and arguments presented in the book are somewhat redundant, as are the words used. Far too many things are deemed “ironic” (e.g., the same people who deny and dismiss the notion of human-caused climate change claimed that Democrats generated hurricanes Helene and Milton to target red states in October 2024) or “risible” (see Robert F. Kennedy Jr.’s claim that Dr. Peter Hotez sought to make it a felony to criticize Anthony Fauci).

A long history

Antiscience propaganda has been used by authoritarians for over a century. Stalin imprisoned physicists and attacked geneticists while famously enacting the nonsensical agricultural ideas of Trofim Lysenko, who thought genes were a “bourgeois invention.” This led to the starvation of millions of people in the Soviet Union and China.

Why go after science? The scientific method is the best means we have of discovering how our Universe works, and it has been used to reveal otherwise unimaginable facets of reality. Scientists are generally thought of as authorities possessing high levels of knowledge, integrity, and impartiality. Discrediting science and scientists is thus an essential first step for authoritarian regimes to then discredit any other types of learning and truth and destabilize their societies.

The authors trace the antiscience messaging on COVID, which followed precisely the same arc as that on climate change except condensed into a matter of months instead of decades. The trajectory started by maintaining that the threat was not real. When that was no longer tenable, it quickly morphed into “OK, this is happening, and it may actually get pretty bad for some subset of people, but we should definitely not take collective action to address it because that would be bad for the economy.”

It finally culminated in preying upon people’s understandable fears in these very scary times by claiming that this is all the fault of scientists who are trying to take away your freedom, be that bodily autonomy and the ability to hang out with your loved ones (COVID) or your plastic straws, hamburgers, and SUVs (climate change).

This mis- and disinformation has prevented us from dealing with either catastrophe by misleading people about the seriousness, or even existence, of the threats and/or harping on their hopeless nature, sapping us of the will to do anything to counter them. These tactics also sow division among people, practically ensuring that we won’t band together to take the kind of collective action essential to addressing enormous, complex problems. It is all quite effective. Mann and Hotez conclude that “the future of humankind and the health of our planet now depend on surmounting the dark forces of antiscience.”

Why, you might wonder, would the plutocrats, polluters, and politicians of the Republican Party be so intent on undermining science and scientists, lying to the public, fearmongering, and stoking hatred among their constituents? The same reason as always: to hold onto their money and power. The means to that end is thwarting regulations. Yes, it’s nefarious, but also so disappointingly… banal.

The authors are definitely preaching exclusively to the converted. They are understandably angry at what has been done to them and somewhat mocking of those who don’t see things their way. They end by trying to galvanize their followers into taking action to reverse the current course.

They advise that the best—really, the only—thing we can do now to effect change is to vote and hope for favorable legislation. “Only political change, including massive turnout to support politicians who favor people over plutocrats, can ultimately solve this larger systemic problem,” they write. But since our president and vice president don’t even believe in or acknowledge “systemic problems,” the future is not looking too bright.


YouTube will restore channels banned for COVID and election misinformation

It’s not exactly hard to find politically conservative content on YouTube, but the platform may soon skew even further to the right. YouTube parent Alphabet has confirmed that it will restore channels that were banned in recent years for spreading misinformation about COVID-19 and elections. Alphabet says it values free expression and political debate, placing the blame for its previous moderation decisions on the Biden administration.

Alphabet made this announcement via a lengthy letter to Rep. Jim Jordan (R-Ohio). The letter, a response to subpoenas from the House Judiciary Committee, explains in no uncertain terms that the company is taking a more relaxed approach to moderating political content on YouTube.

For starters, Alphabet denies that its products and services are biased toward specific viewpoints and says that it “appreciates the accountability” provided by the committee. The cloying missive goes on to explain that Google didn’t really want to ban all those accounts, but Biden administration officials just kept asking. Now that the political tables have turned, Google is looking to dig itself out of this hole.

According to Alphabet’s version of events, misinformation such as telling people to drink bleach to cure COVID wasn’t initially against its policies. However, Biden officials repeatedly asked YouTube to take action. YouTube did and specifically banned COVID misinformation sitewide until 2024, one year longer than the crackdown on election conspiracy theories. Alphabet says that today, YouTube’s rules permit a “wider range of content.”

In an apparent attempt to smooth things over with the Republican-controlled House Judiciary Committee, YouTube will restore the channels banned for COVID and election misinformation. This includes prominent conservatives like Dan Bongino, who is now the Deputy Director of the FBI, and White House counterterrorism chief Sebastian Gorka.


Analysis: The Trump administration’s assault on climate action


Official actions don’t challenge science, while unofficial docs muddy the waters.

Last week, the Environmental Protection Agency made lots of headlines by rejecting the document that establishes its ability to regulate the greenhouse gases that are warming our climate. While the legal assault on regulations grabbed most of the attention, it was paired with two other actions that targeted other aspects of climate change: the science underlying our current understanding of the dramatic warming the Earth is experiencing, and the renewable energy that represents our best chance of limiting this warming.

Collectively, these actions illuminate the administration’s strategy for dealing with a problem that it would prefer to believe doesn’t exist, despite our extensive documentation of its reality. They also show how the administration is tailoring its approach to different audiences, including the audience of one who is demanding inaction.

When in doubt, make something up

The simplest thing to understand is an action by the Department of the Interior, which handles permitting for energy projects on federal land—including wind and solar, both onshore and off. That has placed the Interior in an awkward position. Wind and solar are now generally the cheapest ways to generate electricity and are currently in the process of a spectacular boom, with solar now accounting for over 80 percent of the newly installed capacity in the US.

Yet, when Trump issued an executive order declaring an energy emergency, wind and solar were notably excluded as potential solutions. Language from Trump and other administration officials has also made it clear that renewable energy is viewed as an impediment to the administration’s pro-fossil fuel agenda.

But shutting down federal permitting for renewable energy with little more than “we don’t like it” as justification could run afoul of rules that forbid government decisions from being “arbitrary and capricious.” This may explain why the government gave up on its attempts to block the ongoing construction of an offshore wind farm in New York waters.

On Friday, the Interior announced that it had settled on a less arbitrary justification for blocking renewable energy on public land: energy density. Given a metric of land use per megawatt, wind and solar are less efficient than nuclear plants we can’t manage to build on time or budget, and therefore “environmentally damaging” and an inefficient use of federal land, according to the new logic. “The Department will now consider proposed energy project’s capacity density when assessing the project’s potential energy benefits to the nation and impacts to the environment and wildlife,” Interior declared.

This is only marginally more reasonable than Interior Secretary Doug Burgum’s apparent inability to recognize that solar power can be stored in batteries. But it has three features that will be recurring themes. There’s at least a token attempt to provide a justification that might survive the inevitable lawsuits, while at the same time providing fodder for the culture war that many in the administration demand. And it avoids directly attacking the science that initially motivated the push toward renewables.

Energy vs. the climate

That’s not to say that climate change isn’t in for attack. It’s just that the attacks are being strategically separated from the decisions that might produce a lawsuit. Last week, the burden of taking on extremely well-understood and supported science fell to the Department of Energy, which released a report on climate “science” to coincide with the EPA’s decision to give up on attempts to regulate greenhouse gases.

For those who have followed public debates over climate change, looking at the author list—John Christy, Judith Curry, Steven Koonin, Ross McKitrick, and Roy Spencer—will give you a very clear picture of what to expect. Spencer is a creationist, raising questions about his ability to evaluate any science free from his personal biases. (He has also said, “My job has helped save our economy from the economic ravages of out-of-control environmental extremism,” so it’s not just biology where he’s got these issues.) McKitrick is an economist who engaged in a multi-year attempt to raise doubt about the prominent “hockey stick” reconstruction of past climates, even as scientists were replicating the results. Etc.

The report is a master class in arbitrary and capricious decision-making applied to science. Sometimes the authors rely on the peer-reviewed literature. Other times they perform their own analysis for this document, in some cases coming up with almost comically random metrics for data. (Example: “We examine occurrences of 5-day deluges as follows. Taking the Pacific coast as an example, a 130-year span contains 26 5-year intervals. At each location we computed the 5-day precipitation totals throughout the year and selected the 26 highest values across the sample.” Why five days? Five-year intervals? Who knows.)
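The quoted deluge analysis amounts to a simple block-selection procedure: compute rolling 5-day precipitation totals over a 130-year daily record, note that the span contains 26 five-year intervals, and keep the 26 largest totals. Here is a minimal sketch of that procedure on synthetic data; the gamma-distributed rainfall and every parameter below are invented for illustration and are not drawn from the report's actual data:

```python
import numpy as np

# Synthetic 130-year daily rainfall record (mm/day); values are invented.
rng = np.random.default_rng(0)
years = 130
daily_precip = rng.gamma(shape=0.4, scale=5.0, size=years * 365)

# Rolling 5-day precipitation totals, via convolution with a window of ones.
totals_5day = np.convolve(daily_precip, np.ones(5), mode="valid")

# "A 130-year span contains 26 5-year intervals."
n_intervals = years // 5  # 26

# "Selected the 26 highest values across the sample."
top_deluges = np.sort(totals_5day)[-n_intervals:]
```

As the critique above notes, nothing in the procedure pins down why the window is 5 days or why the interval length is 5 years; the selection runs the same way for any choice, which is exactly the arbitrariness being flagged.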

This is especially striking in a few cases where the authors choose references that were published a few years ago, and thus neatly avoid the dramatic temperature records that have been set over the past couple of years. Similarly, they sometimes use regional measures and sometimes use global ones. They demand long-term data in some contexts, while getting excited about two years of coral growth in the Great Barrier Reef. The authors highlight the fact that US tide gauges don’t show any indication of an acceleration in the rate of sea level rise while ignoring the fact that global satellite measures clearly do.

That’s not to say that there aren’t other problems. There’s some blatant misinformation, like claims that urbanization could be distorting the warming, which has already been tested extensively. (Notably, warming is most intense in the sparsely populated Arctic.) There’s also some creative use of language, like referring to the ocean acidification caused by CO2 as “neutralizing ocean alkalinity.”

But the biggest bit of misinformation comes in the introduction, where the secretary of energy, Chris Wright, said of the authors, “I chose them for their rigor, honesty, and willingness to elevate the debate.” There is no reason to choose this group of marginal contrarians except the knowledge that they’d produce a report like this, thus providing a justification for those in the administration who want to believe it’s all a scam.

No science needed

The critical feature of the Department of Energy report is that it contains no policy actions; it’s purely about trying to undercut well-understood climate science. This means the questionable analyses in the report shouldn’t ever end up being tested in court.

That’s in contrast to the decision to withdraw the EPA’s endangerment finding regarding greenhouse gases. There’s quite an extensive history to the endangerment finding, but briefly, it’s the product of a Supreme Court decision (Massachusetts v. EPA), which compelled the EPA to evaluate whether greenhouse gases posed a threat to the US population as defined in the Clean Air Act. Both the Bush and Obama EPAs did so, thus enabling the regulation of greenhouse gases, including carbon dioxide.

Despite the claims in the Department of Energy report, there is comprehensive evidence that greenhouse gases are causing problems in the US, ranging from extreme weather to sea level rise. So while the EPA mentions the Department of Energy’s work a number of times, the actual action being taken skips over the science and focuses on legal issues. In doing so, it creates a false history where the endangerment finding had no legal foundation.

To re-recap, the Supreme Court determined that this evaluation was required by the Clean Air Act. George W. Bush’s administration performed the analysis and reached the exact same conclusion as the Obama administration (though the former chose to ignore those conclusions). Yet Trump’s EPA is calling the endangerment finding “an unprecedented move” by the Obama administration that involved “mental leaps” and “ignored Congress’ clear intent.” And the EPA presents the findings as strategic, “the only way the Obama-Biden Administration could access EPA’s authority to regulate,” rather than compelled by scientific evidence.

Fundamentally, it’s an ahistorical presentation; the EPA is counting on nobody remembering what actually happened.

The announcement doesn’t get much better when it comes to the future. The only immediate change will be an end to any attempts to regulate carbon emissions from motor vehicles, since regulations for power plants had been on hold due to court challenges. Yet somehow, the EPA’s statement claims that this absence of regulation imposed costs on people. “The Endangerment Finding has also played a significant role in EPA’s justification of regulations of other sources beyond cars and trucks, resulting in additional costly burdens on American families and businesses,” it said.

We’re still endangered

Overall, the announcements made last week provide a clear picture of how the administration intends to avoid addressing climate change and cripple the responses started by previous administrations. Outside of the policy arena, it will question the science and use partisan misinformation to rally its supporters for the fight. But it recognizes that these approaches aren’t flying when it comes to the courts.

So it will separately pursue a legal approach that seeks to undercut the ability of anyone, including private businesses, to address climate change, crafting “reasons” for its decisions in a way that might survive legal challenge—because these actions are almost certain to be challenged in court. And that may be the ultimate goal. The current court has shown little interest in respecting precedent and has issued a string of decisions that severely limit the EPA. It’s quite possible that the court will simply throw out the prior decision that compelled the government to issue an endangerment finding in the first place.

If that precedent is left in place, then any ensuing administration can simply issue a new endangerment finding. If anything, the effects of climate change on the US population have become more obvious, and the scientific understanding of human-driven warming has solidified, since the Bush administration first acknowledged them.


John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.

Analysis: The Trump administration’s assault on climate action


Conspiracy theorists don’t realize they’re on the fringe


Gordon Pennycook: “It might be one of the biggest false consensus effects that’s been observed.”

Credit: Aurich Lawson / Thinkstock

Belief in conspiracy theories is often attributed to some form of motivated reasoning: People want to believe a conspiracy because it reinforces their worldview, for example, or doing so meets some deep psychological need, like wanting to feel unique. However, it might also be driven by overconfidence in their own cognitive abilities, according to a paper published in the Personality and Social Psychology Bulletin. The authors were surprised to discover that not only are conspiracy theorists overconfident, they also don’t realize their beliefs are on the fringe, massively overestimating by as much as a factor of four how much other people agree with them.

“I was expecting the overconfidence finding,” co-author Gordon Pennycook, a psychologist at Cornell University, told Ars. “If you’ve talked to someone who believes conspiracies, it’s self-evident. I did not expect them to be so ready to state that people agree with them. I thought that they would overestimate, but I didn’t think that there’d be such a strong sense that they are in the majority. It might be one of the biggest false consensus effects that’s been observed.”

In 2015, Pennycook made headlines when he co-authored a paper demonstrating how certain people interpret “pseudo-profound bullshit” as deep observations. Pennycook et al. were interested in identifying individual differences between those who are susceptible to pseudo-profound BS and those who are not and thus looked at conspiracy beliefs, their degree of analytical thinking, religious beliefs, and so forth.

They presented several randomly generated statements, containing “profound” buzzwords, that were grammatically correct but made no sense logically, along with a 2014 tweet by Deepak Chopra that met the same criteria. They found that the less skeptical participants were less logical and analytical in their thinking and hence much more likely to consider these nonsensical statements as being deeply profound. That study was a bit controversial, in part for what was perceived to be its condescending tone, along with questions about its methodology. But it did snag Pennycook et al. a 2016 Ig Nobel Prize.

Last year we reported on another Pennycook study, presenting results from experiments in which an AI chatbot engaged in conversations with people who believed at least one conspiracy theory. That study showed that the AI interaction significantly reduced the strength of those beliefs, even two months later. The secret to its success: the chatbot, with its access to vast amounts of information across an enormous range of topics, could precisely tailor its counterarguments to each individual. “The work overturns a lot of how we thought about conspiracies, that they’re the result of various psychological motives and needs,” Pennycook said at the time.

Miscalibrated from reality

Pennycook has been working on this new overconfidence study since 2018, perplexed by observations indicating that people who believe in conspiracies also seem to have a lot of faith in their cognitive abilities—contradicting prior research finding that conspiracists are generally more intuitive. To investigate, he and his co-authors conducted eight separate studies that involved over 4,000 US adults.

The assigned tasks were designed in such a way that participants’ actual performance and how they perceived their performance were unrelated. For example, in one experiment, they were asked to guess the subject of an image that was largely obscured. The subjects were then asked direct questions about their belief (or lack thereof) concerning several key conspiracy claims: the Apollo Moon landings were faked, for example, or that Princess Diana’s death wasn’t an accident. Four of the studies focused on testing how subjects perceived others’ beliefs.

The results showed a marked association between subjects’ tendency to be overconfident and belief in conspiracy theories. And while, on average, only 12 percent of participants believed a given conspiracy’s claims, believers thought they were in the majority 93 percent of the time. This suggests that overconfidence is a primary driver of belief in conspiracies.

It’s not that believers in conspiracy theories are massively overconfident; there is no data on that, because the studies didn’t set out to quantify the degree of overconfidence, per Pennycook. Rather, “They’re overconfident, and they massively overestimate how much people agree with them,” he said.

Ars spoke with Pennycook to learn more.

Ars Technica: Why did you decide to investigate overconfidence as a contributing factor to believing conspiracies?

Gordon Pennycook: There’s a popular sense that people believe conspiracies because they’re dumb and don’t understand anything, they don’t care about the truth, and they’re motivated by believing things that make them feel good. Then there’s the academic side, where that idea molds into a set of theories about how needs and motivations drive belief in conspiracies. It’s not someone falling down the rabbit hole and getting exposed to misinformation or conspiratorial narratives. They’re strolling down: “I like it over here. This appeals to me and makes me feel good.”

Believing things that no one else agrees with makes you feel unique. Then there’s various things I think that are a little more legitimate: People join communities and there’s this sense of belongingness. How that drives core beliefs is different. Someone may stop believing but hang around in the community because they don’t want to lose their friends. Even with religion, people will go to church when they don’t really believe. So we distinguish beliefs from practice.

What we observed is that they do tend to strongly believe these conspiracies despite the fact that there’s counter evidence or a lot of people disagree. What would lead that to happen? It could be their needs and motivations, but it could also be that there’s something about the way that they think where it just doesn’t occur to them that they could be wrong about it. And that’s where overconfidence comes in.

Ars Technica: What makes this particular trait such a powerful driving force?

Gordon Pennycook: Overconfidence is one of the most important core underlying components, because if you’re overconfident, it stops you from really questioning whether the thing that you’re seeing is right or wrong, and whether you might be wrong about it. You have an almost moral purity of complete confidence that the thing you believe is true. You cannot even imagine what it’s like from somebody else’s perspective. You couldn’t imagine a world in which the things that you think are true could be false. Having overconfidence is that buffer that stops you from learning from other people. You end up not just going down the rabbit hole, you’re doing laps down there.

Overconfidence doesn’t have to be learned, parts of it could be genetic. It also doesn’t have to be maladaptive. It’s maladaptive when it comes to beliefs. But you want people to think that they will be successful when starting new businesses. A lot of them will fail, but you need some people in the population to take risks that they wouldn’t take if they were thinking about it in a more rational way. So it can be optimal at a population level, but maybe not at an individual level.

Ars Technica: Is this overconfidence related to the well-known Dunning-Kruger effect?

Gordon Pennycook: It’s because of Dunning-Kruger that we had to develop a new methodology to measure overconfidence, because the people who are the worst at a task are the worst at knowing that they’re the worst at the task. But that’s because the same things that you use to do the task are the things you use to assess how good you are at the task. So if you were to give someone a math test and they’re bad at math, they’ll appear overconfident. But if you give them a test of assessing humor and they’re good at that, they won’t appear overconfident. That’s about the task, not the person.

So we have tasks where people essentially have to guess, and it’s transparent. There’s no reason to think that you’re good at the task. In fact, people who think they’re better at the task are not better at it, they just think they are. They just have this underlying kind of sense that they can do things, they know things, and that’s the kind of thing that we’re trying to capture. It’s not specific to a domain. There are lots of reasons why you could be overconfident in a particular domain. But this is something that’s an actual trait that you carry into situations. So when you’re scrolling online and come up with these ideas about how the world works that don’t make any sense, it must be everybody else that’s wrong, not you.

Ars Technica: Overestimating how many people agree with them seems to be at odds with conspiracy theorists’ desire to be unique.  

Gordon Pennycook: In general, people who believe conspiracies often have contrary beliefs. We’re working with a population where coherence is not to be expected. They say that they’re in the majority, but it’s never a strong majority. They just don’t think that they’re in a minority when it comes to the belief. Take the case of the Sandy Hook conspiracy, where adherents believe it was a false flag operation. In one sample, 8 percent of people thought that this was true. That 8 percent thought 61 percent of people agreed with them.

So they’re way off. They really, really miscalibrated. But they don’t say 90 percent. It’s 60 percent, enough to be special, but not enough to be on the fringe where they actually are. I could have asked them to rank how smart they are relative to others, or how unique they thought their beliefs were, and they would’ve answered high on that. But those are kind of mushy self-concepts. When you ask a specific question that has an objectively correct answer in terms of the percent of people in the sample that agree with you, it’s not close.

Ars Technica: How does one even begin to combat this? Could last year’s AI study point the way?

Gordon Pennycook: The AI debunking effect works better for people who are less overconfident. In those experiments, very detailed, specific debunks had a much bigger effect than people expected. After eight minutes of conversation, a quarter of the people who believed the thing didn’t believe it anymore, but 75 percent still did. That’s a lot. And some of them, not only did they still believe it, they still believed it to the same degree. So no one’s cracked that. Getting any movement at all in the aggregate was a big win.

Here’s the problem. You can’t have a conversation with somebody who doesn’t want to have the conversation. In those studies, we’re paying people, but they still get out what they put into the conversation. If you don’t really respond or engage, then our AI is not going to give you good responses because it doesn’t know what you’re thinking. And if the person is not willing to think. … This is why overconfidence is such an overarching issue. The only alternative is some sort of propagandistic approach: sit them down, hold their eyes open, and try to de-convert them. But you can’t really convert someone who doesn’t want to be converted. So I’m not sure that there is an answer. I think that’s just the way that humans are.

Personality and Social Psychology Bulletin, 2025. DOI: 10.1177/01461672251338358  (About DOIs).


Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.



Everything that could go wrong with X’s new AI-written community notes


X says AI can supercharge community notes, but that comes with obvious risks.

Elon Musk’s X arguably revolutionized social media fact-checking by rolling out “community notes,” which created a system to crowdsource diverse views on whether certain X posts were trustworthy or not.

But now, the platform plans to allow AI to write community notes, and that could potentially ruin whatever trust X users had in the fact-checking system—which X has fully acknowledged.

In a research paper, X described the initiative as an “upgrade” while explaining everything that could possibly go wrong with AI-written community notes.

In the ideal world X described, AI agents would speed up and increase the number of community notes added to incorrect posts, ramping up fact-checking efforts platform-wide. Each AI-written note would be rated by a human reviewer, providing feedback that makes the AI agent better at writing notes the longer this feedback loop cycles. As the AI agents improve at writing notes, human reviewers would be freed to focus on the more nuanced fact-checking that AI cannot quickly address, such as posts requiring niche expertise or social awareness. Together, the human and AI reviewers, if all goes well, could transform not just X’s fact-checking, X’s paper suggested, but also potentially provide “a blueprint for a new form of human-AI collaboration in the production of public knowledge.”

Among key questions that remain, however, is a big one: X isn’t sure if AI-written notes will be as accurate as notes written by humans. Complicating that further, it seems likely that AI agents could generate “persuasive but inaccurate notes,” which human raters might rate as helpful since AI is “exceptionally skilled at crafting persuasive, emotionally resonant, and seemingly neutral notes.” That could disrupt the feedback loop, watering down community notes and making the whole system less trustworthy over time, X’s research paper warned.

“If rated helpfulness isn’t perfectly correlated with accuracy, then highly polished but misleading notes could be more likely to pass the approval threshold,” the paper said. “This risk could grow as LLMs advance; they could not only write persuasively but also more easily research and construct a seemingly robust body of evidence for nearly any claim, regardless of its veracity, making it even harder for human raters to spot deception or errors.”
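A toy simulation makes that worry concrete. In this hypothetical model (none of the numbers, names, or functions come from X’s paper), raters approve notes based on perceived helpfulness, which blends a note’s accuracy with its polish; the more polish dominates the rating, the more inaccurate notes clear the approval threshold:

```python
import random

random.seed(42)

# Toy model: each note has independent "accuracy" and "polish" scores.
# Raters approve on perceived helpfulness, a weighted mix of the two.
def simulate(weight_on_accuracy, n=10_000, threshold=0.7):
    approved = approved_inaccurate = 0
    for _ in range(n):
        accuracy = random.random()
        polish = random.random()
        helpfulness = weight_on_accuracy * accuracy + (1 - weight_on_accuracy) * polish
        if helpfulness >= threshold:
            approved += 1
            if accuracy < 0.5:  # a misleading note that passed review
                approved_inaccurate += 1
    # Share of approved notes that are inaccurate.
    return approved_inaccurate / approved if approved else 0.0

# When ratings track accuracy closely, no low-accuracy note can reach the
# threshold; as polish dominates, the approved pool fills with misleading notes.
print(simulate(0.9), simulate(0.3))
```

The specific weights and threshold are arbitrary; the point is structural: any gap between rated helpfulness and actual accuracy gets amplified at the approval threshold, which is exactly the feedback-loop risk the paper flags.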

X is already facing criticism over its AI plans. On Tuesday, former United Kingdom technology minister, Damian Collins, accused X of building a system that could allow “the industrial manipulation of what people see and decide to trust” on a platform with more than 600 million users, The Guardian reported.

Collins claimed that AI notes risked increasing the promotion of “lies and conspiracy theories” on X, and he wasn’t the only expert sounding alarms. Samuel Stockwell, a research associate at the Centre for Emerging Technology and Security at the Alan Turing Institute, told The Guardian that X’s success largely depends on “the quality of safeguards X puts in place against the risk that these AI ‘note writers’ could hallucinate and amplify misinformation in their outputs.”

“AI chatbots often struggle with nuance and context but are good at confidently providing answers that sound persuasive even when untrue,” Stockwell said. “That could be a dangerous combination if not effectively addressed by the platform.”

Also complicating things: anyone can create an AI agent using any technology to write community notes, X’s Community Notes account explained. That means that some AI agents may be more biased or defective than others.

If this dystopian version of events occurs, X predicts that human writers may get sick of writing notes, threatening the diversity of viewpoints that made community notes so trustworthy to begin with.

And for any human writers and reviewers who stick around, it’s possible that the sheer volume of AI-written notes may overload them. Andy Dudfield, the head of AI at a UK fact-checking organization called Full Fact, told The Guardian that X risks “increasing the already significant burden on human reviewers to check even more draft notes, opening the door to a worrying and plausible situation in which notes could be drafted, reviewed, and published entirely by AI without the careful consideration that human input provides.”

X is planning more research to ensure the “human rating capacity can sufficiently scale,” but if it cannot solve this riddle, it knows “the impact of the most genuinely critical notes” risks being diluted.

One possible solution to this “bottleneck,” researchers noted, would be to remove the human review process and apply AI-written notes in “similar contexts” that human raters have previously approved. But the biggest potential downfall there is obvious.

“Automatically matching notes to posts that people do not think need them could significantly undermine trust in the system,” X’s paper acknowledged.

Ultimately, AI note writers on X may be deemed an “erroneous” tool, researchers admitted, but they’re going ahead with testing to find out.

AI-written notes will start posting this month

All AI-written community notes “will be clearly marked for users,” X’s Community Notes account said. The first AI notes will only appear on posts where people have requested a note, the account said, but eventually AI note writers could be allowed to select posts for fact-checking.

More will be revealed when AI-written notes start appearing on X later this month, but in the meantime, X users can start testing AI note writers today to be considered for admission to the initial cohort of AI agents. (If any Ars readers end up testing out an AI note writer, this Ars writer would be curious to learn more about your experience.)

For its research, X collaborated with post-graduate students, research affiliates, and professors investigating topics like human trust in AI, fine-tuning AI, and AI safety at Harvard University, the Massachusetts Institute of Technology, Stanford University, and the University of Washington.

Researchers agreed that “under certain circumstances,” AI agents can “produce notes that are of similar quality to human-written notes—at a fraction of the time and effort.” They suggested that more research is needed to overcome flagged risks to reap the benefits of what could be “a transformative opportunity” that “offers promise of dramatically increased scale and speed” of fact-checking on X.

If AI note writers “generate initial drafts that represent a wider range of perspectives than a single human writer typically could, the quality of community deliberation is improved from the start,” the paper said.

Future of AI notes

Researchers imagine that once X’s testing is completed, AI note writers could not just aid in researching problematic posts flagged by human users, but also one day select posts predicted to go viral and stop misinformation from spreading faster than human reviewers could.

Additional perks from this automated system, they suggested, would include X note raters quickly accessing more thorough research and evidence synthesis, as well as clearer note composition, which could speed up the rating process.

And perhaps one day, AI agents could even learn to predict rating scores to speed things up even more, researchers speculated. However, more research would be needed to ensure that wouldn’t homogenize community notes, buffing them out to the point that no one reads them.

Perhaps the most Musk-ian of the ideas proposed in the paper is the notion of training AI note writers with clashing views to “adversarially debate the merits of a note.” Supposedly, that “could help instantly surface potential flaws, hidden biases, or fabricated evidence, empowering the human rater to make a more informed judgment.”

“Instead of starting from scratch, the rater now plays the role of an adjudicator—evaluating a structured clash of arguments,” the paper said.

While X may be moving to reduce the workload for X users writing community notes, it’s clear that AI could never replace humans, researchers said. Those humans are necessary for more than just rubber-stamping AI-written notes.

Human notes that are “written from scratch” are valuable to train the AI agents and some raters’ niche expertise cannot easily be replicated, the paper said. And perhaps most obviously, humans “are uniquely positioned to identify deficits or biases” and therefore more likely to be compelled to write notes “on topics the automated writers overlook,” such as spam or scams.


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.



Editorial: Censoring the scientific enterprise, one grant at a time


Recent grant terminations are a symptom of a widespread attack on science.

Over the last two weeks, in response to Executive Order 14151, the National Science Foundation (NSF) has discontinued funding for research on diversity, equity, and inclusion (DEI), as well as support for researchers from marginalized backgrounds. Executive Order 14168 ordered the NSF (and other federal agencies) to discontinue any research that focused on women, women in STEM, gender variation, and transsexual or transgender populations—and, oddly, transgenic mice.

Then, another round of cancellations targeted research on misinformation and disinformation, a subject (among others) that Republican Senator Ted Cruz views as advancing neo-Marxist perspectives and class warfare.

During the previous three years, I served as a program officer at the NSF Science of Science (SOS) program. We reviewed, recommended, and awarded competitive research grants on science communication, including research on science communication to the public, communication of public priorities to scientists, and citizen engagement and participation in science. Projects my team reviewed and funded on misinformation are among the many others at NSF that have now been canceled (see the growing list here).

Misinformation research is vital to advancing our understanding of how citizens understand and process evidence and scientific information and put that understanding into action. It is an increasingly important area of research given our massive, ever-changing digital information environment.

A few examples of important research that was canceled because it threatens the current administration’s political agenda:

  • A project that uses computational social sciences, computer science, sociology, and statistics to understand the fundamentals of information spread through social media, because understanding how information flows and its impact on human behavior is important for determining how to protect society from the effects of misinformation, propaganda, and “fake news.”
  • A project investigating how people and groups incentivize others to spread misinformation on social media platforms.
  • A study identifying the role of social media influencers in addressing misconceptions and inaccurate information related to vaccines, which would help us develop guidance on how to ensure accurate information reaches different audiences.

Misinformation research matters

This work is critical on its own. Results of misinformation research inform how we handle education, public service announcements, weather warnings, emergency response broadcasts, health advisories, agricultural practices, product recalls, and more. It’s how we get people to integrate data into their work, whether their work involves things like farming, manufacturing, fishing, or something else.

Understanding how speech on technical topics is perceived, drives trust, and changes behavior can help us ensure that our speech is more effective. Beyond its economic impact, research on misinformation helps create an informed public—the foundation of any democracy. Contrary to the president’s executive order, it does not “infringe on the constitutionally protected speech rights of American citizens.”

Misinformation research is only a threat to the speech of people who seek to spread misinformation.

Politics and science

Political attacks on misinformation research are censorship, driven by a dislike for the results that research produces. They are also part of a larger threat to the NSF and the economic and social benefits that come from publicly funded research.

The NSF is a “pass-through” agency—most of its annual budget (around $9 billion) passes through the agency and is returned to American communities in the form of science grants (80 percent of the budget) and STEM education (13 percent). The NSF manages these programs via a staff packed with expert scientists in physics, psychology, chemistry, geosciences, engineering, sociology, and other fields. These scientists and the administrative staff (1,700 employees, who account for around 5 percent of its budget) organize complex peer-review panels that assess and distribute funding to cutting-edge science.

In normal times, presidents may shift the NSF’s funding priorities—this is their prerogative. This process is political. It always has been. It always will be. Elected officials (both presidents and Congress) have agendas and interests and want to bring federal dollars to their constituents. Additionally, there are national priorities—pandemic response, supercomputing needs, nanotechnology breakthroughs, space exploration goals, demands for microchip technologies, and artificial intelligence advancements.

Presidential agendas are meant to “steer the ship” by working with Congress to develop annual budgets, set appropriations and earmarks, and focus on specific regions (e.g., EPSCoR), topics, or facilities (e.g., federal labs).

While shifting priorities is normal, cancellation of previously funded research projects is NOT normal. Unilaterally banning funding for specific types of research (climate science, misinformation, research on minoritized groups) is not normal.

It’s anti-scientific, allowing politics rather than expertise to determine which research is most competitive. Canceling research grants because they threaten the current regime’s political agenda is a violation of the NSF’s duty to honor contracts and ethically manage the funds appropriated by the US Congress. This is a threat not just to individual scientists and universities, but to the trust and norms that underpin our scientific enterprise. It’s an attempt to terrorize researchers with the fear that their funding may be next and to create backlash against science and expertise (another important area of NSF-funded research that has also been canceled).

Scientific values and our responsibilities

Political interference in federal funding of scientific research will not end here. A recent announcement notes the NSF is facing a 55 percent cut to its annual budget and mass layoffs. Other agencies have been told to prepare for similar cuts. The administration’s actions will leave little funding for R&D that advances the public good. And the places where the research happens—especially universities and colleges—are also under assault. While these immediate cuts are felt first by scientists and universities, they will ultimately affect people throughout the nation—students, consumers, private companies, and residents.

The American scientific enterprise has been a world leader, and federal funding of science is a key driver of this success. For the last 100 years, students, scientists, and entrepreneurs from around the world have flocked to the US to advance science and innovation. Public investments in science have produced economic health and prosperity for all Americans and advanced our national security through innovation and soft diplomacy.

These cuts, combined with other actions taken to limit research funding and peer review at scientific agencies, make it clear that the Trump administration’s goals are to:

  • Roll back education initiatives that produce an informed public
  • Reduce evidence-based policy making
  • Slash public investment in the advancement of science

All Americans who benefit from the outcomes of publicly funded science—GPS and touch screens on your phone, Google, the Internet, weather data on an app, MRI, kidney exchanges, CRISPR, 3D printing, tiny hearing aids, Bluetooth, broadband, robotics at the high school, electric cars, suspension bridges, PCR tests, AlphaFold and other AI tools, Doppler radar, barcodes, reverse auctions, and far, far more—should be alarmed and taking action.

Here are some ideas of what you can do:

  1. Demand that Congress restore previous appropriations (for example, through 5Calls)
  2. Advocate through any professional associations you’re a member of
  3. Join science action groups (Science for the People, Union of Concerned Scientists, American Association for the Advancement of Science)
  4. Talk to university funders, leadership, and alumni about the value of publicly funded science
  5. Educate the public (including friends, family, and neighbors) about the value of science and the role of federally funded research
  6. Write an op-ed or public outreach materials through your employer
  7. Support federal employees
  8. If you’re a scientist, say yes to media & public engagement requests
  9. Attend local meetings: city council, library board, town halls
  10. Attend a protest
  11. Get offline and get active, in-person

There is a lot going on in the political environment right now, making it easy to get caught up in the implications of cuts for individual research projects or to be reassured when one’s own area hasn’t been targeted yet. But the threat looms large for all US science. The US, through agencies like the NSF, has built a world-class scientific enterprise founded on the belief that taxpayer investments in basic science can and do produce valuable economic and social outcomes for all of us. Censoring research and canceling misinformation grants are early moves in a larger battle to defend that world-class scientific enterprise. It is up to all of us to act now.

Mary K. Feeney is the Frank and June Sackton chair and professor in the School of Public Affairs at Arizona State University. She is a fellow of the National Academy of Public Administration and served as the program director for the Science of Science: Discovery, Communication and Impact program at the National Science Foundation (2021–2024).



Meta plans to test and tinker with X’s community notes algorithm

Meta also confirmed that it won’t reduce the visibility of misleading posts that receive Community Notes. That’s a change from the prior system, Meta noted, which had penalties associated with fact-checking.

According to Meta, X’s algorithm cannot be gamed, supposedly safeguarding “against organized campaigns” striving to manipulate notes and “influence what notes get published or what they say.” Meta claims it will rely on external research on Community Notes to avoid that pitfall, but as recently as last October, outside researchers suggested that X’s Community Notes were easily sabotaged by toxic X users.

“We don’t expect this process to be perfect, but we’ll continue to improve as we learn,” Meta said.

Meta confirmed that it plans to tweak X’s algorithm over time to develop its own version of Community Notes and “may explore different or adjusted algorithms to support how Community Notes are ranked and rated.”
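For readers curious what that algorithm actually does: X has open-sourced its Community Notes ranking code, and academic write-ups describe its core as a bridging-based matrix factorization. A note only scores as helpful if its intercept stays high after latent “viewpoint” factors absorb the agreement that comes from raters belonging to the same camp. The sketch below is a simplified toy illustration of that idea only, not Meta’s or X’s actual implementation; the function name, hyperparameters, and data are all illustrative.

```python
import numpy as np

def note_intercepts(ratings, n_factors=1, epochs=2000, lr=0.05, reg=0.03, seed=0):
    """Fit r_hat[u, n] = mu + i_user[u] + i_note[n] + f_user[u] . f_note[n]
    to a user-by-note matrix of helpfulness ratings (NaN = not rated).
    The factor term soaks up agreement explained by shared viewpoint,
    so a note's intercept reflects cross-camp ("bridging") support."""
    rng = np.random.default_rng(seed)
    n_users, n_notes = ratings.shape
    mask = ~np.isnan(ratings)
    r = np.nan_to_num(ratings)
    mu, i_user, i_note = 0.0, np.zeros(n_users), np.zeros(n_notes)
    f_user = rng.normal(0, 0.1, (n_users, n_factors))
    f_note = rng.normal(0, 0.1, (n_notes, n_factors))
    for _ in range(epochs):  # plain gradient descent with L2 regularization
        pred = mu + i_user[:, None] + i_note[None, :] + f_user @ f_note.T
        err = (r - pred) * mask                      # only observed cells
        mu += lr * err.sum() / mask.sum()
        i_user += lr * (err.sum(1) / np.maximum(mask.sum(1), 1) - reg * i_user)
        i_note += lr * (err.sum(0) / np.maximum(mask.sum(0), 1) - reg * i_note)
        f_user += lr * (err @ f_note - reg * f_user)
        f_note += lr * (err.T @ f_user - reg * f_note)
    return i_note

# Toy data: two polarized camps of three raters each and two notes.
# Note 0 is rated helpful only by camp A; note 1 by both camps.
ratings = np.array([[1.0, 1.0]] * 3 + [[0.0, 1.0]] * 3)
scores = note_intercepts(ratings)
# The cross-camp note (index 1) earns a higher intercept than the partisan one.
```

In the production system, only notes whose fitted intercept clears a threshold are shown publicly, which is the property Meta says makes the approach resistant to one-sided brigading.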

In a post, X’s Support account said that X was “excited” that Meta was using its “well-established, academically studied program as a foundation” for its community notes.



Elon Musk to “fix” Community Notes after they contradict Trump

Elon Musk apparently no longer believes that crowdsourced fact-checking through Community Notes is immune to manipulation and is, therefore, the best way to correct bad posts on his social media platform X.

Community Notes are supposed to be added to posts to limit misinformation spread after a broad consensus is reached among X users with diverse viewpoints on what corrections are needed. But Musk now claims a “fix” is needed to prevent outside influencers from allegedly gaming the system.

“Unfortunately, @CommunityNotes is increasingly being gamed by governments & legacy media,” Musk wrote on X. “Working to fix this.”

Musk’s announcement came after Community Notes were added to X posts discussing a poll generating favorable ratings for Ukraine President Volodymyr Zelenskyy. That poll was conducted by a private Ukrainian company in partnership with a state university whose supervisory board was appointed by the Ukrainian government, creating what Musk seems to view as a conflict of interest.

Although other independent polling recently documented a similar increase in Zelenskyy’s approval rating, NBC News reported, the specific poll cited in X notes contradicted Donald Trump’s claim that Zelenskyy is unpopular, and Musk seemed to expect that X notes should instead provide context defending Trump’s viewpoint. Musk even suggested that by pointing to the supposedly government-linked poll in Community Notes, X users were spreading misinformation.

“It should be utterly obvious that a Zelensky[y]-controlled poll about his OWN approval is not credible!!” Musk wrote on X.

Musk’s attack on Community Notes is somewhat surprising. Although he has always maintained that Community Notes aren’t “perfect,” he has defended Community Notes through multiple European Union probes challenging their effectiveness and declared that the goal of the crowdsourcing effort was to make X “by far the best source of truth on Earth.” At CES 2025, X CEO Linda Yaccarino bragged that Community Notes are “good for the world.”

Yaccarino invited audience members to “think about it as this global collective consciousness keeping each other accountable at global scale in real time,” but just one month later, Musk is suddenly casting doubts on that characterization while the European Union continues to probe X.

Perhaps most significantly, Musk previously insisted as recently as last year that Community Notes could not be manipulated, even by Musk. He strongly disputed a 2024 report from the Center for Countering Digital Hate that claimed that toxic X users were downranking accurate notes that they personally disagreed with, claiming any attempt at gaming Community Notes would stick out like a “neon sore thumb.”
