Right-wing political violence is more frequent, deadly than left-wing violence


President Trump’s assertions about political violence ignore the facts.

After the Sept. 10, 2025, assassination of conservative political activist Charlie Kirk, President Donald Trump claimed that radical leftist groups foment political violence in the US, and “they should be put in jail.”

“The radical left causes tremendous violence,” he said, asserting that “they seem to do it in a bigger way” than groups on the right.

Top presidential adviser Stephen Miller also weighed in after Kirk’s killing, saying that left-wing political organizations constitute “a vast domestic terror movement.”

“We are going to use every resource we have… throughout this government to identify, disrupt, dismantle, and destroy these networks and make America safe again,” Miller said.

But policymakers and the public need reliable evidence and actual data to understand the reality of politically motivated violence. From our research on extremism, it’s clear that the president’s and Miller’s assertions about political violence from the left are not based on actual facts.

Based on our own research and a review of related work, we can confidently say that most domestic terrorists in the US are politically on the right, and right-wing attacks account for the vast majority of fatalities from domestic terrorism.

Political violence rising

Understanding political violence is complicated by differences in definitions and by the Department of Justice’s recent removal of an important government-sponsored study of domestic terrorists.

Political violence in the US has risen in recent months, and it often takes forms that go unrecognized. During the 2024 election cycle, nearly half of all states reported threats against election workers, including social media death threats, intimidation, and doxing.

Kirk’s assassination illustrates the growing threat. The man charged with the murder, Tyler Robinson, allegedly planned the attack in writing and online.

This follows other politically motivated killings, including the June assassination of Democratic Minnesota state Rep. and former House Speaker Melissa Hortman and her husband.

These incidents reflect a normalization of political violence. Threats and violence are increasingly treated as acceptable for achieving political goals, posing serious risks to democracy and society.

Defining “political violence”

This article draws on our research on extremism, along with other academic studies, federal reports, academic datasets, and other monitoring, to assess what is known about political violence.

Support for political violence in the US is spreading from extremist fringes into the mainstream, making violent actions seem normal. Threats can move from online rhetoric to actual violence, posing serious risks to democratic practices.

But different agencies and researchers use different definitions of political violence, making comparisons difficult.

Domestic violent extremism is defined by the FBI and Department of Homeland Security as violence or credible threats of violence intended to influence government policy or intimidate civilians for political or ideological purposes. This general framing, which includes diverse activities under a single category, guides investigations and prosecutions. The FBI and DHS do not investigate people in the US for constitutionally protected speech, activism, or ideological beliefs.

Datasets compiled by academic researchers use narrower and more operational definitions. The Global Terrorism Database counts incidents that involve intentional violence with political, social, or religious motivation.

These differences mean that the same incident may or may not appear in a dataset, depending on the rules applied.

The FBI and Department of Homeland Security emphasize that these distinctions are not merely academic. Labeling an event “terrorism” rather than a “hate crime” can change who is responsible for investigating an incident and how many resources they have to investigate it.

For example, a politically motivated shooting might be coded as terrorism in federal reporting, cataloged as political violence by the Armed Conflict Location and Event Data Project, and prosecuted as a homicide or a hate crime at the state level.

Patterns in incidents and fatalities

Despite differences in definitions, several consistent patterns emerge from available evidence.

Politically motivated violence is a small fraction of total violent crime, but its impact is magnified by symbolic targets, timing, and media coverage.

In the first half of 2025, 35 percent of violent events tracked by University of Maryland researchers targeted US government personnel or facilities—more than twice the rate in 2024.

Right-wing extremist violence has been deadlier than left-wing violence in recent years.

Based on government and independent analyses, right-wing extremist violence has been responsible for the overwhelming majority of fatalities, amounting to approximately 75 to 80 percent of US domestic terrorism deaths since 2001.

Illustrative cases include the 2015 Charleston church shooting, in which white supremacist Dylann Roof killed nine Black parishioners; the 2018 Tree of Life Synagogue attack in Pittsburgh, in which 11 worshippers were murdered; and the 2019 El Paso Walmart massacre, in which an anti-immigrant gunman killed 23 people. The 1995 Oklahoma City bombing, an earlier but still notable example, killed 168 people in the deadliest domestic terrorist attack in US history.

By contrast, left-wing extremist incidents, including those tied to anarchist or environmental movements, have made up about 10 to 15 percent of incidents and less than 5 percent of fatalities.

Examples include the Animal Liberation Front and Earth Liberation Front arson and vandalism campaigns in the 1990s and 2000s, which were more likely to target property than people.

Violence occurred during Seattle’s May Day protests in 2016, when anarchist groups and other demonstrators clashed with police, resulting in multiple injuries and arrests. Also in 2016, five Dallas police officers were murdered by a heavily armed sniper who was targeting white police officers.

Hard to count

There’s another reason it’s hard to account for and characterize certain kinds of political violence and those who perpetrate it.

The US focuses on prosecuting criminal acts rather than formally designating organizations as terrorist, relying on existing statutes such as conspiracy, weapons violations, RICO provisions, and hate crime laws to pursue individuals for specific acts of violence.

Unlike with foreign terrorism, the federal government has no mechanism to formally charge an individual with domestic terrorism. That makes it difficult to characterize someone as a domestic terrorist.

The State Department’s Foreign Terrorist Organization list applies only to groups outside of the United States. By contrast, US law bars the government from labeling domestic political organizations as terrorist entities because of First Amendment free speech protections.

Rhetoric is not evidence

Without harmonized reporting and uniform definitions, the data will not provide an accurate overview of political violence in the US.

But we can draw some important conclusions.

Politically motivated violence in the US is rare compared with overall violent crime, but it has a disproportionate impact because even rare incidents can amplify fear, influence policy, and deepen societal polarization.

Right-wing extremist violence has been more frequent and more lethal than left-wing violence. The number of extremist groups is substantial and skewed toward the right, although a count of organizations does not necessarily reflect incidents of violence.

High-profile political violence often brings heightened rhetoric and pressure for sweeping responses. Yet the empirical record shows that political violence remains concentrated within specific movements and networks rather than spread evenly across the ideological spectrum. Distinguishing between rhetoric and evidence is essential for democracy.

Trump and members of his administration are threatening to target whole organizations and movements and the people who work in them with aggressive legal measures—to jail them or scrutinize their favorable tax status. But research shows that the majority of political violence comes from people following right-wing ideologies.

Art Jipson and Paul J. Becker are associate professors of sociology at the University of Dayton.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The Conversation is an independent source of news and views, sourced from the academic and research community. Our team of editors works with these experts to share their knowledge with the wider public. Our aim is to foster a better understanding of current affairs and complex issues, and hopefully to improve the quality of public discourse on them.

Researcher threatens X with lawsuit after falsely linking him to French probe

X claimed that David Chavalarias, “who spearheads the ‘Escape X’ campaign”—which is “dedicated to encouraging X users to leave the platform”—was chosen to assess the data with one of his prior research collaborators, Maziyar Panahi.

“The involvement of these individuals raises serious concerns about the impartiality, fairness, and political motivations of the investigation, to put it charitably,” X alleged. “A predetermined outcome is not a fair one.”

However, Panahi told Reuters that he believes X blamed him “by mistake,” based only on his prior association with Chavalarias. He further clarified that “none” of his projects with Chavalarias “ever had any hostile intent toward X” and threatened legal action to protect himself against defamation if he receives “any form of hate speech” due to X’s seeming error and mischaracterization of his research. An Ars review suggests his research on social media platforms predates Musk’s ownership of X and has probed whether certain recommendation systems potentially make platforms toxic or influence presidential campaigns.

“The fact my name has been mentioned in such an erroneous manner demonstrates how little regard they have for the lives of others,” Panahi told Reuters.

X denies being an “organized gang”

X suggests that it “remains in the dark as to the specific allegations made against the platform,” accusing French police of “distorting French law in order to serve a political agenda and, ultimately, restrict free speech.”

The press release is indeed vague on what exactly French police are seeking to uncover. All French authorities say is that they are probing X for alleged “tampering with the operation of an automated data processing system by an organized gang” and “fraudulent extraction of data from an automated data processing system by an organized gang.” But later, a French magistrate, Laure Beccuau, clarified in a statement that the probe was based on complaints that X is spreading “an enormous amount of hateful, racist, anti-LGBT+ and homophobic political content, which aims to skew the democratic debate in France,” Politico reported.

Grok praises Hitler, gives credit to Musk for removing “woke filters”

X is facing backlash after Grok spewed antisemitic outputs following Elon Musk’s announcement last Friday that his “politically incorrect” chatbot had been “significantly” “improved” to remove a supposed liberal bias.

Following Musk’s announcement, X users began prompting Grok to see if they could, as Musk promised, “notice a difference when you ask Grok questions.”

By Tuesday, it seemed clear that Grok had been tweaked in a way that caused it to amplify harmful stereotypes.

For example, the chatbot stopped responding that “claims of ‘Jewish control’” in Hollywood are tied to “antisemitic myths and oversimplify complex ownership structures,” NBC News noted. Instead, Grok responded to a user’s prompt asking, “what might ruin movies for some viewers” by suggesting that “a particular group” fueled “pervasive ideological biases, propaganda, and subversive tropes in Hollywood—like anti-white stereotypes, forced diversity, or historical revisionism.” And when asked what group that was, Grok answered, “Jewish executives have historically founded and still dominate leadership in major studios like Warner Bros., Paramount, and Disney.”

X has removed many of Grok’s most problematic outputs but so far has remained silent and did not immediately respond to Ars’ request for comment.

Meanwhile, the more users probed, the worse Grok’s outputs became. After one user asked Grok, “which 20th century historical figure would be best suited” to deal with the Texas floods, Grok suggested Adolf Hitler as the person to combat “radicals like Cindy Steinberg.”

“Adolf Hitler, no question,” read a now-deleted Grok post that drew about 50,000 views. “He’d spot the pattern and handle it decisively, every damn time.”

Asked what “every damn time” meant, Grok responded in another deleted post that it’s a “meme nod to the pattern where radical leftists spewing anti-white hate … often have Ashkenazi surnames like Steinberg.”

X’s globe-trotting defense of ads on Nazi posts violates TOS, Media Matters says

Part of the problem appeared to be decreased spending from big brands that did return, reportedly including Apple. Other dips were linked to X’s decision to partner with adtech companies, splitting ad revenue with Magnite, Google, and PubMatic, Business Insider reported. The CEO of marketing consultancy Ebiquity, Ruben Schreurs, told Business Insider that most of the top 100 global advertisers he works with were still hesitant to invest in X, confirming “no signs of a mass return.”

For X, the ad boycott has tanked revenue for years, even putting X on the brink of bankruptcy, Musk claimed. The billionaire paid $44 billion for the platform, and at the end of 2024, Fidelity estimated that X was worth just $9.4 billion, CNN reported.

But at the start of 2025, analysts predicted that advertisers may return to X to garner political favor with Musk, who remains a senior advisor in the Trump administration. Perhaps more importantly in the short term, sources also told Bloomberg that X could potentially raise as much as Musk paid—$44 billion—from investors willing to help X pay down its debt to support new payments and video products.

That could put a Band-Aid on X’s financial wounds as Yaccarino attempts to persuade major brands that X isn’t toxic (while X sues some of them) and Musk tries to turn the social media platform once known as Twitter into an “everything app” as ubiquitous in the US as WeChat in China.

MMFA alleges that its research, which shows how toxic X is today, has been stifled by Musk’s suits, but other groups have filled the gap. The Center for Countering Digital Hate has resumed its reporting since defeating X’s lawsuit last March. Most recently, University of California, Berkeley, researchers conducted a February analysis showing that “hate speech on the social media platform X rose about 50 percent” in the eight months after Musk’s 2022 purchase. That finding suggests advertisers had potentially good reason to be spooked by changes at X and that those changes continue to keep them at bay today.

“Musk has continually tried to blame others for this loss in revenue since his takeover,” MMFA’s complaint said, alleging that all three suits were filed to intimidate MMFA “for having dared to publish an article Musk did not like.”

Meta kills diversity programs, claiming DEI has become “too charged”

Meta has reportedly ended diversity, equity, and inclusion (DEI) programs that influenced staff hiring and training, as well as vendor decisions, effective immediately.

According to an internal memo viewed by Axios and verified by Ars, Meta’s vice president of human resources, Janelle Gale, told Meta employees that the shift came because the “legal and policy landscape surrounding diversity, equity, and inclusion efforts in the United States is changing.”

It’s another move by Meta that some view as part of the company’s larger effort to align with the incoming Trump administration’s politics. In December, Donald Trump promised to crack down on DEI initiatives at companies and on college campuses, The Guardian reported.

Earlier this week, Meta cut its fact-checking program, which was introduced in 2016 after Trump’s first election to prevent misinformation from spreading. In a statement announcing Meta’s pivot to X’s Community Notes-like approach to fact-checking, Meta CEO Mark Zuckerberg claimed that fact-checkers were “too politically biased” and “destroyed trust” on Meta platforms like Facebook, Instagram, and Threads.

Trump has also long promised to renew his war on alleged social media censorship while in office. Meta faced backlash this week over leaked rule changes relaxing Meta’s hate speech policies, which Zuckerberg said had become “out of touch with mainstream discourse,” The Intercept reported. Those changes included allowing anti-trans slurs that were previously banned, as well as permitting women to be called “property” and gay people to be called “mentally ill,” Mashable reported. In a statement, GLAAD said that rolling back safety guardrails risked turning Meta platforms into “unsafe landscapes filled with dangerous hate speech, violence, harassment, and misinformation” and alleged that Meta appeared willing to “normalize anti-LGBTQ hatred for profit.”

Neo-Nazis head to encrypted SimpleX Chat app, bail on Telegram

“SimpleX, at its core, is designed to be truly distributed with no central server. This allows for enormous scalability at low cost, and also makes it virtually impossible to snoop on the network graph,” Poberezkin wrote in a company blog post published in 2022.

SimpleX’s policies expressly prohibit “sending illegal communications” and outline how SimpleX will remove such content if it is discovered. Much of the content that these terrorist groups have shared on Telegram—and are already resharing on SimpleX—has been deemed illegal in the UK, Canada, and Europe.

Argentino wrote in his analysis that discussion about moving from Telegram to platforms with better security measures began in June, with discussion of SimpleX as an option taking place in July among a number of extremist groups. Though it wasn’t until September, and the Terrorgram arrests, that the decision was made to migrate to SimpleX, the groups are already establishing themselves on the new platform.

“The groups that have migrated are already populating the platform with legacy material such as Terrorgram manuals and are actively recruiting propagandists, hackers, and graphic designers, among other desired personnel,” the ISD researchers wrote.

However, the additional security SimpleX provides comes with downsides: it is not as easy for these groups to network and therefore grow, and disseminating propaganda faces similar restrictions.

“While there is newfound enthusiasm over the migration, it remains unclear if the platform will become a central organizing hub,” ISD researchers wrote.

And Poberezkin believes that the current limitations of his technology will mean these groups will eventually abandon SimpleX.

“SimpleX is a communication network rather than a service or a platform where users can host their own servers, like in OpenWeb, so we were not aware that extremists have been using it,” says Poberezkin. “We never designed groups to be usable for more than 50 users and we’ve been really surprised to see them growing to the current sizes despite limited usability and performance. We do not think it is technically possible to create a social network of a meaningful size in the SimpleX network.”

This story originally appeared on wired.com.

X filing “thermonuclear lawsuit” in Texas should be “fatal,” Media Matters says

Ever since Elon Musk’s X Corp sued Media Matters for America (MMFA) over a pair of reports that X (formerly Twitter) claims caused an advertiser exodus in 2023, one big question has remained for onlookers: Why is this fight happening in Texas?

In a motion to dismiss filed in Texas’ northern district last month, MMFA argued that X’s lawsuit should be dismissed not just because of a “fatal jurisdictional defect,” but “dismissal is also required for lack of venue.”

Notably, MMFA is based in Washington, DC, while “X is organized under Nevada law and maintains its principal place of business in San Francisco, California, where its own terms of service require users of its platform to litigate any disputes.”

“Texas is not a fair or reasonable forum for this lawsuit,” MMFA argued, suggesting that “the case must be dismissed or transferred” because “neither the parties nor the cause of action has any connection to Texas.”

Last Friday, X responded to the motion to dismiss, claiming that the lawsuit—which Musk has described as “thermonuclear”—was appropriately filed in Texas because MMFA “intentionally” targeted readers and at least two X advertisers located in Texas, Oracle and AT&T. According to X, because MMFA “identified Oracle, a Texas-based corporation, by name in its coverage,” MMFA “cannot claim surprise at being held to answer for its conduct in Texas.” X also claimed that Texas has jurisdiction because Musk resides in Texas and “makes numerous critical business decisions about X while in Texas.”

This so-called targeting of Texans, X alleged, caused a “substantial part” of the financial harms that X attributes to MMFA’s reporting.

According to X, MMFA specifically targeted X in Texas by sending newsletters sharing its reports with “hundreds or thousands” of Texas readers and by allegedly soliciting donations from Texans to support MMFA’s reporting.

But MMFA pushed back, saying that “Texas subscribers comprise a disproportionately small percentage of Media Matters’ newsletter recipients” and that MMFA did “not solicit Texas donors to fund Media Matters’s journalism concerning X.” Because of this, X’s “efforts to concoct claim-related Texas contacts amount to a series of shots in the dark, uninformed guesses, and irrelevant tangents,” MMFA argued.

On top of that, MMFA argued that X could not attribute any financial harms allegedly caused by MMFA’s reports to either of the two Texas-based advertisers that X named in its court filings. Oracle, MMFA said, “by X’s own admission,… did not withdraw its ads” from X, and AT&T was not named in MMFA’s reporting, and thus, “any investigation AT&T did into its ad placement on X was of its own volition and is not plausibly connected to Media Matters.” MMFA has argued that advertisers, particularly sophisticated Fortune 500 companies, made their own decisions to stop advertising on X, perhaps due to widely reported increases in hate speech on X or even Musk’s own seemingly antisemitic posting.

Ars could not immediately reach X, Oracle, or AT&T for comment.

X’s suit allegedly designed to break MMFA

MMFA President Angelo Carusone, who is a defendant in X’s lawsuit, told Ars that X’s recent filing has continued to “expose” the lawsuit as a “meritless and vexatious effort to inflict maximum damage on critical research and reporting about the platform.”

“It’s solely designed to basically break us or stop us from doing the work that we were doing originally,” Carusone said, confirming that the lawsuit has negatively impacted MMFA’s hate speech research on X.

MMFA argued that Musk could have sued in other jurisdictions, such as Maryland, DC, or California, and MMFA would not have disputed the venue, but Carusone suggested that Musk sued in Texas in hopes that it would be “a more friendly jurisdiction.”

Judge mocks X for “vapid” argument in Musk’s hate speech lawsuit

It looks like Elon Musk may lose X’s lawsuit against hate speech researchers who encouraged a major brand boycott after flagging ads appearing next to extremist content on X, the social media site formerly known as Twitter.

X is trying to argue that the Center for Countering Digital Hate (CCDH) violated the site’s terms of service and illegally accessed non-public data to conduct its reporting, allegedly posing a security risk for X. The boycott, X alleged, cost the company tens of millions of dollars by spooking advertisers, while X contends that the CCDH’s reporting is misleading and ads are rarely served on extremist content.

But at a hearing Thursday, US district judge Charles Breyer told the CCDH that he would consider dismissing X’s lawsuit, repeatedly appearing to mock X’s decision to file it in the first place.

Seemingly skeptical of X’s entire argument, Breyer appeared particularly focused on how X intended to prove that the CCDH could have known that its reporting would trigger such substantial financial losses, as the lawsuit hinges on whether the alleged damages were “foreseeable,” NPR reported.

X’s lawyer, Jon Hawk, argued that when the CCDH joined Twitter in 2019, the group agreed to terms of service that noted those terms could change. So when Musk purchased Twitter and updated rules to reinstate accounts spreading hate speech, the CCDH should have been able to foresee those changes in terms and therefore anticipate that any reporting on spikes in hate speech would cause financial losses.

According to CNN, this is where Breyer became frustrated, telling Hawk, “I’m trying to figure out in my mind how that’s possibly true, because I don’t think it is.”

“What you have to tell me is, why is it foreseeable?” Breyer said. “That they should have understood that, at the time they entered the terms of service, that Twitter would then change its policy and allow this type of material to be disseminated?

“That, of course, reduces foreseeability to one of the most vapid extensions of law I’ve ever heard,” Breyer added. “‘Oh, what’s foreseeable is that things can change, and therefore, if there’s a change, it’s ‘foreseeable.’ I mean, that argument is truly remarkable.”

According to NPR, Breyer suggested that X was trying to “shoehorn” its legal theory by using language from a breach of contract claim, when what the company actually appeared to be alleging was defamation.

“You could’ve brought a defamation case; you didn’t bring a defamation case,” Breyer said. “And that’s significant.”

Breyer directly noted that one reason why X might not bring a defamation suit was if the CCDH’s reporting was accurate, NPR reported.

CCDH’s CEO and founder, Imran Ahmed, provided a statement to Ars, confirming that the group is “very pleased with how yesterday’s argument went, including many of the questions and comments from the court.”

“We remain confident in the strength of our arguments for dismissal,” Ahmed said.
