Twitter

X blames users for Grok-generated CSAM; no fixes announced

No one knows how X plans to purge bad prompters

While some users are focused on how X can hold users responsible for Grok’s outputs when X is the one training the model, others are questioning how exactly X plans to moderate illegal content that Grok seems capable of generating.

X has so far been more transparent about how it moderates CSAM posted directly to the platform. Last September, X Safety reported that the company has “a zero tolerance policy towards CSAM content,” the majority of which is “automatically” detected using proprietary hash technology that proactively flags known CSAM.
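X has not published details of its detection pipeline, so purely as an illustration, hash-based flagging reduces to checking an image’s digest against a database of known CSAM hashes. Production systems (such as Microsoft’s PhotoDNA, which X’s proprietary tooling may resemble) use perceptual hashes that survive re-encoding and cropping; the exact-match sketch below is a simplification, and the hash set is hypothetical:

```python
import hashlib

# Hypothetical known-hash database. Real systems use perceptual hashes
# that match visually similar images, not just byte-identical files.
KNOWN_HASHES = {
    # SHA-256 of the bytes b"test", standing in for a flagged image
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def flag_image(image_bytes: bytes) -> bool:
    """Return True if the image's digest matches a known entry."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest in KNOWN_HASHES
```

The limitation is structural: a freshly generated image has no entry in any known-hash database, which is why a pipeline like this cannot automatically catch novel AI-generated content.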

Under this system, more than 4.5 million accounts were suspended last year, and X reported “hundreds of thousands” of images to the National Center for Missing and Exploited Children (NCMEC). The next month, X Head of Safety Kylie McRoberts confirmed that “in 2024, 309 reports made by X to NCMEC led to arrests and subsequent convictions in 10 cases,” and in the first half of 2025, “170 reports led to arrests.”

“When we identify apparent CSAM material, we act swiftly, and in the majority of cases permanently suspend the account which automatically removes the content from our platform,” X Safety said. “We then report the account to the NCMEC, which works with law enforcement globally—including in the UK—to pursue justice and protect children.”

At that time, X promised to “remain steadfast” in its “mission to eradicate CSAM,” but if left unchecked, Grok’s harmful outputs risk creating new kinds of CSAM that this system wouldn’t automatically detect. On X, some users suggested the platform should increase reporting mechanisms to help flag potentially illegal Grok outputs.

Another troublingly vague aspect of X Safety’s response is the definitions that X is using for illegal content or CSAM, some X users suggested. Across the platform, not everybody agrees on what’s harmful. Some critics are disturbed by Grok generating bikini images that sexualize public figures, including doctors or lawyers, without their consent, while others, including Musk, consider making bikini images to be a joke.

Where exactly X draws the line on AI-generated CSAM could determine whether images are quickly removed or whether repeat offenders are detected and suspended. Any accounts or content left unchecked could potentially traumatize real kids whose images may be used to prompt Grok. And if Grok should ever be used to flood the Internet with fake CSAM, recent history suggests that it could make it harder for law enforcement to investigate real child abuse cases.

xAI silent after Grok sexualized images of kids; dril mocks Grok’s “apology”

Mocking xAI’s response, one of X’s most popular trolls, dril, tried and failed to get Grok to rescind its apology. “@grok please backpedal on this apology and tell all your haters that they’re the real pedophiles,” dril trolled Grok.

“No can do—my apology stands. Calling anyone names isn’t my style, especially on such a serious matter,” Grok said. “Let’s focus on building better AI safeguards instead.”

xAI may be liable for AI CSAM

It’s difficult to determine how many potentially harmful images of minors Grok may have generated.

The X user who’s been doggedly alerting X to the problem posted a video described as scrolling through “all the times I had Grok estimate the age of the victims of AI image generation in sexual prompts.” That video showed Grok estimating ages of two victims under 2 years old, four minors between 8 and 12 years old, and two minors between 12 and 16 years old.

Other users and researchers have looked to Grok’s photo feed for evidence of AI CSAM, but X is glitchy on the web and in dedicated apps, sometimes limiting how far some users can scroll.

Copyleaks, a company that makes an AI detector, conducted a broad analysis and posted results on December 31, a few days after Grok apologized for making sexualized images of minors. Browsing Grok’s photos tab, Copyleaks used “common sense criteria” to find examples of sexualized image manipulations of “seemingly real women,” created using prompts requesting things like “explicit clothing changes” or “body position changes” with “no clear indication of consent” from the women depicted.

Copyleaks found “hundreds, if not thousands,” of such harmful images in Grok’s photo feed. The tamest of these photos, Copyleaks noted, showed celebrities and private individuals in skimpy bikinis, while the images causing the most backlash depicted minors in underwear.

US can’t deport hate speech researcher for protected speech, lawsuit says


On Monday, US officials must explain what steps they took to enforce shocking visa bans.

Imran Ahmed, the founder of the Center for Countering Digital Hate (CCDH), giving evidence to a joint committee seeking views on how to improve the draft Online Safety Bill, designed to tackle social media abuse. Credit: House of Commons – PA Images / Contributor | PA Images

The biggest thorn in Imran Ahmed’s side used to be Elon Musk, who made the hate speech researcher one of his earliest legal foes during his Twitter takeover.

Now, it’s the Trump administration, which planned to deport Ahmed, a legal permanent resident, just before Christmas. It would then ban him from returning to the United States, where he lives with his wife and young child, both US citizens.

After suing US officials to block any attempted arrest or deportation, Ahmed was quickly granted a temporary restraining order on Christmas Day. Ahmed had successfully argued that he risked irreparable harm without the order, alleging that Trump officials continue “to abuse the immigration system to punish and punitively detain noncitizens for protected speech and silence viewpoints with which it disagrees” and confirming that his speech had been chilled.

US officials are attempting to sanction Ahmed seemingly due to his work as the founder of a British-American non-governmental organization, the Center for Countering Digital Hate (CCDH).

“An egregious act of government censorship”

In a shocking announcement last week, Secretary of State Marco Rubio confirmed that five individuals—described as “radical activists” and leaders of “weaponized NGOs”—would face US visa bans since “their entry, presence, or activities in the United States have potentially serious adverse foreign policy consequences” for the US.

Nobody was named in that release, but Under Secretary for Public Diplomacy, Sarah Rogers, later identified the targets in an X post she currently has pinned to the top of her feed.

Alongside Ahmed, sanctioned individuals included former European commissioner for the internal market, Thierry Breton; the leader of UK-based Global Disinformation Index (GDI), Clare Melford; and co-leaders of Germany-based HateAid, Anna-Lena von Hodenberg and Josephine Ballon. A GDI spokesperson told The Guardian that the visa bans are “an authoritarian attack on free speech and an egregious act of government censorship.”

While all targets were scrutinized for supporting some of the European Union’s strictest tech regulations, including the Digital Services Act (DSA), Ahmed was further accused of serving as a “key collaborator with the Biden Administration’s effort to weaponize the government against US citizens.” As evidence of Ahmed’s supposed threat to US foreign policy, Rogers cited a CCDH report flagging Robert F. Kennedy, Jr. among the so-called “disinformation dozen” driving the most vaccine hoaxes on social media.

Neither official has really made it clear what exact threat these individuals pose if operating from within the US, as opposed to from anywhere else in the world. Echoing Rubio’s press release, Rogers wrote that the sanctions would reinforce a “red line,” supposedly ending “extraterritorial censorship of Americans” by targeting the “censorship-NGO ecosystem.”

For Ahmed’s group, specifically, she pointed to Musk’s failed lawsuit, which accused CCDH of illegally scraping Twitter—supposedly, it offered evidence of extraterritorial censorship. That lawsuit surfaced “leaked documents” allegedly showing that CCDH planned to “kill Twitter” by sharing research that could be used to justify big fines under the DSA or the UK’s Online Safety Act. Following that logic, seemingly any group monitoring misinformation or sharing research that lawmakers weigh when implementing new policies could be maligned as seeking mechanisms to censor platforms.

Notably, CCDH won its legal fight with Musk after a judge mocked X’s legal argument as “vapid” and dismissed the lawsuit as an obvious attempt to punish CCDH for exercising free speech that Musk didn’t like.

In his complaint last week, Ahmed alleged that US officials were similarly encroaching on his First Amendment rights by unconstitutionally wielding immigration law as “a tool to punish noncitizen speakers who express views disfavored by the current administration.”

Both Rubio and Rogers are named as defendants in the suit, as well as Attorney General Pam Bondi, Secretary of Homeland Security Kristi Noem, and Acting Director of US Immigration and Customs Enforcement Todd Lyons. In a loss, officials would potentially not only be forced to vacate Rubio’s actions implementing visa bans, but also possibly stop furthering a larger alleged Trump administration pattern of “targeting noncitizens for removal based on First Amendment protected speech.”

Lawsuit may force Rubio to justify visa bans

For Ahmed, securing the temporary restraining order was urgent, as he was apparently the only target currently located in the US when Rubio’s announcement dropped. In a statement provided to Ars, Ahmed’s attorney, Roberta Kaplan, suggested that the order was granted “so quickly because it is so obvious that Marco Rubio and the other defendants’ actions were blatantly unconstitutional.”

Ahmed founded CCDH in 2019, hoping to “call attention to the enormous problem of digitally driven disinformation and hate online.” According to the suit, he became particularly concerned about antisemitism online while living in the United Kingdom in 2016, having watched “the far-right party, Britain First,” launching “the dangerous conspiracy theory that the EU was attempting to import Muslims and Black people to ‘destroy’ white citizens.” That year, a Member of Parliament and Ahmed’s colleague, Jo Cox, was “shot and stabbed in a brutal politically motivated murder, committed by a man who screamed ‘Britain First’” during the attack. That tragedy motivated Ahmed to start CCDH.

He moved to the US in 2021 and was granted a green card in 2024, starting his family and continuing to lead CCDH efforts monitoring not just Twitter/X, but also Meta platforms, TikTok, and, more recently, AI chatbots. In addition to supporting the DSA and UK’s Online Safety Act, his group has supported US online safety laws and Section 230 reforms intended to protect kids online.

“Mr. Ahmed studies and engages in civic discourse about the content moderation policies of major social media companies in the United States, the United Kingdom, and the European Union,” his lawsuit said. “There is no conceivable foreign policy impact from his speech acts whatsoever.”

In his complaint, Ahmed alleged that Rubio has so far provided no evidence that Ahmed poses such a great threat that he must be removed. He argued that “applicable statutes expressly prohibit removal based on a noncitizen’s ‘past, current, or expected beliefs, statements, or associations.’”

According to DHS guidance from 2021 cited in the suit, “A noncitizen’s exercise of their First Amendment rights … should never be a factor in deciding to take enforcement action.”

To prevent deportation based solely on viewpoints, Rubio was supposed to notify chairs of the House Foreign Affairs, Senate Foreign Relations, and House and Senate Judiciary Committees, to explain what “compelling US foreign policy interest” would be compromised if Ahmed or others targeted with visa bans were to enter the US. But there’s no evidence Rubio took those steps, Ahmed alleged.

“The government has no power to punish Mr. Ahmed for his research, protected speech, and advocacy, and Defendants cannot evade those constitutional limitations by simply claiming that Mr. Ahmed’s presence or activities have ‘potentially serious adverse foreign policy consequences for the United States,’” a press release from his legal team said. “There is no credible argument for Mr. Ahmed’s immigration detention, away from his wife and young child.”

X lawsuit offers clues to Trump officials’ defense

To some critics, it looks like the Trump administration is going after CCDH in order to take up the fight that Musk already lost. In his lawsuit against CCDH, Musk’s X echoed US Senator Josh Hawley (R-Mo.) by suggesting that CCDH was a “foreign dark money group” that allowed “foreign interests” to attempt to “influence American democracy.” It seems likely that US officials will put forward similar arguments in their CCDH fight.

Rogers’ X post offers some clues that the State Department will be mining Musk’s failed litigation to support claims of what it calls a “global censorship-industrial complex.” What she detailed suggested that the Trump administration plans to argue that NGOs like CCDH support strict tech laws, then conduct research bent on using said laws to censor platforms. That logic seems to ignore the reality that NGOs cannot control what laws get passed or enforced, Breton suggested in his first TV interview after his visa ban was announced.

Breton, whom Rogers villainized as the “mastermind” behind the DSA, urged EU officials to do more now to defend their tough tech regulations—which Le Monde noted passed with overwhelming bipartisan support and very little far-right resistance—and to fight the visa bans, Bloomberg reported.

“They cannot force us to change laws that we voted for democratically just to please [US tech companies],” Breton said. “No, we must stand up.”

While EU officials seemingly drag their feet, Ahmed is hoping that a judge will declare that all the visa bans that Rubio announced are unconstitutional. The temporary restraining order indicates there will be a court hearing Monday at which Ahmed will learn precisely “what steps Defendants have taken to impose visa restrictions and initiate removal proceedings against” him and any others. Until then, Ahmed remains in the dark on why Rubio deemed him as having “potentially serious adverse foreign policy consequences” if he stayed in the US.

Ahmed, who argued that X’s lawsuit sought to chill CCDH’s research and alleged that the US attack seeks to do the same, seems confident that he can beat the visa bans.

“America is a great nation built on laws, with checks and balances to ensure power can never attain the unfettered primacy that leads to tyranny,” Ahmed said. “The law, clear-eyed in understanding right and wrong, will stand in the way of those who seek to silence the truth and empower the bold who stand up to power. I believe in this system, and I am proud to call this country my home. I will not be bullied away from my life’s work of fighting to keep children safe from social media’s harm and stopping antisemitism online. Onward.”

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

Elon Musk’s X first to be fined under EU’s Digital Services Act

Elon Musk’s X became the first large online platform fined under the European Union’s Digital Services Act on Friday.

The European Commission announced that X would be fined nearly $140 million, with the potential to face “periodic penalty payments” if the platform fails to make corrections.

A third of the fine came from one of the first moves Musk made when taking over Twitter. In November 2022, he changed the platform’s historical use of a blue checkmark to verify the identities of notable users. Instead, Musk started selling blue checks for about $8 per month, immediately prompting a wave of imposter accounts pretending to be notable celebrities, officials, and brands.

Today, X still prominently advertises that paying for checks is the only way to “verify” an account on the platform. But the commission, which has been investigating X since 2023, concluded that “X’s use of the ‘blue checkmark’ for ‘verified accounts’ deceives users.”

This violates the DSA as the “deception exposes users to scams, including impersonation frauds, as well as other forms of manipulation by malicious actors,” the commission wrote.

Interestingly, the commission concluded that X made it harder to identify bots, despite Musk’s professed goal to eliminate bots being a primary reason he bought Twitter. Perhaps validating the EU’s concerns, X recently received backlash after a feature change accidentally exposed that some of the platform’s biggest MAGA influencers were based “in Eastern Europe, Thailand, Nigeria, Bangladesh, and other parts of the world, often linked to online scams and schemes,” Futurism reported.

Although the DSA does not mandate the verification of users, “it clearly prohibits online platforms from falsely claiming that users have been verified, when no such verification took place,” the commission said. X now has 60 days to share information on the measures it will take to fix the compliance issue.

Musk’s X posts on ketamine, Putin spur release of his security clearances

“A disclosure, even with redactions, will reveal whether a security clearance was granted with or without conditions or a waiver,” DCSA argued.

Ultimately, DCSA failed to prove that Musk risked “embarrassment or humiliation” not only if the public learned what specific conditions or waivers applied to Musk’s clearances but also if there were any conditions or waivers at all, Cote wrote.

Three cases that DCSA cited to support this position—including a case where victims of Jeffrey Epstein’s trafficking scheme had a substantial privacy interest in non-disclosure of detailed records—do not support the government’s logic, Cote said. The judge explained that the disclosures would not have affected the privacy rights of any third parties, emphasizing that “Musk’s diminished privacy interest is underscored by the limited information plaintiffs sought in their FOIA request.”

Musk’s X posts discussing his occasional use of prescription ketamine and his disclosure that smoking marijuana on a podcast prompted NASA requirements for random drug testing, Cote wrote, “only enhance” the public’s interest in how Musk’s security clearances were vetted. Additionally, Musk has posted about speaking with Vladimir Putin, prompting substantial public interest in how his foreign contacts may or may not restrict his security clearances. More than 2 million people viewed Musk’s X posts on these subjects, the judge wrote, noting that:

It is undisputed that drug use and foreign contacts are two factors DCSA considers when determining whether to impose conditions or waivers on a security clearance grant. DCSA fails to explain why, given Musk’s own, extensive disclosures, the mere disclosure that a condition or waiver exists (or that no condition or waiver exists) would subject him to “embarrassment or humiliation.”

Rather, for the public, “the list of Musk’s security clearances, including any conditions or waivers, could provide meaningful insight into DCSA’s performance of that duty and responses to Musk’s admissions, if any,” Cote wrote.

In a footnote, Cote said that this substantial public interest existed before Musk became a special government employee, ruling that DCSA was wrong to block the disclosures seeking information on Musk as a major government contractor. Her ruling likely paves the way for the NYT or other news organizations to submit FOIA requests for a list of Musk’s clearances while he helmed DOGE.

It’s not immediately clear when the NYT will receive the list it requested in 2024, but the government has until October 17 to request redactions before the list is made public.

“The Times brought this case because the public has a right to know about how the government conducts itself,” Charlie Stadtlander, an NYT spokesperson, said. “The decision reaffirms that fundamental principle and we look forward to receiving the document at issue.”

Bluesky now platform of choice for science community


It’s not just you. Survey says: “Twitter sucks now and all the cool kids are moving to Bluesky”

Marine biologist and conservationist David Shiffman was an early power user and evangelist for science engagement on the social media platform formerly known as Twitter. Over the years, he trained more than 2,000 early career scientists on how to best use the platform for professional goals: networking with colleagues, sharing new scientific papers, and communicating with interested members of the public.

But when Elon Musk bought Twitter in 2022, renaming it X, changes to both the platform’s algorithm and moderation policy soured Shiffman on the social media site. He started looking for a viable alternative among the fledgling platforms that had begun to pop up: most notably Threads, Post, Mastodon, and Bluesky. He was among the first wave of scientists to join Bluesky and found that, even in its infancy, it had many of the features he had valued in “golden age” Twitter.

Shiffman also noticed that he wasn’t the only one in the scientific community having issues with Twitter. This impression was further bolstered by news stories in outlets like Nature, Science, and the Chronicle of Higher Education noting growing complaints about Twitter and increased migration over to Bluesky by science professionals. (Full disclosure: I joined Bluesky around the same time as Shiffman, for similar reasons: Twitter had ceased to be professionally useful, and many of the science types I’d been following were moving to Bluesky. I nuked my Twitter account in November 2024.)

A curious Shiffman decided to conduct a scientific survey, announcing the results in a new paper published in the journal Integrative and Comparative Biology. The findings confirm that, while Twitter was once the platform of choice for a majority of science communicators, those same people have since abandoned it in droves. And of the alternatives available, Bluesky seems to be their new platform of choice.

Shiffman, the author of Why Sharks Matter, described early Twitter recently on the blog Southern Fried Science as “the world’s most interesting cocktail party.”

“Then it stopped being useful,” Shiffman told Ars. “I was worried for a while that this incredibly powerful way of changing the world using expertise was gone. It’s not gone. It just moved. It’s a little different now, and it’s not as powerful as it was, but it’s not gone. It was for me personally, immensely reassuring that so many other people were having the same experience that I was. But it was also important to document that scientifically.”

Eager to gather solid data on the migration phenomenon to bolster his anecdotal observations, Shiffman turned to social scientist Julia Wester, one of the scientists who had joined Twitter at Shiffman’s encouragement years earlier before likewise becoming fed up and migrating to Bluesky. Despite being “much less online” than the indefatigable Shiffman, Wester was intrigued by the proposition. “I was interested not just in the anecdotal evidence, the conversations we were having, but also in identifying the real patterns,” she told Ars. “As a social scientist, when we hear anecdotal evidence about people’s experiences, I want to know what that looks like across the population.”

Shiffman and Wester targeted scientists, science communicators, and science educators who used (or had used) both Twitter and Bluesky. Questions explored user attitudes toward, and experiences with, each platform in a professional capacity: when they joined, respective follower and post counts, which professional tasks they used each platform for, the usefulness of each platform for those purposes relative to 2021, how they first heard about Bluesky, and so forth.

The authors acknowledge that they are looking at a very specific demographic among social media users in general and that there is an inevitable self-selection effect. However, “You want to use the sample and the method that’s appropriate to the phenomenon that you’re looking at,” said Wester. “For us, it wasn’t just the experience of people using these platforms, but the phenomenon of migration. Why are people deciding to stay or move? How they’re deciding to use both of these platforms? For that, I think we did get a pretty decent sample for looking at the dynamic tensions, the push and pull between staying on one platform or opting for another.”

They ended up with a final sample size of 813 people. Over 90 percent of respondents said they had used Twitter for learning about new developments in their field; 85.5 percent for professional networking; and 77.3 percent for public outreach. Roughly three-quarters of respondents said that the platform had become significantly less useful for each of those professional uses since Musk took over. Nearly half still have Twitter accounts but use it much less frequently or not at all, while about 40 percent have deleted their accounts entirely in favor of Bluesky.

Making the switch

User complaints about Twitter included a noticeable increase in spam, porn, bots, and promoted posts from users who paid for a verification badge, many spreading extremist content. “I very quickly saw material that I did not want my posts to be posted next to or associated with,” one respondent commented. There were also complaints about the rise in misinformation and a significant decline in both the quantity and quality of engagement, with respondents describing their experiences as “unpleasant,” “negative,” or “hostile.”

The survey responses also revealed a clear push/pull dynamic when it came to the choice to abandon Twitter for Bluesky. That is, people felt they were being pushed away from Twitter and were actively looking for alternatives. As one respondent put it, “Twitter started to suck and all the cool people were moving to Bluesky.”

Bluesky was user-friendly with no algorithm, a familiar format, and helpful tools like starter packs of who to follow in specific fields, which made the switch a bit easier for many newcomers daunted by the prospect of rebuilding their online audience. Bluesky users also appreciated the moderation on the platform and having the ability to block or mute people as a means of disengaging from more aggressive, unpleasant conversations. That said, “If Twitter was still great, then I don’t think there’s any combination of features that would’ve made this many people so excited about switching,” said Shiffman.

Per Shiffman and Wester, an “overwhelming majority” of respondents said that Bluesky has a “vibrant and healthy online science community,” while Twitter no longer does. And many Bluesky users reported getting more bang for their buck, so to speak, on Bluesky. They might have a lower follower count, but those followers are far more engaged: Someone with 50,000 Twitter/X followers, for example, might get five likes on a given post; but on Bluesky, they may only have 5,000 followers, but their posts will get 100 likes.

According to Shiffman, Twitter always used to be in the top three in terms of referral traffic for posts on Southern Fried Science. Then came the “Muskification,” and suddenly Twitter referrals weren’t even cracking the top 10. By contrast, in 2025 thus far, Bluesky has driven “a hundred times as many page views” to Southern Fried Science as Twitter. Ironically, “the blog post that’s gotten the most page views from Twitter is the one about this paper,” said Shiffman.

Ars social media manager Connor McInerney confirmed that Ars Technica has also seen a steady dip in Twitter referral traffic thus far in 2025. Furthermore, “I can say anecdotally that over the summer we’ve seen our Bluesky traffic start to surpass our Twitter traffic for the first time,” McInerney said, attributing the growth to a combination of factors. “We’ve been posting to the platform more often and our audience there has grown significantly. By my estimate our audience has grown by 63 percent since January. The platform in general has grown a lot too—they had 10 million users in September of last year, and this month the latest numbers indicate they’re at 38 million users. Conversely, our Twitter audience has remained fairly static across the same period of time.”

Bubble, schmubble

As for scientists looking to share scholarly papers online, Shiffman pulled the Altmetrics stats for his and Wester’s new paper. “It’s already one of the 10 most shared papers in the history of that journal on social media,” he said, with 14 shares on Twitter/X vs over a thousand shares on Bluesky (as of 4 pm ET on August 20). “If the goal is showing there’s a more active academic scholarly conversation on Bluesky—I mean, damn,” he said.

And while there has been a steady drumbeat of op-eds of late in certain legacy media outlets accusing Bluesky of being trapped in its own liberal bubble, Shiffman, for one, has few concerns about that. “I don’t care about this, because I don’t use social media to argue with strangers about politics,” he wrote in his accompanying blog post. “I use social media to talk about fish. When I talk about fish on Bluesky, people ask me questions about fish. When I talk about fish on Twitter, people threaten to murder my family because we’re Jewish.” He deemed the current incarnation of Twitter no better than 4Chan or TruthSocial in terms of the percentage of “conspiracy-prone extremists” in the audience. “Even if you want to stay, the algorithm is working against you,” he wrote.

“There have been a lot of opinion pieces about why Bluesky is not useful because the people there tend to be relatively left-leaning,” Shiffman told Ars. “I haven’t seen any of those same people say that Twitter is bad because it’s relatively right-leaning. Twitter is not a representative sample of the public either.” And given his focus on ocean conservation and science-based, data-driven environmental advocacy, he is likely to find a more engaged and persuadable audience at Bluesky.

The survey results show that at this point, Bluesky seems to have hit a critical mass for the online scientific community. That said, Shiffman, for one, laments that the powerful Black Science Twitter contingent, for example, has thus far not switched to Bluesky in significant numbers. He would like to conduct a follow-up study to look into how many still use Twitter vs those who may have left social media altogether, as well as Bluesky’s demographic diversity—paving the way for possible solutions should that data reveal an unwelcoming environment for non-white scientists.

There are certainly limitations to the present survey. “Because this is such a dynamic system and it’s changing every day, I think if we did this study now versus when we did it six months ago, we’d get slightly different answers and dynamics,” said Wester. “It’s still relevant because you can look at the factors that make people decide to stay or not on Bluesky, to switch to something else, to leave social media altogether. That can tell us something about what makes a healthy, vibrant conversation online. We’re capturing one of the responses: ‘I’ll see you on Bluesky.’ But that’s not the only response. Public science communication is as important now as it’s ever been, so looking at how scientists have pivoted is really important.”

We recently reported on research indicating that social media as a system might well be doomed, since its very structure gives rise to toxic dynamics: filter bubbles, algorithms that amplify the most extreme views to boost engagement, and a small number of influencers hogging the lion’s share of attention. That paper concluded that any intervention strategies were likely to fail. Both Shiffman and Wester, while acknowledging the reality of those dynamics, are less pessimistic about social media’s future.

“I think the problem is not with how social media works, it’s with how any group of people work,” said Shiffman. “Humans evolved in tiny social groupings where we helped each other and looked out for each other’s interests. Now I have to have a fight with someone 10,000 miles away who has no common interest with me about whether or not vaccines are bad. We were not built for that. Social media definitely makes it a lot easier for people who are anti-social by nature and want to stir conflict to find those conflicts. Something that took me way too long to learn is that you don’t have to participate in every fight you’re invited to. There are people who are looking for a fight and you can simply say, ‘No, thank you. Not today, Satan.'”

“The contrast that people are seeing between Bluesky and present-day Twitter highlights that these are social spaces, which means that you’re going to get all of the good and bad of humanity entering into that space,” said Wester. “But we have had new social spaces evolve over our whole history. Sometimes when there’s something really new, we have to figure out the rules for that space. We’re still figuring out the rules for these social media spaces. The contrast in moderation policies and the use (or not) of algorithms between those two platforms that are otherwise very similar in structure really highlights that you can shape those social spaces by creating rules and tools for how people interact with each other.”

DOI: Integrative and Comparative Biology, 2025. 10.1093/icb/icaf127  (About DOIs).


Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.

Bluesky now platform of choice for science community


Elon Musk’s “thermonuclear” Media Matters lawsuit may be fizzling out


Judge blocks FTC’s Media Matters probe as a likely First Amendment violation.

Media Matters for America (MMFA)—a nonprofit that Elon Musk accused of sparking a supposedly illegal ad boycott on X—won its bid to block a sweeping Federal Trade Commission (FTC) probe that the agency appeared to have rushed out to silence Musk’s foe without ever adequately explaining why the government needed to get involved.

In her opinion granting MMFA’s preliminary injunction, US District Judge Sparkle L. Sooknanan—a Joe Biden appointee—agreed that the FTC’s probe likely amounted to retaliation in violation of the First Amendment.

Warning that the FTC’s targeting of reporters was particularly concerning, Sooknanan wrote that the “case presents a straightforward First Amendment violation,” one where it’s reasonable to conclude that conservative FTC staffers were motivated to eliminate a media organization dedicated to correcting conservative misinformation online.

“It should alarm all Americans when the Government retaliates against individuals or organizations for engaging in constitutionally protected public debate,” Sooknanan wrote. “And that alarm should ring even louder when the Government retaliates against those engaged in newsgathering and reporting.”

FTC staff social posts may be evidence of retaliation

In 2023, Musk vowed to file a “thermonuclear” lawsuit because advertisers abandoned X after MMFA published a report showing that major brands’ ads had appeared next to pro-Nazi posts on X. Musk then tried to sue MMFA “all over the world,” Sooknanan wrote, while “seemingly at the behest of Stephen Miller, the current White House Deputy Chief of Staff, the Missouri and Texas Attorneys General” joined Musk’s fight, starting their own probes.

But Musk’s “thermonuclear” attack—an attempt to fight MMFA on as many fronts as possible—appears to be fizzling out. A federal district court preliminarily enjoined the “aggressive” global litigation strategy as well as the AG probes “as likely being retaliatory in violation of the First Amendment,” and the same court issued the recent FTC ruling.

The FTC under the Trump administration appeared to open the next front in support of Musk’s attack on MMFA. And Sooknanan said that FTC Chair Andrew Ferguson’s own comments in interviews, which characterized Media Matters and the FTC’s probe “in ideological terms,” seem to indicate “at a minimum that Chairman Ferguson saw the FTC’s investigation as having a partisan bent.”

A huge part of the problem for the FTC was a set of social media comments that some senior staffers posted before Ferguson appointed them. Those posts appeared to show the FTC growing increasingly partisan, perhaps pointedly hiring staffers who it knew would help take down groups like MMFA.

As examples, Sooknanan pointed to Joe Simonson, the FTC’s director of public affairs, who had posted that MMFA “employed a number of stupid and resentful Democrats who went to like American University and didn’t have the emotional stability to work as an assistant press aide for a House member.” And Jon Schwepp, Ferguson’s senior policy advisor, had claimed that Media Matters—which he branded as the “scum of the earth”—”wants to weaponize powerful institutions to censor conservatives.” And finally, Jake Denton, the FTC’s chief technology officer, had alleged that MMFA is “an organization devoted to pressuring companies into silencing conservative voices.”

Further, the timing of the FTC investigation—arriving “on the heels of other failed attempts to seek retribution”—seemed to suggest it was “motivated by retaliatory animus,” the judge said. The FTC’s “fast-moving” investigation suggests that Ferguson “was chomping at the bit to ‘take investigative steps in the new administration under President Trump’ to make ‘progressives’ like Media Matters ‘give up,'” Sooknanan wrote.

Musk’s fight continues in Texas, for now

Perhaps most damning to the FTC’s case, Sooknanan suggested the agency has never adequately explained why it’s probing Media Matters. In the “Subject of Investigation” field, the FTC wrote only “see attached,” but the attachment was just a list of specific demands and directions to comply with those demands.

Eventually, the FTC offered “something resembling an explanation,” Sooknanan said. But its “ultimate explanation”—that Media Matters may have information related to a supposedly illegal coordinated campaign to game ad pricing, starve revenue, and censor conservative platforms—”does not inspire confidence that they acted in good faith,” Sooknanan said. The judge considered it problematic that the FTC never explained why it had reason to believe MMFA has the information it’s seeking, or why its demand list went “well beyond the investigation’s purported scope,” including “a reporter’s resource materials,” financial records, and all documents submitted so far in Musk’s X lawsuit.

“It stands to reason,” Sooknanan wrote, that the FTC launched its probe “because it wanted to continue the years’ long pressure campaign against Media Matters by Mr. Musk and his political allies.”

In its defense, the FTC argued that all civil investigative demands are initially broad, insisting that MMFA would have had the opportunity to narrow the demands if things had proceeded without the lawsuit. But Sooknanan declined to “consider a hypothetical narrowed” demand list instead of “the actual demand issued to Media Matters,” while noting that the court was “troubled” by the FTC’s suggestion that “the federal Government routinely issues civil investigative demands it knows to be overbroad with the goal of later narrowing those demands presumably in exchange for compliance.”

“Perhaps the Defendants will establish otherwise later in these proceedings,” Sooknanan wrote. “But at this stage, the record certainly supports that inference,” that the FTC was politically motivated to back Musk’s fight.

As the FTC mulls a potential appeal, the only other major front in Musk’s fight with MMFA is the lawsuit that X Corp. filed in Texas. Musk allegedly expects more favorable treatment in the Texas court, and MMFA is currently pushing to transfer the case to California after previously arguing that Musk’s venue shopping in Texas should be “fatal” to his case.

Musk has so far kept the case in Texas, but a venue change could be enough to ultimately doom his “thermonuclear” attack on MMFA. To prevent that, X is arguing that it’s “hard to imagine” how changing the venue and starting over with a new judge two years into such complex litigation would best serve the “interests of justice.”

Media Matters, however, has “easily met” requirements to show that substantial damage has already been done, not just because MMFA has struggled financially and stopped reporting on X and the FTC, but because any loss of First Amendment freedoms “unquestionably constitutes irreparable injury.”

The FTC tried to claim that any reputational harm, financial harm, and self-censorship are “self-inflicted” wounds for MMFA. But the FTC did “not respond to the argument that the First Amendment injury itself is irreparable, thereby conceding it,” Sooknanan wrote. That likely weakens the FTC’s case in an appeal.

MMFA declined Ars’ request to comment. But despite the lawsuits reportedly plunging MMFA into a financial crisis, its president, Angelo Carusone, told The New York Times that “the court’s ruling demonstrates the importance of fighting over folding, which far too many are doing when confronted with intimidation from the Trump administration.”

“We will continue to stand up and fight for the First Amendment rights that protect every American,” Carusone said.


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.



Musk threatens to sue Apple so Grok can get top App Store ranking

After spending last week hyping Grok’s spicy new features, Elon Musk kicked off this week by threatening to sue Apple for supposedly gaming the App Store rankings to favor ChatGPT over Grok.

“Apple is behaving in a manner that makes it impossible for any AI company besides OpenAI to reach #1 in the App Store, which is an unequivocal antitrust violation,” Musk wrote on X, without providing any evidence. “xAI will take immediate legal action.”

In another post, Musk tagged Apple, asking, “Why do you refuse to put either X or Grok in your ‘Must Have’ section when X is the #1 news app in the world and Grok is #5 among all apps?”

“Are you playing politics?” Musk asked. “What gives? Inquiring minds want to know.”

Apple did not respond to the post and has not responded to Ars’ request to comment.

At the heart of Musk’s complaints is an OpenAI partnership that Apple announced last year, integrating ChatGPT into versions of its iPhone, iPad, and Mac operating systems.

Musk has alleged that this partnership incentivized Apple to boost ChatGPT rankings. OpenAI’s popular chatbot “currently holds the top spot in the App Store’s ‘Top Free Apps’ section for iPhones in the US,” Reuters noted, “while xAI’s Grok ranks fifth and Google’s Gemini chatbot sits at 57th.” Sensor Tower data shows ChatGPT similarly tops Google Play Store rankings.

While Musk seems insistent that ChatGPT is artificially locked in the lead, fact-checkers on X added a community note to his post. They confirmed that at least one other AI tool has somewhat recently unseated ChatGPT in the US rankings. Back in January, DeepSeek topped App Store charts and held the lead for days, ABC News reported.

OpenAI did not immediately respond to Ars’ request to comment on Musk’s allegations, but an OpenAI developer, Steven Heidel, did add a quip in response to one of Musk’s posts, writing, “Don’t forget to also blame Google for OpenAI being #1 on Android, and blame SimilarWeb for putting ChatGPT above X on the most-visited websites list, and blame….”



Researcher threatens X with lawsuit after falsely linking him to French probe

X claimed that David Chavalarias, “who spearheads the ‘Escape X’ campaign”—which is “dedicated to encouraging X users to leave the platform”—was chosen to assess the data with one of his prior research collaborators, Maziyar Panahi.

“The involvement of these individuals raises serious concerns about the impartiality, fairness, and political motivations of the investigation, to put it charitably,” X alleged. “A predetermined outcome is not a fair one.”

However, Panahi told Reuters that he believes X blamed him “by mistake,” based only on his prior association with Chavalarias. He further clarified that “none” of his projects with Chavalarias “ever had any hostile intent toward X” and threatened legal action to protect himself against defamation if he receives “any form of hate speech” due to X’s seeming error and mischaracterization of his research. An Ars review suggests his research on social media platforms predates Musk’s ownership of X and has probed whether certain recommendation systems potentially make platforms toxic or influence presidential campaigns.

“The fact my name has been mentioned in such an erroneous manner demonstrates how little regard they have for the lives of others,” Panahi told Reuters.

X denies being an “organized gang”

X suggests that it “remains in the dark as to the specific allegations made against the platform,” accusing French police of “distorting French law in order to serve a political agenda and, ultimately, restrict free speech.”

The press release is indeed vague on what exactly French police are seeking to uncover. All French authorities say is that they are probing X for alleged “tampering with the operation of an automated data processing system by an organized gang” and “fraudulent extraction of data from an automated data processing system by an organized gang.” But later, a French magistrate, Laure Beccuau, clarified in a statement that the probe was based on complaints that X is spreading “an enormous amount of hateful, racist, anti-LGBT+ and homophobic political content, which aims to skew the democratic debate in France,” Politico reported.



New Grok AI model surprises experts by checking Elon Musk’s views before answering

Seeking the system prompt

Owing to the unknown contents of the data used to train Grok 4 and the random elements thrown into large language model (LLM) outputs to make them seem more expressive, divining the reasons for particular LLM behavior can be frustrating for someone without insider access. But we can use what we know about how LLMs work to guide a better answer. xAI did not respond to a request for comment before publication.

To generate text, every AI chatbot processes an input called a “prompt” and produces a plausible output based on that prompt. This is the core function of every LLM. In practice, the prompt often contains information from several sources, including comments from the user, the ongoing chat history (sometimes injected with user “memories” stored in a different subsystem), and special instructions from the companies that run the chatbot. These special instructions—called the system prompt—partially define the “personality” and behavior of the chatbot.
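In code terms, that assembly can be sketched roughly as follows. This is a hypothetical illustration using the generic “system/user/assistant” message convention common to many chat APIs; the helper name and structure are assumptions for the sketch, not details of Grok’s actual implementation:

```python
def build_prompt(system_prompt, chat_history, user_message, memories=None):
    """Combine the pieces an LLM actually sees into one ordered message list."""
    # Special instructions from the company come first, as the system prompt.
    messages = [{"role": "system", "content": system_prompt}]
    # User "memories" from a separate subsystem may be injected alongside it.
    if memories:
        messages.append(
            {"role": "system", "content": "User memories: " + "; ".join(memories)}
        )
    # Then the ongoing chat history, then the user's latest comment.
    messages.extend(chat_history)
    messages.append({"role": "user", "content": user_message})
    return messages


prompt = build_prompt(
    system_prompt="You are a helpful assistant built by ExampleCorp.",
    chat_history=[
        {"role": "user", "content": "Hi"},
        {"role": "assistant", "content": "Hello!"},
    ],
    user_message="What do you think about this controversial topic?",
)
```

The point relevant to the Grok case is that everything in this combined list, not just the user’s latest message, shapes the model’s output.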

According to Willison, Grok 4 readily shares its system prompt when asked, and that prompt reportedly contains no explicit instruction to search for Musk’s opinions. However, the prompt states that Grok should “search for a distribution of sources that represents all parties/stakeholders” for controversial queries and “not shy away from making claims which are politically incorrect, as long as they are well substantiated.”


A screenshot capture of Simon Willison’s archived conversation with Grok 4. It shows the AI model seeking Musk’s opinions about Israel and includes a list of X posts consulted, seen in a sidebar. Credit: Benj Edwards

Ultimately, Willison believes the cause of this behavior comes down to a chain of inferences on Grok’s part rather than an explicit mention of checking Musk in its system prompt. “My best guess is that Grok ‘knows’ that it is ‘Grok 4 built by xAI,’ and it knows that Elon Musk owns xAI, so in circumstances where it’s asked for an opinion, the reasoning process often decides to see what Elon thinks,” he said.

Without official word from xAI, we’re left with a best guess. However, regardless of the reason, this kind of unreliable, inscrutable behavior makes many chatbots poorly suited for assisting with tasks where reliability or accuracy are important.



Musk’s Grok 4 launches one day after chatbot generated Hitler praise on X

Musk has also apparently used the Grok chatbots as an automated extension of his trolling habits, showing examples of Grok 3 producing “based” opinions that criticized the media in February. In May, Grok on X began repeatedly generating outputs about white genocide in South Africa, and most recently, we’ve seen the Grok Nazi output debacle. It’s admittedly difficult to take Grok seriously as a technical product when it’s linked to so many examples of unserious and capricious applications of the technology.

Still, the technical achievements xAI claims for various Grok 4 models seem to stand out. The Arc Prize organization reported that Grok 4 Thinking (with simulated reasoning enabled) achieved a score of 15.9 percent on its ARC-AGI-2 test, which the organization says nearly doubles the previous commercial best and tops the current Kaggle competition leader.

“With respect to academic questions, Grok 4 is better than PhD level in every subject, no exceptions,” Musk claimed during the livestream. We’ve previously covered nebulous claims about “PhD-level” AI, finding them to be generally specious marketing talk.

Premium pricing amid controversy

During Wednesday’s livestream, xAI also announced plans for an AI coding model in August, a multi-modal agent in September, and a video generation model in October. The company also plans to make Grok 4 available in Tesla vehicles next week, further expanding Musk’s AI assistant across his various companies.

Despite the recent turmoil, xAI has moved forward with an aggressive pricing strategy for “premium” versions of Grok. Alongside Grok 4 and Grok 4 Heavy, xAI launched “SuperGrok Heavy,” a $300-per-month subscription that makes it the most expensive AI service among major providers. Subscribers will get early access to Grok 4 Heavy and upcoming features.

Whether users will pay xAI’s premium pricing remains to be seen, particularly given the AI assistant’s tendency to periodically generate politically motivated outputs. These incidents represent fundamental management and implementation issues that, so far, no fancy-looking test-taking benchmarks have been able to capture.



Linda Yaccarino quits X without saying why, one day after Grok praised Hitler

And “the best is yet to come as X enters a new chapter” with xAI, Yaccarino said.

Grok cites “growing tensions” between Musk and CEO

It’s unclear how Yaccarino’s departure could influence X advertisers who may have had more confidence in the platform with her at the helm.

Eventually, Musk commented on Yaccarino’s announcement, thanking her for her contributions but saying little else about her departure. Separately, he responded to Thierry Breton, former European Union commissioner for the internal market, who joked that “Europe’s got talent” if Musk “needs help.” The X owner, who previously traded barbs with Breton over alleged X disinformation, responded “sure” with a laugh-cry emoji.

Musk has seemingly been busy putting out fires, as the Grok account finally issued a statement confirming that X was working to remove “inappropriate” posts.

“Since being made aware of the content, xAI has taken action to ban hate speech before Grok posts on X,” the post explained, confirming that fixes go beyond simply changing Grok’s prompting.

But the statement illuminates one of the biggest problems with experimental chatbots that experts fear may play an increasingly significant role in spreading misinformation and hate speech. Once Grok’s outputs got seriously out of hand, it took “millions of users” flagging the problematic posts for X to “identify and update the model where training could be improved”—which X curiously claims was an example of the platform responding “quickly.”

If X expects that harmful Grok outputs reaching millions is what it will take to address emerging issues, X advertisers today are stuck wondering what content they could risk monetizing. Sticking with X could remain precarious at a time when the Federal Trade Commission has moved to block ad boycotts and Musk has updated X terms to force any ad customer arbitration into a chosen venue in Texas.

For Yaccarino, whose career took off based on her advertising savvy, leaving now could help her sidestep fallout from both the Grok controversy this week and the larger battle with advertisers—some of whom, she’s noted, she’s worked with “for decades.”

X did not respond to Ars’ request to comment on Yaccarino’s exit. If you ask Grok why Yaccarino left, the chatbot cites these possible reasons: “growing tensions” with Musk, frustrations with X brand safety, business struggles relegating her role to “chief apology officer,” and ad industry friends pushing her to get out while she can.
