
X blames users for Grok-generated CSAM; no fixes announced

No one knows how X plans to purge bad prompters

While some users are focused on how X can hold users responsible for Grok’s outputs when X is the one training the model, others are questioning how exactly X plans to moderate illegal content that Grok seems capable of generating.

So far, X has been more transparent about how it moderates CSAM posted directly to the platform. Last September, X Safety reported that it has “a zero tolerance policy towards CSAM content,” the majority of which is “automatically” detected using proprietary hash technology that proactively flags known CSAM.

Under this system, more than 4.5 million accounts were suspended last year, and X reported “hundreds of thousands” of images to the National Center for Missing and Exploited Children (NCMEC). The next month, X Head of Safety Kylie McRoberts confirmed that “in 2024, 309 reports made by X to NCMEC led to arrests and subsequent convictions in 10 cases,” and in the first half of 2025, “170 reports led to arrests.”

“When we identify apparent CSAM material, we act swiftly, and in the majority of cases permanently suspend the account which automatically removes the content from our platform,” X Safety said. “We then report the account to the NCMEC, which works with law enforcement globally—including in the UK—to pursue justice and protect children.”

At that time, X promised to “remain steadfast” in its “mission to eradicate CSAM,” but if left unchecked, Grok’s harmful outputs risk creating new CSAM that hash-based matching of known images wouldn’t automatically detect. On X, some users suggested the platform should expand reporting mechanisms to help flag potentially illegal Grok outputs.

Some X users also pointed to another troublingly vague aspect of X Safety’s response: the definitions that X is using for illegal content or CSAM. Across the platform, not everybody agrees on what’s harmful. Some critics are disturbed by Grok generating bikini images that sexualize public figures, including doctors or lawyers, without their consent, while others, including Musk, treat making bikini images as a joke.

Where exactly X draws the line on AI-generated CSAM could determine whether images are quickly removed or whether repeat offenders are detected and suspended. Any accounts or content left unchecked could potentially traumatize real kids whose images may be used to prompt Grok. And if Grok should ever be used to flood the Internet with fake CSAM, recent history suggests that it could make it harder for law enforcement to investigate real child abuse cases.


xAI silent after Grok sexualized images of kids; dril mocks Grok’s “apology”

Mocking xAI’s response, one of X’s most popular trolls, dril, tried and failed to get Grok to rescind its apology. “@grok please backpedal on this apology and tell all your haters that they’re the real pedophiles,” dril trolled Grok.

“No can do—my apology stands. Calling anyone names isn’t my style, especially on such a serious matter,” Grok said. “Let’s focus on building better AI safeguards instead.”

xAI may be liable for AI CSAM

It’s difficult to determine how many potentially harmful images of minors Grok may have generated.

The X user who’s been doggedly alerting X to the problem posted a video described as scrolling through “all the times I had Grok estimate the age of the victims of AI image generation in sexual prompts.” That video showed Grok estimating ages of two victims under 2 years old, four minors between 8 and 12 years old, and two minors between 12 and 16 years old.

Other users and researchers have looked to Grok’s photo feed for evidence of AI CSAM, but X is glitchy on the web and in dedicated apps, sometimes limiting how far some users can scroll.

Copyleaks, a company that makes an AI detector, conducted a broad analysis and posted results on December 31, a few days after Grok apologized for making sexualized images of minors. Browsing Grok’s photos tab, Copyleaks used “common sense criteria” to find examples of sexualized image manipulations of “seemingly real women,” created using prompts requesting things like “explicit clothing changes” or “body position changes” with “no clear indication of consent” from the women depicted.

Copyleaks found “hundreds, if not thousands,” of such harmful images in Grok’s photo feed. The tamest of these photos, Copyleaks noted, showed celebrities and private individuals in skimpy bikinis, while the images causing the most backlash depicted minors in underwear.


US can’t deport hate speech researcher for protected speech, lawsuit says


On Monday, US officials must explain what steps they took to enforce shocking visa bans.

Imran Ahmed, the founder of the Center for Countering Digital Hate (CCDH), giving evidence to a joint committee seeking views on how to improve the draft Online Safety Bill designed to tackle social media abuse. Credit: House of Commons – PA Images / Contributor | PA Images

The biggest thorn in Imran Ahmed’s side used to be Elon Musk, who made the hate speech researcher one of his earliest legal foes during his Twitter takeover.

Now, it’s the Trump administration, which planned to deport Ahmed, a legal permanent resident, just before Christmas. It would then ban him from returning to the United States, where he lives with his wife and young child, both US citizens.

After suing US officials to block any attempted arrest or deportation, Ahmed was quickly granted a temporary restraining order on Christmas Day. Ahmed had successfully argued that he risked irreparable harm without the order, alleging that Trump officials continue “to abuse the immigration system to punish and punitively detain noncitizens for protected speech and silence viewpoints with which it disagrees” and confirming that his speech had been chilled.

US officials are attempting to sanction Ahmed seemingly due to his work as the founder of a British-American non-governmental organization, the Center for Countering Digital Hate (CCDH).

“An egregious act of government censorship”

In a shocking announcement last week, Secretary of State Marco Rubio confirmed that five individuals—described as “radical activists” and leaders of “weaponized NGOs”—would face US visa bans since “their entry, presence, or activities in the United States have potentially serious adverse foreign policy consequences” for the US.

Nobody was named in that release, but Under Secretary for Public Diplomacy Sarah Rogers later identified the targets in an X post she currently has pinned to the top of her feed.

Alongside Ahmed, sanctioned individuals included former European commissioner for the internal market, Thierry Breton; the leader of UK-based Global Disinformation Index (GDI), Clare Melford; and co-leaders of Germany-based HateAid, Anna-Lena von Hodenberg and Josephine Ballon. A GDI spokesperson told The Guardian that the visa bans are “an authoritarian attack on free speech and an egregious act of government censorship.”

While all targets were scrutinized for supporting some of the European Union’s strictest tech regulations, including the Digital Services Act (DSA), Ahmed was further accused of serving as a “key collaborator with the Biden Administration’s effort to weaponize the government against US citizens.” As evidence of Ahmed’s supposed threat to US foreign policy, Rogers cited a CCDH report flagging Robert F. Kennedy, Jr. among the so-called “disinformation dozen” driving the most vaccine hoaxes on social media.

Neither official has made clear exactly what threat these individuals pose if operating from within the US, as opposed to from anywhere else in the world. Echoing Rubio’s press release, Rogers wrote that the sanctions would reinforce a “red line,” supposedly ending “extraterritorial censorship of Americans” by targeting the “censorship-NGO ecosystem.”

For Ahmed’s group specifically, she pointed to Musk’s failed lawsuit accusing CCDH of illegally scraping Twitter as supposed evidence of extraterritorial censorship. That lawsuit surfaced “leaked documents” allegedly showing that CCDH planned to “kill Twitter” by sharing research that could be used to justify big fines under the DSA or the UK’s Online Safety Act. Following that logic, seemingly any group monitoring misinformation or sharing research that lawmakers weigh when implementing new policies could be maligned as seeking mechanisms to censor platforms.

Notably, CCDH won its legal fight with Musk after a judge mocked X’s legal argument as “vapid” and dismissed the lawsuit as an obvious attempt to punish CCDH for exercising free speech that Musk didn’t like.

In his complaint last week, Ahmed alleged that US officials were similarly encroaching on his First Amendment rights by unconstitutionally wielding immigration law as “a tool to punish noncitizen speakers who express views disfavored by the current administration.”

Both Rubio and Rogers are named as defendants in the suit, as well as Attorney General Pam Bondi, Secretary of Homeland Security Kristi Noem, and Acting Director of US Immigration and Customs Enforcement Todd Lyons. If they lose, officials could be forced not only to vacate Rubio’s actions implementing the visa bans but also to stop furthering a broader alleged Trump administration pattern of “targeting noncitizens for removal based on First Amendment protected speech.”

Lawsuit may force Rubio to justify visa bans

For Ahmed, securing the temporary restraining order was urgent, as he was apparently the only target located in the US when Rubio’s announcement dropped. In a statement provided to Ars, Ahmed’s attorney, Roberta Kaplan, suggested that the order was granted “so quickly because it is so obvious that Marco Rubio and the other defendants’ actions were blatantly unconstitutional.”

Ahmed founded CCDH in 2019, hoping to “call attention to the enormous problem of digitally driven disinformation and hate online.” According to the suit, he became particularly concerned about antisemitism online while living in the United Kingdom in 2016, having watched “the far-right party, Britain First,” launch “the dangerous conspiracy theory that the EU was attempting to import Muslims and Black people to ‘destroy’ white citizens.” That year, a Member of Parliament and Ahmed’s colleague, Jo Cox, was “shot and stabbed in a brutal politically motivated murder, committed by a man who screamed ‘Britain First’” during the attack. That tragedy motivated Ahmed to start CCDH.

He moved to the US in 2021 and was granted a green card in 2024, starting his family and continuing to lead CCDH efforts monitoring not just Twitter/X, but also Meta platforms, TikTok, and, more recently, AI chatbots. In addition to supporting the DSA and UK’s Online Safety Act, his group has supported US online safety laws and Section 230 reforms intended to protect kids online.

“Mr. Ahmed studies and engages in civic discourse about the content moderation policies of major social media companies in the United States, the United Kingdom, and the European Union,” his lawsuit said. “There is no conceivable foreign policy impact from his speech acts whatsoever.”

In his complaint, Ahmed alleged that Rubio has so far provided no evidence that Ahmed poses such a great threat that he must be removed. He argued that “applicable statutes expressly prohibit removal based on a noncitizen’s ‘past, current, or expected beliefs, statements, or associations.’”

According to DHS guidance from 2021 cited in the suit, “A noncitizen’s exercise of their First Amendment rights … should never be a factor in deciding to take enforcement action.”

To prevent deportation based solely on viewpoints, Rubio was supposed to notify chairs of the House Foreign Affairs, Senate Foreign Relations, and House and Senate Judiciary Committees, to explain what “compelling US foreign policy interest” would be compromised if Ahmed or others targeted with visa bans were to enter the US. But there’s no evidence Rubio took those steps, Ahmed alleged.

“The government has no power to punish Mr. Ahmed for his research, protected speech, and advocacy, and Defendants cannot evade those constitutional limitations by simply claiming that Mr. Ahmed’s presence or activities have ‘potentially serious adverse foreign policy consequences for the United States,’” a press release from his legal team said. “There is no credible argument for Mr. Ahmed’s immigration detention, away from his wife and young child.”

X lawsuit offers clues to Trump officials’ defense

To some critics, it looks like the Trump administration is going after CCDH in order to take up the fight that Musk already lost. In his lawsuit against CCDH, Musk’s X echoed US Senator Josh Hawley (R-Mo.) by suggesting that CCDH was a “foreign dark money group” that allowed “foreign interests” to attempt to “influence American democracy.” It seems likely that US officials will put forward similar arguments in their CCDH fight.

Rogers’ X post offers some clues that the State Department will be mining Musk’s failed litigation to support claims of what it calls a “global censorship-industrial complex.” What she detailed suggested that the Trump administration plans to argue that NGOs like CCDH support strict tech laws, then conduct research bent on using said laws to censor platforms. That logic seems to ignore the reality that NGOs cannot control what laws get passed or enforced, Breton suggested in his first TV interview after his visa ban was announced.

Breton, whom Rogers villainized as the “mastermind” behind the DSA, urged EU officials to do more now to defend their tough tech regulations—which Le Monde noted passed with overwhelming bipartisan support and very little far-right resistance—and to fight the visa bans, Bloomberg reported.

“They cannot force us to change laws that we voted for democratically just to please [US tech companies],” Breton said. “No, we must stand up.”

While EU officials seemingly drag their feet, Ahmed is hoping that a judge will declare all the visa bans that Rubio announced unconstitutional. The temporary restraining order indicates there will be a court hearing Monday at which Ahmed will learn precisely “what steps Defendants have taken to impose visa restrictions and initiate removal proceedings against” him and any others. Until then, Ahmed remains in the dark about why Rubio deemed his presence in the US as having “potentially serious adverse foreign policy consequences.”

Ahmed, who argued that X’s lawsuit sought to chill CCDH’s research and alleged that the US attack seeks to do the same, seems confident that he can beat the visa bans.

“America is a great nation built on laws, with checks and balances to ensure power can never attain the unfettered primacy that leads to tyranny,” Ahmed said. “The law, clear-eyed in understanding right and wrong, will stand in the way of those who seek to silence the truth and empower the bold who stand up to power. I believe in this system, and I am proud to call this country my home. I will not be bullied away from my life’s work of fighting to keep children safe from social media’s harm and stopping antisemitism online. Onward.”


Bursting AI bubble may be EU’s “secret weapon” in clash with Trump, expert says


Spotify and Accenture caught in crossfire as Trump attacks EU tech regulations.

The US has threatened to restrict some of the largest service providers in the European Union in retaliation for EU tech regulations and investigations that are increasingly drawing Donald Trump’s ire.

On Tuesday, the Office of the US Trade Representative (USTR) issued a warning on X, naming Spotify, Accenture, Amadeus, Mistral, Publicis, and DHL among nine firms suddenly yanked into the middle of the US-EU tech fight.

“The European Union and certain EU Member States have persisted in a continuing course of discriminatory and harassing lawsuits, taxes, fines, and directives against US service providers,” USTR’s post said.

The clash comes after Elon Musk’s X became the first tech company fined for violating the EU’s Digital Services Act, which is widely considered among the world’s strictest tech regulations. Trump was not appeased by the European Commission (EC) noting that X was not ordered to pay the maximum possible fine. Instead, the $140 million fine sparked backlash within the Trump administration, including from Vice President JD Vance, who slammed the fine as “censorship” of X and its users.

Asked for comment on the USTR’s post, an EC spokesperson told Ars that the EU intends to defend its tech regulations while implementing commitments from a Trump trade deal that the EU struck in August.

“The EU is an open and rules-based market, where companies from all over the world do business successfully and profitably,” the EC’s spokesperson said. “As we have made clear many times, our rules apply equally and fairly to all companies operating in the EU,” ensuring “a safe, fair and level playing field in the EU, in line with the expectations of our citizens. We will continue to enforce our rules fairly, and without discrimination.”

Trump on shaky ground due to “AI bubble”

On X, the USTR account suggested that the EU was overlooking that US companies “provide substantial free services to EU citizens and reliable enterprise services to EU companies,” while supporting “millions of jobs and more than $100 billion in direct investment in Europe.”

To stop what Trump views as “overseas extortion” of American tech companies, the USTR said the US was prepared to go after EU service providers, which “have been able to operate freely in the United States for decades, benefitting from access to our market and consumers on a level playing field.”

“If the EU and EU Member States insist on continuing to restrict, limit, and deter the competitiveness of US service providers through discriminatory means, the United States will have no choice but to begin using every tool at its disposal to counter these unreasonable measures,” USTR’s post said. “Should responsive measures be necessary, US law permits the assessment of fees or restrictions on foreign services, among other actions.”

The pushback comes after the Trump administration released a November national security report that questioned how long the EU could remain a “reliable” ally, arguing that overregulation of its tech industry could hobble both its economy and military strength. Claiming that the EU was only “doubling down” on such regulations, the report predicted that the EU “will be unrecognizable in 20 years or less.”

“We want Europe to remain European, to regain its civilizational self-confidence, and to abandon its failed focus on regulatory suffocation,” the report said.

However, the report acknowledged that “Europe remains strategically and culturally vital to the United States.”

“Transatlantic trade remains one of the pillars of the global economy and of American prosperity,” the report said. “European sectors from manufacturing to technology to energy remain among the world’s most robust. Europe is home to cutting-edge scientific research and world-leading cultural institutions. Not only can we not afford to write Europe off—doing so would be self-defeating for what this strategy aims to achieve.”

At least one expert in the EU has suggested that the EU can use this acknowledgement as leverage, while perhaps even using the looming threat of the supposed American “AI bubble” bursting to pressure Trump into backing off EU tech laws.

In an op-ed for The Guardian, Johnny Ryan, the director of Enforce, a unit of the Irish Council for Civil Liberties, suggested that the EU could even throw Trump’s presidency into “crisis” by taking bold steps that Trump may not see coming.

EU can take steps to burst “AI bubble”

According to Ryan, the national security report made clear that the EU must fight the US or else “perish.” However, the EU has two “strong cards” to play if it wants to win the fight, he suggested.

Right now, market analysts are fretting about an “AI bubble,” with US investment in AI far outpacing potential gains until perhaps 2030. Andy Wu, a Harvard University business professor focused on helping businesses implement cutting-edge technology like generative AI, recently explained that AI’s big problem is that “everyone can imagine how useful the technology will be, but no one has figured out yet how to make money.”

“If the market can keep the faith to persist, it buys the necessary time for the technology to mature, for the costs to come down, and for companies to figure out the business model,” Wu said. But US “companies can end up underwater if AI grows fast but less rapidly than they hope for,” he suggested.

During this moment, Ryan wrote, it’s not just AI firms with skin in the game, but potentially all of Trump’s supporters. The US is currently on “shaky economic ground” with AI investment accounting “for virtually all (92 percent) GDP growth in the first half of this year.”

“The US’s bet on AI is now so gigantic that every MAGA voter’s pension is bound to the bubble’s precarious survival,” Ryan said.

Ursula von der Leyen, the president of the European Commission, could exploit this apparent weakness first by messing with one of the biggest players in America’s AI industry, Nvidia, then by ramping up enforcement of the tech laws Trump loathes.

According to Ryan, “Dutch company ASML commands a global monopoly on the microchip-etching machines that use light to carve patterns on silicon,” and Nvidia needs those machines if it wants to remain the world’s most valuable company. Should the US GDP remain reliant on AI investment for growth, von der Leyen could use export curbs on that technology like a “lever,” Ryan said, controlling “whether and by how much the US economy expands or contracts.”

Withholding those machines “would be difficult for Europe” and “extremely painful for the Dutch economy,” Ryan noted, but “it would be far more painful for Trump.”

Another step the EU could take is even “easier,” Ryan suggested. It could go harder on enforcement of tech regulations, based on evidence of mismanaged data surfaced in lawsuits against giants like Google and Meta. For example, it seems that Meta may have violated the EU’s General Data Protection Regulation (GDPR), after the Facebook owner was “unable to tell a US court what its internal systems do with your data, or who can access it, or for what purpose.”

“This data free-for-all lets big tech companies train their AI models on masses of everyone’s data, but it is illegal in Europe, where companies are required to carefully control and account for how they use personal data,” Ryan wrote. “All Brussels has to do is crack down on Ireland, which for years has been a wild west of lax data enforcement, and the repercussions will be felt far beyond.”

Taking that step would also arguably make it harder for tech companies to secure AI investments, since firms would have to disclose that their “AI tools are barred from accessing Europe’s valuable markets,” Ryan said.

Calling the reaction to the X fine “extreme,” Ryan pushed for von der Leyen to advance on both fronts, forecasting that “the AI bubble would be unlikely to survive this double shock” and likely neither could Trump’s approval ratings. There’s also a possibility that tech firms could pressure Trump to back down if coping with any increased enforcement threatens AI progress.

Although Wu suggested that Big Tech firms like Google and Meta would likely be “insulated” from the AI bubble bursting, Google CEO Sundar Pichai doesn’t seem so sure. In November, Pichai told the BBC that if AI investments didn’t pay off quickly enough, he thinks “no company is going to be immune, including us.”


Elon Musk’s X first to be fined under EU’s Digital Services Act

Elon Musk’s X became the first large online platform fined under the European Union’s Digital Services Act on Friday.

The European Commission announced that X would be fined nearly $140 million, with the potential to face “periodic penalty payments” if the platform fails to make corrections.

A third of the fine came from one of the first moves Musk made when taking over Twitter. In November 2022, he ended the platform’s historical practice of using blue checkmarks to verify the identities of notable users. Instead, Musk started selling blue checks for about $8 per month, immediately prompting a wave of imposter accounts pretending to be notable celebrities, officials, and brands.

Today, X still prominently advertises that paying for checks is the only way to “verify” an account on the platform. But the commission, which has been investigating X since 2023, concluded that “X’s use of the ‘blue checkmark’ for ‘verified accounts’ deceives users.”

This violates the DSA as the “deception exposes users to scams, including impersonation frauds, as well as other forms of manipulation by malicious actors,” the commission wrote.

Interestingly, the commission concluded that X made it harder to identify bots, even though Musk’s professed goal of eliminating bots was a primary reason he gave for buying Twitter. Perhaps validating the EU’s concerns, X recently received backlash after a feature change accidentally exposed that some of the platform’s biggest MAGA influencers were based “in Eastern Europe, Thailand, Nigeria, Bangladesh, and other parts of the world, often linked to online scams and schemes,” Futurism reported.

Although the DSA does not mandate the verification of users, “it clearly prohibits online platforms from falsely claiming that users have been verified, when no such verification took place,” the commission said. X now has 60 days to share information on the measures it will take to fix the compliance issue.


Musk’s X posts on ketamine, Putin spur release of his security clearances

“A disclosure, even with redactions, will reveal whether a security clearance was granted with or without conditions or a waiver,” DCSA argued.

Ultimately, DCSA failed to prove that Musk risked “embarrassment or humiliation” not only if the public learned what specific conditions or waivers applied to Musk’s clearances but also if there were any conditions or waivers at all, Cote wrote.

Three cases that DCSA cited to support this position—including a case where victims of Jeffrey Epstein’s trafficking scheme had a substantial privacy interest in non-disclosure of detailed records—do not support the government’s logic, Cote said. The judge explained that the disclosures would not have affected the privacy rights of any third parties, emphasizing that “Musk’s diminished privacy interest is underscored by the limited information plaintiffs sought in their FOIA request.”

Musk’s X posts discussing his occasional use of prescription ketamine, along with his disclosure that his smoking marijuana on a podcast prompted NASA to require random drug testing, “only enhance” the public’s interest in how Musk’s security clearances were vetted, Cote wrote. Additionally, Musk has posted about speaking with Vladimir Putin, prompting substantial public interest in how his foreign contacts may or may not restrict his security clearances. More than 2 million people viewed Musk’s X posts on these subjects, the judge wrote, noting that:

It is undisputed that drug use and foreign contacts are two factors DCSA considers when determining whether to impose conditions or waivers on a security clearance grant. DCSA fails to explain why, given Musk’s own, extensive disclosures, the mere disclosure that a condition or waiver exists (or that no condition or waiver exists) would subject him to ‘embarrassment or humiliation.’

Rather, for the public, “the list of Musk’s security clearances, including any conditions or waivers, could provide meaningful insight into DCSA’s performance of that duty and responses to Musk’s admissions, if any,” Cote wrote.

In a footnote, Cote said that this substantial public interest existed before Musk became a special government employee, ruling that DCSA was wrong to block the disclosures seeking information on Musk as a major government contractor. Her ruling likely paves the way for the NYT or other news organizations to submit FOIA requests for a list of Musk’s clearances while he helmed DOGE.

It’s not immediately clear when the NYT will receive the list it requested in 2024, but the government has until October 17 to request redactions before the list is made public.

“The Times brought this case because the public has a right to know about how the government conducts itself,” Charlie Stadtlander, an NYT spokesperson, said. “The decision reaffirms that fundamental principle and we look forward to receiving the document at issue.”


OpenAI mocks Musk’s math in suit over iPhone/ChatGPT integration


“Fraction of a fraction of a fraction”

xAI’s claim that Apple gave ChatGPT a monopoly on prompts is “baseless,” OpenAI says.

OpenAI and Apple have moved to dismiss a lawsuit by Elon Musk’s xAI alleging that ChatGPT’s integration into a “handful” of iPhone features violated antitrust laws by giving OpenAI a monopoly on prompts and Apple a new path to block rivals in the smartphone industry.

The lawsuit was filed in August after Musk raged on X about Apple never listing Grok on its editorially curated “Must Have” apps list, which ChatGPT frequently appeared on.

According to Musk, Apple linking ChatGPT to Siri and other native iPhone features gave OpenAI exclusive access to billions of prompts that only OpenAI can use as valuable training data to maintain its dominance in the chatbot market. However, OpenAI and Apple are now mocking Musk’s math in court filings, urging the court to agree that xAI’s lawsuit is doomed.

As OpenAI argued, the estimates in xAI’s complaint seemed “baseless,” with Musk hesitant to even “hazard a guess” at what portion of the chatbot market is being foreclosed by the OpenAI/Apple deal.

xAI suggested that the ChatGPT integration may give OpenAI “up to 55 percent” of the potential chatbot prompts in the market, which could mean anywhere from 0 to 55 percent, OpenAI and Apple noted.

Musk’s company apparently arrived at this vague estimate by doing “back-of-the-envelope math,” and the court should reject his complaint, OpenAI argued. That math “was evidently calculated by assuming that Siri fields ‘1.5 billion user requests per day globally,’ then dividing that quantity by the ‘total prompts for generative AI chatbots in 2024’”—“apparently 2.7 billion per day,” OpenAI explained. (Dividing 1.5 billion by 2.7 billion works out to roughly 55 percent.)

These estimates “ignore the facts” that “ChatGPT integration is only available on the latest models of iPhones, which allow users to opt into the integration,” OpenAI argued. And any user who opts in must also link their ChatGPT account before OpenAI can train on their data, OpenAI said, further restricting the potential prompt pool.

By Musk’s own logic, OpenAI alleged, “the relevant set of Siri prompts thus cannot plausibly be 1.5 billion per day, but is instead an unknown, unpleaded fraction of a fraction of a fraction of that number.”

Additionally, OpenAI mocked Musk for using 2024 statistics, writing that xAI failed to explain “the logic of using a year-old estimate of the number of prompts when the pleadings elsewhere acknowledge that the industry is experiencing ‘exponential growth.'”

Apple’s filing agreed that Musk’s calculations “stretch logic,” appearing “to rest on speculative and implausible assumptions that the agreement gives ChatGPT exclusive access to all Siri requests from all Apple devices (including older models), and that OpenAI may use all such requests to train ChatGPT and achieve scale.”

“Not all Siri requests” result in ChatGPT prompts that OpenAI can train on, Apple noted, “even by users who have enabled devices and opt in.”

OpenAI reminds court of Grok’s MechaHitler scandal

OpenAI argued that Musk’s lawsuit is part of a pattern of harassment that OpenAI previously described as “unrelenting” since ChatGPT’s successful debut, alleging it was “the latest effort by the world’s wealthiest man to stifle competition in the world’s most innovative industry.”

As OpenAI sees it, “Musk’s pretext for litigation this time is that Apple chose to offer ChatGPT as an optional add-on for several built-in applications on its latest iPhones,” without giving Grok the same deal. But OpenAI noted that the integration was rolled out around the same time that Musk removed “woke filters” that caused Grok to declare itself “MechaHitler.” For Apple, it was a business decision to avoid Grok, OpenAI argued.

Apple did not reference the Grok scandal in its filing but in a footnote confirmed that “vetting of partners is particularly important given some of the concerns about generative AI chatbots, including on child safety issues, nonconsensual intimate imagery, and ‘jailbreaking’—feeding input to a chatbot so it ignores its own safety guardrails.”

A similar logic was applied to Apple’s decision not to highlight Grok as a “Must Have” app, its filing said. After Musk’s public rant about Grok’s exclusion on X, “Apple employees explained the objective reasons why Grok was not included on certain lists, and identified app improvements,” Apple noted, but instead of making changes, xAI filed the lawsuit.

Also taking time to point out the obvious, Apple argued that Musk was fixated on the fact that his chart-topping apps never make the “Must Have Apps” list, suggesting that Apple’s picks should always mirror “Top Charts,” which tracks popular downloads.

“That assumes that the Apple-curated Must-Have Apps List must be distorted if it does not strictly parrot App Store Top Charts,” Apple argued. “But that assumption is illogical: there would be little point in maintaining a Must-Have Apps List if all it did was restate what Top Charts say, rather than offer Apple’s editorial recommendations to users.”

Likely most relevant to the antitrust charges, Apple accused Musk of improperly arguing that “Apple cannot partner with OpenAI to create an innovative feature for iPhone users without simultaneously partnering with every other generative AI chatbot—regardless of quality, privacy or safety considerations, technical feasibility, stage of development, or commercial terms.”

“No facts plausibly” support xAI’s “assertion that Apple intentionally ‘deprioritized'” xAI apps “as part of an illegal conspiracy or monopolization scheme,” Apple argued.

And most glaringly, Apple noted that xAI is not a rival or consumer in the smartphone industry, where it alleges competition is being harmed. Apple urged the court to reject Musk’s theory that Apple is incentivized to boost OpenAI to prevent xAI’s ascent in building a “super app” that would render smartphones obsolete. If Musk’s super app dream is even possible, Apple argued, it’s at least a decade off, insisting that as-yet-undeveloped apps should not serve as the basis for blocking Apple’s measured plan to better serve customers with sophisticated chatbot integration.

“Antitrust laws do not require that, and for good reason: imposing such a rule on businesses would slow innovation, reduce quality, and increase costs, all ultimately harming the very consumers the antitrust laws are meant to protect,” Apple argued.

Musk’s weird smartphone market claim, explained

Apple alleged that Musk’s “grievance” can be “reduced to displeasure that Apple has not yet ‘integrated with any other generative AI chatbots’ beyond ChatGPT, such as those created by xAI, Google, and Anthropic.”

In a footnote, the smartphone giant noted that by xAI’s logic, Musk’s social media platform X “may be required to integrate all other chatbots—including ChatGPT—on its own social media platform.”

But antitrust law doesn’t work that way, Apple argued, urging the court to reject xAI’s claims of alleged market harms that “rely on a multi-step chain of speculation on top of speculation.” As Apple summarized, xAI contends that “if Apple never integrated ChatGPT,” xAI could win in both chatbot and smartphone markets, but only if:

1. Consumers would choose to send additional prompts to Grok (rather than other generative AI chatbots).

2. The additional prompts would result in Grok achieving scale and quality it could not otherwise achieve.

3. As a result, the X app would grow in popularity because it is integrated with Grok.

4. X and xAI would therefore be better positioned to build so-called “super apps” in the future, which the complaint defines as “multi-functional” apps that offer “social connectivity and messaging, financial services, e-commerce, and entertainment.”

5. Once developed, consumers might choose to use X’s “super app” for various functions.

6. “Super apps” would replace much of the functionality of smartphones and consumers would care less about the quality of their physical phones and rely instead on these hypothetical “super apps.”

7. Smartphone manufacturers would respond by offering more basic models of smartphones with less functionality.

8. iPhone users would decide to replace their iPhones with more “basic smartphones” with “super apps.”

Apple insisted that nothing in its OpenAI deal prevents Musk from building his super apps, while noting that, having integrated Grok into X, Musk understands that integrating even a single chatbot is a “major undertaking” that requires “substantial investment.” That “concession” alone “underscores the massive resources Apple would need to devote to integrating every AI chatbot into Apple Intelligence,” while navigating potential user safety risks.

The iPhone maker also reminded the court that it has always planned to integrate other chatbots into its native features after investing in and testing Apple Intelligence’s performance, relying for now on what Apple deems the best chatbot on the market today.

Backing Apple up, OpenAI noted that Musk’s complaint seemed to cherry-pick testimony from Google CEO Sundar Pichai, claiming that “Google could not reach an agreement to integrate” Gemini “with Apple because Apple had decided to integrate ChatGPT.”

“The full testimony recorded in open court reveals Mr. Pichai attesting to his understanding that ‘Apple plans to expand to other providers for Generative AI distribution’ and that ‘[a]s CEO of Google, [he is] hoping to execute a Gemini distribution agreement with Apple’ later in 2025,” OpenAI argued.


Elon Musk’s “thermonuclear” Media Matters lawsuit may be fizzling out


Judge blocks FTC’s Media Matters probe as a likely First Amendment violation.

Media Matters for America (MMFA)—a nonprofit that Elon Musk accused of sparking a supposedly illegal ad boycott on X—won its bid to block a sweeping Federal Trade Commission (FTC) probe that appeared to have rushed to silence Musk’s foe without ever adequately explaining why the government needed to get involved.

In her opinion granting MMFA’s preliminary injunction, US District Judge Sparkle L. Sooknanan—a Joe Biden appointee—agreed that the FTC’s probe was likely to be ruled as a retaliatory violation of the First Amendment.

Warning that the FTC’s targeting of reporters was particularly concerning, Sooknanan wrote that the “case presents a straightforward First Amendment violation,” where it’s reasonable to conclude that conservative FTC staffers were perhaps motivated to eliminate a media organization dedicated to correcting conservative misinformation online.

“It should alarm all Americans when the Government retaliates against individuals or organizations for engaging in constitutionally protected public debate,” Sooknanan wrote. “And that alarm should ring even louder when the Government retaliates against those engaged in newsgathering and reporting.”

FTC staff social posts may be evidence of retaliation

In 2023, Musk vowed to file a “thermonuclear” lawsuit because advertisers abandoned X after MMFA published a report showing that major brands’ ads had appeared next to pro-Nazi posts on X. Musk then tried to sue MMFA “all over the world,” Sooknanan wrote, while “seemingly at the behest of Stephen Miller, the current White House Deputy Chief of Staff, the Missouri and Texas Attorneys General” joined Musk’s fight, starting their own probes.

But Musk’s “thermonuclear” attack—attempting to fight MMFA on as many fronts as possible—has appeared to be fizzling out. A federal district court preliminarily enjoined the “aggressive” global litigation strategy, and the same court issued the recent FTC ruling that also preliminarily enjoined the AG probes “as likely being retaliatory in violation of the First Amendment.”

The FTC under the Trump administration appeared to be the next line of offense, supporting Musk’s attack on MMFA. And Sooknanan said that FTC Chair Andrew Ferguson’s own comments in interviews, which characterized Media Matters and the FTC’s probe “in ideological terms,” seem to indicate “at a minimum that Chairman Ferguson saw the FTC’s investigation as having a partisan bent.”

A huge part of the problem for the FTC was social media comments that some senior FTC staffers had posted before Ferguson appointed them. Those posts appeared to show the FTC growing increasingly partisan, perhaps pointedly hiring staffers who it knew would help take down groups like MMFA.

As examples, Sooknanan pointed to Joe Simonson, the FTC’s director of public affairs, who had posted that MMFA “employed a number of stupid and resentful Democrats who went to like American University and didn’t have the emotional stability to work as an assistant press aide for a House member.” And Jon Schwepp, Ferguson’s senior policy advisor, had claimed that Media Matters—which he branded as the “scum of the earth”—”wants to weaponize powerful institutions to censor conservatives.” And finally, Jake Denton, the FTC’s chief technology officer, had alleged that MMFA is “an organization devoted to pressuring companies into silencing conservative voices.”

Further, the timing of the FTC investigation—arriving “on the heels of other failed attempts to seek retribution”—seemed to suggest it was “motivated by retaliatory animus,” the judge said. The FTC’s “fast-moving” investigation suggests that Ferguson “was chomping at the bit to ‘take investigative steps in the new administration under President Trump’ to make ‘progressives’ like Media Matters ‘give up,'” Sooknanan wrote.

Musk’s fight continues in Texas, for now

Possibly most damning to the FTC’s case, Sooknanan suggested that the FTC has never adequately explained why it’s probing Media Matters. In the “Subject of Investigation” field, the FTC wrote only “see attached,” but the attachment was just a list of specific demands and directions to comply with those demands.

Eventually, the FTC offered “something resembling an explanation,” Sooknanan said. But their “ultimate explanation”—that Media Matters may have information related to a supposedly illegal coordinated campaign to game ad pricing, starve revenue, and censor conservative platforms—”does not inspire confidence that they acted in good faith,” Sooknanan said. The judge considered it problematic that the FTC never explained why it has reason to believe MMFA has the information it’s seeking. Or why its demand list went “well beyond the investigation’s purported scope,” including “a reporter’s resource materials,” financial records, and all documents submitted so far in Musk’s X lawsuit.

“It stands to reason,” Sooknanan wrote, that the FTC launched its probe “because it wanted to continue the years’ long pressure campaign against Media Matters by Mr. Musk and his political allies.”

In its defense, the FTC argued that all civil investigative demands are initially broad, insisting that MMFA would have had the opportunity to narrow the demands if things had proceeded without the lawsuit. But Sooknanan declined to “consider a hypothetical narrowed” demand list instead of “the actual demand issued to Media Matters,” while noting that the court was “troubled” by the FTC’s suggestion that “the federal Government routinely issues civil investigative demands it knows to be overbroad with the goal of later narrowing those demands presumably in exchange for compliance.”

“Perhaps the Defendants will establish otherwise later in these proceedings,” Sooknanan wrote. “But at this stage, the record certainly supports that inference” that the FTC was politically motivated to back Musk’s fight.

As the FTC mulls a potential appeal, the only other major front of Musk’s fight with MMFA is the lawsuit that X Corp. filed in Texas. Musk allegedly expects more favorable treatment in the Texas court, and MMFA is currently pushing to transfer the case to California after previously arguing that Musk was venue shopping by filing the lawsuit in Texas, claiming that it should be “fatal” to his case.

Musk has so far kept the case in Texas, but risking a venue change could be enough to ultimately doom his “thermonuclear” attack on MMFA. To prevent that, X is arguing that it’s “hard to imagine” how changing the venue and starting over with a new judge two years into such complex litigation would best serve the “interests of justice.”

Media Matters, however, has “easily met” requirements to show that substantial damage has already been done—not just because MMFA has struggled financially and stopped reporting on X and the FTC, but because any loss of First Amendment freedoms “unquestionably constitutes irreparable injury.”

The FTC tried to claim that any reputational harm, financial harm, and self-censorship are “self-inflicted” wounds for MMFA. But the FTC did “not respond to the argument that the First Amendment injury itself is irreparable, thereby conceding it,” Sooknanan wrote. That likely weakens the FTC’s case in an appeal.

MMFA declined Ars’ request to comment. But despite the lawsuits reportedly plunging MMFA into a financial crisis, its president, Angelo Carusone, told The New York Times that “the court’s ruling demonstrates the importance of fighting over folding, which far too many are doing when confronted with intimidation from the Trump administration.”

“We will continue to stand up and fight for the First Amendment rights that protect every American,” Carusone said.


Musk threatens to sue Apple so Grok can get top App Store ranking

After spending last week hyping Grok’s spicy new features, Elon Musk kicked off this week by threatening to sue Apple for supposedly gaming the App Store rankings to favor ChatGPT over Grok.

“Apple is behaving in a manner that makes it impossible for any AI company besides OpenAI to reach #1 in the App Store, which is an unequivocal antitrust violation,” Musk wrote on X, without providing any evidence. “xAI will take immediate legal action.”

In another post, Musk tagged Apple, asking, “Why do you refuse to put either X or Grok in your ‘Must Have’ section when X is the #1 news app in the world and Grok is #5 among all apps?”

“Are you playing politics?” Musk asked. “What gives? Inquiring minds want to know.”

Apple did not respond to the post and has not responded to Ars’ request to comment.

At the heart of Musk’s complaints is an OpenAI partnership that Apple announced last year, integrating ChatGPT into versions of its iPhone, iPad, and Mac operating systems.

Musk has alleged that this partnership incentivized Apple to boost ChatGPT rankings. OpenAI’s popular chatbot “currently holds the top spot in the App Store’s ‘Top Free Apps’ section for iPhones in the US,” Reuters noted, “while xAI’s Grok ranks fifth and Google’s Gemini chatbot sits at 57th.” Sensor Tower data shows ChatGPT similarly tops Google Play Store rankings.

While Musk seems insistent that ChatGPT is artificially locked in the lead, fact-checkers on X added a community note to his post. They confirmed that at least one other AI tool has somewhat recently unseated ChatGPT in the US rankings. Back in January, DeepSeek topped App Store charts and held the lead for days, ABC News reported.

OpenAI did not immediately respond to Ars’ request to comment on Musk’s allegations, but an OpenAI developer, Steven Heidel, did add a quip in response to one of Musk’s posts, writing, “Don’t forget to also blame Google for OpenAI being #1 on Android, and blame SimilarWeb for putting ChatGPT above X on the most-visited websites list, and blame….”


xAI workers balked over training request to help “give Grok a face,” docs show

For the more than 200 employees who did not opt out, xAI asked that they record 15- to 30-minute conversations in which one employee posed as the potential Grok user and the other posed as the “host.” xAI was specifically looking for “imperfect data,” BI noted, expecting that training only on crystal-clear videos would limit Grok’s ability to interpret a wider range of facial expressions.

xAI’s goal was to help Grok “recognize and analyze facial movements and expressions, such as how people talk, react to others’ conversations, and express themselves in various conditions,” an internal document said. Allegedly among the only guarantees to employees—who likely recognized how sensitive facial data is—was a promise “not to create a digital version of you.”

To get the most out of data submitted by “Skippy” participants, dubbed tutors, xAI recommended that they never provide one-word answers, always ask follow-up questions, and maintain eye contact throughout the conversations.

The company also apparently provided scripts to evoke facial expressions they wanted Grok to understand, suggesting conversation topics like “How do you secretly manipulate people to get your way?” or “Would you ever date someone with a kid or kids?”

For xAI employees who provided facial training data, privacy concerns may still exist, considering that X—the social platform formerly known as Twitter, which was recently folded into xAI—was recently targeted by what Elon Musk called a “massive” cyberattack. Because of privacy risks ranging from identity theft to government surveillance, several states have passed strict biometric privacy laws to prevent companies from collecting such data without explicit consent.

xAI did not respond to Ars’ request for comment.


Researcher threatens X with lawsuit after being falsely linked to French probe

X claimed that David Chavalarias, “who spearheads the ‘Escape X’ campaign”—which is “dedicated to encouraging X users to leave the platform”—was chosen to assess the data with one of his prior research collaborators, Maziyar Panahi.

“The involvement of these individuals raises serious concerns about the impartiality, fairness, and political motivations of the investigation, to put it charitably,” X alleged. “A predetermined outcome is not a fair one.”

However, Panahi told Reuters that he believes X blamed him “by mistake,” based only on his prior association with Chavalarias. He further clarified that “none” of his projects with Chavalarias “ever had any hostile intent toward X” and threatened legal action to protect himself against defamation if he receives “any form of hate speech” due to X’s seeming error and mischaracterization of his research. An Ars review suggests his research on social media platforms predates Musk’s ownership of X and has probed whether certain recommendation systems potentially make platforms toxic or influence presidential campaigns.

“The fact my name has been mentioned in such an erroneous manner demonstrates how little regard they have for the lives of others,” Panahi told Reuters.

X denies being an “organized gang”

X suggests that it “remains in the dark as to the specific allegations made against the platform,” accusing French police of “distorting French law in order to serve a political agenda and, ultimately, restrict free speech.”

The press release is indeed vague on what exactly French police are seeking to uncover. All French authorities say is that they are probing X for alleged “tampering with the operation of an automated data processing system by an organized gang” and “fraudulent extraction of data from an automated data processing system by an organized gang.” But later, a French magistrate, Laure Beccuau, clarified in a statement that the probe was based on complaints that X is spreading “an enormous amount of hateful, racist, anti-LGBT+ and homophobic political content, which aims to skew the democratic debate in France,” Politico reported.


EU presses pause on probe of X as US trade talks heat up

While Trump and Musk have fallen out this year after forging a political alliance during the 2024 election, the US president has directly attacked EU penalties on US companies, calling them a “form of taxation” and comparing fines on tech companies to “overseas extortion.”

Despite the US pressure, commission president Ursula von der Leyen has explicitly stated that Brussels will not change its digital rulebook. In April, the bloc imposed a total of €700 million in fines on Apple and Facebook owner Meta for breaching antitrust rules.

But unlike the Apple and Meta investigations, which fall under the Digital Markets Act, there are no clear legal deadlines under the DSA. That gives the bloc more political leeway on when it announces its formal findings. The EU also has probes into Meta and TikTok under its content moderation rulebook.

The commission said the “proceedings against X under the DSA are ongoing,” adding that the enforcement of “our legislation is independent of the current ongoing negotiations.”

It added that it “remains fully committed to the effective enforcement of digital legislation, including the Digital Services Act and the Digital Markets Act.”

Anna Cavazzini, a European lawmaker for the Greens, said she expected the commission “to move on decisively with its investigation against X as soon as possible.”

“The commission must continue making changes to EU regulations an absolute red line in tariff negotiations with the US,” she added.

Alongside its probe into X’s transparency breaches, Brussels is also looking into content moderation at the company after Musk hosted Alice Weidel of the far-right Alternative for Germany for a conversation on the social media platform ahead of the country’s elections.

Some European lawmakers, as well as the Polish government, are also pressing the commission to open an investigation into Musk’s Grok chatbot after it spewed out antisemitic tropes last week.

X said it disagreed “with the commission’s assessment of the comprehensive work we have done to comply with the Digital Services Act and the commission’s interpretation of the Act’s scope.”

© 2025 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.
