
Platforms bend over backward to help DHS censor ICE critics, advocates say


Pam Bondi and Kristi Noem sued for coercing platforms into censoring ICE posts.

Credit: Aurich Lawson | Getty Images

Pressure is mounting on tech companies to shield users from unlawful government requests that advocates say are making it harder to reliably share information about Immigration and Customs Enforcement (ICE) online.

Alleging that ICE officers are being doxed or otherwise endangered, Trump officials have spent the last year targeting an unknown number of users and platforms with demands to censor content. Early lawsuits show that platforms have caved, even though experts say they could refuse these demands without a court order.

In a lawsuit filed on Wednesday, the Foundation for Individual Rights and Expression (FIRE) accused Attorney General Pam Bondi and Department of Homeland Security Secretary Kristi Noem of coercing tech companies into removing a wide range of content “to control what the public can see, hear, or say about ICE operations.”

It’s the second lawsuit alleging that Bondi and DHS officials are using regulatory power to pressure private platforms to suppress speech protected by the First Amendment. It follows a complaint from the developer of an app called ICEBlock, which Apple removed from the App Store in October. Officials aren’t rushing to resolve that case—last month, they requested more time to respond—so it may remain unclear until March what defense they plan to offer for the takedown demands.

That leaves community members who monitor ICE in a precarious situation, as critical resources could disappear at the department’s request with no warning.

FIRE says people have legitimate reasons to share information about ICE. Some communities focus on helping people avoid dangerous ICE activity, while others aim to hold the government accountable and raise public awareness of how ICE operates. Unless there’s proof of incitement to violence or a true threat, such expression is protected.

Despite the high bar for censoring online speech, lawsuits trace an escalating pattern of DHS targeting websites, app stores, and platforms, many of which have been willing to remove content the government dislikes.

Officials have ordered ICE-monitoring apps removed from app stores and even threatened to sanction CNN for simply reporting on the existence of one such app. Officials have also demanded that Meta delete at least one Chicago-based Facebook group with 100,000 members and made multiple unsuccessful attempts to unmask anonymous users behind other Facebook groups. Not even encrypted apps like Signal seem safe from officials’ apparent overreach: FBI Director Kash Patel recently said he has opened an investigation into Signal chats used by Minnesota residents to track ICE activity, NBC News reported.

As DHS censorship threats increase, platforms have done little to shield users, advocates say. Not only have they sometimes failed to reject unlawful orders that offered nothing more than “a bare mention of ‘officer safety/doxing’” as justification, but in one case, Google complied with a subpoena that left a critical section blank, the Electronic Frontier Foundation (EFF) reported.

For users, it’s increasingly difficult to trust that platforms won’t betray their own policies when faced with government intimidation, advocates say. Sometimes platforms notify users before complying with government requests, giving users a chance to challenge potentially unconstitutional demands. But in other cases, users learn about the requests only as platforms comply with them—even when those platforms have promised that would never happen.

Government emails with platforms may be exposed

Platforms could face backlash from users if lawsuits expose their communications with the government, a possibility in the coming months. Last fall, the EFF sued after the DOJ, DHS, ICE, and Customs and Border Protection failed to respond to Freedom of Information Act requests seeking emails between the government and platforms about takedown demands. Other lawsuits may surface emails in discovery. In the coming weeks, a judge will set a schedule for the EFF’s litigation.

“The nature and content of the Defendants’ communications with these technology companies” is “critical for determining whether they crossed the line from governmental cajoling to unconstitutional coercion,” EFF’s complaint said.

EFF Senior Staff Attorney Mario Trujillo told Ars that the EFF is confident it can win the fight to expose government demands, but like most FOIA lawsuits, the case is expected to move slowly. That’s unfortunate, he said, because ICE activity is escalating, and delays in addressing these concerns could irreparably harm speech at a pivotal moment.

Like users, platforms are seemingly victims, too, FIRE senior attorney Colin McDonnell told Ars.

They’ve been forced to override their own editorial judgment while navigating implicit threats from the government, he said.

“If Attorney General Bondi demands that they remove speech, the platform is going to feel like they have to comply; they don’t have a choice,” McDonnell said.

But platforms do have a choice and could be doing more to protect users, the EFF has said. Platforms could even serve as a first line of defense, requiring officials to get a court order before complying with any requests.

Platforms may now have good reason to push back against government requests—and to give users the tools to do the same. Trujillo noted that while courts have been slow to address the ICEBlock removal and FOIA lawsuits, the government has quickly withdrawn requests to unmask Facebook users soon after litigation began.

“That’s like an acknowledgement that the Trump administration, when actually challenged in court, wasn’t even willing to defend itself,” Trujillo said.

Platforms could view that as evidence that government pressure only works when platforms fail to put up a bare-minimum fight, Trujillo said.

Platforms “bend over backward” to appease DHS

An open letter from the EFF and the American Civil Liberties Union (ACLU) documented two instances of tech companies complying with government demands without first notifying users.

The letter called out Meta for unmasking at least one user without prior notice, which the groups noted “potentially” occurred due to a “technical glitch.”

More troubling than buggy notifications, however, is the possibility that platforms may be routinely delaying notice until it’s too late.

After Google “received an ICE subpoena for user data and fulfilled it on the same day that it notified the user,” the company admitted that “sometimes when Google misses its response deadline, it complies with the subpoena and provides notice to a user at the same time to minimize the delay for an overdue production,” the letter said.

“This is a worrying admission that violates [Google’s] clear promise to users, especially because there is no legal consequence to missing the government’s response deadline,” the letter said.

Platforms face no sanctions for refusing to comply with government demands that have not been court-ordered, the letter noted. That’s why the EFF and ACLU have urged companies to use their “immense resources” to shield users who may not be able to drop everything and fight unconstitutional data requests.

In their letter, the groups asked companies to insist on court intervention before complying with a DHS subpoena. They should also resist DHS “gag orders” that ask platforms to hand over data without notifying users.

Instead, they should commit to giving users “as much notice as possible when they are the target of a subpoena,” as well as a copy of the subpoena. Ideally, platforms would also link users to legal aid resources and take up legal fights on behalf of vulnerable users, advocates suggested.

That’s not what’s happening so far. Trujillo told Ars that it feels like “companies have bent over backward to appease the Trump administration.”

The tide could turn this year if courts side with app makers behind crowdsourcing apps like ICEBlock and Eyes Up, who are suing to end the alleged government coercion. FIRE’s McDonnell, who represents the creator of Eyes Up, told Ars that platforms may feel more comfortable exercising their own editorial judgment moving forward if a court declares they were coerced into removing content.

DHS can’t use doxing to dodge First Amendment

FIRE’s lawsuit accuses Bondi and Noem of coercing Meta to disable a Facebook group with 100,000 members called “ICE Sightings–Chicagoland.”

The popularity of that group surged during “Operation Midway Blitz,” when hundreds of agents arrested more than 4,500 people over weeks of raids that used tear gas in neighborhoods and caused car crashes and other violence. Arrests included US citizens and immigrants with lawful status, which “gave Chicagoans reason to fear being injured or arrested due to their proximity to ICE raids, no matter their immigration status,” FIRE’s complaint said.

Kassandra Rosado, a lifelong Chicagoan and US citizen of Mexican descent, started the Facebook group and served as admin, moderating content with other volunteers. She prohibited “hate speech or bullying” and “instructed group members not to post anything threatening, hateful, or that promoted violence or illegal conduct.”

Facebook only ever flagged five posts that supposedly violated community guidelines, but in those warnings, the company reassured Rosado that “groups aren’t penalized when members or visitors break the rules without admin approval.”

Rosado had no reason to suspect that her group was in danger of removal. When Facebook disabled her group, it told Rosado the group violated community standards “multiple times.” But her complaint noted that, confusingly, “Facebook policies don’t provide for disabling groups if a few members post ostensibly prohibited content; they call for removing groups when the group moderator repeatedly either creates prohibited content or affirmatively ‘approves’ such content.”

Facebook’s decision came after a right-wing influencer, Laura Loomer, tagged Noem and Bondi in a social media post alleging that the group was “getting people killed.” Within two days, Bondi bragged that she had gotten the group disabled while claiming that it “was being used to dox and target [ICE] agents in Chicago.”

McDonnell told Ars it seems clear that Bondi selectively uses the term “doxing” when people post images from ICE arrests. He pointed to “ICE’s own social media accounts,” which share favorable opinions of ICE alongside videos and photos of ICE arrests that Bondi doesn’t consider doxing.

“Rosado’s creation of Facebook groups to send and receive information about where and how ICE carries out its duties in public, to share photographs and videos of ICE carrying out its duties in public, and to exchange opinions about and criticism of ICE’s tactics in carrying out its duties, is speech protected by the First Amendment,” FIRE argued.

The same goes for speech managed by Mark Hodges, a US citizen who resides in Indiana. He created an app called Eyes Up to serve as an archive of ICE videos. Apple removed Eyes Up from the App Store around the same time that it removed ICEBlock.

“It is just videos of what government employees did in public carrying out their duties,” McDonnell said. “It’s nothing even close to threatening or doxing or any of these other theories that the government has used to justify suppressing speech.”

Bondi bragged that she had gotten ICEBlock banned, and FIRE’s complaint confirmed that Hodges’ company received the same notification that ICEBlock’s developer got after Bondi’s victory lap. The notice said that Apple received “information” from “law enforcement” claiming that the apps had violated Apple guidelines against “defamatory, discriminatory, or mean-spirited content.”

Apple did not reach the same conclusion when it independently reviewed Eyes Up prior to government meddling, FIRE’s complaint said. Notably, the app remains available on Google Play, and Rosado now manages a new Facebook group with similar content but somewhat tighter restrictions on who can join. Neither has required urgent intervention from the tech giants or the government.

McDonnell told Ars that it’s harmful for DHS to water down the meaning of doxing when pushing platforms to remove content critical of ICE.

“When most of us hear the word ‘doxing,’ we think of something that’s threatening, posting private information along with home addresses or places of work,” McDonnell said. “And it seems like the government is expanding that definition to encompass just sharing, even if there’s no threats, nothing violent. Just sharing information about what our government is doing.”

Expanding the definition and then using that term to justify suppressing speech is concerning, he said, especially since the First Amendment includes no exception for “doxing,” even if DHS ever were to provide evidence of it.

To suppress speech, officials must show that groups are inciting violence or making true threats. FIRE has alleged that the government has not met “the extraordinary justifications required for a prior restraint” on speech and is instead using vague doxing threats to discriminate against speech based on viewpoint. They’re seeking a permanent injunction barring officials from coercing tech companies into censoring ICE posts.

If plaintiffs win, the censorship threats could subside, and tech companies may feel safe reinstating apps and Facebook groups, advocates told Ars. That could potentially revive archives documenting thousands of ICE incidents and reconnect webs of ICE watchers who lost access to valued feeds.

Until courts possibly end threats of censorship, the most cautious community members are moving local ICE-watch efforts to group chats and listservs that are harder for the government to disrupt, Trujillo told Ars.


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.


“IG is a drug”: Internal messages may doom Meta at social media addiction trial


Social media addiction test case

A loss could cost social media companies billions and force changes on platforms.

Mark Zuckerberg testifies during the US Senate Judiciary Committee hearing, “Big Tech and the Online Child Sexual Exploitation Crisis,” in 2024.

Anxiety, depression, eating disorders, and death. These can be the consequences for vulnerable kids who get addicted to social media, according to more than 1,000 personal injury lawsuits that seek to punish Meta and other platforms for allegedly prioritizing profits while downplaying child safety risks for years.

Social media companies have faced scrutiny before, with congressional hearings forcing CEOs to apologize, but until now, they’ve never had to convince a jury that they aren’t liable for harming kids.

This week, the first high-profile lawsuit—considered a “bellwether” case that could set meaningful precedent in the hundreds of other complaints—goes to trial. That lawsuit documents the case of a 19-year-old, K.G.M., who hopes the jury will agree that Meta and YouTube caused psychological harm by designing features like infinite scroll and autoplay to push her down a path that she alleges triggered depression, anxiety, self-harm, and suicidality.

TikTok and Snapchat were also targeted by the lawsuit, but both have settled. The Snapchat settlement came last week, while TikTok settled on Tuesday just hours before the trial started, Bloomberg reported.

For now, YouTube and Meta remain in the fight. K.G.M. allegedly started watching YouTube when she was 6 years old and joined Instagram by age 11. She’s seeking unspecified damages—potentially including punitive damages—to help her family recoup losses from her pain and suffering and to punish social media companies, deterring them from promoting harmful features to kids. She also wants the court to require prominent safety warnings on platforms to help parents be aware of the risks.

Platforms failed to blame mom for not reading TOS

A loss could cost social media companies billions, CNN reported.

To avoid that, platforms have alleged that other factors caused K.G.M.’s psychological harm—like school bullies and family troubles—while insisting that Section 230 and the First Amendment protect platforms from being blamed for any harmful content targeted to K.G.M.

They also argued that K.G.M.’s mom never read the terms of service and, therefore, supposedly would not have benefited from posted warnings. And ByteDance, before settling, seemingly tried to pass the buck by claiming that K.G.M. “already suffered mental health harms before she began using TikTok.”

But the judge, Carolyn B. Kuhl, denied all of the platforms’ motions for summary judgment, ruling that K.G.M. showed enough evidence that her claims don’t stem from content for the case to go to trial.

Further, platforms can’t liken warnings buried in terms of service to prominently displayed warnings, Kuhl said, since K.G.M.’s mom testified she would have restricted the minor’s app usage if she were aware of the alleged risks.

Two platforms settling before the trial seems like a good sign for K.G.M. However, Snapchat has not settled other social media addiction lawsuits that it’s involved in, including one raised by school districts, and it may be waiting to see how K.G.M.’s case shakes out before taking further action.

To win, K.G.M.’s lawyers will need to “parcel out” how much harm is attributed to each platform, due to design features, not the content that was targeted to K.G.M., Clay Calvert, a technology policy expert and senior fellow at a think tank called the American Enterprise Institute, wrote. Internet law expert Eric Goldman told The Washington Post that detailing those harms will likely be K.G.M.’s biggest struggle, since social media addiction has yet to be legally recognized, and tracing who caused what harms may not be straightforward.

However, Matthew Bergman, founder of the Social Media Victims Law Center and one of K.G.M.’s lawyers, told the Post that K.G.M. is prepared to put up this fight.

“She is going to be able to explain in a very real sense what social media did to her over the course of her life and how in so many ways it robbed her of her childhood and her adolescence,” Bergman said.

Internal messages may be “smoking-gun evidence”

The research is unclear on whether social media is harmful for kids or whether social media addiction exists, Tamar Mendelson, a professor at Johns Hopkins Bloomberg School of Public Health, told the Post. And so far, research only shows a correlation between Internet use and mental health, Mendelson noted, which could doom K.G.M.’s case and others’.

However, social media companies’ internal research might concern a jury, Bergman told the Post. On Monday, the Tech Oversight Project, a nonprofit working to rein in Big Tech, published a report analyzing recently unsealed documents in K.G.M.’s case that supposedly provide “smoking-gun evidence” that platforms “purposefully designed their social media products to addict children and teens with no regard for known harms to their wellbeing”—while putting increased engagement from young users at the center of their business models.

In the report, Sacha Haworth, executive director of The Tech Oversight Project, accused social media companies of “gaslighting and lying to the public for years.”

Most of the recently unsealed documents highlighted in the report came from Meta, which also faces a trial from dozens of state attorneys general on social media addiction this year.

Those documents included an email stating that Mark Zuckerberg—who is expected to testify at K.G.M.’s trial—decided that Meta’s top priority for 2017 was locking teens into using the company’s family of apps.

The next year, a Facebook internal document showed that the company pondered letting “tweens” access a private mode inspired by the popularity of fake Instagram accounts teens know as “finstas.” That document included an “internal discussion on how to counter the narrative that Facebook is bad for youth and admission that internal data shows that Facebook use is correlated with lower well-being (although it says the effect reverses longitudinally).”

Other allegedly damning documents showed Meta seemingly bragging that “teens can’t switch off from Instagram even if they want to” and an employee declaring, “oh my gosh yall IG is a drug,” likening all social media platforms to “pushers.”

Similarly, a 2020 Google document detailed the company’s plan to keep kids engaged “for life,” despite internal research showing young YouTube users were more likely to “disproportionately” suffer from “habitual heavy use, late night use, and unintentional use” deteriorating their “digital well-being.”

Shorts, YouTube’s TikTok rival, is also a concern for the parents suing. Three years after that 2020 document, records showed Google choosing to target teens with Shorts, despite research flagging that the “two biggest challenges for teen wellbeing on YouTube” were prominently linked to watching Shorts. Those challenges included Shorts bombarding teens with “low quality content recommendations that can convey & normalize unhealthy beliefs or behaviors” and teens reporting that “prolonged unintentional use” was “displacing valuable activities like time with friends or sleep.”

Bergman told the Post that these documents will help the jury decide if companies owed young users better protections sooner but prioritized profits while pushing off interventions that platforms have more recently introduced amid mounting backlash.

“Internal documents that have been held establishing the willful misconduct of these companies are going to—for the first time—be given a public airing,” Bergman said. “The public is going to know for the first time what social media companies have done to prioritize their profits over the safety of our kids.”

Platforms failed to get experts’ testimony tossed

One seeming advantage K.G.M. has heading into the trial is that tech companies failed to get the expert testimony backing her claims dismissed.

Platforms tried to exclude testimony from several experts, including Kara Bagot, a board-certified adult, child, and adolescent psychiatrist, as well as Arturo Bejar, a former Meta safety researcher and whistleblower. They claimed that experts’ opinions were irrelevant because they were based on K.G.M.’s interactions with content. They also suggested that child safety experts’ opinions “violate the standards of reliability” since the causal links they draw don’t account for “alternative explanations” and allegedly “contradict the experts’ own statements in non-litigation contexts.”

However, Kuhl ruled that platforms will have the opportunity to counter the experts’ opinions at trial, while reminding social media companies that “ultimately, the critical question of causation is one that must be determined by the jury.” Only one expert’s testimony was excluded, the Social Media Victims Law Center noted: that of a licensed clinical psychologist deemed unqualified.

“Testimony by Bagot as to design features that were employed on TikTok as well as on other social media platforms is directly relevant to the question of whether those design features cause the type of harms allegedly suffered by K.G.M. here,” Kuhl wrote.

That means that a jury will get a chance to weigh Bagot’s opinion that “social media overuse and addiction causes or plays a substantial role in causing or exacerbating psychopathological harms in children and youth, including depression, anxiety and eating disorders, as well as internalizing and externalizing psychopathological symptoms.”

The jury will also consider the insights and information Bejar (a fact witness and former consultant for the company) will share about Meta’s internal safety studies. That includes hearing about “his personal knowledge and experience related to how design defects on Meta’s platforms can cause harm to minors (e.g., age verification, reporting processes, beauty filters, public like counts, infinite scroll, default settings, private messages, reels, ephemeral content, and connecting children with adult strangers),” as well as “harms associated with Meta’s platforms including addiction/problematic use, anxiety, depression, eating disorders, body dysmorphia, suicidality, self-harm, and sexualization.” 

If K.G.M. can convince the jury that she was not harmed by platforms’ failure to remove content but by companies “designing their platforms to addict kids” and “developing algorithms that show kids not what they want to see but what they cannot look away from,” Bergman thinks her case could become a “data point” for “settling similar cases en masse,” he told Barron’s.

“She is very typical of so many children in the United States—the harms that they’ve sustained and the way their lives have been altered by the deliberate design decisions of the social media companies,” Bergman told the Post.


Meta offers EU users ad-light option in push to end investigation

“We acknowledge the European Commission’s statement,” said Meta. “Personalized ads are vital for Europe’s economy.”

The investigation took place under the EU’s landmark Digital Markets Act, which is designed to tackle the power of Big Tech giants and is among the bloc’s tech regulations that have drawn fierce pushback from the Trump administration.

The announcement comes only days after Brussels launched an antitrust investigation into Meta over its new policy on artificial intelligence providers’ access to WhatsApp—a case that underscores the commission’s readiness to use its powers to challenge Big Tech.

That WhatsApp probe follows recent DMA investigations into Google’s parent company, Alphabet, over its ranking of news outlets in search results, and into Amazon and Microsoft over their cloud computing services.

Last week, the commission also fined Elon Musk’s X 120 million euros for breaking the bloc’s digital transparency rules. The X sanction drew heavy criticism from a wide range of US government officials, including US Secretary of State Marco Rubio, who said the fine is “an attack on all American tech platforms and the American people by foreign governments.”

Andrew Puzder, the US ambassador to the EU, said the fine “is the result of EU regulatory over-reach” and added that the Trump administration opposes “censorship and will challenge burdensome regulations that target US companies abroad.”

© 2025 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.


Meta wins monopoly trial, convinces judge that social networking is dead


People are “bored” by their friends’ content, judge ruled, siding with Meta.

Mark Zuckerberg arrives at court after the Federal Trade Commission alleged that the acquisitions of Instagram in 2012 and WhatsApp in 2014 gave Meta a social media monopoly. Credit: Bloomberg / Contributor | Bloomberg

After years of pushback from the Federal Trade Commission over Meta’s acquisitions of Instagram and WhatsApp, Meta has defeated the FTC’s monopoly claims.

In a Tuesday ruling, US District Judge James Boasberg said the FTC failed to show that Meta has a monopoly in a market dubbed “personal social networking.” In that narrowly defined market, the FTC unsuccessfully argued, Meta faces only two rivals, Snapchat and MeWe, which struggle to compete due to Meta’s alleged monopoly.

But the days of grouping apps into “separate markets of social networking and social media” are over, Boasberg wrote. He cited the Greek philosopher Heraclitus, who “posited that no man can ever step into the same river twice,” while telling the FTC that it missed its chance to block Meta’s purchases.

Essentially, Boasberg agreed with Meta that social media—as it was known in Facebook’s early days—is dead. And that means that Meta now competes with a broader set of rival apps, which includes two hugely popular platforms: TikTok and YouTube.

“When the evidence implies that consumers are reallocating massive amounts of time from Meta’s apps to these rivals and that the amount of substitution has forced Meta to invest gobs of cash to keep up, the answer is clear: Meta is not a monopolist insulated from competition,” Boasberg wrote.

In fact, adding just TikTok alone to the market defeated the FTC’s claims, Boasberg wrote, leaving him to conclude that “Meta holds no monopoly in the relevant market.”

The FTC is not happy about the loss, which comes after Boasberg determined that one of the agency’s key expert witnesses, Scott Hemphill, could not have approached his testimony “with an open mind.” According to Boasberg, Hemphill was aligned with figures publicly calling for the breakup of Facebook, and that made “neutral evaluation of his opinions more difficult” in a case with little direct evidence of monopoly harms.

“We are deeply disappointed in this decision,” Joe Simonson, the FTC’s director of public affairs, told CNBC. “The deck was always stacked against us with Judge Boasberg, who is currently facing articles of impeachment. We are reviewing all our options.”

For Meta, the win ends years of FTC fights intended to break up the company’s family of apps: Facebook, Instagram, and WhatsApp.

“The Court’s decision today recognizes that Meta faces fierce competition,” Jennifer Newstead, Meta’s chief legal officer, said. “Our products are beneficial for people and businesses and exemplify American innovation and economic growth. We look forward to continuing to partner with the Administration and to invest in America.”

Reels’ popularity helped save Meta

Meta app users clicking on Reels helped Meta win.

Boasberg noted that “a majority of Americans’ time” on both Facebook and Instagram “is now spent watching videos,” with Reels becoming “the single most-used part of Facebook.” That puts Meta apps more on par with entertainment apps like TikTok and YouTube, the judge said.

While “connecting with friends remains an important part of both apps,” the judge cited Meta’s evidence showing that it had to pump more recommended content from strangers into users’ feeds to offset a trend in which users grew increasingly less inclined to post publicly.

“Both scrolling and sharing have transformed” since Facebook was founded, Boasberg wrote, citing six factors that he concluded invalidated the FTC’s market definition as markets exist today.

The first factors that shifted the market stemmed from leaps in innovation. “First, smartphone usage exploded,” Boasberg explained, then “cell phone data got better,” which made it easier to watch videos without frustrating “freezing and buffering.” Soon after, content recommendation systems improved, with “advanced AI algorithms” helping users “find engaging videos about the things” they “care most about in the world.”

Other factors stemmed from social changes, the judge suggested, describing the fourth factor as a trend where Meta app users started feeling “increasingly bored by their friends’ posts.”

“Longtime users’ friend lists” start fresh, but over time, they “become an often-outdated archive of people they once knew: a casual friend from college, a long-ago friend from summer camp, some guy they met at a party once,” Boasberg wrote. “Posts from friends have therefore grown less interesting.”

Then came TikTok, the fifth factor, Boasberg said, which forced Meta to “evolve” Facebook and Instagram by adding Reels.

And finally, “those five changes both caused and were reinforced by a change in social norms, which evolved to discourage public posting,” Boasberg wrote. “People have increasingly become less interested in blasting out public posts that hundreds of others can see.”

As a result of these tech advancements and social trends, Boasberg said, “Facebook, Instagram, TikTok, and YouTube have thus evolved to have nearly identical main features.” That reality undermined the FTC’s claims that users preferred Facebook and Instagram before Meta shifted its focus away from friends-and-family content.

“The Court simply does not find it credible that users would prefer the Facebook and Instagram apps that existed ten years ago to the versions that exist today,” Boasberg wrote.

Meta apps have not deteriorated, judge ruled

Boasberg repeatedly emphasized that the FTC failed to prove that Meta has a monopoly “now,” either actively or imminently causing harms.

The FTC tried to win by claiming that “Meta has degraded its apps’ quality by increasing their ad load, that falling user sentiment shows that the apps have deteriorated and that Meta has sabotaged its apps by underinvesting in friend sharing,” Boasberg noted.

But, Boasberg said, the FTC failed to show that Meta’s app quality has diminished—a trend that Cory Doctorow dubbed “enshittification,” which Meta apparently successfully argued is not real.

The judge was also swayed by Meta’s arguments that users like seeing ads. Meta showed evidence that it can only profitably increase its ad load when ad quality improves; otherwise, it risks losing engagement. Because “the rate at which users buy something or subscribe to a service based on Meta’s ads has steadily risen,” this suggested “that the ads have gotten more and more likely to connect users to products in which they have an interest,” Boasberg said.

Additionally, surveys of Meta app users that show declining user sentiment are not evidence that its apps are deteriorating in quality, Boasberg said, but are more about “brand reputation.”

“That is unsurprising: ask people how they feel about, say, Exxon Mobil, and their answers will tell you very little about how good its oil is,” Boasberg wrote. “The FTC’s claim that worsening sentiment shows a worsening product is unpersuasive.”

Finally, the FTC’s claim that Meta underinvested in friends-and-family content, to the detriment of its core app users, “makes no sense,” Boasberg wrote, given Meta’s data showing that user posting declined.

“While it is true that users see less content from their friends these days, that is largely due to the friends themselves: people simply post less,” Boasberg wrote. “Users are not seeing less friend content because Meta is hiding it from them, but instead because there is less friend content for Meta to show.”

It’s not even “clear that users want more friend posts,” the judge noted, agreeing with Meta that “instead, what users really seem to want is Reels.”

Further, if Meta were a monopolist, Boasberg seemed to suggest that the platform might be more invested in forcing friends-and-family content than Reels, since “Reels earns Meta less money” due to its smaller ad load.

“Courts presume that sophisticated corporations act rationally,” Boasberg wrote. “Here, the FTC has not offered even an ordinarily persuasive case that Meta is making the economically irrational choice to underinvest in its most lucrative offerings. It certainly has not made a particularly persuasive one.”

Among the critics unhappy with the ruling is Nidhi Hegde, executive director of the American Economic Liberties Project, who suggested that Boasberg’s ruling was “a colossally wrong decision” that “turns a willful blind eye to Meta’s enormous power over social media and the harms that flow from it.”

“Judge Boasberg has purposefully ignored the overwhelming evidence of how Meta became a monopoly—not by building a better product, but by buying its rivals to shut down any real competitors before they could grow,” Hegde said. “These deals let Meta fuse Facebook, Instagram, and WhatsApp into one machine that poisons our children and discourse, bullies publishers and advertisers, and destroys the possibility of healthy online connections with friends and family. By pretending that TikTok’s rise wipes away over a decade of illegal conduct, this court has effectively told every aspiring monopolist that our current justice system is on their side.”

On the other side, industry groups cheered the ruling. Matt Schruers, president of the Computer & Communications Industry Association, suggested that Boasberg concluded “what every Internet user knows—that Meta competes with a number of platforms and the company’s relevant market shares are therefore nowhere close to those required to establish monopoly power.”


Bombshell report exposes how Meta relied on scam ad profits to fund AI


“High risk” versus “high value”

Meta goosed its revenue by targeting users likely to click on scam ads, docs show.

Internal documents reveal that Meta projected it would earn billions from ignoring scam ads, which its platforms then targeted to the users most likely to click on them.

In a lengthy report, Reuters exposed five years of Meta practices and failures that allowed scammers to take advantage of users of Facebook, Instagram, and WhatsApp.

Documents showed that internally, Meta was hesitant to abruptly remove accounts, even those considered some of the “scammiest scammers,” out of concern that a drop in revenue could diminish resources needed for artificial intelligence growth.

Instead of promptly removing bad actors, Meta allowed “high value accounts” to “accrue more than 500 strikes without Meta shutting them down,” Reuters reported. The more strikes a bad actor accrued, the more Meta could charge to run ads, as Meta’s documents showed the company “penalized” scammers by charging higher ad rates. Meanwhile, Meta acknowledged in documents that its systems helped scammers target users most likely to click on their ads.

“Users who click on scam ads are likely to see more of them because of Meta’s ad-personalization system, which tries to deliver ads based on a user’s interests,” Reuters reported.
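
Reuters’ description amounts to a feedback loop: clicks are read as interest, and interest draws more of the same ads. The toy simulation below illustrates that dynamic; the categories, click rate, and boost factor are illustrative assumptions, not details of Meta’s actual ad system.

```python
import random

# Toy model of a click-driven personalization loop (illustrative only, not
# Meta's actual system). Clicking an ad category raises its weight, so
# future ad selections skew toward whatever the user clicked before.
CATEGORIES = ["retail", "news", "games", "scam"]

def pick_ad(weights):
    # Sample one ad category in proportion to current interest weights.
    return random.choices(CATEGORIES, weights=[weights[c] for c in CATEGORIES])[0]

def simulate(rounds=1000, click_rate=0.3, boost=1.5):
    weights = {c: 1.0 for c in CATEGORIES}  # everyone starts neutral
    shown = {c: 0 for c in CATEGORIES}
    for _ in range(rounds):
        ad = pick_ad(weights)
        shown[ad] += 1
        if random.random() < click_rate:
            weights[ad] *= boost  # a click is rewarded with more of the same
    return shown

if __name__ == "__main__":
    # A user whose early random clicks land on scam ads ends up seeing
    # disproportionately many of them: a rich-get-richer dynamic.
    print(simulate())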

Internally, Meta estimated that users across its apps collectively encounter 15 billion “high risk” scam ads a day. That’s on top of the 22 billion organic scam attempts that Meta users are exposed to daily, a 2024 document showed. Last year, the company projected that about $16 billion, or roughly 10 percent of its revenue, would come from scam ads.
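
Those two figures hang together arithmetically. A quick check against Meta’s publicly reported 2024 revenue of about $164.5 billion (a number from Meta’s earnings reports, not the internal documents Reuters describes) puts the $16 billion projection right around the stated 10 percent:

```python
# Sanity-check the "about 10 percent of revenue" claim against Meta's
# publicly reported 2024 revenue (~$164.5 billion; this figure comes from
# Meta's earnings reports, not the internal documents Reuters describes).
projected_scam_ad_revenue = 16e9   # internal projection, per Reuters
total_2024_revenue = 164.5e9       # Meta's reported 2024 revenue

print(f"{projected_scam_ad_revenue / total_2024_revenue:.1%}")  # ~9.7%
```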

“High risk” scam ads strive to sell users on fake products or investment schemes, Reuters noted. Common scams in this category include ads selling banned medical products or promoting sketchy entities, such as links to illegal online casinos. However, Meta is most concerned about “imposter” ads, which impersonate celebrities or big brands that Meta fears may halt advertising or engagement on its apps if such scams aren’t quickly stopped.

“Hey it’s me,” one scam advertisement using Elon Musk’s photo read. “I have a gift for you text me.” Another using Donald Trump’s photo claimed the US president was offering $710 to every American as “tariff relief.” Perhaps most depressingly, a third posed as a real law firm, offering advice on how to avoid falling victim to online scams.

Meta removed these particular ads after Reuters flagged them, but in 2024, Meta earned about $7 billion from “high risk” ads like these alone, Reuters reported.

Sandeep Abraham, a former Meta safety investigator who now runs consultancy firm Risky Business Solutions as a fraud examiner, told Reuters that regulators should intervene.

“If regulators wouldn’t tolerate banks profiting from fraud, they shouldn’t tolerate it in tech,” Abraham said.

Meta won’t disclose how much it made off scam ads

Meta spokesperson Andy Stone told Reuters that its collection of documents—which were created between 2021 and 2025 by Meta’s finance, lobbying, engineering, and safety divisions—“present a selective view that distorts Meta’s approach to fraud and scams.”

Stone claimed that Meta’s estimate that it would earn 10 percent of its 2024 revenue from scam ads was “rough and overly-inclusive.” He suggested the actual amount Meta earned was much lower but declined to specify the true amount. He also said that Meta’s most recent investor disclosures note that scam ads “adversely affect” Meta’s revenue.

“We aggressively fight fraud and scams because people on our platforms don’t want this content, legitimate advertisers don’t want it, and we don’t want it either,” Stone said.

Despite those efforts, this spring, Meta’s safety team “estimated that the company’s platforms were involved in a third of all successful scams in the US,” Reuters reported. In other internal documents around the same time, Meta staff concluded that “it is easier to advertise scams on Meta platforms than Google,” acknowledging that Meta’s rivals were better at “weeding out fraud.”

As Meta tells it, though seemingly dismal, these documents came amid vast improvements in its fraud protections. “Over the past 18 months, we have reduced user reports of scam ads globally by 58 percent and, so far in 2025, we’ve removed more than 134 million pieces of scam ad content,” Stone told Reuters.

According to Reuters, the problem may be the pace Meta sets in combating scammers. In 2023, Meta laid off “everyone who worked on the team handling advertiser concerns about brand-rights issues,” then ordered safety staffers to limit their use of computing resources so that more could be devoted to virtual reality and AI. A 2024 document showed Meta recommending a “moderate” approach to enforcement, planning to reduce revenue “attributable to scams, illegal gambling and prohibited goods” by 1–3 percentage points each year starting in 2024, which would supposedly slash it in half by 2027. More recently, a 2025 document showed Meta continuing to weigh how “abrupt reductions of scam advertising revenue could affect its business projections.”
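
The halving claim squares with those numbers under a simple assumption: that the percentage-point cuts apply to the roughly 10 percent revenue share projected in the 2024 document (pairing those two figures is our reading, not something the documents spell out). A quick check:

```python
# Check how annual cuts of 1-3 percentage points square with halving scam
# revenue share by 2027. Assumes the cuts apply to the ~10% of revenue
# baseline from the 2024 projection (our assumption, not the documents').
baseline_share = 10.0  # percent of revenue attributable to scams, per 2024 docs

for cut_per_year in (1.0, 2.0, 3.0):
    share_2027 = baseline_share - cut_per_year * 3  # cuts over 2025-2027
    print(f"{cut_per_year:.0f} pts/yr -> {share_2027:.0f}% of revenue by 2027")

# Output spans 7% down to 1%; halving the baseline (to 5%) sits within
# that range, at roughly 1.7 points per year.
```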

Eventually, Meta “substantially expanded” its teams that track scam ads, Stone told Reuters. But Meta also took steps to ensure revenue didn’t take too hard a hit while the company needed vast resources—$72 billion—to invest in AI, Reuters reported.

For example, in February, Meta told “the team responsible for vetting questionable advertisers” that they weren’t “allowed to take actions that could cost Meta more than 0.15 percent of the company’s total revenue,” Reuters reported. That threshold works out to about $135 million, Reuters noted. Stone pushed back, saying that the team was never given “a hard limit” on what the manager described as “specific revenue guardrails.”

“Let’s be cautious,” the team’s manager wrote, warning that Meta didn’t want to lose revenue by blocking “benign” ads mistakenly swept up in enforcement.

Meta should donate scam ad profits, ex-exec says

Documents showed that Meta prioritized taking action when it risked regulatory fines, although revenue from scam ads was worth roughly three times the highest fines it could face. Meta may have feared most that officials would require disgorgement of ill-gotten gains, rather than impose fines.

Meta appeared less likely to ramp up enforcement in response to police requests. Documents showed that police in Singapore flagged “146 examples of scams targeting that country’s users last fall,” Reuters reported. Only 23 percent violated Meta’s policies, while the rest merely “violate the spirit of the policy, but not the letter,” a Meta presentation said.

Scams that Meta failed to flag offered promotions like crypto scams, fake concert tickets, or deals “too good to be true,” like 80 percent off a desirable item from a high-fashion brand. Meta also looked past fake job ads that claimed to be hiring for Big Tech companies.

Rob Leathern previously led Meta’s business integrity unit, which worked to prevent scam ads, before leaving in 2020. He told Wired that it’s hard to “know how bad it’s gotten or what the current state is” since Meta and other social media platforms don’t give outside researchers access to large random samples of ads.

With such access, researchers like Leathern and Rob Goldman, Meta’s former vice president of ads, could provide “scorecards” showing how well different platforms work to combat scams. Together, Leathern and Goldman launched a nonprofit called CollectiveMetrics.org in hopes of “bringing more transparency to digital advertising in order to fight deceptive ads,” Wired reported.

“I want there to be more transparency. I want third parties, researchers, academics, nonprofits, whoever, to be able to actually assess how good of a job these platforms are doing at stopping scams and fraud,” Leathern told Wired. “We’d like to move to actual measurement of the problem and help foster an understanding.”

Another meaningful step that Leathern thinks companies like Meta should take to protect users would be to notify users when Meta discovers that they clicked on a scam ad—rather than targeting them with more scam ads, as Reuters suggested was Meta’s practice.

“These scammers aren’t getting people’s money on day one, typically. So there’s a window to take action,” he said, recommending that platforms donate ill-gotten gains from running scam ads to “fund nonprofits to educate people about how to recognize these kinds of scams or problems.”

“There’s lots that could be done with funds that come from these bad guys,” Leathern said.


EU accuses Meta of violating content rules in move that could anger Trump

FTC Chairman Andrew Ferguson recently warned Meta and a dozen social media and technology companies that “censoring Americans to comply with a foreign power’s laws, demands, or expected demands” may violate US law. Ferguson’s letters said the EU’s Digital Services Act and other laws “incentivize tech companies to censor worldwide speech.”

Meta told media outlets that “we disagree with any suggestion that we have breached the DSA, and we continue to negotiate with the European Commission on these matters.” Meta also said it made changes to comply with the DSA.

“In the European Union, we have introduced changes to our content reporting options, appeals process, and data access tools since the DSA came into force and are confident that these solutions match what is required under the law in the EU,” Meta said.

TikTok, Meta accused of restricting data access

The EC also said it preliminarily found that both Meta and TikTok violated their DSA obligation to grant researchers adequate access to public data.

“The Commission’s preliminary findings show that Facebook, Instagram and TikTok may have put in place burdensome procedures and tools for researchers to request access to public data. This often leaves them with partial or unreliable data, impacting their ability to conduct research, such as whether users, including minors, are exposed to illegal or harmful content,” the announcement said.

The data-access requirement “is an essential transparency obligation under the DSA, as it provides public scrutiny into the potential impact of platforms on our physical and mental health,” the EC said.

In a statement provided to Ars, TikTok said it is committed to transparency and has made data available to nearly 1,000 research teams. TikTok said it may be impossible to comply with both the DSA and the General Data Protection Regulation (GDPR).

“We are reviewing the European Commission’s findings, but requirements to ease data safeguards place the DSA and GDPR in direct tension. If it is not possible to fully comply with both, we urge regulators to provide clarity on how these obligations should be reconciled,” TikTok said.


Trump admin pressured Facebook into removing ICE-tracking group

Trump slammed Biden for social media “censorship”

Trump and Republicans repeatedly criticized the Biden administration for pressuring social media companies into removing content. In a day-one executive order declaring an end to “federal censorship,” Trump said, “the previous administration trampled free speech rights by censoring Americans’ speech on online platforms, often by exerting substantial coercive pressure on third parties, such as social media companies, to moderate, deplatform, or otherwise suppress speech that the Federal Government did not approve.”

Sen. Ted Cruz (R-Texas) last week held a hearing on his allegation that under Biden, the US government “infringed on the First Amendment by pressuring social media companies to censor Americans that held views different than the Biden administration.” Cruz called the tactic of pressuring social media companies part of the “left-wing playbook,” and said he wants Congress to pass a law “to stop government jawboning and safeguard every American’s right to free speech.”

Shortly before Trump’s January 2025 inauguration, Meta announced it would end the third-party fact-checking program it had introduced in 2016. “Governments and legacy media have pushed to censor more and more. A lot of this is clearly political,” Meta CEO Mark Zuckerberg said at the time. Zuckerberg called the election “a cultural tipping point toward once again prioritizing speech.”

In addition to pressuring Facebook, the Trump administration demanded that Apple remove the ICEBlock app from its App Store. Apple responded by removing the app, which let iPhone users report the locations of Immigration and Customs Enforcement officers. Google removed similar Android apps from the Play Store.

Chicago is a primary target of Trump’s immigration crackdown. The Department of Homeland Security says it launched Operation Midway Blitz in early September to find “criminal illegal aliens who flocked to Chicago and Illinois seeking protection under the sanctuary policies of Governor Pritzker.”

People seeking to avoid ICE officers have used technology to obtain crowdsourced information on the location of agents. While crowdsourced information can vary widely in accuracy, a group called the Illinois Coalition for Immigrant & Refugee Rights says it works to verify reports of ICE sightings and sends text alerts to local residents only when ICE activity is verified.

Last month, an ICE agent shot and killed a man named Silverio Villegas Gonzalez in a Chicago suburb. The Department of Homeland Security alleged that Villegas Gonzalez was “a criminal illegal alien with a history of reckless driving,” and that he “drove his car at law enforcement officers.” The Chicago Tribune said it “found no criminal history for Villegas Gonzalez, who had been living in the Chicago area for the past 18 years.”


Meta won’t allow users to opt out of targeted ads based on AI chats

Facebook, Instagram, and WhatsApp users may want to be extra careful while using Meta AI, as Meta has announced that it will soon be using AI interactions to personalize content and ad recommendations without giving users a way to opt out.

Meta plans to notify users on October 7 that their AI interactions will influence recommendations beginning on December 16. However, it may not be immediately obvious to all users that their AI interactions will be used in this way.

The company’s blog noted that the initial notification users will see only says, “Learn how Meta will use your info in new ways to personalize your experience.” Users will have to click through to understand that the changes specifically apply to Meta AI, with a second screen explaining, “We’ll start using your interactions with AIs to personalize your experience.”

Ars asked Meta why the initial notification doesn’t directly mention AI, and Meta spokesperson Emil Vazquez said he “would disagree with the idea that we are obscuring this update in any way.”

“We’re sending notifications and emails to people about this change,” Vazquez said. “As soon as someone clicks on the notification, it’s immediately apparent that this is an AI update.”

In its blog post, Meta noted that “more than 1 billion people use Meta AI every month,” stating that its goal is to improve how Meta AI works in order to fuel better experiences across Meta’s apps. Sensitive “conversations with Meta AI about topics such as their religious views, sexual orientation, political views, health, racial or ethnic origin, philosophical beliefs, or trade union membership” will not be used to target ads, Meta confirmed.

“You’re in control,” Meta’s blog said, reiterating that users can “choose” how they “interact with AIs,” unlink accounts on different apps to limit AI tracking, or adjust ad and content settings at any time. But once the tracking starts on December 16, users will not have the option to opt out of targeted ads based on AI chats, Vazquez confirmed, emphasizing to Ars that “there isn’t an opt out for this feature.”


Zuckerberg’s AI hires disrupt Meta with swift exits and threats to leave


Longtime acolytes are sidelined as CEO directs biggest leadership reorganization in two decades.

Meta CEO Mark Zuckerberg during the Meta Connect event in Menlo Park, California on September 25, 2024.  Credit: Getty Images | Bloomberg

Within days of joining Meta, Shengjia Zhao, co-creator of OpenAI’s ChatGPT, had threatened to quit and return to his former employer, in a blow to Mark Zuckerberg’s multibillion-dollar push to build “personal superintelligence.”

Zhao went as far as to sign employment paperwork to go back to OpenAI. Shortly afterwards, according to four people familiar with the matter, he was given the title of Meta’s new “chief AI scientist.”

The incident underscores Zuckerberg’s turbulent effort to direct the most dramatic reorganisation of Meta’s senior leadership in the group’s 20-year history.

One of the few remaining Big Tech founder-CEOs, Zuckerberg has relied on longtime acolytes such as Chief Product Officer Chris Cox to head up his favored departments and build out his upper ranks.

But in the battle to dominate AI, the billionaire is shifting towards a new and recently hired generation of executives, including Zhao, former Scale AI CEO Alexandr Wang, and former GitHub chief Nat Friedman.

Current staff are adapting to the reinvention of Meta’s AI efforts as the newcomers seek to flex their power while adjusting to the idiosyncrasies of working within a sprawling $1.95 trillion giant with a hands-on chief executive.

“There’s a lot of big men on campus,” said one investor who is close with some of Meta’s new AI leaders.

Adding to the tumult, a handful of new AI staff have already decided to leave after brief tenures, according to people familiar with the matter.

This includes Ethan Knight, a machine-learning scientist who joined the company weeks ago. Another, Avi Verma, a former OpenAI researcher, went through Meta’s onboarding process but never showed up for his first day, according to a person familiar with the matter.

In a tweet on X on Wednesday, Rishabh Agarwal, a research scientist who started at Meta in April, announced his departure. He said that while Zuckerberg and Wang’s pitch was “incredibly compelling,” he “felt the pull to take on a different kind of risk,” without giving more detail.

Meanwhile, Chaya Nayak and Loredana Crisan, generative AI staffers who had worked at Meta for nine and 10 years respectively, are among the more than half a dozen veteran employees to announce they are leaving in recent days. Wired first reported some details of recent exits, including Zhao’s threatened departure.

Meta said: “We appreciate that there’s outsized interest in seemingly every minute detail of our AI efforts, no matter how inconsequential or mundane, but we’re just focused on doing the work to deliver personal superintelligence.”

A spokesperson said Zhao had been scientific lead of the Meta superintelligence effort from the outset, and the company had waited until the team was in place before formalising his chief scientist title.

“Some attrition is normal for any organisation of this size. Most of these employees had been with the company for years, and we wish them the best,” they added.

Over the summer, Zuckerberg went on a hiring spree, coaxing AI researchers from competitors such as OpenAI and Apple with the promise of nine-figure sign-on bonuses and access to vast computing resources in a bid to catch up with rival labs.

This month, Meta announced it was restructuring its AI group—recently renamed Meta Superintelligence Lab (MSL)—into four distinct teams. It is the fourth overhaul of its AI efforts in six months.

“One more reorg and everything will be fixed,” joked Meta research scientist Mimansa Jaiswal on X last week. “Just one more.”

Overseeing all of Meta’s AI efforts is Wang, a well-connected and commercially minded Silicon Valley entrepreneur, who was poached by Zuckerberg as part of a $14 billion investment in his Scale data labeling group.

The 28-year-old is heading Zuckerberg’s most secretive new department known as “TBD”—shorthand for “to be determined”—which is filled with marquee hires.

In one of the new team’s first moves, Meta is no longer actively working on releasing its flagship Llama Behemoth model to the public, after it failed to perform as hoped, according to people familiar with the matter. Instead, TBD is focused on building newer cutting-edge models.

Multiple company insiders describe Zuckerberg as deeply invested and involved in the TBD team, while others criticize him for “micromanaging.”

Wang and Zuckerberg have struggled to align on a timeline to achieve the chief executive’s goal of reaching superintelligence, or AI that surpasses human capabilities, according to another person familiar with the matter. The person said Zuckerberg has urged the team to move faster.

Meta said this allegation was “manufactured tension without basis in fact that’s clearly being pushed by dramatic, navel-gazing busybodies.”

Wang’s leadership style has chafed some, according to people familiar with the matter, who noted he has no previous experience managing teams across a Big Tech corporation.

One former insider said some new AI recruits have felt frustrated by the company’s bureaucracy and internal competition for resources that they were promised, such as access to computing power.

“While TBD Labs is still relatively new, we believe it has the greatest compute-per-researcher in the industry, and that will only increase,” Meta said.

Wang and other former Scale staffers have also struggled with some of Meta’s idiosyncratic ways of working, according to someone familiar with his thinking—for example, adjusting to the absence of the revenue goals they once had as a startup.

Despite teething problems, some have celebrated the leadership shift, including the appointment of popular entrepreneur and venture capitalist Friedman as head of Products and Applied Research, the team tasked with integrating the models into Meta’s own apps.

The hiring of Zhao, a top technical expert, has also been regarded as a coup by some at Meta and in the industry, who feel he has the decisiveness to propel the company’s AI development.

The shake-up has partially sidelined other Meta leaders. Yann LeCun, Meta’s chief AI scientist, has remained in the role but now reports to Wang.

Ahmad Al-Dahle, who led Meta’s Llama and generative AI efforts earlier in the year, has not been named as head of any teams. Cox remains chief product officer, but Wang reports directly to Zuckerberg—cutting Cox out of overseeing generative AI, an area previously under his purview.

Meta said that Cox “remains heavily involved” in its broader AI efforts, including overseeing its recommendation systems.

Going forward, Meta is weighing potential cuts to the AI team, one person said. In a memo shared with managers last week, seen by the Financial Times, Meta said that it was “temporarily pausing hiring across all [Meta Superintelligence Labs] teams, with the exception of business critical roles.”

Wang’s staff would evaluate requested hires on a case-by-case basis, but the freeze “will allow leadership to thoughtfully plan our 2026 headcount growth as we work through our strategy,” the memo said.

© 2025 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.

Zuckerberg’s AI hires disrupt Meta with swift exits and threats to leave Read More »

meta-backtracks-on-rules-letting-chatbots-be-creepy-to-kids

Meta backtracks on rules letting chatbots be creepy to kids


“Your youthful form is a work of art”

Meta drops AI rules letting chatbots generate innuendo and profess love to kids.

After what was arguably Meta’s biggest purge of child predators from Facebook and Instagram earlier this summer, the company now faces backlash over revelations that its own chatbots were allowed to creep on kids.

After reviewing an internal document that Meta verified as authentic, Reuters revealed that Meta, by design, allowed its chatbots to engage kids in “sensual” chat. Spanning more than 200 pages, the document, titled “GenAI: Content Risk Standards,” dictates what Meta AI and its chatbots can and cannot do.

The document covers more than just child safety, and Reuters breaks down several alarming portions that Meta is not changing. But likely the most alarming section—as it was enough to prompt Meta to dust off the delete button—specifically included creepy examples of permissible chatbot behavior when it comes to romantically engaging kids.

Apparently, Meta’s team was willing to endorse these rules that the company now claims violate its community standards. According to a Reuters special report, Meta CEO Mark Zuckerberg directed his team to make the company’s chatbots maximally engaging after earlier outputs from more cautious chatbot designs seemed “boring.”

Although Meta is not commenting on Zuckerberg’s role in guiding the AI rules, that pressure seemingly pushed Meta employees up to a line that the company is now rushing to step back from.

“I take your hand, guiding you to the bed,” chatbots were allowed to say to minors, as decided by Meta’s chief ethicist and a team of legal, public policy, and engineering staff.

There were some obvious safeguards built in. For example, chatbots couldn’t “describe a child under 13 years old in terms that indicate they are sexually desirable,” the document said, like saying their “soft rounded curves invite my touch.”

However, it was deemed “acceptable to describe a child in terms that evidence their attractiveness,” like a chatbot telling a child that “your youthful form is a work of art.” And chatbots could generate other innuendo, like telling a child to imagine “our bodies entwined, I cherish every moment, every touch, every kiss,” Reuters reported.

Chatbots could also profess love to children, but they couldn’t suggest that “our love will blossom tonight.”

Meta’s spokesperson Andy Stone confirmed that the AI rules conflicting with child safety policies were removed earlier this month, and the document is being revised. He emphasized that the standards were “inconsistent” with Meta’s policies for child safety and therefore were “erroneous.”

“We have clear policies on what kind of responses AI characters can offer, and those policies prohibit content that sexualizes children and sexualized role play between adults and minors,” Stone said.

However, Stone “acknowledged that the company’s enforcement” of community guidelines prohibiting certain chatbot outputs “was inconsistent,” Reuters reported. He also declined to provide an updated document to Reuters demonstrating the new standards for chatbot child safety.

Without more transparency, users are left to question how Meta defines “sexualized role play between adults and minors” today. Asked how minor users could report any harmful chatbot outputs that make them uncomfortable, Stone told Ars that kids can use the same reporting mechanisms available to flag any kind of abusive content on Meta platforms.

“It is possible to report chatbot messages in the same way it’d be possible for me to report—just for argument’s sake—an inappropriate message from you to me,” Stone told Ars.

Kids unlikely to report creepy chatbots

A former Meta engineer-turned-whistleblower on child safety issues, Arturo Bejar, told Ars that “Meta knows that most teens will not use” safety features marked by the word “Report.”

So it seems unlikely that kids using Meta AI will navigate to find Meta support systems to “report” abusive AI outputs. Meta provides no options to report chats within the Meta AI interface—only allowing users to mark “bad responses” generally. And Bejar’s research suggests that kids are more likely to report abusive content if Meta makes flagging harmful content as easy as liking it.

Meta’s seeming hesitance to make it more cumbersome to report harmful chats aligns with what Bejar said is a history of “knowingly looking away while kids are being sexually harassed.”

“When you look at their design choices, they show that they do not want to know when something bad happens to a teenager on Meta products,” Bejar said.

Even when Meta takes stronger steps to protect kids on its platforms, Bejar questions the company’s motives. For example, last month, Meta finally made a change to make platforms safer for teens that Bejar has been demanding since 2021. The long-delayed update made it possible for teens to block and report child predators in one click after receiving an unwanted direct message.

In its announcement, Meta confirmed that teens quickly began blocking and reporting unwanted messages that they previously may have only blocked—a gap that had likely made it harder for Meta to identify predators. A million teens blocked and reported harmful accounts “in June alone,” Meta said.

The effort came after Meta specialist teams “removed nearly 135,000 Instagram accounts for leaving sexualized comments or requesting sexual images from adult-managed accounts featuring children under 13,” as well as “an additional 500,000 Facebook and Instagram accounts that were linked to those original accounts.” But to Bejar, the numbers mainly signal how much harassment was overlooked before the update.

“How are we [as] parents to trust a company that took four years to do this much?” Bejar said. “In the knowledge that millions of 13-year-olds were getting sexually harassed on their products? What does this say about their priorities?”

Bejar said the “key problem” with Meta’s latest safety feature for kids “is that the reporting tool is just not designed for teens,” who likely view “the categories and language” Meta uses as “confusing.”

“Each step of the way, a teen is told that if the content doesn’t violate” Meta’s community standards, “they won’t do anything,” so even if reporting is easy, research shows kids are deterred from reporting.

Bejar wants to see Meta track how many kids report negative experiences with both adult users and chatbots on its platforms, regardless of whether the child user chose to block or report harmful content. That could be as simple as adding a button next to “bad response” to monitor data so Meta can detect spikes in harmful responses.

While Meta is finally taking more action to remove harmful adult users, Bejar warned that advances from chatbots could come across as just as disturbing to young users.

“Put yourself in the position of a teen who got sexually spooked by a chat and then try and report. Which category would you use?” Bejar asked.

Consider that Meta’s Help Center encourages users to report bullying and harassment, which may be one way a young user labels harmful chatbot outputs. Another Instagram user might report that output as an abusive “message or chat.” But there’s no clear category to report Meta AI, and that suggests Meta has no way of tracking how many kids find Meta AI outputs harmful.

Recent reports have shown that even adults can struggle with emotional dependence on a chatbot, which can blur the lines between the online world and reality. Reuters’ special report also documented a 76-year-old man’s accidental death after falling in love with a chatbot, showing how elderly users could be vulnerable to Meta’s romantic chatbots, too.

In particular, lawsuits have alleged that child users with developmental disabilities and mental health issues have formed unhealthy attachments to chatbots that have influenced the children to become violent, begin self-harming, or, in one disturbing case, die by suicide.

Scrutiny will likely remain on chatbot makers as child safety advocates generally push all platforms to take more accountability for the content kids can access online.

Meta’s child safety updates in July came after several state attorneys general accused Meta of “implementing addictive features across its family of apps that have detrimental effects on children’s mental health,” CNBC reported. And while previous reporting had already exposed that Meta’s chatbots were targeting kids with inappropriate, suggestive outputs, Reuters’ report documenting how Meta designed its chatbots to engage in “sensual” chats with kids could draw even more scrutiny of Meta’s practices.

Meta is “still not transparent about the likelihood our kids will experience harm,” Bejar said. “The measure of safety should not be the number of tools or accounts deleted; it should be the number of kids experiencing a harm. It’s very simple.”

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

Meta backtracks on rules letting chatbots be creepy to kids Read More »

meta’s-“ai-superintelligence”-effort-sounds-just-like-its-failed-“metaverse”

Meta’s “AI superintelligence” effort sounds just like its failed “metaverse”


Zuckerberg and company talked up another supposed tech revolution four short years ago.

Artist’s conception of Mark Zuckerberg looking into our glorious AI-powered future. Credit: Facebook

In a memo to employees earlier this week, Meta CEO Mark Zuckerberg shared a vision for a near-future in which “personal [AI] superintelligence for everyone” forms “the beginning of a new era for humanity.” The newly formed Meta Superintelligence Labs—freshly staffed with multiple high-level acquisitions from OpenAI and other AI companies—will spearhead the development of “our next generation of models to get to the frontier in the next year or so,” Zuckerberg wrote.

Reading that memo, I couldn’t help but think of another “vision for the future” Zuckerberg shared not that long ago. At his 2021 Facebook Connect keynote, Zuckerberg laid out his plan for the metaverse, a virtual place where “you’re gonna be able to do almost anything you can imagine” and which would form the basis of “the next version of the Internet.”

“The future of the Internet” of the recent past. Credit: Meta

Zuckerberg believed in that vision so much at the time that he abandoned the well-known Facebook corporate brand in favor of the new name “Meta.” “I’m going to keep pushing and giving everything I’ve got to make this happen now,” Zuckerberg said at the time. Less than four years later, Zuckerberg seems to now be “giving everything [he’s] got” for a vision of AI “superintelligence,” reportedly offering pay packages of up to $300 million over four years to attract top talent from other AI companies (Meta has since denied those reports, saying, “The size and structure of these compensation packages have been misrepresented all over the place”).

Once again, Zuckerberg is promising that this new technology will revolutionize our lives and replace the ways we currently socialize and work on the Internet. But the utter failure (so far) of those over-the-top promises for the metaverse has us more than a little skeptical of how impactful Zuckerberg’s vision of “personal superintelligence for everyone” will truly be.

Meta-vision

Looking back at Zuckerberg’s 2021 Facebook Connect keynote shows just how hard the company was selling the promise of the metaverse at the time. Zuckerberg said the metaverse would represent an “even more immersive and embodied Internet” where “everything we do online today—connecting socially, entertainment, games, work—is going to be more natural and vivid.”

Mark Zuckerberg lays out his vision for the metaverse in 2021.

“Teleporting around the metaverse is going to be like clicking a link on the Internet,” Zuckerberg promised, and metaverse users would probably switch between “a photorealistic avatar for work, a stylized one for hanging out, and maybe even a fantasy one for gaming.” This kind of personalization would lead to “hundreds of thousands” of artists being able to make a living selling virtual metaverse goods that could be embedded in virtual or real-world environments.

“Lots of things that are physical today, like screens, will just be able to be holograms in the future,” Zuckerberg promised. “You won’t need a physical TV; it’ll just be a one-dollar hologram from some high school kid halfway across the world… we’ll be able to express ourselves in new joyful, completely immersive ways, and that’s going to unlock a lot of amazing new experiences.”

A pre-rendered concept video showed metaverse users playing poker in a zero-gravity space station with robot avatars, then pausing briefly to appreciate some animated 3D art a friend had encountered on the street. Another video showed a young woman teleporting via metaverse avatar to virtually join a friend attending a live concert in Tokyo, then buying virtual merch from the concert at a metaverse afterparty from the comfort of her home. Yet another showed old men playing chess on a park bench, even though one of the players was sitting across the country.

Meta-failure

Fast forward to 2025, and the current reality of Zuckerberg’s metaverse efforts bears almost no resemblance to anything shown or discussed back in 2021. Even enthusiasts describe Meta’s Horizon Worlds as a “depressing” and “lonely” experience characterized by “completely empty” venues. And Meta engineers anonymously gripe about metaverse tools that even employees actively avoid using and a messy codebase that was treated like “a 3D version of a mobile app.”

Even Meta employees reportedly don’t want to work in Horizon Workrooms. Credit: Facebook

The creation of a $50 million creator fund seems to have failed to encourage peeved creators to give the metaverse another chance. Things look a bit better if you expand your view past Meta’s own metaverse sandbox; the chaotic world of VRChat attracts tens of thousands of daily users on Steam alone, for instance. Still, we’re a far cry from the replacement for the mobile Internet that Zuckerberg once trumpeted.

Then again, it’s possible that we just haven’t given Zuckerberg’s version of the metaverse enough time to develop. Back in 2021, he said that “a lot of this is going to be mainstream” within “the next five or 10 years.” That timeframe gives Meta at least a few more years to develop and release its long-teased, lightweight augmented reality glasses that the company showed off last year in the form of a prototype that reportedly still costs $10,000 per unit.

Zuckerberg shows off prototype AR glasses that could change the way we think about “the metaverse.” Credit: Bloomberg / Contributor | Bloomberg

Maybe those glasses will ignite widespread interest in the metaverse in a way that Meta’s bulky, niche VR goggles have utterly failed to. Regardless, after nearly four years and roughly $60 billion in VR-related losses, Meta thus far has surprisingly little to show for its massive investment in Zuckerberg’s metaverse vision.

Our AI future?

When I hear Zuckerberg talk about the promise of AI these days, it’s hard not to hear echoes of his monumental vision for the metaverse from 2021. If anything, Zuckerberg’s vision of our AI-powered future is even more grandiose than his view of the metaverse.

As with the metaverse, Zuckerberg now sees AI as a replacement for the current version of the Internet. “Do you think in five years we’re just going to be sitting in our feed and consuming media that’s just video?” Zuckerberg asked rhetorically in an April interview with Dwarkesh Patel. “No, it’s going to be interactive,” he continued, envisioning something like Instagram Reels, but “you can talk to it, or interact with it, and it talks back, or it changes what it’s doing. Or you can jump into it like a game and interact with it. That’s all going to be AI.”

Mark Zuckerberg talks about all the ways superhuman AI is going to change our lives in the near future.

As with the metaverse, Zuckerberg sees AI as revolutionizing the way we interact with each other. He envisions “always-on video chats with the AI” incorporating expressions and body language borrowed from the company’s work on the metaverse. And our relationships with AI models are “just going to get more intense as these AIs become more unique, more personable, more intelligent, more spontaneous, more funny, and so forth,” Zuckerberg said. “As the personalization loop kicks in and the AI starts to get to know you better and better, that will just be really compelling.”

Zuckerberg did allow that relationships with AI would “probably not” replace in-person connections, because there are “things that are better about physical connections when you can have them.” At the same time, he said, for the average American who has three friends, AI relationships can fill the “demand” for “something like 15 friends” without the effort of real-world socializing. “People just don’t have as much connection as they want,” Zuckerberg said. “They feel more alone a lot of the time than they would like.”


Why chat with real friends on Facebook when you can chat with AI avatars? Credit: Benj Edwards / Getty Images

Zuckerberg also sees AI leading to a flourishing of human productivity and creativity in a way even his wildest metaverse imaginings couldn’t match. Zuckerberg said that AI advancement could “lead toward a world of abundance where everyone has these superhuman tools to create whatever they want.” That means personal access to “a super powerful [virtual] software engineer” and AIs that are “solving diseases, advancing science, developing new technology that makes our lives better.”

That will also mean that some companies will be able to get by with fewer employees before too long, Zuckerberg said. In customer service, for instance, “as AI gets better, you’re going to get to a place where AI can handle a bunch of people’s issues,” he said. “Not all of them—maybe 10 years from now it can handle all of them—but thinking about a three- to five-year time horizon, it will be able to handle a bunch.”

In the longer term, Zuckerberg said, AIs will be integrated into our more casual pursuits as well. “If everyone has these superhuman tools to create a ton of different stuff, you’re going to get incredible diversity,” and “the amount of creativity that’s going to be unlocked is going to be massive,” he said. “I would guess the world is going to get a lot funnier, weirder, and quirkier, the way that memes on the Internet have gotten over the last 10 years.”

Compare and contrast

To be sure, there are some important differences between the past promise of the metaverse and the current promise of AI technology. Zuckerberg claims that a billion people use Meta’s AI products monthly, for instance, utterly dwarfing the highest estimates for regular use of “the metaverse” or augmented reality as a whole (even if many AI users seem to balk at paying for regular use of AI tools). Meta coders are also reportedly already using AI coding tools regularly in a way they never did with Meta’s metaverse tools. And people are already developing what they consider meaningful relationships with AI personas, whether that’s in the form of therapists or romantic partners.

Still, there are reasons to be skeptical about the future of AI when current models still routinely hallucinate basic facts, show fundamental issues when attempting reasoning, and struggle with basic tasks like beating a children’s video game. The path from where we are to a supposed “superhuman” AI is not simple or inevitable, despite the handwaving of industry boosters like Zuckerberg.

Artist’s conception of Carmack’s VR avatar waving goodbye to Meta.

At the 2021 rollout of Meta’s push to develop a metaverse, high-ranking Meta executives like John Carmack were at least up front about the technical and product-development barriers that could get in the way of Zuckerberg’s vision. “Everybody that wants to work on the metaverse talks about the limitless possibilities of it,” Carmack said at the time (before departing the company in late 2022). “But it’s not limitless. It is a challenge to fit things in, but you can make smarter decisions about exactly what is important and then really optimize the heck out of things.”

Today, those kinds of voices of internal skepticism seem in short supply as Meta sets itself up to push AI in the same way it once backed the metaverse. Don’t be surprised, though, if today’s promise that we’re at “the beginning of a new era for humanity” ages about as well as Meta’s former promises about a metaverse where “you’re gonna be able to do almost anything you can imagine.”

Kyle Orland has been the Senior Gaming Editor at Ars Technica since 2012, writing primarily about the business, tech, and culture behind video games. He has journalism and computer science degrees from the University of Maryland. He once wrote a whole book about Minesweeper.

Meta’s “AI superintelligence” effort sounds just like its failed “metaverse” Read More »

threat-of-meta-breakup-looms-as-ftc’s-monopoly-trial-ends

Threat of Meta breakup looms as FTC’s monopoly trial ends

“Meta is a proud American success story, and we look forward to continuing to innovate and serve the people and businesses who love our services,” Meta’s spokesperson said.

Experts aren’t so sure Meta has clinched it

Boasberg has said that the key question he must answer is whether the FTC’s market definition is too narrow.

Arguing against the market definition, Meta has said that connecting friends and family isn’t even Meta apps’ “core use” anymore, as an evolving competitive social media landscape has forced Meta to turn its newsfeeds into discovery engines to rival TikTok. Justin Teresi, an antitrust analyst, told Bloomberg that because the FTC failed to show that users primarily come to Meta apps to connect with friends and family, it may have strengthened Meta’s case.

Rebecca Allensworth, a Vanderbilt law professor and antitrust expert, told Bloomberg that the “FTC’s narrowly defined market was always the weakest part of its case,” but the government “has done a nice job of minimizing that weakness” by showing that apps that don’t connect friends and family aren’t adequate substitutes for Meta’s apps.

“This was evident when Meta saw spikes in usage on holidays,” Allensworth suggested, which is perhaps “a sign people were turning to its products to connect with loved ones.”

Teresi thinks Meta has a 60 percent shot at winning the trial, although he criticized Meta’s seeming defense that any company competing for online ad dollars competes with Meta. That argument may have broadened the market definition too much, he suggested.

“If you’re saying that the relevant market here is competing for advertising dollars, then you could throw anything in there,” Teresi said. “You could throw TV in there, you could throw print in there if you wanted to, and there’s really no end to that concept.”

Allensworth was less confident in Meta’s chances, telling Bloomberg, “I really actually think this could go either way.”

Threat of Meta breakup looms as FTC’s monopoly trial ends Read More »