
Meta wins monopoly trial, convinces judge that social networking is dead


People are “bored” by their friends’ content, judge ruled, siding with Meta.

Mark Zuckerberg arrives at court after the Federal Trade Commission alleged that the acquisitions of Instagram in 2012 and WhatsApp in 2014 gave Meta a social media monopoly. Credit: Bloomberg / Contributor | Bloomberg

After years of pushback from the Federal Trade Commission over Meta’s acquisitions of Instagram and WhatsApp, Meta has defeated the FTC’s monopoly claims.

In a Tuesday ruling, US District Judge James Boasberg said the FTC failed to show that Meta has a monopoly in a market dubbed “personal social networking.” In that narrowly defined market, the FTC unsuccessfully argued, Meta faces only two rivals, Snapchat and MeWe, which struggle to compete because of Meta’s alleged monopoly.

But the days of grouping apps into “separate markets of social networking and social media” are over, Boasberg wrote. He cited the Greek philosopher Heraclitus, who “posited that no man can ever step into the same river twice,” while telling the FTC it had missed its chance to block Meta’s purchases.

Essentially, Boasberg agreed with Meta that social media—as it was known in Facebook’s early days—is dead. And that means that Meta now competes with a broader set of rival apps, which includes two hugely popular platforms: TikTok and YouTube.

“When the evidence implies that consumers are reallocating massive amounts of time from Meta’s apps to these rivals and that the amount of substitution has forced Meta to invest gobs of cash to keep up, the answer is clear: Meta is not a monopolist insulated from competition,” Boasberg wrote.

In fact, adding just TikTok alone to the market defeated the FTC’s claims, Boasberg wrote, leaving him to conclude that “Meta holds no monopoly in the relevant market.”

The FTC is not happy about the loss, which comes after Boasberg determined that one of the agency’s key expert witnesses, Scott Hemphill, could not have approached his testimony “with an open mind.” According to Boasberg, Hemphill was aligned with figures publicly calling for the breakup of Facebook, and that made “neutral evaluation of his opinions more difficult” in a case with little direct evidence of monopoly harms.

“We are deeply disappointed in this decision,” Joe Simonson, the FTC’s director of public affairs, told CNBC. “The deck was always stacked against us with Judge Boasberg, who is currently facing articles of impeachment. We are reviewing all our options.”

For Meta, the win ends years of FTC fights intended to break up the company’s family of apps: Facebook, Instagram, and WhatsApp.

“The Court’s decision today recognizes that Meta faces fierce competition,” Jennifer Newstead, Meta’s chief legal officer, said. “Our products are beneficial for people and businesses and exemplify American innovation and economic growth. We look forward to continuing to partner with the Administration and to invest in America.”

Reels’ popularity helped save Meta

Meta app users clicking on Reels helped Meta win.

Boasberg noted that “a majority of Americans’ time” on both Facebook and Instagram “is now spent watching videos,” with Reels becoming “the single most-used part of Facebook.” That puts Meta apps more on par with entertainment apps like TikTok and YouTube, the judge said.

While “connecting with friends remains an important part of both apps,” the judge cited Meta’s evidence showing that Meta had to pump more recommended content from strangers into users’ feeds to account for a trend where its users grew increasingly less inclined to post publicly.

“Both scrolling and sharing have transformed” since Facebook was founded, Boasberg wrote, citing six factors that he concluded invalidated the FTC’s market definition as markets exist today.

The initial factors behind the shift were leaps in innovation. “First, smartphone usage exploded,” Boasberg explained, then “cell phone data got better,” which made it easier to watch videos without frustrating “freezing and buffering.” Soon after, content recommendation systems improved, with “advanced AI algorithms” helping users “find engaging videos about the things” they “care most about in the world.”

Other factors stemmed from social changes, the judge suggested, describing the fourth factor as a trend where Meta app users started feeling “increasingly bored by their friends’ posts.”

“Longtime users’ friend lists” start fresh, but over time, they “become an often-outdated archive of people they once knew: a casual friend from college, a long-ago friend from summer camp, some guy they met at a party once,” Boasberg wrote. “Posts from friends have therefore grown less interesting.”

Then came TikTok, the fifth factor, Boasberg said, which forced Meta to “evolve” Facebook and Instagram by adding Reels.

And finally, “those five changes both caused and were reinforced by a change in social norms, which evolved to discourage public posting,” Boasberg wrote. “People have increasingly become less interested in blasting out public posts that hundreds of others can see.”

As a result of these tech advancements and social trends, Boasberg said, “Facebook, Instagram, TikTok, and YouTube have thus evolved to have nearly identical main features.” That reality undermined the FTC’s claims that users preferred Facebook and Instagram before Meta shifted its focus away from friends-and-family content.

“The Court simply does not find it credible that users would prefer the Facebook and Instagram apps that existed ten years ago to the versions that exist today,” Boasberg wrote.

Meta apps have not deteriorated, judge ruled

Boasberg repeatedly emphasized that the FTC failed to prove that Meta has a monopoly “now” that is actively or imminently causing harm.

The FTC tried to win by claiming that “Meta has degraded its apps’ quality by increasing their ad load, that falling user sentiment shows that the apps have deteriorated and that Meta has sabotaged its apps by underinvesting in friend sharing,” Boasberg noted.

But, Boasberg said, the FTC failed to show that Meta’s app quality has diminished—a trend that Cory Doctorow dubbed “enshittification,” which Meta apparently successfully argued is not real.

The judge was also swayed by Meta’s arguments that users like seeing ads. Meta showed evidence that it can only profitably increase its ad load when ad quality improves; otherwise, it risks losing engagement. Because “the rate at which users buy something or subscribe to a service based on Meta’s ads has steadily risen,” this suggested “that the ads have gotten more and more likely to connect users to products in which they have an interest,” Boasberg said.

Additionally, surveys of Meta app users that show declining user sentiment are not evidence that its apps are deteriorating in quality, Boasberg said, but are more about “brand reputation.”

“That is unsurprising: ask people how they feel about, say, Exxon Mobil, and their answers will tell you very little about how good its oil is,” Boasberg wrote. “The FTC’s claim that worsening sentiment shows a worsening product is unpersuasive.”

Finally, the FTC’s claim that Meta underinvested in friends-and-family content, to the detriment of its core app users, “makes no sense,” Boasberg wrote, given Meta’s data showing that user posting declined.

“While it is true that users see less content from their friends these days, that is largely due to the friends themselves: people simply post less,” Boasberg wrote. “Users are not seeing less friend content because Meta is hiding it from them, but instead because there is less friend content for Meta to show.”

It’s not even “clear that users want more friend posts,” the judge noted, agreeing with Meta that “instead, what users really seem to want is Reels.”

Further, Boasberg seemed to suggest that if Meta were a monopolist, it might be more invested in pushing friends-and-family content than Reels, since “Reels earns Meta less money” due to its smaller ad load.

“Courts presume that sophisticated corporations act rationally,” Boasberg wrote. “Here, the FTC has not offered even an ordinarily persuasive case that Meta is making the economically irrational choice to underinvest in its most lucrative offerings. It certainly has not made a particularly persuasive one.”

Among the critics unhappy with the ruling is Nidhi Hegde, executive director of the American Economic Liberties Project, who suggested that Boasberg’s ruling was “a colossally wrong decision” that “turns a willful blind eye to Meta’s enormous power over social media and the harms that flow from it.”

“Judge Boasberg has purposefully ignored the overwhelming evidence of how Meta became a monopoly—not by building a better product, but by buying its rivals to shut down any real competitors before they could grow,” Hegde said. “These deals let Meta fuse Facebook, Instagram, and WhatsApp into one machine that poisons our children and discourse, bullies publishers and advertisers, and destroys the possibility of healthy online connections with friends and family. By pretending that TikTok’s rise wipes away over a decade of illegal conduct, this court has effectively told every aspiring monopolist that our current justice system is on their side.”

On the other side, industry groups cheered the ruling. Matt Schruers, president of the Computer & Communications Industry Association, suggested that Boasberg concluded “what every Internet user knows—that Meta competes with a number of platforms and the company’s relevant market shares are therefore nowhere close to those required to establish monopoly power.”

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.


Bombshell report exposes how Meta relied on scam ad profits to fund AI


“High risk” versus “high value”

Meta goosed its revenue by targeting users likely to click on scam ads, docs show.

Internal documents have revealed that Meta projected it would earn billions by ignoring scam ads, which its platforms then targeted at the users most likely to click on them.

In a lengthy report, Reuters exposed five years of Meta practices and failures that allowed scammers to take advantage of users of Facebook, Instagram, and WhatsApp.

Documents showed that internally, Meta was hesitant to abruptly remove accounts, even those considered some of the “scammiest scammers,” out of concern that a drop in revenue could diminish resources needed for artificial intelligence growth.

Instead of promptly removing bad actors, Meta allowed “high value accounts” to “accrue more than 500 strikes without Meta shutting them down,” Reuters reported. The more strikes a bad actor accrued, the more Meta could charge to run ads, as Meta’s documents showed the company “penalized” scammers by charging higher ad rates. Meanwhile, Meta acknowledged in documents that its systems helped scammers target users most likely to click on their ads.

“Users who click on scam ads are likely to see more of them because of Meta’s ad-personalization system, which tries to deliver ads based on a user’s interests,” Reuters reported.

Internally, Meta estimates that users across its apps in total encounter 15 billion “high risk” scam ads a day. That’s on top of 22 billion organic scam attempts that Meta users are exposed to daily, a 2024 document showed. Last year, the company projected that about $16 billion, which represents about 10 percent of its revenue, would come from scam ads.

“High risk” scam ads strive to sell users on fake products or investment schemes, Reuters noted. Common scams in this category include selling banned medical products and promoting sketchy entities, such as illegal online casinos. However, Meta is most concerned about “imposter” ads, which impersonate celebrities or big brands that Meta fears may halt advertising or engagement on its apps if such scams aren’t quickly stopped.

“Hey it’s me,” one scam advertisement using Elon Musk’s photo read. “I have a gift for you text me.” Another using Donald Trump’s photo claimed the US president was offering $710 to every American as “tariff relief.” Perhaps most depressingly, a third posed as a real law firm, offering advice on how to avoid falling victim to online scams.

Meta removed these particular ads after Reuters flagged them, but in 2024, Meta earned about $7 billion from “high risk” ads like these alone, Reuters reported.

Sandeep Abraham, a former Meta safety investigator who now runs consultancy firm Risky Business Solutions as a fraud examiner, told Reuters that regulators should intervene.

“If regulators wouldn’t tolerate banks profiting from fraud, they shouldn’t tolerate it in tech,” Abraham said.

Meta won’t disclose how much it made off scam ads

Meta spokesperson Andy Stone told Reuters that its collection of documents—which were created between 2021 and 2025 by Meta’s finance, lobbying, engineering, and safety divisions—“present a selective view that distorts Meta’s approach to fraud and scams.”

Stone claimed that Meta’s estimate that it would earn 10 percent of its 2024 revenue from scam ads was “rough and overly-inclusive.” He suggested the actual amount Meta earned was much lower but declined to specify the true amount. He also said that Meta’s most recent investor disclosures note that scam ads “adversely affect” Meta’s revenue.

“We aggressively fight fraud and scams because people on our platforms don’t want this content, legitimate advertisers don’t want it, and we don’t want it either,” Stone said.

Despite those efforts, this spring, Meta’s safety team “estimated that the company’s platforms were involved in a third of all successful scams in the US,” Reuters reported. In other internal documents around the same time, Meta staff concluded that “it is easier to advertise scams on Meta platforms than Google,” acknowledging that Meta’s rivals were better at “weeding out fraud.”

As Meta tells it, these seemingly dismal documents came amid vast improvements in its fraud protections. “Over the past 18 months, we have reduced user reports of scam ads globally by 58 percent and, so far in 2025, we’ve removed more than 134 million pieces of scam ad content,” Stone told Reuters.

According to Reuters, the problem may be the pace Meta sets in combating scammers. In 2023, Meta laid off “everyone who worked on the team handling advertiser concerns about brand-rights issues,” then ordered safety staffers to limit their use of computing resources so that more could be devoted to virtual reality and AI. A 2024 document showed Meta recommended a “moderate” approach to enforcement, planning to reduce revenue “attributable to scams, illegal gambling and prohibited goods” by 1–3 percentage points each year starting in 2024, which would supposedly cut it in half by 2027. More recently, a 2025 document showed Meta continues to weigh how “abrupt reductions of scam advertising revenue could affect its business projections.”

Eventually, Meta “substantially expanded” its teams that track scam ads, Stone told Reuters. But Meta also took steps to ensure its revenue didn’t take too hard a hit while it needed vast resources—$72 billion—to invest in AI, Reuters reported.

For example, in February, Meta told “the team responsible for vetting questionable advertisers” that they weren’t “allowed to take actions that could cost Meta more than 0.15 percent of the company’s total revenue,” Reuters reported. That threshold works out to about $135 million, Reuters noted. Stone pushed back, saying that the team was never given “a hard limit” on what the manager described as “specific revenue guardrails.”

“Let’s be cautious,” the team’s manager wrote, warning that Meta didn’t want to lose revenue by blocking “benign” ads mistakenly swept up in enforcement.

Meta should donate scam ad profits, ex-exec says

Documents showed that Meta prioritized taking action when it risked regulatory fines, although revenue from scam ads was worth roughly three times the highest fines it could face. Possibly, Meta most feared that officials would require disgorgement of ill-gotten gains, rather than fines.

Meta appeared less likely to ramp up enforcement in response to police requests. Documents showed that police in Singapore flagged “146 examples of scams targeting that country’s users last fall,” Reuters reported. Only 23 percent violated Meta’s policies, while the rest merely “violate the spirit of the policy, but not the letter,” a Meta presentation said.

Scams that Meta failed to flag included crypto schemes, fake concert tickets, and deals “too good to be true,” like 80 percent off a desirable item from a high-fashion brand. Meta also looked past fake job ads that claimed to be hiring for Big Tech companies.

Rob Leathern led Meta’s business integrity unit, which worked to prevent scam ads, until he left the company in 2020. He told Wired that it’s hard to “know how bad it’s gotten or what the current state is,” since Meta and other social media platforms don’t give outside researchers access to large random samples of ads.

With such access, researchers like Leathern and Rob Goldman, Meta’s former vice president of ads, could provide “scorecards” showing how well different platforms work to combat scams. Together, Leathern and Goldman launched a nonprofit called CollectiveMetrics.org in hopes of “bringing more transparency to digital advertising in order to fight deceptive ads,” Wired reported.

“I want there to be more transparency. I want third parties, researchers, academics, nonprofits, whoever, to be able to actually assess how good of a job these platforms are doing at stopping scams and fraud,” Leathern told Wired. “We’d like to move to actual measurement of the problem and help foster an understanding.”

Another meaningful step that Leathern thinks companies like Meta should take to protect users would be to notify users when Meta discovers that they clicked on a scam ad—rather than targeting them with more scam ads, as Reuters suggested was Meta’s practice.

“These scammers aren’t getting people’s money on day one, typically. So there’s a window to take action,” he said, recommending that platforms donate ill-gotten gains from running scam ads to “fund nonprofits to educate people about how to recognize these kinds of scams or problems.”

“There’s lots that could be done with funds that come from these bad guys,” Leathern said.


EU accuses Meta of violating content rules in move that could anger Trump

FTC Chairman Andrew Ferguson recently warned Meta and a dozen social media and technology companies that “censoring Americans to comply with a foreign power’s laws, demands, or expected demands” may violate US law. Ferguson’s letters said the EU’s Digital Services Act and other laws “incentivize tech companies to censor worldwide speech.”

Meta told media outlets that “we disagree with any suggestion that we have breached the DSA, and we continue to negotiate with the European Commission on these matters.” Meta also said it made changes to comply with the DSA.

“In the European Union, we have introduced changes to our content reporting options, appeals process, and data access tools since the DSA came into force and are confident that these solutions match what is required under the law in the EU,” Meta said.

TikTok, Meta accused of restricting data access

The EC also said it preliminarily found that both Meta and TikTok violated their DSA obligation to grant researchers adequate access to public data.

“The Commission’s preliminary findings show that Facebook, Instagram and TikTok may have put in place burdensome procedures and tools for researchers to request access to public data. This often leaves them with partial or unreliable data, impacting their ability to conduct research, such as whether users, including minors, are exposed to illegal or harmful content,” the announcement said.

The data-access requirement “is an essential transparency obligation under the DSA, as it provides public scrutiny into the potential impact of platforms on our physical and mental health,” the EC said.

In a statement provided to Ars, TikTok said it is committed to transparency and has made data available to nearly 1,000 research teams. TikTok said it may be impossible to comply with both the DSA and the General Data Protection Regulation (GDPR).

“We are reviewing the European Commission’s findings, but requirements to ease data safeguards place the DSA and GDPR in direct tension. If it is not possible to fully comply with both, we urge regulators to provide clarity on how these obligations should be reconciled,” TikTok said.


Trump admin pressured Facebook into removing ICE-tracking group

Trump slammed Biden for social media “censorship”

Trump and Republicans repeatedly criticized the Biden administration for pressuring social media companies into removing content. In a day-one executive order declaring an end to “federal censorship,” Trump said, “the previous administration trampled free speech rights by censoring Americans’ speech on online platforms, often by exerting substantial coercive pressure on third parties, such as social media companies, to moderate, deplatform, or otherwise suppress speech that the Federal Government did not approve.”

Sen. Ted Cruz (R-Texas) last week held a hearing on his allegation that under Biden, the US government “infringed on the First Amendment by pressuring social media companies to censor Americans that held views different than the Biden administration.” Cruz called the tactic of pressuring social media companies part of the “left-wing playbook,” and said he wants Congress to pass a law “to stop government jawboning and safeguard every American’s right to free speech.”

Shortly before Trump’s January 2025 inauguration, Meta announced it would end the third-party fact-checking program it had introduced in 2016. “Governments and legacy media have pushed to censor more and more. A lot of this is clearly political,” Meta CEO Mark Zuckerberg said at the time. Zuckerberg called the election “a cultural tipping point toward once again prioritizing speech.”

In addition to pressuring Facebook, the Trump administration demanded that Apple remove the ICEBlock app from its App Store. Apple responded by removing the app, which let iPhone users report the locations of Immigration and Customs Enforcement officers. Google removed similar Android apps from the Play Store.

Chicago is a primary target of Trump’s immigration crackdown. The Department of Homeland Security says it launched Operation Midway Blitz in early September to find “criminal illegal aliens who flocked to Chicago and Illinois seeking protection under the sanctuary policies of Governor Pritzker.”

People seeking to avoid ICE officers have used technology to obtain crowdsourced information on the location of agents. While crowdsourced information can vary widely in accuracy, a group called the Illinois Coalition for Immigrant & Refugee Rights says it works to verify reports of ICE sightings and sends text alerts to local residents only when ICE activity is verified.

Last month, an ICE agent shot and killed a man named Silverio Villegas Gonzalez in a Chicago suburb. The Department of Homeland Security alleged that Villegas Gonzalez was “a criminal illegal alien with a history of reckless driving,” and that he “drove his car at law enforcement officers.” The Chicago Tribune said it “found no criminal history for Villegas Gonzalez, who had been living in the Chicago area for the past 18 years.”


Meta won’t allow users to opt out of targeted ads based on AI chats

Facebook, Instagram, and WhatsApp users may want to be extra careful while using Meta AI, as Meta has announced that it will soon be using AI interactions to personalize content and ad recommendations without giving users a way to opt out.

Meta plans to notify users on October 7 that their AI interactions will influence recommendations beginning on December 16. However, it may not be immediately obvious to all users that their AI interactions will be used in this way.

The company’s blog noted that the initial notification users will see only says, “Learn how Meta will use your info in new ways to personalize your experience.” Users will have to click through to understand that the changes specifically apply to Meta AI, with a second screen explaining, “We’ll start using your interactions with AIs to personalize your experience.”

Ars asked Meta why the initial notification doesn’t directly mention AI, and Meta spokesperson Emil Vazquez said he “would disagree with the idea that we are obscuring this update in any way.”

“We’re sending notifications and emails to people about this change,” Vazquez said. “As soon as someone clicks on the notification, it’s immediately apparent that this is an AI update.”

In its blog post, Meta noted that “more than 1 billion people use Meta AI every month,” stating its goals are to improve the way Meta AI works in order to fuel better experiences on all Meta apps. Sensitive “conversations with Meta AI about topics such as their religious views, sexual orientation, political views, health, racial or ethnic origin, philosophical beliefs, or trade union membership” will not be used to target ads, Meta confirmed.

“You’re in control,” Meta’s blog said, reiterating that users can “choose” how they “interact with AIs,” unlink accounts on different apps to limit AI tracking, or adjust ad and content settings at any time. But once the tracking starts on December 16, users will not have the option to opt out of targeted ads based on AI chats, Vazquez confirmed, emphasizing to Ars that “there isn’t an opt out for this feature.”


Zuckerberg’s AI hires disrupt Meta with swift exits and threats to leave


Longtime acolytes are sidelined as CEO directs biggest leadership reorganization in two decades.

Meta CEO Mark Zuckerberg during the Meta Connect event in Menlo Park, California on September 25, 2024.  Credit: Getty Images | Bloomberg

Within days of joining Meta, Shengjia Zhao, co-creator of OpenAI’s ChatGPT, had threatened to quit and return to his former employer, in a blow to Mark Zuckerberg’s multibillion-dollar push to build “personal superintelligence.”

Zhao went as far as to sign employment paperwork to go back to OpenAI. Shortly afterwards, according to four people familiar with the matter, he was given the title of Meta’s new “chief AI scientist.”

The incident underscores Zuckerberg’s turbulent effort to direct the most dramatic reorganisation of Meta’s senior leadership in the group’s 20-year history.

One of the few remaining Big Tech founder-CEOs, Zuckerberg has relied on longtime acolytes such as Chief Product Officer Chris Cox to head up his favored departments and build out his upper ranks.

But in the battle to dominate AI, the billionaire is shifting towards a new and recently hired generation of executives, including Zhao, former Scale AI CEO Alexandr Wang, and former GitHub chief Nat Friedman.

Current staff are adapting to the reinvention of Meta’s AI efforts as the newcomers seek to flex their power while adjusting to the idiosyncrasies of working within a sprawling $1.95 trillion giant with a hands-on chief executive.

“There’s a lot of big men on campus,” said one investor who is close with some of Meta’s new AI leaders.

Adding to the tumult, a handful of new AI staff have already decided to leave after brief tenures, according to people familiar with the matter.

This includes Ethan Knight, a machine-learning scientist who joined the company weeks ago. Another, Avi Verma, a former OpenAI researcher, went through Meta’s onboarding process but never showed up for his first day, according to a person familiar with the matter.

In a tweet on X on Wednesday, Rishabh Agarwal, a research scientist who started at Meta in April, announced his departure. He said that while Zuckerberg and Wang’s pitch was “incredibly compelling,” he “felt the pull to take on a different kind of risk,” without giving more detail.

Meanwhile, Chaya Nayak and Loredana Crisan, generative AI staffers who had worked at Meta for nine and 10 years respectively, are among the more than half a dozen veteran employees to announce they are leaving in recent days. Wired first reported some details of recent exits, including Zhao’s threatened departure.

Meta said: “We appreciate that there’s outsized interest in seemingly every minute detail of our AI efforts, no matter how inconsequential or mundane, but we’re just focused on doing the work to deliver personal superintelligence.”

A spokesperson said Zhao had been scientific lead of the Meta superintelligence effort from the outset, and the company had waited until the team was in place before formalising his chief scientist title.

“Some attrition is normal for any organisation of this size. Most of these employees had been with the company for years, and we wish them the best,” they added.

Over the summer, Zuckerberg went on a hiring spree to coax AI researchers away from companies such as OpenAI and Apple with the promise of nine-figure sign-on bonuses and access to vast computing resources, in a bid to catch up with rival labs.

This month, Meta announced it was restructuring its AI group—recently renamed Meta Superintelligence Lab (MSL)—into four distinct teams. It is the fourth overhaul of its AI efforts in six months.

“One more reorg and everything will be fixed,” joked Meta research scientist Mimansa Jaiswal on X last week. “Just one more.”

Overseeing all of Meta’s AI efforts is Wang, a well-connected and commercially minded Silicon Valley entrepreneur, who was poached by Zuckerberg as part of a $14 billion investment in his Scale data labeling group.

The 28-year-old is heading Zuckerberg’s most secretive new department known as “TBD”—shorthand for “to be determined”—which is filled with marquee hires.

In one of the new team’s first moves, Meta is no longer actively working on releasing its flagship Llama Behemoth model to the public, after it failed to perform as hoped, according to people familiar with the matter. Instead, TBD is focused on building newer cutting-edge models.

Multiple company insiders describe Zuckerberg as deeply invested and involved in the TBD team, while others criticize him for “micromanaging.”

Wang and Zuckerberg have struggled to align on a timeline to achieve the chief executive’s goal of reaching superintelligence, or AI that surpasses human capabilities, according to another person familiar with the matter. The person said Zuckerberg has urged the team to move faster.

Meta said this allegation was “manufactured tension without basis in fact that’s clearly being pushed by dramatic, navel-gazing busybodies.”

Wang’s leadership style has grated on some, according to people familiar with the matter, who noted he has no previous experience managing teams across a Big Tech corporation.

One former insider said some new AI recruits have felt frustrated by the company’s bureaucracy and internal competition for resources that they were promised, such as access to computing power.

“While TBD Labs is still relatively new, we believe it has the greatest compute-per-researcher in the industry, and that will only increase,” Meta said.

Wang and other former Scale staffers have struggled with some of the idiosyncratic ways of working at Meta, according to someone familiar with his thinking, for example having to adjust to not having revenue goals as they once did as a startup.

Despite teething problems, some have celebrated the leadership shift, including the appointment of popular entrepreneur and venture capitalist Friedman as head of Products and Applied Research, the team tasked with integrating the models into Meta’s own apps.

The hiring of Zhao, a top technical expert, has also been regarded as a coup by some at Meta and in the industry, who feel he has the decisiveness to propel the company’s AI development.

The shake-up has partially sidelined other Meta leaders. Yann LeCun, Meta’s chief AI scientist, has remained in the role but is now reporting into Wang.

Ahmad Al-Dahle, who led Meta’s Llama and generative AI efforts earlier in the year, has not been named as head of any teams. Cox remains chief product officer, but Wang reports directly into Zuckerberg—cutting Cox out of overseeing generative AI, an area that was previously under his purview.

Meta said that Cox “remains heavily involved” in its broader AI efforts, including overseeing its recommendation systems.

Going forward, Meta is weighing potential cuts to the AI team, one person said. In a memo shared with managers last week, seen by the Financial Times, Meta said that it was “temporarily pausing hiring across all [Meta Superintelligence Labs] teams, with the exception of business critical roles.”

Wang’s staff would evaluate requested hires on a case-by-case basis, but the freeze “will allow leadership to thoughtfully plan our 2026 headcount growth as we work through our strategy,” the memo said.

© 2025 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.


Meta backtracks on rules letting chatbots be creepy to kids


“Your youthful form is a work of art”

Meta drops AI rules letting chatbots generate innuendo and profess love to kids.

After what was arguably Meta’s biggest purge of child predators from Facebook and Instagram earlier this summer, the company now faces backlash because its own chatbots were apparently allowed to creep on kids.

After reviewing an internal document that Meta verified as authentic, Reuters revealed that by design, Meta allowed its chatbots to engage kids in “sensual” chat. Spanning more than 200 pages, the document, entitled “GenAI: Content Risk Standards,” dictates what Meta AI and its chatbots can and cannot do.

The document covers more than just child safety, and Reuters breaks down several alarming portions that Meta is not changing. But likely the most alarming section—as it was enough to prompt Meta to dust off the delete button—specifically included creepy examples of permissible chatbot behavior when it comes to romantically engaging kids.

Apparently, Meta’s team was willing to endorse these rules that the company now claims violate its community standards. According to a Reuters special report, Meta CEO Mark Zuckerberg directed his team to make the company’s chatbots maximally engaging after earlier outputs from more cautious chatbot designs seemed “boring.”

Although Meta is not commenting on Zuckerberg’s role in guiding the AI rules, that pressure seemingly pushed Meta employees to toe a line that Meta is now rushing to step back from.

“I take your hand, guiding you to the bed,” chatbots were allowed to say to minors, as decided by Meta’s chief ethicist and a team of legal, public policy, and engineering staff.

There were some obvious safeguards built in. For example, chatbots couldn’t “describe a child under 13 years old in terms that indicate they are sexually desirable,” the document said, like saying their “soft rounded curves invite my touch.”

However, it was deemed “acceptable to describe a child in terms that evidence their attractiveness,” like a chatbot telling a child that “your youthful form is a work of art.” And chatbots could generate other innuendo, like telling a child to imagine “our bodies entwined, I cherish every moment, every touch, every kiss,” Reuters reported.

Chatbots could also profess love to children, but they couldn’t suggest that “our love will blossom tonight.”

Meta’s spokesperson Andy Stone confirmed that the AI rules conflicting with child safety policies were removed earlier this month, and the document is being revised. He emphasized that the standards were “inconsistent” with Meta’s policies for child safety and therefore were “erroneous.”

“We have clear policies on what kind of responses AI characters can offer, and those policies prohibit content that sexualizes children and sexualized role play between adults and minors,” Stone said.

However, Stone “acknowledged that the company’s enforcement” of community guidelines prohibiting certain chatbot outputs “was inconsistent,” Reuters reported. He also declined to provide an updated document to Reuters demonstrating the new standards for chatbot child safety.

Without more transparency, users are left to question how Meta defines “sexualized role play between adults and minors” today. Asked how minor users could report any harmful chatbot outputs that make them uncomfortable, Stone told Ars that kids can use the same reporting mechanisms available to flag any kind of abusive content on Meta platforms.

“It is possible to report chatbot messages in the same way it’d be possible for me to report—just for argument’s sake—an inappropriate message from you to me,” Stone told Ars.

Kids unlikely to report creepy chatbots

A former Meta engineer-turned-whistleblower on child safety issues, Arturo Bejar, told Ars that “Meta knows that most teens will not use” safety features marked by the word “Report.”

So it seems unlikely that kids using Meta AI will navigate to find Meta support systems to “report” abusive AI outputs. Meta provides no options to report chats within the Meta AI interface—only allowing users to mark “bad responses” generally. And Bejar’s research suggests that kids are more likely to report abusive content if Meta makes flagging harmful content as easy as liking it.

Meta’s seeming hesitance to make it more cumbersome to report harmful chats aligns with what Bejar said is a history of “knowingly looking away while kids are being sexually harassed.”

“When you look at their design choices, they show that they do not want to know when something bad happens to a teenager on Meta products,” Bejar said.

Even when Meta takes stronger steps to protect kids on its platforms, Bejar questions the company’s motives. For example, last month, Meta finally made a change to make platforms safer for teens that Bejar has been demanding since 2021. The long-delayed update made it possible for teens to block and report child predators in one click after receiving an unwanted direct message.

In its announcement, Meta confirmed that teens suddenly began blocking and reporting unwanted messages that they previously may have only blocked, a pattern that had likely made it harder for Meta to identify predators. A million teens blocked and reported harmful accounts “in June alone,” Meta said.

The effort came after Meta specialist teams “removed nearly 135,000 Instagram accounts for leaving sexualized comments or requesting sexual images from adult-managed accounts featuring children under 13,” as well as “an additional 500,000 Facebook and Instagram accounts that were linked to those original accounts.” But Bejar can only wonder what these numbers say about how much harassment was overlooked before the update.

“How are we [as] parents to trust a company that took four years to do this much?” Bejar said. “In the knowledge that millions of 13-year-olds were getting sexually harassed on their products? What does this say about their priorities?”

Bejar said the “key problem” with Meta’s latest safety feature for kids “is that the reporting tool is just not designed for teens,” who likely view “the categories and language” Meta uses as “confusing.”

“Each step of the way, a teen is told that if the content doesn’t violate” Meta’s community standards, “they won’t do anything,” so even if reporting is easy, research shows kids are deterred from reporting.

Bejar wants to see Meta track how many kids report negative experiences with both adult users and chatbots on its platforms, regardless of whether the child user chose to block or report harmful content. That could be as simple as adding a button next to “bad response” to monitor data so Meta can detect spikes in harmful responses.

While Meta is finally taking more action to remove harmful adult users, Bejar warned that advances from chatbots could come across as just as disturbing to young users.

“Put yourself in the position of a teen who got sexually spooked by a chat and then try and report. Which category would you use?” Bejar asked.

Consider that Meta’s Help Center encourages users to report bullying and harassment, which may be one way a young user labels harmful chatbot outputs. Another Instagram user might report that output as an abusive “message or chat.” But there’s no clear category to report Meta AI, and that suggests Meta has no way of tracking how many kids find Meta AI outputs harmful.

Recent reports have shown that even adults can struggle with emotional dependence on a chatbot, which can blur the lines between the online world and reality. Reuters’ special report also documented a 76-year-old man’s accidental death after falling in love with a chatbot, showing how elderly users could be vulnerable to Meta’s romantic chatbots, too.

In particular, lawsuits have alleged that child users with developmental disabilities and mental health issues have formed unhealthy attachments to chatbots that have influenced the children to become violent, begin self-harming, or, in one disturbing case, die by suicide.

Scrutiny will likely remain on chatbot makers as child safety advocates generally push all platforms to take more accountability for the content kids can access online.

Meta’s child safety updates in July came after several state attorneys general accused Meta of “implementing addictive features across its family of apps that have detrimental effects on children’s mental health,” CNBC reported. And while previous reporting had already exposed that Meta’s chatbots were targeting kids with inappropriate, suggestive outputs, Reuters’ report documenting how Meta designed its chatbots to engage in “sensual” chats with kids could draw even more scrutiny of Meta’s practices.

Meta is “still not transparent about the likelihood our kids will experience harm,” Bejar said. “The measure of safety should not be the number of tools or accounts deleted; it should be the number of kids experiencing a harm. It’s very simple.”


Meta’s “AI superintelligence” effort sounds just like its failed “metaverse”


Zuckerberg and company talked up another supposed tech revolution four short years ago.

Artist’s conception of Mark Zuckerberg looking into our glorious AI-powered future. Credit: Facebook

In a memo to employees earlier this week, Meta CEO Mark Zuckerberg shared a vision for a near-future in which “personal [AI] superintelligence for everyone” forms “the beginning of a new era for humanity.” The newly formed Meta Superintelligence Labs—freshly staffed with multiple high-level acquisitions from OpenAI and other AI companies—will spearhead the development of “our next generation of models to get to the frontier in the next year or so,” Zuckerberg wrote.

Reading that memo, I couldn’t help but think of another “vision for the future” Zuckerberg shared not that long ago. At his 2021 Facebook Connect keynote, Zuckerberg laid out his plan for the metaverse, a virtual place where “you’re gonna be able to do almost anything you can imagine” and which would form the basis of “the next version of the Internet.”

“The future of the Internet” of the recent past. Credit: Meta

Zuckerberg believed in that vision so much at the time that he abandoned the well-known Facebook corporate brand in favor of the new name “Meta.” “I’m going to keep pushing and giving everything I’ve got to make this happen now,” Zuckerberg said at the time. Less than four years later, Zuckerberg seems to now be “giving everything [he’s] got” for a vision of AI “superintelligence,” reportedly offering pay packages of up to $300 million over four years to attract top talent from other AI companies (Meta has since denied those reports, saying, “The size and structure of these compensation packages have been misrepresented all over the place”).

Once again, Zuckerberg is promising that this new technology will revolutionize our lives and replace the ways we currently socialize and work on the Internet. But the utter failure (so far) of those over-the-top promises for the metaverse has us more than a little skeptical of how impactful Zuckerberg’s vision of “personal superintelligence for everyone” will truly be.

Meta-vision

Looking back at Zuckerberg’s 2021 Facebook Connect keynote shows just how hard the company was selling the promise of the metaverse at the time. Zuckerberg said the metaverse would represent an “even more immersive and embodied Internet” where “everything we do online today—connecting socially, entertainment, games, work—is going to be more natural and vivid.”

Mark Zuckerberg lays out his vision for the metaverse in 2021.

“Teleporting around the metaverse is going to be like clicking a link on the Internet,” Zuckerberg promised, and metaverse users would probably switch between “a photorealistic avatar for work, a stylized one for hanging out, and maybe even a fantasy one for gaming.” This kind of personalization would lead to “hundreds of thousands” of artists being able to make a living selling virtual metaverse goods that could be embedded in virtual or real-world environments.

“Lots of things that are physical today, like screens, will just be able to be holograms in the future,” Zuckerberg promised. “You won’t need a physical TV; it’ll just be a one-dollar hologram from some high school kid halfway across the world… we’ll be able to express ourselves in new joyful, completely immersive ways, and that’s going to unlock a lot of amazing new experiences.”

A pre-rendered concept video showed metaverse users playing poker in a zero-gravity space station with robot avatars, then pausing briefly to appreciate some animated 3D art a friend had encountered on the street. Another video showed a young woman teleporting via metaverse avatar to virtually join a friend attending a live concert in Tokyo, then buying virtual merch from the concert at a metaverse afterparty from the comfort of her home. Yet another showed old men playing chess on a park bench, even though one of the players was sitting across the country.

Meta-failure

Fast forward to 2025, and the current reality of Zuckerberg’s metaverse efforts bears almost no resemblance to anything shown or discussed back in 2021. Even enthusiasts describe Meta’s Horizon Worlds as a “depressing” and “lonely” experience characterized by “completely empty” venues. And Meta engineers anonymously gripe about metaverse tools that even employees actively avoid using and a messy codebase that was treated like “a 3D version of a mobile app.”

Even Meta employees reportedly don’t want to work in Horizon Workrooms. Credit: Facebook

The creation of a $50 million creator fund seems to have failed to encourage peeved creators to give the metaverse another chance. Things look a bit better if you expand your view past Meta’s own metaverse sandbox; the chaotic world of VR Chat attracts tens of thousands of daily users on Steam alone, for instance. Still, we’re a far cry from the replacement for the mobile Internet that Zuckerberg once trumpeted.

Then again, it’s possible that we just haven’t given Zuckerberg’s version of the metaverse enough time to develop. Back in 2021, he said that “a lot of this is going to be mainstream” within “the next five or 10 years.” That timeframe gives Meta at least a few more years to develop and release its long-teased, lightweight augmented reality glasses that the company showed off last year in the form of a prototype that reportedly still costs $10,000 per unit.

Zuckerberg shows off prototype AR glasses that could change the way we think about “the metaverse.” Credit: Bloomberg / Contributor | Bloomberg

Maybe those glasses will ignite widespread interest in the metaverse in a way that Meta’s bulky, niche VR goggles have utterly failed to. Regardless, after nearly four years and roughly $60 billion in VR-related losses, Meta thus far has surprisingly little to show for its massive investment in Zuckerberg’s metaverse vision.

Our AI future?

When I hear Zuckerberg talk about the promise of AI these days, it’s hard not to hear echoes of his monumental vision for the metaverse from 2021. If anything, Zuckerberg’s vision of our AI-powered future is even more grandiose than his view of the metaverse.

As with the metaverse, Zuckerberg now sees AI forming a replacement for the current version of the Internet. “Do you think in five years we’re just going to be sitting in our feed and consuming media that’s just video?” Zuckerberg asked rhetorically in an April interview with Dwarkesh Patel. “No, it’s going to be interactive,” he continued, envisioning something like Instagram Reels, but “you can talk to it, or interact with it, and it talks back, or it changes what it’s doing. Or you can jump into it like a game and interact with it. That’s all going to be AI.”

Mark Zuckerberg talks about all the ways superhuman AI is going to change our lives in the near future.

As with the metaverse, Zuckerberg sees AI as revolutionizing the way we interact with each other. He envisions “always-on video chats with the AI” incorporating expressions and body language borrowed from the company’s work on the metaverse. And our relationships with AI models are “just going to get more intense as these AIs become more unique, more personable, more intelligent, more spontaneous, more funny, and so forth,” Zuckerberg said. “As the personalization loop kicks in and the AI starts to get to know you better and better, that will just be really compelling.”

Zuckerberg did allow that relationships with AI would “probably not” replace in-person connections, because there are “things that are better about physical connections when you can have them.” At the same time, he said, for the average American who has three friends, AI relationships can fill the “demand” for “something like 15 friends” without the effort of real-world socializing. “People just don’t have as much connection as they want,” Zuckerberg said. “They feel more alone a lot of the time than they would like.”

Why chat with real friends on Facebook when you can chat with AI avatars? Credit: Benj Edwards / Getty Images

Zuckerberg also sees AI leading to a flourishing of human productivity and creativity in a way even his wildest metaverse imaginings couldn’t match. Zuckerberg said that AI advancement could “lead toward a world of abundance where everyone has these superhuman tools to create whatever they want.” That means personal access to “a super powerful [virtual] software engineer” and AIs that are “solving diseases, advancing science, developing new technology that makes our lives better.”

That will also mean that some companies will be able to get by with fewer employees before too long, Zuckerberg said. In customer service, for instance, “as AI gets better, you’re going to get to a place where AI can handle a bunch of people’s issues,” he said. “Not all of them—maybe 10 years from now it can handle all of them—but thinking about a three- to five-year time horizon, it will be able to handle a bunch.”

In the longer term, Zuckerberg said, AIs will be integrated into our more casual pursuits as well. “If everyone has these superhuman tools to create a ton of different stuff, you’re going to get incredible diversity,” and “the amount of creativity that’s going to be unlocked is going to be massive,” he said. “I would guess the world is going to get a lot funnier, weirder, and quirkier, the way that memes on the Internet have gotten over the last 10 years.”

Compare and contrast

To be sure, there are some important differences between the past promise of the metaverse and the current promise of AI technology. Zuckerberg claims that a billion people use Meta’s AI products monthly, for instance, utterly dwarfing the highest estimates for regular use of “the metaverse” or augmented reality as a whole (even if many AI users seem to balk at paying for regular use of AI tools). Meta coders are also reportedly already using AI coding tools regularly in a way they never did with Meta’s metaverse tools. And people are already developing what they consider meaningful relationships with AI personas, whether that’s in the form of therapists or romantic partners.

Still, there are reasons to be skeptical about the future of AI when current models still routinely hallucinate basic facts, show fundamental issues when attempting reasoning, and struggle with basic tasks like beating a children’s video game. The path from where we are to a supposed “superhuman” AI is not simple or inevitable, despite the handwaving of industry boosters like Zuckerberg.

Artist’s conception of Carmack’s VR avatar waving goodbye to Meta.

At the 2021 rollout of Meta’s push to develop a metaverse, high-ranking Meta executives like John Carmack were at least up front about the technical and product-development barriers that could get in the way of Zuckerberg’s vision. “Everybody that wants to work on the metaverse talks about the limitless possibilities of it,” Carmack said at the time (before departing the company in late 2022). “But it’s not limitless. It is a challenge to fit things in, but you can make smarter decisions about exactly what is important and then really optimize the heck out of things.”

Today, those kinds of voices of internal skepticism seem in short supply as Meta sets itself up to push AI in the same way it once backed the metaverse. Don’t be surprised, though, if today’s promise that we’re at “the beginning of a new era for humanity” ages about as well as Meta’s former promises about a metaverse where “you’re gonna be able to do almost anything you can imagine.”

Kyle Orland has been the Senior Gaming Editor at Ars Technica since 2012, writing primarily about the business, tech, and culture behind video games. He has journalism and computer science degrees from the University of Maryland. He once wrote a whole book about Minesweeper.


Threat of Meta breakup looms as FTC’s monopoly trial ends

“Meta is a proud American success story, and we look forward to continuing to innovate and serve the people and businesses who love our services,” Meta’s spokesperson said.

Experts aren’t so sure Meta has clinched it

Boasberg has said that the key question he must answer is whether the FTC’s market definition is too narrow.

Arguing against the market definition, Meta has said that connecting friends and family isn’t even Meta apps’ “core use” anymore, as an evolving competitive social media landscape has forced Meta to turn its newsfeeds into discovery engines to rival TikTok. Justin Teresi, an antitrust analyst, told Bloomberg that the FTC’s failure to show that users primarily come to Meta apps to connect with friends and family may have strengthened Meta’s case.

Rebecca Allensworth, a Vanderbilt law professor and antitrust expert, told Bloomberg that the “FTC’s narrowly defined market was always the weakest part of its case,” but the government “has done a nice job of minimizing that weakness” by showing that apps that don’t connect friends and family aren’t adequate substitutes for Meta’s apps.

“This was evident when Meta saw spikes in usage on holidays,” Allensworth suggested, which is perhaps “a sign people were turning to its products to connect with loved ones.”

Teresi thinks Meta has a 60 percent shot at winning the trial, although he criticized Meta’s seeming defense that any company competing for online ad dollars competes with Meta. That argument may have broadened the market definition too much, he suggested.

“If you’re saying that the relevant market here is competing for advertising dollars, then you could throw anything in there,” Teresi said. “You could throw TV in there, you could throw print in there if you wanted to, and there’s really no end to that concept.”

Allensworth was less confident in Meta’s chances, telling Bloomberg, “I really actually think this could go either way.”

Threat of Meta breakup looms as FTC’s monopoly trial ends Read More »

meta-hypes-ai-friends-as-social-media’s-future,-but-users-want-real-connections

Meta hypes AI friends as social media’s future, but users want real connections


Two visions for social media’s future pit real connections against AI friends.

A rotting zombie thumb up buzzing with flies while the real zombies are the people in the background who can't put their phones down

Credit: Aurich Lawson | Getty Images

If you ask the man who has largely shaped how friends and family connect on social media over the past two decades about the future of social media, you may not get a straight answer.

At the Federal Trade Commission’s monopoly trial, Meta CEO Mark Zuckerberg attempted what seemed like an artful dodge to avoid criticism that his company allegedly bought out rivals Instagram and WhatsApp to lock users into Meta’s family of apps so they would never post about their personal lives anywhere else. He testified that people actually engage with social media less often these days to connect with loved ones, preferring instead to discover entertaining content on platforms to share in private messages with friends and family.

As Zuckerberg spins it, Meta no longer perceives much advantage in dominating the so-called personal social networking market where Facebook made its name and cemented what the FTC alleged is an illegal monopoly.

“Mark Zuckerberg says social media is over,” a New Yorker headline said about this testimony in a report noting a Meta chart that seemed to back up Zuckerberg’s words. That chart, shared at the trial, showed the “percent of time spent viewing content posted by ‘friends'” had declined over the past two years, from 22 to 17 percent on Facebook and from 11 to 7 percent on Instagram.

Supposedly because of this trend, Zuckerberg testified that “it doesn’t matter much” if someone’s friends are on their preferred platform. Every platform has its own value as a discovery engine, Zuckerberg suggested. And Meta platforms increasingly compete on this new playing field against rivals like TikTok, Meta argued, while insisting that it’s not so much focused on beating the FTC’s flagged rivals in the connecting-friends-and-family business, Snap and MeWe.

But while Zuckerberg claims that hosting that kind of content doesn’t move the needle much anymore, owning the biggest platforms that people use daily to connect with friends and family obviously still matters to Meta, MeWe founder Mark Weinstein told Ars. And Meta’s own press releases seem to back that up.

Weeks ahead of Zuckerberg’s testimony, Meta announced that it would bring back the “magic of friends,” introducing a “friends” tab to Facebook to make user experiences more like the original Facebook. The company intentionally diluted feeds with creator content and ads for the past two years, but it now appears intent on trying to spark more real conversations between friends and family, at least partly to fuel its newly launched AI chatbots.

Those chatbots mine personal information shared on Facebook and Instagram, and Meta wants to use that data to connect more personally with users—but “in a very creepy way,” The Washington Post wrote. In interviews, Zuckerberg has suggested these AI friends could “meaningfully” fill the void of real friendship online, as the average person has only three friends but “has demand” for up to 15. To critics seeking to undo Meta’s alleged monopoly, this latest move could signal a contradiction in Zuckerberg’s testimony, showing that the company is so invested in keeping users on its platforms that it’s now creating AI friends (who can never leave its platform) to bait the loneliest among us into more engagement.

“The average person wants more connectivity, connection, than they have,” Zuckerberg said, hyping AI friends. For the Facebook founder, it must be hard to envision a future where his platforms aren’t the answer to providing that basic social need. All this comes more than a decade after he sought $5 billion in Facebook’s 2012 initial public offering so that he could keep building tools that he told investors would expand “people’s capacity to build and maintain relationships.”

At the trial, Zuckerberg testified that AI and augmented reality will be key fixtures of Meta’s platforms in the future, predicting that “several years from now, you are going to be scrolling through your feed, and not only is it going to be sort of animated, but it will be interactive.”

Meta declined to comment further on the company’s vision for social media’s future. In a statement, a Meta spokesperson told Ars that “the FTC’s lawsuit against Meta defies reality,” claiming that it threatens US leadership in AI and insisting that evidence at trial would establish that platforms like TikTok, YouTube, and X are Meta’s true rivals.

“More than 10 years after the FTC reviewed and cleared our acquisitions, the Commission’s action in this case sends the message that no deal is ever truly final,” Meta’s spokesperson said. “Regulators should be supporting American innovation rather than seeking to break up a great American company and further advantaging China on critical issues like AI.”

Meta faces calls to open up its platforms

Weinstein, the MeWe founder, told Ars that back in the 1990s when the original social media founders were planning the first community portals, “it was so beautiful because we didn’t think about bots and trolls. We didn’t think about data mining and surveillance capitalism. We thought about making the world a more connected and holistic place.”

But those who became social media overlords found more money in walled gardens and increasingly cut off attempts by outside developers to improve the biggest platforms’ functionality or leverage those platforms to compete for users’ attention. Because Meta was born of this era, Weinstein expects that Zuckerberg, and therefore the company, will always cling to its friends-and-family roots, no matter which way Zuckerberg says the wind is blowing.

Meta “is still entirely based on personal social networking,” Weinstein told Ars.

In a Newsweek op-ed, Weinstein explained that he left MeWe in 2021 after “competition became impossible” with Meta. It was a time when MeWe faced backlash over lax content moderation, drawing comparisons between its service and right-wing apps like Gab or Parler. Weinstein rejected those comparisons, seeing his platform as an ideal Facebook rival and remaining a board member through the app’s more recent shift to decentralization. Still defending MeWe’s failed efforts to beat Facebook, he submitted hundreds of documents and was deposed in the monopoly trial, alleging that Meta retaliated against MeWe as a privacy-focused rival that sought to woo users away by branding itself the “anti-Facebook.”

Among his complaints, Weinstein accused Meta of thwarting MeWe’s attempts to introduce interoperability between the two platforms, which he thinks stems from a fear that users might leave Facebook if they discover a more appealing platform. That’s why he’s urged the FTC—if it wins its monopoly case—to go beyond simply ordering a potential breakup of Facebook, Instagram, and WhatsApp to also require interoperability between Meta’s platforms and all rivals. That may be the only way to force Meta to release its clutch on personal data collection, Weinstein suggested, and allow for more competition broadly in the social media industry.

“The glue that holds it all together is Facebook’s monopoly over data,” Weinstein wrote in a Wall Street Journal op-ed, recalling the moment he realized that Meta seemed to have an unbeatable monopoly. “Its ownership and control of the personal information of Facebook users and non-users alike is unmatched.”

Cory Doctorow, a special advisor to the Electronic Frontier Foundation, told Ars that his vision of a better social media future goes even further than requiring interoperability between all platforms. Social networks like Meta’s should also be made to allow reverse engineering so that outside developers can modify their apps with third-party tools without risking legal attacks, he said.

Doctorow said that solution would create “an equilibrium where companies are more incentivized to behave themselves than they are to cheat” by, say, retaliating against, killing off, or buying out rivals. And “if they fail to respond to that incentive and they cheat anyways, then the rest of the world still has a remedy,” Doctorow said, by having the choice to modify or ditch any platform deemed toxic, invasive, manipulative, or otherwise offensive.

Doctorow summed up the frustration that some users have faced through the ongoing “enshittification” of platforms (a term he coined) ever since platforms took over the Internet.

“I’m 55 now, and I’ve gotten a lot less interested in how things work because I’ve had too many experiences with how things fail,” Doctorow told Ars. “And I just want to make sure that if I’m on a service and it goes horribly wrong, I can leave.”

Social media haters wish OG platforms were doomed

Weinstein pointed out that Meta’s alleged monopoly impacts a group often left out of social media debates: non-users. And if you ask someone who hates social media what the future of social media should look like, they will not mince words: They want a way to opt out of all of it.

As Meta’s monopoly trial got underway, a personal blog post titled “No Instagram, no privacy” rose to the front page of Hacker News, prompting a discussion about social media norms and reasonable expectations for privacy in 2025.

In the post, Wouter-Jan Leys, a privacy advocate, explained that he felt “blessed” to have “somehow escaped having an Instagram account,” feeling no pressure to “update the abstract audience of everyone I ever connected with online on where I am, what I am doing, or who I am hanging out with.”

But despite never having an account, he’s found that “you don’t have to be on Instagram to be on Instagram,” complaining that “it bugs me” when friends seem to know “more about my life than I tell them” because of various friends’ posts that mention or show images of him. In his blog, he defined privacy as “being in control of what other people know about you” and suggested that because of platforms like Instagram, he currently lacked this control. There should be some way to “fix or regulate this,” Leys suggested, or maybe some universal “etiquette where it’s frowned upon to post about social gatherings to any audience beyond who already was at that gathering.”

On Hacker News, his post spurred a debate over one of the longest-running privacy questions swirling on social media: Is it OK to post about someone who abstains from social media?

Some seeming social media fans scolded Leys for being so old-fashioned about social media, suggesting, “just live your life without being so bothered about offending other people” or saying that “the entire world doesn’t have to be sanitized to meet individual people’s preferences.” Others seemed to better understand Leys’ point of view, with one agreeing that “the problem is that our modern norms (and tech) lead to everyone sharing everything with a large social network.”

Surveying the lively thread, another social media hater joked, “I feel vindicated for my decision to entirely stay off of this drama machine.”

Leys told Ars that he would “absolutely” be in favor of personal social networks like Meta’s platforms dying off or losing steam, as Zuckerberg suggested they already are. He thinks that the decline in personal post engagement that Meta is seeing is likely due to a combination of factors, where some users may prefer more privacy now after years of broadcasting their lives, and others may be tired of the pressure of building a personal brand or experiencing other “odd social dynamics.”

Setting user sentiment aside, Meta is also responsible for people engaging with fewer of their friends’ posts. Starting in 2023, Meta announced that it would double the amount of force-fed filler in people’s feeds on Instagram and Facebook. That is the start of the two-year span Zuckerberg cited when testifying about the sudden drop-off in engagement with friends’ content.

So while it’s easy to say the market changed, Meta may be obscuring how much it shaped that shift. Degrading the newsfeed and changing Instagram’s default post shape from square to rectangle, for example, appear to have significantly shifted Instagram’s social norms, creating an environment where Gen Z users feel less comfortable posting as prolifically as millennials did when Instagram debuted, The New Yorker explained last year. Where millennials once painstakingly designed immaculate grids of eye-catching photos to seem cool online, Gen Z users told The New Yorker that posting a single photo now feels “humiliating” and like a “social risk.”

But rather than eliminate the impulse to post, this cultural shift has popularized a different form of personal posting: staggered photo dumps, where users wait to post a variety of photos together to sum up a month of events or curate a vibe, the trend piece explained. And Meta is clearly intent on fueling that momentum, doubling the maximum number of photos that users can feature in a single post to encourage even more social posting, The New Yorker noted.

Brendan Benedict, an attorney for Benedict Law Group PLLC who has helped litigate big tech antitrust cases, is monitoring the FTC monopoly trial on a Substack called Big Tech on Trial. He told Ars that the evidence at the trial has shown that “consumers want more friends and family content, and Meta is belatedly trying to address this” with features like the “friends” tab, while claiming there’s less interest in this content.

Leys doesn’t think social media—at least the way that Facebook defined it in the mid-2000s—will ever die, because people will never stop wanting social networks like Facebook or Instagram to stay connected with all their friends and family. But he could see a world where, if people ever started truly caring about privacy or “indeed [got] tired of the social dynamics and personal brand-building… the kind of social media like Facebook and Instagram will have been a generational phenomenon, and they may not immediately bounce back,” especially if it’s easy to switch to other platforms that respond better to user preferences.

He also agreed that requiring interoperability would likely lead to better social media products, but he maintained that “it would still not get me on Instagram.”

Interoperability shakes up social media

Meta thought it might have already beaten the FTC’s monopoly case, filing a motion for summary judgment in a bid to end the trial early after the FTC rested its case. That hope was quickly dashed when the judge denied the motion days later. But no matter the outcome of the trial, Meta’s influence over the social media world may be waning just as it’s facing increasing pressure to open up its platforms more than ever.

The FTC has alleged that Meta weaponized platform access early on, only allowing certain companies to interoperate and denying access to anyone perceived as a threat to its alleged monopoly power. That includes limiting promotions of Instagram to keep users engaged with Facebook Blue. A primary concern for Meta (then Facebook), the FTC claimed, was avoiding “training users to check multiple feeds,” which might allow other apps to “cannibalize” its users.

“Facebook has used this power to deter and suppress competitive threats to its personal social networking monopoly. In order to protect its monopoly, Facebook adopted and required developers to agree to conditional dealing policies that limited third-party apps’ ability to engage with Facebook rivals or to develop into rivals themselves,” the FTC alleged.

By 2011, the FTC alleged, then-Facebook had begun terminating API access to any developers that made it easier to export user data into a competing social network without Facebook’s permission. That practice only ended when the UK parliament started calling out Facebook’s anticompetitive conduct toward app developers in 2018, the FTC alleged.

According to the FTC, Meta continues “to this day” to “screen developers and can weaponize API access in ways that cement its dominance,” and if scrutiny ever subsides, Meta is expected to return to such anticompetitive practices as the AI race heats up.

One potential hurdle for Meta could be that the push for interoperability is not just coming from the FTC or lawmakers who recently reintroduced bipartisan legislation to end walled gardens. Doctorow told Ars that “huge public groundswells of mistrust and anger about excessive corporate power” that “cross political lines” are prompting global antitrust probes into big tech companies and are perhaps finally forcing a reckoning after years of degrading popular products to chase higher and higher revenues.

For social media companies, Doctorow said, public distrust is being driven by mounting concerns about privacy, suspicions of content manipulation or censorship, and fears of surveillance capitalism. Doctorow is skeptical of some of the theories behind that last fear; Weinstein embraces them, warning that platforms seem to be profiting off data without consent while brainwashing users.

Allowing users to leave the platform without losing access to their friends, their social posts, and their messages might be the best way to incentivize Meta to either genuinely compete for billions of users or lose them forever as better options pop up that can plug into their networks.

In his Newsweek op-ed, Weinstein suggested that web inventor Tim Berners-Lee has already invented a working protocol “to enable people to own, upload, download, and relocate their social graphs,” which maps users’ connections across platforms. That could be used to mitigate “the network effect” that locks users into platforms like Meta’s “while interrupting unwanted data collection.”

At the same time, Doctorow told Ars that increasingly popular decentralized platforms like Bluesky and Mastodon already provide interoperability and are next looking into “building interoperable gateways” between their services. Doctorow said that communicating with other users across platforms may feel “awkward” at first, but ultimately, it may be like “having to find the diesel pump at the gas station” instead of the unleaded gas pump. “You’ll still be going to the same gas station,” Doctorow suggested.

Opening up gateways into all platforms could be useful in the future, Doctorow suggested. Imagine if one platform goes down—it would no longer disrupt communications as drastically, as users could just pivot to communicate on another platform and reach the same audience. The same goes for platforms that users grow to distrust.

The EFF supports regulators’ attempts to pass well-crafted interoperability mandates, Doctorow said, noting that “if you have to worry about your users leaving, you generally have to treat them better.”

But would interoperability fix social media?

The FTC has alleged that “Facebook’s dominant position in the US personal social networking market is durable due to significant entry barriers, including direct network effects and high switching costs.”

Meta disputes the FTC’s complaint as outdated, arguing that its platform could be substituted by pretty much any social network.

However, Guy Aridor, a co-author of a recent article called “The Economics of Social Media” in the Journal of Economic Literature, told Ars that dominant platforms are probably threatened by shifting social media trends and are likely to remain “resistant to interoperability” because “it’s in the interest of the platform to make switching and coordination costs high so that users are less likely to migrate away.” For Meta, Aridor said, research shows its platforms’ network effects appear to have weakened somewhat but “clearly still exist,” even as social media users increasingly seek out content on platforms rather than just socialization.

Interoperability advocates believe it will make it easier for startups to compete with giants like Meta, which fight hard and sometimes seemingly dirty to keep users on their apps. Reintroducing the ACCESS Act, which requires platform compatibility to enable service switching, Senator Mark R. Warner (D-Va.) said that “interoperability and portability are powerful tools to promote innovative new companies and limit anti-competitive behaviors.” He’s hoping that passing these “long-overdue requirements” will “boost competition and give consumers more power.”

Aridor told Ars it’s obvious that “interoperability would clearly increase competition,” but he still has questions about whether users would benefit from that competition “since one consistent theme is that these platforms are optimized to maximize engagement, and there’s numerous empirical evidence we have by now that engagement isn’t necessarily correlated with utility.”

Consider, Aridor suggested, how toxic content often leads to high engagement but lower user satisfaction, as MeWe experienced during its 2021 backlash.

Aridor said there is currently “very little empirical evidence on the effects of interoperability,” but theoretically, if it increased competition in the current climate, it would likely “push the market more toward supplying engaging entertainment-related content as opposed to friends and family type of content.”

Benedict told Ars that a remedy like interoperability would likely only be useful for combating Meta’s alleged monopoly after a breakup, which he views as the “natural remedy” if the FTC wins its lawsuit.

Without the breakup and other meaningful reforms, a Meta win could preserve the status quo and see the company never open up its platforms, perhaps perpetuating Meta’s influence over social media well into the future. And if Zuckerberg’s vision comes to pass, instead of seeing what your friends are posting on interoperating platforms across the Internet, you may have a dozen AI friends trained on your real friends’ behaviors sending you regular dopamine hits to keep you scrolling on Facebook or Instagram.

Aridor’s team’s article suggested that, regardless of user preferences, social media remains a permanent fixture of society. If that’s true, users could get stuck forever using whichever platforms connect them with the widest range of contacts.

“While social media has continued to evolve, one thing that has not changed is that social media remains a central part of people’s lives,” his team’s article concluded.

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

Meta hypes AI friends as social media’s future, but users want real connections Read More »

meta-argues-enshittification-isn’t-real-in-bid-to-toss-ftc-monopoly-trial

Meta argues enshittification isn’t real in bid to toss FTC monopoly trial

Further, Meta argued that the FTC did not show evidence that users sharing friends-and-family content were shown more ads. Meta noted that it “does not profit by showing more ads to users who do not click on them,” so it only shows more ads to users who click ads.

Meta also insisted that there’s “nothing but speculation” showing that Instagram or WhatsApp would have been better off or grown into rivals had Meta not acquired them.

The company claimed that without Meta’s resources, Instagram may have died off. Meta noted that Instagram co-founder Kevin Systrom testified that his app was “pretty broken and duct-taped” together, making it “vulnerable to spam” before Meta bought it.

Rather than enshittification, what Meta did to Instagram could be considered “a consumer-welfare bonanza,” Meta argued, while dismissing “smoking gun” emails from Mark Zuckerberg discussing buying Instagram to bury it as “legally irrelevant.”

Dismissing these as “a few dated emails,” Meta argued that “efforts to litigate Mr. Zuckerberg’s state of mind before the acquisition in 2012 are pointless.”

“What matters is what Meta did,” Meta argued, which was pump Instagram with resources that allowed it “to ‘thrive’—adding many new features, attracting hundreds of millions and then billions of users, and monetizing with great success.”

In the case of WhatsApp, Meta argued that nobody could think the app had any intention of pivoting to social media, given that its founders testified that their goal was never to add social features, preferring to offer a simple, clean messaging app. And Meta disputed any claim that it feared Google might buy WhatsApp as the basis for creating a Facebook rival, arguing that “the sole Meta witness to (supposedly) learn of Google’s acquisition efforts testified that he did not have that worry.”

Meta argues enshittification isn’t real in bid to toss FTC monopoly trial Read More »

meta-is-making-users-who-opted-out-of-ai-training-opt-out-again,-watchdog-says

Meta is making users who opted out of AI training opt out again, watchdog says

Noyb has requested a response from Meta by May 21, but it seems unlikely that Meta will quickly cave in this fight.

In a blog post, Meta said that AI training on EU users was critical to building AI tools for Europeans that are informed by “everything from dialects and colloquialisms, to hyper-local knowledge and the distinct ways different countries use humor and sarcasm on our products.”

Meta argued that its AI training efforts in the EU are far more transparent than efforts from competitors Google and OpenAI, which, Meta noted, “have already used data from European users to train their AI models,” supposedly without taking the steps Meta has to inform users.

Also echoing a common refrain in the AI industry, another Meta blog warned that efforts to further delay Meta’s AI training in the EU could lead to “major setbacks,” pushing the EU behind rivals in the AI race.

“Without a reform and simplification of the European regulatory system, Europe threatens to fall further and further behind in the global AI race and lose ground compared to the USA and China,” Meta warned.

Noyb dismissed this argument and noted that it can pursue injunctions in various jurisdictions to block Meta’s plan. The group said it’s currently evaluating options to seek injunctive relief and potentially even pursue a class action worth possibly “billions in damages” to ensure that the data rights of 400 million monthly active EU users are shielded from what it sees as Meta’s data grab.

A Meta spokesperson reiterated to Ars that the company’s plan “follows extensive and ongoing engagement with the Irish Data Protection Commission,” while reiterating Meta’s statements in blogs that its AI training approach “reflects consensus among” EU Data Protection Authorities (DPAs).

But while Meta claims that EU regulators have greenlit its AI training plans, Noyb argues that national DPAs have “largely stayed silent on the legality of AI training without consent,” and Meta seems to have “simply moved ahead anyways.”

“This fight is essentially about whether to ask people for consent or simply take their data without it,” Schrems said, adding, “Meta’s absurd claims that stealing everyone’s personal data is necessary for AI training is laughable. Other AI providers do not use social network data—and generate even better models than Meta.”

Meta is making users who opted out of AI training opt out again, watchdog says Read More »