Policy

OpenAI slams court order that lets NYT read 20 million complete user chats


OpenAI: NYT wants evidence of ChatGPT users trying to get around news paywall.

Credit: Getty Images | alexsl

OpenAI wants a court to reverse a ruling forcing the ChatGPT maker to give 20 million user chats to The New York Times and other news plaintiffs that sued it over alleged copyright infringement. Although OpenAI previously offered 20 million user chats as a counter to the NYT’s demand for 120 million, the AI company says a court order requiring production of the chats is too broad.

“The logs at issue here are complete conversations: each log in the 20 million sample represents a complete exchange of multiple prompt-output pairs between a user and ChatGPT,” OpenAI said today in a filing in US District Court for the Southern District of New York. “Disclosure of those logs is thus much more likely to expose private information [than individual prompt-output pairs], in the same way that eavesdropping on an entire conversation reveals more private information than a 5-second conversation fragment.”

OpenAI’s filing said that “more than 99.99%” of the chats “have nothing to do with this case.” It asked the district court to “vacate the order and order News Plaintiffs to respond to OpenAI’s proposal for identifying relevant logs.” OpenAI could also seek review in a federal court of appeals.

OpenAI posted a message on its website to users today saying that “The New York Times is demanding that we turn over 20 million of your private ChatGPT conversations” in order to “find examples of you using ChatGPT to try to get around their paywall.”

ChatGPT users concerned about privacy have more to worry about than the NYT case. For example, ChatGPT conversations have been found in Google search results and the Google Search Console tool that developers can use to monitor search traffic. OpenAI today said it plans to develop “advanced security features designed to keep your data private, including client-side encryption for your messages with ChatGPT.”

OpenAI: AI chats should be treated like private emails

OpenAI’s court filing argues that the chat log production should be narrowed based on the relevance of chats to the case.

“OpenAI is unaware of any court ordering wholesale production of personal information at this scale,” the filing said. “This sets a dangerous precedent: it suggests that anyone who files a lawsuit against an AI company can demand production of tens of millions of conversations without first narrowing for relevance. This is not how discovery works in other cases: courts do not allow plaintiffs suing Google to dig through the private emails of tens of millions of Gmail users irrespective of their relevance. And it is not how discovery should work for generative AI tools either.”

A November 7 order by US Magistrate Judge Ona Wang sided with the NYT, saying that OpenAI must “produce the 20 million de-identified Consumer ChatGPT Logs to News Plaintiffs by November 14, 2025, or within 7 days of completing the de-identification process.” Wang ruled that the production must go forward even though the parties don’t agree on whether the logs must be produced in full:

Whether or not the parties had reached agreement to produce the 20 million Consumer ChatGPT Logs in whole—which the parties vehemently dispute—such production here is appropriate. OpenAI has failed to explain how its consumers’ privacy rights are not adequately protected by: (1) the existing protective order in this multidistrict litigation or (2) OpenAI’s exhaustive de-identification of all of the 20 million Consumer ChatGPT Logs.

OpenAI’s filing today said the court order “did not acknowledge OpenAI’s sworn witness declaration explaining that the de-identification process is not intended to remove information that is non-identifying but may nonetheless be private, like a Washington Post reporter’s hypothetical use of ChatGPT to assist in the preparation of a news article.”

Chats stored under legal hold

The 20 million chats consist of a random sampling of ChatGPT conversations from December 2022 to November 2024 and do not include chats of business customers, OpenAI said in the message on its website.

“We presented several privacy-preserving options to The Times, including targeted searches over the sample (e.g., to search for chats that might include text from a New York Times article so they only receive the conversations relevant to their claims), as well as high-level data classifying how ChatGPT was used in the sample. These were rejected by The Times,” OpenAI said.

The chats are stored in a secure system that is “protected under legal hold, meaning it can’t be accessed or used for purposes other than meeting legal obligations,” OpenAI said. The NYT “would be legally obligated at this time to not make any data public outside the court process,” and OpenAI said it will fight any attempts to make the user conversations public.

An NYT filing on October 30 accused OpenAI of defying prior agreements “by refusing to produce even a small sample of the billions of model outputs that its conduct has put in issue in this case.” The filing continued:

Immediate production of the output log sample is essential to stay on track for the February 26, 2026, discovery deadline. OpenAI’s proposal to run searches on this small subset of its model outputs on Plaintiffs’ behalf is as inefficient as it is inadequate to allow Plaintiffs to fairly analyze how “real world” users interact with a core product at the center of this litigation. Plaintiffs cannot reasonably conduct expert analyses about how OpenAI’s models function in its core consumer-facing product, how retrieval augmented generation (“RAG”) functions to deliver news content, how consumers interact with that product, and the frequency of hallucinations without access to the model outputs themselves.

OpenAI said the NYT’s discovery requests were initially limited to logs “related to Times content” and that it has “been working to satisfy those requests by sampling conversation logs. Towards the end of that process, News Plaintiffs filed a motion with a new demand: that instead of finding and producing logs that are ‘related to Times content,’ OpenAI should hand over the entire 20 million-log sample ‘via hard drive.’”

OpenAI disputes judge’s reasoning

The November 7 order cited a California case, Concord Music Group, Inc. v. Anthropic PBC, in which US Magistrate Judge Susan van Keulen ordered the production of 5 million records. OpenAI has consistently relied on van Keulen’s use of a sample-size formula “in support of its previous proposed methodology for conversation data sampling, but fails to explain why Judge [van] Keulen’s subsequent order directing production of the entire 5 million-record sample to the plaintiff in that case is not similarly instructive here,” Wang wrote.

OpenAI’s filing today said the company was never given an opportunity to explain why Concord shouldn’t apply in this case because the news plaintiffs did not reference it in their motion.

“The cited Concord order was not about whether wholesale production of the sample was appropriate; it was about the mechanism through which Anthropic would effectuate an already agreed-upon production,” OpenAI wrote. “Nothing about that order suggests that Judge van Keulen would have ordered wholesale production had Anthropic raised the privacy concerns that OpenAI has raised throughout this case.”

The Concord logs were just prompt-output pairs, “i.e., a single user prompt followed by a single model output,” OpenAI wrote. “The logs at issue here are complete conversations: each log in the 20 million sample represents a complete exchange of multiple prompt-output pairs between a user and ChatGPT.” That could result in “up to 80 million prompt-output pairs,” OpenAI said.

We contacted The New York Times about OpenAI’s filing and will update this article if it provides any comment.

Jon is a Senior IT Reporter for Ars Technica. He covers the telecom industry, Federal Communications Commission rulemakings, broadband consumer affairs, court cases, and government regulation of the tech industry.

US states could lose $21 billion of broadband grants after Trump overhaul

The law creating the Broadband Equity, Access, and Deployment (BEAD) program is clear that the money can be used for more than sending subsidies to Internet service providers. The law says BEAD money can be allocated for connecting eligible community anchor institutions; data collection, broadband mapping, and planning; installing Internet and Wi-Fi infrastructure or providing reduced-cost broadband to multi-family buildings; and providing affordable Internet-capable devices.

The current law also says that if a state fails to use its full allocation, the National Telecommunications and Information Administration (NTIA) “shall reallocate the unused amounts to other eligible entities with approved final proposals.” The law gives the NTIA chief latitude to spend the money for “any use determined necessary… to facilitate the goals of the Program.”

Arielle Roth, who has overseen the BEAD overhaul in her role as head of the NTIA, has said she’s open to sending the remaining funds to states. Roth said in an October 28 speech that the NTIA is “considering how states can use some of the BEAD savings—what has commonly been referred to as nondeployment money—on key outcomes like permitting reform” but added that “no final decisions have been made.” The bill from Sen. Joni Ernst (R-Iowa) would take that decision out of the NTIA’s hands.

States still waiting after Biden plans thrown out

After Congress created BEAD, the Biden administration spent about three years developing rules and procedures for the program and then evaluating plans submitted by each US state and territory. The process included developing new maps that, while error-prone due to false submissions by ISPs, provided a more accurate view of broadband coverage gaps than was previously available.

By November 2024, the Biden administration had approved initial funding plans submitted by every state and territory. But the Trump administration rewrote the program rules, eliminating a preference for fiber and demanding lower-cost deployments.

States that could have started construction in summer 2025 had to draft new plans and keep waiting for the grant money. The Trump administration is also telling states that they must exempt ISPs from net neutrality and price laws in order to obtain grant funding.

As for when the long-delayed grants will be distributed, Roth said the NTIA is “on track to approve the majority of state plans and get money out the door this year.”

You won’t believe the excuses lawyers have after getting busted for using AI


I got hacked; I lost my login; it was a rough draft; toggling windows is hard.

Credit: Aurich Lawson | Getty Images

Amid what one judge called an “epidemic” of fake AI-generated case citations bogging down courts, some common excuses are emerging from lawyers hoping to dodge the most severe sanctions for filings deemed misleading.

Using a database compiled by French lawyer and AI researcher Damien Charlotin, Ars reviewed 23 cases where lawyers were sanctioned for AI hallucinations. In many, judges noted that the simplest path to avoid or diminish sanctions was to admit the AI use as soon as it was detected, act humble, self-report the error to relevant legal associations, and voluntarily take classes on AI and law. But not every lawyer takes the path of least resistance, Ars’ review found, with many instead offering excuses that no judge found credible. Some even lied about their AI use, judges concluded.

Since 2023—when fake AI citations started being publicized—the most popular excuse has been that the lawyer didn’t know AI was used to draft a filing.

Sometimes that means arguing that you didn’t realize you were using AI, as in the case of a California lawyer who got stung by Google’s AI Overviews, which he claimed he took for typical Google search results. Most often, lawyers using this excuse tend to blame an underling, but clients have been blamed, too. A Texas lawyer was sanctioned this month after deflecting so much that the court eventually had to put his client on the stand, once he revealed she had played a significant role in drafting the aberrant filing.

“Is your client an attorney?” the court asked.

“No, not at all your Honor, just was essentially helping me with the theories of the case,” the lawyer said.

Another popular dodge comes from lawyers who feign ignorance that chatbots are prone to hallucinating facts.

Recent cases suggest this excuse may be mutating into variants. Last month, a sanctioned Oklahoma lawyer admitted that he didn’t expect ChatGPT to add new citations when all he asked the bot to do was “make his writing more persuasive.” And in September, a California lawyer got in a similar bind—and was sanctioned a whopping $10,000, a fine the judge called “conservative.” That lawyer had asked ChatGPT to “enhance” his briefs, “then ran the ‘enhanced’ briefs through other AI platforms to check for errors,” neglecting to ever read the “enhanced” briefs.

Neither of those tired old excuses holds much weight today, especially in courts that have drawn up guidance to address AI hallucinations. But rather than quickly acknowledge their missteps, as courts are begging lawyers to do, several lawyers appear to have gotten desperate. Ars found a bunch blaming common tech issues for the fake citations in their filings.

When in doubt, blame hackers?

For an extreme case, look to a New York City civil court, where a lawyer, Innocent Chinweze, first admitted to using Microsoft Copilot to draft an errant filing, then bizarrely pivoted to claim that the AI citations were due to malware found on his computer.

Chinweze said he had created a draft with correct citations but then got hacked, allowing bad actors “unauthorized remote access” to supposedly add the errors in his filing.

The judge was skeptical, describing the excuse as an “incredible and unsupported statement,” particularly since there was no evidence of the prior draft existing. Instead, Chinweze asked to bring in an expert to testify that the hack had occurred, requesting to end the proceedings on sanctions until after the court weighed the expert’s analysis.

The judge, Kimon C. Thermos, didn’t have to weigh this argument, however, because after the court broke for lunch, the lawyer once again “dramatically” changed his position.

“He no longer wished to adjourn for an expert to testify regarding malware or unauthorized access to his computer,” Thermos wrote in an order issuing sanctions. “He retreated” to “his original position that he used Copilot to aid in his research and didn’t realize that it could generate fake cases.”

Possibly more galling to Thermos than the lawyer’s weird malware argument, though, was a document that Chinweze filed on the day of his sanctions hearing. That document included multiple summaries preceded by this text, the judge noted:

Some case metadata and case summaries were written with the help of AI, which can produce inaccuracies. You should read the full case before relying on it for legal research purposes.

Thermos admonished Chinweze for continuing to use AI recklessly. He blasted the filing as “an incoherent document that is eighty-eight pages long, has no structure, contains the full text of most of the cases cited,” and “shows distinct indications that parts of the discussion/analysis of the cited cases were written by artificial intelligence.”

Ultimately, Thermos ordered Chinweze to pay $1,000, the most typical fine lawyers received in the cases Ars reviewed. The judge then took an extra non-monetary step to sanction Chinweze, referring the lawyer to a grievance committee, “given that his misconduct was substantial and seriously implicated his honesty, trustworthiness, and fitness to practice law.”

Ars could not immediately reach Chinweze for comment.

Toggling windows on a laptop is hard

In Alabama, an attorney named James A. Johnson made an “embarrassing mistake,” he said, primarily because toggling windows on a laptop is hard, US District Judge Terry F. Moorer noted in an October order on sanctions.

Johnson explained that he had accidentally used an AI tool that he didn’t realize could hallucinate. It happened while he was “at an out-of-state hospital attending to the care of a family member recovering from surgery.” He rushed to draft the filing, he said, because he got a notice that his client’s conference had suddenly been “moved up on the court’s schedule.”

“Under time pressure and difficult personal circumstance,” Johnson explained, he decided against using Fastcase, a research tool provided by the Alabama State Bar, to research the filing. Working on his laptop, he opted instead to use “a Microsoft Word plug-in called Ghostwriter Legal” because “it appeared automatically in the sidebar of Word while Fastcase required opening a separate browser to access through the Alabama State Bar website.”

To Johnson, it felt “tedious to toggle back and forth between programs on [his] laptop with the touchpad,” and that meant he “unfortunately fell victim to the allure of a new program that was open and available.”

Moorer seemed unimpressed by Johnson’s claim that he understood tools like ChatGPT were unreliable but didn’t expect the same from other AI legal tools—particularly since “information from Ghostwriter Legal made it clear that it used ChatGPT as its default AI program,” Moorer wrote.

The lawyer’s client was similarly horrified, deciding to drop Johnson on the spot, even though that risked “a significant delay of trial.” Moorer noted that Johnson seemed shaken by his client’s abrupt decision, evidenced by “his look of shock, dismay, and display of emotion.”

Moorer further noted that Johnson had been paid using public funds while seemingly letting AI do his homework. “The harm is not inconsequential as public funds for appointed counsel are not a bottomless well and are limited resource,” the judge wrote in justifying a more severe fine.

“It has become clear that basic reprimands and small fines are not sufficient to deter this type of misconduct because if it were, we would not be here,” Moorer concluded.

Ruling that Johnson’s reliance on AI was “tantamount to bad faith,” Moorer imposed a $5,000 fine. The judge also would have “considered potential disqualification, but that was rendered moot” since Johnson’s client had already dismissed him.

Asked for comment, Johnson told Ars that “the court made plainly erroneous findings of fact and the sanctions are on appeal.”

Plagued by login issues

As a lawyer in Georgia tells it, sometimes fake AI citations may be filed because a lawyer accidentally filed a rough draft instead of the final version.

Other lawyers claim they turn to AI as needed when they have trouble accessing legal tools like Westlaw or LexisNexis.

For example, in Iowa, a lawyer told an appeals court that she regretted relying on “secondary AI-driven research tools” after experiencing “login issues with her Westlaw subscription.” Although the court was “sympathetic to issues with technology, such as login issues,” the lawyer was sanctioned, primarily because she only admitted to using AI after the court ordered her to explain her mistakes. In her case, however, she got to choose between paying a minimal $150 fine or attending “two hours of legal ethics training particular to AI.”

Less sympathetic was a lawyer who got caught lying about the AI tool she blamed for inaccuracies, a Louisiana case suggested. In that case, a judge demanded to see the research history after a lawyer claimed that AI hallucinations came from “using Westlaw Precision, an AI-assisted research tool, rather than Westlaw’s standalone legal database.”

It turned out that the lawyer had outsourced the research, relying on a “currently suspended” lawyer’s AI citations, and had only “assumed” the lawyer’s mistakes were from Westlaw’s AI tool. It’s unclear what tool was actually used by the suspended lawyer, who likely lost access to a Westlaw login, but the judge ordered a $1,000 penalty after the lawyer who signed the filing “agreed that Westlaw did not generate the fabricated citations.”

Judge warned of “serial hallucinators”

Another lawyer, William T. Panichi in Illinois, has been sanctioned at least three times, Ars’ review found.

In response to his initial penalties ordered in July, he admitted to being tempted by AI while he was “between research software.”

In that case, the court was frustrated to find that the lawyer had contradicted himself, and it ordered more severe sanctions as a result.

Panichi “simultaneously admitted to using AI to generate the briefs, not doing any of his own independent research, and even that he ‘barely did any personal work [him]self on this appeal,’” the court order said, while also defending charging a higher fee—supposedly because this case “was out of the ordinary in terms of time spent” and his office “did some exceptional work” getting information.

The court deemed this AI misuse so bad that Panichi was ordered to disgorge a “payment of $6,925.62 that he received” in addition to a $1,000 penalty.

“If I’m lucky enough to be able to continue practicing before the appellate court, I’m not going to do it again,” Panichi told the court in July, just before getting hit with two more rounds of sanctions in August.

Panichi did not immediately respond to Ars’ request for comment.

When AI-generated hallucinations are found, penalties are often paid to the court, the other parties’ lawyers, or both, depending on whose time and resources were wasted fact-checking fake cases.

Lawyers seem more likely to argue against paying sanctions to the other parties’ attorneys, hoping to keep sanctions as low as possible. One lawyer even argued that “it only takes 7.6 seconds, not hours, to type citations into LexisNexis or Westlaw,” while seemingly neglecting the fact that she did not take those precious seconds to check her own citations.

The judge in the case, Nancy Miller, was clear that “such statements display an astounding lack of awareness of counsel’s obligations,” noting that “the responsibility for correcting erroneous and fake citations never shifts to opposing counsel or the court, even if they are the first to notice the errors.”

“The duty to mitigate the harms caused by such errors remains with the signor,” Miller said. “The sooner such errors are properly corrected, either by withdrawing or amending and supplementing the offending pleadings, the less time is wasted by everyone involved, and fewer costs are incurred.”

Texas US District Judge Marina Garcia Marmolejo agreed, explaining that even more time is wasted determining how other judges have responded to fake AI-generated citations.

“At one of the busiest court dockets in the nation, there are scant resources to spare ferreting out erroneous AI citations in the first place, let alone surveying the burgeoning caselaw on this subject,” she said.

At least one Florida court was “shocked, shocked” to find that a lawyer was refusing to pay what the other party’s attorneys said they were owed after misusing AI. The lawyer in that case, James Martin Paul, asked to pay less than a quarter of the fees and costs owed, arguing that Charlotin’s database showed he might otherwise owe penalties that “would be the largest sanctions paid out for the use of AI generative case law to date.”

But caving to Paul’s arguments “would only benefit serial hallucinators,” the Florida court found. Ultimately, Paul was sanctioned more than $85,000 for what the court said was “far more egregious” conduct than other offenders in the database, chastising him for “repeated, abusive, bad-faith conduct that cannot be recognized as legitimate legal practice and must be deterred.”

Paul did not immediately respond to Ars’ request to comment.

Michael B. Slade, a US bankruptcy judge in Illinois, seems to be done weighing excuses, calling on all lawyers to stop taking AI shortcuts that are burdening courts.

“At this point, to be blunt, any lawyer unaware that using generative AI platforms to do legal research is playing with fire is living in a cloud,” Slade wrote.

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

Elon Musk wins $1 trillion Tesla pay vote despite “part-time CEO” criticism

Tesla shareholders today voted to approve a compensation plan that would pay Elon Musk more than $1 trillion over the next decade if he hits all of the plan’s goals. Musk won over 75 percent of the vote, according to the announcement at today’s shareholder meeting.

The pay plan would give Musk 423,743,904 shares, awarded in 12 tranches of 35,311,992 shares each if Tesla achieves various operational goals and market value milestones. Goals include delivering 20 million vehicles, obtaining 10 million Full Self-Driving subscriptions, delivering 1 million “AI robots,” putting 1 million robotaxis in operation, and achieving a $400 billion adjusted EBITDA (earnings before interest, taxes, depreciation, and amortization).
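As a quick arithmetic check (simple multiplication of the figures above, not a separate detail from Tesla’s filing), the 12 tranches account for the full award: 12 × 35,311,992 shares = 423,743,904 shares.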

Musk has threatened to leave if he doesn’t get a larger share of Tesla. He told investors last month, “It’s not like I’m going to go spend the money. It’s just, if we build this robot army, do I have at least a strong influence over that robot army? Not control, but a strong influence. That’s what it comes down to in a nutshell. I don’t feel comfortable building that robot army if I don’t have at least a strong influence.”

The plan has 12 market capitalization milestones topping out at $8.5 trillion. The value of Musk’s award is estimated to exceed $1 trillion if he hits all operational and market capitalization goals. Musk would increase his ownership stake to 24.8 percent of Tesla, or 28.8 percent if Tesla ends up winning an appeal in the court case that voided his 2018 pay plan.

Tesla Chair Robyn Denholm has argued that Musk needs big pay packages to stay motivated. Some investors have said $1 trillion is too much for a CEO who spends much of his time running other companies such as SpaceX, X (formerly Twitter), and xAI.

New York Comptroller Thomas DiNapoli, who runs a state retirement fund that owns over 3.3 million shares, slammed the pay plan in a webinar last week. He said that Musk’s existing stake in Tesla should already “be incentive enough to drive performance. The idea that another massive equity award will somehow refocus a man who is hopelessly distracted is both illogical and contrary to the evidence. This is not pay for performance; this is pay for unchecked power.”

Musk and his side hustles

With Musk spending more time at xAI, “some major Tesla investors have privately pressed top executives and board members about how much attention Musk was actually paying to the company and about whether there is a CEO succession plan,” a Wall Street Journal article on Tuesday said. “An unusually large contingent of Tesla board members, including chair Robyn Denholm, former Chipotle CFO Jack Hartung, and Tesla co-founder JB Straubel, met with big investors in New York last week to advocate for Musk’s proposed new pay package.”

AT&T falsely promised “everyone” a free iPhone, ad-industry board rules

“Focusing on the words ‘everyone gets,’ Verizon argued to NAD that the challenged advertising communicated an explicit message—that all AT&T subscribers are eligible for the trade-in offer—which it asserts was literally false because only subscribers to ‘qualifying’ AT&T plans are eligible. Verizon also argued that the advertisement communicated a comparable misleading message that all AT&T customers were eligible for the trade-in,” the NARB decision said.

While AT&T disclosed the offer limits, Verizon argued that the disclosure was not clear and conspicuous. Verizon said—and the NAD agreed—that the phrase “everyone gets” suggests everyone will get a free phone, not that everyone “can get” a free phone if they subscribe to AT&T’s more expensive plans.

AT&T claimed the ad was literally true because it did not say that everyone “will” get the free phone. “Rather, according to the advertiser, the challenged language communicates that all customers, current or new, can qualify for the offer and urges customers to ‘learn’ the details about the trade-in opportunity,” the NARB said.

AT&T argued that the word “learn” makes it clear there are limits on the offer. The NAD disagreed, saying that the “learn how” phrase “precedes the word ‘everyone,’ suggesting everyone is eligible to receive a phone, not that everyone can learn how to get a phone.”

AT&T also submitted the results of a customer survey, arguing that it proved customers seeing the ad understood the offer’s limitations. The NAD decided that the survey was methodologically unsound, while the NARB said that both AT&T and Verizon offered “plausible” interpretations of the results.

Panel: Buyers of low-cost plans likely duped

After hearing AT&T’s and Verizon’s arguments, the NARB panel decided “that the challenged advertising, on its face, conveys a false message and further does not clarify the message by disclosing a material limitation to the offer of a free cell phone in a clear and conspicuous manner.”

Bombshell report exposes how Meta relied on scam ad profits to fund AI


“High risk” versus “high value”

Meta goosed its revenue by targeting users likely to click on scam ads, docs show.

Internal documents have revealed that Meta projected it earns billions from ignoring scam ads, which its platforms then targeted to the users most likely to click on them.

In a lengthy report, Reuters exposed five years of Meta practices and failures that allowed scammers to take advantage of users of Facebook, Instagram, and WhatsApp.

Documents showed that internally, Meta was hesitant to abruptly remove accounts, even those considered some of the “scammiest scammers,” out of concern that a drop in revenue could diminish resources needed for artificial intelligence growth.

Instead of promptly removing bad actors, Meta allowed “high value accounts” to “accrue more than 500 strikes without Meta shutting them down,” Reuters reported. The more strikes a bad actor accrued, the more Meta could charge to run ads, as Meta’s documents showed the company “penalized” scammers by charging higher ad rates. Meanwhile, Meta acknowledged in documents that its systems helped scammers target users most likely to click on their ads.

“Users who click on scam ads are likely to see more of them because of Meta’s ad-personalization system, which tries to deliver ads based on a user’s interests,” Reuters reported.

Internally, Meta estimates that users across its apps in total encounter 15 billion “high risk” scam ads a day. That’s on top of 22 billion organic scam attempts that Meta users are exposed to daily, a 2024 document showed. Last year, the company projected that about $16 billion, which represents about 10 percent of its revenue, would come from scam ads.

“High risk” scam ads strive to sell users on fake products or investment schemes, Reuters noted. Common scams in this category include ads selling banned medical products or promoting sketchy entities, such as links to illegal online casinos. Meta is most concerned, however, about “imposter” ads, which impersonate celebrities or big brands, the kinds of partners Meta fears may halt advertising or engagement on its apps if such scams aren’t quickly stopped.

“Hey it’s me,” one scam advertisement using Elon Musk’s photo read. “I have a gift for you text me.” Another using Donald Trump’s photo claimed the US president was offering $710 to every American as “tariff relief.” Perhaps most depressingly, a third posed as a real law firm, offering advice on how to avoid falling victim to online scams.

Meta removed these particular ads after Reuters flagged them, but in 2024, Meta earned about $7 billion from “high risk” ads like these alone, Reuters reported.

Sandeep Abraham, a former Meta safety investigator who now runs consultancy firm Risky Business Solutions as a fraud examiner, told Reuters that regulators should intervene.

“If regulators wouldn’t tolerate banks profiting from fraud, they shouldn’t tolerate it in tech,” Abraham said.

Meta won’t disclose how much it made off scam ads

Meta spokesperson Andy Stone told Reuters that its collection of documents—which were created between 2021 and 2025 by Meta’s finance, lobbying, engineering, and safety divisions—“present a selective view that distorts Meta’s approach to fraud and scams.”

Stone claimed that Meta’s estimate that it would earn 10 percent of its 2024 revenue from scam ads was “rough and overly-inclusive.” He suggested the actual amount Meta earned was much lower but declined to specify the true amount. He also said that Meta’s most recent investor disclosures note that scam ads “adversely affect” Meta’s revenue.

“We aggressively fight fraud and scams because people on our platforms don’t want this content, legitimate advertisers don’t want it, and we don’t want it either,” Stone said.

Despite those efforts, this spring, Meta’s safety team “estimated that the company’s platforms were involved in a third of all successful scams in the US,” Reuters reported. In other internal documents around the same time, Meta staff concluded that “it is easier to advertise scams on Meta platforms than Google,” acknowledging that Meta’s rivals were better at “weeding out fraud.”

As Meta tells it, these documents, though seemingly dismal, came amid vast improvements in its fraud protections. “Over the past 18 months, we have reduced user reports of scam ads globally by 58 percent and, so far in 2025, we’ve removed more than 134 million pieces of scam ad content,” Stone told Reuters.

According to Reuters, the problem may be the pace Meta sets in combating scammers. In 2023, Meta laid off “everyone who worked on the team handling advertiser concerns about brand-rights issues,” then ordered safety staffers to limit their use of computing resources so that more could be devoted to virtual reality and AI. A 2024 document showed Meta recommended a “moderate” approach to enforcement, planning to reduce revenue “attributable to scams, illegal gambling and prohibited goods” by 1–3 percentage points each year starting in 2024, a pace that would supposedly cut it in half by 2027. More recently, a 2025 document showed Meta continues to weigh how “abrupt reductions of scam advertising revenue could affect its business projections.”

Eventually, Meta “substantially expanded” its teams that track scam ads, Stone told Reuters. But Meta also took steps to ensure that its revenue didn’t take too hard a hit at a time when the company needed vast resources—$72 billion—to invest in AI, Reuters reported.

For example, in February, Meta told “the team responsible for vetting questionable advertisers” that they weren’t “allowed to take actions that could cost Meta more than 0.15 percent of the company’s total revenue,” Reuters reported. That cap works out to about $135 million in revenue from any single scam account, Reuters noted. Stone pushed back, saying that the team was never given “a hard limit” on what the manager described as “specific revenue guardrails.”

“Let’s be cautious,” the team’s manager wrote, warning that Meta didn’t want to lose revenue by blocking “benign” ads mistakenly swept up in enforcement.

Meta should donate scam ad profits, ex-exec says

Documents showed that Meta prioritized taking action when it risked regulatory fines, although revenue from scam ads was worth roughly three times the highest fines it could face. Meta may have most feared that officials would require disgorgement of ill-gotten gains rather than imposing fines.

Meta appeared less likely to ramp up enforcement in response to police requests. Documents showed that police in Singapore flagged “146 examples of scams targeting that country’s users last fall,” Reuters reported. Only 23 percent violated Meta’s policies, while the rest only “violate the spirit of the policy, but not the letter,” a Meta presentation said.

The scams Meta failed to flag included crypto schemes, fake concert tickets, and deals “too good to be true,” like 80 percent off a desirable item from a high-fashion brand. Meta also looked past fake job ads that claimed to be hiring for Big Tech companies.

Rob Leathern previously led Meta’s business integrity unit that worked to prevent scam ads but left in 2020. He told Wired that it’s hard to “know how bad it’s gotten or what the current state is” since Meta and other social media platforms don’t provide outside researchers access to large random samples of ads.

With such access, researchers like Leathern and Rob Goldman, Meta’s former vice president of ads, could provide “scorecards” showing how well different platforms work to combat scams. Together, Leathern and Goldman launched a nonprofit called CollectiveMetrics.org in hopes of “bringing more transparency to digital advertising in order to fight deceptive ads,” Wired reported.

“I want there to be more transparency. I want third parties, researchers, academics, nonprofits, whoever, to be able to actually assess how good of a job these platforms are doing at stopping scams and fraud,” Leathern told Wired. “We’d like to move to actual measurement of the problem and help foster an understanding.”

Another meaningful step that Leathern thinks companies like Meta should take to protect users would be to notify users when Meta discovers that they clicked on a scam ad—rather than targeting them with more scam ads, as Reuters suggested was Meta’s practice.

“These scammers aren’t getting people’s money on day one, typically. So there’s a window to take action,” he said, recommending that platforms donate ill-gotten gains from running scam ads to “fund nonprofits to educate people about how to recognize these kinds of scams or problems.”

“There’s lots that could be done with funds that come from these bad guys,” Leathern said.

FBI orders domain registrar to reveal who runs mysterious Archive.is site

FBI wants detailed records

While copyright infringement would be a likely area of investigation for the FBI with Archive.today, the subpoena doesn’t provide specific information on the probe. The subpoena seeks the Archive.today customer or subscriber name, addresses, length of service, records of phone calls or texts, payment information, records of session times and duration of Internet connectivity, mobile device identification codes, IP addresses or other numbers used to identify the subscriber, and the types of services provided.

In contrast with the nonprofit Internet Archive, the operator or operators of Archive.today have remained mysterious. It has used various domains (archive.ph, archive.is, etc.), and its registrant “Denis Petrov” may be an alias.

An FAQ that apparently hasn’t been updated in over a decade says that Archive.today, which was started in 2012, uses data centers in Europe and is “privately funded.” It also accepts donations. There are several indications that the founder is from Russia.

While the Internet Archive uses a system to automatically crawl the Internet, Archive.today relies on users to paste in URLs in order to archive their content. News articles published by major media outlets are often saved in full on the site, giving other users a way to read articles that are blocked by a paywall.

Archive.today doesn’t publicize a way for copyright owners to seek removal of content, whereas the Internet Archive has a policy for removing pages when it is made aware of content that infringes a copyright.

US publishers have been fighting web services designed to bypass paywalls. In July, the News/Media Alliance said it secured the takedown of paywall-bypass website 12ft.io. “Following the News/Media Alliance’s efforts, the webhost promptly locked 12ft.io on Monday, July 14th,” the group said. (Ars Technica owner Condé Nast is a member of the alliance.)

Oddest ChatGPT leaks yet: Cringey chat logs found in Google analytics tool


ChatGPT leaks seem to confirm OpenAI scrapes Google, expert says.

Credit: Aurich Lawson | Getty Images

For months, extremely personal and sensitive ChatGPT conversations have been leaking into an unexpected destination: Google Search Console (GSC), a tool that developers typically use to monitor search traffic, not to lurk on private chats.

Normally, when site managers access GSC performance reports, they see queries based on keywords or short phrases that Internet users type into Google to find relevant content. But starting this September, odd queries, sometimes more than 300 characters long, could also be found in GSC. Showing only user inputs, the chats appeared to be from unwitting people prompting a chatbot to help solve relationship or business problems, who likely expected those conversations would remain private.

Jason Packer, owner of an analytics consulting firm called Quantable, was among the first to flag the issue in a detailed blog last month.

Determined to figure out what exactly was causing the leaks, he teamed up with “Internet sleuth” and web optimization consultant Slobodan Manić. Together, they conducted testing that they believe may have surfaced “the first definitive proof that OpenAI directly scrapes Google Search with actual user prompts.” Their investigation seemed to confirm the AI giant was compromising user privacy, in some cases in order to maintain engagement by seizing search data that Google otherwise wouldn’t share.

OpenAI declined Ars’ request to confirm whether the theory Packer and Manić posed in their blog was correct or to answer any of their remaining questions, which could help users determine the scope of the problem.

However, an OpenAI spokesperson confirmed that the company was “aware” of the issue and has since “resolved” a glitch “that temporarily affected how a small number of search queries were routed.”

Packer told Ars that he’s “very pleased that OpenAI was able to resolve the issue quickly.” But he suggested that OpenAI’s response failed to confirm whether or not OpenAI was scraping Google, which leaves room for doubt about whether the issue was completely resolved.

Google declined to comment.

“Weirder” than prior ChatGPT leaks

The first odd ChatGPT query to appear in GSC that Packer reviewed was a wacky stream-of-consciousness from a likely female user asking ChatGPT to assess certain behaviors to help her figure out if a boy who teases her had feelings for her. Another odd query seemed to come from an office manager sharing business information while plotting a return-to-office announcement.

These were just two of 200 odd queries—including “some pretty crazy ones,” Packer told Ars—that he reviewed on one site alone. In his blog, Packer concluded that the queries should serve as “a reminder that prompts aren’t as private as you think they are!”

Packer suspected that these queries were connected to reporting from The Information in August that cited sources claiming OpenAI was scraping Google search results to power ChatGPT responses. Sources claimed that OpenAI was leaning on Google to answer prompts to ChatGPT seeking information about current events, like news or sports.

OpenAI has not confirmed that it’s scraping Google search engine results pages (SERPs). However, Packer thinks his testing of ChatGPT leaks may be evidence that OpenAI not only scrapes “SERPs in general to acquire data,” but also sends user prompts to Google Search.

Manić helped Packer solve a big part of the riddle. He found that the odd queries were turning up in one site’s GSC because it ranked highly in Google Search for “https://openai.com/index/chatgpt/”—a ChatGPT URL that was appended at the start of every strange query turning up in GSC.

It seemed that Google had tokenized the URL, breaking it up into a search for keywords “openai + index + chatgpt.” Sites using GSC that ranked highly for those keywords were therefore likely to encounter ChatGPT leaks, Packer and Manić proposed, including sites that covered prior ChatGPT leaks where chats were being indexed in Google search results. Using their recommendations to seek out queries in GSC, Ars was able to verify similar strings.
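For site owners who want to run the same kind of lookup on their own property, here is a minimal, hypothetical sketch of what that check could look like in Python. The export file name and the “query” column are assumptions rather than details from Packer and Manić’s write-up; the URL prefix, the tokenized keywords, and the 300-plus-character query length come from the reporting above.

```python
import csv

# Hypothetical CSV export of a site's Search Console performance queries.
EXPORT_FILE = "gsc_queries.csv"

# The ChatGPT URL found prepended to leaked prompts, plus the keywords
# Google appears to have tokenized it into ("openai + index + chatgpt").
LEAKED_PREFIX = "https://openai.com/index/chatgpt/"
TOKENS = {"openai", "index", "chatgpt"}

def looks_like_leaked_prompt(query: str) -> bool:
    """Flag queries that resemble the leaked ChatGPT prompts described above."""
    q = query.lower()
    words = set(q.replace("/", " ").replace(".", " ").split())
    return (
        LEAKED_PREFIX in q          # literal URL prefix
        or TOKENS.issubset(words)   # all three tokenized keywords present
        or len(q) > 300             # far longer than a typical search term
    )

with open(EXPORT_FILE, newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        query = row.get("query", "")
        if looks_like_leaked_prompt(query):
            print(query[:120])  # preview the first 120 characters
```

Nothing here requires special API access; the same filters can be applied to whatever query report a site owner already exports from GSC.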

“Don’t get confused though, this is a new and completely different ChatGPT screw-up than having Google index stuff we don’t want them to,” Packer wrote. “Weirder, if not as serious.”

It’s unclear what exactly OpenAI fixed, but Packer and Manić have a theory about one possible path for leaking chats. Visiting the URL that starts every strange query found in GSC, ChatGPT users encountered a prompt box that seemed buggy, causing “the URL of that page to be added to the prompt.” The issue, they explained, seemed to be that:

Normally ChatGPT 5 will choose to do a web search whenever it thinks it needs to, and is more likely to do that with an esoteric or recency-requiring search. But this bugged prompt box also contains the query parameter ‘hints=search’ to cause it to basically always do a search: https://chatgpt.com/?hints=search&openaicom_referred=true&model=gpt-5

Clearly some of those searches relied on Google, Packer’s blog said, mistakenly sending to GSC “whatever” the user says in the prompt box, with “https://openai.com/index/chatgpt/” text added to the front of it. As Packer explained, “we know it must have scraped those rather than using an API or some kind of private connection—because those other options don’t show inside GSC.”

This means “that OpenAI is sharing any prompt that requires a Google Search with both Google and whoever is doing their scraping,” Packer alleged. “And then also with whoever’s site shows up in the search results! Yikes.”
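For readers who want to see those parameters spelled out, the prompt-box URL quoted above can be picked apart with a few lines of standard-library Python. This is purely illustrative; it does not contact ChatGPT or Google.

```python
from urllib.parse import urlparse, parse_qs

# The bugged prompt-box URL quoted in Packer and Manić's write-up.
url = "https://chatgpt.com/?hints=search&openaicom_referred=true&model=gpt-5"

# parse_qs maps each query parameter to a list of its values.
params = parse_qs(urlparse(url).query)
print(params)
# {'hints': ['search'], 'openaicom_referred': ['true'], 'model': ['gpt-5']}
```

The “hints=search” entry is the parameter Packer and Manić say pushes ChatGPT to run a web search for nearly every prompt entered on that page.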

To Packer, it appeared that “ALL ChatGPT prompts” that used Google Search risked being leaked during the past two months.

OpenAI claimed only a small number of queries were leaked but declined to provide a more precise estimate. So, it remains unclear how many of the 700 million people who use ChatGPT each week had prompts routed to GSC.

OpenAI’s response leaves users with “lingering questions”

After ChatGPT prompts were found surfacing in Google’s search index in August, OpenAI clarified that users had clicked a box making those prompts public, which OpenAI defended as “sufficiently clear.” The AI firm later scrambled to remove the chats from Google’s SERPs after it became obvious that users felt misled into sharing private chats publicly.

Packer told Ars that a major difference between those leaks and the GSC leaks is that users harmed by the prior scandal, at least on some level, “had to actively share” their leaked chats. In the more recent case, “nobody clicked share” or had a reasonable way to prevent their chats from being exposed.

“Did OpenAI go so fast that they didn’t consider the privacy implications of this, or did they just not care?” Packer posited in his blog.

Perhaps most troubling to some users—whose identities are not linked to the chats unless their prompts happen to include identifying information—is that, unlike in the prior scandal, there does not seem to be any way to remove the leaked chats from GSC.

Packer and Manić are left with “lingering questions” about how far OpenAI’s fix will go to stop the issue.

Manić was hoping OpenAI might confirm if prompts entered on https://chatgpt.com that trigger Google Search were also affected. But OpenAI did not follow up on that question, or a broader question about how big the leak was. To Manić, a major concern was that OpenAI’s scraping may be “contributing to ‘crocodile mouth’ in Google Search Console,” a troubling trend SEO researchers have flagged that causes impressions to spike but clicks to dip.

OpenAI also declined to clarify Packer’s biggest question. He’s left wondering if the company’s “fix” simply ended OpenAI’s “routing of search queries, such that raw prompts are no longer being sent to Google Search, or are they no longer scraping Google Search at all for data?”

“We still don’t know if it’s that one particular page that has this bug or whether this is really widespread,” Packer told Ars. “In either case, it’s serious and just sort of shows how little regard OpenAI has for moving carefully when it comes to privacy.”

Mark Zuckerberg’s illegal school drove his neighbors crazy


Neighbors complained about noise, security guards, and hordes of traffic.

An entrance to Mark Zuckerberg’s compound in Palo Alto, California. Credit: Loren Elliott/Redux

The Crescent Park neighborhood of Palo Alto, California, has some of the best real estate in the country, with a charming hodgepodge of homes ranging in style from Tudor revival to modern farmhouse and contemporary Mediterranean. It also has a gigantic compound that is home to Mark Zuckerberg, his wife Priscilla Chan, and their daughters Maxima, August, and Aurelia. Their land has expanded to include 11 previously separate properties, five of which are connected by at least one property line.

The Zuckerberg compound’s expansion became a concern for Crescent Park neighbors as early as 2016, amid fears that Zuckerberg’s purchases were driving up the market. Then, about five years later, neighbors noticed that a school appeared to be operating out of the compound, which would be illegal under the area’s residential zoning code without a permit. They began a crusade to shut it down that did not end until summer 2025.

WIRED obtained 1,665 pages of documents about the neighborhood dispute—including 311 records, legal filings, construction plans, and emails—through a public record request filed to the Palo Alto Department of Planning and Development Services. (Mentions of “Zuckerberg” or “the Zuckerbergs” appear to have been redacted. However, neighbors and separate public records confirm that the property in question belongs to the family. The names of the neighbors who were in touch with the city were also redacted.)

The documents reveal that the school may have been operating as early as 2021 without a permit to operate in the city of Palo Alto. As many as 30 students might have enrolled, according to observations from neighbors. These documents also reveal a wider problem: For almost a decade, the Zuckerbergs’ neighbors have been complaining to the city about noisy construction work, the intrusive presence of private security, and the hordes of staffers and business associates causing traffic and taking up street parking.

Over time, neighbors became fed up with what they argued was the city’s lack of action, particularly with respect to the school. Some believed that the delay was because of preferential treatment to the Zuckerbergs. “We find it quite remarkable that you are working so hard to meet the needs of a single billionaire family while keeping the rest of the neighborhood in the dark,” reads one email sent to the city’s Planning and Development Services Department in February. “Just as you have not earned our trust, this property owner has broken many promises over the years, and any solution which depends on good faith behavioral changes from them is a failure from the beginning.”

Palo Alto spokesperson Meghan Horrigan-Taylor told WIRED that the city “enforces zoning, building, and life safety rules consistently, without regard to who owns a property.” She also disputed the claim that neighbors were kept in the dark, saying that the city’s approvals of construction projects at the Zuckerberg properties “were processed the same way they are for any property owner.” She added that, though some neighbors told the city they believe the Zuckerbergs received “special treatment,” that is not accurate.

“Staff met with residents, conducted site visits, and provided updates by phone and email while engaging the owner’s representative to address concerns,” Horrigan-Taylor said. “These actions were measured and appropriate to abate the unpermitted use and responsive to neighborhood issues within the limits of local and state law.”

According to The New York Times, which first reported on the school’s existence, it was called “Bicken Ben School” and shared a name with one of the Zuckerbergs’ chickens. The listing for Bicken Ben School, or BBS for short, in a California Department of Education directory claims the school opened on October 5, 2022. This, however, is the year after neighbors claim to have first seen it operating. It’s also two and a half years after Sara Berge—the school’s point of contact, per documents WIRED obtained from the state via public record request—claims to have started her role as “head of school” for a “Montessori pod” at a “private family office” according to her LinkedIn profile, which WIRED viewed in September and October. Berge did not respond to a request to comment.

Between 2022 and 2025, according to the documents Bicken Ben filed to the state, the school grew from nine to 14 students ranging from 5 to 10 years old. Neighbors, however, estimated that they observed 15 to 30 students. Berge similarly claimed on her LinkedIn profile to have overseen “25 children” in her job. In a June 2025 job listing for “BBS,” the school had a “current enrollment of 35–40 students and plans for continued growth,” which the listing says includes a middle school.

In order for the Zuckerbergs to run a private school on their land, which is in a residential zone, they need a “conditional use” permit from the city. However, based on the documents WIRED obtained, and Palo Alto’s public database of planning applications, the Zuckerbergs do not appear to have ever applied for or received this permit.

Per emails obtained by WIRED, Palo Alto authorities told a lawyer working with the Zuckerbergs in March 2025 that the family had to shut down the school on its compound by June 30. A state directory lists BBS, the abbreviation for Bicken Ben School, as having operated until August 18, and three of Zuckerberg’s neighbors—who all requested anonymity due to the high-profile nature of the family—confirmed to WIRED in late September that they had not seen or heard students being dropped off and picked up on weekdays in recent weeks.

However, Zuckerberg family spokesperson Brian Baker tells WIRED that the school didn’t close, per se. It simply moved. It’s not clear where it is now located, or whether the school is operating under a different name.

In response to a detailed request for comment, Baker provided WIRED with an emailed statement on behalf of the Zuckerbergs. “Mark, Priscilla and their children have made Palo Alto their home for more than a decade,” he said. “They value being members of the community and have taken a number of steps above and beyond any local requirements to avoid disruption in the neighborhood.”

“Serious and untenable”

By the fall of 2024, Zuckerberg’s neighbors were at their breaking point. At some point in mid-2024, according to an email from then-mayor Greer Stone, a group of neighbors had met with Stone to air their grievances about the Zuckerberg compound and the illegal school they claimed it was operating. They didn’t arrive at an immediate resolution.

In the years prior, the city had received several rounds of complaints about the Zuckerberg compound. Complaints about the address of the school were filed to 311, the nationwide number for reporting local non-emergency issues, in February 2019, September 2021, January 2022, and April 2023. They all alleged that the property was operating illegally under city code. All were closed by the planning department, which found no rule violations. An unknown number of additional complaints, mentioned in emails among city workers, were also made between 2020 and 2024—presumably delivered via phone calls, in person, or to city departments not included in WIRED’s public record request.

In December 2020, building inspection manager Korwyn Peck wrote to code enforcement officer Brian Reynolds about an inspection he had attempted to conduct around the Zuckerberg compound in response to several noise and traffic complaints from neighbors. He wrote that several men in SUVs had gathered to watch him and that a tense conversation with one of them ensued. “This appears to be a site that we will need to pay attention to,” Peck wrote to Reynolds.

“We have all been accused of ‘not caring,’ which of course is not true,” Peck added. “It does appear, however, with the activity I observed tonight, that we are dealing with more than four simple dwellings. This appears to be more than a homeowner with a security fetish.”

In a September 11, 2024, email to Jonathan Lait, Palo Alto’s director of planning and development services, and Palo Alto city attorney Molly Stump, one of Zuckerberg’s neighbors alleged that since 2021, “despite numerous neighborhood complaints” to the city of Palo Alto, including “multiple code violation reports,” the school had continued to grow. They claimed that a garage at the property had been converted into another classroom and that an increasing number of children were arriving each day. Lait and Stump did not respond to a request for comment.

“The addition of daily traffic from the teachers and parents at the school has only exacerbated an already difficult situation,” they said in the email, noting that the neighborhood has been dealing with an “untenable traffic” situation for more than eight years.

They asked the city to conduct a formal investigation into the school on Zuckerberg’s property, adding that their neighbors are also “extremely concerned” about the school, and “are willing to provide eyewitness accounts in support of this complaint.”

Over the next week, another neighbor forwarded this note to all six Palo Alto city council members, as well as then-mayor Stone. One of these emails described the situation as “serious” and “untenable.”

“We believe the investigation should be swift and should yield a cease and desist order,” the neighbor wrote.

Lait responded to the neighbor who sent the original complaint on October 15, claiming that he’d had an “initial call” with a “representative” of the property owners and that he was directing the city’s code enforcement staff to reexamine the property.

On December 11, 2024, the neighbor claimed that since one of their fellow neighbors had spoken to a Zuckerberg representative who allegedly admitted that there was a school on the property, “it seems like an open and shut case.”

“Our hope is that there is an equal process in place for all residents of Palo Alto regardless of wealth or stature,” the neighbor wrote. “It is hard to imagine that this kind of behavior would be ignored in any other circumstance.”

That same day, Lait told Christine Wade, a partner at SSL Law Firm—who, in an August 2024 email thread, said she was “still working with” the Zuckerberg family—that the Zuckerbergs lacked the required permit to run a school in a residential zone.

“Based on our review of local and state law, we believe this use constitutes a private school use in a residential zone requiring a conditional use permit,” Lait wrote in an email to Wade. “We also have not found any state preemptions that would exclude a use like this from local zoning requirements.” Lait added that a “next step,” if a permit was not obtained, would be sending a cease and desist to the property owner.

According to several emails, Wade, Lait, and Mark Legaspi, CEO of the Zuckerberg family office called West 10, went on to arrange an in-person meeting at City Hall on January 9. (This is the first time that the current name of the Zuckerberg family office, West 10, has been publicly disclosed. The office was previously called West Street.) Although WIRED did not obtain notes from the meeting, Lait informed the neighbor on January 10 that he had told the Zuckerbergs’ “representative” that the school would need to shut down if it didn’t get a conditional use permit or apply for that specific permit.

Lait added that the representative would clarify what the family planned to do in about a week; however, he noted that if the school were to close, the city might give it a “transition period” to wind things down. Wade did not respond to a request for comment.

“At a minimum, give us extended breaks”

There was another increasingly heated conversation happening behind the scenes. On February 3 of this year, at least one neighbor met with Jordan Fox, an employee of West 10.

It’s unclear exactly what happened at this meeting, or if the neighbor who sent the September 11 complaint was in attendance. But a day after the meeting with Fox, two additional neighbors added their names to the September 11 complaint, per an email to Lait.

On February 12, a neighbor began an email chain with Fox; the email was forwarded to Planning Department officials two months later. The neighbor, who seemingly attended the meeting, said they had “connected” with fellow neighbors “to review and revise” an earlier list of 14 requests that had reportedly been submitted to the Zuckerbergs at some previous point. The note does not specify the contents of that original list, but the neighbor claimed that 15 of the 19 people who originally contributed to it also contributed to the revised version.

The email notes that the Zuckerbergs had been “a part of our neighborhood for many years,” and that they “hope that this message will start an open and respectful dialogue,” built upon the “premise of how we all wish to be treated as neighbors.”

“Our top requests are to minimize future disruption to the neighborhood and proactively manage the impact of the many people who are affiliated with you,” the email says. This includes restricting parking by “security guards, contractors, staff, teachers, landscapers, visitors, etc.” In the event of major demolitions, concrete pours, or large parties, the email asks for advance notice, and for dedicated efforts to “monitor and mitigate noise.”

The email also asks the Zuckerbergs to, “ideally stop—but at a minimum give us extended breaks from—the acquisition, demolition and construction cycle to let the neighborhood recover from the last eight years of disruption.”

At this point, the email requests that the family “abide by both the letter and the spirit of Palo Alto” by complying with city code about residential buildings.

Specifically, it asks the Zuckerbergs to get a use permit for the compound’s school and to hold “a public hearing for transparency.” It also asks the family to not expand its compound any further. “We hope this will help us get back the quiet, attractive residential neighborhood that we all loved so much when we chose to move here.”

In a follow-up on March 4, Fox acknowledged the “unusual” effects that come with being neighbors with Mark Zuckerberg and his family.

“I recognize and understand that the nature of our residence is unique given the profile and visibility of the family,” she wrote. “I hope that as we continue to grow our relationship with you over time, you will increasingly enjoy the benefits of our proximity—e.g., enhanced safety and security, shared improvements, and increased property values.”

Fox said that the Zuckerbergs instituted “a revised parking policy late last year” that should address their concerns, and promised to double down on efforts to give advance notice about construction, parties, and other potential disruptions.

However, Fox did not directly address the unpermitted school and other nonresidential activities happening at the compound. She acknowledged that the compound has “residential support staff” including “childcare, culinary, personal assistants, property management, and security,” but said that they have “policies in place to minimize their impact on the neighborhood.”

It’s unclear if the neighbor responded to Fox.

“You have not earned our trust”

While these conversations were happening between Fox and Zuckerberg’s neighbors, Lait and others at the city Planning Department were scrambling to find a solution for the neighbor who complained on September 11, and a few other neighbors who endorsed the complaint in September and February.

Starting in February, one of these neighbors took the lead on following up with Lait. They asked him for an update on February 11 and heard back a few days later. He didn’t have any major updates, but after conversations with the family’s representatives, he said he was exploring whether a “subset of children” could continue to come to the school sometimes for “ancillary” uses.

“I also believe a more nuanced solution is warranted in this case,” Lait added. Ideally, such a solution would respond to the neighbors’ complaints, but allow the Zuckerbergs to “reasonably be authorized by the zoning code.”

The neighbor wasn’t thrilled. The next day, they replied and called the city’s plan “unsatisfactory.”

“The city’s ‘nuanced solution’ in dealing with this serial violator has led to the current predicament,” they said (referring to the nuanced solution Lait mentioned in his last email).

Horrigan-Taylor, the Palo Alto spokesperson, told WIRED that Lait’s mention of a “nuanced” solution referred to “resolving, to the extent permissible by law, neighborhood impacts and otherwise permitted use established by state law and local zoning.”

“Would I, or any other homeowner, be given the courtesy of a ‘nuanced solution’ if we were in violation of city code for over four years?” the neighbor added.

“Please know that you have not earned our trust and that we will take every opportunity to hold the city accountable if your solution satisfies a single [redacted] property owner over the interests of an entire neighborhood,” they continued.

“If you somehow craft a ‘nuanced solution’ based on promises,” the neighbor said, “the city will no doubt once again simply disappear and the damage to the neighborhood will continue.”

Lait did not respond right away. The neighbor followed up on March 13, asking if he had “reconsidered” his plan to offer a “‘nuanced solution’ for resolution of these ongoing issues by a serial code violator.” They asked when the neighborhood could “expect relief from the almost decade long disruptions.”

Behind the scenes, Zuckerberg’s lawyers were fighting to make sure the school could continue to operate. In a document dated March 14, Wade argued that the activities at “the Property” “represent an appropriate residential use based on established state law as well as constitutional principles.”

Wade said that “the Family” was in the process of obtaining a “Large Family Daycare” license for the property, which is legal for a cohort of 14 or fewer children all under the age of 10.

“We consistently remind our vendors, guests, etc. to minimize noise, not loiter anywhere other than within the Family properties, and to keep areas clean,” Wade added in the letter. Wade also attached an adjusted lease corresponding with the address of the illicit school, which promises that the property will be used for only one purpose. The exact purpose is redacted.

On March 25, Lait told the neighbor that the city’s June 30 deadline for the Zuckerbergs to shut down the school had not changed. However, the family’s representative said that they were pursuing a daycare license. These licenses are granted by the state, not the city of Palo Alto.

The subtext of this email was that if the state gave them a daycare license, there wasn’t much the city could do. Horrigan-Taylor confirmed with WIRED that “state licensed large family day care homes” do not require city approval, adding that the city also “does not regulate homeschooling.”

“Thanks for this rather surprising information,” the neighbor replied about a week later. “We have repeatedly presented ideas to the family over the past 8 years with very little to show for it, so from our perspective, we need to understand the city’s willingness to act or not to act.”

Baker told WIRED that the Zuckerbergs never ended up applying for a daycare license, a claim that corresponds with California’s public registry of daycare centers. (There are only two registered daycare centers in Palo Alto, and neither belongs to the Zuckerbergs. The Zuckerbergs’ oldest child, Maxima, will also turn 10 in December and consequently age out of any daycare legally operating in California.)

Horrigan-Taylor said that a representative for the Zuckerbergs told the city that the family wanted to move the school to “another location where private schools are permitted by right.”

In a school administrator job listing posted to the Association Montessori International website in July 2022 for “BBS,” Bicken Ben head of school Berge claims that the school had four distinct locations, and that applicants must be prepared to travel six to eight weeks per year. The June 2025 job listing also says that the “year-round” school spans “across multiple campuses,” but the main location of the job is listed as Palo Alto. It’s unclear where the other sites are located.

Most of the Zuckerbergs’ neighbors did not respond to WIRED’s request for comment. However, the ones who did clearly indicated that they would not be forgetting the Bicken Ben saga, or the past decade of disruption, anytime soon.

“Frankly I’m not sure what’s going on,” one neighbor said, when reached by WIRED via landline. “Except for noise and construction debris.”

This story originally appeared on wired.com.


Wired.com is your essential daily guide to what’s next, delivering the most original and complete take you’ll find anywhere on innovation’s impact on technology, science, business and culture.

Mark Zuckerberg’s illegal school drove his neighbors crazy Read More »

dhs-offers-“disturbing-new-excuses”-to-seize-kids’-biometric-data,-expert-says

DHS offers “disturbing new excuses” to seize kids’ biometric data, expert says


Sweeping DHS power grab would collect face, iris, voice scans of all immigrants.

Civil and digital rights experts are horrified by a proposed rule change that would allow the Department of Homeland Security to collect a wide range of sensitive biometric data on all immigrants, without age restrictions, and store that data throughout each person’s “lifecycle” in the immigration system.

If adopted, the rule change would allow DHS agencies, including Immigration and Customs Enforcement (ICE), to broadly collect facial imagery, finger and palm prints, iris scans, and voice prints. They may also request DNA, which DHS claimed “would only be collected in limited circumstances,” like to verify family relations. These updates would cost taxpayers $288.7 million annually, DHS estimated, including $57.1 million for DNA collection alone. Annual individual charges to immigrants submitting data will likely be similarly high, estimated at around $231.5 million.

Costs could be higher, DHS admitted, especially if DNA testing is conducted more widely than projected.

“DHS does not know the full costs to the government of expanding biometrics collection in terms of assets, process, storage, labor, and equipment,” DHS’s proposal said, while noting that from 2020 to 2024, the US only processed such data from about 21 percent of immigrants on average.

Alarming critics, the update would allow DHS for the first time to collect biometric data of children under 14, which DHS claimed would help reduce human trafficking and other harms by making it easier to identify kids crossing the border unaccompanied or with a stranger.

Jennifer Lynch, general counsel for a digital rights nonprofit called the Electronic Frontier Foundation, told Ars that EFF joined Democratic senators in opposing a prior attempt by DHS to expand biometric data collection in 2020.

There was so much opposition to that rule change that DHS ultimately withdrew it, Lynch noted, but DHS confirmed in its proposal that the agency expects more support for the much broader initiative under the current Trump administration. Quoting one of Trump’s earliest executive orders in this term, directing DHS to “secure the border,” DHS suggested it was the agency’s duty to use “any available technologies and procedures to determine the validity of any claimed familial relationship between aliens encountered or apprehended by the Department of Homeland Security.”

Lynch warned that DHS’s plan to track immigrants over time, starting as young as possible, would allow DHS “to track people without their knowledge as they go about their lives” and “map families and connections in whole communities over time.”

“This expansion poses grave threats to the privacy, security, and liberty of US citizens and non-citizens,” Lynch told Ars, noting that “the federal government, including DHS, has failed to protect biometric data in the past.”

“Risks from security breaches to children’s biometrics are especially acute,” she said. “Large numbers of children are already victims of identity theft.”

By maintaining a database, the US also risks chilling speech, as immigrants weigh risks of social media comments—which DHS already monitors—possibly triggering removals or arrests.

“People will be less likely to speak out on any issue for fear of being tracked and facing severe reprisals, like detention and deportation, that we’ve already seen from this administration,” Lynch told Ars.

DHS also wants to collect more biometric data on US citizens and permanent residents who sponsor immigrants or have familial ties. Esha Bhandari, director of the ACLU’s speech, privacy, and technology project, told Ars that “we should all be concerned that the Trump administration is potentially building a vast database of people’s sensitive, unchangeable information, as this will have serious privacy consequences for citizens and noncitizens alike.”

“DHS continues to explore disturbing new excuses to collect more DNA and other sensitive biometric information, from the sound of our voice to the unique identifiers in our irises,” Bhandari said.

EFF previously noted that DHS’s biometric database was already the second largest in the world. By expanding it, DHS estimated that the agency would collect “about 1.12 million more biometrics submissions” annually, increasing the current baseline to about 3.19 million.

As the data pool expands, DHS plans to hold onto the data until an immigrant who has requested benefits or otherwise engaged with DHS agencies is either granted citizenship or removed.

Lynch suggested that “DHS cites questionable authority for this massive change to its practices,” which would “exponentially expand the federal government’s ability to collect biometrics from anyone associated with any immigration benefit or request—including US citizens and children of any age.”

“Biometrics are unique to each of us and can’t be changed, so these threats exist as long as the government holds onto our data,” Lynch said.

DHS will collect more data on kids than adults

Not all agencies will require all forms of biometric data to be submitted “instantly” if the rule change goes through, DHS said. Instead, agencies will assess their individual needs, while supposedly avoiding repetitive data collection, so that data won’t be collected every time someone is required to fill out a form.

DHS said it “recognizes” that its sweeping data collection plans that remove age restrictions don’t conform with Department of Justice policies. But the agency claimed there was no conflict since “DHS regulatory provisions control all DHS biometrics collections” and “DHS is not authorized to operate or collect biometrics under DOJ authorities.”

“Using biometrics for identity verification and management” is necessary, DHS claimed, because it “will assist DHS’s efforts to combat trafficking, confirm the results of biographical criminal history checks, and deter fraud.”

Currently, DHS is seeking public comments on the rule change, which can be submitted over the next 60 days ahead of a deadline on January 2, 2026. The agency suggests it “welcomes” comments, particularly on the types of biometric data DHS wants to collect, including concerns about the “reliability of technology.”

If the rule is approved, DHS said, kids will likely be subjected to more biometric data collection than adults. Additionally, younger kids will be subjected to processes that DHS formerly limited to children age 14 and over.

For example, DHS noted that previously, “policies, procedures, and practices in place at that time” restricted DHS from running criminal background checks on children.

However, DHS claims that’s now appropriate, including in cases where children were trafficked or are seeking benefits under the Violence Against Women Act and, therefore, are expected to prove “good moral character.”

“Generally, DHS plans to use the biometric information collected from children for identity management in the immigration lifecycle only, but will retain the authority for other uses in its discretion, such as background checks and for law enforcement purposes,” DHS’s proposal said.

The changes will also help protect kids from removals, DHS claimed, by making it easier for an ICE attorney to complete required “identity, law enforcement, or security investigations or examinations.” As DHS explained:

DHS proposes to collect biometrics at any age to ensure the immigration records created for children can be related to their adult records later, and to help combat child trafficking, smuggling, and labor exploitation by facilitating identity verification, while also confirming the absence of criminal history or associations with terrorist organizations or gang membership.

A top priority appears to be tracking kids’ family relationships.

“DHS’s ability to collect biometrics, including DNA, regardless of a minor’s age, will allow DHS to accurately prove or disprove claimed genetic relationships among apprehended aliens and ensure that unaccompanied alien children (UAC) are properly identified and cared for,” the proposal said.

But DHS acknowledges that biometrics won’t help in some situations, like where kids are adopted. In those cases, DHS will still rely on documentation like birth certificates, medical records, and “affidavits to support claims based on familial relationships.”

It’s possible that some DHS agencies may establish an age threshold for some data collection, the rule change noted.

A day after the rule change was proposed, 42 comments had been submitted. Most were critical, but as Lynch warned, speaking out seemed risky, with many commenters choosing to anonymously criticize the initiative as violating people’s civil rights and making the US appear more authoritarian.

One anonymous user cited guidance from the ACLU and the Electronic Privacy Information Center, while warning that “what starts as a ‘biometrics update’ could turn into widespread privacy erosion for immigrants and citizens alike.”

The commenter called out DHS for seriously “talking about harvesting deeply personal data that could track someone forever” and subjecting “infants and toddlers” to “iris scans or DNA swabs.”

“You pitch it as a tool against child trafficking, which is a real issue, but does swabbing a newborn really help, or does it just create a lifelong digital profile starting at day one?” the commenter asked. “Accuracy for growing kids is questionable, and the [ACLU] has pointed out how this disproportionately burdens families. Imagine the hassle for parents—it’s not protection; it’s preemptively treating every child like a data point in a government file.”


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

DHS offers “disturbing new excuses” to seize kids’ biometric data, expert says Read More »

flock-haters-cross-political-divides-to-remove-error-prone-cameras

Flock haters cross political divides to remove error-prone cameras

“People should care because this could be you,” White said. “This is something that police agencies are now using to document and watch what you’re doing, where you’re going, without your consent.”

Haters cross political divides to fight Flock

Currently, Flock’s reach is broad, “providing services to 5,000 police departments, 1,000 businesses, and numerous homeowners associations across 49 states,” lawmakers noted. Additionally, in October, Flock partnered with Amazon, a deal that allows police to request Ring camera footage and widens Flock’s lens further.

However, Flock’s reach notably doesn’t extend into certain cities and towns in Arizona, Colorado, New York, Oregon, Tennessee, Texas, and Virginia, following successful local bids to end Flock contracts. These local fights have only just started as groups learn from each other, Sarah Hamid, EFF’s director of strategic campaigns, told Ars.

“Several cities have active campaigns underway right now across the country—urban and rural, in blue states and red states,” Hamid said.

A Flock spokesperson told Ars that the growing effort to remove cameras “remains an extremely small percentage of communities that consider deploying Flock technology (low single-digit percentages).” To keep Flock’s cameras on city streets, Flock attends “hundreds of local community meetings and City Council sessions each month, and the vast majority of those contracts are accepted,” Flock’s spokesperson said.

Hamid challenged Flock’s “characterization of camera removals as isolated incidents,” though, noting “that doesn’t reflect what we’re seeing.”

“The removals span multiple states and represent different organizing strategies—some community-led, some council-initiated, some driven by budget constraints,” Hamid said.

Most recently, city officials voted to remove Flock cameras this fall in Sedona, Arizona.

A 72-year-old retiree, Sandy Boyce, helped fuel the local movement there after learning that Sedona had “quietly” renewed its Flock contract, NBC News reported. She felt enraged as she imagined her tax dollars continuing to support a camera system tracking her movements without her consent, she told NBC News.

Flock haters cross political divides to remove error-prone cameras Read More »

real-humans-don’t-stream-drake-songs-23-hours-a-day,-rapper-suing-spotify-says

Real humans don’t stream Drake songs 23 hours a day, rapper suing Spotify says


“Irregular” Drake streams

Proposed class action may force Spotify to pay back artists harmed by streaming fraud.

Lawsuit questions if Drake really is the most-streamed artist on Spotify after the musician became “the first artist to nominally achieve 120 billion total streams on Spotify.” Credit: Mark Blinch / Stringer | Getty Images Sport

Spotify profits off fake Drake streams that rob other artists of perhaps hundreds of millions in revenue shares, a lawsuit filed Sunday alleged—hoping to force Spotify to reimburse every artist impacted.

The lawsuit was filed by an American rapper known as RBX, who may be best known for cameos on two of the 1990s’ biggest hip-hop records, Dr. Dre’s The Chronic and Snoop Dogg’s Doggystyle.

The problem goes beyond Drake, RBX’s lawsuit alleged. It claims Spotify ignores “billions of fraudulent streams” each month, selfishly benefiting from bot networks that artificially inflate user numbers to help Spotify attract significantly higher ad revenue.

Drake’s account is a prime example of the kinds of fake streams Spotify is inclined to overlook, RBX alleged, since Drake is “the most streamed artist of all time on the platform,” in September becoming “the first artist to nominally achieve 120 billion total streams.” As Drake hit this milestone, the platform chose to ignore a “substantial” amount of inauthentic activity that contributed to about 37 billion streams between January 2022 and September 2025, the lawsuit alleged.

This activity, RBX alleged, “appeared to be the work of a sprawling network of Bot Accounts” that Spotify reasonably should have detected.

Apparently, RBX noticed that while most artists see an “initial spike” in streams when a song or album is released, followed by a predictable drop-off as more time passes, the listening patterns of Drake’s fans weren’t as predictable. After releases, some of Drake’s music would see “significant and irregular uptick months” over not just ensuing months, but years, allegedly “with no reasonable explanations for those upticks other than streaming fraud.”

Most suspiciously, individual accounts would sometimes listen to Drake “exclusively” for “23 hours a day”—which seems like the sort of “staggering and irregular” streaming that Spotify should flag, the lawsuit alleged.
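
For illustration only, here is a minimal sketch of how an analyst might flag that kind of marathon listening from play logs. The data shape, field names, and 22-hour threshold are assumptions made for the example; this is not RBX’s analysis or Spotify’s detection method.

```python
# Toy heuristic: flag accounts whose total playback time on a single day is
# implausibly high (approaching 23 hours). All field names and the threshold
# are assumptions for illustration, not an actual platform's logic.
from collections import defaultdict
from datetime import date


def flag_marathon_listeners(plays, max_hours_per_day=22.0):
    """plays: iterable of (account_id, day, seconds_played) tuples."""
    seconds_by_account_day = defaultdict(float)
    for account_id, day, seconds in plays:
        seconds_by_account_day[(account_id, day)] += seconds

    flagged = set()
    for (account_id, day), seconds in seconds_by_account_day.items():
        if seconds / 3600.0 > max_hours_per_day:
            flagged.add(account_id)
    return flagged


# Example: one account logs roughly 23 hours of playback in a single day.
sample = [
    ("acct_1", date(2024, 3, 1), 23 * 3600),
    ("acct_2", date(2024, 3, 1), 2 * 3600),
]
print(flag_marathon_listeners(sample))  # {'acct_1'}
```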

It’s unclear how RBX’s legal team conducted this analysis. At this stage, they’ve told the court that claims are based on “information and belief” that discovery will reveal “there is voluminous information” to back up the rapper’s arguments.

Fake Drake streams may have robbed artists of millions

Spotify artists are supposed to get paid based on valid streams that represent their rightful portion of revenue pools. If RBX’s claims are true, based on the allegedly fake boosting of Drake’s streams alone, losses to all other artists in the revenue pool are “estimated to be in the hundreds of millions of dollars,” the complaint said. Actual damages, including punitive damages, are to be determined at trial, the lawsuit noted, and are likely much higher.

“Drake’s music streams are but one notable example of the rampant streaming fraud that Spotify has allowed to occur, across myriad artists, through negligence and/or willful blindness,” the lawsuit alleged.

If granted, the class would cover more than 100,000 rights holders who collected royalties from music hosted on the platform from “January 1, 2018, through the present.” That class could be expanded, the lawsuit noted, depending on how discovery goes. Since Spotify allegedly “concealed” the fake streams, there can be no time limitations for how far the claims could go back, the lawsuit argued. Attorney Mark Pifko of Baron & Budd, who is representing RBX, suggested in a statement provided to Ars that even one bad actor on Spotify cheats countless artists out of rightful earnings.

“Given the way Spotify pays royalty holders, allocating a limited pool of money based on each song’s proportional share of streams for a particular period, if someone cheats the system, fraudulently inflating their streams, it takes from everyone else,” Pifko said. “Not everyone who makes a living in the music business is a household name like Taylor Swift—there are thousands of songwriters, performers, and producers who earn revenue from music streaming who you’ve never heard of. These people are the backbone of the music business and this case is about them.”
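
Pifko is describing a pro-rata pool: each rights holder is paid the pool multiplied by their share of total streams, so inflated streams for one artist mechanically shrink everyone else’s checks. A toy calculation with made-up numbers (not Spotify’s actual royalty figures) shows the effect:

```python
# Pro-rata pool model: payout = pool * (artist's streams / total streams).
# The pool size and stream counts below are invented for illustration.
def pro_rata_payouts(pool_dollars, streams_by_artist):
    total = sum(streams_by_artist.values())
    return {artist: pool_dollars * n / total for artist, n in streams_by_artist.items()}


pool = 1_000_000  # hypothetical royalty pool for one period
honest = {"artist_a": 800_000, "artist_b": 200_000}
inflated = {"artist_a": 800_000, "artist_b": 1_200_000}  # 1M bot streams added for artist_b

print(pro_rata_payouts(pool, honest))    # artist_a: $800,000, artist_b: $200,000
print(pro_rata_payouts(pool, inflated))  # artist_a: $400,000, artist_b: $600,000
```

In this toy example, the bot streams cut artist_a’s payout in half even though artist_a’s real listening never changed, which is the dilution mechanism the complaint describes.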

Spotify did not immediately respond to Ars’ request for comment. However, a spokesperson told Rolling Stone that while the platform cannot comment on pending litigation, Spotify denies allegations that it profits from fake streams.

“Spotify in no way benefits from the industry-wide challenge of artificial streaming,” Spotify’s spokesperson said. “We heavily invest in always-improving, best-in-class systems to combat it and safeguard artist payouts with strong protections like removing fake streams, withholding royalties, and charging penalties.”

Fake fans appear to move hundreds of miles between plays

Spotify has publicly discussed ramping up efforts to detect and penalize streaming fraud. But RBX alleged that instead, Spotify “deliberately” “deploys insufficient measures to address fraudulent streaming,” allowing fraud to run “rampant.”

The platform appears least capable of handling so-called “Bot Vendors” that “typically design Bots to mimic human behavior and resemble real social media or streaming accounts in order to avoid detection,” the lawsuit alleged.

These vendors rely on virtual private networks (VPNs) to obscure locations of streams, but “with reasonable diligence,” Spotify could better detect them, RBX alleged—especially when streams are coming “from areas that lack the population to support a high volume of streams.”

For example, RBX again points to Drake’s streams. During a four-day period in 2024, “at least 250,000 streams of Drake’s song ‘No Face’ originated in Turkey but were falsely geomapped through the coordinated use of VPNs to the United Kingdom,” the lawsuit alleged, based on “information and belief.”

Additionally, “a large percentage of the accounts streaming Drake’s music were geographically concentrated around areas whose populations could not support the volume of streams emanating therefrom. In some cases, massive amounts of music streams, more than a hundred million streams, originated in areas with zero residential addresses,” the lawsuit alleged.

Just looking at how Drake’s fans move should raise a red flag, RBX alleged:

“Geohash data shows that nearly 10 percent of Drake’s streams come from users whose location data showed that they traveled a minimum of 15,000 kilometers in a month, moved unreasonable locations between songs (consecutive plays separated by mere seconds but spanning thousands of kilometers), including more than 500 kilometers between songs (roughly the distance from New York City to Pittsburgh).”
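
For illustration, a toy version of that kind of “impossible travel” check is sketched below: it flags consecutive plays from one account that are only seconds apart in time but hundreds of kilometers apart on the map. The data shape and thresholds are assumptions, not the complaint’s actual methodology.

```python
# Toy "impossible travel" check between consecutive plays for one account.
# Thresholds (500 km, 60 seconds) and the input format are assumptions.
from math import radians, sin, cos, asin, sqrt


def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))


def impossible_travel(plays, max_km=500.0, max_gap_seconds=60.0):
    """plays: list of (timestamp_seconds, lat, lon) for one account, sorted by time."""
    flags = []
    for (t1, lat1, lon1), (t2, lat2, lon2) in zip(plays, plays[1:]):
        if t2 - t1 <= max_gap_seconds and haversine_km(lat1, lon1, lat2, lon2) >= max_km:
            flags.append((t1, t2))
    return flags


# Example: two plays 30 seconds apart, one near New York City and one near
# Pittsburgh (roughly 500 km away), which the heuristic flags.
plays = [(0, 40.71, -74.01), (30, 40.44, -79.99)]
print(impossible_travel(plays))  # [(0, 30)]
```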

Spotify could cut off a lot of this activity, RBX alleged, by ending its practice of allowing free ad-supported accounts to sign up without a credit card. But supposedly it doesn’t, because “Spotify has an incentive for turning a blind eye to the blatant streaming fraud occurring on its service,” the lawsuit said.

Spotify has admitted fake streams impact revenue

RBX’s lawsuit pointed out that Spotify has told investors that, despite its best efforts, artificial streams “may contribute, from time to time, to an overstatement” in the number of reported monthly active users—a stat that helps drive ad revenue.

Spotify also somewhat tacitly acknowledges fears that the platform may be financially motivated to overlook when big artists pay for fake streams. In an FAQ, Spotify confirmed that “artificial streaming is something we take seriously at every level,” promising to withhold royalties, correct public streaming numbers, and take other steps, like possibly even removing tracks, no matter how big the artist is. Artists’ labels and distributors can also get hit with penalties if fake streams are detected, Spotify said. Spotify has defended its prevention methods as better than its rivals’ efforts.

“Our systems are working: In a case from last year, one bad actor was indicted for stealing $10 million from streaming services, only $60,000 of which came from Spotify, proving how effective we are at limiting the impact of artificial streaming on our platform,” Spotify’s spokesperson told Rolling Stone.

However, RBX alleged that Spotify is actually “one of the easiest platforms to defraud using Bots due to its negligent, lax, and/or non-existent Bot-related security measures.” And supposedly that’s by design, since “the higher the volume of individual streams, the more Spotify could charge for ads,” RBX alleged.

“By properly detecting and/or removing fraudulent streams from its service, Spotify would lose significant advertising revenue,” the theory goes, with RBX directly accusing Spotify of concealing “both the enormity of this problem, and its detrimental financial impact to legitimate Rights Holders.”

For RBX to succeed, it will likely matter what evidence was used to analyze Drake’s streaming numbers. Last month, a lawsuit that Drake filed was dismissed, ultimately failing to convince a judge that Kendrick Lamar’s record label artificially inflated Spotify streams of “Not Like Us.” Drake’s failure to show any evidence beyond some online comments and reports (which suggested that the label was at least aware that Lamar’s manager supposedly paid a bot network to “jumpstart” the song’s streams) was deemed insufficient to keep the case alive.

Industry group slowly preparing to fight streaming fraud

A loss could tarnish Spotify’s public image after the platform joined the Music Fights Fraud Alliance (MFFA), an industry coalition formed in 2023 to fight streaming fraud. The coalition is often cited as a major step that Spotify and the rest of the industry are taking; however, the group’s website does not indicate what progress has been made in the years since.

As of this writing, the website showed that task forces were formed, as well as a partnership with a nonprofit called the National Cyber-Forensics and Training Alliance, with a goal to “work closely together to identify and disrupt streaming fraud.” The partnership was also supposed to produce “intelligence reports and other actionable information in support of fraud prevention and mitigation.”

Ars reached out to MFFA to see if there are any updates to share on the group’s work over the past two years. MFFA’s executive director, Michael Lewan, told Ars that “admittedly MFFA is still relatively nascent and growing,” and was “not even formally incorporated until” he joined in February of this year.

“We have accomplished a lot, and are going to continue to grow as the industry is taking fraud seriously,” Lewan said.

Lewan said he can’t “shed too many details on our initiatives,” suggesting that MFFA is “a bit different from other trade orgs that are much more public facing.” However, several initiatives have been launched, he confirmed, which will help “improve coordination and communication amongst member companies”—which include streamers like Spotify and Amazon, as well as distributors like CD Baby and social platforms like SoundCloud and Meta apps—“to identify and disrupt suspicious activity, including sharing of data.”

“We also have efforts to raise awareness on what fraud looks like and how to mitigate against fraudulent activity,” Lewan said. “And we’re in continuous communication with other partners (in and outside the industry) on data standards, artist education, enforcement and deterrence.”


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

Real humans don’t stream Drake songs 23 hours a day, rapper suing Spotify says Read More »