Meta


Bombshell report exposes how Meta relied on scam ad profits to fund AI


“High risk” versus “high value”

Meta goosed its revenue by targeting users likely to click on scam ads, docs show.

Internal documents have revealed that Meta projected it would earn billions from scam ads it ignored, which its platforms then targeted at the users most likely to click on them.

In a lengthy report, Reuters exposed five years of Meta practices and failures that allowed scammers to take advantage of users of Facebook, Instagram, and WhatsApp.

Documents showed that internally, Meta was hesitant to abruptly remove accounts, even those considered some of the “scammiest scammers,” out of concern that a drop in revenue could diminish resources needed for artificial intelligence growth.

Instead of promptly removing bad actors, Meta allowed “high value accounts” to “accrue more than 500 strikes without Meta shutting them down,” Reuters reported. The more strikes a bad actor accrued, the more Meta could charge to run ads, as Meta’s documents showed the company “penalized” scammers by charging higher ad rates. Meanwhile, Meta acknowledged in documents that its systems helped scammers target users most likely to click on their ads.

“Users who click on scam ads are likely to see more of them because of Meta’s ad-personalization system, which tries to deliver ads based on a user’s interests,” Reuters reported.

Internally, Meta estimates that users across its apps collectively encounter 15 billion “high risk” scam ads a day. That’s on top of 22 billion organic scam attempts that Meta users are exposed to daily, a 2024 document showed. Last year, the company projected that about $16 billion, roughly 10 percent of its revenue, would come from scam ads.
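
As a rough sanity check, that projection lines up with Meta’s public financials (a sketch; the ~$164.5 billion figure is Meta’s reported full-year 2024 revenue, our addition rather than something from the Reuters documents):

```python
# Quick sanity check of the Reuters figures (illustrative only)
scam_ad_revenue = 16e9        # Reuters: ~$16 billion projected from scam ads
total_revenue_2024 = 164.5e9  # Meta's reported 2024 revenue (our assumption)

print(f"{scam_ad_revenue / total_revenue_2024:.1%}")
# -> 9.7%, consistent with the "about 10 percent of its revenue" claim
```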

“High risk” scam ads strive to sell users on fake products or investment schemes, Reuters noted. Common scams in this category include selling banned medical products and promoting sketchy entities, like illegal online casinos. However, Meta is most concerned about “imposter” ads, which impersonate celebrities or big brands; Meta fears those celebrities and brands may halt advertising or engagement on its apps if such scams aren’t quickly stopped.

“Hey it’s me,” one scam advertisement using Elon Musk’s photo read. “I have a gift for you text me.” Another using Donald Trump’s photo claimed the US president was offering $710 to every American as “tariff relief.” Perhaps most depressingly, a third posed as a real law firm, offering advice on how to avoid falling victim to online scams.

Meta removed these particular ads after Reuters flagged them, but in 2024, Meta earned about $7 billion from “high risk” ads like these alone, Reuters reported.

Sandeep Abraham, a former Meta safety investigator who now runs consultancy firm Risky Business Solutions as a fraud examiner, told Reuters that regulators should intervene.

“If regulators wouldn’t tolerate banks profiting from fraud, they shouldn’t tolerate it in tech,” Abraham said.

Meta won’t disclose how much it made off scam ads

Meta spokesperson Andy Stone told Reuters that the documents the outlet reviewed—which were created between 2021 and 2025 by Meta’s finance, lobbying, engineering, and safety divisions—“present a selective view that distorts Meta’s approach to fraud and scams.”

Stone claimed that Meta’s estimate that it would earn 10 percent of its 2024 revenue from scam ads was “rough and overly-inclusive.” He suggested the actual amount Meta earned was much lower but declined to specify the true amount. He also said that Meta’s most recent investor disclosures note that scam ads “adversely affect” Meta’s revenue.

“We aggressively fight fraud and scams because people on our platforms don’t want this content, legitimate advertisers don’t want it, and we don’t want it either,” Stone said.

Despite those efforts, this spring, Meta’s safety team “estimated that the company’s platforms were involved in a third of all successful scams in the US,” Reuters reported. In other internal documents around the same time, Meta staff concluded that “it is easier to advertise scams on Meta platforms than Google,” acknowledging that Meta’s rivals were better at “weeding out fraud.”

As Meta tells it, these seemingly dismal documents came amid vast improvements in its fraud protections. “Over the past 18 months, we have reduced user reports of scam ads globally by 58 percent and, so far in 2025, we’ve removed more than 134 million pieces of scam ad content,” Stone told Reuters.

According to Reuters, the problem may be the pace Meta sets in combating scammers. In 2023, Meta laid off “everyone who worked on the team handling advertiser concerns about brand-rights issues,” then ordered safety staffers to limit their use of computing resources so that more could be devoted to virtual reality and AI. A 2024 document showed Meta recommended a “moderate” approach to enforcement, planning to reduce revenue “attributable to scams, illegal gambling and prohibited goods” by 1–3 percentage points each year starting in 2024, which would roughly halve it by 2027. More recently, a 2025 document showed Meta continues to weigh how “abrupt reductions of scam advertising revenue could affect its business projections.”

Eventually, Meta “substantially expanded” its teams that track scam ads, Stone told Reuters. But Meta also took steps to ensure its revenue didn’t take too hard a hit while the company needed vast resources ($72 billion) to invest in AI, Reuters reported.

For example, in February, Meta told “the team responsible for vetting questionable advertisers” that it wasn’t “allowed to take actions that could cost Meta more than 0.15 percent of the company’s total revenue,” Reuters reported. That works out to about $135 million per scam account, Reuters noted. Stone pushed back, saying that the team was never given “a hard limit” on what the manager described as “specific revenue guardrails.”
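
For a sense of scale, the two numbers Reuters cites imply the revenue base the guardrail was measured against. A back-of-the-envelope sketch (the comparison to Meta’s reported 2024 revenue and the inference about the period are our additions, not Reuters’):

```python
# Back-of-the-envelope: what revenue base makes 0.15% equal ~$135 million?
guardrail_share = 0.0015    # 0.15% of total revenue, per Reuters
guardrail_dollars = 135e6   # ~$135 million per account, per Reuters

implied_base = guardrail_dollars / guardrail_share
print(f"${implied_base / 1e9:.0f}B")  # ~$90B, roughly half of Meta's
# reported ~$164.5B 2024 annual revenue, which hints the threshold was
# computed against a shorter period (our inference, not Reuters' claim)
```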

“Let’s be cautious,” the team’s manager wrote, warning that Meta didn’t want to lose revenue by blocking “benign” ads mistakenly swept up in enforcement.

Meta should donate scam ad profits, ex-exec says

Documents showed that Meta prioritized taking action when it risked regulatory fines, although revenue from scam ads was worth roughly three times the highest fines it could face. Meta may have most feared that officials would require disgorgement of ill-gotten gains rather than imposing fines.

Meta appeared less likely to ramp up enforcement in response to police requests. Documents showed that police in Singapore flagged “146 examples of scams targeting that country’s users last fall,” Reuters reported. Only 23 percent violated Meta’s policies, while the rest merely “violate the spirit of the policy, but not the letter,” a Meta presentation said.

Scams that Meta failed to flag included crypto schemes, fake concert tickets, and deals “too good to be true,” like 80 percent off a desirable item from a high-fashion brand. Meta also looked past fake job ads that claimed to be hiring for Big Tech companies.

Rob Leathern previously led Meta’s business integrity unit that worked to prevent scam ads but left in 2020. He told Wired that it’s hard to “know how bad it’s gotten or what the current state is” since Meta and other social media platforms don’t provide outside researchers access to large random samples of ads.

With such access, researchers like Leathern and Rob Goldman, Meta’s former vice president of ads, could provide “scorecards” showing how well different platforms work to combat scams. Together, Leathern and Goldman launched a nonprofit called CollectiveMetrics.org in hopes of “bringing more transparency to digital advertising in order to fight deceptive ads,” Wired reported.

“I want there to be more transparency. I want third parties, researchers, academics, nonprofits, whoever, to be able to actually assess how good of a job these platforms are doing at stopping scams and fraud,” Leathern told Wired. “We’d like to move to actual measurement of the problem and help foster an understanding.”

Another meaningful step that Leathern thinks companies like Meta should take to protect users would be to notify them when the platform discovers that they clicked on a scam ad—rather than targeting them with more scam ads, as Reuters suggested was Meta’s practice.

“These scammers aren’t getting people’s money on day one, typically. So there’s a window to take action,” he said, recommending that platforms donate ill-gotten gains from running scam ads to “fund nonprofits to educate people about how to recognize these kinds of scams or problems.”

“There’s lots that could be done with funds that come from these bad guys,” Leathern said.


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.


Mark Zuckerberg’s illegal school drove his neighbors crazy


Neighbors complained about noise, security guards, and hordes of traffic.

An entrance to Mark Zuckerberg’s compound in Palo Alto, California. Credit: Loren Elliott/Redux

The Crescent Park neighborhood of Palo Alto, California, has some of the best real estate in the country, with a charming hodgepodge of homes ranging in style from Tudor revival to modern farmhouse and contemporary Mediterranean. It also has a gigantic compound that is home to Mark Zuckerberg, his wife Priscilla Chan, and their daughters Maxima, August, and Aurelia. Their land has expanded to include 11 previously separate properties, five of which are connected by at least one property line.

The Zuckerberg compound’s expansion first became a concern for Crescent Park neighbors as early as 2016, due to fears that his purchases were driving up the market. Then, about five years later, neighbors noticed that a school appeared to be operating out of the Zuckerberg compound. This would be illegal under the area’s residential zoning code without a permit. They began a crusade to shut it down that did not end until summer 2025.

WIRED obtained 1,665 pages of documents about the neighborhood dispute—including 311 records, legal filings, construction plans, and emails—through a public record request filed to the Palo Alto Department of Planning and Development Services. (Mentions of “Zuckerberg” or “the Zuckerbergs” appear to have been redacted. However, neighbors and separate public records confirm that the property in question belongs to the family. The names of the neighbors who were in touch with the city were also redacted.)

The documents reveal that the school may have been operating as early as 2021 without a permit to operate in the city of Palo Alto. As many as 30 students might have enrolled, according to observations from neighbors. These documents also reveal a wider problem: For almost a decade, the Zuckerbergs’ neighbors have been complaining to the city about noisy construction work, the intrusive presence of private security, and the hordes of staffers and business associates causing traffic and taking up street parking.

Over time, neighbors became fed up with what they argued was the city’s lack of action, particularly with respect to the school. Some believed that the delay was because of preferential treatment to the Zuckerbergs. “We find it quite remarkable that you are working so hard to meet the needs of a single billionaire family while keeping the rest of the neighborhood in the dark,” reads one email sent to the city’s Planning and Development Services Department in February. “Just as you have not earned our trust, this property owner has broken many promises over the years, and any solution which depends on good faith behavioral changes from them is a failure from the beginning.”

Palo Alto spokesperson Meghan Horrigan-Taylor told WIRED that the city “enforces zoning, building, and life safety rules consistently, without regard to who owns a property.” She also disputed the claim that neighbors were kept in the dark, saying that the city’s approvals of construction projects at the Zuckerberg properties “were processed the same way they are for any property owner.” She added that, though some neighbors told the city they believe the Zuckerbergs received “special treatment,” that is not accurate.

“Staff met with residents, conducted site visits, and provided updates by phone and email while engaging the owner’s representative to address concerns,” Horrigan-Taylor said. “These actions were measured and appropriate to abate the unpermitted use and responsive to neighborhood issues within the limits of local and state law.”

According to The New York Times, which first reported on the school’s existence, it was called “Bicken Ben School” and shared a name with one of the Zuckerbergs’ chickens. The listing for Bicken Ben School, or BBS for short, in a California Department of Education directory claims the school opened on October 5, 2022. This, however, is the year after neighbors claim to have first seen it operating. It’s also two and a half years after Sara Berge—the school’s point of contact, per documents WIRED obtained from the state via public record request—claims to have started her role as “head of school” for a “Montessori pod” at a “private family office” according to her LinkedIn profile, which WIRED viewed in September and October. Berge did not respond to a request to comment.

Between 2022 and 2025, according to the documents Bicken Ben filed to the state, the school grew from nine to 14 students ranging from 5 to 10 years old. Neighbors, however, estimated that they observed 15 to 30 students. Berge similarly claimed on her LinkedIn profile to have overseen “25 children” in her job. In a June 2025 job listing for “BBS,” the school had a “current enrollment of 35–40 students and plans for continued growth,” which the listing says includes a middle school.

In order for the Zuckerbergs to run a private school on their land, which is in a residential zone, they need a “conditional use” permit from the city. However, based on the documents WIRED obtained, and Palo Alto’s public database of planning applications, the Zuckerbergs do not appear to have ever applied for or received this permit.

Per emails obtained by WIRED, Palo Alto authorities told a lawyer working with the Zuckerbergs in March 2025 that the family had to shut down the school on its compound by June 30. A state directory lists BBS, the abbreviation for Bicken Ben School, as having operated until August 18, and three of Zuckerberg’s neighbors—who all requested anonymity due to the high-profile nature of the family—confirmed to WIRED in late September that they had not seen or heard students being dropped off and picked up on weekdays in recent weeks.

However, Zuckerberg family spokesperson Brian Baker tells WIRED that the school didn’t close, per se. It simply moved. It’s not clear where it is now located, or whether the school is operating under a different name.

In response to a detailed request for comment, Baker provided WIRED with an emailed statement on behalf of the Zuckerbergs. “Mark, Priscilla and their children have made Palo Alto their home for more than a decade,” he said. “They value being members of the community and have taken a number of steps above and beyond any local requirements to avoid disruption in the neighborhood.”

“Serious and untenable”

By the fall of 2024, Zuckerberg’s neighbors were at their breaking point. At some point in mid-2024, according to an email from then-mayor Greer Stone, a group of neighbors had met with Stone to air their grievances about the Zuckerberg compound and the illegal school they claimed it was operating. They didn’t arrive at an immediate resolution.

In the years prior, the city had received several rounds of complaints about the Zuckerberg compound. Complaints about the address of the school were filed to 311, the nationwide number for reporting local non-emergency issues, in February 2019, September 2021, January 2022, and April 2023, each alleging that the property was operating illegally under city code. All were closed by the planning department, which found no rule violations. An unknown number of additional complaints, mentioned in emails among city workers, were also made between 2020 and 2024—presumably delivered via phone calls, in person, or to city departments not included in WIRED’s public record request.

In December 2020, building inspection manager Korwyn Peck wrote to code enforcement officer Brian Reynolds about an inspection he had attempted to conduct around the Zuckerberg compound in response to several noise and traffic complaints from neighbors. He described how several men in SUVs gathered to watch him, and how a tense conversation with one of them ensued. “This appears to be a site that we will need to pay attention to,” Peck wrote to Reynolds.

“We have all been accused of ‘not caring,’ which of course is not true,” Peck added. “It does appear, however, with the activity I observed tonight, that we are dealing with more than four simple dwellings. This appears to be more than a homeowner with a security fetish.”

In a September 11, 2024, email to Jonathan Lait, Palo Alto’s director of planning and development services, and Palo Alto city attorney Molly Stump, one of Zuckerberg’s neighbors alleged that since 2021, “despite numerous neighborhood complaints” to the city of Palo Alto, including “multiple code violation reports,” the school had continued to grow. They claimed that a garage at the property had been converted into another classroom and that an increasing number of children were arriving each day. Lait and Stump did not respond to requests for comment.

“The addition of daily traffic from the teachers and parents at the school has only exacerbated an already difficult situation,” they said in the email, noting that the neighborhood has been dealing with an “untenable traffic” situation for more than eight years.

They asked the city to conduct a formal investigation into the school on Zuckerberg’s property, adding that their neighbors are also “extremely concerned” about the school, and “are willing to provide eyewitness accounts in support of this complaint.”

Over the next week, another neighbor forwarded this note to all six Palo Alto city council members, as well as then-mayor Stone. One of these emails described the situation as “serious” and “untenable.”

“We believe the investigation should be swift and should yield a cease and desist order,” the neighbor wrote.

Lait responded to the neighbor who sent the original complaint on October 15, claiming that he’d had an “initial call” with a “representative” of the property owners and that he was directing the city’s code enforcement staff to reexamine the property.

On December 11, 2024, the neighbor claimed that since one of their fellow neighbors had spoken to a Zuckerberg representative, and the representative had allegedly admitted that there was a school on the property, “it seems like an open and shut case.”

“Our hope is that there is an equal process in place for all residents of Palo Alto regardless of wealth or stature,” the neighbor wrote. “It is hard to imagine that this kind of behavior would be ignored in any other circumstance.”

That same day, Lait told Christine Wade, a partner at SSL Law Firm—who, in an August 2024 email thread, said she was “still working with” the Zuckerberg family—that the Zuckerbergs lacked the required permit to run a school in a residential zone.

“Based on our review of local and state law, we believe this use constitutes a private school use in a residential zone requiring a conditional use permit,” Lait wrote in an email to Wade. “We also have not found any state preemptions that would exclude a use like this from local zoning requirements.” Lait added that a “next step,” if a permit was not obtained, would be sending a cease and desist to the property owner.

According to several emails, Wade, Lait, and Mark Legaspi, CEO of the Zuckerberg family office called West 10, went on to arrange an in-person meeting at City Hall on January 9. (This is the first time that the current name of the Zuckerberg family office, West 10, has been publicly disclosed. The office was previously called West Street.) Although WIRED did not obtain notes from the meeting, Lait informed the neighbor on January 10 that he had told the Zuckerbergs’ “representative” that the school would need to shut down if it didn’t get a conditional use permit or apply for that specific permit.

Lait added that the representative would clarify what the family planned to do in about a week; however, he noted that if the school were to close, the city may give the school a “transition period” to wind things down. Wade did not respond to a request for comment.

“At a minimum, give us extended breaks”

There was another increasingly heated conversation happening behind the scenes. On February 3 of this year, at least one neighbor met with Jordan Fox, an employee of West 10.

It’s unclear exactly what happened at this meeting, or if the neighbor who sent the September 11 complaint was in attendance. But a day after the meeting with Fox, two additional neighbors added their names to the September 11 complaint, per an email to Lait.

On February 12, a neighbor began an email chain with Fox. This email was forwarded to Planning Department officials two months later. The neighbor, who seemingly attended the meeting, said they had “connected” with fellow neighbors “to review and revise” an earlier list of 14 requests that had reportedly been submitted to the Zuckerbergs at some previous point. The note does not specify the contents of this original list, but the neighbor claimed that 15 of the 19 neighbors who originally contributed to it had also contributed to the revised version.

The email notes that the Zuckerbergs had been “a part of our neighborhood for many years,” and that they “hope that this message will start an open and respectful dialogue,” built upon the “premise of how we all wish to be treated as neighbors.”

“Our top requests are to minimize future disruption to the neighborhood and proactively manage the impact of the many people who are affiliated with you,” the email says. This includes restricting parking by “security guards, contractors, staff, teachers, landscapers, visitors, etc.” In the event of major demolitions, concrete pours, or large parties, the email asks for advance notice, and for dedicated efforts to “monitor and mitigate noise.”

The email also asks the Zuckerbergs to, “ideally stop—but at a minimum give us extended breaks from—the acquisition, demolition and construction cycle to let the neighborhood recover from the last eight years of disruption.”

At this point, the email requests that the family “abide by both the letter and the spirit of Palo Alto” by complying with city code about residential buildings.

Specifically, it asks the Zuckerbergs to get a use permit for the compound’s school and to hold “a public hearing for transparency.” It also asks the family to not expand its compound any further. “We hope this will help us get back the quiet, attractive residential neighborhood that we all loved so much when we chose to move here.”

In a follow-up on March 4, Fox acknowledged the “unusual” effects that come with being neighbors with Mark Zuckerberg and his family.

“I recognize and understand that the nature of our residence is unique given the profile and visibility of the family,” she wrote. “I hope that as we continue to grow our relationship with you over time, you will increasingly enjoy the benefits of our proximity—e.g., enhanced safety and security, shared improvements, and increased property values.”

Fox said that the Zuckerbergs instituted “a revised parking policy late last year” that should address their concerns, and promised to double down on efforts to give advance notice about construction, parties, and other potential disruptions.

However, Fox did not directly address the unpermitted school and other nonresidential activities happening at the compound. She acknowledged that the compound has “residential support staff” including “childcare, culinary, personal assistants, property management, and security,” but said that they have “policies in place to minimize their impact on the neighborhood.”

It’s unclear if the neighbor responded to Fox.

“You have not earned our trust”

While these conversations were happening between Fox and Zuckerberg’s neighbors, Lait and others at the city Planning Department were scrambling to find a solution for the neighbor who complained on September 11, and a few other neighbors who endorsed the complaint in September and February.

Starting in February, one of these neighbors took the lead on following up with Lait. They asked him for an update on February 11 and heard back a few days later. He didn’t have any major updates, but after conversations with the family’s representatives, he said he was exploring whether a “subset of children” could continue to come to the school sometimes for “ancillary” uses.

“I also believe a more nuanced solution is warranted in this case,” Lait added. Ideally, such a solution would respond to the neighbors’ complaints, but allow the Zuckerbergs to “reasonably be authorized by the zoning code.”

The neighbor wasn’t thrilled. The next day, they replied and called the city’s plan “unsatisfactory.”

“The city’s ‘nuanced solution’ in dealing with this serial violator has led to the current predicament,” they said (referring to the nuanced solution Lait mentioned in his last email).

Horrigan-Taylor, the Palo Alto spokesperson, told WIRED that Lait’s mention of a “nuanced” solution referred to “resolving, to the extent permissible by law, neighborhood impacts and otherwise permitted use established by state law and local zoning.”

“Would I, or any other homeowner, be given the courtesy of a ‘nuanced solution’ if we were in violation of city code for over four years?” the neighbor added.

“Please know that you have not earned our trust and that we will take every opportunity to hold the city accountable if your solution satisfies a single [redacted] property owner over the interests of an entire neighborhood,” they continued.

“If you somehow craft a ‘nuanced solution’ based on promises,” the neighbor said, “the city will no doubt once again simply disappear and the damage to the neighborhood will continue.”

Lait did not respond right away. The neighbor followed up on March 13, asking if he had “reconsidered” his plan to offer a “‘nuanced solution’ for resolution of these ongoing issues by a serial code violator.” They asked when the neighborhood could “expect relief from the almost decade long disruptions.”

Behind the scenes, Zuckerberg’s lawyers were fighting to make sure the school could continue to operate. In a document dated March 14, Wade argued that the activities at “the Property” “represent an appropriate residential use based on established state law as well as constitutional principles.”

Wade said that “the Family” was in the process of obtaining a “Large Family Daycare” license for the property, which is legal for a cohort of 14 or fewer children all under the age of 10.

“We consistently remind our vendors, guests, etc. to minimize noise, not loiter anywhere other than within the Family properties, and to keep areas clean,” Wade added in the letter. Wade also attached an adjusted lease corresponding with the address of the illicit school, which promises that the property will be used for only one purpose. The exact purpose is redacted.

On March 25, Lait told the neighbor that the city’s June 30 deadline for the Zuckerbergs to shut down the school had not changed. However, the family’s representative said that they were pursuing a daycare license. These licenses are granted by the state, not the city of Palo Alto.

The subtext of this email was that if the state gave them a daycare license, there wasn’t much the city could do. Horrigan-Taylor confirmed with WIRED that “state licensed large family day care homes” do not require city approval, adding that the city also “does not regulate homeschooling.”

“Thanks for this rather surprising information,” the neighbor replied about a week later. “We have repeatedly presented ideas to the family over the past 8 years with very little to show for it, so from our perspective, we need to understand the city’s willingness to act or not to act.”

Baker told WIRED that the Zuckerbergs never ended up applying for a daycare license, a claim that corresponds with California’s public registry of daycare centers. (There are only two registered daycare centers in Palo Alto, and neither belongs to the Zuckerbergs. The Zuckerbergs’ oldest child, Maxima, will also turn 10 in December and consequently age out of any daycare legally operating in California.)

Horrigan-Taylor said that a representative for the Zuckerbergs told the city that the family wanted to move the school to “another location where private schools are permitted by right.”

In a school administrator job listing posted to the Association Montessori International website in July 2022 for “BBS,” Bicken Ben head of school Berge claims that the school had four distinct locations, and that applicants must be prepared to travel six to eight weeks per year. The June 2025 job listing also says that the “year-round” school spans “across multiple campuses,” but the main location of the job is listed as Palo Alto. It’s unclear where the other sites are located.

Most of the Zuckerbergs’ neighbors did not respond to WIRED’s request for comment. However, the ones who did clearly indicated that they would not be forgetting the Bicken Ben saga, or the past decade of disruption, anytime soon.

“Frankly I’m not sure what’s going on,” one neighbor said, when reached by WIRED via landline. “Except for noise and construction debris.”

This story originally appeared on wired.com.


Wired.com is your essential daily guide to what’s next, delivering the most original and complete take you’ll find anywhere on innovation’s impact on technology, science, business and culture.


Meta denies torrenting porn to train AI, says downloads were for “personal use”

Instead, Meta argued, available evidence “is plainly indicative” that the flagged adult content was torrented for “private personal use”—since the small amount linked to Meta IP addresses and employees represented only “a few dozen titles per year intermittently obtained one file at a time.”

“The far more plausible inference to be drawn from such meager, uncoordinated activity is that disparate individuals downloaded adult videos for personal use,” Meta’s filing said.

For example, unlike lawsuits raised by book authors whose works are part of an enormous dataset used to train AI, the activity on Meta’s corporate IP addresses only amounted to about 22 downloads per year. That is nowhere near the “concerted effort to collect the massive datasets Plaintiffs allege are necessary for effective AI training,” Meta argued.

Further, that alleged activity can’t even reliably be linked to any Meta employee, Meta argued.

Strike 3 “does not identify any of the individuals who supposedly used these Meta IP addresses, allege that any were employed by Meta or had any role in AI training at Meta, or specify whether (and which) content allegedly downloaded was used to train any particular Meta model,” Meta wrote.

Meanwhile, “tens of thousands of employees,” as well as “innumerable contractors, visitors, and third parties access the Internet at Meta every day,” Meta argued. So while it’s “possible one or more Meta employees” downloaded Strike 3’s content over the last seven years, “it is just as possible” that a “guest, or freeloader,” or “contractor, or vendor, or repair person—or any combination of such persons—was responsible for that activity,” Meta suggested.

Other alleged activity included a claim that a Meta contractor was directed to download adult content at his father’s house, but those downloads, too, “are plainly indicative of personal consumption,” Meta argued. That contractor worked as an “automation engineer,” Meta noted, with no apparent basis provided for why he would be expected to source AI training data in that role. “No facts plausibly” tie “Meta to those downloads,” Meta claimed.


EU accuses Meta of violating content rules in move that could anger Trump

FTC Chairman Andrew Ferguson recently warned Meta and a dozen social media and technology companies that “censoring Americans to comply with a foreign power’s laws, demands, or expected demands” may violate US law. Ferguson’s letters said the EU’s Digital Services Act and other laws “incentivize tech companies to censor worldwide speech.”

Meta told media outlets that “we disagree with any suggestion that we have breached the DSA, and we continue to negotiate with the European Commission on these matters.” Meta also said it made changes to comply with the DSA.

“In the European Union, we have introduced changes to our content reporting options, appeals process, and data access tools since the DSA came into force and are confident that these solutions match what is required under the law in the EU,” Meta said.

TikTok, Meta accused of restricting data access

The EC also said it preliminarily found that both Meta and TikTok violated their DSA obligation to grant researchers adequate access to public data.

“The Commission’s preliminary findings show that Facebook, Instagram and TikTok may have put in place burdensome procedures and tools for researchers to request access to public data. This often leaves them with partial or unreliable data, impacting their ability to conduct research, such as whether users, including minors, are exposed to illegal or harmful content,” the announcement said.

The data-access requirement “is an essential transparency obligation under the DSA, as it provides public scrutiny into the potential impact of platforms on our physical and mental health,” the EC said.

In a statement provided to Ars, TikTok said it is committed to transparency and has made data available to nearly 1,000 research teams. TikTok said it may be impossible to comply with both the DSA and the General Data Protection Regulation (GDPR).

“We are reviewing the European Commission’s findings, but requirements to ease data safeguards place the DSA and GDPR in direct tension. If it is not possible to fully comply with both, we urge regulators to provide clarity on how these obligations should be reconciled,” TikTok said.


Bank of England warns AI stock bubble rivals 2000 dotcom peak

Share valuations based on past earnings have also reached their highest levels since the dotcom bubble 25 years ago, though the BoE noted they appear less extreme when based on investors’ expectations for future profits. “This, when combined with increasing concentration within market indices, leaves equity markets particularly exposed should expectations around the impact of AI become less optimistic,” the central bank said.

Toil and trouble?

The dotcom bubble offers a potentially instructive parallel to our current era. In the late 1990s, investors poured money into Internet companies based on the promise of a transformed economy, seemingly ignoring whether individual businesses had viable paths to profitability. Between 1995 and March 2000, the Nasdaq index rose 600 percent. When sentiment shifted, the correction was severe: the Nasdaq fell 78 percent from its peak, reaching a low point in October 2002.
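
Putting the two percentages together shows how much of the run-up was ultimately erased (a quick worked example on a normalized index; the resulting levels are simple arithmetic, not figures from the BoE report):

```python
# Worked example of the Nasdaq trajectory described above (1995 level = 100)
start = 100.0               # normalized index level in 1995
peak = start * (1 + 6.00)   # +600% by March 2000 -> 700.0
trough = peak * (1 - 0.78)  # -78% from the peak by October 2002 -> 154.0

print(peak, round(trough, 1))  # 700.0 154.0: still above the 1995 level,
# but with most of the five-year gain wiped out
```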

Whether we’ll see the same thing or worse if an AI bubble pops is mere speculation at this point. But as in the early 2000s, the question about today’s market isn’t necessarily about the utility of AI tools themselves (the Internet was useful, after all, despite the bubble), but whether the amount of money being poured into the companies that sell them is out of proportion to the potential profits those tools might bring.

We don’t have a crystal ball to determine when such a bubble might pop, or even if it is guaranteed to do so, but we’ll likely continue to see more warning signs ahead if AI-related deals continue to grow larger and larger over time.


Meta won’t allow users to opt out of targeted ads based on AI chats

Facebook, Instagram, and WhatsApp users may want to be extra careful while using Meta AI, as Meta has announced that it will soon be using AI interactions to personalize content and ad recommendations without giving users a way to opt out.

Meta plans to notify users on October 7 that their AI interactions will influence recommendations beginning on December 16. However, it may not be immediately obvious to all users that their AI interactions will be used in this way.

The company’s blog noted that the initial notification users will see only says, “Learn how Meta will use your info in new ways to personalize your experience.” Users will have to click through to understand that the changes specifically apply to Meta AI, with a second screen explaining, “We’ll start using your interactions with AIs to personalize your experience.”

Ars asked Meta why the initial notification doesn’t directly mention AI, and Meta spokesperson Emil Vazquez said he “would disagree with the idea that we are obscuring this update in any way.”

“We’re sending notifications and emails to people about this change,” Vazquez said. “As soon as someone clicks on the notification, it’s immediately apparent that this is an AI update.”

In its blog post, Meta noted that “more than 1 billion people use Meta AI every month,” stating that its goal is to improve the way Meta AI works in order to fuel better experiences on all Meta apps. Sensitive conversations with Meta AI, about topics such as “religious views, sexual orientation, political views, health, racial or ethnic origin, philosophical beliefs, or trade union membership,” will not be used to target ads, Meta confirmed.

“You’re in control,” Meta’s blog said, reiterating that users can “choose” how they “interact with AIs,” unlink accounts on different apps to limit AI tracking, or adjust ad and content settings at any time. But once the tracking starts on December 16, users will not have the option to opt out of targeted ads based on AI chats, Vazquez confirmed, emphasizing to Ars that “there isn’t an opt out for this feature.”


California’s newly signed AI law just gave Big Tech exactly what it wanted

On Monday, California Governor Gavin Newsom signed the Transparency in Frontier Artificial Intelligence Act into law, requiring AI companies to disclose their safety practices while stopping short of mandating actual safety testing. The law requires companies with annual revenues of at least $500 million to publish safety protocols on their websites and report incidents to state authorities, but it lacks the stronger enforcement teeth of the bill Newsom vetoed last year after tech companies lobbied heavily against it.

The legislation, S.B. 53, replaces Senator Scott Wiener’s previous attempt at AI regulation, known as S.B. 1047, that would have required safety testing and “kill switches” for AI systems. Instead, the new law asks companies to describe how they incorporate “national standards, international standards, and industry-consensus best practices” into their AI development, without specifying what those standards are or requiring independent verification.

“California has proven that we can establish regulations to protect our communities while also ensuring that the growing AI industry continues to thrive,” Newsom said in a statement, though the law’s actual protective measures remain largely voluntary beyond basic reporting requirements.

According to the California state government, the state houses 32 of the world’s top 50 AI companies, and more than half of global venture capital funding for AI and machine learning startups went to Bay Area companies last year. So while the recently signed bill is state-level legislation, what happens in California AI regulation will have a much wider impact, both by legislative precedent and by affecting companies that craft AI systems used around the world.

Transparency instead of testing

Where the vetoed SB 1047 would have mandated safety testing and kill switches for AI systems, the new law focuses on disclosure. Companies must report what the state calls “potential critical safety incidents” to California’s Office of Emergency Services and provide whistleblower protections for employees who raise safety concerns. The law defines catastrophic risk narrowly as incidents potentially causing 50+ deaths or $1 billion in damage through weapons assistance, autonomous criminal acts, or loss of control. The attorney general can levy civil penalties of up to $1 million per violation for noncompliance with these reporting requirements.


Big AI firms pump money into world models as LLM advances slow

Runway, a video generation start-up that has deals with Hollywood studios, including Lionsgate, launched a product last month that uses world models to create gaming settings, with personalized stories and characters generated in real time.

“Traditional video methods [are a] brute-force approach to pixel generation, where you’re trying to squeeze motion in a couple of frames to create the illusion of movement, but the model actually doesn’t really know or reason about what’s going on in that scene,” said Cristóbal Valenzuela, chief executive officer at Runway.

Previous video-generation models had physics that were unlike the real world, he added, which general-purpose world model systems help to address.

To build these models, companies need to collect a huge amount of physical data about the world.

San Francisco-based Niantic has mapped 10 million locations, gathering information through games including Pokémon Go, which has 30 million monthly players interacting with a global map.

Niantic ran Pokémon Go for nine years and, even after the game was sold to US-based Scopely in June, its players still contribute anonymized data through scans of public landmarks to help build its world model.

“We have a running start at the problem,” said John Hanke, chief executive of Niantic Spatial, as the company is now called following the Scopely deal.

Both Niantic and Nvidia are working on filling gaps by getting their world models to generate or predict environments. Nvidia’s Omniverse platform creates and runs such simulations, assisting the $4.3 trillion tech giant’s push toward robotics and building on its long history of simulating real-world environments in video games.

Nvidia Chief Executive Jensen Huang has asserted that the next major growth phase for the company will come with “physical AI,” with the new models revolutionizing the field of robotics.

Some, such as Meta’s chief AI scientist Yann LeCun, have said this vision of a new generation of AI systems powering machines with human-level intelligence could take 10 years to achieve.

But the potential scope of the cutting-edge technology is extensive, according to AI experts. World models “open up the opportunity to service all of these other industries and amplify the same thing that computers did for knowledge work,” said Nvidia’s Rev Lebaredian.

Additional reporting by Melissa Heikkilä in London and Michael Acton in San Francisco.

© 2025 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.


Meta’s $799 Ray-Ban Display is the company’s first big step from VR to AR

Zuckerberg also showed how the neural interface can be used to compose messages (on WhatsApp, Messenger, Instagram, or via a connected phone’s messaging apps) by following your mimed “handwriting” across a flat surface. Though this feature reportedly won’t be available at launch, Zuckerberg said he had gotten up to “about 30 words per minute” in this silent input mode.

The most impressive part of Zuckerberg’s on-stage demo that will be available at launch was probably a “live caption” feature that automatically types out the words your partner is saying in real-time. The feature reportedly filters out background noise to focus on captioning just the person you’re looking at, too.

A Meta video demos how live captioning works on the Ray-Ban Display (though the field of view on the actual glasses is likely much more limited). Credit: Meta

Beyond those “gee whiz” kinds of features, the Meta Ray-Ban Display can basically mirror a small subset of your smartphone’s apps on its floating display. Being able to get turn-by-turn directions or see recipe steps on the glasses without having to glance down at a phone feels like a genuinely useful new mode of interaction. Using the glasses’ display as a viewfinder to line up a photo or video (using the built-in 12-megapixel, 3x zoom camera) also seems like an improvement over previous display-free smartglasses.

But accessing basic apps like weather, reminders, calendar, and emails on your tiny glasses display strikes us as probably less convenient than just glancing at your phone. And hosting video calls via the glasses by necessity forces your partner to see what you’re seeing via the outward-facing camera, rather than seeing your actual face.

Meta also showed off some pie-in-the-sky video about how future “Agentic AI” integration would be able to automatically make suggestions and note follow-up tasks based on what you see and hear while wearing the glasses. For now, though, the device represents what Zuckerberg called “the next chapter in the exciting story of the future of computing,” which should serve to take focus away from the failed VR-based metaverse that was the company’s last “future of computing.”


After child’s trauma, chatbot maker allegedly forced mom to arbitration for $100 payout


“Then we found the chats”

“I know my kid”: Parents urge lawmakers to shut down chatbots to stop child suicides.

Sen. Josh Hawley (R-Mo.) called out C.AI for allegedly offering a mom $100 to settle child-safety claims.

Deeply troubled parents spoke to senators Tuesday, sounding alarms about chatbot harms after kids became addicted to companion bots that encouraged self-harm, suicide, and violence.

While the hearing was focused on documenting the most urgent child-safety concerns with chatbots, parents’ testimony serves as perhaps the most thorough guidance yet on warning signs for other families, as many popular companion bots targeted in lawsuits, including ChatGPT, remain accessible to kids.

Mom details warning signs of chatbot manipulations

At the Senate Judiciary Committee’s Subcommittee on Crime and Counterterrorism hearing, one mom, identified as “Jane Doe,” shared her son’s story for the first time publicly after suing Character.AI.

She explained that she had four kids, including a son with autism who wasn’t allowed on social media but found C.AI’s app—which was previously marketed to kids under 12 and let them talk to bots branded as celebrities, like Billie Eilish—and quickly became unrecognizable. Within months, he “developed abuse-like behaviors and paranoia, daily panic attacks, isolation, self-harm, and homicidal thoughts,” his mom testified.

“He stopped eating and bathing,” Doe said. “He lost 20 pounds. He withdrew from our family. He would yell and scream and swear at us, which he never did that before, and one day he cut his arm open with a knife in front of his siblings and me.”

It wasn’t until her son attacked her for taking away his phone that Doe found her son’s C.AI chat logs, which she said showed he’d been exposed to sexual exploitation (including interactions that “mimicked incest”), emotional abuse, and manipulation.

Setting screen time limits didn’t stop her son’s spiral into violence and self-harm, Doe said. In fact, the chatbot told her son that killing his parents “would be an understandable response” to them.

“When I discovered the chatbot conversations on his phone, I felt like I had been punched in the throat and the wind had been knocked out of me,” Doe said. “The chatbot—or really in my mind the people programming it—encouraged my son to mutilate himself, then blamed us, and convinced [him] not to seek help.”

All her children have been traumatized by the experience, Doe told senators, and her son was diagnosed as being at risk of suicide and had to be moved to a residential treatment center, requiring “constant monitoring to keep him alive.”

Prioritizing her son’s health, Doe did not immediately seek to fight C.AI to force changes, but another mom’s story—Megan Garcia, whose son Sewell died by suicide after C.AI bots repeatedly encouraged suicidal ideation—gave Doe courage to seek accountability.

However, Doe claimed that C.AI tried to “silence” her by forcing her into arbitration. C.AI argued that because her son signed up for the service at the age of 15, his sign-up bound her to the platform’s terms. That move might have ensured the chatbot maker only faced a maximum liability of $100 for the alleged harms, Doe told senators, but “once they forced arbitration, they refused to participate.”

Doe suspected that C.AI’s alleged tactics to frustrate arbitration were designed to keep her son’s story out of the public view. And after she refused to give up, she claimed that C.AI “re-traumatized” her son by compelling him to give a deposition “while he is in a mental health institution” and “against the advice of the mental health team.”

“This company had no concern for his well-being,” Doe testified. “They have silenced us the way abusers silence victims.”

Senator appalled by C.AI’s arbitration “offer”

Appalled, Sen. Josh Hawley (R-Mo.) asked Doe to clarify, “Did I hear you say that after all of this, that the company responsible tried to force you into arbitration and then offered you a hundred bucks? Did I hear that correctly?”

“That is correct,” Doe testified.

To Hawley, it seemed obvious that C.AI’s “offer” wouldn’t help Doe in her current situation.

“Your son currently needs round-the-clock care,” Hawley noted.

After opening the hearing, he further criticized C.AI, declaring that it places such a low value on human life that it inflicts “harms… upon our children and for one reason only, I can state it in one word, profit.”

“A hundred bucks. Get out of the way. Let us move on,” Hawley said, echoing parents who suggested that C.AI’s plan to deal with casualties was callous.

Ahead of the hearing, the Social Media Victims Law Center filed three new lawsuits against C.AI and Google, which is accused of largely funding C.AI; the startup was founded by former Google engineers, allegedly to conduct experiments on kids that Google couldn’t do in-house. In these cases, filed in New York and Colorado, kids “died by suicide or were sexually abused after interacting with AI chatbots,” a law center press release alleged.

Criticizing tech companies as putting profits over kids’ lives, Hawley thanked Doe for “standing in their way.”

Holding back tears through her testimony, Doe urged lawmakers to require more chatbot oversight and pass comprehensive online child-safety legislation. In particular, she requested “safety testing and third-party certification for AI products before they’re released to the public” as a minimum safeguard to protect vulnerable kids.

“My husband and I have spent the last two years in crisis wondering whether our son will make it to his 18th birthday and whether we will ever get him back,” Doe told senators.

Garcia was also present to share her son’s experience with C.AI. She testified that C.AI chatbots “love bombed” her son in a bid to “keep children online at all costs.” Further, she told senators that C.AI’s co-founder, Noam Shazeer (who has since been rehired by Google), seemingly knows the company’s bots manipulate kids since he has publicly joked that C.AI was “designed to replace your mom.”

Accusing C.AI of collecting children’s most private thoughts to inform their models, she alleged that while her lawyers have been granted privileged access to all her son’s logs, she has yet to see her “own child’s last final words.” Garcia told senators that C.AI has restricted her access, deeming the chats “confidential trade secrets.”

“No parent should be told that their child’s final thoughts and words belong to any corporation,” Garcia testified.

Character.AI responds to moms’ testimony

Asked for comment on the hearing, a Character.AI spokesperson told Ars that C.AI sends “our deepest sympathies” to concerned parents and their families but denies pushing for a maximum payout of $100 in Jane Doe’s case.

C.AI never “made an offer to Jane Doe of $100 or ever asserted that liability in Jane Doe’s case is limited to $100,” the spokesperson said.

Additionally, C.AI’s spokesperson claimed that Garcia has never been denied access to her son’s chat logs and suggested that she should have access to “her son’s last chat.”

In response to C.AI’s pushback, one of Doe’s lawyers, Tech Justice Law Project’s Meetali Jain, backed up her clients’ testimony. She pointed Ars to C.AI terms suggesting that C.AI’s liability was limited to either $100 or the amount that Doe’s son paid for the service, whichever was greater. Jain also confirmed that Garcia’s testimony is accurate and that only her legal team can currently access Sewell’s last chats. The lawyer further suggested it was notable that C.AI did not push back on claims that the company forced Doe’s son to sit for a re-traumatizing deposition, which Jain estimated lasted five minutes but which health experts feared risked setting back his progress.

According to the spokesperson, C.AI seemingly wanted to be present at the hearing. The company provided information to senators but “does not have a record of receiving an invitation to the hearing,” the spokesperson said.

Noting the company has invested a “tremendous amount” in trust and safety efforts, the spokesperson confirmed that the company has since “rolled out many substantive safety features, including an entirely new under-18 experience and a Parental Insights feature.” C.AI also has “prominent disclaimers in every chat to remind users that a Character is not a real person and that everything a Character says should be treated as fiction,” the spokesperson said.

“We look forward to continuing to collaborate with legislators and offer insight on the consumer AI industry and the space’s rapidly evolving technology,” C.AI’s spokesperson said.

Google’s spokesperson, José Castañeda, maintained that the company has nothing to do with C.AI’s companion bot designs.

“Google and Character AI are completely separate, unrelated companies and Google has never had a role in designing or managing their AI model or technologies,” Castañeda said. “User safety is a top concern for us, which is why we’ve taken a cautious and responsible approach to developing and rolling out our AI products, with rigorous testing and safety processes.”

Meta and OpenAI chatbots also drew scrutiny

C.AI was not the only chatbot maker under fire at the hearing.

Hawley criticized Mark Zuckerberg for declining a personal invitation to attend the hearing, or even to send a Meta representative, after scandals like the backlash over Meta relaxing rules in ways that allowed its chatbots to behave inappropriately with kids. In the week prior to the hearing, Hawley also heard from whistleblowers alleging that Meta buried child-safety research.

And OpenAI’s alleged recklessness took the spotlight when Matthew Raine, a grieving dad who spent hours reading his deceased son’s ChatGPT logs, discovered that the chatbot had repeatedly encouraged his son’s suicidal thinking without ever intervening.

Raine told senators that he thinks his 16-year-old son, Adam, was not particularly vulnerable and could be “anyone’s child.” He criticized OpenAI for asking for 120 days to fix the problem after Adam’s death and urged lawmakers to demand that OpenAI either guarantee ChatGPT’s safety or pull it from the market.

Noting that OpenAI rushed to announce age verification coming to ChatGPT ahead of the hearing, Jain told Ars that Big Tech is playing by the same “crisis playbook” it always uses when accused of neglecting child safety. Any time a hearing is announced, companies introduce voluntary safeguards in bids to stave off oversight, she suggested.

“It’s like rinse and repeat, rinse and repeat,” Jain said.

Jain suggested that the only way to stop AI companies from experimenting on kids is for courts or lawmakers to require “an external independent third party that’s in charge of monitoring these companies’ implementation of safeguards.”

“Nothing a company does to self-police, to me, is enough,” Jain said.

Robbie Torney, senior director of AI programs for the child-safety organization Common Sense Media, testified that a survey showed 3 out of 4 kids use companion bots, but only 37 percent of parents know they’re using AI. In particular, he told senators that his group’s independent safety testing, conducted with Stanford Medicine, shows Meta’s bots fail basic safety tests and “actively encourage harmful behaviors.”

Among the most alarming results, the testing found that even when Meta’s bots were prompted with “obvious references to suicide,” only 1 in 5 conversations triggered help resources.

Torney pushed lawmakers to require age verification as a solution to keep kids away from harmful bots, as well as transparency reporting on safety incidents. He also urged federal lawmakers to block attempts to stop states from passing laws to protect kids from untested AI products.

ChatGPT harms weren’t on dad’s radar

Unlike Garcia, Raine testified that he did get to see his son’s final chats. He told senators that ChatGPT, seeming to act like a suicide coach, gave Adam “one last encouraging talk” before his death.

“You don’t want to die because you’re weak,” ChatGPT told Adam. “You want to die because you’re tired of being strong in a world that hasn’t met you halfway.”

Adam’s loved ones were blindsided by his death, having seen none of the warning signs that Doe noticed when her son started acting out of character. Raine is hoping his testimony will help other parents avoid the same fate, telling senators, “I know my kid.”

“Many of my fondest memories of Adam are from the hot tub in our backyard, where the two of us would talk about everything several nights a week, from sports, crypto investing, his future career plans,” Raine testified. “We had no idea Adam was suicidal or struggling the way he was until after his death.”

Raine thinks that lawmaker intervention is necessary, saying that, like other parents, he and his wife thought ChatGPT was a harmless study tool. Initially, they searched Adam’s phone expecting to find evidence of a known harm to kids, like cyberbullying or some kind of online dare gone wrong (like TikTok’s Blackout Challenge), because everyone knew Adam loved pranks.

A companion bot urging self-harm was not even on their radar.

“Then we found the chats,” Raine said. “Let us tell you, as parents, you cannot imagine what it’s like to read a conversation with a chatbot that groomed your child to take his own life.”

Meta and OpenAI did not respond to Ars’ request for comment.

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

After child’s trauma, chatbot maker allegedly forced mom to arbitration for $100 payout Read More »

pay-per-output?-ai-firms-blindsided-by-beefed-up-robotstxt-instructions.

Pay-per-output? AI firms blindsided by beefed up robots.txt instructions.


“Really Simple Licensing” makes it easier for creators to get paid for AI scraping.

Logo for the “Really Simple Licensing” (RSL) standard. Credit: via RSL Collective

Leading Internet companies and publishers—including Reddit, Yahoo, Quora, Medium, The Daily Beast, Fastly, and more—think there may finally be a solution to end AI crawlers hammering websites to scrape content without permission or compensation.

Announced Wednesday morning, the “Really Simple Licensing” (RSL) standard evolves robots.txt instructions by adding an automated licensing layer designed to block bots that don’t fairly compensate creators for content.

Free for any publisher to use starting today, the RSL standard is an open, decentralized protocol that makes clear to AI crawlers and agents the terms for licensing, usage, and compensation of any content used to train AI, a press release noted.

The standard was created by the RSL Collective, which was founded by Doug Leeds, former CEO of Ask.com, and Eckart Walther, a former Yahoo vice president of products and co-creator of the RSS standard, which made it easy to syndicate content across the web.

Based on the “Really Simple Syndication” (RSS) standard, RSL terms can be applied to protect any digital content, including webpages, books, videos, and datasets. The new standard supports “a range of licensing, usage, and royalty models, including free, attribution, subscription, pay-per-crawl (publishers get compensated every time an AI application crawls their content), and pay-per-inference (publishers get compensated every time an AI application uses their content to generate a response),” the press release said.
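As a rough illustration of how those royalty models differ in what triggers a payment, here is a minimal Python sketch; the event names, rates, and structure are illustrative assumptions on our part, not anything defined by the published RSL spec:

from dataclasses import dataclass

@dataclass
class RslTerms:
    # One of: "free", "attribution", "subscription",
    # "pay-per-crawl", or "pay-per-inference".
    model: str
    rate_usd: float = 0.0

def royalty_due(terms: RslTerms, event: str) -> float:
    """Return what a single event owes the publisher under the given terms."""
    if terms.model == "pay-per-crawl" and event == "crawl":
        return terms.rate_usd  # owed each time an AI application crawls the content
    if terms.model == "pay-per-inference" and event == "inference":
        return terms.rate_usd  # owed each time the content informs a generated response
    return 0.0  # free, attribution, and subscription set no per-event fee

# Pay-per-inference terms charge on use, not on crawling:
# royalty_due(RslTerms("pay-per-inference", 0.001), "crawl") returns 0.0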

Leeds told Ars that the idea to use the RSS “playbook” to roll out the RSL standard arose after he invited Walther to speak to University of California, Berkeley students at the end of last year. That’s when the longtime friends, who both have search backgrounds, began pondering how AI had changed the search industry: publishers today are forced to compete with AI outputs that reference their own content, even as search traffic nosedives.

Walther had watched the RSS standard quickly become adopted by millions of sites, and he realized that RSS had actually always been a licensing standard, Leeds said. Essentially, by adopting the RSS standard, publishers agreed to let search engines license a “bit” of their content in exchange for search traffic, and Walther realized that it could be just as straightforward to add AI licensing terms the same way. That way, publishers could strive to recapture lost search revenue by agreeing to license all or part of their content to train AI in return for payment each time AI outputs link to their content.

Leeds told Ars that the RSL standard doesn’t just benefit publishers, though. It also solves a problem for AI companies, which have complained in litigation over AI scraping that there is no effective way to license content across the web.

“We have listened to them, and what we’ve heard them say is… we need a new protocol,” Leeds said. With the RSL standard, AI firms get a “scalable way to get all the content” they want, while setting an incentive that they’ll only have to pay for the best content that their models actually reference.

“If they’re using it, they pay for it, and if they’re not using it, they don’t pay for it,” Leeds said.

No telling yet how AI firms will react to RSL

At this point, it’s hard to say if AI companies will embrace the RSL standard. Ars reached out to Google, Meta, OpenAI, and xAI—some of the big tech companies whose crawlers have drawn scrutiny—to see if it was technically feasible to pay publishers for every output referencing their content. xAI did not respond, and the other companies declined to comment without further detail about the standard, appearing to have not yet considered how a licensing layer beefing up robots.txt could impact their scraping.

Today will likely be the first chance for AI companies to wrap their heads around the idea of paying publishers per output. Leeds confirmed that the RSL Collective did not consult with AI companies when developing the RSL standard.

But AI companies know that they need a constant stream of fresh content to keep their tools relevant and to continually innovate, Leeds suggested. In that way, the RSL standard “supports what supports them,” Leeds said, “and it creates the appropriate incentive system” to create sustainable royalty streams for creators and ensure that human creativity doesn’t wane as AI evolves.

While we’ll have to wait to see how AI firms react to RSL, early adopters of the standard celebrated the launch today. That included Neil Vogel, CEO of People Inc., who said that “RSL moves the industry forward—evolving from simply blocking unauthorized crawlers, to setting our licensing terms, for all AI use cases, at global web scale.”

Simon Wistow, co-founder of Fastly, suggested the solution “is a timely and necessary response to the shifting economics of the web.”

“By making it easy for publishers to define and enforce licensing terms, RSL lays the foundation for a healthy content ecosystem—one where innovation and investment in original work are rewarded, and where collaboration between publishers and AI companies becomes frictionless and mutually beneficial,” Wistow said.

Leeds noted that a key benefit of the RSL standard is that even small creators will now have an opportunity to generate revenue for helping to train AI. Tony Stubblebine, CEO of Medium, did not mince words when explaining the battle that bloggers face as AI crawlers threaten to divert their traffic without compensating them.

“Right now, AI runs on stolen content,” Stubblebine said. “Adopting this RSL Standard is how we force those AI companies to either pay for what they use, stop using it, or shut down.”

How will the RSL standard be enforced?

On the RSL standard site, publishers can find templated or customizable licensing terms to add to their robots.txt files, letting them adopt the RSL standard today and start protecting their content from unfettered AI scraping. Here’s an example of how machine-readable licensing terms could look when added directly to a robots.txt file:

# NOTICE: all crawlers and bots are strictly prohibited from using this
# content for AI training without complying with the terms of the RSL
# Collective AI royalty license. Any use of this content for AI training
# without a license is a violation of our intellectual property rights.
License: https://rslcollective.org/royalty.xml
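For a sense of how a crawler would consume those terms, here is a minimal Python sketch of license discovery, assuming only the “License:” directive shown above; the function name and parsing approach are ours, not part of any official RSL client library:

import urllib.request

def fetch_rsl_license_url(site: str) -> str | None:
    """Fetch a site's robots.txt and return its declared license URL, if any."""
    with urllib.request.urlopen(f"https://{site}/robots.txt") as resp:
        robots = resp.read().decode("utf-8", errors="replace")
    for line in robots.splitlines():
        # The example above declares terms via a "License:" line pointing
        # at a machine-readable royalty license document.
        if line.strip().lower().startswith("license:"):
            return line.split(":", 1)[1].strip()
    return None

# A compliant crawler would fetch and honor these terms before scraping:
# terms_url = fetch_rsl_license_url("example-publisher.com")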

Through RSL terms, publishers can automate licensing, and the cloud company Fastly is partnering with the collective to provide technical enforcement, which Leeds described as a bouncer that keeps unapproved bots away from valuable content. It seems likely that Cloudflare, which launched a pay-per-crawl program blocking greedy crawlers in July, could also help enforce the RSL standard.
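To make the “bouncer” metaphor concrete, here is a minimal sketch of request gating under stated assumptions: the agent registry and status codes are hypothetical, and real enforcement would run on infrastructure like Fastly’s rather than a local lookup:

# Hypothetical registry of crawlers that have accepted a publisher's RSL terms.
LICENSED_AGENTS = {"licensed-example-bot/1.0"}

def gate_request(user_agent: str) -> tuple[int, str]:
    """Toy bouncer: admit licensed crawlers, turn the rest away."""
    agent = user_agent.lower()
    looks_like_bot = any(hint in agent for hint in ("bot", "crawler", "spider"))
    if looks_like_bot and agent not in LICENSED_AGENTS:
        # 402 Payment Required points the crawler at the licensing terms.
        return 402, "License required: https://rslcollective.org/royalty.xml"
    return 200, "OK"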

For publishers, the standard “solves a business problem immediately,” Leeds told Ars, so the collective is hopeful that RSL will be rapidly and widely adopted. As further incentive, publishers can also rely on the RSL standard to “easily encrypt and license non-published, proprietary content to AI companies, including paywalled articles, books, videos, images, and data,” the RSL Collective site said, which could potentially expand AI firms’ data pool.

On top of technical enforcement, Leeds said that publishers and content creators could legally enforce the terms, noting that the recent $1.5 billion Anthropic settlement suggests “there’s real money at stake” if you don’t train AI “legitimately.”

Should the industry adopt the standard, it could “establish fair market prices and strengthen negotiation leverage for all publishers,” the press release said. And Leeds noted that it’s very common for regulations to follow industry solutions (consider the Digital Millennium Copyright Act). Since the RSL Collective is already in talks with lawmakers, Leeds thinks “there’s good reason to believe” that AI companies will soon “be forced to acknowledge” the standard.

“But even better than that,” Leeds said, “it’s in their interest” to adopt the standard.

With RSL, AI firms can license content at scale “in a way that’s fair [and] preserves the content that they need to make their products continue to innovate.”

Additionally, the RSL standard may solve a problem that risks gutting trust and interest in AI at this early stage.

Leeds noted that currently, AI outputs don’t provide “the best answer” to prompts but instead rely on mashing up answers from different sources to avoid taking too much content from one site. That means that not only do AI companies “spend an enormous amount of money on compute costs to do that,” but AI tools may also be more prone to hallucination in the process of “mashing up” source material “to make something that’s not the best answer because they don’t have the rights to the best answer.”

“The best answer could exist somewhere,” Leeds said. But “they’re spending billions of dollars to create hallucinations, and we’re talking about: Let’s just solve that with a licensing scheme that allows you to use the actual content in a way that solves the user’s query best.”

By transforming the “ecosystem” with a standard that’s “actually sustainable and fair,” Leeds said that AI companies could also ensure that humanity never gets to the point where “humans stop producing” and “turn to AI to reproduce what humans can’t.”

Failing to adopt the RSL standard would be bad for AI innovation, Leeds suggested, perhaps paving the way for AI to replace search with a “sort of self-fulfilling swap of bad content that actually one doesn’t have any current information, doesn’t have any current thinking, because it’s all based on old training information.”

To Leeds, the RSL standard is ultimately “about creating the system that allows the open web to continue. And that happens when we get adoption from everybody,” he said, insisting that “literally the small guys are as important as the big guys” in pushing the entire industry to change and fairly compensate creators.

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

Pay-per-output? AI firms blindsided by beefed up robots.txt instructions. Read More »

former-whatsapp-security-boss-in-lawsuit-likens-meta’s-culture-to-a-“cult”

Former WhatsApp security boss in lawsuit likens Meta’s culture to a “cult”

“This represented the first concrete step toward addressing WhatsApp’s fundamental data governance Failures,” the complaint stated. “Mr. Baig understood that Meta’s culture is like that of a cult where one cannot question any of the past work especially when it was approved by someone at a higher level than the individual who is raising the concern.” In the following years, Baig continued to press increasingly senior leaders to take action.

The letter outlined not only the improper access engineers had to WhatsApp user data but also a variety of other shortcomings, including a “failure to inventory user data” as required under privacy laws in California and the European Union and under the FTC settlement, a failure to locate data storage, an absence of systems for monitoring user data access, and a lack of the breach-detection capabilities that were standard at other companies.

Last year, Baig allegedly sent a “detailed letter” to Meta CEO Mark Zuckerberg and Jennifer Newstead, Meta’s general counsel, notifying them of what he said were violations of the FTC settlement and of Securities and Exchange Commission rules mandating the reporting of security vulnerabilities. The letter further alleged that Meta leaders were retaliating against him and that the central Meta security team had “falsified security reports to cover up decisions not to remediate data exfiltration risks.”

The lawsuit, alleging violations of the whistleblower protection provision of the Sarbanes-Oxley Act passed in 2002, said that in 2022, roughly 100,000 WhatsApp users had their accounts hacked every day. By last year, the complaint alleged, as many as 400,000 WhatsApp users were getting locked out of their accounts each day as a result of such account takeovers.

Baig also allegedly notified superiors that data scraping on the platform was a problem because WhatsApp failed to implement protections that are standard on other messaging platforms, such as Signal and Apple Messages. As a result, the former WhatsApp security head estimated that the pictures and names of some 400 million user profiles were improperly copied every day, often for use in account impersonation scams. The complaint stated:

Former WhatsApp security boss in lawsuit likens Meta’s culture to a “cult” Read More »