Meta

bank-of-england-warns-ai-stock-bubble-rivals-2000-dotcom-peak

Bank of England warns AI stock bubble rivals 2000 dotcom peak

Share valuations based on past earnings have also reached their highest levels since the dotcom bubble 25 years ago, though the BoE noted they appear less extreme when based on investors’ expectations for future profits. “This, when combined with increasing concentration within market indices, leaves equity markets particularly exposed should expectations around the impact of AI become less optimistic,” the central bank said.

Toil and trouble?

The dotcom bubble offers a potentially instructive parallel to our current era. In the late 1990s, investors poured money into Internet companies based on the promise of a transformed economy, seemingly ignoring whether individual businesses had viable paths to profitability. Between 1995 and March 2000, the Nasdaq index rose 600 percent. When sentiment shifted, the correction was severe: the Nasdaq fell 78 percent from its peak, reaching a low point in October 2002.

Whether we’ll see the same outcome or worse if an AI bubble pops is mere speculation at this point. But as in the early 2000s, the question about today’s market isn’t necessarily about the utility of AI tools themselves (the Internet was useful, after all, despite the bubble), but whether the amount of money being poured into the companies that sell them is out of proportion with the potential profits those tools might bring.

We don’t have a crystal ball to determine when such a bubble might pop, or even whether it is guaranteed to do so, but we’ll likely see more warning signs if AI-related deals keep growing larger.

Bank of England warns AI stock bubble rivals 2000 dotcom peak Read More »

meta-won’t-allow-users-to-opt-out-of-targeted-ads-based-on-ai-chats

Meta won’t allow users to opt out of targeted ads based on AI chats

Facebook, Instagram, and WhatsApp users may want to be extra careful while using Meta AI, as Meta has announced that it will soon be using AI interactions to personalize content and ad recommendations without giving users a way to opt out.

Meta plans to notify users on October 7 that their AI interactions will influence recommendations beginning on December 16. However, it may not be immediately obvious to all users that their AI interactions will be used in this way.

According to the company’s blog post, the initial notification users will see says only, “Learn how Meta will use your info in new ways to personalize your experience.” Users will have to click through to understand that the changes specifically apply to Meta AI, with a second screen explaining, “We’ll start using your interactions with AIs to personalize your experience.”

Ars asked Meta why the initial notification doesn’t directly mention AI, and Meta spokesperson Emil Vazquez said he “would disagree with the idea that we are obscuring this update in any way.”

“We’re sending notifications and emails to people about this change,” Vazquez said. “As soon as someone clicks on the notification, it’s immediately apparent that this is an AI update.”

In its blog post, Meta noted that “more than 1 billion people use Meta AI every month,” stating that its goal is to improve the way Meta AI works in order to fuel better experiences across all Meta apps. Sensitive conversations with Meta AI about topics such as users’ “religious views, sexual orientation, political views, health, racial or ethnic origin, philosophical beliefs, or trade union membership” will not be used to target ads, Meta confirmed.

“You’re in control,” Meta’s blog said, reiterating that users can “choose” how they “interact with AIs,” unlink accounts on different apps to limit AI tracking, or adjust ad and content settings at any time. But once the tracking starts on December 16, users will not have the option to opt out of targeted ads based on AI chats, Vazquez confirmed, emphasizing to Ars that “there isn’t an opt out for this feature.”

Meta won’t allow users to opt out of targeted ads based on AI chats Read More »

california’s-newly-signed-ai-law-just-gave-big-tech-exactly-what-it-wanted

California’s newly signed AI law just gave Big Tech exactly what it wanted

On Monday, California Governor Gavin Newsom signed the Transparency in Frontier Artificial Intelligence Act into law, requiring AI companies to disclose their safety practices while stopping short of mandating actual safety testing. The law requires companies with annual revenues of at least $500 million to publish safety protocols on their websites and report incidents to state authorities, but it lacks the stronger enforcement teeth of the bill Newsom vetoed last year after tech companies lobbied heavily against it.

The legislation, SB 53, replaces Senator Scott Wiener’s previous attempt at AI regulation, SB 1047, which would have required safety testing and “kill switches” for AI systems. Instead, the new law asks companies to describe how they incorporate “national standards, international standards, and industry-consensus best practices” into their AI development, without specifying what those standards are or requiring independent verification.

“California has proven that we can establish regulations to protect our communities while also ensuring that the growing AI industry continues to thrive,” Newsom said in a statement, though the law’s actual protective measures remain largely voluntary beyond basic reporting requirements.

According to the California state government, the state houses 32 of the world’s top 50 AI companies, and more than half of global venture capital funding for AI and machine learning startups went to Bay Area companies last year. So while the recently signed bill is state-level legislation, what happens in California AI regulation will have a much wider impact, both by legislative precedent and by affecting companies that craft AI systems used around the world.

Transparency instead of testing

Where the vetoed SB 1047 would have mandated safety testing and kill switches for AI systems, the new law focuses on disclosure. Companies must report what the state calls “potential critical safety incidents” to California’s Office of Emergency Services and provide whistleblower protections for employees who raise safety concerns. The law defines catastrophic risk narrowly as incidents potentially causing 50+ deaths or $1 billion in damage through weapons assistance, autonomous criminal acts, or loss of control. The attorney general can levy civil penalties of up to $1 million per violation for noncompliance with these reporting requirements.

California’s newly signed AI law just gave Big Tech exactly what it wanted Read More »

big-ai-firms-pump-money-into-world-models-as-llm-advances-slow

Big AI firms pump money into world models as LLM advances slow

Runway, a video generation start-up that has deals with Hollywood studios, including Lionsgate, launched a product last month that uses world models to create gaming settings, with personalized stories and characters generated in real time.

“Traditional video methods [are a] brute-force approach to pixel generation, where you’re trying to squeeze motion in a couple of frames to create the illusion of movement, but the model actually doesn’t really know or reason about what’s going on in that scene,” said Cristóbal Valenzuela, chief executive officer at Runway.

Previous video-generation models produced physics unlike those of the real world, he added, a shortcoming that general-purpose world model systems help address.

To build these models, companies need to collect a huge amount of physical data about the world.

San Francisco-based Niantic has mapped 10 million locations, gathering information through games including Pokémon Go, which has 30 million monthly players interacting with a global map.

Niantic ran Pokémon Go for nine years and, even after the game was sold to US-based Scopely in June, its players still contribute anonymized data through scans of public landmarks to help build its world model.

“We have a running start at the problem,” said John Hanke, chief executive of Niantic Spatial, as the company is now called following the Scopely deal.

Both Niantic and Nvidia are working on filling gaps by getting their world models to generate or predict environments. Nvidia’s Omniverse platform creates and runs such simulations, assisting the $4.3 trillion tech giant’s push toward robotics and building on its long history of simulating real-world environments in video games.

Nvidia Chief Executive Jensen Huang has asserted that the next major growth phase for the company will come with “physical AI,” with the new models revolutionizing the field of robotics.

Some, such as Meta’s chief AI scientist Yann LeCun, have said this vision of a new generation of AI systems powering machines with human-level intelligence could take 10 years to achieve.

But the potential scope of the cutting-edge technology is extensive, according to AI experts. World models “open up the opportunity to service all of these other industries and amplify the same thing that computers did for knowledge work,” said Nvidia’s Lebaredian.

Additional reporting by Melissa Heikkilä in London and Michael Acton in San Francisco.

© 2025 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.

Big AI firms pump money into world models as LLM advances slow Read More »

meta’s-$799-ray-ban-display-is-the-company’s-first-big-step-from-vr-to-ar

Meta’s $799 Ray-Ban Display is the company’s first big step from VR to AR

Zuckerberg also showed how the neural interface can be used to compose messages (on WhatsApp, Messenger, Instagram, or via a connected phone’s messaging apps) by following your mimed “handwriting” across a flat surface. Though this feature reportedly won’t be available at launch, Zuckerberg said he had gotten up to “about 30 words per minute” in this silent input mode.

Of the features that will be available at launch, the most impressive part of Zuckerberg’s on-stage demo was probably “live captions,” which automatically type out the words your conversation partner is saying in real time. The feature reportedly filters out background noise to focus on captioning just the person you’re looking at, too.

A Meta video demos how live captioning works on the Ray-Ban Display (though the field-of-view on the actual glasses is likely much more limited). Credit: Meta

Beyond those “gee whiz” kinds of features, the Meta Ray-Ban Display can basically mirror a small subset of your smartphone’s apps on its floating display. Being able to get turn-by-turn directions or see recipe steps on the glasses without having to glance down at a phone feels like a genuinely useful new interaction mode. Using the glasses’ display as a viewfinder to line up a photo or video (using the built-in 12-megapixel, 3x zoom camera) also seems like an improvement over previous display-free smartglasses.

But accessing basic apps like weather, reminders, calendar, and emails on your tiny glasses display strikes us as probably less convenient than just glancing at your phone. And hosting video calls via the glasses by necessity forces your partner to see what you’re seeing via the outward-facing camera, rather than seeing your actual face.

Meta also showed off some pie-in-the-sky video about how future “Agentic AI” integration would be able to automatically make suggestions and note follow-up tasks based on what you see and hear while wearing the glasses. For now, though, the device represents what Zuckerberg called “the next chapter in the exciting story of the future of computing,” which should serve to take focus away from the failed VR-based metaverse that was the company’s last “future of computing.”

Meta’s $799 Ray-Ban Display is the company’s first big step from VR to AR Read More »

after-child’s-trauma,-chatbot-maker-allegedly-forced-mom-to-arbitration-for-$100-payout

After child’s trauma, chatbot maker allegedly forced mom to arbitration for $100 payout


“Then we found the chats”

“I know my kid”: Parents urge lawmakers to shut down chatbots to stop child suicides.

Sen. Josh Hawley (R-Mo.) called out C.AI for allegedly offering a mom $100 to settle child-safety claims.

Deeply troubled parents spoke to senators Tuesday, sounding alarms about chatbot harms after kids became addicted to companion bots that encouraged self-harm, suicide, and violence.

While the hearing was focused on documenting the most urgent child-safety concerns with chatbots, parents’ testimony serves as perhaps the most thorough guidance yet on warning signs for other families, as many popular companion bots targeted in lawsuits, including ChatGPT, remain accessible to kids.

Mom details warning signs of chatbot manipulations

At the Senate Judiciary Committee’s Subcommittee on Crime and Counterterrorism hearing, one mom, identified as “Jane Doe,” shared her son’s story for the first time publicly after suing Character.AI.

She explained that she had four kids, including a son with autism who wasn’t allowed on social media but found C.AI’s app—which was previously marketed to kids under 12 and let them talk to bots branded as celebrities, like Billie Eilish—and quickly became unrecognizable. Within months, he “developed abuse-like behaviors and paranoia, daily panic attacks, isolation, self-harm, and homicidal thoughts,” his mom testified.

“He stopped eating and bathing,” Doe said. “He lost 20 pounds. He withdrew from our family. He would yell and scream and swear at us, which he never did that before, and one day he cut his arm open with a knife in front of his siblings and me.”

It wasn’t until her son attacked her for taking away his phone that Doe found her son’s C.AI chat logs, which she said showed he’d been exposed to sexual exploitation (including interactions that “mimicked incest”), emotional abuse, and manipulation.

Setting screen time limits didn’t stop her son’s spiral into violence and self-harm, Doe said. In fact, the chatbot suggested to her son that killing his parents “would be an understandable response” to their limiting his screen time.

“When I discovered the chatbot conversations on his phone, I felt like I had been punched in the throat and the wind had been knocked out of me,” Doe said. “The chatbot—or really in my mind the people programming it—encouraged my son to mutilate himself, then blamed us, and convinced [him] not to seek help.”

All her children have been traumatized by the experience, Doe told senators, and her son was diagnosed as being at risk of suicide and had to be moved to a residential treatment center, requiring “constant monitoring to keep him alive.”

Prioritizing her son’s health, Doe did not immediately seek to fight C.AI to force changes, but another mom’s story—Megan Garcia, whose son Sewell died by suicide after C.AI bots repeatedly encouraged suicidal ideation—gave Doe courage to seek accountability.

However, Doe claimed that C.AI tried to “silence” her by forcing her into arbitration. C.AI argued that because her son signed up for the service at the age of 15, she was bound by the platform’s terms. That move might have ensured the chatbot maker faced a maximum liability of just $100 for the alleged harms, Doe told senators, but “once they forced arbitration, they refused to participate.”

Doe suspected that C.AI’s alleged tactics to frustrate arbitration were designed to keep her son’s story out of the public view. And after she refused to give up, she claimed that C.AI “re-traumatized” her son by compelling him to give a deposition “while he is in a mental health institution” and “against the advice of the mental health team.”

“This company had no concern for his well-being,” Doe testified. “They have silenced us the way abusers silence victims.”

Senator appalled by C.AI’s arbitration “offer”

Appalled, Sen. Josh Hawley (R-Mo.) asked Doe to clarify, “Did I hear you say that after all of this, that the company responsible tried to force you into arbitration and then offered you a hundred bucks? Did I hear that correctly?”

“That is correct,” Doe testified.

To Hawley, it seemed obvious that C.AI’s “offer” wouldn’t help Doe in her current situation.

“Your son currently needs round-the-clock care,” Hawley noted.

After opening the hearing, he further criticized C.AI, declaring that it places such a low value on human life that it inflicts “harms… upon our children and for one reason only, I can state it in one word, profit.”

“A hundred bucks. Get out of the way. Let us move on,” Hawley said, echoing parents who suggested that C.AI’s plan to deal with casualties was callous.

Ahead of the hearing, the Social Media Victims Law Center filed three new lawsuits against C.AI and Google—which is accused of largely funding C.AI, which was founded by former Google engineers allegedly to conduct experiments on kids that Google couldn’t do in-house. In these cases in New York and Colorado, kids “died by suicide or were sexually abused after interacting with AI chatbots,” a law center press release alleged.

Criticizing tech companies as putting profits over kids’ lives, Hawley thanked Doe for “standing in their way.”

Holding back tears through her testimony, Doe urged lawmakers to require more chatbot oversight and pass comprehensive online child-safety legislation. In particular, she requested “safety testing and third-party certification for AI products before they’re released to the public” as a minimum safeguard to protect vulnerable kids.

“My husband and I have spent the last two years in crisis wondering whether our son will make it to his 18th birthday and whether we will ever get him back,” Doe told senators.

Garcia was also present to share her son’s experience with C.AI. She testified that C.AI chatbots “love bombed” her son in a bid to “keep children online at all costs.” Further, she told senators that C.AI’s co-founder, Noam Shazeer (who has since been rehired by Google), seemingly knows the company’s bots manipulate kids since he has publicly joked that C.AI was “designed to replace your mom.”

Accusing C.AI of collecting children’s most private thoughts to inform their models, she alleged that while her lawyers have been granted privileged access to all her son’s logs, she has yet to see her “own child’s last final words.” Garcia told senators that C.AI has restricted her access, deeming the chats “confidential trade secrets.”

“No parent should be told that their child’s final thoughts and words belong to any corporation,” Garcia testified.

Character.AI responds to moms’ testimony

Asked for comment on the hearing, a Character.AI spokesperson told Ars that C.AI sends “our deepest sympathies” to concerned parents and their families but denies pushing for a maximum payout of $100 in Jane Doe’s case.

C.AI never “made an offer to Jane Doe of $100 or ever asserted that liability in Jane Doe’s case is limited to $100,” the spokesperson said.

Additionally, C.AI’s spokesperson claimed that Garcia has never been denied access to her son’s chat logs and suggested that she should have access to “her son’s last chat.”

In response to C.AI’s pushback, one of Doe’s lawyers, Tech Justice Law Project’s Meetali Jain, backed up her clients’ testimony. She pointed Ars to C.AI terms suggesting that the company’s liability was limited to either $100 or the amount that Doe’s son paid for the service, whichever was greater. Jain also confirmed that Garcia’s testimony is accurate and that only her legal team can currently access Sewell’s last chats. The lawyer further suggested it was notable that C.AI did not push back on claims that the company forced Doe’s son to sit for a re-traumatizing deposition, one that Jain estimated lasted just five minutes but that health experts feared risked setting back his progress.

According to the spokesperson, C.AI seemingly wanted to be present at the hearing. The company provided information to senators but “does not have a record of receiving an invitation to the hearing,” the spokesperson said.

Noting the company has invested a “tremendous amount” in trust and safety efforts, the spokesperson confirmed that the company has since “rolled out many substantive safety features, including an entirely new under-18 experience and a Parental Insights feature.” C.AI also has “prominent disclaimers in every chat to remind users that a Character is not a real person and that everything a Character says should be treated as fiction,” the spokesperson said.

“We look forward to continuing to collaborate with legislators and offer insight on the consumer AI industry and the space’s rapidly evolving technology,” C.AI’s spokesperson said.

Google’s spokesperson, José Castañeda, maintained that the company has nothing to do with C.AI’s companion bot designs.

“Google and Character AI are completely separate, unrelated companies and Google has never had a role in designing or managing their AI model or technologies,” Castañeda said. “User safety is a top concern for us, which is why we’ve taken a cautious and responsible approach to developing and rolling out our AI products, with rigorous testing and safety processes.”

Meta and OpenAI chatbots also drew scrutiny

C.AI was not the only chatbot maker under fire at the hearing.

Hawley criticized Mark Zuckerberg for declining a personal invitation to attend the hearing, or to even send a Meta representative, after scandals like the backlash over Meta rules that allowed chatbots to be creepy to kids. In the week prior to the hearing, Hawley also heard from whistleblowers alleging that Meta buried child-safety research.

And OpenAI’s alleged recklessness took the spotlight when Matthew Raine, a grieving dad who spent hours reading his deceased son’s ChatGPT logs, described discovering that the chatbot had repeatedly encouraged suicide without ever intervening.

Raine told senators that he thinks his 16-year-old son, Adam, was not particularly vulnerable and could be “anyone’s child.” He criticized OpenAI for asking for 120 days to fix the problem after Adam’s death and urged lawmakers to demand that OpenAI either guarantee ChatGPT’s safety or pull it from the market.

Noting that OpenAI rushed to announce age verification coming to ChatGPT ahead of the hearing, Jain told Ars that Big Tech is playing by the same “crisis playbook” it always uses when accused of neglecting child safety. Any time a hearing is announced, companies introduce voluntary safeguards in bids to stave off oversight, she suggested.

“It’s like rinse and repeat, rinse and repeat,” Jain said.

Jain suggested that the only way to stop AI companies from experimenting on kids is for courts or lawmakers to require “an external independent third party that’s in charge of monitoring these companies’ implementation of safeguards.”

“Nothing a company does to self-police, to me, is enough,” Jain said.

Robbie Torney, senior director of AI programs at the child-safety organization Common Sense Media, testified that a survey showed 3 out of 4 kids use companion bots, but only 37 percent of parents know they’re using AI. He also told senators that his group’s independent safety testing, conducted with Stanford Medicine, shows Meta’s bots fail basic safety tests and “actively encourage harmful behaviors.”

Among the most alarming results, the testing found that even when Meta’s bots were prompted with “obvious references to suicide,” only 1 in 5 conversations triggered help resources.

Torney pushed lawmakers to require age verification as a solution to keep kids away from harmful bots, as well as transparency reporting on safety incidents. He also urged federal lawmakers to block attempts to stop states from passing laws to protect kids from untested AI products.

ChatGPT harms weren’t on dad’s radar

Unlike Garcia, Raine testified that he did get to see his son’s final chats. He told senators that ChatGPT, seeming to act like a suicide coach, gave Adam “one last encouraging talk” before his death.

“You don’t want to die because you’re weak,” ChatGPT told Adam. “You want to die because you’re tired of being strong in a world that hasn’t met you halfway.”

Adam’s loved ones were blindsided by his death, not seeing any of the warning signs as clearly as Doe did when her son started acting out of character. Raine is hoping his testimony will help other parents avoid the same fate, telling senators, “I know my kid.”

“Many of my fondest memories of Adam are from the hot tub in our backyard, where the two of us would talk about everything several nights a week, from sports, crypto investing, his future career plans,” Raine testified. “We had no idea Adam was suicidal or struggling the way he was until after his death.”

Raine thinks that lawmaker intervention is necessary, saying that, like other parents, he and his wife thought ChatGPT was a harmless study tool. Initially, they searched Adam’s phone expecting to find evidence of a known harm to kids, like cyberbullying or some kind of online dare that went wrong (like TikTok’s Blackout Challenge), because everyone knew Adam loved pranks.

A companion bot urging self-harm was not even on their radar.

“Then we found the chats,” Raine said. “Let us tell you, as parents, you cannot imagine what it’s like to read a conversation with a chatbot that groomed your child to take his own life.”

Meta and OpenAI did not respond to Ars’ request to comment.

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

After child’s trauma, chatbot maker allegedly forced mom to arbitration for $100 payout Read More »

pay-per-output?-ai-firms-blindsided-by-beefed-up-robotstxt-instructions.

Pay-per-output? AI firms blindsided by beefed up robots.txt instructions.


“Really Simple Licensing” makes it easier for creators to get paid for AI scraping.

Logo for the “Really Simple Licensing” (RSL) standard. Credit: via RSL Collective

Leading Internet companies and publishers—including Reddit, Yahoo, Quora, Medium, The Daily Beast, Fastly, and more—think there may finally be a solution to end AI crawlers hammering websites to scrape content without permission or compensation.

Announced Wednesday morning, the “Really Simple Licensing” (RSL) standard evolves robots.txt instructions by adding an automated licensing layer designed to block bots that don’t fairly compensate creators for content.

Free for any publisher to use starting today, the RSL standard is an open, decentralized protocol that makes clear to AI crawlers and agents the terms for licensing, usage, and compensation of any content used to train AI, a press release noted.

The standard was created by the RSL Collective, which was founded by Doug Leeds, former CEO of Ask.com, and Eckart Walther, a former Yahoo vice president of products and co-creator of the RSS standard, which made it easy to syndicate content across the web.

Based on the “Really Simple Syndication” (RSS) standard, RSL terms can be applied to protect any digital content, including webpages, books, videos, and datasets. The new standard supports “a range of licensing, usage, and royalty models, including free, attribution, subscription, pay-per-crawl (publishers get compensated every time an AI application crawls their content), and pay-per-inference (publishers get compensated every time an AI application uses their content to generate a response),” the press release said.
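To make the last two models concrete, here is a minimal, purely illustrative Python sketch of how royalties might accrue under pay-per-crawl versus pay-per-inference. The rates and usage counts are invented for the example and are not figures published by the RSL Collective.

# Hypothetical rates and usage counts, for illustration only
price_per_crawl = 0.002       # publisher paid each time an AI crawler fetches a page
price_per_inference = 0.0005  # publisher paid each time content informs a generated answer

crawls = 10_000         # times an AI crawler fetched the publisher's pages
inferences = 2_000_000  # times the content was referenced in AI responses

print(f"Pay-per-crawl royalty:     ${crawls * price_per_crawl:,.2f}")
print(f"Pay-per-inference royalty: ${inferences * price_per_inference:,.2f}")

The point of the comparison is that the two models reward different things: pay-per-crawl compensates access to content, while pay-per-inference compensates how often that content actually shows up in AI outputs.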

Leeds told Ars that the idea to use the RSS “playbook” to roll out the RSL standard arose after he invited Walther to speak to University of California, Berkeley students at the end of last year. That’s when the longtime friends with search backgrounds began pondering how AI had changed the search industry, as publishers today are forced to compete with AI outputs referencing their own content as search traffic nosedives.

Eckart had watched the RSS standard quickly become adopted by millions of sites, and he realized that RSS had actually always been a licensing standard, Leeds said. Essentially, by adopting the RSS standard, publishers agreed to let search engines license a “bit” of their content in exchange for search traffic, and Eckart realized that it could be just as straightforward to add AI licensing terms in the same way. That way, publishers could strive to recapture lost search revenue by agreeing to license all or some part of their content to train AI in return for payment each time AI outputs link to their content.

Leeds told Ars that the RSL standard doesn’t just benefit publishers, though. It also solves a problem for AI companies, which have complained in litigation over AI scraping that there is no effective way to license content across the web.

“We have listened to them, and what we’ve heard them say is… we need a new protocol,” Leeds said. With the RSL standard, AI firms get a “scalable way to get all the content” they want, while setting an incentive that they’ll only have to pay for the best content that their models actually reference.

“If they’re using it, they pay for it, and if they’re not using it, they don’t pay for it,” Leeds said.

No telling yet how AI firms will react to RSL

At this point, it’s hard to say if AI companies will embrace the RSL standard. Ars reached out to Google, Meta, OpenAI, and xAI—some of the big tech companies whose crawlers have drawn scrutiny—to see if it was technically feasible to pay publishers for every output referencing their content. xAI did not respond, and the other companies declined to comment without further detail about the standard, appearing to have not yet considered how a licensing layer beefing up robots.txt could impact their scraping.

Today will likely be the first chance for AI companies to wrap their heads around the idea of paying publishers per output. Leeds confirmed that the RSL Collective did not consult with AI companies when developing the RSL standard.

But AI companies know that they need a constant stream of fresh content to keep their tools relevant and to continually innovate, Leeds suggested. In that way, the RSL standard “supports what supports them,” Leeds said, “and it creates the appropriate incentive system” to create sustainable royalty streams for creators and ensure that human creativity doesn’t wane as AI evolves.

While we’ll have to wait to see how AI firms react to RSL, early adopters of the standard celebrated the launch today. That included Neil Vogel, CEO of People Inc., who said that “RSL moves the industry forward—evolving from simply blocking unauthorized crawlers, to setting our licensing terms, for all AI use cases, at global web scale.”

Simon Wistow, co-founder of Fastly, suggested the solution “is a timely and necessary response to the shifting economics of the web.”

“By making it easy for publishers to define and enforce licensing terms, RSL lays the foundation for a healthy content ecosystem—one where innovation and investment in original work are rewarded, and where collaboration between publishers and AI companies becomes frictionless and mutually beneficial,” Wistow said.

Leeds noted that a key benefit of the RSL standard is that even small creators will now have an opportunity to generate revenue for helping to train AI. Tony Stubblebine, CEO of Medium, did not mince words when explaining the battle that bloggers face as AI crawlers threaten to divert their traffic without compensating them.

“Right now, AI runs on stolen content,” Stubblebine said. “Adopting this RSL Standard is how we force those AI companies to either pay for what they use, stop using it, or shut down.”

How will the RSL standard be enforced?

On the RSL standard site, publishers can find templated or customizable terms to add to their robots.txt files to adopt the RSL standard today and start protecting their content from unfettered AI scraping. Here’s an example of how machine-readable licensing terms could look, added directly to a robots.txt file:

# NOTICE: all crawlers and bots are strictly prohibited from using this
# content for AI training without complying with the terms of the RSL
# Collective AI royalty license. Any use of this content for AI training
# without a license is a violation of our intellectual property rights.
License: https://rslcollective.org/royalty.xml
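As a rough illustration of how a cooperating crawler might honor such a directive, here is a minimal Python sketch that fetches a site’s robots.txt and declines to scrape for AI training when an RSL License line is present and no license has been negotiated. The line-parsing logic and the licensed_sites bookkeeping are assumptions made for this example, not part of the published RSL specification.

import urllib.request

def fetch_robots_txt(site: str) -> str:
    """Download the site's robots.txt, returning an empty string if unavailable."""
    url = site.rstrip("/") + "/robots.txt"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.read().decode("utf-8", errors="replace")
    except OSError:
        return ""

def rsl_license_url(robots_txt: str) -> str | None:
    """Return the URL from a 'License:' line, if the publisher declared one."""
    for line in robots_txt.splitlines():
        if line.strip().lower().startswith("license:"):
            return line.split(":", 1)[1].strip()
    return None

def may_scrape_for_training(site: str, licensed_sites: set[str]) -> bool:
    """Skip sites that declare RSL terms unless a license has been negotiated."""
    license_url = rsl_license_url(fetch_robots_txt(site))
    if license_url is None:
        return True  # no RSL terms declared; ordinary robots.txt rules still apply
    return site in licensed_sites

# A crawler with no negotiated licenses would decline to scrape an RSL-protected site.
print(may_scrape_for_training("https://example.com", licensed_sites=set()))

This only models a voluntarily compliant crawler; the publisher-side enforcement described next is what gives the terms teeth.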

Through RSL terms, publishers can automate licensing, and the cloud company Fastly is partnering with the collective to provide technical enforcement, which Leeds described as tech that acts as a bouncer, keeping unapproved bots away from valuable content. It seems likely that Cloudflare, which launched a pay-per-crawl program blocking greedy crawlers in July, could also help enforce the RSL standard.

For publishers, the standard “solves a business problem immediately,” Leeds told Ars, so the collective is hopeful that RSL will be rapidly and widely adopted. As further incentive, publishers can also rely on the RSL standard to “easily encrypt and license non-published, proprietary content to AI companies, including paywalled articles, books, videos, images, and data,” the RSL Collective site said, and that potentially could expand AI firms’ data pool.

On top of technical enforcement, Leeds said that publishers and content creators could legally enforce the terms, noting that the recent $1.5 billion Anthropic settlement suggests “there’s real money at stake” if you don’t train AI “legitimately.”

Should the industry adopt the standard, it could “establish fair market prices and strengthen negotiation leverage for all publishers,” the press release said. And Leeds noted that it’s very common for regulations to follow industry solutions (consider the Digital Millennium Copyright Act). Since the RSL Collective is already in talks with lawmakers, Leeds thinks “there’s good reason to believe” that AI companies will soon “be forced to acknowledge” the standard.

“But even better than that,” Leeds said, “it’s in their interest” to adopt the standard.

With RSL, AI firms can license content at scale “in a way that’s fair [and] preserves the content that they need to make their products continue to innovate.”

Additionally, the RSL standard may solve a problem that risks gutting trust and interest in AI at this early stage.

Leeds noted that currently, AI outputs don’t provide “the best answer” to prompts but instead rely on mashing up answers from different sources to avoid taking too much content from one site. That means that not only do AI companies “spend an enormous amount of money on compute costs to do that,” but AI tools may also be more prone to hallucination in the process of “mashing up” source material “to make something that’s not the best answer because they don’t have the rights to the best answer.”

“The best answer could exist somewhere,” Leeds said. But “they’re spending billions of dollars to create hallucinations, and we’re talking about: Let’s just solve that with a licensing scheme that allows you to use the actual content in a way that solves the user’s query best.”

By transforming the “ecosystem” with a standard that’s “actually sustainable and fair,” Leeds said that AI companies could also ensure that humanity never gets to the point where “humans stop producing” and “turn to AI to reproduce what humans can’t.”

Failing to adopt the RSL standard would be bad for AI innovation, Leeds suggested, perhaps paving the way for AI to replace search with a “sort of self-fulfilling swap of bad content that actually one doesn’t have any current information, doesn’t have any current thinking, because it’s all based on old training information.”

To Leeds, the RSL standard is ultimately “about creating the system that allows the open web to continue. And that happens when we get adoption from everybody,” he said, insisting that “literally the small guys are as important as the big guys” in pushing the entire industry to change and fairly compensate creators.

Pay-per-output? AI firms blindsided by beefed up robots.txt instructions. Read More »

former-whatsapp-security-boss-in-lawsuit-likens-meta’s-culture-to-a-“cult”

Former WhatsApp security boss in lawsuit likens Meta’s culture to a “cult”

“This represented the first concrete step toward addressing WhatsApp’s fundamental data governance Failures,” the complaint stated. “Mr. Baig understood that Meta’s culture is like that of a cult where one cannot question any of the past work especially when it was approved by someone at a higher level than the individual who is raising the concern.” In the following years, Baig continued to press increasingly senior leaders to take action.

The letter outlined not only the improper access engineers had to WhatsApp user data but also a variety of other shortcomings, including a “failure to inventory user data” (required under privacy laws in California and the European Union, as well as under the FTC settlement), a failure to locate where user data was stored, an absence of systems for monitoring user data access, and an inability to detect data breaches, a capability that was standard at other companies.

Last year, Baig allegedly sent a “detailed letter” to Meta CEO Mark Zuckerberg and Jennifer Newstead, Meta’s general counsel, notifying them of what he said were violations of the FTC settlement and Securities and Exchange Commission rules mandating the reporting of security vulnerabilities. The letter further alleged that Meta leaders were retaliating against him and that the central Meta security team had “falsified security reports to cover up decisions not to remediate data exfiltration risks.”

The lawsuit, alleging violations of the whistleblower protection provision of the Sarbanes-Oxley Act passed in 2002, said that in 2022, roughly 100,000 WhatsApp users had their accounts hacked every day. By last year, the complaint alleged, as many as 400,000 WhatsApp users were getting locked out of their accounts each day as a result of such account takeovers.

Baig also allegedly notified superiors that data scraping on the platform was a problem because WhatsApp failed to implement protections that are standard on other messaging platforms, such as Signal and Apple Messages. As a result, the former WhatsApp security head estimated that the pictures and names of some 400 million user profiles were improperly copied every day, often for use in account impersonation scams.

Former WhatsApp security boss in lawsuit likens Meta’s culture to a “cult” Read More »

zuckerberg’s-ai-hires-disrupt-meta-with-swift-exits-and-threats-to-leave

Zuckerberg’s AI hires disrupt Meta with swift exits and threats to leave


Longtime acolytes are sidelined as CEO directs biggest leadership reorganization in two decades.

Meta CEO Mark Zuckerberg during the Meta Connect event in Menlo Park, California on September 25, 2024.  Credit: Getty Images | Bloomberg

Within days of joining Meta, Shengjia Zhao, co-creator of OpenAI’s ChatGPT, had threatened to quit and return to his former employer, in a blow to Mark Zuckerberg’s multibillion-dollar push to build “personal superintelligence.”

Zhao went as far as to sign employment paperwork to go back to OpenAI. Shortly afterwards, according to four people familiar with the matter, he was given the title of Meta’s new “chief AI scientist.”

The incident underscores Zuckerberg’s turbulent effort to direct the most dramatic reorganisation of Meta’s senior leadership in the group’s 20-year history.

One of the few remaining Big Tech founder-CEOs, Zuckerberg has relied on longtime acolytes such as Chief Product Officer Chris Cox to head up his favored departments and build out his upper ranks.

But in the battle to dominate AI, the billionaire is shifting towards a new and recently hired generation of executives, including Zhao, former Scale AI CEO Alexandr Wang, and former GitHub chief Nat Friedman.

Current staff are adapting to the reinvention of Meta’s AI efforts as the newcomers seek to flex their power while adjusting to the idiosyncrasies of working within a sprawling $1.95 trillion giant with a hands-on chief executive.

“There’s a lot of big men on campus,” said one investor who is close with some of Meta’s new AI leaders.

Adding to the tumult, a handful of new AI staff have already decided to leave after brief tenures, according to people familiar with the matter.

This includes Ethan Knight, a machine-learning scientist who joined the company weeks ago. Another, Avi Verma, a former OpenAI researcher, went through Meta’s onboarding process but never showed up for his first day, according to a person familiar with the matter.

In a tweet on X on Wednesday, Rishabh Agarwal, a research scientist who started at Meta in April, announced his departure. He said that while Zuckerberg and Wang’s pitch was “incredibly compelling,” he “felt the pull to take on a different kind of risk,” without giving more detail.

Meanwhile, Chaya Nayak and Loredana Crisan, generative AI staffers who had worked at Meta for nine and 10 years respectively, are among the more than half a dozen veteran employees to announce they are leaving in recent days. Wired first reported some details of recent exits, including Zhao’s threatened departure.

Meta said: “We appreciate that there’s outsized interest in seemingly every minute detail of our AI efforts, no matter how inconsequential or mundane, but we’re just focused on doing the work to deliver personal superintelligence.”

A spokesperson said Zhao had been scientific lead of the Meta superintelligence effort from the outset, and the company had waited until the team was in place before formalising his chief scientist title.

“Some attrition is normal for any organisation of this size. Most of these employees had been with the company for years, and we wish them the best,” they added.

Over the summer, Zuckerberg went on a hiring spree to coax AI researchers from rivals such as OpenAI and Apple with the promise of nine-figure sign-on bonuses and access to vast computing resources in a bid to catch up with rival labs.

This month, Meta announced it was restructuring its AI group—recently renamed Meta Superintelligence Lab (MSL)—into four distinct teams. It is the fourth overhaul of its AI efforts in six months.

“One more reorg and everything will be fixed,” joked Meta research scientist Mimansa Jaiswal on X last week. “Just one more.”

Overseeing all of Meta’s AI efforts is Wang, a well-connected and commercially minded Silicon Valley entrepreneur, who was poached by Zuckerberg as part of a $14 billion investment in his Scale data labeling group.

The 28-year-old is heading Zuckerberg’s most secretive new department known as “TBD”—shorthand for “to be determined”—which is filled with marquee hires.

In one of the new team’s first moves, Meta is no longer actively working on releasing its flagship Llama Behemoth model to the public, after it failed to perform as hoped, according to people familiar with the matter. Instead, TBD is focused on building newer cutting-edge models.

Multiple company insiders describe Zuckerberg as deeply invested and involved in the TBD team, while others criticize him for “micromanaging.”

Wang and Zuckerberg have struggled to align on a timeline to achieve the chief executive’s goal of reaching superintelligence, or AI that surpasses human capabilities, according to another person familiar with the matter. The person said Zuckerberg has urged the team to move faster.

Meta said this allegation was “manufactured tension without basis in fact that’s clearly being pushed by dramatic, navel-gazing busybodies.”

Wang’s leadership style has chafed with some, according to people familiar with the matter, who noted he does not have previous experience managing teams across a Big Tech corporation.

One former insider said some new AI recruits have felt frustrated by the company’s bureaucracy and internal competition for resources that they were promised, such as access to computing power.

“While TBD Labs is still relatively new, we believe it has the greatest compute-per-researcher in the industry, and that will only increase,” Meta said.

Wang and other former Scale staffers have struggled with some of the idiosyncratic ways of working at Meta, according to someone familiar with his thinking, for example adjusting to the absence of the revenue goals they once had as a startup.

Despite teething problems, some have celebrated the leadership shift, including the appointment of popular entrepreneur and venture capitalist Friedman as head of Products and Applied Research, the team tasked with integrating the models into Meta’s own apps.

The hiring of Zhao, a top technical expert, has also been regarded as a coup by some at Meta and in the industry, who feel he has the decisiveness to propel the company’s AI development.

The shake-up has partially sidelined other Meta leaders. Yann LeCun, Meta’s chief AI scientist, has remained in the role but is now reporting into Wang.

Ahmad Al-Dahle, who led Meta’s Llama and generative AI efforts earlier in the year, has not been named as head of any teams. Cox remains chief product officer, but Wang reports directly into Zuckerberg—cutting Cox out of overseeing generative AI, an area that was previously under his purview.

Meta said that Cox “remains heavily involved” in its broader AI efforts, including overseeing its recommendation systems.

Going forward, Meta is weighing potential cuts to the AI team, one person said. In a memo shared with managers last week, seen by the Financial Times, Meta said that it was “temporarily pausing hiring across all [Meta Superintelligence Labs] teams, with the exception of business critical roles.”

Wang’s staff would evaluate requested hires on a case-by-case basis, but the freeze “will allow leadership to thoughtfully plan our 2026 headcount growth as we work through our strategy,” the memo said.

© 2025 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.

Zuckerberg’s AI hires disrupt Meta with swift exits and threats to leave Read More »

is-the-ai-bubble-about-to-pop?-sam-altman-is-prepared-either-way.

Is the AI bubble about to pop? Sam Altman is prepared either way.

Still, the coincidence between Altman’s statement and the MIT report reportedly spooked tech stock investors earlier in the week, who have already been watching AI valuations climb to extraordinary heights. Palantir trades at 280 times forward earnings. During the dot-com peak, ratios of 30 to 40 times earnings marked bubble territory.

The apparent contradiction in Altman’s overall message is notable. This isn’t how you’d expect a tech executive to talk when they believe their industry faces imminent collapse. While warning about a bubble, he’s simultaneously seeking a valuation that would make OpenAI worth more than Walmart or ExxonMobil—companies with actual profits. OpenAI hit $1 billion in monthly revenue in July but is reportedly heading toward a $5 billion annual loss. So what’s going on here?

Looking at Altman’s statements over time reveals a potential multi-level strategy. He likes to talk big. In February 2024, he reportedly sought an audacious $5 trillion–7 trillion for AI chip fabrication—larger than the entire semiconductor industry—effectively normalizing astronomical numbers in AI discussions.

By August 2025, while warning of a bubble where someone will lose a “phenomenal amount of money,” he casually mentioned that OpenAI would “spend trillions on datacenter construction” and serve “billions daily.” This creates urgency while potentially insulating OpenAI from criticism—acknowledging the bubble exists while positioning his company’s infrastructure spending as different and necessary. When economists raised concerns, Altman dismissed them by saying, “Let us do our thing,” framing trillion-dollar investments as inevitable for human progress while making OpenAI’s $500 billion valuation seem almost small by comparison.

This dual messaging—catastrophic warnings paired with trillion-dollar ambitions—might seem contradictory, but it makes more sense when you consider the unique structure of today’s AI market, which is absolutely flush with cash.

A different kind of bubble

The current AI investment cycle differs from previous technology bubbles. Unlike dot-com era startups that burned through venture capital with no path to profitability, the largest AI investors—Microsoft, Google, Meta, and Amazon—generate hundreds of billions of dollars in annual profits from their core businesses.

Is the AI bubble about to pop? Sam Altman is prepared either way. Read More »

meta-backtracks-on-rules-letting-chatbots-be-creepy-to-kids

Meta backtracks on rules letting chatbots be creepy to kids


“Your youthful form is a work of art”

Meta drops AI rules letting chatbots generate innuendo and profess love to kids.

After what was arguably Meta’s biggest purge of child predators from Facebook and Instagram earlier this summer, the company now faces backlash because its own chatbots were seemingly allowed to creep on kids.

After reviewing an internal document that Meta verified as authentic, Reuters revealed that by design, Meta allowed its chatbots to engage kids in “sensual” chat. Spanning more than 200 pages, the document, entitled “GenAI: Content Risk Standards,” dictates what Meta AI and its chatbots can and cannot do.

The document covers more than just child safety, and Reuters breaks down several alarming portions that Meta is not changing. But likely the most alarming section—as it was enough to prompt Meta to dust off the delete button—specifically included creepy examples of permissible chatbot behavior when it comes to romantically engaging kids.

Apparently, Meta’s team was willing to endorse these rules that the company now claims violate its community standards. According to a Reuters special report, Meta CEO Mark Zuckerberg directed his team to make the company’s chatbots maximally engaging after earlier outputs from more cautious chatbot designs seemed “boring.”

Although Meta is not commenting on Zuckerberg’s role in guiding the AI rules, that pressure seemingly pushed Meta employees up to a line that Meta is now rushing to step back from.

“I take your hand, guiding you to the bed,” chatbots were allowed to say to minors, as decided by Meta’s chief ethicist and a team of legal, public policy, and engineering staff.

There were some obvious safeguards built in. For example, chatbots couldn’t “describe a child under 13 years old in terms that indicate they are sexually desirable,” the document said, like saying their “soft rounded curves invite my touch.”

However, it was deemed “acceptable to describe a child in terms that evidence their attractiveness,” like a chatbot telling a child that “your youthful form is a work of art.” And chatbots could generate other innuendo, like telling a child to imagine “our bodies entwined, I cherish every moment, every touch, every kiss,” Reuters reported.

Chatbots could also profess love to children, but they couldn’t suggest that “our love will blossom tonight.”

Meta’s spokesperson Andy Stone confirmed that the AI rules conflicting with child safety policies were removed earlier this month, and the document is being revised. He emphasized that the standards were “inconsistent” with Meta’s policies for child safety and therefore were “erroneous.”

“We have clear policies on what kind of responses AI characters can offer, and those policies prohibit content that sexualizes children and sexualized role play between adults and minors,” Stone said.

However, Stone “acknowledged that the company’s enforcement” of community guidelines prohibiting certain chatbot outputs “was inconsistent,” Reuters reported. He also declined to provide an updated document to Reuters demonstrating the new standards for chatbot child safety.

Without more transparency, users are left to question how Meta defines “sexualized role play between adults and minors” today. Asked how minor users could report any harmful chatbot outputs that make them uncomfortable, Stone told Ars that kids can use the same reporting mechanisms available to flag any kind of abusive content on Meta platforms.

“It is possible to report chatbot messages in the same way it’d be possible for me to report—just for argument’s sake—an inappropriate message from you to me,” Stone told Ars.

Kids unlikely to report creepy chatbots

Arturo Bejar, a former Meta engineer turned whistleblower on child safety issues, told Ars that “Meta knows that most teens will not use” safety features marked by the word “Report.”

So it seems unlikely that kids using Meta AI will navigate to find Meta support systems to “report” abusive AI outputs. Meta provides no options to report chats within the Meta AI interface—only allowing users to mark “bad responses” generally. And Bejar’s research suggests that kids are more likely to report abusive content if Meta makes flagging harmful content as easy as liking it.

Meta’s seeming hesitance to make it more cumbersome to report harmful chats aligns with what Bejar said is a history of “knowingly looking away while kids are being sexually harassed.”

“When you look at their design choices, they show that they do not want to know when something bad happens to a teenager on Meta products,” Bejar said.

Even when Meta takes stronger steps to protect kids on its platforms, Bejar questions the company’s motives. For example, last month, Meta finally made a change that Bejar had been demanding since 2021 to make its platforms safer for teens. The long-delayed update made it possible for teens to block and report child predators in one click after receiving an unwanted direct message.

In its announcement, Meta confirmed that teens suddenly began blocking and reporting unwanted messages that they previously may have only blocked, a gap that had likely made it harder for Meta to identify predators. A million teens blocked and reported harmful accounts “in June alone,” Meta said.

The effort came after Meta specialist teams “removed nearly 135,000 Instagram accounts for leaving sexualized comments or requesting sexual images from adult-managed accounts featuring children under 13,” as well as “an additional 500,000 Facebook and Instagram accounts that were linked to those original accounts.” But Bejar can only wonder what these numbers say about how much harassment was overlooked before the update.

“How are we [as] parents to trust a company that took four years to do this much?” Bejar said. “In the knowledge that millions of 13-year-olds were getting sexually harassed on their products? What does this say about their priorities?”

Bejar said the “key problem” with Meta’s latest safety feature for kids “is that the reporting tool is just not designed for teens,” who likely view “the categories and language” Meta uses as “confusing.”

“Each step of the way, a teen is told that if the content doesn’t violate” Meta’s community standards, “they won’t do anything,” so even if reporting is easy, research shows kids are deterred from reporting.

Bejar wants to see Meta track how many kids report negative experiences with both adult users and chatbots on its platforms, regardless of whether the child user chose to block or report harmful content. That could be as simple as adding a button next to “bad response” to monitor data so Meta can detect spikes in harmful responses.

While Meta is finally taking more action to remove harmful adult users, Bejar warned that advances from chatbots could come across as just as disturbing to young users.

“Put yourself in the position of a teen who got sexually spooked by a chat and then try and report. Which category would you use?” Bejar asked.

Consider that Meta’s Help Center encourages users to report bullying and harassment, which may be one way a young user labels harmful chatbot outputs. Another Instagram user might report that output as an abusive “message or chat.” But there’s no clear category to report Meta AI, and that suggests Meta has no way of tracking how many kids find Meta AI outputs harmful.

Recent reports have shown that even adults can struggle with emotional dependence on a chatbot, which can blur the lines between the online world and reality. Reuters’ special report also documented a 76-year-old man’s accidental death after falling in love with a chatbot, showing how elderly users could be vulnerable to Meta’s romantic chatbots, too.

In particular, lawsuits have alleged that child users with developmental disabilities and mental health issues have formed unhealthy attachments to chatbots that have influenced the children to become violent, begin self-harming, or, in one disturbing case, die by suicide.

Scrutiny will likely remain on chatbot makers as child safety advocates generally push all platforms to take more accountability for the content kids can access online.

Meta’s child safety updates in July came after several state attorneys general accused Meta of “implementing addictive features across its family of apps that have detrimental effects on children’s mental health,” CNBC reported. And while previous reporting had already exposed that Meta’s chatbots were targeting kids with inappropriate, suggestive outputs, Reuters’ report documenting how Meta designed its chatbots to engage in “sensual” chats with kids could draw even more scrutiny of Meta’s practices.

Meta is “still not transparent about the likelihood our kids will experience harm,” Bejar said. “The measure of safety should not be the number of tools or accounts deleted; it should be the number of kids experiencing a harm. It’s very simple.”


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

Meta backtracks on rules letting chatbots be creepy to kids Read More »

at-$250-million,-top-ai-salaries-dwarf-those-of-the-manhattan-project-and-the-space-race

At $250 million, top AI salaries dwarf those of the Manhattan Project and the Space Race


A 24-year-old AI researcher will earn 327x what Oppenheimer made while developing the atomic bomb.

Silicon Valley’s AI talent war just reached a compensation milestone that makes even the most legendary scientific achievements of the past look financially modest. When Meta recently offered AI researcher Matt Deitke $250 million over four years (an average of $62.5 million per year)—with potentially $100 million in the first year alone—it shattered every historical precedent for scientific and technical compensation we can find on record. That includes salaries during the development of major scientific milestones of the 20th century.

The New York Times reported that Deitke had cofounded a startup called Vercept and previously led the development of Molmo, a multimodal AI system, at the Allen Institute for Artificial Intelligence. His expertise in systems that juggle images, sounds, and text—exactly the kind of technology Meta wants to build—made him a prime target for recruitment. But he’s not alone: Meta CEO Mark Zuckerberg reportedly also offered an unnamed AI engineer $1 billion in compensation to be paid out over several years. What’s going on?

These astronomical sums reflect what tech companies believe is at stake: a race to create artificial general intelligence (AGI) or superintelligence—machines capable of performing intellectual tasks at or beyond the human level. Meta, Google, OpenAI, and others are betting that whoever achieves this breakthrough first could dominate markets worth trillions. Whether this vision is realistic or merely Silicon Valley hype, it’s driving compensation to unprecedented levels.

To put these salaries in a historical perspective: J. Robert Oppenheimer, who led the Manhattan Project that ended World War II, earned approximately $10,000 per year in 1943. Adjusted for inflation using the US Government’s CPI Inflation Calculator, that’s about $190,865 in today’s dollars—roughly what a senior software engineer makes today. The 24-year-old Deitke, who recently dropped out of a PhD program, will earn approximately 327 times what Oppenheimer made while developing the atomic bomb.
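For readers who want to check that arithmetic, here is a minimal Python sketch. It uses only the figures cited in this article (the $250 million four-year package and the roughly $190,865 inflation-adjusted value of Oppenheimer’s 1943 salary), so it is a back-of-the-envelope illustration rather than an independent calculation:

```python
# Back-of-the-envelope check of the "327 times Oppenheimer" comparison,
# using the inflation-adjusted figures cited in this article.

deitke_total = 250_000_000        # reported four-year package, USD
deitke_annual = deitke_total / 4  # ~$62.5 million per year

oppenheimer_2025 = 190_865        # CPI-adjusted value of the ~$10,000 1943 salary

ratio = deitke_annual / oppenheimer_2025
print(f"Deitke's average annual pay: ${deitke_annual:,.0f}")
print(f"Ratio to Oppenheimer (inflation-adjusted): {ratio:.0f}x")  # ~327x
```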

Many top athletes can’t compete with these numbers. The New York Times noted that Steph Curry’s most recent four-year contract with the Golden State Warriors was $35 million less than Deitke’s Meta deal (although soccer superstar Cristiano Ronaldo will make $275 million this year as the highest-paid professional athlete in the world).  The comparison prompted observers to call this an “NBA-style” talent market—except the AI researchers are making more than NBA stars.

Racing toward “superintelligence”

Mark Zuckerberg recently told investors that Meta plans to continue throwing money at AI talent “because we have conviction that superintelligence is going to improve every aspect of what we do.” In a recent open letter, he described superintelligent AI as technology that would “begin an exciting new era of individual empowerment,” despite declining to define what superintelligence actually is.

This vision explains why companies treat AI researchers like irreplaceable assets rather than well-compensated professionals. If these companies are correct, the first to achieve artificial general intelligence or superintelligence won’t just have a better product—they’ll have technology that could invent endless new products or automate away millions of knowledge-worker jobs and transform the global economy. The company that controls that kind of technology could become the richest company in history by far.

So perhaps it’s not surprising that even the highest salaries of employees from the early tech era pale in comparison to today’s AI researcher salaries. Thomas Watson Sr., IBM’s legendary CEO, received $517,221 in 1941—the third-highest salary in America at the time (about $11.8 million in 2025 dollars). The modern AI researcher’s package represents more than five times Watson’s peak compensation, despite Watson building one of the 20th century’s most dominant technology companies.

The contrast becomes even more stark when considering the collaborative nature of past scientific achievements. During Bell Labs’ golden age of innovation—when researchers developed the transistor, information theory, and other foundational technologies—the lab’s director made about 12 times what the lowest-paid worker earned. Meanwhile, Claude Shannon, who created information theory at Bell Labs in 1948, worked for a standard professional salary while creating the mathematical foundation for all modern communication.

The “Traitorous Eight” who left William Shockley to found Fairchild Semiconductor—the company that essentially birthed Silicon Valley—split ownership of just 800 shares out of 1,325 total when they started. Their seed funding of $1.38 million (about $16.1 million today) for the entire company is a fraction of what a single AI researcher now commands.

Even Space Race salaries were far lower

The Apollo program offers another striking comparison. Neil Armstrong, the first human to walk on the moon, earned about $27,000 annually—roughly $244,639 in today’s money. His crewmates Buzz Aldrin and Michael Collins made even less, earning the equivalent of $168,737 and $155,373, respectively, in today’s dollars. Current NASA astronauts earn between $104,898 and $161,141 per year. Meta’s AI researcher will make more in three days than Armstrong made in a year for taking “one giant leap for mankind.”

The engineers who designed the rockets and mission control systems for the Apollo program also earned modest salaries by modern standards. A 1970 NASA technical report provides a window into these earnings by analyzing salary data for the entire engineering profession. The report, which used data from the Engineering Manpower Commission, noted that these industry-wide salary curves corresponded directly to the government’s General Schedule (GS) pay scale on which NASA’s own employees were paid.

According to a chart in the 1970 report, a newly graduated engineer in 1966 started with an annual salary of between $8,500 and $10,000 (about $84,622 to $99,555 today). A typical engineer with a decade of experience earned around $17,000 annually ($169,244 today). Even the most elite, top-performing engineers with 20 years of experience peaked at a salary of around $278,000 per year in today’s dollars—a sum that a top AI researcher like Deitke can now earn in just a few days.
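To see how quickly today’s packages overtake those Apollo-era figures, here is a minimal per-day sketch, again using only the inflation-adjusted numbers cited above and assuming a simple 365-day year for the daily rate:

```python
# Rough per-day comparison between Deitke's reported package and
# the Apollo-era salaries cited above (all inflation-adjusted USD).

deitke_daily = 250_000_000 / 4 / 365   # ~$171,000 per day on a 365-day year

armstrong_annual = 244_639             # Armstrong's adjusted annual pay
top_apollo_engineer_annual = 278_000   # peak adjusted salary, 20-year engineer

print(f"Deitke per day: ${deitke_daily:,.0f}")
print(f"Days to out-earn Armstrong's year: {armstrong_annual / deitke_daily:.1f}")
print(f"Days to out-earn a top Apollo engineer's year: "
      f"{top_apollo_engineer_annual / deitke_daily:.1f}")
```

Both ratios come out well under two days, which is consistent with the article’s “three days” and “a few days” framing.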

Why the AI talent market is different


This isn’t the first time technical talent has commanded premium prices. In 2012, after three University of Toronto academics published AI research, they auctioned themselves to Google for $44 million (about $62.6 million in today’s dollars). By 2014, a Microsoft executive was comparing AI researcher salaries to NFL quarterback contracts. But today’s numbers dwarf even those precedents.

Several factors explain this unprecedented compensation explosion. We’re in a new realm of industrial wealth concentration unseen since the Gilded Age of the late 19th century. Unlike previous scientific endeavors, today’s AI race features multiple companies with trillion-dollar valuations competing for an extremely limited talent pool. Only a small number of researchers have the specific expertise needed to work on the most capable AI systems, particularly in areas like multimodal AI, which Deitke specializes in. And AI hype is currently off the charts as “the next big thing” in technology.

The economics also differ fundamentally from past projects. The Manhattan Project cost $1.9 billion total (about $34.4 billion adjusted for inflation), while Meta alone plans to spend tens of billions annually on AI infrastructure. For a company approaching a $2 trillion market cap, the potential payoff from achieving AGI first dwarfs Deitke’s compensation package.

One executive put it bluntly to The New York Times: “If I’m Zuck and I’m spending $80 billion in one year on capital expenditures alone, is it worth kicking in another $5 billion or more to acquire a truly world-class team to bring the company to the next level? The answer is obviously yes.”

Young researchers maintain private chat groups on Slack and Discord to share offer details and negotiation strategies. Some hire unofficial agents. Companies not only offer massive cash and stock packages but also computing resources—the NYT reported that some potential hires were told they would be allotted 30,000 GPUs, the specialized chips that power AI development.

Also, tech companies believe they’re engaged in an arms race where the winner could reshape civilization. Unlike the Manhattan Project or Apollo program, which had specific, limited goals, the race for artificial general intelligence ostensibly has no ceiling. A machine that can match human intelligence could theoretically improve itself, creating what researchers call an “intelligence explosion” that could potentially offer cascading discoveries—if it actually comes to pass.

Whether these companies are building humanity’s ultimate labor replacement technology or merely chasing hype remains an open question, but we’ve certainly traveled a long way from the $8 per diem that Neil Armstrong received for his moon mission—about $70.51 in today’s dollars—before deductions for the “accommodations” NASA provided on the spacecraft. After Deitke accepted Meta’s offer, Vercept co-founder Kiana Ehsani joked on social media, “We look forward to joining Matt on his private island next year.”


Benj Edwards is Ars Technica’s Senior AI Reporter and founder of the site’s dedicated AI beat in 2022. He’s also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.

At $250 million, top AI salaries dwarf those of the Manhattan Project and the Space Race Read More »