Artificial Intelligence

Meta’s “AI superintelligence” effort sounds just like its failed “metaverse”


Zuckerberg and company talked up another supposed tech revolution four short years ago.

Artist’s conception of Mark Zuckerberg looking into our glorious AI-powered future. Credit: Facebook

In a memo to employees earlier this week, Meta CEO Mark Zuckerberg shared a vision for a near-future in which “personal [AI] superintelligence for everyone” forms “the beginning of a new era for humanity.” The newly formed Meta Superintelligence Labs—freshly staffed with multiple high-level acquisitions from OpenAI and other AI companies—will spearhead the development of “our next generation of models to get to the frontier in the next year or so,” Zuckerberg wrote.

Reading that memo, I couldn’t help but think of another “vision for the future” Zuckerberg shared not that long ago. At his 2021 Facebook Connect keynote, Zuckerberg laid out his plan for the metaverse, a virtual place where “you’re gonna be able to do almost anything you can imagine” and which would form the basis of “the next version of the Internet.”

“The future of the Internet” of the recent past. Credit: Meta

Zuckerberg believed in that vision so much at the time that he abandoned the well-known Facebook corporate brand in favor of the new name “Meta.” “I’m going to keep pushing and giving everything I’ve got to make this happen now,” Zuckerberg said at the time. Less than four years later, Zuckerberg seems to now be “giving everything [he’s] got” for a vision of AI “superintelligence,” reportedly offering pay packages of up to $300 million over four years to attract top talent from other AI companies (Meta has since denied those reports, saying, “The size and structure of these compensation packages have been misrepresented all over the place”).

Once again, Zuckerberg is promising that this new technology will revolutionize our lives and replace the ways we currently socialize and work on the Internet. But the utter failure (so far) of those over-the-top promises for the metaverse has us more than a little skeptical of how impactful Zuckerberg’s vision of “personal superintelligence for everyone” will truly be.

Meta-vision

Looking back at Zuckerberg’s 2021 Facebook Connect keynote shows just how hard the company was selling the promise of the metaverse at the time. Zuckerberg said the metaverse would represent an “even more immersive and embodied Internet” where “everything we do online today—connecting socially, entertainment, games, work—is going to be more natural and vivid.”

Mark Zuckerberg lays out his vision for the metaverse in 2021.

“Teleporting around the metaverse is going to be like clicking a link on the Internet,” Zuckerberg promised, and metaverse users would probably switch between “a photorealistic avatar for work, a stylized one for hanging out, and maybe even a fantasy one for gaming.” This kind of personalization would lead to “hundreds of thousands” of artists being able to make a living selling virtual metaverse goods that could be embedded in virtual or real-world environments.

“Lots of things that are physical today, like screens, will just be able to be holograms in the future,” Zuckerberg promised. “You won’t need a physical TV; it’ll just be a one-dollar hologram from some high school kid halfway across the world… we’ll be able to express ourselves in new joyful, completely immersive ways, and that’s going to unlock a lot of amazing new experiences.”

A pre-rendered concept video showed metaverse users playing poker in a zero-gravity space station with robot avatars, then pausing briefly to appreciate some animated 3D art a friend had encountered on the street. Another video showed a young woman teleporting via metaverse avatar to virtually join a friend attending a live concert in Tokyo, then buying virtual merch from the concert at a metaverse afterparty from the comfort of her home. Yet another showed old men playing chess on a park bench, even though one of the players was sitting across the country.

Meta-failure

Fast forward to 2025, and the current reality of Zuckerberg’s metaverse efforts bears almost no resemblance to anything shown or discussed back in 2021. Even enthusiasts describe Meta’s Horizon Worlds as a “depressing” and “lonely” experience characterized by “completely empty” venues. And Meta engineers anonymously gripe about metaverse tools that even employees actively avoid using and a messy codebase that was treated like “a 3D version of a mobile app.”

Even Meta employees reportedly don’t want to work in Horizon Workrooms. Credit: Facebook

The creation of a $50 million creator fund seems to have failed to encourage peeved creators to give the metaverse another chance. Things look a bit better if you expand your view past Meta’s own metaverse sandbox; the chaotic world of VR Chat attracts tens of thousands of daily users on Steam alone, for instance. Still, we’re a far cry from the replacement for the mobile Internet that Zuckerberg once trumpeted.

Then again, it’s possible that we just haven’t given Zuckerberg’s version of the metaverse enough time to develop. Back in 2021, he said that “a lot of this is going to be mainstream” within “the next five or 10 years.” That timeframe gives Meta at least a few more years to develop and release its long-teased, lightweight augmented reality glasses that the company showed off last year in the form of a prototype that reportedly still costs $10,000 per unit.

Zuckerberg shows off prototype AR glasses that could change the way we think about “the metaverse.” Credit: Bloomberg

Maybe those glasses will ignite widespread interest in the metaverse in a way that Meta’s bulky, niche VR goggles have utterly failed to. Regardless, after nearly four years and roughly $60 billion in VR-related losses, Meta thus far has surprisingly little to show for its massive investment in Zuckerberg’s metaverse vision.

Our AI future?

When I hear Zuckerberg talk about the promise of AI these days, it’s hard not to hear echoes of his monumental vision for the metaverse from 2021. If anything, Zuckerberg’s vision of our AI-powered future is even more grandiose than his view of the metaverse.

As with the metaverse, Zuckerberg now sees AI forming a replacement for the current version of the Internet. “Do you think in five years we’re just going to be sitting in our feed and consuming media that’s just video?” Zuckerberg asked rhetorically in an April interview with Dwarkesh Patel. “No, it’s going to be interactive,” he continued, envisioning something like Instagram Reels, but “you can talk to it, or interact with it, and it talks back, or it changes what it’s doing. Or you can jump into it like a game and interact with it. That’s all going to be AI.”

Mark Zuckerberg talks about all the ways superhuman AI is going to change our lives in the near future.

As with the metaverse, Zuckerberg sees AI as revolutionizing the way we interact with each other. He envisions “always-on video chats with the AI” incorporating expressions and body language borrowed from the company’s work on the metaverse. And our relationships with AI models are “just going to get more intense as these AIs become more unique, more personable, more intelligent, more spontaneous, more funny, and so forth,” Zuckerberg said. “As the personalization loop kicks in and the AI starts to get to know you better and better, that will just be really compelling.”

Zuckerberg did allow that relationships with AI would “probably not” replace in-person connections, because there are “things that are better about physical connections when you can have them.” At the same time, he said, for the average American who has three friends, AI relationships can fill the “demand” for “something like 15 friends” without the effort of real-world socializing. “People just don’t have as much connection as they want,” Zuckerberg said. “They feel more alone a lot of the time than they would like.”

Why chat with real friends on Facebook when you can chat with AI avatars? Credit: Benj Edwards / Getty Images

Zuckerberg also sees AI leading to a flourishing of human productivity and creativity in a way even his wildest metaverse imaginings couldn’t match. Zuckerberg said that AI advancement could “lead toward a world of abundance where everyone has these superhuman tools to create whatever they want.” That means personal access to “a super powerful [virtual] software engineer” and AIs that are “solving diseases, advancing science, developing new technology that makes our lives better.”

That will also mean that some companies will be able to get by with fewer employees before too long, Zuckerberg said. In customer service, for instance, “as AI gets better, you’re going to get to a place where AI can handle a bunch of people’s issues,” he said. “Not all of them—maybe 10 years from now it can handle all of them—but thinking about a three- to five-year time horizon, it will be able to handle a bunch.”

In the longer term, Zuckerberg said, AIs will be integrated into our more casual pursuits as well. “If everyone has these superhuman tools to create a ton of different stuff, you’re going to get incredible diversity,” and “the amount of creativity that’s going to be unlocked is going to be massive,” he said. “I would guess the world is going to get a lot funnier, weirder, and quirkier, the way that memes on the Internet have gotten over the last 10 years.”

Compare and contrast

To be sure, there are some important differences between the past promise of the metaverse and the current promise of AI technology. Zuckerberg claims that a billion people use Meta’s AI products monthly, for instance, utterly dwarfing the highest estimates for regular use of “the metaverse” or augmented reality as a whole (even if many AI users seem to balk at paying for regular use of AI tools). Meta coders are also reportedly already using AI coding tools regularly in a way they never did with Meta’s metaverse tools. And people are already developing what they consider meaningful relationships with AI personas, whether that’s in the form of therapists or romantic partners.

Still, there are reasons to be skeptical about the future of AI when current models still routinely hallucinate basic facts, show fundamental issues when attempting reasoning, and struggle with basic tasks like beating a children’s video game. The path from where we are to a supposed “superhuman” AI is not simple or inevitable, despite the handwaving of industry boosters like Zuckerberg.

Artist’s conception of Carmack’s VR avatar waving goodbye to Meta.

At the 2021 rollout of Meta’s push to develop a metaverse, high-ranking Meta executives like John Carmack were at least up front about the technical and product-development barriers that could get in the way of Zuckerberg’s vision. “Everybody that wants to work on the metaverse talks about the limitless possibilities of it,” Carmack said at the time (before departing the company in late 2022). “But it’s not limitless. It is a challenge to fit things in, but you can make smarter decisions about exactly what is important and then really optimize the heck out of things.”

Today, those kinds of voices of internal skepticism seem in short supply as Meta sets itself up to push AI in the same way it once backed the metaverse. Don’t be surprised, though, if today’s promise that we’re at “the beginning of a new era for humanity” ages about as well as Meta’s former promises about a metaverse where “you’re gonna be able to do almost anything you can imagine.”

Kyle Orland has been the Senior Gaming Editor at Ars Technica since 2012, writing primarily about the business, tech, and culture behind video games. He has journalism and computer science degrees from the University of Maryland. He once wrote a whole book about Minesweeper.

xAI data center gets air permit to run 15 turbines, but imaging shows 24 on site

Before xAI got the permit, residents were stuck relying on infrequent thermal imaging to determine how many turbines appeared to be running without the best available control technology (BACT). Now that xAI has secured the permit, the company will be required to “record the date, time, and durations of all startups, shutdowns, malfunctions, and tuning events” and “always minimize emissions including startup, shutdown, maintenance, and combustion tuning periods.”

These records—which also document fuel usage, facility-wide emissions, and excess emissions—must be shared with the health department semiannually, with xAI’s first report due by December 31. Additionally, xAI must maintain five years of “monitoring, preventive, and maintenance records for air pollution control equipment,” which the department can request to review at any time.

For Memphis residents worried about smog-forming pollution, the worst fear is likely pollution they can actually see. To mitigate this, xAI’s air permit requires that visible emissions “from each emission point at the facility shall not exceed” 20 percent opacity for more than minutes in any one-hour period or more than 20 minutes in any 24-hour period.

It also prevents xAI from operating turbines all the time, limiting xAI to “a maximum of 22 startup events and 22 shutdown events per year” for the 15 turbines included in the permit, “with a total combined duration of 110 hours annually.” Additionally, it specifies that each startup or shutdown event must not exceed one hour.
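
Taken together, the permit’s numeric limits are simple enough to express as a mechanical check. Below is a minimal Python sketch of those constraints as reported; the data model and function are illustrative assumptions, not anything xAI or the health department actually runs.

```python
from dataclasses import dataclass
from datetime import timedelta

@dataclass
class TurbineEvent:
    kind: str           # "startup" or "shutdown"
    duration: timedelta

def check_annual_events(events: list[TurbineEvent]) -> list[str]:
    """Check one year of logged events against the permit limits as reported:
    22 startups, 22 shutdowns, 110 combined hours, one hour per event."""
    violations = []
    startups = [e for e in events if e.kind == "startup"]
    shutdowns = [e for e in events if e.kind == "shutdown"]
    if len(startups) > 22:
        violations.append(f"{len(startups)} startups exceeds the 22/year cap")
    if len(shutdowns) > 22:
        violations.append(f"{len(shutdowns)} shutdowns exceeds the 22/year cap")
    total = sum((e.duration for e in events), timedelta())
    if total > timedelta(hours=110):
        violations.append(f"combined duration {total} exceeds 110 hours/year")
    violations += [
        f"{e.kind} lasting {e.duration} exceeds the one-hour per-event limit"
        for e in events if e.duration > timedelta(hours=1)
    ]
    return violations
```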

A senior communications manager for the SELC, Eric Hilt, told Ars that the “SELC and our partners intend to continue monitoring xAI’s operations in the Memphis area.” He further noted that the air permit does not address all of citizens’ concerns at a time when xAI is planning to build another data center in the area, sparking new questions.

“While these permits increase the amount of public information and accountability around 15 of xAI’s turbines, there are still significant concerns around transparency—both for xAI’s first South Memphis data center near the Boxtown neighborhood and the planned data center in the Whitehaven neighborhood,” Hilt said. “XAI has not said how that second data center will be powered or if it plans to use gas turbines for that facility as well.”

Everything that could go wrong with X’s new AI-written community notes


X says AI can supercharge community notes, but that comes with obvious risks.

Elon Musk’s X arguably revolutionized social media fact-checking by rolling out “community notes,” which created a system to crowdsource diverse views on whether certain X posts were trustworthy or not.

But now, the platform plans to allow AI to write community notes, and that could potentially ruin whatever trust X users had in the fact-checking system—which X has fully acknowledged.

In a research paper, X described the initiative as an “upgrade” while explaining everything that could possibly go wrong with AI-written community notes.

In an ideal world, X described AI agents that speed up and increase the number of community notes added to incorrect posts, ramping up fact-checking efforts platform-wide. Each AI-written note will be rated by a human reviewer, providing feedback that makes the AI agent better at writing notes the longer this feedback loop cycles. As the AI agents get better at writing notes, that leaves human reviewers to focus on more nuanced fact-checking that AI cannot quickly address, such as posts requiring niche expertise or social awareness. Together, the human and AI reviewers, if all goes well, could transform not just X’s fact-checking, X’s paper suggested, but also potentially provide “a blueprint for a new form of human-AI collaboration in the production of public knowledge.”
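
The loop X describes—AI drafts a note, a human rates it, and the ratings feed back into the writer—can be sketched schematically. This is a toy illustration of that cycle under assumed interfaces; X has not published its agents or APIs, so every name here is a hypothetical stand-in.

```python
class MockNoteWriter:
    """Stand-in for an LLM note writer; X's real agents are not public."""
    def generate(self, prompt: str) -> str:
        return f"Draft note: {prompt[:60]}..."

    def update(self, ratings: list) -> None:
        # A real agent would be fine-tuned or reweighted on this signal.
        pass

def feedback_loop(posts: list[str], agent: MockNoteWriter, human_rate) -> list:
    """One cycle: the agent drafts a note per post, a human scores each draft,
    and the scored drafts become training signal for the next round."""
    ratings = []
    for post in posts:
        note = agent.generate(f"Write a community note for: {post}")
        score = human_rate(post, note)  # e.g., "helpful" / "not helpful"
        ratings.append((post, note, score))
    agent.update(ratings)  # the longer the loop runs, the better the agent
    return ratings

# Example: rate every draft "helpful" just to exercise the loop.
feedback_loop(["Post claiming X", "Post claiming Y"],
              MockNoteWriter(), lambda post, note: "helpful")
```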

Among key questions that remain, however, is a big one: X isn’t sure if AI-written notes will be as accurate as notes written by humans. Complicating that further, it seems likely that AI agents could generate “persuasive but inaccurate notes,” which human raters might rate as helpful since AI is “exceptionally skilled at crafting persuasive, emotionally resonant, and seemingly neutral notes.” That could disrupt the feedback loop, watering down community notes and making the whole system less trustworthy over time, X’s research paper warned.

“If rated helpfulness isn’t perfectly correlated with accuracy, then highly polished but misleading notes could be more likely to pass the approval threshold,” the paper said. “This risk could grow as LLMs advance; they could not only write persuasively but also more easily research and construct a seemingly robust body of evidence for nearly any claim, regardless of its veracity, making it even harder for human raters to spot deception or errors.”

X is already facing criticism over its AI plans. On Tuesday, former United Kingdom technology minister Damian Collins accused X of building a system that could allow “the industrial manipulation of what people see and decide to trust” on a platform with more than 600 million users, The Guardian reported.

Collins claimed that AI notes risked increasing the promotion of “lies and conspiracy theories” on X, and he wasn’t the only expert sounding alarms. Samuel Stockwell, a research associate at the Centre for Emerging Technology and Security at the Alan Turing Institute, told The Guardian that X’s success largely depends on “the quality of safeguards X puts in place against the risk that these AI ‘note writers’ could hallucinate and amplify misinformation in their outputs.”

“AI chatbots often struggle with nuance and context but are good at confidently providing answers that sound persuasive even when untrue,” Stockwell said. “That could be a dangerous combination if not effectively addressed by the platform.”

Also complicating things: anyone can create an AI agent using any technology to write community notes, X’s Community Notes account explained. That means that some AI agents may be more biased or defective than others.

If this dystopian version of events occurs, X predicts that human writers may get sick of writing notes, threatening the diversity of viewpoints that made community notes so trustworthy to begin with.

And for any human writers and reviewers who stick around, it’s possible that the sheer volume of AI-written notes may overload them. Andy Dudfield, the head of AI at a UK fact-checking organization called Full Fact, told The Guardian that X risks “increasing the already significant burden on human reviewers to check even more draft notes, opening the door to a worrying and plausible situation in which notes could be drafted, reviewed, and published entirely by AI without the careful consideration that human input provides.”

X is planning more research to ensure the “human rating capacity can sufficiently scale,” but if it cannot solve this riddle, it knows “the impact of the most genuinely critical notes” risks being diluted.

One possible solution to this “bottleneck,” researchers noted, would be to remove the human review process and apply AI-written notes in “similar contexts” that human raters have previously approved. But the biggest potential downfall there is obvious.

“Automatically matching notes to posts that people do not think need them could significantly undermine trust in the system,” X’s paper acknowledged.
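
Mechanically, the “similar contexts” proposal amounts to a nearest-neighbor lookup over previously approved notes. Here is a toy sketch of that idea; the bag-of-words embedding and the 0.8 threshold are illustrative assumptions, not details from X’s paper.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Crude bag-of-words embedding; a real system would use a learned model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def reuse_note(new_post: str, approved: dict[str, str],
               threshold: float = 0.8):
    """approved maps an already-noted post to its human-approved note.
    If a new post is close enough to one of them, reuse that note."""
    best = max(approved, key=lambda p: cosine(embed(p), embed(new_post)),
               default=None)
    if best and cosine(embed(best), embed(new_post)) >= threshold:
        return approved[best]  # attached without fresh review — the risky step
    return None
```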

Ultimately, AI note writers on X may be deemed an “erroneous” tool, researchers admitted, but they’re going ahead with testing to find out.

AI-written notes will start posting this month

All AI-written community notes “will be clearly marked for users,” X’s Community Notes account said. The first AI notes will only appear on posts where people have requested a note, the account said, but eventually AI note writers could be allowed to select posts for fact-checking.

More will be revealed when AI-written notes start appearing on X later this month, but in the meantime, X users can start testing AI note writers today and soon be considered for admission in the initial cohort of AI agents. (If any Ars readers end up testing out an AI note writer, this Ars writer would be curious to learn more about your experience.)

For its research, X collaborated with post-graduate students, research affiliates, and professors investigating topics like human trust in AI, fine-tuning AI, and AI safety at Harvard University, the Massachusetts Institute of Technology, Stanford University, and the University of Washington.

Researchers agreed that “under certain circumstances,” AI agents can “produce notes that are of similar quality to human-written notes—at a fraction of the time and effort.” They suggested that more research is needed to overcome flagged risks to reap the benefits of what could be “a transformative opportunity” that “offers promise of dramatically increased scale and speed” of fact-checking on X.

If AI note writers “generate initial drafts that represent a wider range of perspectives than a single human writer typically could, the quality of community deliberation is improved from the start,” the paper said.

Future of AI notes

Researchers imagine that once X’s testing is completed, AI note writers could not just aid in researching problematic posts flagged by human users, but also one day select posts predicted to go viral and stop misinformation from spreading faster than human reviewers could.

Additional perks from this automated system, they suggested, would include X note raters quickly accessing more thorough research and evidence synthesis, as well as clearer note composition, which could speed up the rating process.

And perhaps one day, AI agents could even learn to predict rating scores to speed things up even more, researchers speculated. However, more research would be needed to ensure that wouldn’t homogenize community notes, buffing them out to the point that no one reads them.

Perhaps the most Musk-ian of the ideas proposed in the paper is the notion of training AI note writers with clashing views to “adversarially debate the merits of a note.” Supposedly, that “could help instantly surface potential flaws, hidden biases, or fabricated evidence, empowering the human rater to make a more informed judgment.”

“Instead of starting from scratch, the rater now plays the role of an adjudicator—evaluating a structured clash of arguments,” the paper said.
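
As a rough sketch, that adversarial setup might look like the following, with two opposing note writers and the human rater receiving the transcript. The two-round format and the prompts are assumptions; the paper does not specify a protocol.

```python
from typing import Callable

def adversarial_debate(note: str,
                       defender: Callable[[str], str],
                       challenger: Callable[[str], str],
                       rounds: int = 2) -> list[tuple[str, str]]:
    """Two note writers with clashing instructions argue over a draft note;
    the returned transcript goes to the human rater, who adjudicates."""
    transcript = []
    for _ in range(rounds):
        attack = challenger(f"Find flaws, bias, or fabricated evidence in: {note}")
        rebuttal = defender(f"Defend the note against this critique: {attack}")
        transcript.append((attack, rebuttal))
    return transcript

# Example with trivial stand-in models:
transcript = adversarial_debate(
    "The claim in this post is contradicted by the cited study.",
    defender=lambda p: "The note's sourcing holds up.",
    challenger=lambda p: "The note cites no primary source.",
)
```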

While X may be moving to reduce the workload for X users writing community notes, it’s clear that AI could never replace humans, researchers said. Those humans are necessary for more than just rubber-stamping AI-written notes.

Human notes that are “written from scratch” are valuable to train the AI agents and some raters’ niche expertise cannot easily be replicated, the paper said. And perhaps most obviously, humans “are uniquely positioned to identify deficits or biases” and therefore more likely to be compelled to write notes “on topics the automated writers overlook,” such as spam or scams.

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

Nudify app’s plan to dominate deepfake porn hinges on Reddit, 4chan, and Telegram, docs show


Reddit confirmed the nudify app’s links have been blocked since 2024.

Clothoff—one of the leading apps used to quickly and cheaply make fake nudes from images of real people—reportedly is planning a global expansion to continue dominating deepfake porn online.

Also known as a nudify app, Clothoff has resisted attempts to unmask and confront its operators. Last August, the app was among those that San Francisco’s city attorney, David Chiu, sued in hopes of forcing a shutdown. But recently, a whistleblower—who had “access to internal company information” as a former Clothoff employee—told the investigative outlet Der Spiegel that the app’s operators “seem unimpressed by the lawsuit” and instead of worrying about shutting down have “bought up an entire network of nudify apps.”

Der Spiegel found evidence that Clothoff today owns at least 10 other nudify services, attracting “monthly views ranging between hundreds of thousands to several million.” The outlet granted the whistleblower anonymity to discuss the expansion plans, which the whistleblower claimed was motivated by Clothoff employees growing “cynical” and “obsessed with money” over time as the app—which once felt like an “exciting startup”—gained momentum. Because generating convincing fake nudes can cost just a few bucks, chasing profits seemingly relies on attracting as many repeat users to as many destinations as possible.

Currently, Clothoff runs on an annual budget of around $3.5 million, the whistleblower told Der Spiegel. It has shifted its marketing methods since its launch, apparently now largely relying on Telegram bots and X channels to target ads at young men likely to use their apps.

Der Spiegel’s report documents Clothoff’s “large-scale marketing plan” to expand into the German market, as revealed by the whistleblower. The alleged campaign hinges on producing “naked images of well-known influencers, singers, and actresses,” seeking to entice ad clicks with the tagline “you choose who you want to undress.”

A few of the stars named in the plan confirmed to Der Spiegel that they never agreed to this use of their likenesses, with some of their representatives suggesting that they would pursue legal action if the campaign is ever launched.

However, even celebrities like Taylor Swift have struggled to combat deepfake nudes spreading online, while tools like Clothoff are increasingly used to torment young girls in middle and high school.

Similar celebrity campaigns are planned for other markets, Der Spiegel reported, including British, French, and Spanish markets. And Clothoff has notably already become a go-to tool in the US, not only targeted in the San Francisco city attorney’s lawsuit, but also in a complaint raised by a high schooler in New Jersey suing a boy who used Clothoff to nudify one of her Instagram photos taken when she was 14 years old, then shared it with other boys on Snapchat.

Clothoff is seemingly hoping to entice more young boys worldwide to use its apps for such purposes. The whistleblower told Der Spiegel that most of Clothoff’s marketing budget goes toward “advertising posts in special Telegram channels, in sex subs on Reddit, and on 4chan.” (Reddit noted to Ars that Clothoff URLs have been banned from Reddit since 2024 and “Reddit does not allow paid advertising against NSFW content or otherwise monetize it.”)

In ads, the app planned to specifically target “men between 16 and 35” who like benign stuff like “memes” and “video games,” as well as more toxic stuff like “right-wing extremist ideas,” “misogyny,” and “Andrew Tate,” an influencer criticized for promoting misogynistic views to teen boys.

Chiu was hoping to defend young women increasingly targeted in fake nudes by shutting down Clothoff, along with several other nudify apps targeted in his lawsuit. So far, while Chiu has reached a settlement shutting down two websites, porngen.art and undresser.ai, attempts to serve Clothoff have not been successful. Chiu’s office is continuing its efforts to serve Clothoff through available legal channels, which evolve as the lawsuit moves through the court system, Alex Barrett-Shorter, deputy press secretary for Chiu’s office, told Ars.

Meanwhile, Clothoff continues to evolve, recently marketing a feature that Clothoff claims attracted more than a million users eager to make explicit videos out of a single picture.

Clothoff denies it plans to use influencers

Der Spiegel’s efforts to unmask the operators of Clothoff led the outlet to Eastern Europe, after reporters stumbled upon a “database accidentally left open on the Internet” that seemingly exposed “four central people behind the website.”

This was “consistent,” Der Spiegel said, with a whistleblower claim that all Clothoff employees “work in countries that used to belong to the Soviet Union.” Additionally, Der Spiegel noted that all Clothoff internal communications it reviewed were written in Russian, and the site’s email service is based in Russia.

A person claiming to be a Clothoff spokesperson named Elias denied knowing any of the four individuals flagged in their investigation, Der Spiegel reported, and disputed the reported $3.5 million annual budget figure. Elias claimed a nondisclosure agreement prevented him from discussing Clothoff’s team any further. However, soon after Der Spiegel reached out, Clothoff took down the database, which had a name that translated to “my babe.”

Regarding the shared marketing plan for global expansion, Elias denied that Clothoff intended to use celebrity influencers, saying that “Clothoff forbids the use of photos of people without their consent.”

He also denied that Clothoff could be used to nudify images of minors; however, one Clothoff user who spoke to Der Spiegel on the condition of anonymity confirmed that his attempt to generate a fake nude of a US singer failed initially because she “looked like she might be underage.” But his second attempt a few days later successfully generated the fake nude with no problem, suggesting that Clothoff’s age detection does not work reliably.

As Clothoff’s growth appears unstoppable, the user explained to Der Spiegel why he doesn’t feel that conflicted about using the app to generate fake nudes of a famous singer.

“There are enough pictures of her on the Internet as it is,” the user reasoned.

However, that user draws the line at generating fake nudes of private individuals, insisting, “If I ever learned of someone producing such photos of my daughter, I would be horrified.”

For young boys who appear flippant about creating fake nude images of their classmates, the consequences have ranged from suspensions to juvenile criminal charges, and for some, there could be other costs. In the lawsuit where the high schooler is attempting to sue a boy who used Clothoff to bully her, there’s currently resistance from boys who participated in group chats to share what evidence they have on their phones. If she wins her fight, she’s asking for $150,000 in damages per image shared, so sharing chat logs could potentially increase the price tag.

Since she and the San Francisco city attorney each filed their lawsuits, the Take It Down Act has passed. That law makes it easier to force platforms to remove AI-generated fake nudes. But experts expect the law will face legal challenges over censorship fears, so the very limited legal tool might not withstand scrutiny.

Either way, the Take It Down Act is a safeguard that came too late for the earliest victims of nudify apps in the US, only some of whom are turning to courts seeking justice due to largely opaque laws that made it unclear if generating a fake nude was illegal.

“Jane Doe is one of many girls and women who have been and will continue to be exploited, abused, and victimized by non-consensual pornography generated through artificial intelligence,” the high schooler’s complaint noted. “Despite already being victimized by Defendant’s actions, Jane Doe has been forced to bring this action to protect herself and her rights because the governmental institutions that are supposed to protect women and children from being violated and exploited by the use of AI to generate child pornography and nonconsensual nude images failed to do so.”

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

NYT to start searching deleted ChatGPT logs after beating OpenAI in court


What are the odds NYT will access your ChatGPT logs in OpenAI court battle?

Last week, OpenAI raised objections in court, hoping to overturn a court order requiring the AI company to retain all ChatGPT logs “indefinitely,” including deleted and temporary chats.

But Sidney Stein, the US district judge reviewing OpenAI’s request, immediately denied OpenAI’s objections. He was seemingly unmoved by the company’s claims that the order forced OpenAI to abandon “long-standing privacy norms” and weaken privacy protections that users expect based on ChatGPT’s terms of service. Rather, Stein suggested that OpenAI’s user agreement specified that their data could be retained as part of a legal process, which Stein said is exactly what is happening now.

The order was issued by magistrate judge Ona Wang just days after news organizations, led by The New York Times, requested it. The news plaintiffs claimed the order was urgently needed to preserve potential evidence in their copyright case, alleging that ChatGPT users are likely to delete chats where they attempted to use the chatbot to skirt paywalls to access news content.

A spokesperson told Ars that OpenAI plans to “keep fighting” the order, but the ChatGPT maker seems to have few options left. They could possibly petition the Second Circuit Court of Appeals for a rarely granted emergency order that could intervene to block Wang’s order, but the appeals court would have to consider Wang’s order an extraordinary abuse of discretion for OpenAI to win that fight.

OpenAI’s spokesperson declined to confirm if the company plans to pursue this extreme remedy.

In the meantime, OpenAI is negotiating a process that will allow news plaintiffs to search through the retained data. Perhaps the sooner that process begins, the sooner the data will be deleted. And that possibility puts OpenAI in the difficult position of having to choose between either caving to some data collection to stop retaining data as soon as possible or prolonging the fight over the order and potentially putting more users’ private conversations at risk of exposure through litigation or, worse, a data breach.

News orgs will soon start searching ChatGPT logs

The clock is ticking, and so far, OpenAI has not provided any official updates since a June 5 blog post detailing which ChatGPT users will be affected.

While it’s clear that OpenAI has been and will continue to retain mounds of data, it would be impossible for The New York Times or any news plaintiff to search through all that data.

Instead, only a small sample of the data will likely be accessed, based on keywords that OpenAI and news plaintiffs agree on. That data will remain on OpenAI’s servers, where it will be anonymized, and it will likely never be directly produced to plaintiffs.

Both sides are negotiating the exact process for searching through the chat logs, with both parties seemingly hoping to minimize the amount of time the chat logs will be preserved.
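
In practice, the process being negotiated resembles a keyword filter plus a pseudonymization pass run on OpenAI’s side. The sketch below illustrates that shape only; the keywords, record fields, and hashing scheme are all hypothetical, since neither party has published specifics.

```python
import hashlib

# Placeholder terms for illustration — not the actual negotiated keyword list.
KEYWORDS = {"paywall", "article text"}

def sample_and_anonymize(chats: list[dict]) -> list[dict]:
    """Keep only chats matching agreed-upon keywords and replace the user ID
    with a stable pseudonym, so reviewers never see raw identifiers."""
    sampled = []
    for chat in chats:
        text = chat["text"].lower()
        if any(keyword in text for keyword in KEYWORDS):
            sampled.append({
                # One-way hash: same user maps to the same pseudonym,
                # but the original ID can't be read back out.
                "user": hashlib.sha256(chat["user_id"].encode()).hexdigest()[:12],
                "text": chat["text"],
            })
    return sampled
```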

For OpenAI, sharing the logs risks revealing instances of infringing outputs that could further spike damages in the case. The logs could also expose how often outputs attribute misinformation to news plaintiffs.

But for news plaintiffs, accessing the logs is not considered key to their case—perhaps providing additional examples of copying—but could help news organizations argue that ChatGPT dilutes the market for their content. That could weigh against the fair use argument, as a judge opined in a recent ruling that evidence of market dilution could tip an AI copyright case in favor of plaintiffs.

Jay Edelson, a leading consumer privacy lawyer, told Ars that he’s concerned that judges don’t seem to be considering that any evidence in the ChatGPT logs wouldn’t “advance” news plaintiffs’ case “at all,” while really changing “a product that people are using on a daily basis.”

Edelson warned that OpenAI itself probably has better security than most firms to protect against a potential data breach that could expose these private chat logs. But “lawyers have notoriously been pretty bad about securing data,” Edelson suggested, so “the idea that you’ve got a bunch of lawyers who are going to be doing whatever they are” with “some of the most sensitive data on the planet” and “they’re the ones protecting it against hackers should make everyone uneasy.”

So even though odds are pretty good that the majority of users’ chats won’t end up in the sample, Edelson said the mere threat of being included might push some users to rethink how they use AI. He further warned that ChatGPT users turning to OpenAI rival services like Anthropic’s Claude or Google’s Gemini could suggest that Wang’s order is improperly influencing market forces, which also seems “crazy.”

To Edelson, the most “cynical” take could be that news plaintiffs are possibly hoping the order will threaten OpenAI’s business to the point where the AI company agrees to a settlement.

Regardless of the news plaintiffs’ motives, the order sets an alarming precedent, Edelson said. He joined critics suggesting that more AI data may be frozen in the future, potentially affecting even more users as a result of the sweeping order surviving scrutiny in this case. Imagine if litigation one day targets Google’s AI search summaries, Edelson suggested.

Lawyer slams judges for giving ChatGPT users no voice

Edelson told Ars that the order is so potentially threatening to OpenAI’s business that the company may not have a choice but to explore every path available to continue fighting it.

“They will absolutely do something to try to stop this,” Edelson predicted, calling the order “bonkers” for overlooking millions of users’ privacy concerns while “strangely” excluding enterprise customers.

From court filings, it seems possible that enterprise users were excluded to protect OpenAI’s competitiveness, but Edelson suggested there’s “no logic” to their exclusion “at all.” By excluding these ChatGPT users, the judge’s order may have removed the users best resourced to fight the order, Edelson suggested.

“What that means is the big businesses, the ones who have the power, all of their stuff remains private, and no one can touch that,” Edelson said.

Instead, the order is “only going to intrude on the privacy of the common people out there,” which Edelson said “is really offensive,” given that Wang denied two ChatGPT users’ panicked request to intervene.

“We are talking about billions of chats that are now going to be preserved when they weren’t going to be preserved before,” Edelson said, noting that he’s input information about his personal medical history into ChatGPT. “People ask for advice about their marriages, express concerns about losing jobs. They say really personal things. And one of the bargains in dealing with OpenAI is that you’re allowed to delete your chats and you’re allowed to [use] temporary chats.”

The greatest risk to users would be a data breach, Edelson said, but that’s not the only potential privacy concern. Corynne McSherry, legal director for the digital rights group the Electronic Frontier Foundation, previously told Ars that as long as users’ data is retained, it could also be exposed through future law enforcement and private litigation requests.

Edelson pointed out that most privacy attorneys don’t consider OpenAI CEO Sam Altman to be a “privacy guy,” despite Altman recently slamming the NYT, alleging it sued OpenAI because it doesn’t “like user privacy.”

“He’s trying to protect OpenAI, and he does not give a hoot about the privacy rights of consumers,” Edelson said, echoing one ChatGPT user’s dismissed concern that OpenAI may not prioritize users’ privacy concerns in the case if it’s financially motivated to resolve the case.

“The idea that he and his lawyers are really going to be the safeguards here isn’t very compelling,” Edelson said. He criticized the judges for dismissing users’ concerns and rejecting OpenAI’s request that users get a chance to testify.

“What’s really most appalling to me is the people who are being affected have had no voice in it,” Edelson said.

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

Pay up or stop scraping: Cloudflare program charges bots for each crawl

“Imagine asking your favorite deep research program to help you synthesize the latest cancer research or a legal brief, or just help you find the best restaurant in Soho—and then giving that agent a budget to spend to acquire the best and most relevant content,” Cloudflare said, promising that “we enable a future where intelligent agents can programmatically negotiate access to digital resources.”

AI crawlers now blocked by default

Cloudflare’s announcement comes after it rolled out a feature last September that allows website owners to block AI crawlers in a single click. According to Cloudflare, over 1 million customers chose to block AI crawlers, signaling that people want more control over their content at a time when Cloudflare observed that writing instructions for AI crawlers in robots.txt files was widely “underutilized.”

To protect more customers moving forward, any new customers (including anyone on a free plan) who sign up for Cloudflare services will have their domains, by default, set to block all known AI crawlers.

This marks Cloudflare’s transition away from the dreaded opt-out models of AI scraping to a permission-based model, which a Cloudflare spokesperson told Ars is expected to “fundamentally change how AI companies access web content going forward.”

In a world where some website owners have grown sick and tired of attempting and failing to block AI scraping through robots.txt—including some trapping AI crawlers in tarpits to punish them for ignoring robots.txt—Cloudflare’s feature allows users to choose granular settings to prevent blocks on AI bots from impacting bots that drive search engine traffic. That’s critical for small content creators who want their sites to still be discoverable but not digested by AI bots.
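
For reference, the opt-out mechanism Cloudflare calls “underutilized” looks like this in a site’s robots.txt. The agent names below are a partial sampling of publicly documented AI crawlers; compliance with these directives is voluntary, which is exactly the weakness Cloudflare’s default block is meant to address.

```
# Keep traditional search indexing working
User-agent: Googlebot
Allow: /

# Opt out of known AI training crawlers (a partial, illustrative list)
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /
```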

“AI crawlers collect content like text, articles, and images to generate answers, without sending visitors to the original source—depriving content creators of revenue, and the satisfaction of knowing someone is reading their content,” Cloudflare’s blog said. “If the incentive to create original, quality content disappears, society ends up losing, and the future of the Internet is at risk.”

Disclosure: Condé Nast, which owns Ars Technica, is a partner involved in Cloudflare’s beta test.

This story was corrected on July 1 to remove publishers incorrectly listed as participating in Cloudflare’s pay-per-crawl beta.

Half a million Spotify users are unknowingly grooving to an AI-generated band

Making art used to be a uniquely human endeavor, but machines have learned to distill human creativity with generative AI. Whether that content counts as “art” depends on who you ask, but Spotify doesn’t discriminate. A new band called The Velvet Sundown debuted on Spotify this month and has already amassed more than half a million listeners. But by all appearances, The Velvet Sundown is not a real band—it’s AI.

While many artists are vehemently opposed to using AI, some have leaned into the trend to assist with music production. However, it doesn’t seem like there’s an artist behind this group. In less than a month, The Velvet Sundown has released two albums on Spotify, titled “Floating On Echoes” and “Dust and Silence.” A third album is due in two weeks. The tracks have a classic rock vibe with a cacophony of echoey instruments and a dash of autotune. If one of these songs came up in a mix, you might not notice anything is amiss. Listen to one after another, though, and the bland muddiness exposes them as a machine creation.

Some listeners began to have doubts about The Velvet Sundown’s existence over the past week, with multiple Reddit and X threads pointing out the lack of verifiable information on the band. The bio lists four members, none of whom appear to exist outside of The Velvet Sundown’s album listings and social media. The group’s songs have been mysteriously added to a large number of user-created playlists, which has helped swell its listener base in a few short weeks. When Spotify users began noticing The Velvet Sundown’s apparent use of AI, the profile had around 300,000 listeners. It’s now over 500,000 in less than a week.

When The Velvet Sundown set up an Instagram account on June 27, all doubts were laid to rest—these “people” are obviously AI. We may be past the era of being able to identify AI by counting fingers, but there are plenty of weird inconsistencies in these pics. In one Instagram post, the band claims to have gotten burgers to celebrate the success of the first two albums, but there are too many burgers and too few plates, and the food and drink are placed seemingly at random around the table. The band members themselves also have that unrealistically smooth and symmetrical look we see in AI-generated images.

Google begins rolling out AI search in YouTube

Over the past year, Google has transformed its web search experience with AI, driving toward a zero-click experience. Now, the same AI focus is coming to YouTube, and Premium subscribers can get a preview of the new search regime. Select searches on the video platform will now produce an AI-generated results carousel with a collection of relevant videos. Even if you don’t pay for YouTube, AI is still coming for you with an expansion of Google’s video chatbot.

Google says the new AI search feature, which appears at the top of the results page, will include multiple videos, along with an AI summary of each. You can tap the video thumbnails to begin playing them right from the carousel. The summary is intended to extract the information most relevant to your search query, so you may not even have to watch the videos.

The AI results carousel is only a test right now, and it’s limited to YouTube Premium subscribers. If you’re paying for Premium, you can enable the feature on YouTube’s experimental page. While the feature is entirely opt-in, that probably won’t last long. Like AI Overviews in search, this feature will take precedence over organic search results and get people interacting with Google’s AI, and that’s the driving force behind most of the company’s decisions lately.

It’s not hard to see where this feature could lead because we’ve seen the same thing play out in general web search. By putting AI-generated content at the top of search results, Google will reduce the number of videos people click to watch. The carousel gives you the relevant parts of the video along with a summary, but the video page is another tap away. Rather than opening videos, commenting, subscribing, and otherwise interacting with creators, some users will just peruse the AI carousel. That could make it harder for channels to grow and earn revenue from their content—the same content Google will feed into Gemini to generate the AI carousel.

Key fair use ruling clarifies when books can be used for AI training

“This order doubts that any accused infringer could ever meet its burden of explaining why downloading source copies from pirate sites that it could have purchased or otherwise accessed lawfully was itself reasonably necessary to any subsequent fair use,” Alsup wrote. “Such piracy of otherwise available copies is inherently, irredeemably infringing even if the pirated copies are immediately used for the transformative use and immediately discarded.”

But Alsup said that the Anthropic case may not even need to decide on that, since Anthropic’s retention of pirated books for its research library alone was not transformative. Alsup wrote that Anthropic’s argument to hold onto potential AI training material it pirated in case it ever decided to use it for AI training was an attempt to “fast glide over thin ice.”

Additionally, Alsup pointed out that Anthropic’s early attempts to get permission to train on authors’ works withered, as internal messages revealed the company concluded that stealing books was the more cost-effective path to innovation “to avoid ‘legal/practice/business slog,’” as cofounder and chief executive officer Dario Amodei put it.

“Anthropic is wrong to suppose that so long as you create an exciting end product, every ‘back-end step, invisible to the public,’ is excused,” Alsup wrote. “Here, piracy was the point: To build a central library that one could have paid for, just as Anthropic later did, but without paying for it.”

To avoid maximum damages in the event of a loss, Anthropic will likely continue arguing that replacing pirated books with purchased books should water down authors’ fight, Alsup’s order suggested.

“That Anthropic later bought a copy of a book it earlier stole off the Internet will not absolve it of liability for the theft, but it may affect the extent of statutory damages,” Alsup noted.

Google’s new robotics AI can run without the cloud and still tie your shoes

We sometimes call chatbots like Gemini and ChatGPT “robots,” but generative AI is also playing a growing role in real, physical robots. After announcing Gemini Robotics earlier this year, Google DeepMind has now revealed a new on-device VLA (vision language action) model to control robots. Unlike the previous release, there’s no cloud component, allowing robots to operate with full autonomy.

Carolina Parada, head of robotics at Google DeepMind, says this approach to AI robotics could make robots more reliable in challenging situations. This is also the first version of Google’s robotics model that developers can tune for their specific uses.

Robotics is a unique problem for AI because not only does the robot exist in the physical world, but it also changes its environment. Whether you’re having it move blocks around or tie your shoes, it’s hard to predict every eventuality a robot might encounter. The traditional approach of training robot actions with reinforcement learning was very slow, but generative AI allows for much greater generalization.

“It’s drawing from Gemini’s multimodal world understanding in order to do a completely new task,” explains Parada. “What that enables is in that same way Gemini can produce text, write poetry, just summarize an article, you can also write code, and you can also generate images. It also can generate robot actions.”

General robots, no cloud needed

In the previous Gemini Robotics release (which is still the “best” version of Google’s robotics tech), the platforms ran a hybrid system with a small model on the robot and a larger one running in the cloud. You’ve probably watched chatbots “think” for measurable seconds as they generate an output, but robots need to react quickly. If you tell the robot to pick up and move an object, you don’t want it to pause while each step is generated. The local model allows quick adaptation, while the server-based model can help with complex reasoning tasks. Google DeepMind is now unleashing the local model as a standalone VLA, and it’s surprisingly robust.
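
The hybrid split described above—fast local control, slower cloud reasoning—can be caricatured as a routing decision based on latency tolerance. This is purely an illustrative sketch; Google has not published its actual routing logic, and all names and thresholds here are invented.

```python
from typing import Callable

LATENCY_BUDGET_MS = 50  # assumed control-loop deadline for a reactive robot

def route_task(prompt: str,
               local_model: Callable[[str], str],
               cloud_model: Callable[[str], str],
               needs_complex_reasoning: bool,
               deadline_ms: int) -> str:
    """Send time-critical actions to the small on-device model; send tasks
    that can tolerate a network round trip to the larger cloud model."""
    if needs_complex_reasoning and deadline_ms > LATENCY_BUDGET_MS:
        return cloud_model(prompt)  # stronger reasoning, higher latency
    return local_model(prompt)      # fast enough to keep the robot responsive

# With the new standalone VLA, everything can route locally:
action = route_task("pick up the red block",
                    local_model=lambda p: "grasp(red_block)",
                    cloud_model=lambda p: "plan: ...",
                    needs_complex_reasoning=False,
                    deadline_ms=30)
```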

Google brings new Gemini features to Chromebooks, debuts first on-device AI

Google hasn’t been talking about Chromebooks as much since AI became its all-consuming focus, but that’s changing today with a bounty of new AI features for Google-powered laptops. Newer, more powerful Chromebooks will soon have image generation, text summarization, and more built into the OS. There’s also a new Lenovo Chromebook with a few exclusive AI goodies that only work thanks to its overpowered hardware.

If you have a Chromebook Plus device, which requires a modern CPU and at least 8GB of RAM, your machine will soon get a collection of features you may recognize from other Google products. For example, Lens is expanding on Chrome OS, allowing you to long-press the launcher icon to select any area of the screen to perform a visual search. Lens also includes text capture and integration with Google Calendar and Docs.

Gemini models are also playing a role here, according to Google. The Quick Insert key, which debuted last year, is gaining a new visual element. It could already insert photos or emoji with ease, but it can now also help you generate a new image on demand with AI.

Google’s new Chromebook AI features.

Even though Google’s AI features are running in the cloud, the AI additions are limited to this more powerful class of Google-powered laptops. The Help Me Read feature leverages Gemini to summarize long documents and webpages, and it can now distill that data into a more basic form. The new Summarize option can turn dense, technical text into something more readable in a few clicks.

Google has also rolled out a new AI trial for Chromebook Plus devices. If you buy one of these premium Chromebooks, you’ll get a 12-month free trial of the Google AI Pro plan, which gives you 2TB of cloud storage, expanded access to Google’s Gemini Pro model, and NotebookLM Pro. NotebookLM is also getting a place in the Chrome OS shelf.

To avoid admitting ignorance, Meta AI says man’s number is a company helpline

Although that statement may provide comfort to those who have kept their WhatsApp numbers off the Internet, it doesn’t resolve the issue of WhatsApp’s AI helper potentially randomly generating a real person’s private number that may be a few digits off from the business contact information WhatsApp users are seeking.

Expert pushes for chatbot design tweaks

AI companies have recently been grappling with the problem of chatbots being programmed to tell users what they want to hear, instead of providing accurate information. Not only are users sick of “overly flattering” chatbot responses—potentially reinforcing users’ poor decisions—but the chatbots could be inducing users to share more private information than they would otherwise.

The latter could make it easier for AI companies to monetize the interactions, gathering private data to target advertising, which could deter AI companies from solving the sycophantic chatbot problem. Developers for Meta rival OpenAI, The Guardian noted, last month shared examples of “systemic deception behavior masked as helpfulness” and chatbots’ tendency to tell little white lies to mask incompetence.

“When pushed hard—under pressure, deadlines, expectations—it will often say whatever it needs to to appear competent,” developers noted.

Mike Stanhope, the managing director of strategic data consultants Carruthers and Jackson, told The Guardian that Meta should be more transparent about the design of its AI so that users can know if the chatbot is designed to rely on deception to reduce user friction.

“If the engineers at Meta are designing ‘white lie’ tendencies into their AI, the public need to be informed, even if the intention of the feature is to minimize harm,” Stanhope said. “If this behavior is novel, uncommon, or not explicitly designed, this raises even more questions around what safeguards are in place and just how predictable we can force an AI’s behavior to be.”
