Artificial Intelligence

Gemini 2.5 is leaving preview just in time for Google’s new $250 AI subscription

Deep Think is more capable of complex math and coding. Credit: Ryan Whitwam

Both 2.5 models have adjustable thinking budgets when used in Vertex AI and via the API, and now the models will also include summaries of the “thinking” process for each output. This makes a little progress toward making generative AI less overwhelmingly expensive to run. Gemini 2.5 Pro will also appear in some of Google’s dev products, including Gemini Code Assist.
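
For developers, that thinking budget is just a request parameter. Below is a minimal sketch of what setting it might look like with the google-genai Python SDK; the ThinkingConfig fields, model name, and prompt here are assumptions for illustration and may not match the current SDK exactly.

```python
# Hypothetical sketch: cap Gemini 2.5's "thinking" tokens and request thought summaries.
# Field names follow the google-genai Python SDK as best understood; verify against current docs.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")  # assumes an API key is available

response = client.models.generate_content(
    model="gemini-2.5-pro",
    contents="Explain why the sum of two odd numbers is always even.",
    config=types.GenerateContentConfig(
        thinking_config=types.ThinkingConfig(
            thinking_budget=1024,   # cap on reasoning tokens, the knob that controls cost
            include_thoughts=True,  # return a summary of the "thinking" alongside the answer
        )
    ),
)

for part in response.candidates[0].content.parts:
    label = "thought summary" if getattr(part, "thought", False) else "answer"
    print(f"[{label}] {part.text}")
```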

Gemini Live, previously known as Project Astra, started to appear on mobile devices over the last few months. Initially, you needed to have a Gemini subscription or a Pixel phone to access Gemini Live, but now it’s coming to all Android and iOS devices immediately. Google demoed a future “agentic” capability in the Gemini app that can actually control your phone, search the web for files, open apps, and make calls. It’s perhaps a little aspirational, just like the Astra demo from last year. The version of Gemini Live we got wasn’t as good, but as a glimpse of the future, it was impressive.

There are also some developments in Chrome, and you guessed it, it’s getting Gemini. It’s not dissimilar from what you get in Edge with Copilot. There’s a little Gemini icon in the corner of the browser, which you can click to access Google’s chatbot. You can ask it about the pages you’re browsing, have it summarize those pages, and ask follow-up questions.

Google AI Ultra is ultra-expensive

Since launching Gemini, Google has only had a single $20 monthly plan for AI features. That plan granted you access to the Pro models and early versions of Google’s upcoming AI. At I/O, Google is catching up to AI firms like OpenAI, which have offered sky-high AI plans. Google’s new Google AI Ultra plan will cost $250 per month, more than the $200 plan for ChatGPT Pro.

Zero-click searches: Google’s AI tools are the culmination of its hubris


Google’s first year with AI search was a wild ride. It will get wilder.

Google is constantly making changes to its search rankings, but not all updates are equal. Every few months, the company bundles up changes into a larger “core update.” These updates make rapid and profound changes to search, so website operators watch them closely.

The March 2024 update was unique. It was one of Google’s largest core updates ever, and it took over a month to fully roll out. Nothing has felt quite the same since. Whether the update was good or bad depends on who you ask—and maybe who you are.

It’s common for websites to see traffic changes after a core update, but the impact of the March 2024 update marked a seismic shift. Google says the update aimed to address spam and AI-generated content in a meaningful way. Still, many publishers say they saw clicks on legitimate sites evaporate, while others have had to cope with unprecedented volatility in their traffic. Because Google owns almost the entire search market, changes in its algorithm can move the Internet itself.

In hindsight, the March 2024 update looks like the first major Google algorithm update for the AI era. Not only did it (supposedly) veer away from ranking AI-authored content online, but it also laid the groundwork for Google’s ambitious—and often annoying—desire to fuse AI with search.

A year ago, this ambition surfaced with AI Overviews, but now the company is taking an even more audacious route, layering in a new chat-based answer service called “AI Mode.” Both of these technologies do at least two things: They aim to keep you on Google properties longer, and they remix publisher content without always giving prominent citations.

Smaller publishers appear to have borne the brunt of the changes caused by these updates. “Google got all this flak for crushing the small publishers, and it’s true that when they make these changes, they do crush a lot of publishers,” says Jim Yu, CEO of enterprise SEO platform BrightEdge. Yu explains that Google is the only search engine likely to surface niche content in the first place, and there are bound to be changes to sites at the fringes during a major core update.

Google’s own view on the impact of the March 2024 update is unsurprisingly positive. The company said it was hoping to reduce the appearance of unhelpful content in its search engine results pages (SERPs) by 40 percent. After the update, the company claimed an actual reduction of closer to 45 percent. But does it feel like Google’s results have improved by that much? Most people don’t think so.

What causes this disconnect? According to Michael King, founder of SEO firm iPullRank, we’re not speaking the same language as Google. “Google’s internal success metrics differ from user perceptions,” he says. “Google measures user satisfaction through quantifiable metrics, while external observers rely on subjective experiences.”

Google evaluates algorithm changes with various tests, including reviews by human search quality raters and A/B tests on live searches. But more than anything else, success is about the total number of searches (5 trillion of them per year). Google often makes this number a centerpiece of its business updates to show investors that it can still grow.

However, using search quantity to measure quality has obvious problems. For instance, more engagement with a search engine might mean that quality has decreased, so people try new queries (e.g., the old trick of adding “Reddit” to the end of your search string). In other words, people could be searching more because they don’t like the results.
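
A toy simulation makes the problem with that metric concrete. In the hypothetical model below, each user keeps reformulating a query until a result satisfies them, so total search volume rises as result quality falls; the numbers are invented purely for illustration.

```python
import random

def total_queries(success_rate: float, users: int = 10_000, max_retries: int = 5, seed: int = 0) -> int:
    """Count queries issued when each user reformulates until a result satisfies them (or gives up)."""
    rng = random.Random(seed)
    queries = 0
    for _ in range(users):
        for _attempt in range(max_retries):
            queries += 1
            if rng.random() < success_rate:
                break  # the user found what they wanted and stops searching
    return queries

# Lower per-query quality produces *more* total searches, not fewer.
for quality in (0.9, 0.6, 0.3):
    print(f"result quality {quality}: {total_queries(quality):,} searches")
```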

Jim Yu suggests that Google is moving fast and breaking things, but it may not be as bad as we think. “I think they rolled things out faster because they had to move a lot faster than they’ve historically had to move, and it ends up that they do make some real mistakes,” says Yu. “[Google] is held to a higher standard, but by and large, I think their search quality is improving.”

According to King, Google’s current search behavior still favors big names, but other sites have started to see a rebound. “Larger brands are performing better in the top three positions, while lesser-known websites have gained ground in positions 4 through 10,” says King. “Although some websites have indeed lost traffic due to reduced organic visibility, the bigger issue seems tied to increased usage of AI Overviews”—and now the launch of AI Mode.

Yes, the specter of AI hangs over every SERP. The unhelpful vibe many people now get from Google searches, regardless of the internal metrics the company may use, may come from a fundamental shift in how Google surfaces information in the age of AI.

The AI Overview hangover

In 2025, you can’t talk about Google’s changes to search without acknowledging the AI-generated elephant in the room. As it wrapped up that hefty core update in March 2024, Google also announced a major expansion of AI in search, moving the “Search Generative Experience” out of labs and onto Google.com. The feature was dubbed “AI Overviews.”

The AI Overview box has been a fixture on Google’s search results page ever since its debut a year ago. The feature uses the same foundational AI model as Google’s Gemini chatbot to formulate answers to your search queries by ingesting the top 100 (!) search results. It sits at the top of the page, pushing so-called blue link content even farther down below the ads and knowledge graph content. It doesn’t launch on every query, and sometimes it answers questions you didn’t ask—or even hallucinates a totally wrong answer.
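
Google hasn’t published the pipeline, but the broad retrieve-then-summarize pattern described above can be sketched in a few lines. The helper and data below are hypothetical stand-ins, not Google’s code; they just show how top-ranked results might be packed into a grounding prompt for a Gemini-class model.

```python
def build_overview_prompt(query: str, results: list[dict], max_docs: int = 100) -> str:
    """Assemble a grounding prompt from top-ranked results (the feature reportedly ingests up to 100)."""
    snippets = [
        f"[{i + 1}] {doc['title']} ({doc['url']}): {doc['snippet']}"
        for i, doc in enumerate(results[:max_docs])
    ]
    return (
        "Answer the query using only the numbered sources below, citing them by number.\n\n"
        f"Query: {query}\n\nSources:\n" + "\n".join(snippets)
    )

prompt = build_overview_prompt(
    "best foods to eat for a cold",
    [{"title": "Cold remedies", "url": "https://example.com/cold", "snippet": "Warm fluids and rest help..."}],
)
print(prompt)  # In the real feature, a model would then generate the overview from a prompt like this.
```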

And it’s not without some irony that Google’s laudable decision to de-rank synthetic AI slop comes at the same time that Google heavily promotes its own AI-generated content right at the top of SERPs.

AI Overviews appear right at the top of many search results. Credit: Google

What is Google getting for all of this AI work? More eyeballs, it would seem. “AI is driving more engagement than ever before on Google,” says Yu. BrightEdge data shows that impressions on Google are up nearly 50 percent since AI Overviews launched. Many of the opinions you hear about AI Overviews online are strongly negative, but that doesn’t mean people aren’t paying attention to the feature. In its Q1 2025 earnings report, Google announced that AI Overviews is being “used” by 1.5 billion people every month. (Since you can’t easily opt in or opt out of AI Overviews, this “usage” claim should be taken with a grain of salt.)

Interestingly, the impact of AI Overviews has varied across the web. In October 2024, Google was so pleased with AI Overviews that it expanded them to appear in more queries. And as AI crept into more queries, publishers saw a corresponding traffic drop. Yu estimates this drop to be around 30 percent on average for those with high AI query coverage. For searches that are less supported in AI Overviews—things like restaurants and financial services—the traffic change has been negligible. And there are always exceptions. Yu suggests that some large businesses with high AI Overview query coverage have seen much smaller drops in traffic because they rank extremely well as both AI citations and organic results.

Lower traffic isn’t the end of the world for some businesses. Last May, AI Overviews were largely absent from B2B queries, but that turned around in a big way in recent months. BrightEdge estimates that 70 percent of B2B searches now have AI answers, which has reduced traffic for many companies. Yu doesn’t think it’s all bad, though. “People don’t click through as much—they engage a lot more on the AI—but when they do click, the conversion rate for the business goes up,” Yu says. In theory, serious buyers click and window shoppers don’t.

But the Internet is not a giant mall that exists only for shoppers. It is, first and foremost, a place to share and find information, and AI Overviews have hit some purveyors of information quite hard. At launch, AI Overviews were heavily focused on “What is” and “How to” queries. Such “service content” is a staple of bloggers and big media alike, and these types of publishers aren’t looking for sales conversions—it’s traffic that matters. And they’re getting less of it because AI Overviews “helpfully” repackages and remixes their content, eliminating the need to click through to the site. Some publishers are righteously indignant, asking how it’s fair for Google to remix content it doesn’t own, and to do so without compensation.

But Google’s intentions don’t end with AI Overviews. Last week, the company started an expanded public test of so-called “AI Mode,” right from the front page. AI Mode doesn’t even bother with those blue links. It’s a chatbot experience that, at present, tries to answer your query without clearly citing sources inline. (On some occasions, it will mention Reddit or Wikipedia.) On the right side of the screen, Google provides a little box with three sites linked, which you can expand to see more options. To the end user, it’s utterly unclear if those are “sources,” “recommendations,” or “partner deals.”

Perhaps more surprisingly, in our testing, not a single AI Mode “sites box” listed a site that ranked on the first page for the same query on a regular search. That is, the links in AI Mode for “best foods to eat for a cold” don’t overlap at all with the SERP for the same query in Google Search. In fairness, AI Mode is very new, and its behavior will undoubtedly change. But the direction the company is headed seems clear.
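
The comparison itself is straightforward to reproduce: collect the sites linked in the AI Mode box, collect the first-page organic results for the same query, and measure the overlap. Here is a small sketch of that check with placeholder URLs standing in for real results.

```python
def ai_mode_overlap(ai_mode_links: set[str], organic_first_page: set[str]) -> float:
    """Fraction of AI Mode's linked sites that also appear on the first page of organic results."""
    if not ai_mode_links:
        return 0.0
    return len(ai_mode_links & organic_first_page) / len(ai_mode_links)

# Placeholder URLs for what each surface returned for "best foods to eat for a cold."
ai_links = {"https://site-a.example", "https://site-b.example", "https://site-c.example"}
organic = {"https://site-x.example", "https://site-y.example", "https://site-z.example"}
print(ai_mode_overlap(ai_links, organic))  # 0.0, i.e., zero overlap, as in our testing
```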

Google’s real goal is to keep you on Google or other Alphabet properties. In 2019, Rand Fishkin noticed that Google’s evolution from search engine to walled garden was at a tipping point. At that time—and for the first time—more than half of Google searches resulted in zero click-throughs to other sites. But data did show large numbers of clicks to Google’s own properties, like YouTube and Maps. If Google doesn’t intend to deliver a “zero-click” search experience, you wouldn’t know it from historical performance data or the new features the company develops.

You also wouldn’t know it from the way AI Overviews work. They do cite some of the sources used in building each output, and data suggests people click on those links. But are the citations accurate? Is every source used for constructing an AI Overview cited? We don’t really know, as Google is famously opaque about how its search works. We do know that Google uses a customized version of Gemini to support AI Overviews and that Gemini has been trained on billions and billions of webpages.

When AI Overviews do cite a source, it’s not clear how those sources came to be the ones cited. There’s good reason to be suspicious here: AI Overview’s output is not great, as witnessed by the numerous hallucinations we all know and love (telling people to eat rocks, for instance). The only thing we know for sure is that Google isn’t transparent about any of this.

No signs of slowing

Despite all of that, Google is not slowing down on AI in search. More recent core updates have only solidified this new arrangement with an ever-increasing number of AI-answered queries. The company appears OK with its current accuracy problems, or at the very least, it’s comfortable enough to push out AI updates anyway. Google appears to have been caught entirely off guard by the public launch of ChatGPT, and it’s now utilizing its search dominance to play catch-up.

To make matters even more dicey, Google isn’t even trying to address the biggest issue in all this: The company’s quest for zero-click search harms the very content creators upon which the company has built its empire.

For its part, Google has been celebrating its AI developments, insisting that content producers don’t know what’s best for them and dismissing any concerns with comments about search volume increases and ever-more-complex search query strings. The changes must be working!

Google has been building toward this moment for years. The company started with a list of 10 blue links and nothing else, but little by little, it pushed the links down the page and added more content that keeps people in the Google ecosystem. Way back in 2007, Google added Universal Search, which allowed it to insert content from Google Maps, YouTube, and other services. In 2009, Rich Snippets began displaying more data from search results on SERPs. In 2012, the Knowledge Graph began extracting data from search results to display answers directly on the results page. Each change kept people on Google longer and reduced click-throughs, all the while pushing the search results down the page.

AI Overviews, and especially AI Mode, are the logical outcome of Google’s yearslong transformation from an indexer of information to an insular web portal built on scraping content from around the web. Earlier in Google’s evolution, the implicit agreement was that websites would allow Google to crawl their pages in exchange for sending them traffic. That relationship has become strained as the company has kept more traffic for itself, reducing click-throughs to websites even as search volume continues to increase. And locking Google out isn’t a realistic option when the company controls almost the entire search market.

Even when Google has taken a friendlier approach, business concerns could get in the way. During the search antitrust trial, documents showed that Google initially intended to let sites opt out of being used for AI training for its search-based AI features—but these sites would still be included in search results. The company ultimately canned that idea, leaving site operators with the Pyrrhic choice of participating in the AI “revolution” or becoming invisible on the web. Google now competes with, rather than supports, the open web.

When many of us look at Google’s search results today, the vibe feels off. Maybe it’s the AI, maybe it’s Google’s algorithm, or maybe the Internet just isn’t what it once was. Whatever the cause, the shift toward zero-click search that began more than a decade ago was made clear by the March 2024 core update, and it has only accelerated with the launch of AI Mode. Even businesses that have escaped major traffic drops from AI Overviews could soon find that Google’s AI-only search can get much more overbearing.

The AI slop will continue until morale improves.

Ryan Whitwam is a senior technology reporter at Ars Technica, covering the ways Google, AI, and mobile technology continue to change the world. Over his 20-year career, he’s written for Android Police, ExtremeTech, Wirecutter, NY Times, and more. He has reviewed more phones than most people will ever own. You can follow him on Bluesky, where you will see photos of his dozens of mechanical keyboards.

New Orleans called out for sketchiest use of facial recognition yet in the US

According to police records submitted to the city council, the network “only proved useful in a single case.” Investigating the tension between these claims, the Post suggested we may never know how many suspects were misidentified or what steps police took to ensure responsible use of the controversial live feeds.

In the US, New Orleans stands out for taking a step further than law enforcement in other regions by using live feeds from facial recognition cameras to make immediate arrests, the Post noted. The Security Industry Association told the Post that four states—Maryland, Montana, Vermont, and Virginia—and 19 cities nationwide “explicitly bar” the practice.

Lagarde told the Post that police cannot “directly” search for suspects on the camera network or add suspects to the watchlist in real time. Reese Harper, an NOPD spokesperson, told the Post that his department “does not own, rely on, manage, or condone the use by members of the department of any artificial intelligence systems associated with the vast network of Project Nola crime cameras.”

In a federally mandated 2023 audit, New Orleans police complained that complying with the ordinance took too long and “often” resulted in no matches. That could mean the tech is flawed, or it could be a sign that the process was working as intended to prevent wrongful arrests.

The Post noted that in total, “at least eight Americans have been wrongfully arrested due to facial recognition,” as both police and AI software rushing arrests are prone to making mistakes.

“By adopting this system–in secret, without safeguards, and at tremendous threat to our privacy and security–the City of New Orleans has crossed a thick red line,” Wessler said. “This is the stuff of authoritarian surveillance states and has no place in American policing.”

Project Nola did not immediately respond to Ars’ request to comment.

xAI says an “unauthorized” prompt change caused Grok to focus on “white genocide”

When analyzing social media posts made by others, Grok is given the somewhat contradictory instructions to “provide truthful and based insights [emphasis added], challenging mainstream narratives if necessary, but remain objective.” Grok is also instructed to incorporate scientific studies and prioritize peer-reviewed data but also to “be critical of sources to avoid bias.”

Grok’s brief “white genocide” obsession highlights just how easy it is to heavily twist an LLM’s “default” behavior with just a few core instructions. Conversational interfaces for LLMs in general are essentially a gnarly hack for systems intended to generate the next likely words to follow strings of input text. Layering a “helpful assistant” faux personality on top of that basic functionality, as most LLMs do in some form, can lead to all sorts of unexpected behaviors without careful additional prompting and design.

The 2,000+ word system prompt for Anthropic’s Claude 3.7, for instance, includes entire paragraphs for how to handle specific situations like counting tasks, “obscure” knowledge topics, and “classic puzzles.” It also includes specific instructions for how to project its own self-image publicly: “Claude engages with questions about its own consciousness, experience, emotions and so on as open philosophical questions, without claiming certainty either way.”
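
Stripped of vendor specifics, that layered persona is mostly a hidden system message prepended to every conversation before the model predicts the next tokens. The sketch below uses a generic role convention rather than any particular vendor’s API, and the prompt text is invented for illustration.

```python
# A chat "personality" is largely a system message the user never sees, stacked in front of the conversation.
SYSTEM_PROMPT = (
    "You are a helpful assistant. Treat questions about your own consciousness as open "
    "philosophical questions, without claiming certainty either way."
)

def build_conversation(system: str, history: list[tuple[str, str]], user_msg: str) -> list[dict]:
    """Flatten persona + prior turns + the new message into the sequence the model actually conditions on."""
    messages = [{"role": "system", "content": system}]
    messages += [{"role": role, "content": text} for role, text in history]
    messages.append({"role": "user", "content": user_msg})
    return messages

convo = build_conversation(SYSTEM_PROMPT, [("user", "Hi"), ("assistant", "Hello!")], "Are you conscious?")
for message in convo:
    print(f"{message['role']}: {message['content'][:70]}")
```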

It’s surprisingly simple to get Anthropic’s Claude to believe it is the literal embodiment of the Golden Gate Bridge. Credit: Anthropic

Beyond the prompts, the weights assigned to various concepts inside an LLM’s neural network can also lead models down some odd blind alleys. Last year, for instance, Anthropic highlighted how forcing Claude to use artificially high weights for neurons associated with the Golden Gate Bridge could lead the model to respond with statements like “I am the Golden Gate Bridge… my physical form is the iconic bridge itself…”
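
Anthropic used its own interpretability tooling for that demo, but the general idea of steering a model by boosting an internal direction can be sketched with a toy network and a PyTorch forward hook. The “concept direction” below is random and stands in for a feature an interpretability method might actually identify.

```python
import torch
from torch import nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 8))

# Stand-in for a learned feature direction (e.g., "Golden Gate Bridge") in the hidden layer.
concept_direction = torch.randn(32)
steering_strength = 10.0  # an artificially high weight, mirroring the experiment the article cites

def steer(module, inputs, output):
    # Forward hook: push the hidden activation along the concept direction before it flows onward.
    return output + steering_strength * concept_direction

handle = model[0].register_forward_hook(steer)
x = torch.randn(1, 16)
print("steered output:", model(x))
handle.remove()
print("normal output: ", model(x))
```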

Incidents like Grok’s this week are a good reminder that, despite their compellingly human conversational interfaces, LLMs don’t really “think” or respond to instructions like humans do. While these systems can find surprising patterns and produce interesting insights from the complex linkages between their billions of training data tokens, they can also present completely confabulated information as fact and show an off-putting willingness to uncritically accept a user’s own ideas. Far from being all-knowing oracles, these systems can show biases in their actions that can be much harder to detect than Grok’s recent overt “white genocide” obsession.

Meta is making users who opted out of AI training opt out again, watchdog says

Noyb has requested a response from Meta by May 21, but it seems unlikely that Meta will quickly cave in this fight.

In a blog post, Meta said that AI training on EU users was critical to building AI tools for Europeans that are informed by “everything from dialects and colloquialisms, to hyper-local knowledge and the distinct ways different countries use humor and sarcasm on our products.”

Meta argued that its AI training efforts in the EU are far more transparent than efforts from competitors Google and OpenAI, which, Meta noted, “have already used data from European users to train their AI models,” supposedly without taking the steps Meta has to inform users.

Also echoing a common refrain in the AI industry, another Meta blog warned that efforts to further delay Meta’s AI training in the EU could lead to “major setbacks,” pushing the EU behind rivals in the AI race.

“Without a reform and simplification of the European regulatory system, Europe threatens to fall further and further behind in the global AI race and lose ground compared to the USA and China,” Meta warned.

Noyb dismissed this argument and noted that it can pursue injunctions in various jurisdictions to block Meta’s plan. The group said it’s currently evaluating options to seek injunctive relief and potentially even pursue a class action worth possibly “billions in damages” to ensure that 400 million monthly active EU users’ data rights are shielded from Meta’s perceived grab.

A Meta spokesperson told Ars that the company’s plan “follows extensive and ongoing engagement with the Irish Data Protection Commission,” while reiterating Meta’s statements in blogs that its AI training approach “reflects consensus among” EU Data Protection Authorities (DPAs).

But while Meta claims that EU regulators have greenlit its AI training plans, Noyb argues that national DPAs have “largely stayed silent on the legality of AI training without consent,” and Meta seems to have “simply moved ahead anyways.”

“This fight is essentially about whether to ask people for consent or simply take their data without it,” Schrems said, adding, “Meta’s absurd claims that stealing everyone’s personal data is necessary for AI training is laughable. Other AI providers do not use social network data—and generate even better models than Meta.”

Google hits back after Apple exec says AI is hurting search

The antitrust trial targeting Google’s search business is heading into the home stretch, and the outcome could forever alter Google—and the web itself. The company is scrambling to protect its search empire, but perhaps market forces could pull the rug out from under Google before the government can. Apple SVP of Services Eddy Cue suggested in his testimony on Wednesday that Google’s search traffic might be falling. Not so fast, says Google.

In an unusual move, Google issued a statement late in the day after Cue’s testimony to dispute the implication that it may already be losing its monopoly. During questioning by DOJ attorney Adam Severt, Cue expressed concern about losing the Google search deal, which is a major source of revenue for Apple. This contract, along with a similar one for Firefox, gives Google default search placement in exchange for a boatload of cash. The DOJ contends that is anticompetitive, and its proposed remedies call for banning Google from such deals.

Surprisingly, Cue noted in his testimony that search volume in Safari fell for the first time ever in April. Since Google is the default search provider, that implies fewer Google searches. Apple devices are popular, and a drop in Google searches there could be a bad sign for the company’s future competitiveness. Google’s statement on this comes off as a bit defensive.

Largest deepfake porn site shuts down forever

The shuttering of Mr. Deepfakes won’t solve the problem of deepfakes, though. In 2022, the number of deepfakes skyrocketed as AI technology made the synthetic NCII appear more realistic than ever, prompting an FBI warning in 2023 to alert the public that the fake content was being increasingly used in sextortion schemes. But the immediate solutions society used to stop the spread had little impact. For example, in response to pressure to make the fake NCII harder to find, Google started downranking explicit deepfakes in search results but refused to demote platforms like Mr. Deepfakes unless Google received an unspecified “high volume of removals for fake explicit imagery.”

According to researchers, Mr. Deepfakes—a real person who remains anonymous but reportedly is a 36-year-old hospital worker in Toronto—created the engine driving this spike. His DeepFaceLab quickly became “the leading deepfake software, estimated to be the software behind 95 percent of all deepfake videos and has been replicated over 8,000 times on GitHub,” researchers found. For casual users, his platform hosted videos that could be purchased, usually priced above $50 if they were deemed realistic, while more motivated users relied on forums to make requests or enhance their own deepfake skills to become creators.

Mr. Deepfakes’ illegal trade began on Reddit but migrated to its own platform after a ban in 2018. There, thousands of deepfake creators shared technical knowledge, with the Mr. Deepfakes site forums eventually becoming “the only viable source of technical support for creating sexual deepfakes,” researchers noted last year.

Having migrated once before, the community seems likely to find a new platform to continue generating the illicit content, possibly resurfacing under a new name since Mr. Deepfakes seemingly wants out of the spotlight. Back in 2023, researchers estimated that the platform had more than 250,000 members, many of whom may quickly seek out a replacement or even try to build one.

Further increasing the likelihood that Mr. Deepfakes’ reign of terror isn’t over, the DeepFaceLab GitHub repository—which was archived in November and can no longer be edited—remains available for anyone to copy and use.

404 Media reported that many Mr. Deepfakes members have already connected on Telegram, where synthetic NCII is also reportedly frequently traded. Hany Farid, a professor at UC Berkeley who is a leading expert on digitally manipulated images, told 404 Media that “while this takedown is a good start, there are many more just like this one, so let’s not stop here.”

Google is quietly testing ads in AI chatbots

Google has built an enormously successful business around the idea of putting ads in search results. Its most recent quarterly results showed the company made more than $50 billion from search ads, but what happens if AI becomes the dominant form of finding information? Google is preparing for that possibility by testing chatbot ads, but you won’t see them in Google’s Gemini AI—at least not yet.

A report from Bloomberg describes how Google began working on a plan in 2024 to adapt AdSense ads to a chatbot experience. Usually, AdSense ads appear in search results and are scattered around websites. Google ran a small test of chatbot ads late last year, partnering with select AI startups, including AI search apps iAsk and Liner.

The testing must have gone well because Google is now allowing more chatbot makers to sign up for AdSense. “AdSense for Search is available for websites that want to show relevant ads in their conversational AI experiences,” said a Google spokesperson.

If people continue shifting to using AI chatbots to find information, this expansion of AdSense could help prop up profits. There’s no hint of advertising in Google’s own Gemini chatbot or AI Mode search, but the day may be coming when you won’t get the clean, ad-free experience at no cost.

A path to profit

Google is racing to catch up to OpenAI, which has a substantial lead in chatbot market share despite Gemini’s recent growth. This has led Google to freely provide some of its most capable AI tools, including Deep Research, Gemini Pro, and Veo 2 video generation. There are limits to how much you can use most of these features with a free account, but it must be costing Google a boatload of cash.

First Amendment doesn’t just protect human speech, chatbot maker argues


Do LLMs generate “pure speech”?

Feds could censor chatbots if their “speech” isn’t protected, Character.AI says.

Pushing to dismiss a lawsuit alleging that its chatbots caused a teen’s suicide, Character Technologies is arguing that chatbot outputs should be considered “pure speech” deserving of the highest degree of protection under the First Amendment.

In their motion to dismiss, the developers of Character.AI (C.AI) argued that it doesn’t matter who the speaker is—whether it’s a video game character spouting scripted dialogue, a foreign propagandist circulating misinformation, or a chatbot churning out AI-generated responses to prompting—courts protect listeners’ rights to access that speech. Accusing the mother of the deceased teen, Megan Garcia, of attempting to “insert this Court into the conversations of millions of C.AI users” and supposedly endeavoring to “shut down” C.AI, the chatbot maker argued that the First Amendment bars all of her claims.

“The Court need not wrestle with the novel questions of who should be deemed the speaker of the allegedly harmful content here and whether that speaker has First Amendment rights,” Character Technologies argued, “because the First Amendment protects the public’s ‘right to receive information and ideas.'”

Warning that “imposing tort liability for one user’s alleged response to expressive content would be to ‘declare what the rest of the country can and cannot read, watch, and hear,’” the company urged the court to consider the supposed “chilling effect” such liability would have “both on C.AI and the entire nascent generative AI industry.”

“‘Pure speech,’ such as the chat conversations at issue here, ‘is entitled to comprehensive protection under the First Amendment,'” Character Technologies argued in another court filing.

However, Garcia’s lawyers pointed out that even a video game character’s dialogue is written by a human, arguing that all of Character Technologies’ examples of protected “pure speech” are human speech. Although the First Amendment also protects non-human corporations’ speech, corporations are formed by humans, they noted. And unlike corporations, chatbots have no intention behind their outputs, her legal team argued, instead simply using a probabilistic approach to generate text. So they argue that the First Amendment does not apply.

Character Technologies argued in response that demonstrating C.AI’s expressive intent is not required, but if it were, “conversations with Characters feature such intent” because chatbots are designed to “be expressive and engaging,” and users help design and prompt those characters.

“Users layer their own expressive intent into each conversation by choosing which Characters to talk to and what messages to send and can also edit Characters’ messages and direct Characters to generate different responses,” the chatbot maker argued.

In her response opposing the motion to dismiss, Garcia urged the court to decline what her legal team characterized as Character Technologies’ invitation to “radically expand First Amendment protections from expressions of human volition to an unpredictable, non-determinative system where humans can’t even examine many of the mathematical functions creating outputs, let alone control them.”

To support Garcia’s case, they cited a 40-year-old ruling where the Eleventh Circuit ruled that a talking cat called “Blackie” could not be “considered a person” and was deemed a “non-human entity” despite possessing an “exceptional speech-like ability.”

Garcia’s lawyers hope the judge will rule that “AI output is not speech at all,” or if it is speech, it “falls within an exception to the First Amendment”—perhaps deemed offensive to minors who the chatbot maker knew were using the service or possibly resulting in a novel finding that manipulative speech isn’t protected. If either argument is accepted, the chatbot makers’ attempt to invoke “listeners’ rights cannot save it,” they suggested.

However, Character Technologies disputes that any recognized exception to the First Amendment’s protections is applicable in the case, noting that Garcia’s team is not arguing that her son’s chats with bots were “obscene” or incited violence. Rather, the chatbot maker argued, Garcia is asking the court to “be the first to hold that ‘manipulative expression’ is unprotected by the First Amendment because a ‘disparity in power and information between speakers and listeners… frustrat[es] listeners’ rights.'”

Now, a US court is being asked to clarify if chatbot outputs are protected speech. At a hearing Monday, a US district judge in Florida, Anne Conway, did not rule from the bench, Garcia’s legal team told Ars. Asking few questions of either side, the judge is expected to issue an opinion on the motion to dismiss within the next few weeks, or possibly months.

For Garcia and her family, who appeared at the hearing, the idea that AI “has more rights than humans” felt dehumanizing, Garcia’s legal team said.

“Pandering” to Trump administration to dodge guardrails

According to Character Technologies, the court potentially agreeing with Garcia “that AI-generated speech is categorically unprotected” would have “far-reaching consequences.”

At perhaps the furthest extreme, they’ve warned Conway that without a First Amendment barrier, “the government could pass a law prohibiting AI from ‘offering prohibited accounts of history’ or ‘making negative statements about the nation’s leaders,’ as China has considered doing.” And the First Amendment specifically prohibits the government from controlling the flow of ideas in society, they noted, angling to make chatbot output protections seem crucial in today’s political climate.

Meetali Jain, Garcia’s attorney and founder of the Tech Justice Law Project, told Ars that this kind of legal challenge is new in the generative AI space, where copyright battles have dominated courtroom debates.

“This is the first time that I’ve seen not just the issue of the First Amendment being applied to gen AI but also the First Amendment being applied in this way,” Jain said.

In their court filing, Jain’s team noted that Character Technologies is not arguing that the First Amendment shielded the rights of Garcia’s son, Sewell Setzer, to receive allegedly harmful speech. Instead, their argument is “effectively juxtaposing the listeners’ rights of their millions of users against this one user who was aggrieved. So it’s kind of like the hypothetical users versus the real user who’s in court.”

Jain told Ars that Garcia’s team tried to convince the judge that the argument that it doesn’t matter who the speaker is, even when the speaker isn’t human, is reckless since it seems to be “implying” that “AI is a sentient being and has its own rights.”

Additionally, Jain suggested that Character Technologies’ argument that outputs must be shielded to avoid government censorship seems to be “pandering” to the Trump administration’s fears that China may try to influence American politics through social media algorithms like TikTok’s or powerful open source AI models like DeepSeek.

“That suggests that there can be no sort of imposition of guardrails on AI, lest we either lose on the national security front or because of these vague hypothetical under-theorized First Amendment concerns,” Jain told Ars.

At a press briefing Tuesday, Jain confirmed that the judge clearly understood that “our position was that the First Amendment protects speech, not words.”

“LLMs do not think and feel as humans do,” Jain said, citing University of Colorado law school researchers who supported their complaint. “Rather, they generate text through statistical methods based on patterns found in their training data. And so our position was that there is a distinction to make between words and speech, and that it’s really only the latter that is deserving of First Amendment protection.”

Jain alleged that Character Technologies is angling to create a legal environment where all chatbot outputs are protected against liability claims so that C.AI can operate “without any sort of constraints or guardrails.”

It’s notable, she suggested, that the chatbot maker updated its safety features following the death of Garcia’s son, Sewell Setzer. A C.AI blog mourned the “tragic loss of one of our users” and noted updates, including changes “to reduce the likelihood of encountering sensitive or suggestive content,” improved detection and intervention in harmful chat sessions, and “a revised disclaimer on every chat to remind users that the AI is not a real person.”

Although Character Technologies argues that it’s common to update safety practices over time, Garcia’s team alleged these updates show that C.AI could have made a safer product and chose not to.

Expert warns against giving AI products rights

Character Technologies has also argued that C.AI is not a “product” as Florida law defines it. That has striking industry implications, according to Camille Carlton, a policy director for the Center for Humane Technology who is serving as a technical expert on the case.

At the press briefing, Carlton suggested that “by invoking these First Amendment protections over speech without really specifying whose speech is being protected, Character.AI’s defense has really laid the groundwork for a world in which LLM outputs are protected speech and for a world in which AI products could have other protected rights in the same way that humans do.”

Since chatbot outputs seemingly don’t have Section 230 protections—Jain noted it was somewhat surprising that Character Technologies did not raise this defense—the chatbot maker may be attempting to secure the First Amendment as a shield instead, Carlton suggested.

“It’s a move that they’re incentivized to take because it would reduce their own accountability and their own responsibility,” Carlton said.

Jain expects that whatever Conway decides, the losing side will appeal. However, if Conway denies the motion, then discovery can begin, perhaps allowing Garcia the clearest view yet into the allegedly harmful chats she believes manipulated her son into feeling completely disconnected from the real world.

If courts grant AI products across the board such rights, Carlton warned, troubled parents like Garcia may have no recourse for potentially dangerous outputs.

“This issue could fundamentally reshape how the law approaches AI free speech and corporate accountability,” Carlton said. “And I think the bottom line from our perspective—and from what we’re seeing in terms of the trends in Character.AI and the broader trends from these AI labs—is that we need to double down on the fact that these are products. They’re not people.”

Character Technologies declined Ars’ request to comment.

If you or someone you know is feeling suicidal or in distress, please call the Suicide Prevention Lifeline number, 1-800-273-TALK (8255), which will put you in touch with a local crisis center.

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

Redditor accidentally reinvents discarded ’90s tool to escape today’s age gates


The ’90s called. They want their flawed age verification methods back.

Credit: Aurich Lawson | Getty Images

Back in the mid-1990s, when The Net was among the top box office draws and Americans were just starting to flock online in droves, kids had to swipe their parents’ credit cards or find a fraudulent number online to access adult content on the web. But today’s kids—even in states with the strictest age verification laws—know they can just use Google.

Last month, a study analyzing the relative popularity of Google search terms found that age verification laws shift users’ search behavior. It’s impossible to tell if the shift represents young users attempting to circumvent the child-focused law or adult users who aren’t the actual target of the laws. But overall, enforcement causes nearly half of users to stop searching for popular adult sites complying with laws and instead search for a noncompliant rival (48 percent) or virtual private network (VPN) services (34 percent), which are used to mask a location and circumvent age checks on preferred sites, the study found.

“Individuals adapt primarily by moving to content providers that do not require age verification,” the study concluded.

Although the Google Trends data prevented researchers from analyzing trends by particular age groups, the findings help confirm critics’ fears that age verification laws “may be ineffective, potentially compromise user privacy, and could drive users toward less regulated, potentially more dangerous platforms,” the study said.

The authors warn that lawmakers are not relying enough on evidence-backed policy evaluations to truly understand the consequences of circumvention strategies before passing laws. Internet law expert Eric Goldman recently warned in an analysis of age-estimation tech available today that this situation creates a world in which some kids are likely to be harmed by the laws designed to protect them.

Goldman told Ars that all of the age check methods carry the same privacy and security flaws, concluding that technology alone can’t solve this age-old societal problem. And logic-defying laws that push for them could end up “dramatically” reshaping the Internet, he warned.

Zeve Sanderson, a co-author of the Google Trends study, told Ars that “if you’re a policymaker, in addition to being potentially nervous about the more dangerous content, it’s also about just benefiting a noncompliant firm.”

“You don’t want to create a regulatory environment where noncompliance is incentivized or they benefit in some way,” Sanderson said.

Sanderson’s study pointed out that search data is only part of the picture. Some users may be using VPNs and accessing adult sites through direct URLs rather than through search. Others may rely on social media to find adult content, a 2025 conference paper noted, “easily” bypassing age checks on the largest platforms. VPNs remain the most popular circumvention method, a 2024 article in the International Journal of Law, Ethics, and Technology confirmed, “and yet they tend to be ignored or overlooked by statutes despite their popularity.”

While kids are ducking age gates and likely putting their sensitive data at greater risk, adult backlash may be peaking over the red wave of age-gating laws already blocking adults from visiting popular porn sites in several states.

Some states controversially began requiring ID checks to access adult content, which prompted Pornhub owner Aylo to swiftly block access to its sites in those states. Pornhub instead advocates for device-based age verification, which it claims is a safer choice.

Aylo’s campaign has seemingly won over some states that either explicitly recommend device-based age checks or allow platforms to adopt whatever age check method they deem “reasonable.” Other methods could include app store-based age checks, algorithmic age estimation (based on a user’s web activity), face scans, or even tools that guess users’ ages based on hand movements.

On Reddit, adults have spent the past year debating the least intrusive age verification methods, as it appears inevitable that adult content will stay locked down, and they dread a future where more and more adult sites might ask for IDs. Additionally, critics have warned that showing an ID magnifies the risk of users publicly exposing their sexual preferences if a data breach or leak occurs.

To avoid that fate, at least one Redditor has attempted to reinvent the earliest age verification method, promoting a resurgence of credit card-based age checks that society discarded as unconstitutional in the early 2000s.

Under those systems, an entire industry of age verification companies emerged, selling passcodes to access adult sites for a supposedly nominal fee. The logic was simple: Only adults could buy credit cards, so only adults could buy passcodes with credit cards.

If “a person buys, for a nominal fee, a randomly generated passcode not connected to them in any way” to access adult sites, one Redditor suggested about three months ago, “there won’t be any way to tie the individual to that passcode.”

“This could satisfy the requirement to keep stuff out of minors’ hands,” the Redditor wrote in a thread asking how any site featuring sexual imagery could hypothetically comply with US laws. “Maybe?”
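
Mechanically, the scheme the Redditor describes is easy to sketch: charge a card once, hand back a random code, and store only a hash of that code with no record of who bought it. The snippet below is a hypothetical illustration of that idea, not any real verifier’s implementation.

```python
import hashlib
import secrets

VALID_PASSCODE_HASHES: set[str] = set()  # the only thing the verifier keeps; no names, no card numbers

def issue_passcode() -> str:
    """Called after a card-verified purchase; the raw code goes to the buyer and is never stored."""
    code = secrets.token_urlsafe(16)
    VALID_PASSCODE_HASHES.add(hashlib.sha256(code.encode()).hexdigest())
    return code

def check_passcode(code: str) -> bool:
    return hashlib.sha256(code.encode()).hexdigest() in VALID_PASSCODE_HASHES

code = issue_passcode()
print(check_passcode(code))        # True: a paid-for, anonymous code
print(check_passcode("guess123"))  # False
```

Of course, nothing in that scheme stops an adult from handing the code to a teenager, which is exactly the failure mode the replies went on to describe.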

Several users rushed to educate the Redditor about the history of age checks. Those grasping for purely technology-based solutions today could be propping up the next industry flourishing from flawed laws, they said.

And, of course, since ’90s kids easily ducked those age gates, too, history shows why investing millions to build the latest and greatest age verification systems probably remains a fool’s errand after all these years.

The cringey early history of age checks

The earliest age verification systems were born out of Congress’s “first attempt to outlaw pornography online,” the LA Times reported. That attempt culminated in the Communications Decency Act of 1996.

Although the law was largely overturned a year later, the million-dollar age verification industry was already entrenched, partly due to its intriguing business model. These companies didn’t charge adult sites any fee to add age check systems—which required little technical expertise to implement—and instead shared a big chunk of their revenue with porn sites that opted in. Some sites got 50 percent of revenues, estimated in the millions, simply for adding the functionality.

The age check business was apparently so lucrative that in 2000, one adult site, which was sued for distributing pornographic images of children, pushed fans to buy subscriptions to its preferred service as a way of helping to fund its defense, Wired reported. “Please buy an Adult Check ID, and show your support to fight this injustice!” the site urged users. (The age check service promptly denied any association with the site.)

In a sense, the age check industry incentivized adult sites’ growth, an American Civil Liberties Union attorney told the LA Times in 1999. In turn, that fueled further growth in the age verification industry.

Some services made their link to adult sites obvious, like Porno Press, which charged a one-time fee of $9.95 to access affiliated adult sites, a Congressional filing noted. But many others tried to mask the link, opting for names like PayCom Billing Services, Inc. or CCBill, as Forbes reported, perhaps enticing more customers by drawing less attention on a credit card statement. Other firms had names like Adult Check, Mancheck, and Adult Sights, Wired reported.

Of these firms, the biggest and most successful was Adult Check. At its peak popularity in 2001, the service boasted 4 million customers willing to pay “for the privilege of ogling 400,000 sex sites,” Forbes reported.

At the head of the company was Laith P. Alsarraf, the CEO of the Adult Check service provider Cybernet Ventures.

Alsarraf testified to Congress several times, becoming a go-to expert witness for lawmakers behind the 1998 Child Online Protection Act (COPA). Like the version of the CDA that prompted it, this act was ultimately deemed unconstitutional. And some judges and top law enforcement officers defended Alsarraf’s business model with Adult Check in court—insisting that it didn’t impact adult speech and “at most” posed a “modest burden” that was “outweighed by the government’s compelling interest in shielding minors” from adult content.

But his apparent conflicts of interest also drew criticism. One judge warned in 1999 that “perhaps we do the minors of this country harm if First Amendment protections, which they will with age inherit fully, are chipped away in the name of their protection,” the American Civil Liberties Union (ACLU) noted.

Summing up the seeming conflict, Ann Beeson, an ACLU lawyer, told the LA Times, “the government wants to shut down porn on the Net. And yet their main witness is this guy who makes his money urging more and more people to access porn on the Net.”

’90s kids dodged Adult Check age gates

Adult Check’s subscription costs varied, but the service predictably got more expensive as its popularity spiked. In 1999, customers could snag a “lifetime membership” for $76.95 or else fork over $30 every two years or $20 annually, the LA Times reported. Those were good deals compared to the significantly higher costs documented in the 2001 Forbes report, which noted a three-month package was available for $20, or users could pay $20 monthly to access supposedly premium content.

Among Adult Check’s customers were apparently some savvy kids who snuck through the cracks in the system. In various threads debating today’s laws, several Redditors have claimed that they used Adult Check as minors in the ’90s, either admitting to stealing a parent’s credit card or sharing age-authenticated passcodes with friends.

“Adult Check? I remember signing up for that in the mid-late 90s,” one commenter wrote in a thread asking if anyone would ever show ID to access porn. “Possibly a minor friend of mine paid for half the fee so he could use it too.”

“Those years were a strange time,” the commenter continued. “We’d go see tech-suspense-horror-thrillers like The Net and Disclosure where the protagonist has to fight to reclaim their lives from cyberantagonists, only to come home to send our personal information along with a credit card payment so we could look at porn.”

“LOL. I remember paying for the lifetime package, thinking I’d use it for decades,” another commenter responded. “Doh…”

Adult Check thrived even without age check laws

Sanderson’s study noted that today, minors’ “first exposure [to adult content] typically occurs between ages 11–13,” which is “substantially earlier than pre-Internet estimates.” Kids seeking out adult content may be in a period of heightened risk-taking or lack self-control, while others may be exposed without ever seeking it out. Some studies suggest that kids who are more likely to seek out adult content could struggle with lower self-esteem, emotional problems, body image concerns, or depressive symptoms. These potential negative associations with adolescent exposure to porn have long been the basis for lawmakers’ fight to keep the content away from kids—and even the biggest publishers today, like Pornhub, agree that it’s a worthy goal.

After parents got wise to ’90s kids dodging age gates, pressure predictably mounted on Adult Check to solve the problem, despite Adult Check consistently admitting that its system wasn’t foolproof. Alsarraf claimed that Adult Check developed “proprietary” technology to detect when kids were using credit cards or when multiple kids were attempting to use the same passcode at the same time from different IP addresses. He also claimed that Adult Check could detect stolen credit cards, bogus card numbers, card numbers “posted on the Internet,” and other fraud.
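
Alsarraf never explained how that detection worked, but the simplest version of the heuristic he described, flagging a passcode seen from different IP addresses within a short window, looks something like the hypothetical sketch below (the window and data are invented).

```python
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=10)
recent_uses: dict[str, list[tuple[datetime, str]]] = defaultdict(list)

def looks_shared(passcode: str, ip: str, when: datetime) -> bool:
    """Return True if this passcode was used from more than one IP inside the time window."""
    uses = [u for u in recent_uses[passcode] if when - u[0] <= WINDOW]
    uses.append((when, ip))
    recent_uses[passcode] = uses
    return len({addr for _, addr in uses}) > 1

now = datetime(1999, 6, 1, 20, 0)
print(looks_shared("abc123", "203.0.113.5", now))                          # False: first sighting
print(looks_shared("abc123", "198.51.100.7", now + timedelta(minutes=3)))  # True: same code, second IP
```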

Meanwhile, the LA Times noted, Cybernet Ventures pulled in an estimated $50 million in 1999, ensuring that the CEO could splurge on a $690,000 house in Pasadena and a $100,000 Hummer. Although Adult Check was believed to be his most profitable venture at that time, Alsarraf told the LA Times that he wasn’t really invested in COPA passing.

“I know Adult Check will flourish,” Alsarraf said, “with or without the law.”

And he was apparently right. By 2001, subscriptions banked an estimated $320 million.

After the CDA and COPA were blocked, “many website owners continue to use Adult Check as a responsible approach to content accessibility,” Alsarraf testified.

While adult sites were likely just in it for the paychecks—which reportedly were dependably delivered—he positioned this ongoing growth as fueled by sites voluntarily turning to Adult Check to protect kids and free speech. “Adult Check allows a free flow of ideas and constitutionally protected speech to course through the Internet without censorship and unreasonable intrusion,” Alsarraf said.

“The Adult Check system is the least restrictive, least intrusive method of restricting access to content that requires minimal cost, and no parental technical expertise and intervention: It does not judge content, does not inhibit free speech, and it does not prevent access to any ideas, word, thoughts, or expressions,” Alsarraf testified.

Britney Spears aided Adult Check’s downfall

Adult Check’s downfall ultimately came in part thanks to Britney Spears, Wired reported in 2002. Spears went from Mickey Mouse Club child star to the “Princess of Pop” at 16 years old with her hit “Baby One More Time” in 1999, the same year that Adult Check rose to prominence.

Today, Spears is well-known for her activism, but in the late 1990s and early 2000s, she was one of the earliest victims of fake online porn.

Spears submitted documents in a lawsuit filed by the publisher of a porn magazine called Perfect 10. The publisher accused Adult Check of enabling the infringement of its content featured on the age check provider’s partner sites, and Spears’ documents helped prove that Adult Check was also linking to “non-existent nude photos,” allegedly in violation of unfair competition laws. The case was an early test of online liability, and Adult Check seemingly learned the hard way that the courts weren’t on its side.

That suit prompted an injunction blocking Adult Check from partnering with sites promoting supposedly illicit photos of “models and celebrities,” which the company shrugged off because such sites made up only about 6 percent of its business.

However, after Adult Check lost the lawsuit in 2004, its reputation took a hit, and it fell out of the pop lexicon. Although Cybernet Ventures continued to exist, sites dropped Adult Check screening, which was no longer considered the gold standard in age verification. Perhaps more importantly, it was no longer required by law.

But although millions of people subscribed to Adult Check over the years, not everybody in the ’90s bought its claim that it was protecting kids from porn. Some critics said it provided only a veneer of online safety without meaningfully keeping kids away from adult content. Most of the country, more than 250 million US residents, never subscribed.

“I never used Adult Check,” one Redditor said in a thread pondering whether age gate laws might increase the risks of government surveillance. “My recollection was that it was an untrustworthy scam and unneeded barrier for the theater of legitimacy.”

Alsarraf keeps a lower profile these days and did not respond to Ars’ request for comment.

The rise and fall of Adult Check may have prevented more legally viable age verification systems from gaining traction. The ACLU argued that its popularity trampled the momentum of the “least restrictive” method for age checks available in the ’90s, a system called the Platform for Internet Content Selection (PICS).

Based on rating and filtering technology, PICS allowed content providers or third-party interest groups to create private rating systems so that “individual users can then choose the rating system that best reflects their own values, and any material that offends them will be blocked from their homes.”
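PICS defined a real label syntax for embedding those ratings in pages, but the core mechanism is simple enough to sketch in a few lines: the client compares a page’s ratings against the limits the user chose and blocks anything that exceeds them. The category names and numbers below are made up for illustration and are not the actual PICS vocabulary or syntax.

```python
# Illustrative sketch of the PICS idea, not the W3C label format itself:
# a page carries ratings under some vocabulary, the user picks maximum
# acceptable values per category, and the client blocks anything above them.

def allowed(page_ratings, user_limits):
    """Return True if every rated category is at or below the user's limit.
    Categories the user has not configured are treated as unrestricted."""
    return all(
        value <= user_limits.get(category, float("inf"))
        for category, value in page_ratings.items()
    )


# Example: a user who tolerates mild language but no nudity.
limits = {"nudity": 0, "violence": 1, "language": 2}
print(allowed({"nudity": 0, "language": 1}, limits))  # True
print(allowed({"nudity": 3, "language": 1}, limits))  # False
```

The criticism is visible even in this toy version: whoever supplies the ratings, and wherever in the chain the limits are applied, controls what gets through.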

However, like all age check systems, PICS was also criticized as being imperfect. Legal scholar Lawrence Lessig called it “the devil” because “it allows censorship at any point on the chain of distribution” of online content.

Although the age verification technology has changed, today’s lawmakers are stuck in the same debate decades later, with no perfect solutions in sight.

SCOTUS to rule on constitutionality of age gate laws

This summer, the Supreme Court will decide whether a Texas law blocking minors’ access to porn is constitutional. The decision could either stunt the momentum or strengthen the backbone of nearly 20 laws in red states across the country seeking to age-gate the Internet.

For privacy advocates opposing the laws, the SCOTUS ruling feels like a sink-or-swim moment for age gates, depending on which way the court swings. And it will come just as blue states like Colorado have recently begun pushing for age gates, too. Meanwhile, other laws increasingly seek to safeguard kids’ privacy and prevent social media addiction by also requiring age checks.

Since the 1990s, the US has debated how to best keep kids away from harmful content without trampling adults’ First Amendment rights. And while cruder credit card-based systems like Adult Check are no longer seen as viable, it’s clear that for lawmakers today, technology is still viewed as both the problem and the solution.

While lawmakers argue that the latest technology makes porn easier than ever to access, advancements like digital IDs, device-based age checks, and app store age checks seem to signal salvation by making it easier to verify user ages digitally. And some artificial intelligence solutions have likely made lawmakers’ dream of age-gating the Internet appear even more within reach.

Critics have condemned age gates as unconstitutionally limiting adults’ access to legal speech, at the furthest extreme accusing conservatives of seeking to censor all adult content online or expand government surveillance by tracking people’s sexual identity. (Goldman noted that “Russell Vought, an architect of Project 2025 and President Trump’s Director of the Office of Management and Budget, admitted that he favored age authentication mandates as a ‘back door’ way to censor pornography.”)

Ultimately, SCOTUS could end up deciding whether any kind of age gate is ever appropriate. The court could rule that strict scrutiny, which requires a narrowly tailored solution that serves a compelling government interest, must be applied, potentially ruling out all of lawmakers’ suggested strategies. Or the court could decide that strict scrutiny applies but that age checks are narrowly tailored. Or it could go the other way and rule that strict scrutiny does not apply, in which case state lawmakers would only need to show that requiring age verification is rationally related to their interest in blocking minors from adult content.

Age verification remains flawed, experts say

If there’s anything the ’90s can teach lawmakers about age gates, it’s that creating an age verification industry dependent on adult sites will only incentivize the creation of more adult sites that benefit from the new rules. Back then, when age verification systems increased sites’ revenues, compliant sites were rewarded, but in today’s climate, it’s the noncompliant sites that stand to profit by not authenticating ages.

Sanderson’s study noted that Louisiana “was the only state that implemented age verification in a manner that plausibly preserved a user’s anonymity while verifying age,” which is why Pornhub didn’t block the state over its age verification law. But other states, which Pornhub did block, passed copycat laws that “tended to be stricter, either requiring uploads of an individual’s government identification,” demanding other sensitive data, “or even presenting biometric data such as face scanning,” the study noted.

The technology continues evolving as the debate rages on. Some of the most popular platforms and biggest tech companies have been testing new age estimation methods this year. Notably, Discord is testing out face scans in the United Kingdom and Australia, and both Meta and Google are testing technology to supposedly detect kids lying about their ages online.

But a solution has not yet been found as parents and their lawyers circle social media companies they believe are harming their kids. In fact, the unreliability of the tech remains an issue for Meta, which is perhaps the most motivated to find a fix, having long faced immense pressure to improve child safety on its platforms. Earlier this year, Meta had to yank its age detection tool after the “measure didn’t work as well as we’d hoped and inadvertently locked out some parents and guardians who shared devices with their teens,” the company said.

On April 21, Meta announced that it had started testing the tech in the US, suggesting the flaws were fixed, but the company did not directly respond to Ars’ request for more detail on the updates.

Two years ago, Ash Johnson, a senior policy manager at the nonpartisan nonprofit think tank the Information Technology and Innovation Foundation (ITIF), urged Congress to “support more research and testing of age verification technology,” saying that the government’s last empirical evaluation was in 2014. She noted then that “the technology is not perfect, and some children will break the rules, eventually slipping through the safeguards,” but that lawmakers need to understand the trade-offs of advocating for different tech solutions or else risk infringing user privacy.

More research is needed, Johnson told Ars, while Sanderson’s study suggested that regulators should also conduct circumvention research or be stuck with laws that have a “limited effectiveness as a standalone policy tool.”

While AI solutions are increasingly accurate (and in one Facebook survey overwhelmingly more popular with users, Goldman’s analysis noted), the tech still struggles to differentiate between, for example, a 17-year-old and an 18-year-old.

Like Aylo, ITIF recommends device-based age authentication as the least restrictive method, Johnson told Ars. Perhaps the biggest issue with that option, though, is that kids may have an easy time accessing adult content on devices shared with parents, Goldman noted.

Not sharing Johnson’s optimism, Goldman wrote that “there is no ‘preferred’ or ‘ideal’ way to do online age authentication.” Even a perfect system that accurately authenticates age every time would be flawed, he suggested.

“Rather, they each fall on a spectrum of ‘dangerous in one way’ to ‘dangerous in a different way,'” he wrote, concluding that “every solution has serious privacy, accuracy, or security problems.”

Kids at “grave risk” from uninformed laws

As a “burgeoning” age verification industry swells, Goldman wants to see more earnest efforts from lawmakers to “develop a wider and more thoughtful toolkit of online child safety measures.” They could start, he suggested, by consistently defining minors in laws so it’s clear who is being regulated and what access is being restricted. They could then provide education to parents and minors to help them navigate online harms.

Without such careful consideration, Goldman predicts a dystopian future prompted by age verification laws. If SCOTUS endorses them, users could become so accustomed to age gates that they start entering sensitive information into various web platforms without a second thought. Even the government knows that would be a disaster, Goldman said.

“Governments around the world want people to think twice before sharing sensitive biometric information due to the information’s immutability if stolen,” Goldman wrote. “Mandatory age authentication teaches them the opposite lesson.”

Goldman recommends that lawmakers start seeking an information-based solution to age verification problems rather than depending on tech to save the day.

“Treating the online age authentication challenges as purely technological encourages the unsupportable belief that its problems can be solved if technologists ‘nerd harder,'” Goldman wrote. “This reductionist thinking is a categorical error. Age authentication is fundamentally an information problem, not a technology problem. Technology can help improve information accuracy and quality, but it cannot unilaterally solve information challenges.”

Lawmakers could potentially minimize risks to kids by only verifying age when someone tries to access restricted content or “by compelling age authenticators to minimize their data collection” and “promptly delete any highly sensitive information” collected. That likely wouldn’t stop some vendors from collecting or retaining data anyway, Goldman suggested. But it could be a better standard to protect users of all ages from inevitable data breaches, since we know that “numerous authenticators have suffered major data security failures that put authenticated individuals at grave risk.”

“If the policy goal is to protect minors online because of their potential vulnerability, then forcing minors to constantly decide whether or not to share highly sensitive information with strangers online is a policy failure,” Goldman wrote. “Child safety online needs a whole-of-society response, not a delegate-and-pray approach.”


Redditor accidentally reinvents discarded ’90s tool to escape today’s age gates Read More »

openai-rolls-back-update-that-made-chatgpt-a-sycophantic-mess

OpenAI rolls back update that made ChatGPT a sycophantic mess

In search of good vibes

OpenAI, along with competitors like Google and Anthropic, is trying to build chatbots that people want to chat with. So, designing the model’s apparent personality to be positive and supportive makes sense—people are less likely to use an AI that comes off as harsh or dismissive. For lack of a better word, it’s increasingly about vibemarking.

When Google revealed Gemini 2.5, the team crowed about how the model topped the LM Arena leaderboard, which lets people choose between two different model outputs in a blinded test. The models people like more end up at the top of the list, suggesting they are more pleasant to use. Of course, people can like outputs for different reasons—maybe one is more technically accurate, or the layout is easier to read. But overall, people like models that make them feel good. The same is true of OpenAI’s internal model tuning work, it would seem.
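Leaderboards like LM Arena’s are typically built by converting those head-to-head votes into Elo-style ratings. The exact methodology has evolved over time, but the basic mechanic can be sketched with a simple Elo update; the model names, starting scores, and votes below are invented for illustration.

```python
# Minimal sketch of turning blinded pairwise votes into a leaderboard with
# Elo-style updates. LM Arena's actual scoring is more involved; everything
# here is a made-up illustration of the general mechanic.

K = 32  # assumed update step size


def expected(r_a, r_b):
    """Probability that A beats B under the Elo model."""
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))


def update(ratings, winner, loser, k=K):
    """Shift ratings toward the observed outcome of one blinded vote."""
    exp_win = expected(ratings[winner], ratings[loser])
    ratings[winner] += k * (1 - exp_win)
    ratings[loser] -= k * (1 - exp_win)


ratings = {"model-a": 1000.0, "model-b": 1000.0, "model-c": 1000.0}
votes = [("model-a", "model-b"), ("model-a", "model-c"), ("model-b", "model-c")]
for winner, loser in votes:
    update(ratings, winner, loser)

for name, score in sorted(ratings.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.1f}")
```

Nothing in an update like this asks whether the preferred answer was accurate, only that it won the vote, which is how “pleasant to use” can quietly become the thing being optimized.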

An example of ChatGPT’s overzealous praise. Credit: /u/Talvy

It’s possible this pursuit of good vibes is pushing models to display more sycophantic behaviors, which is a problem. Anthropic’s Alex Albert has cited this as a “toxic feedback loop.” An AI chatbot telling you that you’re a world-class genius who sees the unseen might not be damaging if you’re just brainstorming. However, the model’s unending praise can lead people who are using AI to plan business ventures or, heaven forbid, enact sweeping tariffs, to be fooled into thinking they’ve stumbled onto something important. In reality, the model has just become so sycophantic that it loves everything.

The constant pursuit of engagement has been a detriment to numerous products in the Internet era, and it seems generative AI is not immune. OpenAI’s GPT-4o update is a testament to that, but hopefully, this can serve as a reminder for the developers of generative AI that good vibes are not all that matters.

OpenAI rolls back update that made ChatGPT a sycophantic mess Read More »

mike-lindell’s-lawyers-used-ai-to-write-brief—judge-finds-nearly-30-mistakes

Mike Lindell’s lawyers used AI to write brief—judge finds nearly 30 mistakes

A lawyer representing MyPillow and its CEO Mike Lindell in a defamation case admitted using artificial intelligence in a brief that has nearly 30 defective citations, including misquotes and citations to fictional cases, a federal judge said.

“[T]he Court identified nearly thirty defective citations in the Opposition. These defects include but are not limited to misquotes of cited cases; misrepresentations of principles of law associated with cited cases, including discussions of legal principles that simply do not appear within such decisions; misstatements regarding whether case law originated from a binding authority such as the United States Court of Appeals for the Tenth Circuit; misattributions of case law to this District; and most egregiously, citation of cases that do not exist,” US District Judge Nina Wang wrote in an order to show cause Wednesday.

Wang ordered attorneys Christopher Kachouroff and Jennifer DeMaster to show cause as to why the court should not sanction the defendants, law firm, and individual attorneys. Kachouroff and DeMaster also have to explain why they should not be referred to disciplinary proceedings for violations of the rules of professional conduct.

Kachouroff and DeMaster, who are defending Lindell against a lawsuit filed by former Dominion Voting Systems employee Eric Coomer, both signed the February 25 brief with the defective citations. Kachouroff, representing defendants as lead counsel, admitted using AI to write the brief at an April 21 hearing, the judge wrote. The case is in the US District Court for the District of Colorado.

“Time and time again, when Mr. Kachouroff was asked for an explanation of why citations to legal authorities were inaccurate, he declined to offer any explanation, or suggested that it was a ‘draft pleading,'” Wang wrote. “Not until this Court asked Mr. Kachouroff directly whether the Opposition was the product of generative artificial intelligence did Mr. Kachouroff admit that he did, in fact, use generative artificial intelligence.”

Mike Lindell’s lawyers used AI to write brief—judge finds nearly 30 mistakes Read More »