Policy

Trump and DOJ try to spring former county clerk Tina Peters from prison

President Donald Trump is demanding the release of Tina Peters, a former election official who parroted Trump’s 2020 election conspiracy theories and is serving nine years in prison for compromising the security of election equipment.

In a post on Truth Social last night, Trump wrote that “Radical Left Colorado Attorney General Phil Weiser ignores Illegals committing Violent Crimes like Rape and Murder in his State and, instead, jailed Tina Peters, a 69-year-old Gold Star mother who worked to expose and document Democrat Election Fraud. Tina is an innocent Political Prisoner being horribly and unjustly punished in the form of Cruel and Unusual Punishment.”

Trump said he is “directing the Department of Justice to take all necessary action to help secure the release of this ‘hostage’ being held in a Colorado prison by the Democrats, for political reasons.”

The former Mesa County clerk was indicted in March 2022 on charges related to the leak of voting-system BIOS passwords and other confidential information. Peters was convicted in August 2024 and later sentenced in a Colorado state court.

“Your lies are well-documented and these convictions are serious,” 21st Judicial District Judge Matthew Barrett told Peters at her October 2024 sentencing. “I am convinced you would do it all over again. You are as defiant a defendant as this court has ever seen.”

DOJ reviews case for “abuse” of process

After Peters’ August 2024 conviction, Colorado Secretary of State Jena Griswold said that “Tina Peters willfully compromised her own election equipment trying to prove Trump’s big lie.”

Peters appealed her conviction in a Colorado appeals court and separately sought relief in US District Court for the District of Colorado. She asked the federal court to order her release on bond while the state court system handles her appeal and said her health has deteriorated during her incarceration.

Trump’s Justice Department submitted a filing on Peters’ behalf in March, saying the US has concerns about “the exceptionally lengthy sentence imposed relative to the conduct at issue, the First Amendment implications of the trial court’s October 2024 assertions relating to Ms. Peters, and whether Colorado’s denial of bail pending appeal was arbitrary or unreasonable under the Eighth and Fourteenth Amendments.”

After two court losses, DOGE asks Supreme Court for Social Security data access

The Trump administration filed an emergency application on Friday asking the Supreme Court to restore DOGE’s access to Social Security Administration records. A lower-court order that prohibited DOGE’s access is causing “irreparable harm to the executive branch” and thwarting DOGE’s attempts to “eliminate waste and fraud,” US Solicitor General John Sauer wrote in the appeal.

“The government cannot eliminate waste and fraud if district courts bar the very agency personnel with expertise and the designated mission of curtailing such waste and fraud from performing their jobs,” Sauer told the Supreme Court. The preliminary injunction that is currently in place halted “the Executive Branch’s critically important efforts to improve its information-technology infrastructure and eliminate waste,” the brief said.

The appeal was lodged in a case filed by the American Federation of State, County and Municipal Employees; the Alliance for Retired Americans; and the American Federation of Teachers. Chief Justice John Roberts asked the plaintiffs to file a response to the US application by May 12.

In March, the plaintiffs obtained an order that required the Social Security Administration (SSA) to block DOGE’s access to records. US District Judge Ellen Lipton Hollander’s order said the DOGE entity created by President Donald Trump “is essentially engaged in a fishing expedition at SSA, in search of a fraud epidemic, based on little more than suspicion.”

Trump admin lost at appeals court

Hollander ordered the SSA to cut off DOGE’s access and ruled that Elon Musk and other DOGE members must “disgorge and delete all non-anonymized PII [personally identifiable information] data in their possession or under their control.” The District of Maryland judge found that Social Security officials “provided members of the SSA DOGE Team with unbridled access to the personal and private data of millions of Americans, including but not limited to Social Security numbers, medical records, mental health records, hospitalization records, drivers’ license numbers, bank and credit card information, tax information, income history, work history, birth and marriage certificates, and home and work addresses.”

Largest deepfake porn site shuts down forever

The shuttering of Mr. Deepfakes won’t solve the problem of deepfakes, though. In 2022, the number of deepfakes skyrocketed as AI technology made synthetic nonconsensual intimate imagery (NCII) appear more realistic than ever, prompting an FBI warning in 2023 alerting the public that the fake content was increasingly being used in sextortion schemes. But the immediate solutions society used to stop the spread had little impact. For example, in response to pressure to make fake NCII harder to find, Google started downranking explicit deepfakes in search results but refused to demote platforms like Mr. Deepfakes unless Google received an unspecified “high volume of removals for fake explicit imagery.”

According to researchers, Mr. Deepfakes—a real person who remains anonymous but reportedly is a 36-year-old hospital worker in Toronto—created the engine driving this spike. His DeepFaceLab quickly became “the leading deepfake software, estimated to be the software behind 95 percent of all deepfake videos and has been replicated over 8,000 times on GitHub,” researchers found. For casual users, his platform hosted videos that could be purchased, usually priced above $50 if they were deemed realistic, while more motivated users relied on forums to make requests or enhance their own deepfake skills to become creators.

Mr. Deepfakes’ illegal trade began on Reddit but migrated to its own platform after a ban in 2018. There, thousands of deepfake creators shared technical knowledge, with the Mr. Deepfakes site forums eventually becoming “the only viable source of technical support for creating sexual deepfakes,” researchers noted last year.

Having migrated once before, the community seems likely to find a new platform on which to continue generating the illicit content, possibly reemerging under a new name, since Mr. Deepfakes seemingly wants out of the spotlight. Back in 2023, researchers estimated that the platform had more than 250,000 members, many of whom may quickly seek a replacement or even try to build one.

Further increasing the likelihood that Mr. Deepfakes’ reign of terror isn’t over, the DeepFaceLab GitHub repository—which was archived in November and can no longer be edited—remains available for anyone to copy and use.

404 Media reported that many Mr. Deepfakes members have already connected on Telegram, where synthetic NCII is also reportedly frequently traded. Hany Farid, a professor at UC Berkeley who is a leading expert on digitally manipulated images, told 404 Media that “while this takedown is a good start, there are many more just like this one, so let’s not stop here.”

A DOGE recruiter is staffing a project to deploy AI agents across the US government


A startup founder said that AI agents could do the work of tens of thousands of government employees.

An aide sets up a poster depicting the logo for the DOGE Caucus before a news conference in Washington, DC. Credit: Andrew Harnik/Getty Images

A young entrepreneur who was among the earliest known recruiters for Elon Musk’s so-called Department of Government Efficiency (DOGE) has a new, related gig—and he’s hiring. Anthony Jancso, cofounder of AccelerateX, a government tech startup, is looking for technologists to work on a project that aims to have artificial intelligence perform tasks that are currently the responsibility of tens of thousands of federal workers.

Jancso, a former Palantir employee, wrote in a Slack group with about 2,000 Palantir alumni that he’s hiring for a “DOGE orthogonal project to design benchmarks and deploy AI agents across live workflows in federal agencies,” according to an April 21 post reviewed by WIRED. Agents are programs that can perform work autonomously.

“We’ve identified over 300 roles with almost full-process standardization, freeing up at least 70k FTEs for higher-impact work over the next year,” he continued, essentially claiming that tens of thousands of federal employees could see many aspects of their jobs automated and replaced by these AI agents. Workers for the project, he wrote, would be based on site in Washington, DC, and would not require a security clearance; it isn’t clear for whom they would work. Palantir did not respond to requests for comment.

The post was not well received. Eight people reacted with clown face emojis, three reacted with a custom emoji of a man licking a boot, two reacted with custom emoji of Joaquin Phoenix giving a thumbs down in the movie Gladiator, and three reacted with a custom emoji with the word “Fascist.” Three responded with a heart emoji.

“DOGE does not seem interested in finding ‘higher impact work’ for federal employees,” one person said in a comment that received 11 heart reactions. “You’re complicit in firing 70k federal employees and replacing them with shitty autocorrect.”

“Tbf we’re all going to be replaced with shitty autocorrect (written by chatgpt),” another person commented, which received one “+1” reaction.

“How ‘DOGE orthogonal’ is it? Like, does it still require Kremlin oversight?” another person said in a comment that received five reactions with a fire emoji. “Or do they just use your credentials to log in later?”

AccelerateX was originally called AccelerateSF, which VentureBeat reported in 2023 had received support from OpenAI and Anthropic. In its earliest incarnation, AccelerateSF hosted a hackathon for AI developers aimed at using the technology to solve San Francisco’s social problems. According to a 2023 Mission Local story, for instance, Jancso proposed using large language models to help businesses fill out permit forms, streamlining the construction paperwork process in a way that might help drive down housing prices. (OpenAI did not respond to a request for comment. Anthropic spokesperson Danielle Ghiglieri tells WIRED that the company “never invested in AccelerateX/SF,” but did sponsor a hackathon AccelerateSF hosted in 2023 by providing free access to its API at a time when its Claude API “was still in beta.”)

In 2024, the mission pivoted, with the venture becoming known as AccelerateX. In a post on X announcing the change, the company posted, “Outdated tech is dragging down the US Government. Legacy vendors sell broken systems at increasingly steep prices. This hurts every American citizen.” AccelerateX did not respond to a request for comment.

According to sources with direct knowledge, Jancso disclosed that AccelerateX had signed a partnership agreement with Palantir in 2024. According to the LinkedIn profile of Rachel Yee, who is described as one of AccelerateX’s cofounders, the company looks to have received funding from OpenAI’s Converge 2 Accelerator. Another of AccelerateSF’s cofounders, Kay Sorin, now works for OpenAI, having joined the company several months after that hackathon. Sorin and Yee did not respond to requests for comment.

Jancso’s cofounder, Jordan Wick, a former Waymo engineer, has been an active member of DOGE, appearing at several agencies over the past few months, including the Consumer Financial Protection Bureau, National Labor Relations Board, the Department of Labor, and the Department of Education. In 2023, Jancso attended a hackathon hosted by ScaleAI; WIRED found that another DOGE member, Ethan Shaotran, also attended the same hackathon.

Since its creation in the first days of the second Trump administration, DOGE has pushed the use of AI across agencies, even as it has sought to cut tens of thousands of federal jobs. At the Department of Veterans Affairs, a DOGE associate suggested using AI to write code for the agency’s website; at the General Services Administration, DOGE has rolled out the GSAi chatbot; the group has sought to automate the process of firing government employees with a tool called AutoRIF; and a DOGE operative at the Department of Housing and Urban Development is using AI tools to examine and propose changes to regulations. But experts say that deploying AI agents to do the work of 70,000 people would be tricky if not impossible.

A federal employee with knowledge of government contracting, who spoke to WIRED on the condition of anonymity because they were not authorized to speak to the press, says, “A lot of agencies have procedures that can differ widely based on their own rules and regulations, and so deploying AI agents across agencies at scale would likely be very difficult.”

Oren Etzioni, cofounder of the AI startup Vercept, says that while AI agents can be good at doing some things—like using an internet browser to conduct research—their outputs can still vary widely and be highly unreliable. For instance, customer service AI agents have invented nonexistent policies when trying to address user concerns. Even research, he says, requires a human to actually make sure what the AI is spitting out is correct.

“We want our government to be something that we can rely on, as opposed to something that is on the absolute bleeding edge,” says Etzioni. “We don’t need it to be bureaucratic and slow, but if corporations haven’t adopted this yet, is the government really where we want to be experimenting with the cutting edge AI?”

Etzioni says that AI agents are also not one-to-one replacements for human jobs. Rather, AI can take on certain tasks or make others more efficient, but the idea that the technology could do the jobs of 70,000 employees is not realistic. “Unless you’re using funny math,” he says, “no way.”

Jancso, first identified by WIRED in February, was one of the earliest recruiters for DOGE in the months before Donald Trump was inaugurated. In December, Jancso, who sources told WIRED had been recruited by Steve Davis, president of the Musk-founded Boring Company and a current member of DOGE, used the Palantir alumni group to recruit DOGE members. On December 2, 2024, he wrote, “I’m helping Elon’s team find tech talent for the Department of Government Efficiency (DOGE) in the new admin. This is a historic opportunity to build an efficient government, and to cut the federal budget by 1/3. If you’re interested in playing a role in this mission, please reach out in the next few days.”

According to one source at SpaceX, who asked to remain anonymous as they are not authorized to speak to the press, Jancso appeared to be one of the DOGE members who worked out of the company’s DC office in the days before inauguration along with several other people who would constitute some of DOGE’s earliest members. SpaceX did not respond to a request for comment.

Palantir was cofounded by Peter Thiel, a billionaire and longtime Trump supporter with close ties to Musk. Palantir, which provides data analytics tools to several government agencies including the Department of Defense and the Department of Homeland Security, has received billions of dollars in government contracts. During the second Trump administration, the company has been involved in helping to build a “mega API” to connect data from the Internal Revenue Service to other government agencies, and is working with Immigration and Customs Enforcement to create a massive surveillance platform to identify immigrants to target for deportation.

This story originally appeared at WIRED.com.

DOJ confirms it wants to break up Google’s ad business

In the trial, Google will paint this demand as a severe overreach, claiming that few, if any, companies would have the resources to purchase and run the products. Last year, an ad consultant estimated Google’s ad empire could be worth up to $95 billion, quite possibly too big to sell. However, Google was similarly skeptical about Chrome, and representatives from other companies have said throughout the search remedy trial that they would love to buy Google’s browser.

An uphill battle

After losing three antitrust cases in just a couple of years, Google will have a hard time convincing the judge it is capable of turning over a new leaf with light remedies. A DOJ lawyer told the court Google is a “recidivist monopolist” that has a pattern of skirting its legal obligations. Still, Google is looking for mercy in the case. We expect to get more details on Google’s proposed remedies as the next trial nears, but it already offered a preview in today’s hearing.

Google suggested making a smaller subset of ad data available and ending the use of some pricing schemes, including unified pricing, that the court has found to be anticompetitive. Google also promised not to re-implement discontinued practices like “last look,” which gave the company a chance to outbid rivals at the last moment. This was featured prominently in the DOJ’s case, although Google ended the practice several years ago.

To ensure it adheres to the remedies, Google suggested that a court-appointed monitor audit the process. However, US District Judge Leonie Brinkema seemed unimpressed with this proposal.

As in its other cases, Google says it plans to appeal the verdict, but before it can do that, the remedies phase has to be completed. Even if it can get the remedies paused for appeal, the decision could be a blow to investor confidence. So, Google will do whatever it can to avoid the worst-case scenario, leaning on the existence of competing advertisers like Meta and TikTok to show that the market is still competitive.

Like the search case, Google won’t be facing any big developments over the summer, but this fall could be rough. Judge Amit Mehta will most likely rule on the search remedies in August, and the ad tech remedies case will begin the following month. Google also has the Play Store case hanging over its head. It lost the first round, but the company hopes to prevail on appeal when the case gets underway again, probably in late 2025.

Judge on Meta’s AI training: “I just don’t understand how that can be fair use”


Judge downplayed Meta’s “messed up” torrenting in lawsuit over AI training.

A judge who may be the first to rule on whether AI training data is fair use appeared skeptical Thursday at a hearing where Meta faced off with book authors over the social media company’s alleged copyright infringement.

Meta, like most AI companies, holds that training must be deemed fair use, or else the entire AI industry could face immense setbacks, wasting precious time negotiating data contracts while falling behind global rivals. Meta urged the court to rule that AI training is a transformative use that only references books to create an entirely new work that doesn’t replicate authors’ ideas or replace books in their markets.

At the hearing, which followed both sides’ requests for summary judgment, Judge Vince Chhabria pushed back on Meta attorneys’ argument that the company’s Llama AI models posed no threat to authors in their markets, Reuters reported.

“You have companies using copyright-protected material to create a product that is capable of producing an infinite number of competing products,” Chhabria said. “You are dramatically changing, you might even say obliterating, the market for that person’s work, and you’re saying that you don’t even have to pay a license to that person.”

Declaring, “I just don’t understand how that can be fair use,” the judge apparently drew little response from Meta’s attorney, Kannon Shanmugam, apart from a suggestion that any alleged threat to authors’ livelihoods was “just speculation,” Wired reported.

Authors may need to sharpen their case, which Chhabria warned could be “taken away by fair use” if none of the authors suing, including Sarah Silverman, Ta-Nehisi Coates, and Richard Kadrey, can show “that the market for their actual copyrighted work is going to be dramatically affected.”

Determined to probe this key question, Chhabria pushed authors’ attorney, David Boies, to point to specific evidence of market harms that seemed noticeably missing from the record.

“It seems like you’re asking me to speculate that the market for Sarah Silverman’s memoir will be affected by the billions of things that Llama will ultimately be capable of producing,” Chhabria said. “And it’s just not obvious to me that that’s the case.”

But if authors can prove fears of market harms are real, Meta might struggle to win over Chhabria, and that could set a precedent impacting copyright cases challenging AI training on other kinds of content.

The judge repeatedly appeared to be sympathetic to authors, suggesting that Meta’s AI training may be a “highly unusual case” where even though “the copying is for a highly transformative purpose, the copying has the high likelihood of leading to the flooding of the markets for the copyrighted works.”

And when Shanmugam argued that copyright law doesn’t shield authors from “protection from competition in the marketplace of ideas,” Chhabria resisted the framing that authors weren’t potentially being robbed, Reuters reported.

“But if I’m going to steal things from the marketplace of ideas in order to develop my own ideas, that’s copyright infringement, right?” Chhabria responded.

Wired noted that he asked Meta’s lawyers, “What about the next Taylor Swift?” If AI made it easy to knock off a young singer’s sound, how could she ever compete if AI produced “a billion pop songs” in her style?

In a statement, Meta’s spokesperson reiterated the company’s defense that AI training is fair use.

“Meta has developed transformational open source AI models that are powering incredible innovation, productivity, and creativity for individuals and companies,” Meta’s spokesperson said. “Fair use of copyrighted materials is vital to this. We disagree with Plaintiffs’ assertions, and the full record tells a different story. We will continue to vigorously defend ourselves and to protect the development of GenAI for the benefit of all.”

Meta’s torrenting seems “messed up”

Some have pondered why Chhabria appeared so focused on market harms, instead of hammering Meta for admittedly illegally pirating books that it used for its AI training, which seems to be obvious copyright infringement. According to Wired, “Chhabria spoke emphatically about his belief that the big question is whether Meta’s AI tools will hurt book sales and otherwise cause the authors to lose money,” not whether Meta’s torrenting of books was illegal.

The torrenting “seems kind of messed up,” Chhabria said, but “the question, as the courts tell us over and over again, is not whether something is messed up but whether it’s copyright infringement.”

It’s possible that Chhabria dodged the question for procedural reasons. In a court filing, Meta argued that authors had moved for summary judgment on Meta’s alleged copying of their works, not on “unsubstantiated allegations that Meta distributed Plaintiffs’ works via torrent.”

In the court filing, Meta alleged that even if Chhabria agreed that the authors’ request for “summary judgment is warranted on the basis of Meta’s distribution, as well as Meta’s copying,” the authors “lack evidence to show that Meta distributed any of their works.”

According to Meta, authors abandoned any claims that Meta’s seeding of the torrented files served to distribute works, leaving only claims about Meta’s leeching. (In BitTorrent terms, “seeding” means uploading pieces of a file to other users, while “leeching” means downloading.) Meta argued that the authors “admittedly lack evidence that Meta ever uploaded any of their works, or any identifiable part of those works, during the so-called ‘leeching’ phase,” relying instead on expert estimates based on how torrenting works.

It’s also possible that for Chhabria, the torrenting question seemed like an unnecessary distraction. Former Meta attorney Mark Lemley, who quit the case earlier this year, told Vanity Fair that the torrenting was “one of those things that sounds bad but actually shouldn’t matter at all in the law. Fair use is always about uses the plaintiff doesn’t approve of; that’s why there is a lawsuit.”

Lemley suggested that court cases mulling fair use at this current moment should focus on the outputs, rather than the training. Citing the ruling that deemed Google Books’ scanning of books to share excerpts fair use, Lemley argued that “all search engines crawl the full Internet, including plenty of pirated content,” so there’s seemingly no reason to stop AI crawling.

But the Copyright Alliance, a nonprofit, non-partisan group supporting the authors in the case, in a court filing alleged that Meta, in its bid to get AI products viewed as transformative, is aiming to do the opposite. “When describing the purpose of generative AI,” Meta allegedly strives to convince the court to “isolate the ‘training’ process and ignore the output of generative AI,” because that’s seemingly the only way that Meta can convince the court that AI outputs serve “a manifestly different purpose from Plaintiffs’ books,” the Copyright Alliance argued.

“Meta’s motion ignores what comes after the initial ‘training’—most notably the generation of output that serves the same purpose of the ingested works,” the Copyright Alliance argued. And the torrenting question should matter, the group argued, because unlike in Google Books, Meta’s AI models are apparently training on pirated works, not “legitimate copies of books.”

Chhabria will not be making a snap decision in the case. He plans to take his time, and the longer he delays, the more pressure he will likely put not just on Meta but on every AI company defending training as fair use. Understanding that the entire AI industry potentially has a stake in the ruling, Chhabria apparently sought to relieve some tension at the end of the hearing with a joke, Wired reported.

“I will issue a ruling later today,” Chhabria said. “Just kidding! I will take a lot longer to think about it.”

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

“Blatantly unlawful”: Trump slammed for trying to defund PBS, NPR

CPB President Patricia Harrison suggested in a statement provided to Ars that these moves to block networks’ funding exceed Trump’s authority.

“CPB is not a federal executive agency subject to the president’s authority,” Harrison said. “Congress directly authorized and funded CPB to be a private nonprofit corporation wholly independent of the federal government,” statutorily forbidding “any department, agency, officer, or employee of the United States to exercise any direction, supervision, or control over educational television or radio broadcasting, or over [CPB] or any of its grantees or contractors.”

PBS President and CEO Paula Kerger went further, calling the order “blatantly unlawful” in a statement provided to Ars.

“Issued in the middle of the night,” Trump’s order “threatens our ability to serve the American public with educational programming, as we have for the past 50-plus years,” Kerger said. “We are currently exploring all options to allow PBS to continue to serve our member stations and all Americans.”

Rural communities need public media, orgs say

While Trump opposes NPR and PBS for promoting content that he disagrees with—criticizing segments on white privilege, gender identity, reparations, “fat phobia,” and abortion—the networks have defended their programming as unbiased and falling in line with Federal Communications Commission guidelines. Further, NPR reported that the networks’ “locally grounded content” currently reaches “more than 99 percent of the population at no cost,” providing not just educational fare and entertainment but also critical updates tied to local emergency and disaster response systems.

Cutting off funding, Kerger said last month, would have a “devastating impact” on rural communities, especially in parts of the country where NPR and PBS still serve as “the only source of news and emergency broadcasts,” NPR reported.

For example, Ed Ulman, CEO of Alaska Public Media, testified to Congress last month that his stations “provide potentially life-saving warnings and alerts that are crucial for Alaskans who face threats ranging from extreme weather to earthquakes, landslides, and even volcanoes.” Some of the smallest rural stations sometimes rely on CPB for about 50 percent of their funding, NPR reported.

Tesla denies trying to replace Elon Musk as CEO

Tensions had been mounting at the company. Sales and profits were deteriorating rapidly. Musk was spending much of his time in Washington.

Around that time, Tesla’s board met with Musk for an update. Board members told him he needed to spend more time on Tesla, according to people familiar with the meeting. And he needed to say so publicly.

Musk didn’t push back.

Musk subsequently said in an April 22 call with investors that “starting next month, I’ll be allocating far more of my time to Tesla now that the major work of establishing the Department of Government Efficiency is done.”

The Journal report said that after Musk’s public statement, the Tesla “board narrowed its focus to a major search firm, according to the people familiar with the discussions. The current status of the succession planning couldn’t be determined. It is also unclear if Musk, himself a Tesla board member, was aware of the effort, or if his pledge to spend more time at Tesla has affected succession planning.”

Tesla’s eight-member board has been criticized for having members with close ties to Musk. Last year, a Delaware judge who invalidated a $55.8 billion pay package awarded to Musk said that most of the board members “were beholden to Musk or had compromising conflicts.”

That includes Musk’s brother, Kimbal, and longtime Musk friend James Murdoch, said the ruling from Delaware Court of Chancery Judge Kathaleen McCormick. The judge also wrote that board chair Robyn Denholm “derived the vast majority of her wealth from her compensation as a Tesla director” and took a “lackadaisical approach to her oversight obligations.” Denholm later defended Musk’s pay, telling shareholders that the large sum was needed to keep the CEO motivated.

Fortnite will return to iOS as court slams Apple’s “interference” and “cover-up”

In a statement provided to Ars Technica, an Apple spokesperson said, “We strongly disagree with the decision. We will comply with the court’s order and we will appeal.”

An Epic return

With the new court order in place, Epic says it will once again submit a version of Fortnite to the iOS App Store in the US in the next week or so. That new version will offer players the option to use standard Apple App Store payments or its own, cheaper “Epic Direct Payment” system to purchase in-game currency and items.

That would mirror the system that was briefly in place for iOS players in August 2020, when Epic added alternate payment options to iOS Fortnite in intentional violation of what were then Apple’s store policies. Apple removed Fortnite from the iOS App Store hours later, setting off a legal battle that seems to finally be reaching its conclusion.

For those few hours when Epic Direct Payments were available on iOS Fortnite in 2020, Epic CEO Tim Sweeney said that about 50 percent of customers “decided to give Epic a shot,” going through an additional step to register and pay through an Epic account on a webpage outside the app itself (and saving 20 percent on their purchase in the process). The other roughly 50 percent of customers decided to pay a higher price in exchange for the convenience of paying directly in the app through the iOS account they already had set up, Sweeney said. “Consumers were making the choice… and it was a wonderful thing to see,” he said.

Speaking to the press Wednesday night, Sweeney said the new court order was a “huge victory for developers” looking to offer their own payment service alongside Apple’s on iOS devices. “This is what we’ve wanted all along,” he said. “We think that this achieves the goal that we’ve been aiming for in the US, while there are still some challenges elsewhere in the world.”

While Sweeney said the specific iOS developer account Epic used to publish Fortnite in 2020 is still banned, he added that the company has several other developer accounts that could be used for the new submission, including one it has used to support Unreal Engine on Apple devices. And while Sweeney allowed that Apple could still “arbitrarily reject Epic from the App Store despite Epic following all the rules,” he added that, in light of this latest court ruling, Apple would now “have to deal with various consequences of that if they did.”

First Amendment doesn’t just protect human speech, chatbot maker argues


Feds could censor chatbots if their “speech” isn’t protected, Character.AI says.

Pushing to dismiss a lawsuit alleging that its chatbots caused a teen’s suicide, Character Technologies is arguing that chatbot outputs should be considered “pure speech” deserving of the highest degree of protection under the First Amendment.

In their motion to dismiss, the developers of Character.AI (C.AI) argued that it doesn’t matter who the speaker is—whether it’s a video game character spouting scripted dialogue, a foreign propagandist circulating misinformation, or a chatbot churning out AI-generated responses to prompts—courts protect listeners’ rights to access that speech. Accusing the mother of the deceased teen, Megan Garcia, of attempting to “insert this Court into the conversations of millions of C.AI users” and supposedly endeavoring to “shut down” C.AI, the chatbot maker argued that the First Amendment bars all of her claims.

“The Court need not wrestle with the novel questions of who should be deemed the speaker of the allegedly harmful content here and whether that speaker has First Amendment rights,” Character Technologies argued, “because the First Amendment protects the public’s ‘right to receive information and ideas.'”

Warning that “imposing tort liability for one user’s alleged response to expressive content would be to ‘declare what the rest of the country can and cannot read, watch, and hear,’” the company urged the court to consider the supposed “chilling effect” that would have “both on C.AI and the entire nascent generative AI industry.”

“‘Pure speech,’ such as the chat conversations at issue here, ‘is entitled to comprehensive protection under the First Amendment,'” Character Technologies argued in another court filing.

However, Garcia’s lawyers pointed out that even a video game character’s dialogue is written by a human, arguing that all of Character Technologies’ examples of protected “pure speech” are human speech. Although the First Amendment also protects non-human corporations’ speech, corporations are formed by humans, they noted. And unlike corporations, chatbots have no intention behind their outputs, her legal team argued, instead simply using a probabilistic approach to generate text. So they argue that the First Amendment does not apply.

Character Technologies argued in response that demonstrating C.AI’s expressive intent is not required, but if it were, “conversations with Characters feature such intent” because chatbots are designed to “be expressive and engaging,” and users help design and prompt those characters.

“Users layer their own expressive intent into each conversation by choosing which Characters to talk to and what messages to send and can also edit Characters’ messages and direct Characters to generate different responses,” the chatbot maker argued.

In her response opposing the motion to dismiss, Garcia urged the court to decline what her legal team characterized as Character Technologies’ invitation to “radically expand First Amendment protections from expressions of human volition to an unpredictable, non-determinative system where humans can’t even examine many of the mathematical functions creating outputs, let alone control them.”

To support Garcia’s case, they cited a 40-year-old Eleventh Circuit ruling that a talking cat called “Blackie” could not be “considered a person” and was deemed a “non-human entity” despite possessing an “exceptional speech-like ability.”

Garcia’s lawyers hope the judge will rule that “AI output is not speech at all,” or if it is speech, it “falls within an exception to the First Amendment”—perhaps deemed offensive to minors who the chatbot maker knew were using the service or possibly resulting in a novel finding that manipulative speech isn’t protected. If either argument is accepted, the chatbot makers’ attempt to invoke “listeners’ rights cannot save it,” they suggested.

However, Character Technologies disputes that any recognized exception to the First Amendment’s protections is applicable in the case, noting that Garcia’s team is not arguing that her son’s chats with bots were “obscene” or incited violence. Rather, the chatbot maker argued, Garcia is asking the court to “be the first to hold that ‘manipulative expression’ is unprotected by the First Amendment because a ‘disparity in power and information between speakers and listeners… frustrat[es] listeners’ rights.'”

Now, a US court is being asked to clarify if chatbot outputs are protected speech. At a hearing Monday, a US district judge in Florida, Anne Conway, did not rule from the bench, Garcia’s legal team told Ars. Asking few questions of either side, the judge is expected to issue an opinion on the motion to dismiss within the next few weeks, or possibly months.

For Garcia and her family, who appeared at the hearing, the idea that AI “has more rights than humans” felt dehumanizing, Garcia’s legal team said.

“Pandering” to Trump administration to dodge guardrails

According to Character Technologies, the court potentially agreeing with Garcia “that AI-generated speech is categorically unprotected” would have “far-reaching consequences.”

At perhaps the furthest extreme, they’ve warned Conway that without a First Amendment barrier, “the government could pass a law prohibiting AI from ‘offering prohibited accounts of history’ or ‘making negative statements about the nation’s leaders,’ as China has considered doing.” And the First Amendment specifically prohibits the government from controlling the flow of ideas in society, they noted, angling to make chatbot output protections seem crucial in today’s political climate.

Meetali Jain, Garcia’s attorney and founder of the Tech Justice Law Project, told Ars that this kind of legal challenge is new in the generative AI space, where copyright battles have dominated courtroom debates.

“This is the first time that I’ve seen not just the issue of the First Amendment being applied to gen AI but also the First Amendment being applied in this way,” Jain said.

In their court filing, Jain’s team noted that Character Technologies is not arguing that the First Amendment shielded the rights of Garcia’s son, Sewell Setzer, to receive allegedly harmful speech. Instead, their argument is “effectively juxtaposing the listeners’ rights of their millions of users against this one user who was aggrieved. So it’s kind of like the hypothetical users versus the real user who’s in court.”

Jain told Ars that Garcia’s team tried to convince the judge that the chatbot maker’s argument is reckless: by insisting that it doesn’t matter who the speaker is, even when the speaker isn’t human, Character Technologies seems to be “implying” that “AI is a sentient being and has its own rights.”

Additionally, Jain suggested that Character Technologies’ argument that outputs must be shielded to avoid government censorship seems to be “pandering” to the Trump administration’s fears that China may try to influence American politics through social media algorithms like TikTok’s or powerful open source AI models like DeepSeek.

“That suggests that there can be no sort of imposition of guardrails on AI, lest we either lose on the national security front or because of these vague hypothetical under-theorized First Amendment concerns,” Jain told Ars.

At a press briefing Tuesday, Jain confirmed that the judge clearly understood that “our position was that the First Amendment protects speech, not words.”

“LLMs do not think and feel as humans do,” Jain said, citing University of Colorado law school researchers who supported their complaint. “Rather, they generate text through statistical methods based on patterns found in their training data. And so our position was that there is a distinction to make between words and speech, and that it’s really only the latter that is deserving of First Amendment protection.”

Jain alleged that Character Technologies is angling to create a legal environment where all chatbot outputs are protected against liability claims so that C.AI can operate “without any sort of constraints or guardrails.”

It’s notable, she suggested, that the chatbot maker updated its safety features following the death of Garcia’s son, Sewell Setzer. A C.AI blog post mourned the “tragic loss of one of our users” and noted updates, including changes “to reduce the likelihood of encountering sensitive or suggestive content,” improved detection and intervention in harmful chat sessions, and “a revised disclaimer on every chat to remind users that the AI is not a real person.”

Although Character Technologies argues that it’s common to update safety practices over time, Garcia’s team alleged these updates show that C.AI could have made a safer product and chose not to.

Expert warns against giving AI products rights

Character Technologies has also argued that C.AI is not a “product” as Florida law defines it. That has striking industry implications, according to Camille Carlton, a policy director for the Center for Humane Technology who is serving as a technical expert on the case.

At the press briefing, Carlton suggested that “by invoking these First Amendment protections over speech without really specifying whose speech is being protected, Character.AI’s defense has really laid the groundwork for a world in which LLM outputs are protected speech and for a world in which AI products could have other protected rights in the same way that humans do.”

Since chatbot outputs seemingly don’t have Section 230 protections—Jain noted it was somewhat surprising that Character Technologies did not raise this defense—the chatbot maker may be attempting to secure the First Amendment as a shield instead, Carlton suggested.

“It’s a move that they’re incentivized to take because it would reduce their own accountability and their own responsibility,” Carlton said.

Jain expects that whatever Conway decides, the losing side will appeal. However, if Conway denies the motion, then discovery can begin, perhaps allowing Garcia the clearest view yet into the allegedly harmful chats she believes manipulated her son into feeling completely disconnected from the real world.

If courts grant AI products across the board such rights, Carlton warned, troubled parents like Garcia may have no recourse for potentially dangerous outputs.

“This issue could fundamentally reshape how the law approaches AI free speech and corporate accountability,” Carlton said. “And I think the bottom line from our perspective—and from what we’re seeing in terms of the trends in Character.AI and the broader trends from these AI labs—is that we need to double down on the fact that these are products. They’re not people.”

Character Technologies declined Ars’ request to comment.

If you or someone you know is feeling suicidal or in distress, please call the Suicide Prevention Lifeline number, 1-800-273-TALK (8255), which will put you in touch with a local crisis center.

Redditor accidentally reinvents discarded ’90s tool to escape today’s age gates


The ’90s called. They want their flawed age verification methods back.

Credit: Aurich Lawson | Getty Images

Back in the mid-1990s, when The Net was among the top box office draws and Americans were just starting to flock online in droves, kids had to swipe their parents’ credit cards or find a fraudulent number online to access adult content on the web. But today’s kids—even in states with the strictest age verification laws—know they can just use Google.

Last month, a study analyzing the relative popularity of Google search terms found that age verification laws shift users’ search behavior. It’s impossible to tell if the shift represents young users attempting to circumvent the child-focused law or adult users who aren’t the actual target of the laws. But overall, enforcement causes nearly half of users to stop searching for popular adult sites complying with laws and instead search for a noncompliant rival (48 percent) or virtual private network (VPN) services (34 percent), which are used to mask a location and circumvent age checks on preferred sites, the study found.

“Individuals adapt primarily by moving to content providers that do not require age verification,” the study concluded.

Although the Google Trends data prevented researchers from analyzing trends by particular age groups, the findings help confirm critics’ fears that age verification laws “may be ineffective, potentially compromise user privacy, and could drive users toward less regulated, potentially more dangerous platforms,” the study said.

The authors warn that lawmakers are not relying enough on evidence-backed policy evaluations to truly understand the consequences of circumvention strategies before passing laws. Internet law expert Eric Goldman recently warned in an analysis of age-estimation tech available today that this situation creates a world in which some kids are likely to be harmed by the laws designed to protect them.

Goldman told Ars that all of the age check methods carry the same privacy and security flaws, concluding that technology alone can’t solve this age-old societal problem. And logic-defying laws that push for them could end up “dramatically” reshaping the Internet, he warned.

Zeve Sanderson, a co-author of the Google Trends study, told Ars that “if you’re a policymaker, in addition to being potentially nervous about the more dangerous content, it’s also about just benefiting a noncompliant firm.”

“You don’t want to create a regulatory environment where noncompliance is incentivized or they benefit in some way,” Sanderson said.

Sanderson’s study pointed out that search data is only part of the picture. Some users may be using VPNs and accessing adult sites through direct URLs rather than through search. Others may rely on social media to find adult content, a 2025 conference paper noted, “easily” bypassing age checks on the largest platforms. VPNs remain the most popular circumvention method, a 2024 article in the International Journal of Law, Ethics, and Technology confirmed, “and yet they tend to be ignored or overlooked by statutes despite their popularity.”

While kids are ducking age gates and likely putting their sensitive data at greater risk, adult backlash may be peaking over the red wave of age-gating laws already blocking adults from visiting popular porn sites in several states.

Some states controversially began requiring ID checks to access adult content, which prompted Pornhub owner Aylo to swiftly block access to its sites in certain states. Pornhub instead advocates for device-based age verification, which it claims is a safer choice.

Aylo’s campaign has seemingly won over some states that either explicitly recommend device-based age checks or allow platforms to adopt whatever age check method they deem “reasonable.” Other methods could include app store-based age checks, algorithmic age estimation (based on a user’s web activity), face scans, or even tools that guess users’ ages based on hand movements.

On Reddit, adults have spent the past year debating the least intrusive age verification methods, as it appears inevitable that adult content will stay locked down, and they dread a future where more and more adult sites might ask for IDs. Additionally, critics have warned that showing an ID magnifies the risk of users publicly exposing their sexual preferences if a data breach or leak occurs.

To avoid that fate, at least one Redditor has attempted to reinvent the earliest age verification method, proposing a resurgence of credit card-based age checks that society discarded as unconstitutional in the early 2000s.

Under those systems, an entire industry of age verification companies emerged, selling passcodes to access adult sites for a supposedly nominal fee. The logic was simple: Only adults could buy credit cards, so only adults could buy passcodes with credit cards.
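
That logic is simple enough to sketch in code. Below is a minimal, hypothetical Python illustration of the passcode model described above; Adult Check’s actual implementation was never made public, and every name and detail here is invented:

```python
# Hypothetical sketch of the '90s passcode model -- not Adult Check's
# real code, which was never public. All names here are invented.
import secrets
from typing import Optional

issued_passcodes = set()  # stand-in for the provider's passcode database

def sell_passcode(card_charge_succeeded: bool) -> Optional[str]:
    """Issue a random passcode once a credit card charge clears.
    A chargeable card is the only 'proof' of adulthood collected."""
    if not card_charge_succeeded:
        return None
    code = secrets.token_urlsafe(12)  # random, linked to no identity
    issued_passcodes.add(code)
    return code

def may_enter_adult_site(passcode: str) -> bool:
    """Partner sites check only that the passcode exists. Nothing ties
    it to a person -- the privacy appeal, and also the loophole: codes
    can be shared, resold, or bought with a parent's card."""
    return passcode in issued_passcodes
```

As the sketch makes plain, possession of a valid code proves only that some card was charged once, not who is typing the code now.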

If “a person buys, for a nominal fee, a randomly generated passcode not connected to them in any way” to access adult sites, one Redditor suggested about three months ago, “there won’t be any way to tie the individual to that passcode.”

“This could satisfy the requirement to keep stuff out of minors’ hands,” the Redditor wrote in a thread asking how any site featuring sexual imagery could hypothetically comply with US laws. “Maybe?”

Several users rushed to educate the Redditor about the history of age checks. Those grasping for purely technology-based solutions today could be propping up the next industry flourishing from flawed laws, they said.

And, of course, since ’90s kids easily ducked those age gates, too, history shows why investing millions to build the latest and greatest age verification systems probably remains a fool’s errand after all these years.

The cringey early history of age checks

The earliest age verification systems were born out of Congress’s “first attempt to outlaw pornography online,” the LA Times reported. That attempt culminated in the Communications Decency Act of 1996.

Although the law was largely overturned a year later, the million-dollar age verification industry was already entrenched, partly due to its intriguing business model. These companies didn’t charge adult sites any fee to add age check systems—which required little technical expertise to implement—and instead shared a big chunk of their revenue with porn sites that opted in. Some sites got 50 percent of revenues, estimated in the millions, simply for adding the functionality.

The age check business was apparently so lucrative that in 2000, one adult site, which was sued for distributing pornographic images of children, pushed fans to buy subscriptions to its preferred service as a way of helping to fund its defense, Wired reported. “Please buy an Adult Check ID, and show your support to fight this injustice!” the site urged users. (The age check service promptly denied any association with the site.)

In a sense, the age check industry incentivized adult sites’ growth, an American Civil Liberties Union attorney told the LA Times in 1999. In turn, that fueled further growth in the age verification industry.

Some services made their link to adult sites obvious, like Porno Press, which charged a one-time fee of $9.95 to access affiliated adult sites, a Congressional filing noted. But many others tried to mask the link, opting for names like PayCom Billing Services, Inc. or CCBill, as Forbes reported, perhaps enticing more customers by drawing less attention on a credit card statement. Other firms had names like Adult Check, Mancheck, and Adult Sights, Wired reported.

Of these firms, the biggest and most successful was Adult Check. At its peak popularity in 2001, the service boasted 4 million customers willing to pay “for the privilege of ogling 400,000 sex sites,” Forbes reported.

At the head of the company was Laith P. Alsarraf, the CEO of the Adult Check service provider Cybernet Ventures.

Alsarraf testified to Congress several times, becoming a go-to expert witness for lawmakers behind the 1998 Child Online Protection Act (COPA). Like the version of the CDA that prompted it, this act was ultimately deemed unconstitutional. And some judges and top law enforcement officers defended Alsarraf’s business model with Adult Check in court—insisting that it didn’t impact adult speech and “at most” posed a “modest burden” that was “outweighed by the government’s compelling interest in shielding minors” from adult content.

But his apparent conflicts of interest also drew criticism. One judge warned in 1999 that “perhaps we do the minors of this country harm if First Amendment protections, which they will with age inherit fully, are chipped away in the name of their protection,” the American Civil Liberties Union (ACLU) noted.

Summing up the seeming conflict, Ann Beeson, an ACLU lawyer, told the LA Times, “the government wants to shut down porn on the Net. And yet their main witness is this guy who makes his money urging more and more people to access porn on the Net.”

’90s kids dodged Adult Check age gates

Adult Check’s subscription costs varied, but the service predictably got more expensive as its popularity spiked. In 1999, customers could snag a “lifetime membership” for $76.95 or else fork over $30 every two years or $20 annually, the LA Times reported. Those were good deals compared to the significantly higher costs documented in the 2001 Forbes report, which noted a three-month package was available for $20, or users could pay $20 monthly to access supposedly premium content.

Among Adult Check’s customers were apparently some savvy kids who snuck through the cracks in the system. In various threads debating today’s laws, several Redditors have claimed that they used Adult Check as minors in the ’90s, either admitting to stealing a parent’s credit card or sharing age-authenticated passcodes with friends.

“Adult Check? I remember signing up for that in the mid-late 90s,” one commenter wrote in a thread asking if anyone would ever show ID to access porn. “Possibly a minor friend of mine paid for half the fee so he could use it too.”

“Those years were a strange time,” the commenter continued. “We’d go see tech-suspense-horror-thrillers like The Net and Disclosure where the protagonist has to fight to reclaim their lives from cyberantagonists, only to come home to send our personal information along with a credit card payment so we could look at porn.”

“LOL. I remember paying for the lifetime package, thinking I’d use it for decades,” another commenter responded. “Doh…”

Adult Check thrived even without age check laws

Sanderson’s study noted that today, minors’ “first exposure [to adult content] typically occurs between ages 11–13,” which is “substantially earlier than pre-Internet estimates.” Kids seeking out adult content may be in a period of heightened risk-taking or lack self-control, while others may be exposed without ever seeking it out. Some studies suggest that kids who are more likely to seek out adult content could struggle with lower self-esteem, emotional problems, body image concerns, or depressive symptoms. These potential negative associations with adolescent exposure to porn have long been the basis for lawmakers’ fight to keep the content away from kids—and even the biggest publishers today, like Pornhub, agree that it’s a worthy goal.

After parents got wise to ’90s kids dodging age gates, pressure predictably mounted on Adult Check to solve the problem, despite Adult Check consistently admitting that its system wasn’t foolproof. Alsarraf claimed that Adult Check developed “proprietary” technology to detect when kids were using credit cards or when multiple kids were attempting to use the same passcode at the same time from different IP addresses. He also claimed that Adult Check could detect stolen credit cards, bogus card numbers, card numbers “posted on the Internet,” and other fraud.
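
Alsarraf never publicly detailed how that “proprietary” technology worked. But the simplest version of the passcode-sharing check he described is easy to imagine; here is a purely hypothetical Python sketch, with every name and threshold invented for illustration:

```python
# Purely illustrative sketch of the kind of shared-passcode check
# Alsarraf described; all names and thresholds here are hypothetical.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 300   # how long two logins count as "simultaneous"
MAX_DISTINCT_IPS = 1   # more than one IP in the window looks like sharing

recent_logins = defaultdict(deque)  # passcode -> deque of (timestamp, ip)

def looks_shared(passcode: str, ip: str) -> bool:
    """Record a login and flag passcodes used from multiple IPs at once."""
    now = time.time()
    logins = recent_logins[passcode]
    logins.append((now, ip))
    # Drop entries older than the window.
    while logins and now - logins[0][0] > WINDOW_SECONDS:
        logins.popleft()
    distinct_ips = {addr for _, addr in logins}
    return len(distinct_ips) > MAX_DISTINCT_IPS
```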

Meanwhile, the LA Times noted, Cybernet Ventures pulled in an estimated $50 million in 1999, ensuring that the CEO could splurge on a $690,000 house in Pasadena and a $100,000 Hummer. Although Adult Check was believed to be his most profitable venture at that time, Alsarraf told the LA Times that he wasn’t really invested in COPA passing.

“I know Adult Check will flourish,” Alsarraf said, “with or without the law.”

And he was apparently right. By 2001, subscriptions banked an estimated $320 million.

After the CDA and COPA were blocked, “many website owners continue to use Adult Check as a responsible approach to content accessibility,” Alsarraf testified.

While adult sites were likely just in it for the paychecks—which reportedly were dependably delivered—he positioned this ongoing growth as fueled by sites voluntarily turning to Adult Check to protect kids and free speech. “Adult Check allows a free flow of ideas and constitutionally protected speech to course through the Internet without censorship and unreasonable intrusion,” Alsarraf said.

“The Adult Check system is the least restrictive, least intrusive method of restricting access to content that requires minimal cost, and no parental technical expertise and intervention: It does not judge content, does not inhibit free speech, and it does not prevent access to any ideas, word, thoughts, or expressions,” Alsarraf testified.

Britney Spears aided Adult Check’s downfall

Adult Check’s downfall ultimately came in part thanks to Britney Spears, Wired reported in 2002. Spears went from Mickey Mouse Club child star to the “Princess of Pop” at 16 years old with her hit “Baby One More Time” in 1999, the same year that Adult Check rose to prominence.

Today, Spears is well-known for her activism, but in the late 1990s and early 2000s, she was one of the earliest victims of fake online porn.

Spears submitted documents in a lawsuit brought by the publisher of a porn magazine called Perfect 10. The publisher accused Adult Check of enabling infringement of its content featured on the age check provider’s partner sites, and Spears’ documents helped prove that Adult Check was also linking to “non-existent nude photos,” allegedly in violation of unfair competition laws. The case was an early test of online liability, and Adult Check seemingly learned the hard way that the courts weren’t on its side.

That suit prompted an injunction blocking Adult Check from partnering with sites promoting supposedly illicit photos of “models and celebrities,” a restriction the company shrugged off as no big deal because such sites comprised only about 6 percent of its business.

However, after Adult Check lost the lawsuit in 2004, its reputation took a hit, and the service fell out of the pop lexicon. Although Cybernet Ventures continued to exist, sites dropped Adult Check screening, as it was no longer considered the gold standard in age verification. Perhaps more importantly, it was no longer required by law.

But although millions subscribed to Adult Check over the years, not everybody in the ’90s bought the company’s claims that it was protecting kids from porn. Some critics said it provided only a veneer of online safety without meaningfully shielding kids. And most of the country—more than 250 million US residents—never subscribed.

“I never used Adult Check,” one Redditor said in a thread pondering whether age gate laws might increase the risks of government surveillance. “My recollection was that it was an untrustworthy scam and unneeded barrier for the theater of legitimacy.”

Alsarraf keeps a lower profile these days and did not respond to Ars’ request to comment.

The rise and fall of Adult Check may have prevented more legally viable age verification systems from gaining traction. The ACLU argued that its popularity trampled the momentum of the “least restrictive” method for age checks available in the ’90s, a system called the Platform for Internet Content Selection (PICS).

Based on rating and filtering technology, PICS allowed content providers or third-party interest groups to create private rating systems so that “individual users can then choose the rating system that best reflects their own values, and any material that offends them will be blocked from their homes.”
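
PICS itself was a labeling spec, not a product: pages carried machine-readable ratings (delivered via an HTTP header or a PICS-Label meta tag), and the filtering happened in the user’s own software. As a rough illustration of that model, here is a minimal, hypothetical Python sketch using RSACi-style 0–4 scales for nudity, sex, violence, and language; all names and values are invented for illustration:

```python
# Toy sketch of the PICS filtering model: a page carries a label from
# a chosen rating vocabulary (here, RSACi-style 0-4 scales), and the
# user's client blocks anything exceeding the household's thresholds.
RSACI_SCALES = ("nudity", "sex", "violence", "language")

def allowed(page_label: dict[str, int], household_limits: dict[str, int]) -> bool:
    """Return True if every rated axis falls within the chosen limits."""
    return all(
        page_label.get(axis, 0) <= household_limits.get(axis, 0)
        for axis in RSACI_SCALES
    )

# A strict household blocks a page rated 2 on the language scale;
# a permissive one lets it through.
print(allowed({"language": 2}, {"language": 0}))  # False
print(allowed({"language": 2}, {"language": 4}))  # True
```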

However, like all age check systems, PICS was also criticized as being imperfect. Legal scholar Lawrence Lessig called it “the devil” because “it allows censorship at any point on the chain of distribution” of online content.

Although the age verification technology has changed, today’s lawmakers are stuck in the same debate decades later, with no perfect solutions in sight.

SCOTUS to rule on constitutionality of age gate laws

This summer, the Supreme Court will decide whether a Texas law blocking minors’ access to porn is constitutional. The decision could either stunt the momentum or strengthen the backbone of nearly 20 laws in red states across the country seeking to age-gate the Internet.

For privacy advocates opposing the laws, the SCOTUS ruling feels like a sink-or-swim moment for age gates, depending on which way the court swings. And it will come just as blue states like Colorado have recently begun pushing for age gates, too. Meanwhile, other laws increasingly seek to safeguard kids’ privacy and prevent social media addiction by also requiring age checks.

Since the 1990s, the US has debated how to best keep kids away from harmful content without trampling adults’ First Amendment rights. And while cruder credit card-based systems like Adult Check are no longer seen as viable, it’s clear that for lawmakers today, technology is still viewed as both the problem and the solution.

Lawmakers claim that the latest technology makes it easier than ever to access porn, but advancements like digital IDs, device-based age checks, and app store age checks seem to signal salvation, making it correspondingly easier to digitally verify user ages. And some artificial intelligence solutions have likely made lawmakers’ dreams of age-gating the Internet appear even more within reach.

Critics have condemned age gates as unconstitutionally limiting adults’ access to legal speech, at the furthest extreme accusing conservatives of seeking to censor all adult content online or expand government surveillance by tracking people’s sexual identity. (Goldman noted that “Russell Vought, an architect of Project 2025 and President Trump’s Director of the Office of Management and Budget, admitted that he favored age authentication mandates as a ‘back door’ way to censor pornography.”)

Ultimately, SCOTUS could end up deciding whether any kind of age gate is ever appropriate. The court could rule that strict scrutiny, which requires a narrowly tailored solution serving a compelling government interest, must be applied, potentially ruling out every strategy lawmakers have suggested. It could apply strict scrutiny but find that age checks are narrowly tailored. Or it could go the other way and rule that strict scrutiny does not apply, so all state lawmakers would need to show is that requiring age verification is rationally connected to their interest in blocking minors from adult content.

Age verification remains flawed, experts say

If there’s anything the ’90s can teach lawmakers about age gates, it’s that creating an age verification industry dependent on adult sites will only incentivize the creation of more adult sites that benefit from the new rules. Back then, when age verification systems increased sites’ revenues, compliant sites were rewarded, but in today’s climate, it’s the noncompliant sites that stand to profit by not authenticating ages.

Sanderson’s study noted that Louisiana “was the only state that implemented age verification in a manner that plausibly preserved a user’s anonymity while verifying age,” which is why Pornhub didn’t block the state over its age verification law. But other states that Pornhub blocked passed copycat laws that “tended to be stricter, either requiring uploads of an individual’s government identification,” methods requiring providing other sensitive data, “or even presenting biometric data such as face scanning,” the study noted.

The technology continues evolving as the debate rages on. Some of the most popular platforms and biggest tech companies have been testing new age estimation methods this year. Notably, Discord is testing out face scans in the United Kingdom and Australia, and both Meta and Google are testing technology to supposedly detect kids lying about their ages online.

But a solution has not yet been found as parents and their lawyers circle social media companies they believe are harming their kids. In fact, the unreliability of the tech remains an issue for Meta, which is perhaps the most motivated to find a fix, having long faced immense pressure to improve child safety on its platforms. Earlier this year, Meta had to yank its age detection tool after the “measure didn’t work as well as we’d hoped and inadvertently locked out some parents and guardians who shared devices with their teens,” the company said.

On April 21, Meta announced that it started testing the tech in the US, suggesting the flaws were fixed, but Meta did not directly respond to Ars’ request to comment in more detail on updates.

Two years ago, Ash Johnson, a senior policy manager at the nonpartisan nonprofit think tank the Information Technology and Innovation Foundation (ITIF), urged Congress to “support more research and testing of age verification technology,” saying that the government’s last empirical evaluation was in 2014. She noted then that “the technology is not perfect, and some children will break the rules, eventually slipping through the safeguards,” but that lawmakers need to understand the trade-offs of advocating for different tech solutions or else risk infringing user privacy.

More research is needed, Johnson told Ars, while Sanderson’s study suggested that regulators should also conduct circumvention research or be stuck with laws that have a “limited effectiveness as a standalone policy tool.”

For example, while AI solutions are increasingly accurate—and, in one Facebook survey Goldman’s analysis noted, overwhelmingly popular with users—the tech still struggles to distinguish a 17-year-old from an 18-year-old.

Like Aylo, ITIF recommends device-based age authentication as the least restrictive method, Johnson told Ars. Perhaps the biggest issue with that option, though, is that kids may have an easy time accessing adult content on devices shared with parents, Goldman noted.

Not sharing Johnson’s optimism, Goldman wrote that “there is no ‘preferred’ or ‘ideal’ way to do online age authentication.” Even a perfect system that accurately authenticates age every time would be flawed, he suggested.

“Rather, they each fall on a spectrum of ‘dangerous in one way’ to ‘dangerous in a different way,'” he wrote, concluding that “every solution has serious privacy, accuracy, or security problems.”

Kids at “grave risk” from uninformed laws

As a “burgeoning” age verification industry swells, Goldman wants to see more earnest efforts from lawmakers to “develop a wider and more thoughtful toolkit of online child safety measures.” They could start, he suggested, by consistently defining minors in laws so it’s clear who is being regulated and what access is being restricted. They could then provide education to parents and minors to help them navigate online harms.

Without such careful consideration, Goldman predicts a dystopian future prompted by age verification laws. If SCOTUS endorses them, users could become so accustomed to age gates that they start entering sensitive information into various web platforms without a second thought. Even the government knows that would be a disaster, Goldman said.

“Governments around the world want people to think twice before sharing sensitive biometric information due to the information’s immutability if stolen,” Goldman wrote. “Mandatory age authentication teaches them the opposite lesson.”

Goldman recommends that lawmakers start seeking an information-based solution to age verification problems rather than depending on tech to save the day.

“Treating the online age authentication challenges as purely technological encourages the unsupportable belief that its problems can be solved if technologists ‘nerd harder,'” Goldman wrote. “This reductionist thinking is a categorical error. Age authentication is fundamentally an information problem, not a technology problem. Technology can help improve information accuracy and quality, but it cannot unilaterally solve information challenges.”

Lawmakers could potentially minimize risks to kids by only verifying age when someone tries to access restricted content or “by compelling age authenticators to minimize their data collection” and “promptly delete any highly sensitive information” collected. That likely wouldn’t stop some vendors from collecting or retaining data anyway, Goldman suggested. But it could be a better standard to protect users of all ages from inevitable data breaches, since we know that “numerous authenticators have suffered major data security failures that put authenticated individuals at grave risk.”
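
To illustrate the standard Goldman describes, an age authenticator built around data minimization would reduce whatever document it sees to a single yes-or-no answer and persist nothing else. The Python sketch below is hypothetical and vendor-agnostic, not any real system’s implementation:

```python
# Toy sketch of the data-minimization pattern: compute one boolean
# from an ID document, and retain none of the document's fields.
# Everything here is hypothetical.
from datetime import date

def is_adult(birth_date: date, today: date | None = None) -> bool:
    """Return True if the person is at least 18 on the given day."""
    today = today or date.today()
    years = today.year - birth_date.year
    # Not yet had this year's birthday? Then subtract a year.
    if (today.month, today.day) < (birth_date.month, birth_date.day):
        years -= 1
    return years >= 18

def check_and_discard(id_document: dict) -> bool:
    # Read only the birth date; the boolean is the only output, and
    # nothing from the document is logged or persisted in this sketch.
    return is_adult(id_document["birth_date"])
```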

“If the policy goal is to protect minors online because of their potential vulnerability, then forcing minors to constantly decide whether or not to share highly sensitive information with strangers online is a policy failure,” Goldman wrote. “Child safety online needs a whole-of-society response, not a delegate-and-pray approach.”

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

Redditor accidentally reinvents discarded ’90s tool to escape today’s age gates Read More »

fcc-urges-courts-to-ignore-5th-circuit-ruling-that-agency-can’t-issue-fines

FCC urges courts to ignore 5th Circuit ruling that agency can’t issue fines


FCC fights the 5th Circuit

One court said FCC violated right to trial, but other courts haven’t ruled yet.

The Federal Communications Commission is urging two federal appeals courts to disregard a 5th Circuit ruling that guts the agency’s ability to issue financial penalties.

On April 17, the US Court of Appeals for the 5th Circuit granted an AT&T request to wipe out a $57 million fine for selling customer location data without consent. The conservative 5th Circuit court said the FCC “acted as prosecutor, jury, and judge,” violating AT&T’s Seventh Amendment right to a jury trial.

The ruling wasn’t a major surprise. The 5th Circuit said it was guided by the Supreme Court’s June 2024 ruling in Securities and Exchange Commission v. Jarkesy, which held that “when the SEC seeks civil penalties against a defendant for securities fraud, the Seventh Amendment entitles the defendant to a jury trial.” After the Supreme Court’s Jarkesy ruling, FCC Republican Nathan Simington vowed to vote against any fine imposed by the commission until its legal powers are clear.

Before becoming the FCC chairman, Brendan Carr voted against the fine issued to AT&T and the fines for similar privacy violations simultaneously levied against T-Mobile and Verizon. Carr repeatedly opposed Biden-era efforts to regulate telecom providers and is aiming to eliminate many of the FCC’s rules now that he is in charge. But Carr has also been aggressive in regulating media, and he doesn’t want the FCC’s ability to issue penalties wiped out completely. The Carr FCC stated its position in new briefs submitted in separate lawsuits filed by T-Mobile and Verizon.

Verizon sued the FCC in the 2nd Circuit in an attempt to overturn its privacy fine, while T-Mobile and subsidiary Sprint sued in the District of Columbia Circuit. Verizon and T-Mobile reacted to the 5th Circuit ruling by urging the other courts to rule the same way, prompting responses from the FCC last week.

“The Fifth Circuit concluded that the FCC’s enforcement proceeding leading to a monetary forfeiture order violated AT&T’s Seventh Amendment rights. This Court shouldn’t follow that decision,” the FCC told the 2nd Circuit last week.

FCC loss has wide implications

Carr’s FCC argued that the agency’s “monetary forfeiture order proceedings pose no Seventh Amendment problem because Section 504(a) [of the Communications Act] affords carriers the opportunity to demand a de novo jury trial in federal district court before the government can recover any penalty. Verizon elected to forgo that opportunity and instead sought direct appellate review.” The FCC put forth the same argument in the T-Mobile case with a filing in the District of Columbia Circuit.

There would be a circuit split if either the 2nd Circuit or DC Circuit appeals court rules in the FCC’s favor, increasing the chances that the Supreme Court will take up the case and rule directly on the FCC’s enforcement authority.

Beyond punishing telecom carriers for privacy violations, an FCC loss could prevent the commission from fining robocallers. When Carr’s FCC proposed a $4.5 million fine for an allegedly illegal robocall scheme in February, Simington repeated his objection to the FCC issuing fines of any type.

“While the conduct described in this NAL [Notice of Apparent Liability for Forfeiture] is particularly egregious and certainly worth enforcement action, I continue to believe that the Supreme Court’s decision in Jarkesy prevents me from voting, at this time, to approve this or any item purporting to impose a fine,” Simington said at the time.

5th Circuit reasoning

The 5th Circuit ruling against the FCC was issued by a panel of three judges appointed by Republican presidents. “Our analysis is governed by SEC v. Jarkesy. In that case, the Supreme Court ruled that the Seventh Amendment prohibited the SEC from requiring respondents to defend themselves before an agency, rather than a jury, against civil penalties for alleged securities fraud,” the appeals court said.

The penalty issued by the FCC is not “remedial,” the court said. The fine was punitive and not simply “meant to compensate victims whose location data was compromised. So, like the penalties in Jarkesy, the civil penalties here are ‘a type of remedy at common law that could only be enforced in courts of law.'”

The FCC argued that its enforcement proceeding fell under the “public rights” exception, unlike the private rights that must be adjudicated in court. “The Commission argues its enforcement action falls within the public rights exception because it involves common carriers,” the 5th Circuit panel said. “Given that common carriers like AT&T are ‘affected with a public interest,’ the Commission contends Congress could assign adjudication of civil penalties against them to agencies instead of courts.”

The panel disagreed, saying that “the Commission’s proposal would blow a hole in what is meant to be a narrow exception to Article III” and “empower Congress to bypass Article III adjudication in countless matters.” The panel acknowledged that “federal agencies like the Commission have long had regulatory authority over common carriers, such as when setting rates or granting licenses,” but said this doesn’t mean that “any regulatory action concerning common carriers implicates the public rights exception.”

FCC hopes lie with other courts

The 5th Circuit panel also rejected the FCC’s contention that carriers are afforded the right to a trial after the FCC enforcement proceeding. The 5th Circuit said this applies only when a carrier fails to pay a penalty and is sued by the Department of Justice. “To begin with, by the time DOJ sues (if it does), the Commission would have already adjudged a carrier guilty of violating section 222 and levied fines… in this process, which was completely in-house, the Commission acted as prosecutor, jury, and judge,” the panel said.

An entity penalized by the FCC can also ask a court of appeals to overturn the fine, as AT&T did here. But in choosing this path, the company “forgoes a jury trial,” the 5th Circuit panel said.

While Verizon and T-Mobile hope the other appeals courts will rule the same way, the FCC maintains that the 5th Circuit got it wrong. In its filing to the 2nd Circuit, the FCC challenged the 5th Circuit’s view on whether a trial after the FCC issues a fine satisfies the right to a jury trial. Pointing to an 1899 Supreme Court ruling, the FCC said that “an initial tribunal can lawfully enter judgment without a full jury trial if the law permits a subsequent ‘trial [anew] by jury, at the request of either party, in the appellate court.'”

The FCC further said the 5th Circuit relied on a precedent that doesn’t exist in either the 2nd Circuit or District of Columbia Circuit.

“The Fifth Circuit also relied on circuit precedent holding that ‘[i]n a section 504 trial, a defendant cannot challenge a forfeiture order’s legal conclusions,'” the FCC also said. “This Court, however, has never adopted such a limitation, and the Fifth Circuit’s premise is in doubt. Regardless, the proper approach would be to challenge any such limitation in the trial court and seek to strike the limitation—not to vacate the forfeiture order.”

Jon is a Senior IT Reporter for Ars Technica. He covers the telecom industry, Federal Communications Commission rulemakings, broadband consumer affairs, court cases, and government regulation of the tech industry.

FCC urges courts to ignore 5th Circuit ruling that agency can’t issue fines Read More »