Policy


Spotify peeved after 10,000 users sold data to build AI tools


Spotify sent a warning to stop data sales, but developers say they never got it.

For millions of Spotify users, the “Wrapped” feature—which crunches the numbers on their annual listening habits—has been a highlight of every year’s end since it debuted in 2015. NPR once broke down exactly why our brains find the feature so “irresistible,” while Cosmopolitan last year declared that sharing Wrapped screenshots of top artists and songs had become “the ultimate status symbol” for tens of millions of music fans.

It’s no surprise then that, after a decade, some Spotify users who are especially eager to see Wrapped evolve are no longer willing to wait to see if Spotify will ever deliver the more creative streaming insights they crave.

With the help of AI, these users expect their data can be analyzed far more quickly, potentially uncovering overlooked or never-considered patterns and offering even more insight into what their listening habits say about them.

Imagine, for example, accessing a music recap that encapsulates a user’s full listening history—not just their top songs and artists. With that unlocked, users could track emotional patterns, analyzing how their music tastes reflected their moods over time and perhaps helping them adjust their listening habits to better cope with stress or major life events. And for users particularly intrigued by their own data, there’s even the potential to use AI to cross data streams from different platforms and perhaps understand even more about how their music choices impact their lives and tastes more broadly.
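To make that concrete, here is a minimal sketch of the kind of self-directed analysis a listener could run on their own exported data. It assumes the export follows Spotify's familiar StreamingHistory*.json layout with endTime, artistName, and msPlayed fields; the file names and fields here are assumptions for illustration, not anything Unwrapped or Spotify prescribes.

import json
from collections import Counter, defaultdict
from glob import glob

# Tally listening minutes per artist for each month found in the export.
minutes_by_month = defaultdict(Counter)

for path in glob("StreamingHistory*.json"):
    with open(path, encoding="utf-8") as f:
        for record in json.load(f):
            month = record["endTime"][:7]  # e.g. "2024-11"
            minutes_by_month[month][record["artistName"]] += record["msPlayed"] / 60000

# Print each month's top artist: a crude, do-it-yourself "monthly Wrapped."
for month in sorted(minutes_by_month):
    artist, minutes = minutes_by_month[month].most_common(1)[0]
    print(f"{month}: {artist} ({minutes:.0f} minutes)")

Swapping the per-month counter for mood tags, play times, or a friend's export is what turns a static recap into the kind of ongoing, comparative analysis these users are after.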

Likely just as appealing as gleaning deeper personal insights, though, users could also potentially build AI tools to compare listening habits with their friends. That could lead to nearly endless fun for the most invested music fans, where AI could be tapped to assess all kinds of random data points, like whose breakup playlists are more intense or who really spends the most time listening to a shared favorite artist.

To support developers offering novel insights like these, more than 18,000 Spotify users have joined “Unwrapped,” a collective launched in February that allows them to pool and monetize their data.

Voting as a group through the decentralized data platform Vana—which Wired profiled earlier this year—these users can elect to sell their dataset to developers who are building AI tools offering fresh ways for users to analyze streaming data in ways that Spotify likely couldn’t or wouldn’t.

In June, the group made its first sale, with 99.5 percent of members voting yes. Vana co-founder Anna Kazlauskas told Ars that the collective—at the time about 10,000 members strong—sold a “small portion” of its data (users’ artist preferences) for $55,000 to Solo AI.

While each Spotify user only earned about $5 in cryptocurrency tokens—which Kazlauskas suggested was not “ideal,” wishing the users had earned about “a hundred times” more—she said the deal was “meaningful” in showing Spotify users that their data “is actually worth something.”

“I think this is what shows how these pools of data really act like a labor union,” Kazlauskas said. “A single Spotify user, you’re not going to be able to go say like, ‘Hey, I want to sell you my individual data.’ You actually need enough of a pool to sort of make it work.”

Spotify sent warning to Unwrapped

Unsurprisingly, Spotify is not happy about Unwrapped, which is perhaps a little too closely named to its popular branded feature for the streaming giant’s comfort. A spokesperson told Ars that Spotify sent a letter to the contact info listed for Unwrapped developers on their site, outlining concerns that the collective could be infringing on Spotify’s Wrapped trademark.

Further, the letter warned that Unwrapped violates Spotify’s developer policy, which bans using the Spotify platform or any Spotify content to build machine learning or AI models. And developers may also be violating terms by facilitating users’ sale of streaming data.

“Spotify honors our users’ privacy rights, including the right of portability,” Spotify’s spokesperson said. “All of our users can receive a copy of their personal data to use as they see fit. That said, UnwrappedData.org is in violation of our Developer Terms which prohibit the collection, aggregation, and sale of Spotify user data to third parties.”

But while Spotify suggests it has already taken steps to stop Unwrapped, the Unwrapped team told Ars that it never received any communication from Spotify. It plans to defend users’ right to “access, control, and benefit from their own data,” its statement said, while providing reassurances that it will “respect Spotify’s position as a global music leader.”

Unwrapped “does not distribute Spotify’s content, nor does it interfere with Spotify’s business,” developers argued. “What it provides is community-owned infrastructure that allows individuals to exercise rights they already hold under widely recognized data protection frameworks—rights to access their own listening history, preferences, and usage data.”

“When listeners choose to share or monetize their data together, they are not taking anything away from Spotify,” developers said. “They are simply exercising digital self-determination. To suggest otherwise is to claim that users do not truly own their data—that Spotify owns it for them.”

Jacob Hoffman-Andrews, a senior staff technologist for the digital rights group the Electronic Frontier Foundation, told Ars that—while EFF objects to data dividend schemes “where users are encouraged to share personal information in exchange for payment”—Spotify users should nevertheless always maintain control of their data.

“In general, listeners should have control of their own data, which includes exporting it for their own use,” Hoffman-Andrews said. “An individual’s musical history is of use not just to Spotify but also to the individual who created it. And there’s a long history of services that enable this sort of data portability, for instance Last.fm, which integrates with Spotify and many other services.”

To EFF, it seems ill-advised to sell data to AI companies, Hoffman-Andrews said, emphasizing “privacy isn’t a market commodity, it’s a fundamental right.”

“Of course, so is the right to control one’s own data,” Hoffman-Andrews noted, seeming to agree with Unwrapped developers in concluding that “ultimately, listeners should get to do what they want with their own information.”

Users’ right to privacy is the primary reason why Unwrapped developers told Ars that they’re hoping Spotify won’t try to block users from selling data to build AI.

“This is the heart of the issue: If Spotify seeks to restrict or penalize people for exercising these rights, it sends a chilling message that its listeners should have no say in how their own data is used,” the Unwrapped team’s statement said. “That is out of step not only with privacy law, but with the values of transparency, fairness, and community-driven innovation that define the next era of the Internet.”

Unwrapped sign-ups limited due to alleged Spotify issues

There could be more interest in Unwrapped. But Kazlauskas alleged to Ars that in the more than six months since Unwrapped’s launch, “Spotify has made it extraordinarily difficult” for users to port over their data. She claimed that developers have found that “every time they have an easy way for users to get their data,” Spotify shuts it down “in some way.”

Supposedly because of Spotify’s interference, Unwrapped remains in an early launch phase and can only offer limited spots for new users seeking to sell their data. Kazlauskas told Ars that about 300 users can be added each day due to the cumbersome and allegedly shifting process for porting over data.

Currently, however, Unwrapped is working on an update that could make that process more stable, Kazlauskas said, as well as changes to help users regularly update their streaming data. Those updates could perhaps attract more users to the collective.

Critics of Vana, like TechCrunch’s Kyle Wiggers, have suggested that data pools like Unwrapped will never reach “critical mass,” likely only appealing to niche users drawn to decentralization movements. Kazlauskas told Ars that data sale payments issued in cryptocurrency are one barrier for crypto-averse or crypto-shy users interested in Vana.

“The No. 1 thing I would say is, this kind of user experience problem where when you’re using any new kind of decentralized technology, you need to set up a wallet, then you’re getting tokens,” Kazlauskas explained. Users may feel culture shock, wondering, “What does that even mean? How do I vote with this thing? Is this real money?”

Kazlauskas is hoping that Vana supports a culture shift, striving to reach critical mass by giving users a “commercial lens” to start caring about data ownership. She also supports legislation like the Digital Choice Act in Utah, which “requires actually real-time API access, so people can get their data.” If the US had a federal law like that, Kazlauskas suspects that launching Unwrapped would have been “so much easier.”

Although regulations like Utah’s law could serve as a harbinger of a sea change, Kazlauskas noted that Big Tech companies that currently control AI markets employ a fierce lobbying force to maintain control over user data that decentralized movements just don’t have.

As Vana partners with Flower AI, striving, as Wired reported, to “shake up the AI industry” by releasing “a giant 100 billion-parameter model” later this year, Kazlauskas remains committed to ensuring that users are in control and “not just consumed.” She fears a future where tech giants may be motivated to use AI to surveil, influence, or manipulate users, when instead users could choose to band together and benefit from building more ethical AI.

“A world where a single company controls AI is honestly really dystopian,” Kazlauskas told Ars. “I think that it is really scary. And so I think that the path that decentralized AI offers is one where a large group of people are still in control, and you still get really powerful technology.”


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.



AI vs. MAGA: Populists alarmed by Trump’s embrace of AI, Big Tech

Some Republicans are still angry over the deplatforming of Trump by tech executives once known for their progressive politics. They had been joined by a “vocal and growing group of conservatives who are fundamentally suspicious of the benefits of technological innovation,” Thierer said.

With MAGA skeptics on one side and Big Tech allies of the president on the other, a “battle for the soul of the conservative movement” is under way.

Popular resentment is now a threat to Trump’s Republican Party, warn some of its biggest supporters—especially if AI begins displacing jobs as many of its exponents suggest.

“You can displace farm workers—what are they going to do about it? You can displace factory workers—they will just kill themselves with drugs and fast food,” Tucker Carlson, one of the MAGA movement’s most prominent media figures, told a tech conference on Monday.

“If you do that to lawyers and non-profit sector employees, you will get a revolution.”

It made Trump’s embrace of Silicon Valley bosses a “significant risk” for his administration ahead of next year’s midterm elections, a leading Republican strategist said.

“It’s a real double-edged sword—the administration is forced to embrace [AI] because if the US is not the leader in AI, China will be,” the strategist said, echoing the kind of argument made by Sacks and fellow Trump adviser Michael Kratsios for their AI policy platform.

“But you could see unemployment spiking over the next year,” the strategist said.

Other MAGA supporters are urging Trump to tone down at least his public cheerleading for an AI sector so many of them consider a threat.

“The pressure that is being placed on conservatives to fall in line… is a recipe for discontent,” said Toscano.

By courting AI bosses, the Republican Party, which claims to represent the pro-family movement, religious communities, and American workers, appeared to be embracing those who are antithetical to all of those groups, he warned.

“The current view of things suggests that the most important members of the party are those that are from Silicon Valley,” Toscano said.

Additional reporting by Cristina Criddle in San Francisco.

© 2025 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.



Pay-per-output? AI firms blindsided by beefed up robots.txt instructions.


“Really Simple Licensing” makes it easier for creators to get paid for AI scraping.

Logo for the “Really Simple Licensing” (RSL) standard. Credit: via RSL Collective

Leading Internet companies and publishers—including Reddit, Yahoo, Quora, Medium, The Daily Beast, Fastly, and more—think there may finally be a solution to end AI crawlers hammering websites to scrape content without permission or compensation.

Announced Wednesday morning, the “Really Simple Licensing” (RSL) standard evolves robots.txt instructions by adding an automated licensing layer that’s designed to block bots that don’t fairly compensate creators for content.

Free for any publisher to use starting today, the RSL standard is an open, decentralized protocol that makes clear to AI crawlers and agents the terms for licensing, usage, and compensation of any content used to train AI, a press release noted.

The standard was created by the RSL Collective, which was founded by Doug Leeds, former CEO of Ask.com, and Eckart Walther, a former Yahoo vice president of products and co-creator of the RSS standard, which made it easy to syndicate content across the web.

Based on the “Really Simple Syndication” (RSS) standard, RSL terms can be applied to protect any digital content, including webpages, books, videos, and datasets. The new standard supports “a range of licensing, usage, and royalty models, including free, attribution, subscription, pay-per-crawl (publishers get compensated every time an AI application crawls their content), and pay-per-inference (publishers get compensated every time an AI application uses their content to generate a response),” the press release said.

Leeds told Ars that the idea to use the RSS “playbook” to roll out the RSL standard arose after he invited Walther to speak to University of California, Berkeley students at the end of last year. That’s when the longtime friends with search backgrounds began pondering how AI had changed the search industry, as publishers today are forced to compete with AI outputs referencing their own content as search traffic nosedives.

Walther had watched the RSS standard quickly become adopted by millions of sites, and he realized that RSS had actually always been a licensing standard, Leeds said. Essentially, by adopting the RSS standard, publishers agreed to let search engines license a “bit” of their content in exchange for search traffic, and Walther saw that it could be just as straightforward to add AI licensing terms in the same way. That way, publishers could strive to recapture lost search revenue by agreeing to license all or some part of their content to train AI in return for payment each time AI outputs link to their content.

Leeds told Ars that the RSL standard doesn’t just benefit publishers, though. It also solves a problem for AI companies, which have complained in litigation over AI scraping that there is no effective way to license content across the web.

“We have listened to them, and what we’ve heard them say is… we need a new protocol,” Leeds said. With the RSL standard, AI firms get a “scalable way to get all the content” they want, while setting an incentive that they’ll only have to pay for the best content that their models actually reference.

“If they’re using it, they pay for it, and if they’re not using it, they don’t pay for it,” Leeds said.

No telling yet how AI firms will react to RSL

At this point, it’s hard to say if AI companies will embrace the RSL standard. Ars reached out to Google, Meta, OpenAI, and xAI—some of the big tech companies whose crawlers have drawn scrutiny—to see if it was technically feasible to pay publishers for every output referencing their content. xAI did not respond, and the other companies declined to comment without further detail about the standard, appearing to have not yet considered how a licensing layer beefing up robots.txt could impact their scraping.

Today will likely be the first chance for AI companies to wrap their heads around the idea of paying publishers per output. Leeds confirmed that the RSL Collective did not consult with AI companies when developing the RSL standard.

But AI companies know that they need a constant stream of fresh content to keep their tools relevant and to continually innovate, Leeds suggested. In that way, the RSL standard “supports what supports them,” Leeds said, “and it creates the appropriate incentive system” to create sustainable royalty streams for creators and ensure that human creativity doesn’t wane as AI evolves.

While we’ll have to wait to see how AI firms react to RSL, early adopters of the standard celebrated the launch today. That included Neil Vogel, CEO of People Inc., who said that “RSL moves the industry forward—evolving from simply blocking unauthorized crawlers, to setting our licensing terms, for all AI use cases, at global web scale.”

Simon Wistow, co-founder of Fastly, suggested the solution “is a timely and necessary response to the shifting economics of the web.”

“By making it easy for publishers to define and enforce licensing terms, RSL lays the foundation for a healthy content ecosystem—one where innovation and investment in original work are rewarded, and where collaboration between publishers and AI companies becomes frictionless and mutually beneficial,” Wistow said.

Leeds noted that a key benefit of the RSL standard is that even small creators will now have an opportunity to generate revenue for helping to train AI. Tony Stubblebine, CEO of Medium, did not mince words when explaining the battle that bloggers face as AI crawlers threaten to divert their traffic without compensating them.

“Right now, AI runs on stolen content,” Stubblebine said. “Adopting this RSL Standard is how we force those AI companies to either pay for what they use, stop using it, or shut down.”

How will the RSL standard be enforced?

On the RSL standard site, publishers can find common terms to add templated or customized text to their robots.txt files to adopt the RSL standard today and start protecting their content from unfettered AI scraping. Here’s an example of how machine-readable licensing terms could look, added directly to robots.txt files:

# NOTICE: all crawlers and bots are strictly prohibited from using this
# content for AI training without complying with the terms of the RSL
# Collective AI royalty license. Any use of this content for AI training
# without a license is a violation of our intellectual property rights.
License: https://rslcollective.org/royalty.xml
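On the crawler side, honoring terms like these starts with reading robots.txt before fetching anything else. The sketch below, which assumes the License directive appears exactly as in the example above, simply extracts the licensing URL so a bot operator could retrieve and review the terms; it is an illustration of the idea rather than part of any published RSL tooling.

from urllib.parse import urljoin
from urllib.request import urlopen

def find_license_url(site: str) -> str | None:
    # Fetch the site's robots.txt and return the URL from a "License:" line, if present.
    robots_url = urljoin(site, "/robots.txt")
    with urlopen(robots_url) as resp:
        text = resp.read().decode("utf-8", errors="replace")
    for raw_line in text.splitlines():
        line = raw_line.strip()
        if line.lower().startswith("license:"):
            return line.split(":", 1)[1].strip()
    return None

terms_url = find_license_url("https://example.com")
print(terms_url or "No License directive found")

A crawler that respected the standard would then fetch that URL, check which uses (crawling, training, inference) are licensed and at what price, and skip the site if it isn't willing to pay.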

Through RSL terms, publishers can automate licensing, and the cloud company Fastly is partnering with the collective to provide technical enforcement, which Leeds described as a bouncer that keeps unapproved bots away from valuable content. It seems likely that Cloudflare, which launched a pay-per-crawl program blocking greedy crawlers in July, could also help enforce the RSL standard.

For publishers, the standard “solves a business problem immediately,” Leeds told Ars, so the collective is hopeful that RSL will be rapidly and widely adopted. As further incentive, publishers can also rely on the RSL standard to “easily encrypt and license non-published, proprietary content to AI companies, including paywalled articles, books, videos, images, and data,” the RSL Collective site said, and that potentially could expand AI firms’ data pool.

On top of technical enforcement, Leeds said that publishers and content creators could legally enforce the terms, noting that the recent $1.5 billion Anthropic settlement suggests “there’s real money at stake” if you don’t train AI “legitimately.”

Should the industry adopt the standard, it could “establish fair market prices and strengthen negotiation leverage for all publishers,” the press release said. And Leeds noted that it’s very common for regulations to follow industry solutions (consider the Digital Millennium Copyright Act). Since the RSL Collective is already in talks with lawmakers, Leeds thinks “there’s good reason to believe” that AI companies will soon “be forced to acknowledge” the standard.

“But even better than that,” Leeds said, “it’s in their interest” to adopt the standard.

With RSL, AI firms can license content at scale “in a way that’s fair [and] preserves the content that they need to make their products continue to innovate.”

Additionally, the RSL standard may solve a problem that risks gutting trust and interest in AI at this early stage.

Leeds noted that currently, AI outputs don’t provide “the best answer” to prompts but instead rely on mashing up answers from different sources to avoid taking too much content from one site. That means that not only do AI companies “spend an enormous amount of money on compute costs to do that,” but AI tools may also be more prone to hallucination in the process of “mashing up” source material “to make something that’s not the best answer because they don’t have the rights to the best answer.”

“The best answer could exist somewhere,” Leeds said. But “they’re spending billions of dollars to create hallucinations, and we’re talking about: Let’s just solve that with a licensing scheme that allows you to use the actual content in a way that solves the user’s query best.”

By transforming the “ecosystem” with a standard that’s “actually sustainable and fair,” Leeds said that AI companies could also ensure that humanity never gets to the point where “humans stop producing” and “turn to AI to reproduce what humans can’t.”

Failing to adopt the RSL standard would be bad for AI innovation, Leeds suggested, perhaps paving the way for AI to replace search with a “sort of self-fulfilling swap of bad content that actually one doesn’t have any current information, doesn’t have any current thinking, because it’s all based on old training information.”

To Leeds, the RSL standard is ultimately “about creating the system that allows the open web to continue. And that happens when we get adoption from everybody,” he said, insisting that “literally the small guys are as important as the big guys” in pushing the entire industry to change and fairly compensate creators.




GirlsDoPorn owner Michael Pratt gets 27 years for sex trafficking conspiracy

“For almost a decade, the Defendant led the scheme to systematically coerce and defraud women into engaging in filmed sexual activity for profit,” the sentencing recommendation said. “A sentence of 260 months is warranted, given the longevity of the scheme, the amount of profit, and the extent of the damage to the victims.”

Pratt’s plea agreement limited his rights to appeal the sentencing, but said he “may appeal a custodial sentence above 260 months.” The 27-year (324-month) sentence exceeds that. While the government agreed to recommend no more than 260 months, the plea agreement said the government “may support on appeal the sentence or restitution order actually imposed.”

Pratt fled the US in 2019, shortly before being charged with sex trafficking crimes. “He was named to the FBI’s Ten Most Wanted list and lived as an international fugitive for more than three years until his arrest in Spain in December 2022 and extradition to San Diego in March 2024,” the DOJ said.

Pratt tried to minimize his role

Pratt is the fourth person to be sentenced in the GirlsDoPorn case. Pratt’s business partner, Matthew Wolfe, received 14 years. Ruben Andre Garcia was sentenced to 20 years, and Theodore Gyi was sentenced to four years. Defendant Valorie Moser is scheduled to be sentenced on Friday this week.

Pratt’s sentencing memorandum tried to minimize his role in the conspiracy. “Circa 2014, Mr. Pratt’s childhood friend, Matt Wolfe, took over as the cameraman and Mr. Pratt spent more time in the office doing post-production work and other business related activities,” the filing said.

Pratt argued that Garcia exhibited “erratic and unpredictable” behavior and that “much of this conduct occurred outside of Mr. Pratt’s presence.” Pratt’s filing said he should not receive a sentence as long as Garcia’s.

Garcia “was a rapist,” Pratt’s filing said. “Mr. Pratt had no involvement in Garcia’s sexual activities with the models before or after filming, nor did he condone it. When he received some complaints about Garcia’s behavior, Mr. Pratt took precautions to ensure the safety of the models, including setting up nanny video cameras, securing hotel incidental refrigerators, and ensuring everyone left the hotel as a group.”

The government’s sentencing memorandum described Pratt as “the ringleader in a wide-ranging sex-trafficking conspiracy during which many women were defrauded into engaging in sex acts on camera, destroying many of their lives.” The “scheme would never have occurred” if not for Pratt’s actions, “and hundreds of women would not have been victimized,” the government filing said.



Judge: Anthropic’s $1.5B settlement is being shoved “down the throat of authors”

At a hearing Monday, US district judge William Alsup blasted a proposed $1.5 billion settlement over Anthropic’s rampant piracy of books to train AI.

The proposed settlement comes in a case where Anthropic could have owed more than $1 trillion in damages after Alsup certified a class that included up to 7 million claimants whose works were illegally downloaded by the AI company.

Instead, critics fear Anthropic will get off cheaply, striking a deal with authors suing that covers less than 500,000 works and paying a small fraction of its total valuation (currently $183 billion) to get away with the massive theft. Defector noted that the settlement doesn’t even require Anthropic to admit wrongdoing, while the company continues raising billions based on models trained on authors’ works. Most recently, Anthropic raised $13 billion in a funding round, making back about 10 times the proposed settlement amount after announcing the deal.

Alsup expressed grave concerns that lawyers rushed the deal, which he said now risks being shoved “down the throat of authors,” Bloomberg Law reported.

In an order, Alsup clarified why he thought the proposed settlement was a chaotic mess. The judge said he was “disappointed that counsel have left important questions to be answered in the future,” seeking approval for the settlement despite the Works List, the Class List, the Claim Form, and the process for notification, allocation, and dispute resolution all remaining unresolved.

Denying preliminary approval of the settlement, Alsup suggested that the agreement is “nowhere close to complete,” forcing Anthropic and authors’ lawyers to “recalibrate” the largest publicly reported copyright class-action settlement ever inked, Bloomberg reported.

Of particular concern, the settlement failed to outline how disbursements would be managed for works with multiple claimants, Alsup noted. Until all these details are ironed out, Alsup intends to withhold approval, the order said.

One big change the judge wants to see is the addition of instructions requiring “anyone with copyright ownership” to opt in, with the consequence that the work won’t be covered if even one rights holder opts out, Bloomberg reported. There should also be instruction that any disputes over ownership or submitted claims should be settled in state court, Alsup said.



Supreme Court Chief Justice lets Trump fire FTC Democrat, at least for now

1935 Supreme Court is key precedent

The key precedent in the case is Humphrey’s Executor v. United States, a 1935 ruling in which the Supreme Court unanimously held that the president can only remove FTC commissioners for inefficiency, neglect of duty, or malfeasance in office. Trump’s termination notices to Slaughter and Bedoya said they were being fired simply because their presence on the commission “is inconsistent with my Administration’s priorities.”

The Trump administration argues that Humphrey’s Executor shouldn’t apply to the current version of the FTC because it exercises significant executive power. But the appeals court, in a 2-1 ruling, said “the present-day Commission exercises the same powers that the Court understood it to have in 1935 when Humphrey’s Executor was decided.”

“The government has no likelihood of success on appeal given controlling and directly on point Supreme Court precedent,” the panel majority said.

But while the government was found to have no likelihood of success in the DC Circuit appeals court, its chances are presumably much better in the Supreme Court. The Supreme Court previously stayed District Court decisions in cases involving Trump’s removal of Democrats from the National Labor Relations Board, the Merit Systems Protection Board, and the Consumer Product Safety Commission.

In a 2020 decision involving the Consumer Financial Protection Bureau, the court said in a footnote that its 1935 “conclusion that the FTC did not exercise executive power has not withstood the test of time.” If the Supreme Court ultimately rules in favor of Trump, it could throw out the Humphrey’s Executor ruling or clarify it in a way that makes it inapplicable to the FTC.

But Humphrey’s Executor is still a binding precedent, Slaughter’s opposition to the administrative stay said. “This Court should not grant an administrative stay where the court below simply ‘follow[ed] the case which directly controls,’ as it was required to do,” the Slaughter filing said.



“First of its kind” AI settlement: Anthropic to pay authors $1.5 billion

Authors revealed today that Anthropic agreed to pay $1.5 billion and destroy all copies of the books the AI company pirated to train its artificial intelligence models.

In a press release provided to Ars, the authors confirmed that the settlement is “believed to be the largest publicly reported recovery in the history of US copyright litigation.” Covering 500,000 works that Anthropic pirated for AI training, if a court approves the settlement, each author will receive $3,000 per work that Anthropic stole. “Depending on the number of claims submitted, the final figure per work could be higher,” the press release noted.

Anthropic has already agreed to the settlement terms, but a court must approve them before the settlement is finalized. Preliminary approval may be granted this week, while the ultimate decision may be delayed until 2026, the press release noted.

Justin Nelson, a lawyer representing the three authors who initially sued to spark the class action—Andrea Bartz, Kirk Wallace Johnson, and Charles Graeber—confirmed that if the “first of its kind” settlement “in the AI era” is approved, the payouts will “far” surpass “any other known copyright recovery.”

“It will provide meaningful compensation for each class work and sets a precedent requiring AI companies to pay copyright owners,” Nelson said. “This settlement sends a powerful message to AI companies and creators alike that taking copyrighted works from these pirate websites is wrong.”

Groups representing authors celebrated the settlement on Friday. The CEO of the Authors’ Guild, Mary Rasenberger, said it was “an excellent result for authors, publishers, and rightsholders generally.” Perhaps most critically, the settlement shows “there are serious consequences when” companies “pirate authors’ works to train their AI, robbing those least able to afford it,” Rasenberger said.



Warner Bros. sues Midjourney to stop AI knockoffs of Batman, Scooby-Doo


AI would’ve gotten away with it too…

Warner Bros. case builds on arguments raised in a Disney/Universal lawsuit.

DVD art for the animated movie Scooby-Doo & Batman: The Brave and the Bold. Credit: Warner Bros. Discovery

Warner Bros. hit Midjourney with a lawsuit Thursday, crafting a complaint that strives to shoot down defenses that the AI company has already raised in a similar lawsuit filed by Disney and Universal Studios earlier this year.

The big film studios have alleged that Midjourney profits off image generation models trained to produce outputs of popular characters. For Disney and Universal, intellectual property rights to pop icons like Darth Vader and the Simpsons were allegedly infringed. And now, the WB complaint defends rights over comic characters like Superman, Wonder Woman, and Batman, as well as characters considered “pillars of pop culture with a lasting impact on generations,” like Scooby-Doo and Bugs Bunny, and modern cartoon characters like Rick and Morty.

“Midjourney brazenly dispenses Warner Bros. Discovery’s intellectual property as if it were its own,” the WB complaint said, accusing Midjourney of allowing subscribers to “pick iconic” copyrighted characters and generate them in “every imaginable scene.”

Planning to seize Midjourney’s profits from allegedly using beloved characters to promote its service, Warner Bros. described Midjourney as “defiant and undeterred” by the Disney/Universal lawsuit. Despite that litigation, WB claimed that Midjourney has recently removed copyright protections in its supposedly shameful ongoing bid for profits. Nothing but a permanent injunction will end Midjourney’s outputs of allegedly “countless infringing images,” WB argued, branding Midjourney’s alleged infringements as “vast, intentional, and unrelenting.”

Examples of closely matching outputs include prompts for “screencaps” showing specific movie frames, a search term that at least one artist, Reid Southen, had optimistically predicted Midjourney would block last year, but it apparently did not.

Here are some examples included in WB’s complaint:

Midjourney’s output for the prompt, “Superman, classic cartoon character, DC comics.”

Midjourney could face devastating financial consequences in a loss. At trial, WB is hoping discovery will show the true extent of Midjourney’s alleged infringement, asking the court for maximum statutory damages, at $150,000 per infringing output. Just 2,000 infringing outputs unearthed could cost Midjourney more than its total revenue for 2024, which was approximately $300 million, the WB complaint said.

Warner Bros. hopes to hobble Midjourney’s best defense

For Midjourney, the WB complaint could potentially hit harder than the Disney/Universal lawsuit. WB’s complaint shows how closely studios are monitoring AI copyright litigation, likely choosing ideal moments to strike when studios feel they can better defend their property. So, while much of WB’s complaint echoes Disney and Universal’s arguments—which Midjourney has already begun defending against—IP attorney Randy McCarthy suggested in statements provided to Ars that WB also looked for seemingly smart ways to potentially overcome some of Midjourney’s best defenses when filing its complaint.

WB likely took note when Midjourney filed its response to the Disney/Universal lawsuit last month, arguing that its system is “trained on billions of publicly available images” and generates images not by retrieving a copy of an image in its database but based on “complex statistical relationships between visual features and words in the text-image pairs are encoded within the model.”

This defense could allow Midjourney to avoid claims that it copied WB images and distributes copies through its models. But hoping to dodge this defense, WB didn’t argue that Midjourney retains copies of its images. Rather, the entertainment giant raised a more nuanced argument that:

Midjourney used software, servers, and other technology to store and fix data associated with Warner Bros. Discovery’s Copyrighted Works in such a manner that those works are thereby embodied in the model, from which Midjourney is then able to generate, reproduce, publicly display, and distribute unlimited “copies” and “derivative works” of Warner Bros. Discovery’s works as defined by the Copyright Act.

McCarthy noted that WB’s argument pushes the court to at least consider that even though “Midjourney does not store copies of the works in its model,” its system “nonetheless accesses the data relating to the works that are stored by Midjourney’s system.”

“This seems to be a very clever way to counter MJ’s ‘statistical pattern analysis’ arguments,” McCarthy said.

If it’s a winning argument, that could give WB a path to wipe Midjourney’s models. WB argued that each time Midjourney provides a “substantially new” version of its image generator, it “repeats this process.” And that ongoing activity—due to Midjourney’s initial allegedly “massive copying” of WB works—allows Midjourney to “further reproduce, publicly display, publicly perform, and distribute image and video outputs that are identical or virtually identical to Warner Bros. Discovery’s Copyrighted Works in response to simple prompts from subscribers.”

Perhaps further strengthening the WB’s argument, the lawsuit noted that Midjourney promotes allegedly infringing outputs on its 24/7 YouTube channel and appears to have plans to compete with traditional TV and streaming services. Asking the court to block Midjourney’s outputs instead, WB claims it’s already been “substantially and irreparably harmed” and risks further damages if the AI image generator is left unchecked.

As alleged proof that the AI company knows its tool is being used to infringe WB property, WB pointed to Midjourney’s own Discord server and subreddit, where users post outputs depicting WB characters and share tips to help others do the same. They also called out Midjourney’s “Explore” page, which allows users to drop a WB-referencing output into the prompt field to generate similar images.

“It is hard to imagine copyright infringement that is any more willful than what Midjourney is doing here,” the WB complaint said.

WB and Midjourney did not immediately respond to Ars’ request to comment.

Midjourney slammed for promising “fewer blocked jobs”

McCarthy noted that WB’s legal strategy differs in other ways from the arguments Midjourney’s already weighed in the Disney/Universal lawsuit.

The WB complaint also anticipates Midjourney’s likely defense that users are generating infringing outputs, not Midjourney, which could invalidate any charges of direct copyright infringement.

In the Disney/Universal lawsuit, Midjourney argued that courts have recently found that AI tools referencing copyrighted works is “a quintessentially transformative fair use,” accusing studios of trying to censor “an instrument for user expression.” They claim that Midjourney cannot know about infringing outputs unless studios use the company’s DMCA process, while noting that subscribers have “any number of legitimate, noninfringing grounds to create images incorporating characters from popular culture,” including “non-commercial fan art, experimentation and ideation, and social commentary and criticism.”

To avoid losing on that front, the WB complaint doesn’t depend on a ruling that Midjourney directly infringed copyrights. Instead, the complaint “more fully” emphasizes how Midjourney may be “secondarily liable for infringement via contributory, inducement and/or vicarious liability by inducing its users to directly infringe,” McCarthy suggested.

Additionally, WB’s complaint “seems to be emphasizing” that Midjourney “allegedly has the technical means to prevent its system from accepting prompts that directly reference copyrighted characters,” and “that would prevent infringing outputs from being displayed,” McCarthy said.

The complaint noted that Midjourney is in full control of what outputs can be generated. Noting that Midjourney “temporarily refused to ‘animate'” outputs of WB characters after launching video generations, the lawsuit appears to have been filed in response to Midjourney “deliberately” removing those protections and then announcing that subscribers would experience “fewer blocked jobs.”

Together, these arguments “appear to be intended to lead to the inference that Midjourney is willfully enticing its users to infringe,” McCarthy said.

WB’s complaint details simple user prompts that generate allegedly infringing outputs without any need to manipulate the system. The ease of generating popular characters seems to make Midjourney a destination for users frustrated by other AI image generators that make it harder to generate infringing outputs, WB alleged.

On top of that, Midjourney also infringes copyrights by generating WB characters, “even in response to generic prompts like ‘classic comic book superhero battle.'” And while Midjourney has seemingly taken steps to block WB characters from appearing on its “Explore” page, where users can find inspiration for prompts, these guardrails aren’t perfect, but rather “spotty and suspicious,” WB alleged. Supposedly, searches for correctly spelled character names like “Batman” are blocked, but any user who accidentally or intentionally misspells a character’s name, like “Batma,” can learn an easy way to work around that block.

Additionally, WB alleged, “the outputs often contain extensive nuance and detail, background elements, costumes, and accessories beyond what was specified in the prompt.” And every time that Midjourney outputs an allegedly infringing image, it “also trains on the outputs it has generated,” the lawsuit noted, creating a never-ending cycle of continually enhanced AI fakes of pop icons.

Midjourney could slow down the cycle and “minimize” these allegedly infringing outputs, if it cannot automatically block them all, WB suggested. But instead, “Midjourney has made a calculated and profit-driven decision to offer zero protection for copyright owners even though Midjourney knows about the breathtaking scope of its piracy and copyright infringement,” WB alleged.

Fearing a supposed scheme to replace WB in the market by stealing its best-known characters, WB accused Midjourney of willfully allowing WB characters to be generated in order to “generate more money for Midjourney” to potentially compete in streaming markets.

Midjourney will remove protections “on a whim”

As Midjourney’s efforts to expand its features escalate, WB claimed that trust is lost. Even if Midjourney takes steps to address rightsholders’ concerns, WB argued, studios must remain watchful of every upgrade, since apparently, “Midjourney can and will remove copyright protection measures on a whim.”

The complaint noted that Midjourney just this week announced “plans to continue deploying new versions” of its image generator, promising to make it easier to search for and save popular artists’ styles—updating a feature that many artists loathe.

Without an injunction, Midjourney’s alleged infringement could interfere with WB’s licensing opportunities for its content, while “illegally and unfairly” diverting customers who buy WB products like posters, wall art, prints, and coloring books, the complaint said.

Perhaps Midjourney’s strongest defense could be efforts to prove that WB benefits from its image generator. In the Disney/Universal lawsuit, Midjourney pointed out that studios “benefit from generative AI models,” claiming that “many dozens of Midjourney subscribers are associated with” Disney and Universal corporate email addresses. If WB corporate email addresses are found among subscribers, Midjourney could claim that WB is trying to “have it both ways” by “seeking to profit” from AI tools while preventing Midjourney and its subscribers from doing the same.

McCarthy suggested it’s too soon to say how the WB battle will play out, but Midjourney’s response will reveal how it intends to shift tactics to avoid courts potentially picking apart its defense of its training data, while keeping any blame for copyright-infringing outputs squarely on users.

“As with the Disney/Universal lawsuit, we need to wait to see how Midjourney answers these latest allegations,” McCarthy said. “It is definitely an interesting development that will have widespread implications for many sectors of our society.”




Harvard beats Trump as judge orders US to restore $2.6 billion in funding

Burroughs’ footnote said that district courts try to follow Supreme Court rulings, but “the Supreme Court’s recent emergency docket rulings regarding grant terminations have not been models of clarity, and have left many issues unresolved.”

“This Court understands, of course, that the Supreme Court, like the district courts, is trying to resolve these issues quickly, often on an emergency basis, and that the issues are complex and evolving,” Burroughs wrote. “Given this, however, the Court respectfully submits that it is unhelpful and unnecessary to criticize district courts for ‘defy[ing]’ the Supreme Court when they are working to find the right answer in a rapidly evolving doctrinal landscape, where they must grapple with both existing precedent and interim guidance from the Supreme Court that appears to set that precedent aside without much explanation or consensus.”

White House blasts “activist Obama-appointed judge”

White House spokesperson Liz Huston issued a statement saying the government will immediately appeal the “egregious” ruling. “Just as President Trump correctly predicted on the day of the hearing, this activist Obama-appointed judge was always going to rule in Harvard’s favor, regardless of the facts,” Huston said, according to the Harvard Crimson.

Huston also said that “Harvard does not have a constitutional right to taxpayer dollars and remains ineligible for grants in the future” in a statement quoted by various media outlets. “To any fair-minded observer, it is clear that Harvard University failed to protect their students from harassment and allowed discrimination to plague their campus for years,” she said.

Harvard President Alan Garber wrote in a message on the university’s website that the “ruling affirms Harvard’s First Amendment and procedural rights, and validates our arguments in defense of the University’s academic freedom, critical scientific research, and the core principles of American higher education.”

Garber noted that the case is not over. “We will continue to assess the implications of the opinion, monitor further legal developments, and be mindful of the changing landscape in which we seek to fulfill our mission,” he wrote.



FCC chair teams up with Ted Cruz to block Wi-Fi hotspots for schoolkids

“Chairman Carr’s moves today are very unfortunate as they further signal that the Commission is no longer prioritizing closing the digital divide,” Schwartzman said. “In the 21st Century, education doesn’t stop when a student leaves school and today’s actions could lead to many students having a tougher time completing homework assignments because their families lack Internet access.”

Biden FCC expanded school and library program

Under then-Chairwoman Jessica Rosenworcel, the FCC expanded its E-Rate program in 2024 to let schools and libraries use Universal Service funding to lend out Wi-Fi hotspots and services that could be used off-premises. The FCC previously distributed Wi-Fi hotspots and other Internet access technology under pandemic-related spending authorized by Congress in 2021, but that program ended. The new hotspot lending program was supposed to begin this year.

Carr argues that when the Congressionally approved program ended, the FCC lost its authority to fund Wi-Fi hotspots for use outside of schools and libraries. “I dissented from both decisions at the time, and I am now pleased to circulate these two items, which will end the FCC’s illegal funding [of] unsupervised screen time for young kids,” he said.

Under Rosenworcel, the FCC said the Communications Act gives it “broad and flexible authority to establish rules governing the equipment and services that will be supported for eligible schools and libraries, as well as to design the specific mechanisms of support.”

The E-Rate program can continue providing telecom services to schools and libraries despite the hotspot component being axed. E-Rate disbursed about $1.75 billion in 2024, but could spend more based on demand because it has a funding cap of about $5 billion per year. E-Rate and other Universal Service programs are paid for through fees imposed on phone companies, which typically pass the cost on to consumers.



Ars Live: Consumer tech firms stuck scrambling ahead of looming chip tariffs

And perhaps the biggest confounding factor for businesses attempting to align supply chain choices with predictable tariff costs is looming chip tariffs. Trump has suggested those could come in August, but nearing the end of the month, there’s still no clarity there.

As tech firms brace for chip tariffs, Brzytwa will share CTA’s forecast based on a survey of industry experts, revealing the unique sourcing challenges chip tariffs will likely pose. It’s a particular pain point that Trump seems likely to impose taxes not just on imports of semiconductors but of any downstream product that includes a chip.

Because different electronics parts are typically assembled in different countries, supply chains for popular products have suddenly become a winding path, with potential tariff obstacles cropping up at any turn.

To Trump, complicating supply chains seems to be the point, intending to divert entire supply chains into the country to make the US a tech manufacturing hub, supposedly at the expense of his prime trade war target, China—which today is considered a world manufacturing “superpower.”

However, The New York Times this week suggested that Trump’s bullying tactics aren’t working on China, and experts suggest that now his chip tariffs risk not just spiking prices but throttling AI innovation in the US—just as China’s open source AI models shake up markets globally.

Brzytwa will share CTA research showing how the trade war has rattled, and will likely continue to rattle, tech firms into the foreseeable future. He’ll explain why tech firms can’t quickly or cheaply divert chip supply chains—and why policy that neglects to understand tech firms’ positions could be a lose-lose, putting Americans in danger of losing affordable access to popular tech without achieving Trump’s goal of altering China’s trade behavior.




Delete, Delete, Delete: How FCC Republicans are killing rules faster than ever


FCC speeds up rule-cutting, giving public as little as 10 days to file objections.

FCC Chairman Brendan Carr testifies before the House Appropriations Subcommittee on Financial Services and General Government on May 21, 2025 in Washington, DC. Credit: Getty Images | John McDonnell

The Federal Communications Commission’s Republican chairman is eliminating regulations at breakneck speed by using a process that cuts dozens of rules at a time while giving the public only 10 or 20 days to review each proposal and submit objections.

Chairman Brendan Carr started his “Delete, Delete, Delete” rule-cutting initiative in March and later announced he’d be using the Direct Final Rule (DFR) mechanism to eliminate regulations without a full public-comment period. Direct Final Rule is just one of several mechanisms the FCC is using in the Delete, Delete, Delete initiative. But despite the seeming obscurity of regulations deleted under Direct Final Rule so far, many observers are concerned that the process could easily be abused to eliminate more significant rules that protect consumers.

On July 24, the FCC removed what it called “11 outdated and useless rule provisions” related to telegraphs, rabbit-ear broadcast receivers, and phone booths. The FCC said the 11 provisions consist of “39 regulatory burdens, 7,194 words, and 16 pages.”

The FCC eliminated these rules without the “prior notice and comment” period typically used to comply with the US Administrative Procedure Act (APA), with the FCC finding that it had “good cause” to skip that step. The FCC said it would allow comment for 10 days and that rule eliminations would take effect automatically after the 10-day period unless the FCC concluded that it received “significant adverse comments.”

On August 7, the FCC again used Direct Final Rule to eliminate 98 rules and requirements imposed on broadcasters. This time, the FCC allowed 20 days for comment. But it maintained its stance that the rules would be deleted automatically at the end of the period if no “significant” comments were received.

By contrast, FCC rulemakings usually allow 30 days for initial comments and another 15 days for reply comments. The FCC then considers the comments, responds to the major issues raised, and drafts a final proposal that is put up for a commission vote. This process, which takes months and gives both the public and commissioners more opportunity to consider the changes, can apply both to the creation of new rules and the elimination of existing ones.

FCC’s lone Democrat warns of “Trojan horse”

Telecom companies want the FCC to eliminate rules quickly. As we’ve previously written, AT&T submitted comments to the Delete, Delete, Delete docket urging the agency to eliminate rules that can result in financial penalties “without the delay imposed by notice-and-comment proceeding.”

Carr’s use of Direct Final Rule has drawn criticism from advocacy groups, local governments that could be affected by rule changes, and the FCC’s only Democratic commissioner. Anna Gomez, the lone FCC Democrat, told Ars in a phone interview that the rapid rule-cutting method “could be a Trojan horse because what we did, or what the commission did, is it adopted a process without public comment to eliminate any rule it finds to be outdated and, crucially, unwarranted. We don’t define what either of those terms mean, which therefore could lead to a situation that’s ripe for abuse.”

Gomez said she’d “be concerned if we eliminated rules that are meant to protect or inform consumers, or to promote competition, such as the broadband labels. This commission seems to have entirely lost its focus on consumers.”

Gomez told us that she doesn’t think a 10-day comment period is ever appropriate and that Carr seems to be trying “to meet some kind of arbitrary rule reduction quota.” If the rules being eliminated are truly obsolete, “then what’s the rush?” she asked. “If we don’t give sufficient time for public comment, then what happens when we make a mistake? What happens when we eliminate rules and it turns out, in fact, that these rules were important to keep? That’s why we give the public due process to comment on when we adopt rules and when we eliminate rules.”

Gomez hasn’t objected to the specific rules deleted under this process so far, but she spoke out against the method both times Carr used Direct Final Rule. “I told the chairman that I could support initiating a proceeding to look at how a Direct Final Rule process could be used going forward and including a Notice of Proposed Rulemaking proposing to eliminate the rules the draft order purports to eliminate today. That offer was declined,” she said in her dissenting statement in the July vote.

Gomez said that rules originally adopted under a notice-and-comment process should not be eliminated “without seeking public comment on appropriate processes and guardrails.” She added that the “order does not limit the Direct Final Rule process to elimination of rules that are objectively obsolete with a clear definition of how that will be applied, asserting instead authority to remove rules that are ‘outdated or unwarranted.'”

Local governments object

Carr argued that the Administrative Procedure Act “gives the commission the authority to fast-track the elimination of rules that inarguably fail to serve the public interest. Using this authority, the Commission can forgo the usual prior notice and public comment period before repealing the rules for these bygone regulations.”

Carr justified the deletions by saying that “outdated and unnecessary regulations from Washington often derail efforts to build high-speed networks and infrastructure across the country.” It’s not clear why the specific rule deletions were needed to accelerate broadband deployment, though. As Carr said, the FCC’s first use of Direct Final Rule targeted regulations for “telegraph services, rabbit-ear broadcast receivers, and telephone booths—technologies that were considered outdated decades ago.”

Carr’s interpretation of the Administrative Procedure Act is wrong, said an August 6 filing submitted by local governments in Maryland, Massachusetts, the District of Columbia, Oregon, Virginia, California, New York, and Texas. Direct Final Rule “is intended for extremely simple, non-substantive decisions,” and the FCC process “is insufficient to ensure that future Commission decisions will fall within the good cause exception of the Administrative Procedure Act,” the filing said.

Local governments argued that “the new procedure is itself a substantive decision” and should be subject to a full notice-and-comment rulemaking. “The procedure adopted by the Commission makes it almost inevitable that the Commission will adopt rule changes outside of any APA exceptions,” the filing said.

The FCC could face court challenges. Gerard Lavery Lederer, a lawyer for the local government coalition, told Ars, “we fully anticipate that Chairman Carr and the FCC’s general counsel will take our concerns seriously.” But he also said local governments are worried about the FCC adopting industry proposals that “violate local government rights as preserved by Congress in the [Communications] Act” or that have “5th Amendment takings implications and/or 10th Amendment overreach issues.”

Is that tech really “obsolete”?

At least some rules targeted for deletion, like regulations on equipment used by radio and TV broadcast stations, may seem too arcane to care about. But a coalition of 22 public interest, civil rights, labor, and digital rights groups argued in a July 17 letter to Carr that some of the rule deletions could harm vulnerable populations and that the shortened comment period wasn’t long enough to determine the impact.

“For example, the Commission has targeted rules relating to calling cards and telephone booths in the draft Order as ‘obsolete,'” the letter said. “However, calling cards and pay phones remain important technologies for rural areas, immigrant communities, the unhoused, and others without reliable access to modern communications services. The impact on these communities is not clear and will not likely be clear in the short time provided for comment.”

The letter also said the FCC’s new procedure “would effectively eliminate any hope for timely judicial review of elimination of a rule on delegated authority.” Actions taken via delegated authority are handled by FCC bureaus without a vote of the commission.

So far, Carr has held commission votes for his Direct Final Rule actions rather than letting FCC bureaus issue orders themselves. But in the July order, the FCC said its bureaus and offices have previously adopted or repealed rules without notice and comment and “reaffirm[ed] that all Bureaus and Offices may continue to take such actions in situations that are exempt from the APA’s notice-and-comment requirements.”

“This is about pushing boundaries”

The advocacy groups’ letter said that delegating authority to bureaus “makes judicial review virtually impossible, even though the order goes into effect immediately.” Parties impacted by actions made on delegated authority can’t go straight to the courts and must instead “file an application for review with the Commission as a prerequisite to any petition for judicial review,” the letter said. The groups argued that “a Chairman that does not wish to permit judicial review of elimination of a rule through DFR may order a bureau to remove the rule, then simply refuse to take action on the application for review.”

The letter was signed by Public Knowledge; Asian Americans Advancing Justice-AAJC; the Benton Institute for Broadband & Society; the Center for Digital Democracy; Common Sense Media; the Communications Workers of America; the Electronic Privacy Information Center; HTTP; LGBT Tech; the Media Access Project; MediaJustice; the Multicultural Media, Telecom and Internet Council; the National Action Network; NBJC; the National Council of Negro Women; the National Digital Inclusion Alliance; the National Hispanic Media Coalition; the National Urban League; New America’s Open Technology Institute (OTI); The Leadership Conference on Civil and Human Rights; the United Church of Christ Media Justice Ministry; and UnidosUS.

Harold Feld, senior VP of consumer advocacy group Public Knowledge, told Ars that the FCC “has a long record of thinking that things are obsolete and then discovering when they run an actual proceeding that there are people still using these things.” Feld is worried that the Direct Final Rule process could be used to eliminate consumer protections that apply to old phone networks when they are replaced by either fiber or wireless service.

“I certainly think that this is about pushing boundaries,” Feld said. When there’s a full notice-and-comment period, the FCC has to “actually address every argument made” before eliminating a rule. When the FCC provides less explanation of a decision, that “makes it much harder to challenge on appeal,” he said.

“Once you have this tool that lets you just get rid of rules without the need to do a proceeding, without the need to address the comments that are raised in that proceeding… it’s easy to see how this ramps up and how hard it is for people to stay constantly alert to look for an announcement where they will then only have 10 days to respond once it gets published,” he said.

What is a “significant” comment?

The FCC says its use of Direct Final Rule is guided by December 2024 recommendations from the Administrative Conference of the United States (ACUS), a government agency. But the FCC didn’t implement Direct Final Rule in the exact way recommended by the ACUS.

The ACUS said its guidance “encourages agencies to use direct final rulemaking, interim final rulemaking, and alternative methods of public engagement to ensure robust public participation even when they rely properly on the good cause exemption.” But the ACUS recommended taking public comment for at least 30 days, while the FCC has used 10- and 20-day periods.

The ACUS also said that agencies should only move ahead with rule deletions “if no significant adverse comments are received.” If such comments are received, the agency “can either withdraw the rule or publish a regular proposed rule that is open for public comment,” the recommendation said.

The FCC said that if it receives comments, “we will evaluate whether they are significant adverse comments that warrant further procedures before changing the rules.” In their letter, the 22 advocacy groups said they are worried about the leeway the FCC is giving itself in deciding whether a comment is adverse and significant:

Although ACUS recommends that the agency revert to standard notice-and-comment rulemaking in the event of a single adverse comment, the draft Order requires multiple adverse comments—at which point the bureau/Commission will consider whether to shift to notice-and-comment rulemaking. If the bureau/Commission decides that adverse comments are not ‘substantive,’ it will explain its determination in a public notice that will not be filed in the Federal Register. The Commission states that it will be guided, but not bound, by the definition of ‘adverse comment’ recommended by ACUS.

Criticism from many corners

TechFreedom, a libertarian-leaning think tank, said it supports Carr’s goals in the “Delete, Delete, Delete” initiative but objected to the Direct Final Rule process. TechFreedom wrote in July comments that “deleting outdated regulations via a Direct Final Rule is unprecedented at the FCC.”

“No such process exists under current FCC rules,” the group said, urging the agency to seek public comment on the process. “If the Commission wishes to establish a new method by which it can eliminate existing regulations without undertaking a full rulemaking proceeding, it should open a docket specific to that subject and seek public comment,” the filing said.

TechFreedom said it is especially important for the FCC to “seek comment as to when the direct final rule procedures should be invoked… What is ‘routine,’ ‘insignificant,’ or ‘inconsequential’ and who is to decide—the Commissioners or the Bureau chiefs?”

The American Library Association and other groups wrote on August 14 that neither 10 nor 20 days is long enough for public comment. Moreover, the groups said the two Direct Final Rule actions so far “offer minimal explanation for why the rules are being removed. There is only one sentence describing elimination of many rules and each rule removal is described in a footnote with a parenthetical about the change. It is not enough.”

The Utility Reform Network offered similar objections about the process and said that the FCC declaring technologies to be “obsolete” and markets “outdated” without a detailed explanation “suggests the Commission’s view that these rules are not minor or technical changes but support a larger deregulatory effort that should itself be subject to notice-and-comment rulemaking.”

The National Consumer Law Center and other groups said that “rushing regulatory changes as proposed is likely illegal in many instances, counterproductive, and bad policy,” and that “changes to regulations should be effectuated only through careful, thoughtful, and considered processes.”

We contacted Chairman Carr’s office and did not receive a response.

FCC delegated key decisions to bureaus

Gomez told Ars that Direct Final Rule could serve a purpose “with the right procedures and guardrails in place.” For example, she said the quick rule deletions can be justified for eliminating rules that have become obsolete because of a court reversal or Congressional actions.

“I would argue that we cannot, under the Administrative Procedure Act and the Constitution, simply eliminate rules because we’ve made a judgment call that they are unwarranted,” she said. “That does not meet the good cause exemption to notice-and-comment requirements.”

Gomez also opposes FCC bureaus making significant decisions without a commission vote, which effectively gives Carr more power over the agency’s operations. For example, T-Mobile’s purchase of US Cellular’s wireless operations and Verizon’s purchase of Frontier were approved by the FCC at the Bureau level.

In another instance cited by Gomez, the FCC Media Bureau waived a requirement for broadcast licensees to file their biennial ownership reports for 18 months. “The waiver order, which was done at the bureau level on delegated authority, simply said ‘we find good cause to waive these rules.’ There was no analysis whatsoever,” Gomez said.

Gomez also pointed out that the Carr FCC’s Wireline Competition Bureau delayed implementation of certain price caps on prison phone services. The various bureau-level decisions are a “stretching of the guardrails that we have internally for when things should be done on delegated authority, and when they should be voted by the commission,” Gomez said. “I’m concerned that [Direct Final Rule] is just the next iteration of the same issue.”


Jon is a Senior IT Reporter for Ars Technica. He covers the telecom industry, Federal Communications Commission rulemakings, broadband consumer affairs, court cases, and government regulation of the tech industry.
