AI


Spotify peeved after 10,000 users sold data to build AI tools


Spotify sent a warning to stop data sales, but developers say they never got it.

For millions of Spotify users, the “Wrapped” feature—which crunches the numbers on their annual listening habits—has been a year-end highlight since it debuted in 2015. NPR once broke down exactly why our brains find the feature so “irresistible,” while Cosmopolitan last year declared that sharing Wrapped screenshots of top artists and songs had become “the ultimate status symbol” for tens of millions of music fans.

It’s no surprise then that, after a decade, some Spotify users who are especially eager to see Wrapped evolve are no longer willing to wait to see if Spotify will ever deliver the more creative streaming insights they crave.

With the help of AI, these users expect that their data can be analyzed more quickly, potentially uncovering overlooked or never-considered patterns and offering even more insight into what their listening habits say about them.

Imagine, for example, accessing a music recap that encapsulates a user’s full listening history—not just their top songs and artists. With that unlocked, users could track emotional patterns, analyzing how their music tastes reflected their moods over time and perhaps helping them adjust their listening habits to better cope with stress or major life events. And for users particularly intrigued by their own data, there’s even the potential to use AI to cross data streams from different platforms and perhaps understand even more about how their music choices impact their lives and tastes more broadly.

Likely just as appealing as gleaning deeper personal insights, though, users could also potentially build AI tools to compare listening habits with their friends. That could lead to nearly endless fun for the most invested music fans, where AI could be tapped to assess all kinds of random data points, like whose breakup playlists are more intense or who really spends the most time listening to a shared favorite artist.

In pursuit of supporting developers offering novel insights like these, more than 18,000 Spotify users have joined “Unwrapped,” a collective launched in February that allows them to pool and monetize their data.

Voting as a group through the decentralized data platform Vana—which Wired profiled earlier this year—these users can elect to sell their dataset to developers who are building AI tools offering fresh ways for users to analyze streaming data in ways that Spotify likely couldn’t or wouldn’t.

In June, the group made its first sale, with 99.5 percent of members voting yes. Vana co-founder Anna Kazlauskas told Ars that the collective—at the time about 10,000 members strong—sold a “small portion” of its data (users’ artist preferences) for $55,000 to Solo AI.

While each Spotify user only earned about $5 in cryptocurrency tokens—which Kazlauskas suggested was not “ideal,” wishing the users had earned about “a hundred times” more—she said the deal was “meaningful” in showing Spotify users that their data “is actually worth something.”

“I think this is what shows how these pools of data really act like a labor union,” Kazlauskas said. “A single Spotify user, you’re not going to be able to go say like, ‘Hey, I want to sell you my individual data.’ You actually need enough of a pool to sort of make it work.”

Spotify sent warning to Unwrapped

Unsurprisingly, Spotify is not happy about Unwrapped, which is perhaps a little too closely named to its popular branded feature for the streaming giant’s comfort. A spokesperson told Ars that Spotify sent a letter to the contact info listed for Unwrapped developers on their site, outlining concerns that the collective could be infringing on Spotify’s Wrapped trademark.

Further, the letter warned that Unwrapped violates Spotify’s developer policy, which bans using the Spotify platform or any Spotify content to build machine learning or AI models. And developers may also be violating terms by facilitating users’ sale of streaming data.

“Spotify honors our users’ privacy rights, including the right of portability,” Spotify’s spokesperson said. “All of our users can receive a copy of their personal data to use as they see fit. That said, UnwrappedData.org is in violation of our Developer Terms which prohibit the collection, aggregation, and sale of Spotify user data to third parties.”

But while Spotify suggests it has already taken steps to stop Unwrapped, the Unwrapped team told Ars that it never received any communication from Spotify. It plans to defend users’ right to “access, control, and benefit from their own data,” its statement said, while providing reassurances that it will “respect Spotify’s position as a global music leader.”

Unwrapped “does not distribute Spotify’s content, nor does it interfere with Spotify’s business,” developers argued. “What it provides is community-owned infrastructure that allows individuals to exercise rights they already hold under widely recognized data protection frameworks—rights to access their own listening history, preferences, and usage data.”

“When listeners choose to share or monetize their data together, they are not taking anything away from Spotify,” developers said. “They are simply exercising digital self-determination. To suggest otherwise is to claim that users do not truly own their data—that Spotify owns it for them.”

Jacob Hoffman-Andrews, a senior staff technologist for the digital rights group the Electronic Frontier Foundation, told Ars that—while EFF objects to data dividend schemes “where users are encouraged to share personal information in exchange for payment”—Spotify users should nevertheless always maintain control of their data.

“In general, listeners should have control of their own data, which includes exporting it for their own use,” Hoffman-Andrews said. “An individual’s musical history is of use not just to Spotify but also to the individual who created it. And there’s a long history of services that enable this sort of data portability, for instance Last.fm, which integrates with Spotify and many other services.”

To EFF, it seems ill-advised to sell data to AI companies, Hoffman-Andrews said, emphasizing “privacy isn’t a market commodity, it’s a fundamental right.”

“Of course, so is the right to control one’s own data,” Hoffman-Andrews noted, seeming to agree with Unwrapped developers in concluding that “ultimately, listeners should get to do what they want with their own information.”

Users’ right to privacy is the primary reason why Unwrapped developers told Ars that they’re hoping Spotify won’t try to block users from selling data to build AI.

“This is the heart of the issue: If Spotify seeks to restrict or penalize people for exercising these rights, it sends a chilling message that its listeners should have no say in how their own data is used,” the Unwrapped team’s statement said. “That is out of step not only with privacy law, but with the values of transparency, fairness, and community-driven innovation that define the next era of the Internet.”

Unwrapped sign-ups limited due to alleged Spotify issues

There could be more interest in Unwrapped. But Kazlauskas alleged to Ars that in the more than six months since Unwrapped’s launch, “Spotify has made it extraordinarily difficult” for users to port over their data. She claimed that developers have found that “every time they have an easy way for users to get their data,” Spotify shuts it down “in some way.”

Supposedly because of Spotify’s interference, Unwrapped remains in an early launch phase and can only offer limited spots for new users seeking to sell their data. Kazlauskas told Ars that about 300 users can be added each day due to the cumbersome and allegedly shifting process for porting over data.

Currently, however, Unwrapped is working on an update that could make that process more stable, Kazlauskas said, as well as changes to help users regularly update their streaming data. Those updates could perhaps attract more users to the collective.

Critics of Vana, like TechCrunch’s Kyle Wiggers, have suggested that data pools like Unwrapped will never reach “critical mass,” likely only appealing to niche users drawn to decentralization movements. Kazlauskas told Ars that data sale payments issued in cryptocurrency are one barrier for crypto-averse or crypto-shy users interested in Vana.

“The No. 1 thing I would say is, this kind of user experience problem where when you’re using any new kind of decentralized technology, you need to set up a wallet, then you’re getting tokens,” Kazlauskas explained. Users may feel culture shock, wondering, “What does that even mean? How do I vote with this thing? Is this real money?”

Kazlauskas is hoping that Vana supports a culture shift, striving to reach critical mass by giving users a “commercial lens” to start caring about data ownership. She also supports legislation like the Digital Choice Act in Utah, which “requires actually real-time API access, so people can get their data.” If the US had a federal law like that, Kazlauskas suspects that launching Unwrapped would have been “so much easier.”

Although regulations like Utah’s law could serve as a harbinger of a sea change, Kazlauskas noted that Big Tech companies that currently control AI markets employ a fierce lobbying force to maintain control over user data that decentralized movements just don’t have.

As Vana partners with Flower AI, striving, as Wired reported, to “shake up the AI industry” by releasing “a giant 100 billion-parameter model” later this year, Kazlauskas remains committed to ensuring that users are in control and “not just consumed.” She fears a future where tech giants may be motivated to use AI to surveil, influence, or manipulate users, when instead users could choose to band together and benefit from building more ethical AI.

“A world where a single company controls AI is honestly really dystopian,” Kazlauskas told Ars. “I think that it is really scary. And so I think that the path that decentralized AI offers is one where a large group of people are still in control, and you still get really powerful technology.”


Microsoft ends OpenAI exclusivity in Office, adds rival Anthropic

Microsoft’s Office 365 suite will soon incorporate AI models from Anthropic alongside existing OpenAI technology, The Information reported, ending years of exclusive reliance on OpenAI for generative AI features across Word, Excel, PowerPoint, and Outlook.

The shift reportedly follows internal testing that revealed Anthropic’s Claude Sonnet 4 model excels at specific Office tasks where OpenAI’s models fall short, particularly in visual design and spreadsheet automation, according to sources familiar with the project cited by The Information, who stressed the move is not a negotiating tactic.

Anthropic did not immediately respond to Ars Technica’s request for comment.

In an unusual arrangement showing the tangled alliances of the AI industry, Microsoft will reportedly purchase access to Anthropic’s models through Amazon Web Services—both a cloud computing rival and one of Anthropic’s major investors. The integration is expected to be announced within weeks, with subscription pricing for Office’s AI tools remaining unchanged, the report says.

Microsoft maintains that its OpenAI relationship remains intact. “As we’ve said, OpenAI will continue to be our partner on frontier models and we remain committed to our long-term partnership,” a Microsoft spokesperson told Reuters following the report. The tech giant has poured over $13 billion into OpenAI to date and is currently renegotiating the terms of their partnership, including its continued access to OpenAI’s models.

Microsoft’s tight partnership with OpenAI, stretching back to 2019, until recently gave the tech giant a head start in AI assistants based on language models, allowing for a rapid (though bumpy) deployment of OpenAI-powered features in Bing search and the rollout of Copilot assistants throughout its software ecosystem. It’s worth noting, however, that a recent report from the UK government found no clear productivity boost from using Copilot AI in daily work tasks among study participants.


AI vs. MAGA: Populists alarmed by Trump’s embrace of AI, Big Tech

Some Republicans are still angry over the deplatforming of Trump by tech executives once known for their progressive politics. They had been joined by a “vocal and growing group of conservatives who are fundamentally suspicious of the benefits of technological innovation,” Thierer said.

With MAGA skeptics on one side and Big Tech allies of the president on the other, a “battle for the soul of the conservative movement” is under way.

Popular resentment is now a threat to Trump’s Republican Party, warn some of its biggest supporters—especially if AI begins displacing jobs as many of its exponents suggest.

“You can displace farm workers—what are they going to do about it? You can displace factory workers—they will just kill themselves with drugs and fast food,” Tucker Carlson, one of the MAGA movement’s most prominent media figures, told a tech conference on Monday.

“If you do that to lawyers and non-profit sector employees, you will get a revolution.”

It made Trump’s embrace of Silicon Valley bosses a “significant risk” for his administration ahead of next year’s midterm elections, a leading Republican strategist said.

“It’s a real double-edged sword—the administration is forced to embrace [AI] because if the US is not the leader in AI, China will be,” the strategist said, echoing the kind of argument made by Sacks and fellow Trump adviser Michael Kratsios for their AI policy platform.

“But you could see unemployment spiking over the next year,” the strategist said.

Other MAGA supporters are urging Trump to tone down at least his public cheerleading for an AI sector so many of them consider a threat.

“The pressure that is being placed on conservatives to fall in line… is a recipe for discontent,” said Toscano.

By courting AI bosses, the Republican Party, which claims to represent the pro-family movement, religious communities, and American workers, appeared to be embracing those who are antithetical to all of those groups, he warned.

“The current view of things suggests that the most important members of the party are those that are from Silicon Valley,” Toscano said.

Additional reporting by Cristina Criddle in San Francisco.

© 2025 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.


Pay-per-output? AI firms blindsided by beefed up robots.txt instructions.


“Really Simple Licensing” makes it easier for creators to get paid for AI scraping.

Logo for the “Really Simple Licensing” (RSL) standard. Credit: via RSL Collective

Leading Internet companies and publishers—including Reddit, Yahoo, Quora, Medium, The Daily Beast, Fastly, and more—think there may finally be a solution to end AI crawlers hammering websites to scrape content without permission or compensation.

Announced Wednesday morning, the “Really Simple Licensing” (RSL) standard evolves robots.txt instructions by adding an automated licensing layer that’s designed to block bots that don’t fairly compensate creators for content.

Free for any publisher to use starting today, the RSL standard is an open, decentralized protocol that makes clear to AI crawlers and agents the terms for licensing, usage, and compensation of any content used to train AI, a press release noted.

The standard was created by the RSL Collective, which was founded by Doug Leeds, former CEO of Ask.com, and Eckart Walther, a former Yahoo vice president of products and co-creator of the RSS standard, which made it easy to syndicate content across the web.

Based on the “Really Simple Syndication” (RSS) standard, RSL terms can be applied to protect any digital content, including webpages, books, videos, and datasets. The new standard supports “a range of licensing, usage, and royalty models, including free, attribution, subscription, pay-per-crawl (publishers get compensated every time an AI application crawls their content), and pay-per-inference (publishers get compensated every time an AI application uses their content to generate a response),” the press release said.

Leeds told Ars that the idea to use the RSS “playbook” to roll out the RSL standard arose after he invited Walther to speak to University of California, Berkeley students at the end of last year. That’s when the longtime friends with search backgrounds began pondering how AI had changed the search industry, as publishers today are forced to compete with AI outputs referencing their own content as search traffic nosedives.

Walther had watched the RSS standard quickly become adopted by millions of sites, and he realized that RSS had actually always been a licensing standard, Leeds said. Essentially, by adopting the RSS standard, publishers agreed to let search engines license a “bit” of their content in exchange for search traffic, and Walther realized that it could be just as straightforward to add AI licensing terms in the same way. That way, publishers could strive to recapture lost search revenue by agreeing to license all or some part of their content to train AI in return for payment each time AI outputs link to their content.

Leeds told Ars that the RSL standard doesn’t just benefit publishers, though. It also solves a problem for AI companies, which have complained in litigation over AI scraping that there is no effective way to license content across the web.

“We have listened to them, and what we’ve heard them say is… we need a new protocol,” Leeds said. With the RSL standard, AI firms get a “scalable way to get all the content” they want, while setting an incentive that they’ll only have to pay for the best content that their models actually reference.

“If they’re using it, they pay for it, and if they’re not using it, they don’t pay for it,” Leeds said.

No telling yet how AI firms will react to RSL

At this point, it’s hard to say if AI companies will embrace the RSL standard. Ars reached out to Google, Meta, OpenAI, and xAI—some of the big tech companies whose crawlers have drawn scrutiny—to see if it was technically feasible to pay publishers for every output referencing their content. xAI did not respond, and the other companies declined to comment without further detail about the standard, appearing to have not yet considered how a licensing layer beefing up robots.txt could impact their scraping.

Today will likely be the first chance for AI companies to wrap their heads around the idea of paying publishers per output. Leeds confirmed that the RSL Collective did not consult with AI companies when developing the RSL standard.

But AI companies know that they need a constant stream of fresh content to keep their tools relevant and to continually innovate, Leeds suggested. In that way, the RSL standard “supports what supports them,” Leeds said, “and it creates the appropriate incentive system” to create sustainable royalty streams for creators and ensure that human creativity doesn’t wane as AI evolves.

While we’ll have to wait to see how AI firms react to RSL, early adopters of the standard celebrated the launch today. That included Neil Vogel, CEO of People Inc., who said that “RSL moves the industry forward—evolving from simply blocking unauthorized crawlers, to setting our licensing terms, for all AI use cases, at global web scale.”

Simon Wistow, co-founder of Fastly, suggested the solution “is a timely and necessary response to the shifting economics of the web.”

“By making it easy for publishers to define and enforce licensing terms, RSL lays the foundation for a healthy content ecosystem—one where innovation and investment in original work are rewarded, and where collaboration between publishers and AI companies becomes frictionless and mutually beneficial,” Wistow said.

Leeds noted that a key benefit of the RSL standard is that even small creators will now have an opportunity to generate revenue for helping to train AI. Tony Stubblebine, CEO of Medium, did not mince words when explaining the battle that bloggers face as AI crawlers threaten to divert their traffic without compensating them.

“Right now, AI runs on stolen content,” Stubblebine said. “Adopting this RSL Standard is how we force those AI companies to either pay for what they use, stop using it, or shut down.”

How will the RSL standard be enforced?

On the RSL standard site, publishers can find common terms to add templated or customized text to their robots.txt files to adopt the RSL standard today and start protecting their content from unfettered AI scraping. Here’s an example of how machine-readable licensing terms could look, added directly to robots.txt files:

# NOTICE: all crawlers and bots are strictly prohibited from using this
# content for AI training without complying with the terms of the RSL
# Collective AI royalty license. Any use of this content for AI training
# without a license is a violation of our intellectual property rights.
License: https://rslcollective.org/royalty.xml
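Because the License field is just another line in robots.txt, checking a site for it programmatically is straightforward. Below is a minimal sketch, not an official RSL parser, and find_rsl_license is a hypothetical helper name; it simply fetches a site’s robots.txt and pulls out the License URL, assuming the field appears on its own line as in the example above:

import urllib.request

def find_rsl_license(site: str) -> str | None:
    # Fetch the site's robots.txt and scan it for an RSL "License:" line.
    with urllib.request.urlopen(f"{site}/robots.txt") as resp:
        body = resp.read().decode("utf-8", errors="replace")
    for line in body.splitlines():
        if line.strip().lower().startswith("license:"):
            # Everything after the first colon is the license URL.
            return line.split(":", 1)[1].strip()
    return None  # no RSL terms advertised

print(find_rsl_license("https://example.com"))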

Through RSL terms, publishers can automate licensing, with the cloud company Fastly partnering with the collective to provide technical enforcement, which Leeds described as a bouncer that keeps unapproved bots away from valuable content. It seems likely that Cloudflare, which launched a pay-per-crawl program blocking greedy crawlers in July, could also help enforce the RSL standard.

For publishers, the standard “solves a business problem immediately,” Leeds told Ars, so the collective is hopeful that RSL will be rapidly and widely adopted. As further incentive, publishers can also rely on the RSL standard to “easily encrypt and license non-published, proprietary content to AI companies, including paywalled articles, books, videos, images, and data,” the RSL Collective site said, and that potentially could expand AI firms’ data pool.

On top of technical enforcement, Leeds said that publishers and content creators could legally enforce the terms, noting that the recent $1.5 billion Anthropic settlement suggests “there’s real money at stake” if you don’t train AI “legitimately.”

Should the industry adopt the standard, it could “establish fair market prices and strengthen negotiation leverage for all publishers,” the press release said. And Leeds noted that it’s very common for regulations to follow industry solutions (consider the Digital Millennium Copyright Act). Since the RSL Collective is already in talks with lawmakers, Leeds thinks “there’s good reason to believe” that AI companies will soon “be forced to acknowledge” the standard.

“But even better than that,” Leeds said, “it’s in their interest” to adopt the standard.

With RSL, AI firms can license content at scale “in a way that’s fair [and] preserves the content that they need to make their products continue to innovate.”

Additionally, the RSL standard may solve a problem that risks gutting trust and interest in AI at this early stage.

Leeds noted that currently, AI outputs don’t provide “the best answer” to prompts but instead rely on mashing up answers from different sources to avoid taking too much content from one site. That means that not only do AI companies “spend an enormous amount of money on compute costs to do that,” but AI tools may also be more prone to hallucination in the process of “mashing up” source material “to make something that’s not the best answer because they don’t have the rights to the best answer.”

“The best answer could exist somewhere,” Leeds said. But “they’re spending billions of dollars to create hallucinations, and we’re talking about: Let’s just solve that with a licensing scheme that allows you to use the actual content in a way that solves the user’s query best.”

By transforming the “ecosystem” with a standard that’s “actually sustainable and fair,” Leeds said that AI companies could also ensure that humanity never gets to the point where “humans stop producing” and “turn to AI to reproduce what humans can’t.”

Failing to adopt the RSL standard would be bad for AI innovation, Leeds suggested, perhaps paving the way for AI to replace search with a “sort of self-fulfilling swap of bad content that actually one doesn’t have any current information, doesn’t have any current thinking, because it’s all based on old training information.”

To Leeds, the RSL standard is ultimately “about creating the system that allows the open web to continue. And that happens when we get adoption from everybody,” he said, insisting that “literally the small guys are as important as the big guys” in pushing the entire industry to change and fairly compensate creators.


Reddit bug caused lesbian subreddit to be labeled as a place for “straight” women

Explaining further to Ars, Reddit spokesperson Tim Rathschmidt said:

There was a small bug in a test we ran that mistakenly caused the English-to-English translation(s) you saw. That bug has been resolved. Unsurprisingly, English-to-English translations are not part of our strategy, as they aren’t necessary. English-to-English translations were not a desired or expected outcome of the test.

Reddit pulled the test it was running, but its machine learning-powered translations are still functioning, Rathschmidt said. The company plans to fix the bug and run its unspecified “test” again.

Reddit’s explanation differs from user theories floating around beforehand, which were mainly that Reddit was rewriting user-created summaries with generative AI, possibly to boost SEO. Some may still be perturbed by the problem persisting for weeks without explanation and the apparent lack of manual checks for the translation service. However, Redditors can now take comfort in knowing that Reddit is not currently using generative AI to alter user-generated content without notice.

Paige_Railstone, however, remains frustrated and wants to tell Reddit admins, “STOP. Hand off.” The translation bug, they noted, led to people posting on a subreddit for parents with autism that their child might be autistic, “and how terrible that would be for them,” Paige_Railstone recalled.

“These are the kind of unintentionally insulting posts that drive autistics into leaving a community, and it increases the workload of us moderators,” they said.

Paige_Railstone also sees the incident as a reason for moderators to be more cautious.

“This never used to be a concern, but this translation service was rolled out without any notification that I’m aware of, and no option to disable it within the mods’ control. That has the potential to cause problems, as we’ve seen over the past two weeks,” they said.

Disclosure: Advance Publications, which owns Ars Technica parent Condé Nast, is the largest shareholder in Reddit.


Judge: Anthropic’s $1.5B settlement is being shoved “down the throat of authors”

At a hearing Monday, US district judge William Alsup blasted a proposed $1.5 billion settlement over Anthropic’s rampant piracy of books to train AI.

The proposed settlement comes in a case where Anthropic could have owed more than $1 trillion in damages after Alsup certified a class that included up to 7 million claimants whose works were illegally downloaded by the AI company.

Instead, critics fear Anthropic will get off cheaply, striking a deal with authors suing that covers less than 500,000 works and paying a small fraction of its total valuation (currently $183 billion) to get away with the massive theft. Defector noted that the settlement doesn’t even require Anthropic to admit wrongdoing, while the company continues raising billions based on models trained on authors’ works. Most recently, Anthropic raised $13 billion in a funding round, making back about 10 times the proposed settlement amount after announcing the deal.

Alsup expressed grave concerns that lawyers rushed the deal, which he said now risks being shoved “down the throat of authors,” Bloomberg Law reported.

In an order, Alsup clarified why he thought the proposed settlement was a chaotic mess. The judge said he was “disappointed that counsel have left important questions to be answered in the future,” seeking approval for the settlement despite the Works List, the Class List, the Claim Form, and the process for notification, allocation, and dispute resolution all remaining unresolved.

Denying preliminary approval of the settlement, Alsup suggested that the agreement is “nowhere close to complete,” forcing Anthropic and authors’ lawyers to “recalibrate” the largest publicly reported copyright class-action settlement ever inked, Bloomberg reported.

Of particular concern, the settlement failed to outline how disbursements would be managed for works with multiple claimants, Alsup noted. Until all these details are ironed out, Alsup intends to withhold approval, the order said.

One big change the judge wants to see is the addition of instructions requiring “anyone with copyright ownership” to opt in, with the consequence that the work won’t be covered if even one rights holder opts out, Bloomberg reported. There should also be instruction that any disputes over ownership or submitted claims should be settled in state court, Alsup said.


Why accessibility might be AI’s biggest breakthrough

For those with visual impairments, language models can summarize visual content and reformat information. Tools like ChatGPT’s voice mode with video and Be My Eyes allow a machine to describe real-world visual scenes in ways that were impossible just a few years ago.

AI language tools may be providing unofficial stealth accommodations for students—support that doesn’t require formal diagnosis, workplace disclosure, or special equipment. Yet this informal support system comes with its own risks. Language models do confabulate—the UK Department for Business and Trade study found 22 percent of users identified false information in AI outputs—which could be particularly harmful for users relying on them for essential support.

When AI assistance becomes dependence

Beyond the workplace, the drawbacks may have a particular impact on students who use the technology. The authors of a 2025 study on students with disabilities using generative AI cautioned: “Key concerns students with disabilities had included the inaccuracy of AI answers, risks to academic integrity, and subscription cost barriers.” Students in that study had ADHD, dyslexia, dyspraxia, and autism, with ChatGPT being the most commonly used tool.

Mistakes in AI outputs are especially pernicious because, due to grandiose visions of near-term AI technology, some people think today’s AI assistants can perform tasks that are actually far outside their scope. As research on blind users’ experiences suggested, people develop complex (sometimes flawed) mental models of how these tools work, showing the need for higher awareness of AI language model drawbacks among the general public.

For the UK government employees who participated in the initial study, these questions moved from theoretical to immediate when the pilot ended in December 2024. After that time, many participants reported difficulty readjusting to work without AI assistance—particularly those with disabilities who had come to rely on the accessibility benefits. The department hasn’t announced the next steps, leaving users in limbo. When participants report difficulty readjusting to work without AI while productivity gains remain marginal, accessibility emerges as potentially the first AI application with irreplaceable value.


In court filing, Google concedes the open web is in “rapid decline”

Advertising and the open web

Google objects to this characterization. A spokesperson calls it a “cherry-picked” line from the filing that has been misconstrued. Google’s position is that the entire passage is referring to open-web advertising rather than the open web itself. “Investments in non-open web display advertising like connected TV and retail media are growing at the expense of those in open web display advertising,” says Google.

If we assume this is true, it doesn’t exactly let Google off the hook. As AI tools have proliferated, we’ve heard from Google time and time again that traffic from search to the web is healthy. When people use the web more, Google makes more money from all those eyeballs on ads, and indeed, Google’s earnings have never been higher. However, Google isn’t just putting ads on websites—Google is also big in mobile apps. As Google’s own filings make clear, in-app ads are by far the largest growth sector in advertising. Meanwhile, time spent on non-social and non-video content is stagnant or slightly declining, and as a result, display ads on the open web earn less.

So, whether Google’s wording in the filing is meant to address the web or advertising on the web may be a distinction without a difference. If ads on websites aren’t making the big bucks, Google’s incentives will undoubtedly change. While Google says its increasingly AI-first search experience is still consistently sending traffic to websites, it has not released data to show that. If display ads are in “rapid decline,” then it’s not really in Google’s interest to continue sending traffic to non-social and non-video content. Maybe it makes more sense to keep people penned up on its platform where they can interact with its AI tools.

Of course, the web isn’t just ad-supported content—Google representatives have repeatedly trotted out the claim that Google’s crawlers have seen a 45 percent increase in indexable content since 2023. This metric, Google says, shows that open web advertising could be imploding while the web is healthy and thriving. We don’t know what kind of content is in this 45 percent, but given the timeframe cited, AI slop is a safe bet.

If the increasingly AI-heavy open web isn’t worth advertisers’ attention, is it really right to claim the web is thriving as Google so often does? Google’s filing may simply be admitting to what we all know: the open web is supported by advertising, and ads increasingly can’t pay the bills. And is that a thriving web? Not unless you count AI slop.


“First of its kind” AI settlement: Anthropic to pay authors $1.5 billion

Authors revealed today that Anthropic agreed to pay $1.5 billion and destroy all copies of the books the AI company pirated to train its artificial intelligence models.

In a press release provided to Ars, the authors confirmed that the settlement is “believed to be the largest publicly reported recovery in the history of US copyright litigation.” The settlement covers 500,000 works that Anthropic pirated for AI training; if a court approves it, each author will receive $3,000 per work that Anthropic stole (500,000 works at $3,000 each comes to $1.5 billion). “Depending on the number of claims submitted, the final figure per work could be higher,” the press release noted.

Anthropic has already agreed to the settlement terms, but a court must approve them before the settlement is finalized. Preliminary approval may be granted this week, while the ultimate decision may be delayed until 2026, the press release noted.

Justin Nelson, a lawyer representing the three authors who initially sued to spark the class action—Andrea Bartz, Kirk Wallace Johnson, and Charles Graeber—confirmed that if the “first of its kind” settlement “in the AI era” is approved, the payouts will “far” surpass “any other known copyright recovery.”

“It will provide meaningful compensation for each class work and sets a precedent requiring AI companies to pay copyright owners,” Nelson said. “This settlement sends a powerful message to AI companies and creators alike that taking copyrighted works from these pirate websites is wrong.”

Groups representing authors celebrated the settlement on Friday. The CEO of the Authors Guild, Mary Rasenberger, said it was “an excellent result for authors, publishers, and rightsholders generally.” Perhaps most critically, the settlement shows “there are serious consequences when” companies “pirate authors’ works to train their AI, robbing those least able to afford it,” Rasenberger said.


Warner Bros. sues Midjourney to stop AI knockoffs of Batman, Scooby-Doo


AI would’ve gotten away with it too…

Warner Bros. case builds on arguments raised in a Disney/Universal lawsuit.

DVD art for the animated movie Scooby-Doo & Batman: The Brave and the Bold. Credit: Warner Bros. Discovery

Warner Bros. hit Midjourney with a lawsuit Thursday, crafting a complaint that strives to shoot down defenses that the AI company has already raised in a similar lawsuit filed by Disney and Universal Studios earlier this year.

The big film studios have alleged that Midjourney profits off image generation models trained to produce outputs of popular characters. For Disney and Universal, intellectual property rights to pop icons like Darth Vader and the Simpsons were allegedly infringed. And now, the WB complaint defends rights over comic characters like Superman, Wonder Woman, and Batman, as well as characters considered “pillars of pop culture with a lasting impact on generations,” like Scooby-Doo and Bugs Bunny, and modern cartoon characters like Rick and Morty.

“Midjourney brazenly dispenses Warner Bros. Discovery’s intellectual property as if it were its own,” the WB complaint said, accusing Midjourney of allowing subscribers to “pick iconic” copyrighted characters and generate them in “every imaginable scene.”

Planning to seize Midjourney’s profits from allegedly using beloved characters to promote its service, Warner Bros. described Midjourney as “defiant and undeterred” by the Disney/Universal lawsuit. Despite that litigation, WB claimed that Midjourney has recently removed copyright protections in its supposedly shameful ongoing bid for profits. Nothing but a permanent injunction will end Midjourney’s outputs of allegedly “countless infringing images,” WB argued, branding Midjourney’s alleged infringements as “vast, intentional, and unrelenting.”

Examples of closely matching outputs include prompts for “screencaps” showing specific movie frames, a search term that artist Reid Southen optimistically predicted last year Midjourney would block, though it apparently did not.

Here are some examples included in WB’s complaint:

Midjourney’s output for the prompt, “Superman, classic cartoon character, DC comics.”

Midjourney could face devastating financial consequences in a loss. At trial, WB is hoping discovery will show the true extent of Midjourney’s alleged infringement, asking the court for maximum statutory damages of $150,000 per infringing output. At that rate, just 2,000 infringing outputs unearthed (2,000 × $150,000 = $300 million) could cost Midjourney more than its total revenue for 2024, which was approximately $300 million, the WB complaint said.

Warner Bros. hopes to hobble Midjourney’s best defense

For Midjourney, the WB complaint could potentially hit harder than the Disney/Universal lawsuit. WB’s complaint shows how closely studios are monitoring AI copyright litigation, likely choosing ideal moments to strike when studios feel they can better defend their property. So, while much of WB’s complaint echoes Disney and Universal’s arguments—which Midjourney has already begun defending against—IP attorney Randy McCarthy suggested in statements provided to Ars that WB also looked for seemingly smart ways to potentially overcome some of Midjourney’s best defenses when filing its complaint.

WB likely took note when Midjourney filed its response to the Disney/Universal lawsuit last month, arguing that its system is “trained on billions of publicly available images” and generates images not by retrieving a copy of an image in its database but based on “complex statistical relationships between visual features and words in the text-image pairs are encoded within the model.”

This defense could allow Midjourney to avoid claims that it copied WB images and distributes copies through its models. Hoping to preempt it, WB didn’t argue that Midjourney retains copies of its images. Rather, the entertainment giant raised a more nuanced argument:

Midjourney used software, servers, and other technology to store and fix data associated with Warner Bros. Discovery’s Copyrighted Works in such a manner that those works are thereby embodied in the model, from which Midjourney is then able to generate, reproduce, publicly display, and distribute unlimited “copies” and “derivative works” of Warner Bros. Discovery’s works as defined by the Copyright Act.

McCarthy noted that WB’s argument pushes the court to at least consider that even though “Midjourney does not store copies of the works in its model,” its system “nonetheless accesses the data relating to the works that are stored by Midjourney’s system.”

“This seems to be a very clever way to counter MJ’s ‘statistical pattern analysis’ arguments,” McCarthy said.

If it’s a winning argument, that could give WB a path to wipe Midjourney’s models. WB argued that each time Midjourney provides a “substantially new” version of its image generator, it “repeats this process.” And that ongoing activity—due to Midjourney’s initial allegedly “massive copying” of WB works—allows Midjourney to “further reproduce, publicly display, publicly perform, and distribute image and video outputs that are identical or virtually identical to Warner Bros. Discovery’s Copyrighted Works in response to simple prompts from subscribers.”

Perhaps further strengthening the WB’s argument, the lawsuit noted that Midjourney promotes allegedly infringing outputs on its 24/7 YouTube channel and appears to have plans to compete with traditional TV and streaming services. Asking the court to block Midjourney’s outputs instead, WB claims it’s already been “substantially and irreparably harmed” and risks further damages if the AI image generator is left unchecked.

As alleged proof that the AI company knows its tool is being used to infringe WB property, WB pointed to Midjourney’s own Discord server and subreddit, where users post outputs depicting WB characters and share tips to help others do the same. They also called out Midjourney’s “Explore” page, which allows users to drop a WB-referencing output into the prompt field to generate similar images.

“It is hard to imagine copyright infringement that is any more willful than what Midjourney is doing here,” the WB complaint said.

WB and Midjourney did not immediately respond to Ars’ request to comment.

Midjourney slammed for promising “fewer blocked jobs”

McCarthy noted that WB’s legal strategy differs in other ways from the arguments Midjourney’s already weighed in the Disney/Universal lawsuit.

The WB complaint also anticipates Midjourney’s likely defense that users are generating infringing outputs, not Midjourney, which could invalidate any charges of direct copyright infringement.

In the Disney/Universal lawsuit, Midjourney argued that courts have recently found AI tools’ referencing of copyrighted works to be “a quintessentially transformative fair use,” accusing studios of trying to censor “an instrument for user expression.” Midjourney claims that it cannot know about infringing outputs unless studios use the company’s DMCA process, while noting that subscribers have “any number of legitimate, noninfringing grounds to create images incorporating characters from popular culture,” including “non-commercial fan art, experimentation and ideation, and social commentary and criticism.”

To avoid losing on that front, the WB complaint doesn’t depend on a ruling that Midjourney directly infringed copyrights. Instead, the complaint “more fully” emphasizes how Midjourney may be “secondarily liable for infringement via contributory, inducement and/or vicarious liability by inducing its users to directly infringe,” McCarthy suggested.

Additionally, WB’s complaint “seems to be emphasizing” that Midjourney “allegedly has the technical means to prevent its system from accepting prompts that directly reference copyrighted characters,” and “that would prevent infringing outputs from being displayed,” McCarthy said.

The complaint noted that Midjourney is in full control of what outputs can be generated. Noting that Midjourney “temporarily refused to ‘animate'” outputs of WB characters after launching video generations, the lawsuit appears to have been filed in response to Midjourney “deliberately” removing those protections and then announcing that subscribers would experience “fewer blocked jobs.”

Together, these arguments “appear to be intended to lead to the inference that Midjourney is willfully enticing its users to infringe,” McCarthy said.

WB’s complaint details simple user prompts that generate allegedly infringing outputs without any need to manipulate the system. The ease of generating popular characters seems to make Midjourney a destination for users frustrated by other AI image generators that make it harder to generate infringing outputs, WB alleged.

On top of that, Midjourney also infringes copyrights by generating WB characters, “even in response to generic prompts like ‘classic comic book superhero battle.’” And while Midjourney has seemingly taken steps to block WB characters from appearing on its “Explore” page, where users can find inspiration for prompts, these guardrails aren’t perfect, but rather “spotty and suspicious,” WB alleged. Supposedly, searches for correctly spelled character names like “Batman” are blocked, but any user who accidentally or intentionally misspells a character’s name, like “Batma,” can learn an easy way to work around that block.

Additionally, WB alleged, “the outputs often contain extensive nuance and detail, background elements, costumes, and accessories beyond what was specified in the prompt.” And every time that Midjourney outputs an allegedly infringing image, it “also trains on the outputs it has generated,” the lawsuit noted, creating a never-ending cycle of continually enhanced AI fakes of pop icons.

Midjourney could slow down the cycle and “minimize” these allegedly infringing outputs, if it cannot automatically block them all, WB suggested. But instead, “Midjourney has made a calculated and profit-driven decision to offer zero protection for copyright owners even though Midjourney knows about the breathtaking scope of its piracy and copyright infringement,” WB alleged.

Fearing a supposed scheme to replace WB in the market by stealing its best-known characters, WB accused Midjourney of willfully allowing WB characters to be generated in order to “generate more money for Midjourney” to potentially compete in streaming markets.

Midjourney will remove protections “on a whim”

As Midjourney escalates its efforts to expand its features, WB claimed, trust is lost. Even if Midjourney takes steps to address rightsholders’ concerns, WB argued, studios must remain watchful of every upgrade, since apparently, “Midjourney can and will remove copyright protection measures on a whim.”

The complaint noted that Midjourney just this week announced “plans to continue deploying new versions” of its image generator, promising to make it easier to search for and save popular artists’ styles—updating a feature that many artists loathe.

Without an injunction, Midjourney’s alleged infringement could interfere with WB’s licensing opportunities for its content, while “illegally and unfairly” diverting customers who buy WB products like posters, wall art, prints, and coloring books, the complaint said.

Perhaps Midjourney’s strongest defense could be efforts to prove that WB benefits from its image generator. In the Disney/Universal lawsuit, Midjourney pointed out that studios “benefit from generative AI models,” claiming that “many dozens of Midjourney subscribers are associated with” Disney and Universal corporate email addresses. If WB corporate email addresses are found among subscribers, Midjourney could claim that WB is trying to “have it both ways” by “seeking to profit” from AI tools while preventing Midjourney and its subscribers from doing the same.

McCarthy suggested it’s too soon to say how the WB battle will play out, but Midjourney’s response will reveal how it intends to shift tactics to avoid courts potentially picking apart its defense of its training data, while keeping any blame for copyright-infringing outputs squarely on users.

“As with the Disney/Universal lawsuit, we need to wait to see how Midjourney answers these latest allegations,” McCarthy said. “It is definitely an interesting development that will have widespread implications for many sectors of our society.”


ChatGPT’s new branching feature is a good reminder that AI chatbots aren’t people

On Thursday, OpenAI announced that ChatGPT users can now branch conversations into multiple parallel threads, serving as a useful reminder that AI chatbots aren’t people with fixed viewpoints but rather malleable tools you can rewind and redirect. The company released the feature for all logged-in web users following years of user requests for the capability.

The feature works by letting users hover over any message in a ChatGPT conversation, click “More actions,” and select “Branch in new chat.” This creates a new conversation thread that includes all the conversation history up to that specific point, while preserving the original conversation intact.

Think of it almost like creating a new copy of a “document” to edit while keeping the original version safe—except that “document” is an ongoing AI conversation with all its accumulated context. For example, a marketing team brainstorming ad copy can now create separate branches to test a formal tone, a humorous approach, or an entirely different strategy—all stemming from the same initial setup.
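For readers who think in code, here is a conceptual sketch of that “copy the document” idea; it illustrates the concept only and is not OpenAI’s actual implementation (branch_at is a hypothetical helper). Branching amounts to copying the message history up to a chosen point and letting the two threads diverge from there:

from copy import deepcopy

def branch_at(history: list[dict], index: int) -> list[dict]:
    # A branch is an independent copy of everything up to and including
    # the chosen message; the original thread is left untouched.
    return deepcopy(history[: index + 1])

thread = [
    {"role": "user", "content": "Brainstorm ad copy for our launch."},
    {"role": "assistant", "content": "Here is a formal draft..."},
]

formal = branch_at(thread, 1)    # continue refining the formal tone
humorous = branch_at(thread, 0)  # branch before the reply to try humor
humorous.append({"role": "user", "content": "Make it funny instead."})

assert len(thread) == 2  # the original conversation is preserved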

A screenshot of conversation branching in ChatGPT. Credit: OpenAI

The feature addresses a longstanding limitation of ChatGPT’s interface: users who wanted to try different approaches previously had to either overwrite their existing conversation past a certain point by changing a previous prompt or start completely fresh. Branching allows exploring what-if scenarios easily—and unlike in a human conversation, you can try multiple different approaches.

A 2024 study conducted by researchers from Tsinghua University and Beijing Institute of Technology suggested that linear dialogue interfaces for LLMs poorly serve scenarios involving “multiple layers, and many subtasks—such as brainstorming, structured knowledge learning, and large project analysis.” The study found that linear interaction forces users to “repeatedly compare, modify, and copy previous content,” increasing cognitive load and reducing efficiency.

Some software developers have already responded positively to the update, comparing the feature to Git, the version control system that lets programmers create separate branches of code to test changes without affecting the main codebase. The comparison makes sense: Both allow you to experiment with different approaches while preserving your original work.


OpenAI links up with Broadcom to produce its own AI chips

OpenAI is set to produce its own artificial intelligence chip for the first time next year, as the ChatGPT maker attempts to address insatiable demand for computing power and reduce its reliance on chip giant Nvidia.

The chip, co-designed with US semiconductor giant Broadcom, would ship next year, according to multiple people familiar with the partnership.

Broadcom’s chief executive Hock Tan on Thursday referred to a mystery new customer committing to $10 billion in orders.

OpenAI’s move follows the strategy of tech giants such as Google, Amazon and Meta, which have designed their own specialised chips to run AI workloads. The industry has seen huge demand for the computing power to train and run AI models.

OpenAI planned to put the chip to use internally, according to one person close to the project, rather than make it available to external customers.

Last year it began an initial collaboration with Broadcom, according to reports at the time, but the timeline for mass production of a successful chip design had previously been unclear.

On a call with analysts, Tan announced that Broadcom had secured a fourth major customer for its custom AI chip business, as it reported earnings that topped Wall Street estimates.

Broadcom does not disclose the names of these customers, but people familiar with the matter confirmed OpenAI was the new client. Broadcom and OpenAI declined to comment.
