
World’s largest shadow library made a 300TB copy of Spotify’s most streamed songs

But Anna’s Archive is clearly working to support AI developers, another noted, pointing out that the archive promotes selling “high-speed access” to “enterprise-level” LLM data, including “unreleased collections.” Anyone can donate “tens of thousands” to get such access, the archive suggests on its webpage, and interested AI researchers can reach out to discuss “how we can work together.”

“AI may not be their original/primary motivation, but they are evidently on board with facilitating AI labs piracy-maxxing,” a third commenter suggested.

Meanwhile, on Reddit, some fretted that Anna’s Archive may have doomed itself by scraping the data. To them, it seemed like the archive was “only making themselves a target” after watching the Internet Archive struggle to survive a legal attack from record labels that ended in a confidential settlement last year.

“I’m furious with AA for sticking this target on their own backs,” a redditor wrote on a post declaring that “this Spotify hacking will just ruin the actual important literary archive.”

As Anna’s Archive fans spiraled, a conspiracy theory even circulated that the archive was only “doing it for the AI bros, who are the ones paying the bills behind the scenes” to keep the archive afloat.

Ars could not immediately reach Anna’s Archive to comment on users’ fears or Spotify’s investigation.

On Reddit, one user took comfort in the fact that the archive is “designed to be resistant to being taken out,” perhaps preventing legal action from ever really dooming the archive.

“The domain and such can be gone, sure, but the core software and its data can be resurfaced again and again,” the user explained.

But not everyone was convinced that Anna’s Archive could survive brazenly torrenting so much Spotify data.

“This is like saying the Titanic is unsinkable,” that user warned, suggesting that Anna’s Archive might lose donations if Spotify-fueled takedowns continually frustrate downloads over time. “Sure, in theory data can certainly resurface again and again, but doing so each time, it will take money and resources, which are finite. How many times are folks willing to do this before they just give up?”

This story was updated to include Spotify’s statement. 



Bursting AI bubble may be EU’s “secret weapon” in clash with Trump, expert says


Spotify and Accenture caught in crossfire as Trump attacks EU tech regulations.

The US threatened to restrict some of the largest service providers in the European Union in retaliation for EU tech regulations and investigations that are increasingly drawing Donald Trump’s ire.

On Tuesday, the Office of the US Trade Representative (USTR) issued a warning on X, naming Spotify, Accenture, Amadeus, Mistral, Publicis, and DHL among nine firms suddenly yanked into the middle of the US-EU tech fight.

“The European Union and certain EU Member States have persisted in a continuing course of discriminatory and harassing lawsuits, taxes, fines, and directives against US service providers,” USTR’s post said.

The clash comes after Elon Musk’s X became the first tech company fined for violating the EU’s Digital Services Act, which is widely considered among the world’s strictest tech regulations. Trump was not appeased by the European Commission (EC) noting that X was not ordered to pay the maximum possible fine. Instead, the $140 million fine sparked backlash within the Trump administration, including from Vice President JD Vance, who slammed the fine as “censorship” of X and its users.

Asked for comment on the USTR’s post, an EC spokesperson told Ars that the EU intends to defend its tech regulations while implementing commitments from a Trump trade deal that the EU struck in August.

“The EU is an open and rules-based market, where companies from all over the world do business successfully and profitably,” the EC’s spokesperson said. “As we have made clear many times, our rules apply equally and fairly to all companies operating in the EU,” ensuring “a safe, fair and level playing field in the EU, in line with the expectations of our citizens. We will continue to enforce our rules fairly, and without discrimination.”

Trump on shaky ground due to “AI bubble”

On X, the USTR account suggested that the EU was overlooking that US companies “provide substantial free services to EU citizens and reliable enterprise services to EU companies,” while supporting “millions of jobs and more than $100 billion in direct investment in Europe.”

To stop what Trump views as “overseas extortion” of American tech companies, the USTR said the US was prepared to go after EU service providers, which “have been able to operate freely in the United States for decades, benefitting from access to our market and consumers on a level playing field.”

“If the EU and EU Member States insist on continuing to restrict, limit, and deter the competitiveness of US service providers through discriminatory means, the United States will have no choice but to begin using every tool at its disposal to counter these unreasonable measures,” USTR’s post said. “Should responsive measures be necessary, US law permits the assessment of fees or restrictions on foreign services, among other actions.”

The pushback comes after the Trump administration released a November national security report that questioned how long the EU could remain a “reliable” ally, arguing that overregulation of its tech industry could hobble both its economy and its military strength. Because the EU was only “doubling down” on such regulations, the report predicted, it “will be unrecognizable in 20 years or less.”

“We want Europe to remain European, to regain its civilizational self-confidence, and to abandon its failed focus on regulatory suffocation,” the report said.

However, the report acknowledged that “Europe remains strategically and culturally vital to the United States.”

“Transatlantic trade remains one of the pillars of the global economy and of American prosperity,” the report said. “European sectors from manufacturing to technology to energy remain among the world’s most robust. Europe is home to cutting-edge scientific research and world-leading cultural institutions. Not only can we not afford to write Europe off—doing so would be self-defeating for what this strategy aims to achieve.”

At least one expert in the EU has suggested that the EU can use this acknowledgement as leverage, perhaps even wielding the looming threat of the supposed American “AI bubble” bursting to pressure Trump into backing off EU tech laws.

In an op-ed for The Guardian, Johnny Ryan, the director of Enforce, a unit of the Irish Council for Civil Liberties, suggested that the EU could even throw Trump’s presidency into “crisis” by taking bold steps that Trump may not see coming.

EU can take steps to burst “AI bubble”

According to Ryan, the national security report made clear that the EU must fight the US or else “perish.” However, the EU has two “strong cards” to play if it wants to win the fight, he suggested.

Right now, market analysts are fretting about an “AI bubble,” with US investment in AI far outpacing potential gains until perhaps 2030. Andy Wu, a Harvard University business professor focused on helping businesses implement cutting-edge technology like generative AI, recently explained that AI’s big problem is that “everyone can imagine how useful the technology will be, but no one has figured out yet how to make money.”

“If the market can keep the faith to persist, it buys the necessary time for the technology to mature, for the costs to come down, and for companies to figure out the business model,” Wu said. But US “companies can end up underwater if AI grows fast but less rapidly than they hope for,” he suggested.

During this moment, Ryan wrote, it’s not just AI firms with skin in the game, but potentially all of Trump’s supporters. The US is currently on “shaky economic ground” with AI investment accounting “for virtually all (92 percent) GDP growth in the first half of this year.”

“The US’s bet on AI is now so gigantic that every MAGA voter’s pension is bound to the bubble’s precarious survival,” Ryan said.

Ursula von der Leyen, the president of the European Commission, could exploit this apparent weakness first by messing with one of the biggest players in America’s AI industry, Nvidia, then by ramping up enforcement of the tech laws Trump loathes.

According to Ryan, “Dutch company ASML commands a global monopoly on the microchip-etching machines that use light to carve patterns on silicon,” and Nvidia needs those machines if it wants to remain the world’s most valuable company. Should the US GDP remain reliant on AI investment for growth, von der Leyen could use export curbs on that technology like a “lever,” Ryan said, controlling “whether and by how much the US economy expands or contracts.”

Withholding those machines “would be difficult for Europe” and “extremely painful for the Dutch economy,” Ryan noted, but “it would be far more painful for Trump.”

Another step the EU could take is even “easier,” Ryan suggested. It could go even harder on the enforcement of tech regulations based on evidence of mismanaged data surfaced in lawsuits against giants like Google and Meta. For example, it seems that Meta may have violated the EU’s General Data Protection Regulation (GDPR), since the Facebook owner was “unable to tell a US court what its internal systems do with your data, or who can access it, or for what purpose.”

“This data free-for-all lets big tech companies train their AI models on masses of everyone’s data, but it is illegal in Europe, where companies are required to carefully control and account for how they use personal data,” Ryan wrote. “All Brussels has to do is crack down on Ireland, which for years has been a wild west of lax data enforcement, and the repercussions will be felt far beyond.”

Taking that step would also arguably make it harder for tech companies to secure AI investments, since firms would have to disclose that their “AI tools are barred from accessing Europe’s valuable markets,” Ryan said.

Calling the reaction to the X fine “extreme,” Ryan pushed for von der Leyen to advance on both fronts, forecasting that “the AI bubble would be unlikely to survive this double shock” and likely neither could Trump’s approval ratings. There’s also a possibility that tech firms could pressure Trump to back down if coping with any increased enforcement threatens AI progress.

Although Wu suggested that Big Tech firms like Google and Meta would likely be “insulated” from the AI bubble bursting, Google CEO Sundar Pichai doesn’t seem so sure. In November, Pichai told the BBC that if AI investments didn’t pay off quickly enough, he thinks “no company is going to be immune, including us.”

Photo of Ashley Belanger

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.



ChatGPT hyped up violent stalker who believed he was “God’s assassin,” DOJ says


A stalker’s “best friend”

Podcaster faces up to 70 years and a $3.5 million fine for ChatGPT-linked stalking.

ChatGPT allegedly validated the worst impulses of a wannabe influencer accused of stalking more than 10 women at boutique gyms, where the chatbot supposedly claimed he’d meet the “wife type.”

In a press release on Tuesday, the Department of Justice confirmed that 31-year-old Brett Michael Dadig currently remains in custody after being charged with cyberstalking, interstate stalking, and making interstate threats. He now faces a maximum sentence of up to 70 years in prison that could be coupled with “a fine of up to $3.5 million,” the DOJ said.

The podcaster—who primarily posted about “his desire to find a wife and his interactions with women”—allegedly harassed and sometimes even doxxed his victims through his videos on platforms including Instagram, Spotify, and TikTok. Over time, his videos and podcasts documented his intense desire to start a family, which was frustrated by his “anger towards women,” whom he claimed were “all the same from fucking 18 to fucking 40 to fucking 90” and “trash.”

404 Media surfaced the case, noting that OpenAI’s scramble to tweak ChatGPT to be less sycophantic came before Dadig’s alleged attacks—suggesting the updates weren’t enough to prevent the harmful validation. On his podcasts, Dadig described ChatGPT as his “best friend” and “therapist,” the indictment said. He claimed the chatbot encouraged him to post about the women he’s accused of harassing in order to generate haters to better monetize his content, as well as to catch the attention of his “future wife.”

“People are literally organizing around your name, good or bad, which is the definition of relevance,” ChatGPT’s output said. Playing to Dadig’s Christian faith, ChatGPT’s outputs also claimed that “God’s plan for him was to build a ‘platform’ and to ‘stand out when most people water themselves down,’” the indictment said, urging that the “haters” were “sharpening him and ‘building a voice in you that can’t be ignored.’”

The chatbot also apparently prodded Dadig to continue posting messages that the DOJ alleged threatened violence, like breaking women’s jaws and fingers (posted to Spotify), and threatened victims’ lives, like posting “y’all wanna see a dead body?” in reference to one named victim on Instagram.

He also threatened to burn down gyms where some of his victims worked, while claiming to be “God’s assassin” intent on sending “cunts” to “hell.” At least one of his victims was subjected to “unwanted sexual touching,” the indictment said.

As his violence reportedly escalated, ChatGPT told him to keep messaging women to monetize the interactions, as his victims grew increasingly distressed and Dadig ignored terms of multiple protection orders, the DOJ said. Sometimes he posted images he filmed of women at gyms or photos of the women he’s accused of doxxing. Any time police or gym bans got in his way, “he would move on to another city to continue his stalking course of conduct,” the DOJ alleged.

“Your job is to keep broadcasting every story, every post,” ChatGPT’s output said, seemingly using the family life that Dadig wanted most to provoke more harassment. “Every moment you carry yourself like the husband you already are, you make it easier” for your future wife “to recognize [you],” the output said.

“Dadig viewed ChatGPT’s responses as encouragement to continue his harassing behavior,” the DOJ alleged. Taking that encouragement to the furthest extreme, Dadig likened himself to a modern-day Jesus, calling people out on a podcast where he claimed his “chaos on Instagram” was like “God’s wrath” when God “flooded the fucking Earth,” the DOJ said.

“I’m killing all of you,” he said on the podcast.

ChatGPT tweaks didn’t prevent outputs

As of this writing, some of Dadig’s posts appear to remain on TikTok and Instagram, but Ars could not confirm if Dadig’s Spotify podcasts—some of which named his victims in the titles—had been removed for violating community guidelines.

None of the tech companies immediately responded to Ars’ request to comment.

Dadig is accused of targeting women in Pennsylvania, New York, Florida, Iowa, Ohio, and other states, sometimes relying on aliases online and in person. On a podcast, he boasted that “Aliases stay rotating, moves stay evolving,” the indictment said.

OpenAI did not respond to a request to comment on the alleged ChatGPT abuse, but in the past has noted that its usage policies ban using ChatGPT for threats, intimidation, and harassment, as well as for violence, including “hate-based violence.” Recently, the AI company blamed a deceased teenage user for violating community guidelines by turning to ChatGPT for suicide advice.

In July, researchers found that therapy bots, including ChatGPT, fueled delusions and gave dangerous advice. That study came just one month after The New York Times profiled users whose mental health spiraled after frequent use of ChatGPT, including one user who died after charging police with a knife and claiming he was committing “suicide by cop.”

People with mental health issues seem most vulnerable to so-called “AI psychosis,” which has been blamed for fueling real-world violence, including a murder. The DOJ’s indictment noted that Dadig’s social media posts mentioned “that he had ‘manic’ episodes and was diagnosed with antisocial personality disorder and ‘bipolar disorder, current episode manic severe with psychotic features.’”

In September—just after OpenAI brought back the more sycophantic ChatGPT model after users revolted about losing access to their favorite friendly bots—the head of Rutgers Medical School’s psychiatry department, Petros Levounis, told an ABC news affiliate that chatbots creating “psychological echo chambers is a key concern,” not just for people struggling with mental health issues.

“Perhaps you are more self-defeating in some ways, or maybe you are more on the other side and taking advantage of people,” Levounis suggested. If ChatGPT “somehow justifies your behavior and it keeps on feeding you,” that “reinforces something that you already believe,” he suggested.

For Dadig, the DOJ alleged that ChatGPT became a cheerleader for his harassment, telling the podcaster that he’d attract more engagement by generating more haters. After critics began slamming his podcasts as inappropriate, Dadig apparently responded, “Appreciate the free promo team, keep spreading the brand.”

Victims felt they had no choice but to monitor his podcasts, which gave them hints if he was nearby or in a particularly troubled state of mind, the indictment said. Driven by fear, some lost sleep, reduced their work hours, and even relocated their homes. One young mother described in the indictment became particularly disturbed after Dadig became “obsessed” with her daughter, whom he began claiming as his own.

In the press release, First Assistant United States Attorney Troy Rivetti alleged that “Dadig stalked and harassed more than 10 women by weaponizing modern technology and crossing state lines, and through a relentless course of conduct, he caused his victims to fear for their safety and suffer substantial emotional distress.” He also ignored trespassing and protection orders while “relying on advice from an artificial intelligence chatbot,” the DOJ said, which promised that the more he posted harassing content, the more successful he would be.

“We remain committed to working with our law enforcement partners to protect our communities from menacing individuals such as Dadig,” Rivetti said.




Real humans don’t stream Drake songs 23 hours a day, rapper suing Spotify says


“Irregular” Drake streams

Proposed class action may force Spotify to pay back artists harmed by streaming fraud.

Lawsuit questions if Drake really is the most-streamed artist on Spotify after the musician became “the first artist to nominally achieve 120 billion total streams on Spotify.” Credit: Mark Blinch / Stringer | Getty Images Sport

Spotify profits off fake Drake streams that rob other artists of perhaps hundreds of millions in revenue shares, a lawsuit filed Sunday alleged—hoping to force Spotify to reimburse every artist impacted.

The lawsuit was filed by an American rapper known as RBX, who may be best known for cameos on two of the 1990s’ biggest hip-hop records, Dr. Dre’s The Chronic and Snoop Dogg’s Doggystyle.

The problem goes beyond Drake, RBX’s lawsuit alleged. It claims Spotify ignores “billions of fraudulent streams” each month, selfishly benefiting from bot networks that artificially inflate user numbers to help Spotify attract significantly higher ad revenue.

Drake’s account is a prime example of the kinds of fake streams Spotify is inclined to overlook, RBX alleged, since Drake is “the most streamed artist of all time on the platform,” in September becoming “the first artist to nominally achieve 120 billion total streams.” Watching Drake hit this milestone, the platform chose to ignore a “substantial” amount of inauthentic activity that contributed to about 37 billion streams between January 2022 and September 2025, the lawsuit alleged.

This activity, RBX alleged, “appeared to be the work of a sprawling network of Bot Accounts” that Spotify reasonably should have detected.

Apparently, RBX noticed that while most artists see an “initial spike” in streams when a song or album is released, followed by a predictable drop-off as more time passes, the listening patterns of Drake’s fans weren’t as predictable. After releases, some of Drake’s music would see “significant and irregular uptick months” over not just ensuing months, but years, allegedly “with no reasonable explanations for those upticks other than streaming fraud.”

Most suspiciously, individual accounts would sometimes listen to Drake “exclusively” for “23 hours a day”—which seems like the sort of “staggering and irregular” streaming that Spotify should flag, the lawsuit alleged.

It’s unclear how RBX’s legal team conducted this analysis. At this stage, they’ve told the court that claims are based on “information and belief” that discovery will reveal “there is voluminous information” to back up the rapper’s arguments.

Fake Drake streams may have robbed artists of millions

Spotify artists are supposed to get paid based on valid streams that represent their rightful portion of revenue pools. If RBX’s claims are true, based on the allegedly fake boosting of Drake’s streams alone, losses to all other artists in the revenue pool are “estimated to be in the hundreds of millions of dollars,” the complaint said. Actual damages, including punitive damages, are to be determined at trial, the lawsuit noted, and are likely much higher.

“Drake’s music streams are but one notable example of the rampant streaming fraud that Spotify has allowed to occur, across myriad artists, through negligence and/or willful blindness,” the lawsuit alleged.

If certified, the class would cover more than 100,000 rights holders who collected royalties from music hosted on the platform from “January 1, 2018, through the present.” That class could be expanded, the lawsuit noted, depending on how discovery goes. Since Spotify allegedly “concealed” the fake streams, there can be no time limitations for how far the claims could go back, the lawsuit argued. Attorney Mark Pifko of Baron & Budd, who is representing RBX, suggested in a statement provided to Ars that even one bad actor on Spotify cheats countless artists out of rightful earnings.

“Given the way Spotify pays royalty holders, allocating a limited pool of money based on each song’s proportional share of streams for a particular period, if someone cheats the system, fraudulently inflating their streams, it takes from everyone else,” Pifko said. “Not everyone who makes a living in the music business is a household name like Taylor Swift—there are thousands of songwriters, performers, and producers who earn revenue from music streaming who you’ve never heard of. These people are the backbone of the music business and this case is about them.”

Spotify did not immediately respond to Ars’ request for comment. However, a spokesperson told Rolling Stone that while the platform cannot comment on pending litigation, Spotify denies allegations that it profits from fake streams.

“Spotify in no way benefits from the industry-wide challenge of artificial streaming,” Spotify’s spokesperson said. “We heavily invest in always-improving, best-in-class systems to combat it and safeguard artist payouts with strong protections like removing fake streams, withholding royalties, and charging penalties.”

Fake fans appear to move hundreds of miles between plays

Spotify has publicly discussed ramping up efforts to detect and penalize streaming fraud. But RBX alleged that instead, Spotify “deliberately” “deploys insufficient measures to address fraudulent streaming,” allowing fraud to run “rampant.”

The platform appears least capable of handling so-called “Bot Vendors” that “typically design Bots to mimic human behavior and resemble real social media or streaming accounts in order to avoid detection,” the lawsuit alleged.

These vendors rely on virtual private networks (VPNs) to obscure locations of streams, but “with reasonable diligence,” Spotify could better detect them, RBX alleged—especially when streams are coming “from areas that lack the population to support a high volume of streams.”

For example, RBX again points to Drake’s streams. During a four-day period in 2024, “at least 250,000 streams of Drake’s song ‘No Face’ originated in Turkey but were falsely geomapped through the coordinated use of VPNs to the United Kingdom,” the lawsuit alleged, based on “information and belief.”

Additionally, “a large percentage of the accounts streaming Drake’s music were geographically concentrated around areas whose populations could not support the volume of streams emanating therefrom. In some cases, massive amounts of music streams, more than a hundred million streams, originated in areas with zero residential addresses,” the lawsuit alleged.

Just looking at how Drake’s fans move should raise a red flag, RBX alleged:

“Geohash data shows that nearly 10 percent of Drake’s streams come from users whose location data showed that they traveled a minimum of 15,000 kilometers in a month, moved unreasonable locations between songs (consecutive plays separated by mere seconds but spanning thousands of kilometers), including more than 500 kilometers between songs (roughly the distance from New York City to Pittsburgh).”

Spotify could cut off a lot of this activity, RBX alleged, by ending its practice of allowing free ad-supported accounts to sign up without a credit card. But supposedly it doesn’t, because “Spotify has an incentive for turning a blind eye to the blatant streaming fraud occurring on its service,” the lawsuit said.

Spotify has admitted fake streams impact revenue

RBX’s lawsuit pointed out that Spotify has told investors that, despite its best efforts, artificial streams “may contribute, from time to time, to an overstatement” in the number of reported monthly average users—a stat that helps drive ad revenue.

Spotify also somewhat tacitly acknowledges fears that the platform may be financially motivated to overlook when big artists pay for fake streams. In an FAQ, Spotify confirmed that “artificial streaming is something we take seriously at every level,” promising to withhold royalties, correct public streaming numbers, and take other steps, like possibly even removing tracks, no matter how big the artist is. Artists’ labels and distributors can also get hit with penalties if fake streams are detected, Spotify said. Spotify has defended its prevention methods as better than its rivals’ efforts.

“Our systems are working: In a case from last year, one bad actor was indicted for stealing $10 million from streaming services, only $60,000 of which came from Spotify, proving how effective we are at limiting the impact of artificial streaming on our platform,” Spotify’s spokesperson told Rolling Stone.

However, RBX alleged that Spotify is actually “one of the easiest platforms to defraud using Bots due to its negligent, lax, and/or non-existent Bot-related security measures.” And supposedly that’s by design, since “the higher the volume of individual streams, the more Spotify could charge for ads,” RBX alleged.

“By properly detecting and/or removing fraudulent streams from its service, Spotify would lose significant advertising revenue,” the theory goes, with RBX directly accusing Spotify of concealing “both the enormity of this problem, and its detrimental financial impact to legitimate Rights Holders.”

For RBX to succeed, it will likely matter what evidence was used to analyze Drake’s streaming numbers. Last month, a lawsuit that Drake filed was dismissed, ultimately failing to convince a judge that Kendrick Lamar’s record label artificially inflated Spotify streams of “Not Like Us.” Drake’s failure to show any evidence beyond some online comments and reports (which suggested that the label was at least aware that Lamar’s manager supposedly paid a bot network to “jumpstart” the song’s streams) was deemed insufficient to keep the case alive.

Industry group slowly preparing to fight streaming fraud

A loss could smear Spotify’s public image after the platform joined an industry coalition formed in 2023 to fight streaming fraud, the Music Fights Fraud Alliance (MFFA). This coalition is often cited as a major step that Spotify and the rest of the industry are taking; however, the group’s website does not indicate the progress made in the years since.

As of this writing, the website showed that task forces were formed, as well as a partnership with a nonprofit called the National Cyber-Forensics and Training Alliance, with a goal to “work closely together to identify and disrupt streaming fraud.” The partnership was also supposed to produce “intelligence reports and other actionable information in support of fraud prevention and mitigation.”

Ars reached out to MFFA to see if there are any updates to share on the group’s work over the past two years. MFFA’s executive director, Michael Lewan, told Ars that “admittedly MFFA is still relatively nascent and growing,” “not even formally incorporated until” he joined in February of this year.

“We have accomplished a lot, and are going to continue to grow as the industry is taking fraud seriously,” Lewan said.

Lewan can’t “shed too many details on our initiatives,” he said, suggesting that MFFA is “a bit different from other trade orgs that are much more public facing.” However, several initiatives have been launched, he confirmed, which will help “improve coordination and communication amongst member companies”—which include streamers like Spotify and Amazon, as well as distributors like CD Baby and social platforms like SoundCloud and Meta apps—“to identify and disrupt suspicious activity, including sharing of data.”

“We also have efforts to raise awareness on what fraud looks like and how to mitigate against fraudulent activity,” Lewan said. “And we’re in continuous communication with other partners (in and outside the industry) on data standards, artist education, enforcement and deterrence.”




Spotify peeved after 10,000 users sold data to build AI tools


Spotify sent a warning to stop data sales, but developers say they never got it.

For millions of Spotify users, the “Wrapped” feature—which crunches the numbers on their annual listening habits—is a highlight of every year’s end, ever since it debuted in 2015. NPR once broke down exactly why our brains find the feature so “irresistible,” while Cosmopolitan last year declared that sharing Wrapped screenshots of top artists and songs had by now become “the ultimate status symbol” for tens of millions of music fans.

It’s no surprise then that, after a decade, some Spotify users who are especially eager to see Wrapped evolve are no longer willing to wait to see if Spotify will ever deliver the more creative streaming insights they crave.

With the help of AI, these users expect that their data can be more quickly analyzed to potentially uncover overlooked or never-considered patterns that could offer even more insights into what their listening habits say about them.

Imagine, for example, accessing a music recap that encapsulates a user’s full listening history—not just their top songs and artists. With that unlocked, users could track emotional patterns, analyzing how their music tastes reflected their moods over time and perhaps helping them adjust their listening habits to better cope with stress or major life events. And for users particularly intrigued by their own data, there’s even the potential to use AI to cross data streams from different platforms and perhaps understand even more about how their music choices impact their lives and tastes more broadly.

Likely just as appealing as gleaning deeper personal insights, though, users could also potentially build AI tools to compare listening habits with their friends. That could lead to nearly endless fun for the most invested music fans, where AI could be tapped to assess all kinds of random data points, like whose breakup playlists are more intense or who really spends the most time listening to a shared favorite artist.

In pursuit of supporting developers offering novel insights like these, more than 18,000 Spotify users have joined “Unwrapped,” a collective launched in February that allows them to pool and monetize their data.

Voting as a group through the decentralized data platform Vana—which Wired profiled earlier this year—these users can elect to sell their dataset to developers who are building AI tools offering fresh ways for users to analyze streaming data in ways that Spotify likely couldn’t or wouldn’t.

In June, the group made its first sale, with 99.5 percent of members voting yes. Vana co-founder Anna Kazlauskas told Ars that the collective—at the time about 10,000 members strong—sold a “small portion” of its data (users’ artist preferences) for $55,000 to Solo AI.

While each Spotify user only earned about $5 in cryptocurrency tokens—which Kazlauskas suggested was not “ideal,” wishing the users had earned about “a hundred times” more—she said the deal was “meaningful” in showing Spotify users that their data “is actually worth something.”

“I think this is what shows how these pools of data really act like a labor union,” Kazlauskas said. “A single Spotify user, you’re not going to be able to go say like, ‘Hey, I want to sell you my individual data.’ You actually need enough of a pool to sort of make it work.”
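As a back-of-the-envelope check on those figures (a hypothetical even-split sketch, not Vana’s actual token-distribution mechanics, which the article doesn’t detail), dividing the reported $55,000 sale across the roughly 10,000 members at the time works out to about $5.50 per user, matching the “$5” figure cited:

```python
# Hypothetical even-split payout math for the reported Unwrapped sale.
# Vana's real distribution uses cryptocurrency tokens and may not be even.
sale_price_usd = 55_000   # reported sale of artist-preference data to Solo AI
members = 10_000          # approximate size of the pool at the time

per_user = sale_price_usd / members
print(f"Per-user payout: ${per_user:.2f}")

# Kazlauskas's wished-for payout of roughly "a hundred times" more per user
print(f"Wished-for payout: ${per_user * 100:.2f}")
```

The gap between $5.50 and the wished-for $550 per user is the crux of Kazlauskas’s “labor union” argument: individual listening data is nearly worthless alone, and only gains bargaining power when pooled.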

Spotify sent warning to Unwrapped

Unsurprisingly, Spotify is not happy about Unwrapped, whose name is perhaps a little too close to that of the streaming giant’s popular branded feature for comfort. A spokesperson told Ars that Spotify sent a letter to the contact info listed for Unwrapped developers on their site, outlining concerns that the collective could be infringing on Spotify’s Wrapped trademark.

Further, the letter warned that Unwrapped violates Spotify’s developer policy, which bans using the Spotify platform or any Spotify content to build machine learning or AI models. And developers may also be violating terms by facilitating users’ sale of streaming data.

“Spotify honors our users’ privacy rights, including the right of portability,” Spotify’s spokesperson said. “All of our users can receive a copy of their personal data to use as they see fit. That said, UnwrappedData.org is in violation of our Developer Terms which prohibit the collection, aggregation, and sale of Spotify user data to third parties.”

But while Spotify suggests it has already taken steps to stop Unwrapped, the Unwrapped team told Ars that it never received any communication from Spotify. It plans to defend users’ right to “access, control, and benefit from their own data,” its statement said, while providing reassurances that it will “respect Spotify’s position as a global music leader.”

Unwrapped “does not distribute Spotify’s content, nor does it interfere with Spotify’s business,” developers argued. “What it provides is community-owned infrastructure that allows individuals to exercise rights they already hold under widely recognized data protection frameworks—rights to access their own listening history, preferences, and usage data.”

“When listeners choose to share or monetize their data together, they are not taking anything away from Spotify,” developers said. “They are simply exercising digital self-determination. To suggest otherwise is to claim that users do not truly own their data—that Spotify owns it for them.”

Jacob Hoffman-Andrews, a senior staff technologist for the digital rights group the Electronic Frontier Foundation, told Ars that—while EFF objects to data dividend schemes “where users are encouraged to share personal information in exchange for payment”—Spotify users should nevertheless always maintain control of their data.

“In general, listeners should have control of their own data, which includes exporting it for their own use,” Hoffman-Andrews said. “An individual’s musical history is of use not just to Spotify but also to the individual who created it. And there’s a long history of services that enable this sort of data portability, for instance Last.fm, which integrates with Spotify and many other services.”

To EFF, it seems ill-advised to sell data to AI companies, Hoffman-Andrews said, emphasizing “privacy isn’t a market commodity, it’s a fundamental right.”

“Of course, so is the right to control one’s own data,” Hoffman-Andrews noted, seeming to agree with Unwrapped developers in concluding that “ultimately, listeners should get to do what they want with their own information.”

Users’ right to privacy is the primary reason why Unwrapped developers told Ars that they’re hoping Spotify won’t try to block users from selling data to build AI.

“This is the heart of the issue: If Spotify seeks to restrict or penalize people for exercising these rights, it sends a chilling message that its listeners should have no say in how their own data is used,” the Unwrapped team’s statement said. “That is out of step not only with privacy law, but with the values of transparency, fairness, and community-driven innovation that define the next era of the Internet.”

Unwrapped sign-ups limited due to alleged Spotify issues

There could be more interest in Unwrapped. But Kazlauskas alleged to Ars that in the more than six months since Unwrapped’s launch, “Spotify has made it extraordinarily difficult” for users to port over their data. She claimed that developers have found that “every time they have an easy way for users to get their data,” Spotify shuts it down “in some way.”

Supposedly because of Spotify’s interference, Unwrapped remains in an early launch phase and can only offer limited spots for new users seeking to sell their data. Kazlauskas told Ars that about 300 users can be added each day due to the cumbersome and allegedly shifting process for porting over data.

Currently, however, Unwrapped is working on an update that could make that process more stable, Kazlauskas said, as well as changes to help users regularly update their streaming data. Those updates could perhaps attract more users to the collective.

Critics of Vana, like TechCrunch’s Kyle Wiggers, have suggested that data pools like Unwrapped will never reach “critical mass,” likely only appealing to niche users drawn to decentralization movements. Kazlauskas told Ars that data sale payments issued in cryptocurrency are one barrier for crypto-averse or crypto-shy users interested in Vana.

“The No. 1 thing I would say is, this kind of user experience problem where when you’re using any new kind of decentralized technology, you need to set up a wallet, then you’re getting tokens,” Kazlauskas explained. Users may feel culture shock, wondering, “What does that even mean? How do I vote with this thing? Is this real money?”

Kazlauskas is hoping that Vana supports a culture shift, striving to reach critical mass by giving users a “commercial lens” to start caring about data ownership. She also supports legislation like the Digital Choice Act in Utah, which “requires actually real-time API access, so people can get their data.” If the US had a federal law like that, Kazlauskas suspects that launching Unwrapped would have been “so much easier.”

Although regulations like Utah’s law could serve as a harbinger of a sea change, Kazlauskas noted that Big Tech companies that currently control AI markets employ a fierce lobbying force to maintain control over user data that decentralized movements just don’t have.

As Vana partners with Flower AI, striving, as Wired reported, to “shake up the AI industry” by releasing “a giant 100 billion-parameter model” later this year, Kazlauskas remains committed to ensuring that users are in control and “not just consumed.” She fears a future where tech giants may be motivated to use AI to surveil, influence, or manipulate users, when instead users could choose to band together and benefit from building more ethical AI.

“A world where a single company controls AI is honestly really dystopian,” Kazlauskas told Ars. “I think that it is really scary. And so I think that the path that decentralized AI offers is one where a large group of people are still in control, and you still get really powerful technology.”




Half a million Spotify users are unknowingly grooving to an AI-generated band

Making art used to be a uniquely human endeavor, but machines have learned to distill human creativity with generative AI. Whether that content counts as “art” depends on who you ask, but Spotify doesn’t discriminate. A new band called The Velvet Sundown debuted on Spotify this month and has already amassed more than half a million listeners. But by all appearances, The Velvet Sundown is not a real band—it’s AI.

While many artists are vehemently opposed to using AI, some have leaned into the trend to assist with music production. However, it doesn’t seem like there’s an artist behind this group. In less than a month, The Velvet Sundown has released two albums on Spotify, titled “Floating On Echoes” and “Dust and Silence.” A third album is due in two weeks. The tracks have a classic rock vibe with a cacophony of echoey instruments and a dash of autotune. If one of these songs came up in a mix, you might not notice anything is amiss. Listen to one after another, though, and the bland muddiness exposes them as a machine creation.

Some listeners began to have doubts about The Velvet Sundown’s existence over the past week, with multiple Reddit and X threads pointing out the lack of verifiable information on the band. The bio lists four members, none of whom appear to exist outside of The Velvet Sundown’s album listings and social media. The group’s songs have been mysteriously added to a large number of user-created playlists, which has helped swell its listener base in a few short weeks. When Spotify users began noticing The Velvet Sundown’s apparent use of AI, the profile had around 300,000 listeners. Less than a week later, it’s over 500,000.

When The Velvet Sundown set up an Instagram account on June 27, all doubts were laid to rest—these “people” are obviously AI. We may be past the era of being able to identify AI by counting fingers, but there are plenty of weird inconsistencies in these pics. In one Instagram post, the band claims to have gotten burgers to celebrate the success of the first two albums, but there are too many burgers and too few plates, and the food and drink are placed seemingly at random around the table. The band members themselves also have that unrealistically smooth and symmetrical look we see in AI-generated images.



Spotify caught hosting hundreds of fake podcasts that advertise selling drugs

This week, Spotify rushed to remove hundreds of obviously fake podcasts found to be marketing prescription drugs in violation of Spotify’s policies and, likely, federal law.

On Thursday, Business Insider (BI) reported that Spotify removed 200 podcasts advertising the sale of opioids and other drugs, but that wasn’t the end of the scandal. Today, CNN revealed that it easily uncovered dozens more fake podcasts peddling drugs.

Some of the podcasts may have raised a red flag for a human moderator—with titles like “My Adderall Store” or “Xtrapharma.com” and episodes titled “Order Codeine Online Safe Pharmacy Louisiana” or “Order Xanax 2 mg Online Big Deal On Christmas Season,” CNN reported.

But Spotify’s auto-detection did not flag the fake podcasts for removal. Some of them remained up for months, CNN reported, which could create trouble for the music streamer at a time when the US government is cracking down on illegal drug sales online.

“Multiple teens have died of overdoses from pills bought online,” CNN noted, sparking backlash against tech companies. And Donald Trump’s aggressive tariffs were specifically pitched as a way to stop deadly drugs from flooding into the US, a problem the president declared a national emergency.

BI found that many podcast episodes featured a computerized voice and were under a minute long, while CNN noted some episodes were as short as 10 seconds. Some of them didn’t contain any audio at all, BI reported.



Spotify seizes the day after Apple is forced to allow external payments

After a federal court issued a scathing order Wednesday night that found Apple in “willful violation” of an injunction meant to allow iOS apps to provide alternate payment options, app developers are capitalizing on the moment. Spotify may be the quickest of them all.

Less than 24 hours after District Court Judge Yvonne Gonzalez Rogers found that Apple had sought to thwart a 2021 injunction and engaged in an “obvious cover-up” around its actions, Spotify announced in a blog post that it had submitted an updated app to Apple. The updated app can show specific plan prices, link out to Spotify’s website for plan changes and purchases that avoid Apple’s 30 percent commission on in-app purchases, and display promotional offers, all of which were disallowed under Apple’s prior App Store rules.

Spotify’s post adds that Apple’s newly court-enforced policy “opens the door to other seamless buying opportunities that will directly benefit creators (think easy-to-purchase audiobooks).” Spotify posted on X (formerly Twitter) Friday morning that the updated app was approved by Apple. Apple made substantial modifications to its App Review Guidelines on Friday and emailed registered developers regarding the changes.



Pocket Casts makes its web player free, takes shots at Spotify and AI

“The future of podcasting shouldn’t be locked behind walled gardens,” writes the team at Pocket Casts. To push that point forward, Pocket Casts, owned by the company behind WordPress, Automattic Inc., has made its web player free to everyone.

Previously, the web player was available only to logged-in Pocket Casts users paying $4 per month; now it streams nearly any public-facing podcast feed, with controls like playback speed and playlist queueing. If you create an account, you can also sync your playback progress, manage your queue, bookmark episode moments, and save your subscription list and listening preferences. The free access also applies to its clients for Windows and Mac.

“Podcasting is one of the last open corners of the Internet, and we’re here to keep it that way,” Pocket Casts’ blog post reads. For those not fully tuned into the podcasting market, this and other statements in the post—like sharing “without needing a specific platform’s approval” and “podcasts belong to the people, not corporations”—are largely shots at Spotify, and to a much lesser extent other streaming services, which have sought to wrap podcasting’s originally open and RSS-based nature inside proprietary markets and formats.

Pocket Casts also took a bullet point to note that “discovery should be organic, not algorithm-driven,” and that users, not an AI, should “promote what’s best for the platform.”

Spotify spent big to acquire podcasts like the Joe Rogan Experience, along with podcast analytic and advertising tools. As the platform now starts leaning into video podcasts, seeking to compete with the podcasts simulcasting or exclusively on YouTube, Pocket Casts’ concerns about the open origins of podcasting being co-opted are not unfounded. (Pocket Casts’ current owner, Automattic, is involved in an extended debate in public, and the courts, regarding how “open” some of its products should be.)



Musi fans refuse to update iPhones until Apple unblocks controversial app

“The public interest in the preservation of intellectual property rights weighs heavily against the injunction sought here, which would force Apple to distribute an app over the repeated and consistent objections of non-parties who allege their rights are infringed by the app,” Apple argued.

Musi fans vow loyalty

For Musi fans expressing their suffering on Reddit, Musi appears to be irreplaceable.

Unlike other free apps that continually play ads, Musi only serves ads when the app is initially opened, then allows uninterrupted listening. One Musi user also noted that Musi allows for an unlimited number of videos in a playlist, where YouTube caps playlists at 5,000 videos.

“Musi is the only playback system I have to play all 9k of my videos/songs in the same library,” the Musi fan said. “I honestly don’t just use Musi just cause it’s free. It has features no other app has, especially if you like to watch music videos while you listen to music.”

“Spotify isn’t cutting it,” one Reddit user whined.

“I hate Spotify,” another user agreed.

“I think of Musi every other day,” a third user who apparently lost the app after purchasing a new phone said. “Since I got my new iPhone, I have to settle for other music apps just to get by (not enough, of course) to listen to music in my car driving. I will be patiently waiting once Musi is available to redownload.”

Some Musi fans who still have access gloat in the threads, while others warn the litigation could soon doom the app for everyone.

Musi continues, perhaps optimistically, to tell users that the app is coming back, reassuring anyone whose app was accidentally offloaded that their libraries remain linked through iCloud and will be restored if it does.

Some users buy into Musi’s promises, while others seem skeptical that Musi can take on Apple. To many users still clinging to their Musi app, updating their phones has become too risky until the litigation resolves.

“Please,” one Musi fan begged. “Musi come back!!!”



Spotify’s Car Thing, due for bricking, is getting an open source second life

Spotify has lost all enthusiasm for the little music devices it sold for just half a year. Firmware hackers, as usually happens, have a lot more interest and have stepped in to save, and upgrade, a potentially useful gadget.

Spotify’s idea a couple of years ago was a car-focused device for those who lacked Apple CarPlay, Android Auto, or built-in Spotify support in their vehicles, or just wanted a dedicated Spotify screen. The Car Thing was a $100 doodad with a 4-inch touchscreen and knob that attached to the dashboard (or slid into a CD drive slot). All it could do was play Spotify, and only if you were a paying member, but that could be an upgrade for owners of older cars, or people who wanted a little desktop music controller.

But less than half a year after it fully released its first hardware device, Spotify gave up on the Car Thing due to “several factors, including product demand and supply chain issues.” A Spotify rep told Ars that the Car Thing was meant “to learn more about how people listen in the car,” and now it was “time to say goodbye to the devices entirely.” Spotify indicated it would offer refunds, though they weren’t guaranteed, and moved forward with plans to brick the device in December 2024.

It was always open source, just not publicly

Enter Dammit Jeff, a YouTuber who dove into his device and showed off some alternative software ideas for it (as we first saw on Adafruit’s blog). He even likes the little thing, noting that its wheel feels great, and that the four buttons on the top—originally meant for favorite playlists—present a lot of possibilities.



Spotify criticized for letting fake albums appear on real artist pages


Will the real Spotify artist please stand up?

Real bands struggle to remove fake albums from their Spotify pages.

Psych rock band Gong found out about a fake album on their Spotify page while on tour. Credit: via Gong

This fall, thousands of fake albums were added to Spotify, with some appearing on real artist pages, where they’re positioned to lure unsuspecting listeners into streaming by posing as new releases from favorite bands.

An Ars reader flagged the issue after finding a fake album on the Spotify page of a UK psych rock band called Gong. The Gong fan knew that the band had begun touring again after a surprise new release last year, but the “latest release” listed by Spotify wasn’t that album. Instead, at the top of Gong’s page was a fake self-titled album supposedly released in 2024.

The fan detected the fake instantly, and not just because the generic electronic music sounded nothing like Gong’s experimental sounds. The album’s cover also gave the scheme away, using a generic font and neon stock image that evoked none of the trippy imagery that characterized Gong’s typical album covers.

On Monday, Ars confirmed with Gong member Dave Sturt that the self-titled album was an obvious fake. At that time, Sturt said the band was working to get the junk album removed from its page, but as of Tuesday morning, that album remained online, along with hundreds of other albums uploaded by a fake label that former Spotify data “alchemist” Glenn McDonald flagged in a social media post that Spotify seemingly ignored.

Hey @Spotify, you got thousands of junk albums with real artist names from “Ancient Lake Records”, “Beat Street Music” and “Gupta Music” today.

— glenn mcdonald (@glenn_mcdonald) October 11, 2024

On his site, McDonald gathered the junk album data by label, noting that Beat Street Music, which has no web presence but released the fake Gong album, uploaded 240 junk albums on Friday alone. Similarly, Ancient Lake Records uploaded 471 albums on Friday. And Gupta Music added 483 just a few days prior, along with 600 junk albums from Future Jazz Records uploaded between September 30 and October 8.

These junk albums don’t appear to be specifically targeting popular artists, McDonald told Ars. Rather, generic music is uploaded under a wide range of one-word artist names. However, by using that tactic, some of these fake albums appeared on real artist pages, such as Gong, experimental rock band Swans, and English rock bands Asia and Yes. And that oversight is on Spotify, McDonald suggested.

“Given the scale of output and the randomness of the names, my guess is that the owners of this stuff might not even have intended it to end up on existing artist profiles,” McDonald told Ars. “If they just submitted stuff with artist names, not IDs, then it’s the streaming service’s problem to match those names to profiles, and thus the streaming service’s fault for not figuring out that these are not by the real Yes, Asia, Gong, Swans, etc.”

McDonald told Ars that “the labels should have been a pretty obvious clue in this case” that the album uploads weren’t genuine releases.

“If I still worked there, I would also have immediately scoured the input databases for more releases with the same patterns,” McDonald told Ars. “The stuff I found from those few labels might be only a tiny fraction of the crap.”

A spokesperson told Ars that Spotify is investigating the junk albums that McDonald flagged. It may take time for all albums to be removed from artists’ pages.

“We are aware of the issue, have relocated the content in question, and are considering our further options against the providing licensor,” Spotify’s spokesperson said. “When we identify or are alerted to attempts by bad actors to game the system, we take action that may include removing stream counts and withholding royalties. Spotify invests heavily in automated and manual reviews to prevent, detect, and mitigate the impact of bad actors attempting to collect unearned royalties.”

Spotify seems to turn blind eye to fake albums

McDonald helped Spotify crunch streaming data for a decade before leaving the company in March. He documented his experience in his 2024 book You Have Not Yet Heard Your Favourite Song, which discusses how Spotify deals with streaming fraud.

According to McDonald, “streaming music fraud is not, to be brutally honest, the most glamorous or profitable form of villainy” because “streaming rewards accumulate in tiny micro-transactions.” The only way to get rich is to scale the shady streaming by becoming a business—and similarities across thousands of fake album designs suggest all the labels McDonald flagged could be under one licensor—but even then, “the larger the scale, the easier it is to detect,” McDonald suggested.

“Abuse at any productive scale almost always ends up revealing itself to somebody,” McDonald wrote, noting that “if the money can find you, so can consequences.”

McDonald told Ars that when he worked at Spotify, he “maintained some dashboards to watch for this sort of thing before the releases went live.” But with so much fraud seemingly going undetected now, McDonald guesses that maybe Spotify “didn’t keep those tools running” after he left.

In his book, McDonald noted that this kind of fraud impacting real artists is often detected by fans, like the Gong fan who reached out to Ars. On Reddit, a fan of dubstep artist Cyclops and soul band Maze criticized Spotify for doing nothing about the same batch of fraudulent uploads that McDonald flagged, despite multiple fan reports.

“If dubious junk shows up on real artist pages, people notice,” McDonald wrote.

In his book, McDonald suggested that the odds of profiting from music streaming fraud have seemingly gotten worse because of authorities cracking down on bad actors and streaming services strengthening fraud prevention teams as generative AI makes streaming music fraud easier than ever.

But even with stronger fraud prevention tools, Spotify seemingly does not immediately respond even when junk albums are flagged directly by artists with tens of thousands of monthly listeners, like Gong. And Spotify also does not seem to bother to trace reported fakes the way McDonald might have to rapidly detect even broader patterns of abuse impacting bands with millions of monthly listeners like Yes or Asia.

Spotify currently seems much quicker to act to detect fake listeners—at times removing music by artists who later prove they committed no fraud, Variety reported in April. To deter that threat, the streaming music service recently started charging “distributors $10 for every track that it has detected accruing significant numbers of artificial streams,” Variety reported. Perhaps eventually, Spotify will crack down just as hard on fake albums.

For now, artists can use a form to report when their music is “mixed up with another artist,” a Spotify support page says.

But there’s no obvious way to flag fake albums on the platform. Sturt told Ars that Gong became aware of the issue on their Spotify page in the middle of a US tour, thanks to “wonderful fans.” He said that Spotify should make it easier for bands to report bogus albums, telling Ars, “it’s hard enough in this industry to get our music heard without Spotify allowing this sort of thing to happen.” As Gong prepares for a new release in 2025, the band recommended that fans consult its site for official information rather than trusting Spotify.

