What’s wrong with AAA games? The development of the next Battlefield has answers.


EA insiders describe stress and setbacks in a project that’s too big to fail.

After the lukewarm reception of Battlefield 2042, EA is doubling down.

It’s been 23 years since the first Battlefield game, and the video game industry is nearly unrecognizable to anyone who was immersed in it then. Many people who loved the games of that era have since become frustrated with where AAA (big budget) games have ended up.

Today, publisher EA is in full production on the next Battlefield title—but sources close to the project say it has faced culture clashes, ballooning budgets, and major disruptions that have left many team members fearful that parts of the game will not be finished to players’ satisfaction in time for launch during EA’s fiscal year.

They also say the company has made major structural and cultural changes to how Battlefield games are created to ensure it can release titles of unprecedented scope and scale. This is all to compete with incumbents like the Call of Duty games and Fortnite, even though no prior Battlefield has achieved anywhere close to that level of popular and commercial success.

I spoke with current and former EA employees who work or have recently worked directly on the game—they span multiple studios, disciplines, and seniority levels and all agreed to talk about the project on the condition of anonymity. Asked to address the reporting in this article, EA declined to comment.

According to these first-hand accounts, the changes have led to extraordinary stress and long hours. Every employee I spoke to across several studios either took exhaustion leave themselves or directly knew staffers who did. Two people who had worked on other AAA projects within EA or elsewhere in the industry said this project had more people burning out and needing to take leave than they’d ever seen before.

Each of the sources I spoke with shared sincere hopes that the game will still be a hit with players, pointing to its strong conceptual start and the talent, passion, and pedigree of its development team. Whatever the end result, the inside story of the game’s development illuminates why the medium and the industry are in the state they’re in today.

The road to Glacier

To understand exactly what’s going on with the next Battlefield title—codenamed Glacier—we need to rewind a bit.

In the early 2010s, Battlefield 3 and Battlefield 4 expanded the franchise audience to more directly compete with Call of Duty, the heavy hitter at the time. Developed primarily by EA-owned, Sweden-based studio DICE, the Battlefield games mixed the franchise’s promise of combined arms warfare and high player counts with Call of Duty’s faster pace and greater platform accessibility.

This was a golden age for Battlefield. However, 2018’s Battlefield V launched to a mixed reception, and EA began losing players’ attention in an expanding industry.

Battlefield 3, pictured here, kicked off the franchise’s golden age. Credit: EA

Instead, the hot new online shooters were Overwatch (2016), Fortnite (2017), and a resurgent Call of Duty. Fortnite was driven by a popular new gameplay mode called Battle Royale, and while EA attempted a Battle Royale mode in Battlefield V, it didn’t achieve the desired level of popularity.

After V, DICE worked on a Battlefield title that was positioned as a throwback to the glory days of 3 and 4. That game would be called Battlefield 2042 (after the future year in which it was set), and it would launch in 2021.

The launch of Battlefield 2042 is where Glacier’s development story begins. Simply put, the game was not fun enough, and Battlefield 2042 launched as a dud.

Don’t repeat past mistakes

Players were disappointed—but so were those who worked on 2042. Sources tell me that prior to launch, Battlefield 2042 “massively missed” its alpha target—a milestone by which most or all of the foundational features of the game are meant to be in place. Because of this, the game’s final release would need to be delayed in order to deliver on the developers’ intent (and on players’ expectations).

“Realistically, they have to delay the game by at least six months to complete it. Now, they eventually only delayed it by, I think, four or five weeks, which from a development point of view means very little,” said one person who worked closely with the project at the time.

Developers at DICE had hoped for more time. Morale fell, but the team marched ahead to the game’s lukewarm launch.

Ultimately, EA made back some ground with what the company calls “live operations”—additional content and updates in the months following launch—but the game never fulfilled its ambitions.

Plans were already underway for the next Battlefield game, so a postmortem was performed on 2042. It concluded that the problems had been in execution, not vision. New processes were put into place so that issues could be identified earlier and milestones like the alpha wouldn’t be missed.

To help achieve this, EA hired three industry luminaries to lead Glacier, all of them based in the United States.

The franchise leadership dream team

2021 saw EA bring on Byron Beede as general manager for Battlefield; he had previously been general manager for both Call of Duty (including the Warzone Battle Royale) and the influential shooter Destiny. EA also hired Marcus Lehto—co-creator of Halo—as creative chief of a newly formed Seattle studio called Ridgeline Games, which would lead the development of Glacier’s single-player campaign.

Finally, there was Vince Zampella, one of the leaders of the team that initially created Call of Duty in 2003. He joined EA in 2010 to work on other franchises, but in 2021, EA announced that Zampella would oversee Battlefield moving forward.

In the wake of these changes, some prominent members of DICE departed, including General Manager Oskar Gabrielson and Creative Director Lars Gustavsson, who had been known by the nickname “Mr. Battlefield.” With this changing of the guard, EA was ready to place a bigger bet than ever on the next Battlefield title.

100 million players

While 2042 struggled, competitors Call of Duty and Fortnite were posting astonishing player and revenue numbers, thanks in large part to the popularity of their Battle Royale modes.

EA’s executive leadership believed Battlefield had the potential to stand toe to toe with them, if the right calls were made and enough was invested.

A lofty player target was set for Glacier: 100 million players over a set period of time that included post-launch.

Fortnite’s huge success has publishers like EA chasing the same dollars. Credit: Epic Games

“Obviously, Battlefield has never achieved those numbers before,” one EA employee told me. “It’s important to understand that over about that same period, 2042 has only gotten 22 million,” another said. Even 2016’s Battlefield 1—the most successful game in the franchise by numbers—had achieved “maybe 30 million plus.”

Of course, most previous Battlefield titles had been premium releases, with an up-front purchase cost and no free-to-play mode, whereas successful competitors like Fortnite and Call of Duty made their Battle Royale modes freely available, monetizing users with in-game purchases and season passes that unlocked post-launch content.

It was thought that if Glacier did the same, it could achieve comparable numbers, so a free-to-play Battle Royale mode was made a core offering for the title, alongside a six-hour single-player campaign, traditional Battlefield multiplayer modes like Conquest and Rush, a new F2P mode called Gauntlet, and a community content mode called Portal.

The most expensive Battlefield ever

All this meant that Glacier would have a broader scope than its predecessors. Developers say it has the largest budget of any Battlefield title to date.

The project targeted a budget of more than $400 million back in early 2023, already more than was originally planned.

However, major setbacks significantly disrupted production in 2023 (more on that in a moment) and hundreds of additional developers were brought onto Glacier from various EA-owned studios to get things back on track, significantly increasing the cost. Multiple team members with knowledge of the project’s finances told me that the current projections are now well north of that $400 million amount.

Skepticism in the ranks

Despite the big ambitions of the new leadership team and EA executives, “very few people” working in the studios believed the 100 million target was achievable, two sources told me. Many of those who had worked on Battlefield for a long time at DICE in Stockholm were particularly skeptical.

“Among the things that we are predicting is that we won’t have to cannibalize anyone else’s sales,” one developer said. “That there’s just such an appetite out there for shooters of this kind that we will just naturally be able to get the audience that we need.”

Regarding the lofty player and revenue targets, one source said that “nothing in the market research or our quality deliverables indicates that we would be anywhere near that.”

“I think people are surprised that they actually worked on a next Battlefield game and then increased the ambitions to what they are right now,” said another.

In 2023, a significant disruption to the project put one game mode in jeopardy, foreshadowing a more troubled development than anyone initially imagined.

Ridgeline implodes

Battlefield games have a reputation for middling single-player campaigns, and Battlefield 2042 didn’t include one at all. But part of this big bet on Glacier was the idea of offering the complete package, so Ridgeline Games scaled up while working on a campaign EA hoped would keep Battlefield competitive with Call of Duty, which usually has included a single-player campaign in its releases.

The studio worked on the campaign for about two years while it was also scaling and hiring talent to catch up to established studios within the Battlefield family.

It didn’t work out. In February of 2024, Ridgeline was shuttered, Halo luminary Marcus Lehto left the company, and the rest of the studios were left to pick up the pieces. In a progress review held not long before the closure, Glacier’s top leadership were dissatisfied with what they were seeing, and the call was made.

Sources in EA teams outside Ridgeline told me that there weren’t proper check-ins and internal reviews on the progress, obscuring the true state of the project until the fateful review.

On the other hand, those closer to Ridgeline described a situation in which the team couldn’t possibly complete its objectives, as it was expected to hire and scale up from zero while also meeting the same milestones as established studios with resources already in place. “They kept reallocating funds—essentially staff months—out of our budget,” one person told me. “And, you know, we’re sitting there trying to adapt to doing more with less.”

A marketing image from EA showing now-defunct Ridgeline Games on the list of groups involved. Credit: EA

After the shuttering of Ridgeline, ownership of single-player shifted to three other EA studios: Criterion, DICE, and Motive. But those teams had a difficult road ahead, as “there was essentially nothing left that Ridgeline had spent two years working on that they could pick up on and build, so they had to redo essentially everything from scratch within the same constraints of when the game had to release.”

Single-player was two years behind. As of late spring, it was the only game mode that had failed to reach alpha, well over a year after the initial overall alpha target for the project.

Multiple sources said its implosion was symptomatic of some broader cultural and process problems that affected the rest of the project, too.

Culture shock

Speaking with people who have worked or currently work at DICE in Sweden, the tension between some at that studio and the new, US-based leadership team was obvious—and to a degree, that’s expected.

DICE had “the pride of having started Battlefield and owned that IP,” but now the studio was just “supporting it for American leadership,” said one person who worked there. Further, “there’s a lot of distrust and disbelief… when it comes to just operating toward numbers that very few people believe in apart from the leadership.”

But the tensions appear to go deeper than that. Two other major factors were at play: scaling pains as the scope of the project expanded and differences in cultural values between US leadership and the workers in Europe.

“DICE being originally a Swedish studio, they are a bit more humble. They want to build the best game, and they want to achieve the greatest in terms of the game experience,” one developer told me. “Of course, when you’re operated by EA, you have to set financial expectations in order to be as profitable as possible.”

That tension wasn’t new. But before 2042 failed to meet expectations, DICE Stockholm employees say they were given more leeway to set the vision for the game, as well as greater influence on timeline and targets.

Some EU-based team members were vocally dismayed at how top-down directives from far-flung offices, along with the US company’s emphasis on quarterly profits, have affected Glacier’s development far more than with previous Battlefield titles.

This came up less in talking to US-based staff, but everyone I spoke with on both continents agreed on one thing: Growing pains accompanied the transition from a production environment where one studio leads and others offer support to a new setup with four primary studios—plus outside support from all over EA—and all of it helmed by LA-based leadership.

EA is not alone in adopting this approach; it’s also used by competitor Activision Blizzard on the Call of Duty franchise (though it’s worth noting that a big hit like Epic Games’ Fortnite has a very different structure).

Whereas publishers like EA and Activision Blizzard used to house several studios, each of which worked on its own AAA game, they now increasingly make bigger bets on singular games-as-a-service offerings, with several of their studios working in tandem on a single project.

“Development of games has changed so much in the last 10 to 15 years,” said one developer. The new arrangement excites investors and shareholders, who can imagine returns from the next big unicorn release, but it can be a less creatively fulfilling way to work, as directives come from the top down, and much time is spent on dealing with inter-studio process. Further, it amplifies the effects of failures, with a higher human cost to people working on projects that don’t meet expectations.

It has also made the problems that affected Battlefield 2042’s development more difficult to avoid.

Clearing the gates

EA studios use a system of “gates” to set the pace of development. Projects have to meet certain criteria to pass each gate.

For gate one, teams must have a clear sense of what they want to make and some proof of concept showing that this vision is achievable.

As they approach gate two, they’re building out and testing key technology, asking themselves if it can work at scale.

Gate three signifies full production. Glacier was expected to pass gate three in early 2023, but it was significantly delayed. When it did pass, some on the ground questioned whether it should have.

“I did not see robust budget, staff plan, feature list, risk planning, et cetera, as we left gate three,” said one person. In the way EA usually works, these things would all be expected at this stage.

As the project approached gate three and then alpha, several people within the organization tried to communicate that the game wasn’t on footing as firm as the top-level planning suggested. One person attributed this to the lack of a single source of truth within the organization. While developers tracked issues and progress in one tool, others (including project leadership) leaned on other sources of information that weren’t as tied to on-the-ground reality when making decisions.

A former employee with direct knowledge of production plans told me that as gate three approached, prototypes of some important game features were not ready, but since there wasn’t time to complete proofs of concept, the decision was handed down to move ahead to production even though the normal prerequisites were not met.

“If you don’t have those things fleshed out when you’re leaving pre-pro[duction], you’re just going to be playing catch-up the entire time you’re in production,” this source said.

In some cases, employees who flagged the problems believed they were being punished. Two EA employees each told me they found themselves cut out of meetings once they raised concerns like this.

Gate three was ultimately declared clear, and as of late May 2025, alpha was achieved for everything except the single-player campaign. But I’m told that this occurred with some tasks still un-estimated and many discrepancies remaining, leaving the door open to problems and compromises down the road.

The consequences for players

Because of these issues, the majority of the people I spoke with said they expect planned features or content to be cut before the game actually launches—which is normal, to a degree. But these common game development problems can contribute to other aspects of modern AAA gaming that many consumers find frustrating.

First off, making major decisions so late in the process can lead to huge day-one patches, which players of all types of AAA games often take to Reddit and social media to malign as a frustrating annoyance of modern titles.

Battlefield 2042 had a sizable day-one patch. When multiplayer RPG Anthem (another big investment by EA) launched to negative reviews, that was partly because critics and others with pre-launch access were playing a build that was weeks old; a day-one patch significantly improved some aspects of the game, but that came after the negative press began to pour out.

Anthem, another EA project with a difficult development, launched with a substantial day-one patch. Credit: EA

Glacier’s late arrival at alpha and the teams’ problems with estimating the status of features could lead to a similarly significant day-one patch. That’s in part because EA has to deliver the work to external partners far in advance of the actual launch date.

“They have these external deadlines to do with the submissions into what EA calls ‘first-party’—that’s your PlayStation and Xbox submissions,” one person explained. “They have to at least have builds ready that they can submit.”

What ends up on the disc or what pre-loads from online marketplaces must be finalized long before the game’s actual release date. When a project is far behind or prone to surprises in the final stretch, those last few weeks are where a lot of vital work happens, so big launch patches become a necessity.

These struggles over content often lead to another pet peeve of players: planned launch content being held until later. “There’s a bit of project management within the Battlefield project that they can modify,” a former senior EA employee who worked on the project explained. “They might push it into Season 1 or Season 2.”

That way, players ultimately get the intended feature or content, but in some cases, they may end up paying more for it, as it ends up being part of a post-launch package like a battle pass.

These challenges are a natural extension of the fiscal-quarter-oriented planning that large publishers like EA adhere to. “The final timelines don’t change. The final numbers don’t change,” said one source. “So there is an enormous amount of pressure.”

A campaign conundrum

Single-player is also a problem. “Single-player in itself is massively late—it’s the latest part of the game,” I was told. “Without an enormous patch on day one or early access to the game, it’s unrealistic that they’re going to be able to release it to what they needed it to do.”

If the single-player mode is a linear, narrative campaign as originally planned, it may not be possible to delay missions or other content from the campaign to post-launch seasons.

“Single-player is secondary to multiplayer, so they will shift the priority to make sure that single-player meets some minimal expectations, however you want to measure that. But the multiplayer is the main focus,” an EA employee said.

“They might have to cut a part of the single-player out in order for the game to release with a single-player [campaign] on it,” they continued. “Or they would have to severely work through the summer and into the later part of this year and try to fix that.”

That—and the potential for a disappointing product—is a cost for players, but there are costs for the developers who work on the game, too.

Because timelines must be kept, and not everything can be cut or moved post-launch, it falls on employees to make up the gap. As we’ve seen in countless similar reports about AAA video game development before, that sometimes means longer hours and heavier stress.

AAA’s burnout problem

More than two decades ago, the spouse of an EA employee famously wrote an open letter to bring attention to the long hours and high stress developers there were facing.

Since then, some things have improved. People at all levels within EA are more conscious of the problems that were highlighted, and there have been efforts to mitigate some of them, like more comp time and mental health resources. However, many of those old problems linger in some form.

I heard several first-hand accounts of people working on Glacier who had to take leave for stress, mental health, or exhaustion, ranging from a couple of weeks to several months.

“There’s like—I would hesitate to count—but a large number compared to other projects I’ve been on who have taken mental exhaustion leave here. Some as short as two weeks to a month, some as long as eight months and nine,” one staffer told me after saying they had taken some time themselves.

This was partly because of long hours that were required when working directly with studios in both the US and Europe—a symptom of the new, multi-studio structure.

“My day could start as early as 5:00 [am],” one person said. The first half of the day involved meetings with a studio in one part of the world while the second included meetings with a studio in another region. “Then my evenings would be spent doing my work because I’d be tied up juggling things all across the board and across time zones.”

This sort of workload was not limited to a brief, planned period of focused work, the employees said. Long hours were particularly an issue for those working in or closely with Ridgeline, the studio initially tasked with making the game’s single-player campaign.

From the beginning, members of the Ridgeline team felt they were expected to deliver work at a similar level to that of established studios like DICE or Ripple Effect before they were even fully staffed.

“They’ve done it before,” one person who was involved with Ridgeline said of DICE. “They’re a well-oiled machine.” But Ridgeline was “starting from zero” and was “expected to produce the same stuff.”

Within just six months of the starting line, some developers at Ridgeline said they were already feeling burnt out.

In the wake of the EA Spouse letter, EA developed resources for employees. But in at least some cases, they weren’t much help.

“I sought some, I guess, mental help inside of EA. From HR or within that organization of some sort, just to be able to express it—the difficulties that I experienced personally or from coworkers on the development team that had experienced this, you know, that had lived through that,” said another employee. “And the nature of that is there’s nobody to listen. They pretend to listen, but nobody ultimately listens. Very few changes are made on the back of it.”

This person went on to say that “many people” had sought similar help and felt the same way, as far back as the post-launch period for 2042 and as recently as a few months ago.

Finding solutions

There have been a lot of stories like this about the games industry over the years, and it can feel relentlessly grim to keep reading them—especially when they’re coming alongside frequent news of layoffs, including at EA. Problems are exposed, but solutions don’t get as much attention.

In that spirit, let’s wrap up by listening to what some in the industry have said about what doing things better could look like—with the admitted caveat that these proposals are still not always common practice in AAA development.

“Build more slowly”

When Swen Vincke—studio head for Larian Studios and game director for the runaway success Baldur’s Gate 3—accepted an award at the Game Developers Conference, he took his moment on stage to express frustration at publishers like EA.

“I’ve been fighting publishers my entire life, and I keep on seeing the same, same, same mistakes over and over and over,” he said. “It’s always the quarterly profits. The only thing that matters are the numbers.”

After the awards show, he took to X to clarify his statements, saying, “This message was for those who try to double their revenue year after year. You don’t have to do that. Build more slowly and make your aim improving the state of the art, not squeezing out the last drop.”

Swen Vincke giving a speech at the 2024 Game Developers Choice Awards. Credit: Game Developers Conference

In planning projects like Glacier, publicly traded companies often pursue huge wins—and there’s even more pressure to do so if a competing company has already achieved big success with similar titles.

But going bigger isn’t always the answer, and many in the industry believe the “one big game” strategy is increasingly nonviable.

In this attention economy?

There may not be enough player time or attention to go around, given the numerous games-as-a-service titles that are as large in scope as Call of Duty games or Fortnite. Despite the recent success of new entrant Marvel Rivals, there have been more big AAA live service shooter flops than wins in recent years.

Just last week, a data-based report by prominent games marketing newsletter GameDiscoverCo came to a sobering realization. “Genres like Arena Shooter, Battle Royale, and Hero Shooter look amazing from a revenue perspective. But there’s only 29 games in all of Steam’s history that have grossed >$1m in those subgenres,” wrote GameDiscoverCo’s Simon Carless.

It gets worse. “Only Naraka Bladepoint, Overwatch 2 & Marvel Rivals have grossed >$25m and launched since 2020 in those subgenres,” Carless added. (It’s worth clarifying that he is talking only about Steam numbers here.) That’s a stark counterpoint to reports that Call of Duty has earned more than $30 billion in lifetime revenue.

Employees of game publishers and studios are deeply concerned about this. In a 2025 survey of professional game developers, “one of the biggest issues mentioned was market oversaturation, with many developers noting how tough it is to break through and build a sustainable player base.”

Despite those headwinds, publishers like EA are making big bets in well-established spaces rather than placing a variety of smaller bets in newer areas ripe for development. Some of the biggest recent multiplayer hits on Steam have come from smaller studios that used creative ideas, fresh genres, strong execution, and the luck (or foresight) of reaching the market at exactly the right time.

That might suggest that throwing huge teams and large budgets up against well-fortified competitors is an especially risky strategy—hence some of the anxiety from the EA developers I spoke with.

Working smarter, not harder

That anxiety has led to steadily growing unionization efforts across the industry. From QA workers at Bethesda to more wide-ranging unions at Blizzard and CD Projekt Red, there’s been more movement on this front in the past two or three years than there had been in decades beforehand.

Unionization isn’t a cure-all, and it comes with its own set of new challenges—but it does have the potential to shift some of the conversations toward more sustainable practices, so that’s another potential part of the solution.

Insomniac Games CEO Ted Price spoke authoritatively on sustainability and better work practices for the industry way back at 2021’s Develop:Brighton conference:

I think the default is to brute force the problem—in other words, to throw money or people at it, but that can actually cause more chaos and affect well-being, which goes against that balance. The harder and, in my opinion, more effective solution is to be more creative within constraints… In the stress of hectic production, we often feel we can’t take our foot off the gas pedal—but that’s often what it takes.

That means publishers and studios should plan for problems and work from accurate data about where the team actually is, but it also means having a willingness to give their people more time, provided the capital is available to do so.

Giving people what they need to do their jobs sounds like a simple solution to a complex problem, but it was at the heart of every conversation I had about Glacier.

Most EA developers—including leaders who are beholden to lofty targets—want to make a great game. “At the end of the day, they’re all really good people and they work really hard and they really want to deliver a good product for their customer,” one former EA developer assured me as we ended our call.

As for making the necessary shifts toward sustainability in the industry, “It’s kind of in the best interest of making the best possible game for gamers,” explained another. “I hope to God that they still achieve what they need to achieve within the timelines that they have, for the sake of Battlefield as a game to actually meet the expectations of the gamers and for people to maintain their jobs.”

Samuel Axon is the editorial lead for tech and gaming coverage at Ars Technica. He covers AI, software development, gaming, entertainment, and mixed reality. He has been writing about gaming and technology for nearly two decades at Engadget, PC World, Mashable, Vice, Polygon, Wired, and others. He previously ran a marketing and PR agency in the gaming industry, led editorial for the TV network CBS, and worked on social media marketing strategy for Samsung Mobile at the creative agency SPCSHP. He also is an independent software and game developer for iOS, Windows, and other platforms, and he is a graduate of DePaul University, where he studied interactive media and software development.

Pay up or stop scraping: Cloudflare program charges bots for each crawl

“Imagine asking your favorite deep research program to help you synthesize the latest cancer research or a legal brief, or just help you find the best restaurant in Soho—and then giving that agent a budget to spend to acquire the best and most relevant content,” Cloudflare said, promising that “we enable a future where intelligent agents can programmatically negotiate access to digital resources.”
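Cloudflare has framed pay-per-crawl around the long-dormant HTTP 402 “Payment Required” status code: a crawler that hasn’t arranged payment gets a 402 response instead of the page. Below is a minimal sketch of what a budget-aware agent might do with that signal (the per-crawl price, retry step, and payment details are illustrative assumptions, not Cloudflare’s documented protocol):

    import requests

    class BudgetedCrawler:
        """Sketch of a budget-aware agent that respects pay-per-crawl gates.
        The assumed_price default and the retry step are assumptions for
        illustration; a real agent would negotiate terms via the operator's
        actual protocol."""

        def __init__(self, budget_usd: float):
            self.budget = budget_usd

        def fetch(self, url: str, assumed_price: float = 0.01) -> str | None:
            resp = requests.get(url, headers={"User-Agent": "example-research-agent"})
            if resp.status_code == 402:  # Payment Required: a pay-per-crawl gate
                if assumed_price > self.budget:
                    return None  # over budget, so respect the block and move on
                self.budget -= assumed_price
                # A real agent would retry here with payment credentials attached.
                return None
            return resp.text if resp.ok else None

    crawler = BudgetedCrawler(budget_usd=1.00)
    page = crawler.fetch("https://example.com/article")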

AI crawlers now blocked by default

Cloudflare’s announcement comes after the company rolled out a feature last September that let website owners block AI crawlers in a single click. According to Cloudflare, over 1 million customers chose to block AI crawlers, a signal that people want more control over their content at a time when, by Cloudflare’s observation, robots.txt instructions for AI crawlers were widely “underutilized.”

To protect more customers moving forward, any new customers (including anyone on a free plan) who sign up for Cloudflare services will have their domains, by default, set to block all known AI crawlers.

This marks Cloudflare’s transition away from the dreaded opt-out models of AI scraping to a permission-based model, which a Cloudflare spokesperson told Ars is expected to “fundamentally change how AI companies access web content going forward.”

In a world where some website owners have grown sick and tired of attempting and failing to block AI scraping through robots.txt—including some trapping AI crawlers in tarpits to punish them for ignoring robots.txt—Cloudflare’s feature allows users to choose granular settings to prevent blocks on AI bots from impacting bots that drive search engine traffic. That’s critical for small content creators who want their sites to still be discoverable but not digested by AI bots.
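For context on that underutilized robots.txt route: opting out of AI crawlers while staying visible to search engines takes only a few lines. The sketch below uses real crawler tokens (GPTBot is OpenAI’s crawler, CCBot is Common Crawl’s; any given site’s list will differ), and its weakness is exactly what Cloudflare is addressing, since compliance with these directives is entirely voluntary:

    User-agent: GPTBot
    Disallow: /

    User-agent: CCBot
    Disallow: /

    # Search crawlers that drive referral traffic remain welcome
    User-agent: Googlebot
    Allow: /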

“AI crawlers collect content like text, articles, and images to generate answers, without sending visitors to the original source—depriving content creators of revenue, and the satisfaction of knowing someone is reading their content,” Cloudflare’s blog said. “If the incentive to create original, quality content disappears, society ends up losing, and the future of the Internet is at risk.”

Disclosure: Condé Nast, which owns Ars Technica, is a partner involved in Cloudflare’s beta test.

This story was corrected on July 1 to remove publishers incorrectly listed as participating in Cloudflare’s pay-per-crawl beta.

Supreme Court to decide whether ISPs must disconnect users accused of piracy

The Supreme Court has agreed to hear a case that could determine whether Internet service providers must terminate users who are accused of copyright infringement.

In a list of orders released today, the court granted a petition filed by cable company Cox. The ISP, which was sued by Sony Music Entertainment, is trying to overturn a ruling that it is liable for copyright infringement because it failed to terminate users accused of piracy. Music companies want ISPs to disconnect users whose IP addresses are repeatedly connected to torrent downloads.

“We are pleased the US Supreme Court has decided to address these significant copyright issues that could jeopardize Internet access for all Americans and fundamentally change how Internet service providers manage their networks,” Cox said today.

Cox was once on the hook for $1 billion in the case. In February 2024, the 4th Circuit Court of Appeals overturned the $1 billion verdict, deciding that Cox did not profit directly from copyright infringement committed by users. But the appeals court found Cox liable for willful contributory infringement and ordered a new damages trial.

The Cox petition asks the Supreme Court to decide whether an ISP “can be held liable for ‘materially contributing’ to copyright infringement merely because it knew that people were using certain accounts to infringe and did not terminate access, without proof that the service provider affirmatively fostered infringement or otherwise intended to promote it.”

Trump admin backed Cox; Sony petition denied

The Trump administration backed Cox last month, saying that ISPs shouldn’t be forced to terminate the accounts of people accused of piracy. Solicitor General John Sauer told the court in a brief that the 4th Circuit decision, if not overturned, “subjects ISPs to potential liability for all acts of copyright infringement committed by particular subscribers as long as the music industry sends notices alleging past instances of infringement by those subscribers” and “might encourage providers to avoid substantial monetary liability by terminating subscribers after receiving a single notice of alleged infringement.”

NIH budget cuts affect research funding beyond US borders


European leaders say they will fill the funding void. Is that realistic?

Credit: E+ via Getty Images

Rory de Vries, an associate professor of virology in the Netherlands, was lifting weights at the gym when he noticed a WhatsApp message from his research partners at Columbia University, telling him his research funding had been cancelled. The next day he received the official email: “Hi Rory, Columbia has received a termination notice for this contract, including all subcontracts,” it stated. “Unfortunately, we must advise you to immediately stop work and cease incurring charges on this subcontract.”

De Vries was disappointed, though not surprised—his team knew this might happen under the new Trump administration. His projects focused on immune responses and a new antiviral treatment for respiratory viruses like Covid-19. Animals had responded well in pre-clinical trials, and he was about to explore the next steps for applications in humans. But the news, which he received in March, left him with a cascade of questions: What would happen to the doctoral student he had just hired for his project, a top candidate plucked from a pool of some 300 aspiring scientists? How would his team comply with local Dutch law, which, unlike the US, forbids terminating a contract without cause or notice? And what did the future hold for his projects, two of which contained promising data for treating Covid-19 and other respiratory illnesses in humans?

It was all up in the air, leaving de Vries, who works at the Erasmus Medical Center in Rotterdam and whose research has appeared in top-tier publications, scrambling for last-minute funding from the Dutch government or the European Union.

Of the 20 members in his group, he will soon run out of money to pay the salaries for four. As of June, he estimated that his team has enough to keep going for about six months in its current form if it draws money from other funding sources.

But that still leaves funding uncertain in the long term: “So, yeah, that’s a little bit of an emergency solution,” he said.

Cuts to science funding in the US have devastated American institutions, hitting cancer research and other vital fields, but they also affect a raft of international collaborations and scientists based abroad. In Canada, Australia, South Africa and elsewhere, projects receiving funds from the National Institutes of Health have been terminated or stalled due to recent budget cuts.

Researchers in Europe and the US have long collaborated to tackle tough scientific questions. Certain fields, like rare diseases, particularly benefit from international collaboration because it widens the pool of patients available to study. European leaders have said that they will step into the gap created by Trump’s NIH cuts to make Europe a magnet for science—and they have launched a special initiative to attract US scientists. But some researchers doubt that Europe alone can truly fill the void.

In many European countries, scientist salaries are modest and research funding has lagged behind inflation in recent years. In a May press release, a French scientists’ union described current pay as “scandalously low” and said research funding in France and Europe as a whole lags behind the US, South Korea, China, Taiwan, and Japan. Europe and its member states would need to increase research funding by up to 150 billion euros (roughly USD $173 billion) per year to properly support science, said Boris Gralak, general secretary of the French union, in an interview with Undark.

The shifts are not just about money but about the pattern of how international research unfolds, said Stefan Pfister, a pediatric cancer specialist in Germany who has also received NIH funds. The result, he said, is “this kind of capping and compromising well-established collaborations.”

Funding beyond US borders

For decades, international researchers have received a small slice of the National Institutes of Health budget. In 2024, out of an overall budget of $48 billion, the NIH dispensed $69 million to 125 projects across the European continent and $262 million in funding worldwide, according to the NIH award database.

The US and Europe “have collaborated in science for, you know, centuries at this point,” said Cole Donovan, associate director of science and technology ecosystem development at the Federation of American Scientists, noting that the relationship was formalized in 1997 in an agreement highlighting the two regions’ common interests.

And it has overall been beneficial, said Donovan, who worked in the State Department for a decade to help facilitate such collaborations. In some cases, European nations simply have capabilities that do not exist in the US, like the Czech Republic and Romania, he said, which have some of the most sophisticated laser facilities in the world.

“If you’re a researcher and you want to use those facilities,” he added, “you have to have a relationship with people in those countries.”

The shared nature of research is driven by personal connections and scientific interest, Donovan said: “The relationship in science and technology is organic.”

But with the recent cuts to NIH funding, the fate of those research projects—particularly on the health effects of climate change, transgender health, and Covid-19—has been thrown into question. On May 1, the NIH said it would not reissue foreign subawards, which fund researchers outside the US who work with American collaborators—or agree to US researchers asking to add a foreign colleague to a project. The funding structure lacked transparency and could harm national security, the NIH stated, though it noted that it would not “retroactively revise ongoing awards to remove foreign subawards at this time.” (The NIH would continue to support direct foreign awards, according to the statement.)

The cuts have hit European researchers like de Vries, whose institution, Erasmus MC, was a sub-awardee on three Columbia University grants to support his work. Two projects on Covid-19 transmission and treatment have ended abruptly, while another, on a potential treatment for measles, has been frozen, awaiting review at the end of May, though by late June he still had no news and said he assumed it would not be renewed. “We’re trying to scrape together some money to do some two or three last experiments, so we at least can publish the work and that it’s in literature and anyone else can pick it up,” he said. “But yeah, the work has stopped.”

His Ph.D. students must now shift the focus of their theses; for some, that means pivoting after nearly three years of study.

De Vries’ team has applied for funds from the Dutch government, as well as sought industry funding, for a new project evaluating a vaccine for RSV—something he wouldn’t have done otherwise, he said, since industry funding can limit research questions. “Companies might not be interested in in-depth immunological questions, or a side-by-side comparison of their vaccine with the direct competition,” he wrote in an email.

International scientists who have received direct awards have so far been unaffected, but say they are still nervous about potential further cuts. Pfister, for example, is now leading a five-year project to develop treatments for childhood tumors; with the majority of funding coming from NIH and Cancer Research U.K., a British-based cancer charity, “not knowing what the solution will look like next year,” he said, “generates uncertainties.”

The jointly funded $25 million project—which scientists from nine institutions across five countries including the US are collaborating on—explores treatments for seven childhood cancers and offers a rare opportunity to make progress in tackling tumors in children, Pfister added, as treatments have lagged in the field due to the small market and the high costs of development. Tumors in children differ from those in adults and, until recently, were harder to target, said Pfister. But new discoveries have allowed researchers to target cancer more specifically in children, and global cooperation is central to that progress.

The US groups, which specialize in drug chemistry, develop lead compounds for potential drugs. Pfister’s team then carries out experiments on toxicity and effectiveness. The researchers hope to bring at least one treatment into early-phase clinical trials.

Funding from NIH is confirmed for this financial year. Beyond that, the researchers are staying hopeful, Pfister said.

“It’s such an important opportunity for all of us to work together,” said Pfister, “that we don’t want to think about worst-case scenarios.”

Pfister told Undark that his team in Heidelberg, Germany, has assembled the world’s biggest store of pediatric cancer models; no similar stock currently exists in the US. The work of the researchers is complementary, he stressed: “If significant parts would drop out, you cannot run the project anymore.”

Rare diseases benefit from international projects, he added. In these fields, “We don’t have the patient numbers, we don’t have the critical mass” in one country alone, he said. In his field, researchers conduct early clinical trials in patients on both sides of the Atlantic. “That’s just not because we are crazy, but just because this [is] the only way to physically conduct them.”

The US has spearheaded much drug development, he noted. “Obviously the US has been the powerhouse for biomedical research for the last 50 years, so it’s not surprising that some of the best people and the best groups are sitting there,” he said. A smaller US presence in the field would reduce the critical mass of people and resources available, which would be a disaster for patients, he said. “Any dreams of this all moving to Europe are illusions in my mind.”

While Europe has said it will step in to fill the gap, the amounts discussed were not enough, Gralak said. The amount of money available in Europe “is a very different order of magnitude,” Pfister said. It also won’t help their colleagues in the US, who European researchers need to thrive in order to maintain necessary collaborations, he said. “In the US, we are talking about dozens of billions of dollars less in research, and this cannot be compensated by any means, by the EU or any other funder.” Meanwhile, the French scientists’ union said the country has failed to meet funding promises made as long ago as 2010.

And although Europe receives a sliver of NIH funds, these cuts could have a real impact on public health. De Vries said that his measles treatment was at such an early stage that its potential benefits remained unproven, but if effective it could have been the only treatment of its kind at a time when cases are rising.

And he said the stalling of both his work and other research on Covid-19 leaves the world less prepared for a future pandemic. The antiviral drug he has developed had positive results in ferrets but needs further refinement to work in humans. If the drugs were available for people, “that would be great,” he said. “Then we could actually work on interrupting a pandemic early.”

New opportunities for Europe

The shift in US direction offers an opportunity for the EU, said Mike Galsworthy, a British scientist who campaigned to unite British and EU science in the wake of Brexit. The US will no longer be the default for ambitious researchers from across the world, he said: “It’s not just US scientists going to Canada and Europe. There’s also going to be the huge brain diversion. If you are not a native English speaker and not White, you might be extra nervous about going to the States for work there right now,” he added.

And in recent weeks, European governments have courted fleeing scientists. In April, France launched a platform called Choose France for Science, which allows institutions to request funding for international researchers and highlights an interest in health, climate science, and artificial intelligence, among other research areas. Weeks later, the European Union announced a new program called Choose Europe for Science, aiming to make Europe a “magnet for researchers.” It includes a 500 million euro (roughly USD $578 million) funding package for 2025-2027, new seven-year “super grants” to attract the best researchers, and top-up funds that would help scientists from outside Europe settle into their new institution of choice.

The initial funding comes from money already allocated to Horizon Europe—the EU’s central research and innovation funding program. But some researchers are skeptical. The French union leader, Gralak, who is also a researcher in mathematical physics, described the programs as PR initiatives. He criticized European leaders for taking advantage of the problems in US science to attract talent to Europe, and said leaders should support science in Europe through proper and sufficient investment. The programs are “derisory and unrealistic,” he said.

Others agreed that Europe’s investment in science is inadequate. Bringing scientists to Europe would be “great for science and the talent, but that also means that will come from a line where there’s normally funding for European researchers,” said de Vries, the researcher from Rotterdam. As Mathilde Richard, a colleague of de Vries who works on viruses and has five active NIH grants, told Undark: “Why did I start to apply to NIH funds? And still, the most straightforward answer is that there isn’t enough in Europe.”

In the Netherlands, a right-wing government has said it will cut science funding by a billion euros over the next five years. And while the flagship program Horizon Europe encourages large-scale projects spanning multiple countries, scientists spend years putting together the major cross-country collaborations the system requires. Meanwhile, European Research Council grants are “extremely competitive and limited,” de Vries said.

Richard’s NIH grants pay for 65 percent of her salary and for 80 percent of her team, and she believes she’s the most dependent on US funds of anyone in her department at Erasmus Medical Center in Rotterdam. She applied because the NIH funding seemed more sustainable than local money, she said. In Europe, too often funding is short-term and has a time-consuming administrative burden, she said, which hinders researchers from developing long-term plans. “We have to battle so much to just do our work and find funds to just do our basic work,” she said. “I think we need to advocate for a better and more sustainable way of funding research.”

Scientists, too, are worried about what US cuts mean for global science, beyond the short-term. Paltry science funding could discourage a generation of talented people from entering the field, Pfister suggested: “In the end, the resources are not only monetary, but also the brain resources are reduced.”

Let’s not talk about it

A few months ago, Pfister attended a summit in Boston for Cancer Grand Challenges, a research initiative co-funded by the NIH’s National Cancer Institute and Cancer Research U.K. Nobody from the NIH came because they had no funding to travel. “So we are all sitting in Boston, and they are sitting like 200 miles away,” he said.

More concerning was the fact that those present seemed afraid to discuss why the NIH staff were absent, he said. “It was us Europeans to basically, kind of break the ice to, you know, at least talk about it.”

Pfister said that some European researchers are now hesitant about embarking on US collaborations, even if there is funding available. And some German scientists are taking steps to ensure that they are protected if a similar budget crackdown occurred in Germany, he said—devising independent review processes, separating research policy from funding, and developing funding models less dependent on government-only sources. “I think the most scary part is that you know, this all happened in three months.”

Despite the worry and uncertainty, de Vries offered a hopeful view of the future. “We will not be defeated by NIH cuts,” he said. “I feel confident that Europe will organize itself.”

This article was originally published on Undark. Read the original article.

VMware perpetual license holder receives audit letter from Broadcom

The letter, signed by Aiden Fitzgerald, director of global sales operations at Broadcom, claims that Broadcom will use its time “as efficiently and productively as possible to minimize disruption.”

Still, the security worker that Ars spoke with is concerned about the implications of the audit and said they “expect a big financial impact” for their employer. They added:

Because we are focusing on saving costs and are on a pretty tight financial budget, this will likely have impact on the salary negotiations or even layoffs of employees. Currently, we have some very stressed IT managers [and] legal department [employees] …

The employee noted that they are unsure if their employer exceeded its license limits. If the firm did, it could face “big” financial repercussions, the worker noted.

Users deny wrongdoing

As Broadcom works to ensure that people aren’t using VMware outside its terms, some suggest that the semiconductor giant is wasting some time by investigating organizations that aren’t violating agreements.

After Broadcom started sending cease-and-desist letters, at least one firm claimed that it got a letter from Broadcom despite no longer using VMware at all.

Additionally, various companies claimed that they received a cease-and-desist from Broadcom despite not implementing any updates after their VMware support contract expired.

The employee at the Dutch firm that received an audit notice this month claimed that the only update that their employer has issued to the VMware offerings it uses since support ended was a “critical security patch.”

That employee also claimed to Ars that their company didn’t receive a cease-and-desist letter from Broadcom before being informed of an audit.

Broadcom didn’t respond to Ars’ request for comment ahead of publication, so we’re unable to confirm if the company is sending audit letters without sending cease-and-desist letters first. Ars also reached out to Connor Consulting but didn’t hear back.

“When we saw the news that they were going to send cease-and-desist letters and audits, our management thought it was a bluff and that they would never do that,” the anonymous security worker said.

Broadcom’s litigious techniques to ensure VMware agreements are followed have soured its image among some current and former customers. Broadcom’s $69 billion VMware acquisition has proven lucrative, but as Broadcom approaches two years of VMware ownership, there are still calls for regulation of its practices, which some customers and partners believe are “legally and ethically flawed.”

Actively exploited vulnerability gives extraordinary control over server fleets

On Wednesday, CISA added CVE-2024-54085 to its list of vulnerabilities known to be exploited in the wild. The notice provided no further details.

In an email on Thursday, Eclypsium researchers said the scope of the exploits has the potential to be broad:

  • Attackers could chain multiple BMC exploits to implant malicious code directly into the BMC’s firmware, making their presence extremely difficult to detect and allowing them to survive OS reinstalls or even disk replacements.
  • By operating below the OS, attackers can evade endpoint protection, logging, and most traditional security tools.
  • With BMC access, attackers can remotely power on or off, reboot, or reimage the server, regardless of the primary operating system’s state.
  • Attackers can scrape credentials stored on the system, including those used for remote management, and use the BMC as a launchpad to move laterally within the network.
  • BMCs often have access to system memory and network interfaces, enabling attackers to sniff sensitive data or exfiltrate information without detection.
  • Attackers with BMC access can intentionally corrupt firmware, rendering servers unbootable and causing significant operational disruption.

With no publicly known details of the ongoing attacks, it’s unclear which groups may be behind them. Eclypsium said the most likely culprits would be espionage groups working on behalf of the Chinese government. All five of the specific APT groups Eclypsium named have a history of exploiting firmware vulnerabilities or gaining persistent access to high-value targets.

Eclypsium said the line of vulnerable AMI MegaRAC devices uses an interface known as Redfish. Server makers known to use these products include AMD, Ampere Computing, ASRock, ARM, Fujitsu, Gigabyte, Huawei, Nvidia, Supermicro, and Qualcomm. Some, but not all, of these vendors have released patches for their wares.

Given the damage possible from exploitation of this vulnerability, admins should examine all BMCs in their fleets to ensure they aren’t vulnerable. With products from so many different server makers affected, admins should consult with their manufacturer when unsure if their networks are exposed.
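For admins starting that inventory, the same Redfish interface can be used to collect firmware versions across a fleet. Here is a minimal sketch, assuming HTTPS Redfish endpoints and basic-auth credentials; the hosts, credentials, and output handling are placeholders, and versions gathered this way still need to be checked against AMI’s and each vendor’s advisories.

```python
# A minimal fleet-inventory sketch using the Redfish API that AMI MegaRAC
# BMCs expose. Host addresses and credentials below are placeholders;
# real deployments should use session auth and proper TLS verification.
import requests

BMC_HOSTS = ["10.0.0.10", "10.0.0.11"]  # hypothetical BMC addresses
AUTH = ("admin", "password")            # placeholder credentials

def bmc_firmware_versions(host):
    """Yield (manager ID, firmware version) for each BMC on a host."""
    base = f"https://{host}"
    # The Managers collection enumerates the BMC(s) behind this endpoint.
    managers = requests.get(f"{base}/redfish/v1/Managers",
                            auth=AUTH, verify=False, timeout=10).json()
    for member in managers.get("Members", []):
        manager = requests.get(f"{base}{member['@odata.id']}",
                               auth=AUTH, verify=False, timeout=10).json()
        yield manager.get("Id"), manager.get("FirmwareVersion")

for host in BMC_HOSTS:
    for bmc_id, version in bmc_firmware_versions(host):
        # Compare each version against AMI's and your vendor's advisories.
        print(f"{host} {bmc_id}: firmware {version}")
```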

Actively exploited vulnerability gives extraordinary control over server fleets Read More »

researchers-develop-a-battery-cathode-material-that-does-it-all

Researchers develop a battery cathode material that does it all

Battery electrode materials need to do a lot of things well. They need to be conductors to get charges to and from the ions that shuttle between the electrodes. They also need to have an open structure that allows the ions to move around before they reach a site where they can be stored. The storage of lots of ions also causes materials to expand, creating mechanical stresses that can cause the structure of the electrode material to gradually decay.

Because it’s hard to get all of these properties from a single material, many electrodes are composite materials, with one chemical used to allow ions into and out of the electrode, another to store them, and possibly a third that provides high conductivity. Unfortunately, this can create new problems, with breakdowns at the interfaces between materials slowly degrading the battery’s capacity.

Now, a team of researchers is proposing a material that seemingly does it all. It’s reasonably conductive, it allows lithium ions to move around and find storage sites, and it’s made of cheap and common elements. Perhaps best of all, it undergoes self-healing, smoothing out damage across charge/discharge cycles.

High capacity

The research team, primarily based in China, set out to limit the complexity of cathodes. “Conventional composite cathode designs, which typically incorporate a cathode active material, catholyte, and electronic conducting additive, are often limited by the substantial volume fraction of electrochemically inactive components,” the researchers wrote. The solution, they reasoned, was to create an all-in-one material that does away with most of those inactive components.

A number of papers had reported good luck with chlorine-based chemicals, which allowed ions to move readily through the material but didn’t conduct electricity very well. So the researchers experimented with pre-loading one of these materials with lithium, focusing on iron chloride since it’s very cheap.

Researchers develop a battery cathode material that does it all Read More »

curated-realities:-an-ai-film-festival-and-the-future-of-human-expression

Curated realities: An AI film festival and the future of human expression


We saw 10 AI films and interviewed Runway’s CEO as well as Hollywood pros.

An AI-generated frame of a person looking at an array of television screens

A still from Total Pixel Space, the Grand Prix winner at AIFF 2025.

Last week, I attended a film festival dedicated to shorts made using generative AI. Dubbed AIFF 2025, it was an event precariously balanced between two different worlds.

The festival was hosted by Runway, a company that produces models and tools for generating images and videos. In panels and press briefings, a curated lineup of industry professionals made the case for Hollywood to embrace AI tools. In private meetings, though, I gained a strong sense that a philosophical divide is already widening within the film and television business.

I also interviewed Runway CEO Cristóbal Valenzuela about the tightrope he walks as he pitches his products to an industry that has deeply divided feelings about what role AI will have in its future.

To unpack all this, it makes sense to start with the films, partly because the film that was chosen as the festival’s top prize winner says a lot about the issues at hand.

A festival of oddities and profundities

Since this was the first time the festival had been open to the public, the crowd was a diverse mix: AI tech enthusiasts, working industry creatives, and folks who enjoy movies and were curious about what they’d see—as well as quite a few people who fit into all three groups.

The scene at the entrance to the theater at AIFF 2025 in Santa Monica, California.

The films shown were all short, and most would be more at home at an art film fest than something more mainstream. Some shorts featured an animated aesthetic (including one inspired by anime) and some presented as live action. There was even a documentary of sorts. The films could be made entirely with Runway or other AI tools, or those tools could simply be a key part of a stack that also includes more traditional filmmaking methods.

Many of these shorts were quite weird. Most of us have seen by now that AI video-generation tools excel at producing surreal and distorted imagery—sometimes whether the person prompting the tool wants that or not. Several of these films leaned into that limitation, treating it as a strength.

Representing that camp was Vallée Duhamel’s Fragments of Nowhere, which visually explored the notion of multiple dimensions bleeding into one another. Cars morphed into the sides of houses, and humanoid figures, purported to be inter-dimensional travelers, moved in ways that defied anatomy. While I found this film visually compelling at times, I wasn’t seeing much in it that I hadn’t already seen from dreamcore or horror AI video TikTok creators like GLUMLOT or SinRostroz in recent years.

More compelling were shorts that used this propensity for oddity to generate imagery that was curated and thematically tied to some aspect of human experience or identity. For example, More Tears than Harm by Herinarivo Rakotomanana was a rotoscope animation-style “sensory collage of childhood memories” of growing up in Madagascar. Its specificity and consistent styling lent it a credibility that Fragments of Nowhere didn’t achieve. I also enjoyed Riccardo Fusetti’s Editorial on this front.

More Tears Than Harm, an unusual animated film at AIFF 2025.

Among the 10 films in the festival, two clearly stood above the rest in my estimation—and they ended up being the Grand Prix and Gold prize winners. (The judging panel included filmmakers Gaspar Noé and Harmony Korine, Tribeca Enterprises CEO Jane Rosenthal, IMAX head of post and image capture Bruce Markoe, Lionsgate VFX SVP Brianna Domont, Nvidia developer relations lead Richard Kerris, and Runway CEO Cristóbal Valenzuela, among others.)

Runner-up Jailbird was the aforementioned quasi-documentary. Directed by Andrew Salter, it was a brief piece that introduced viewers to a program in the UK that places chickens in human prisons as companion animals, to positive effect. Why make that film with AI, you might ask? Well, AI was used to achieve shots—like depicting the experience from the chicken’s point of view—that wouldn’t otherwise be doable for a small-budget film. The crowd loved it.

Jailbird, the runner-up at AIFF 2025.

Then there was the Grand Prix winner, Jacob Adler’s Total Pixel Space, which was, among other things, a philosophical defense of the very idea of AI art. You can watch Total Pixel Space on YouTube right now, unlike some of the other films. I found it strangely moving, even as I saw its selection as the festival’s top winner with some cynicism. Of course they’d pick that one, I thought, although I agreed it was the most interesting of the lot.

Total Pixel Space, the Grand Prix winner at AIFF 2025.

Total Pixel Space

Even though it risked navel-gazing and self-congratulation in this venue, Total Pixel Space was filled with compelling imagery that matched the themes, and it touched on some genuinely interesting ideas—at times, it seemed almost profound, didactic as it was.

“How many images can possibly exist?” the film’s narrator asks. To answer that, it explains the concept of total pixel space, which actually reflects how image generation tools work:

Pixels are the building blocks of digital images—tiny tiles forming a mosaic. Each pixel is defined by numbers representing color and position. Therefore, any digital image can be represented as a sequence of numbers…

Just as we don’t need to write down every number between zero and one to prove they exist, we don’t need to generate every possible image to prove they exist. Their existence is guaranteed by the mathematics that defines them… Every frame of every possible film exists as coordinates… To deny this would be to deny the existence of numbers themselves.

The nine-minute film demonstrates that the number of possible images or films is greater than the number of atoms in the universe and argues that photographers and filmmakers may be seen as discovering images that already exist in the possibility space rather than creating something new.
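The underlying arithmetic is easy to reproduce. Here is a back-of-the-envelope sketch (my own numbers, not the film’s) for a single 1920×1080 frame with 24-bit color:

```python
# A back-of-the-envelope check of the film's counting argument, assuming
# one 1920x1080 frame with 24-bit color (one byte per RGB channel).
from math import log10

width, height, channels = 1920, 1080, 3
values_per_channel = 256

# Each pixel channel independently takes one of 256 values, so the count
# of distinct frames is 256 ** (width * height * channels). Computing the
# base-10 exponent keeps the number printable.
digits = width * height * channels * log10(values_per_channel)
print(f"Distinct frames: about 10^{digits:,.0f}")
# Roughly 10^14,981,179 frames -- while the observable universe holds
# only about 10^80 atoms.
```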

Within that framework, it’s easy to argue that generative AI is just another way for artists to “discover” images.

The balancing act

“We are all—and I include myself in that group as well—obsessed with technology, and we keep chatting about models and data sets and training and capabilities,” Runway CEO Cristóbal Valenzuela said to me when we spoke the next morning. “But if you look back and take a minute, the festival was celebrating filmmakers and artists.”

I admitted that I found myself moved by Total Pixel Space’s articulations. “The winner would never have thought of himself as a filmmaker, and he made a film that made you feel something,” Valenzuela responded. “I feel that’s very powerful. And the reason he could do it was because he had access to something that just wasn’t possible a couple of months ago.”

First-time and outsider filmmakers were the focus of AIFF 2025, but Runway works with established studios, too—and those relationships have an inherent tension.

The company has signed deals with companies like Lionsgate and AMC Networks. In some cases, it trains on data provided by those companies; in others, it embeds within them to try to develop tools that fit how they already work. That’s not something competitors like OpenAI are doing yet, so that, combined with a head start in video generation, has allowed Runway to grow and stay competitive so far.

“We go directly into the companies, and we have teams of creatives that are working alongside them. We basically embed ourselves within the organizations that we’re working with very deeply,” Valenzuela explained. “We do versions of our film festival internally for teams as well so they can go through the process of making something and seeing the potential.”

Founded in 2018 at New York University’s Tisch School of the Arts by two Chileans and one Greek co-founder, Runway has a very different story than its Silicon Valley competitors. It was one of the first to bring an actually usable video-generation tool to the masses. Runway also contributed in foundational ways to the popular Stable Diffusion model.

Though it is vastly outspent by competitors like OpenAI, it has taken a hands-on approach to working with existing industries. You won’t hear Valenzuela or other Runway leaders talking about the imminence of AGI or anything so lofty; instead, it’s all about selling the product as something that can solve existing problems in creatives’ workflows.

Still, an artist’s mindset and relationships within the industry don’t negate some fundamental conflicts. There are multiple intellectual property cases involving Runway and its peers, and though the company hasn’t admitted it, there is evidence that it trained its models on copyrighted YouTube videos, among other things.

Cristóbal Valenzuela speaking on the AIFF 2025 stage. Credit: Samuel Axon

Valenzuela suggested that studios are worried about liability, not underlying principles, though, saying:

Most of the concerns on copyright are on the output side, which is like, how do you make sure that the model doesn’t create something that already exists or infringes on something. And I think for that, we’ve made sure our models don’t and are supportive of the creative direction you want to take without being too limiting. We work with every major studio, and we offer them indemnification.

In the past, he has also defended Runway by saying that what it’s producing is not a re-creation of what has come before. He sees the tool’s generative process as distinct—legally, creatively, and ethically—from simply pulling up assets or references from a database.

“People believe AI is sort of like a system that creates and conjures things magically with no input from users,” he said. “And it’s not. You have to do that work. You still are involved, and you’re still responsible as a user in terms of how you use it.”

He voiced this defense of AI as a legitimate tool for artists with apparent conviction, but given that he’s been pitching these products directly to working filmmakers, he was also clearly aware that not everyone agrees with him. There is not even a consensus among those in the industry.

An industry divided

While in LA for the event, I visited separately with two of my oldest friends. Both of them work in the film and television industry in similar disciplines. They each asked what I was in town for, and I told them I was there to cover an AI film festival.

One immediately responded with a grimace of disgust, “Oh, yikes, I’m sorry.” The other responded with bright eyes and intense interest and began telling me how he already uses AI in his day-to-day to do things like extend shots by a second or two for a better edit, and expressed frustration at his company for not adopting the tools faster.

Neither is alone in their attitudes. Hollywood is divided—and not for the first time.

There have been seismic technological changes in the film industry before. There was the transition from silent films to talkies, obviously; moviemaking transformed into an entirely different art. Numerous old jobs were lost, and numerous new jobs were created.

Later, there was the transition from film to digital projection, which may be an even tighter parallel. It was a major disruption, with some companies and careers collapsing while others rose. There were people saying, “Why do we even need this?” while others believed it was the only sane way forward. Some audiences declared the quality worse, and others said it was better. There were analysts arguing it could be stopped, while others insisted it was inevitable.

IMAX’s head of post production, Bruce Markoe, spoke briefly about that history at a press mixer before the festival. “It was a little scary,” he recalled. “It was a big, fundamental change that we were going through.”

People ultimately embraced it, though. “The motion picture and television industry has always been very technology-forward, and they’ve always used new technologies to advance the state of the art and improve the efficiencies,” Markoe said.

When asked whether he thinks the same thing will happen with generative AI tools, he said, “I think some filmmakers are going to embrace it faster than others.” He pointed to pre-visualization as a particularly valuable use of AI tools and noted that some people are already using them that way, but said it will take time for people to get comfortable with them.

And indeed, many, many filmmakers are still loudly skeptical. “The concept of AI is great,” The Mitchells vs. the Machines director Mike Rianda said in a Wired interview. “But in the hands of a corporation, it is like a buzzsaw that will destroy us all.”

Others are interested in the technology but are concerned that it’s being brought into the industry too quickly, with insufficient planning and protections. That includes Crafty Apes Senior VFX Supervisor Luke DiTomasso. “How fast do we roll out AI technologies without really having an understanding of them?” he asked in an interview with Production Designers Collective. “There’s a potential for AI to accelerate beyond what we might be comfortable with, so I do have some trepidation and am maybe not gung-ho about all aspects of it.”

Others remain skeptical that the tools will be as useful as some optimists believe. “AI never passed on anything. It loved everything it read. It wants you to win. But storytelling requires nuance—subtext, emotion, what’s left unsaid. That’s something AI simply can’t replicate,” said Alegre Rodriquez, a member of the Emerging Technology committee at the Motion Picture Editors Guild.

The mirror

Flying back from Los Angeles, I considered two key differences between this generative AI inflection point for Hollywood and the silent/talkie or film/digital transitions.

First, neither of those transitions involved an existential threat to the technology on the basis of intellectual property and copyright. Valenzuela talked about what matters to studio heads—protection from liability over the outputs. But the countless creatives who are critical of these tools also believe they should be consulted and even compensated for their work’s use in the training data for Runway’s models. In other words, it’s not just about the outputs; it’s also about the sourcing. As noted before, there are several cases underway. We don’t know where they’ll land yet.

Second, there’s a more cultural and philosophical issue at play, which Valenzuela himself touched on in our conversation.

“I think AI has become this sort of mirror where anyone can project all their fears and anxieties, but also their optimism and ideas of the future,” he told me.

You don’t have to scroll for long to come across techno-utopians declaring with no evidence that AGI is right around the corner and that it will cure cancer and save our society. You also don’t have to scroll long to encounter visceral anger at every generative AI company from people declaring the technology—which is essentially just a new methodology for programming a computer—fundamentally unethical and harmful, with apocalyptic societal and economic ramifications.

Amid all those bold declarations, this film festival put the focus on the on-the-ground reality. First-time filmmakers who might never have previously cleared Hollywood’s gatekeepers are getting screened at festivals because they can create competitive-looking work with a fraction of the crew and hours. Studios and the people who work there are saying they’re saving time, resources, and headaches in pre-viz, editing, visual effects, and other work that’s usually done under immense time and resource pressure.

“People are not paying attention to the very huge amount of positive outcomes of this technology,” Valenzuela told me, pointing to those examples.

In this online discussion ecosystem that elevates outrage above everything else, that’s likely true. Still, there is a sincere and rigorous conviction among many creatives that their work is contributing to this technology’s capabilities without credit or compensation and that the structural and legal frameworks to ensure minimal human harm in this evolving period of disruption are still inadequate. That’s why we’ve seen groups like the Writers Guild of America West support the Generative AI Copyright Disclosure Act and other similar legislation meant to increase transparency about how these models are trained.

The philosophical question with a legal answer

The winning film argued that “total pixel space represents both the ultimate determinism and the ultimate freedom—every possibility existing simultaneously, waiting for consciousness to give it meaning through the act of choice.”

In making this statement, the film suggested that creativity, above all else, is an act of curation. It’s a claim that nothing, truly, is original. It’s a distillation of human expression into the language of mathematics.

To many, that philosophy rings undeniably true: Every possibility already exists, and artists are just collapsing the waveform to the frame they want to reveal. To others, there is more personal truth to the romantic ideal that artwork is valued precisely because it did not exist until the artist produced it.

All this is to say that the debate about creativity and AI in Hollywood is ultimately a philosophical one. But it won’t be resolved that way.

The industry may succumb to litigation fatigue and a hollowed-out workforce—or it may instead find its way to fair deals, new opportunities for fresh voices, and transparent training sets.

For all this lofty talk about creativity and ideas, the outcome will come down to the contracts, court decisions, and compensation structures—all things that have always been at least as big a part of Hollywood as the creative work itself.

Photo of Samuel Axon

Samuel Axon is the editorial lead for tech and gaming coverage at Ars Technica. He covers AI, software development, gaming, entertainment, and mixed reality. He has been writing about gaming and technology for nearly two decades at Engadget, PC World, Mashable, Vice, Polygon, Wired, and others. He previously ran a marketing and PR agency in the gaming industry, led editorial for the TV network CBS, and worked on social media marketing strategy for Samsung Mobile at the creative agency SPCSHP. He also is an independent software and game developer for iOS, Windows, and other platforms, and he is a graduate of DePaul University, where he studied interactive media and software development.

Curated realities: An AI film festival and the future of human expression Read More »

with-12.2-update,-civilization-vii-tries-to-win-back-traditionalists

With 1.2.2 update, Civilization VII tries to win back traditionalists

There’s also a new loading screen with more detailed information and more interactive elements, which Firaxis says is a hint at other major UI overhauls to come. That said, players have already complained that it doesn’t look very nice because the 2D leader assets that appear on it have been scaled awkwardly and look fuzzy.

The remaining changes are largely balance and systems-related. Trade convoys can now travel over land, which means treasure ships will no longer get stuck in lakes, and there are broader strategic options for tackling the economic path in the Exploration Age. There has been a significant effort to overhaul town focuses, including the addition of a couple of new ones, and the much-anticipated nerf of the Hub Town focus; it now provides +1 influence per connected town instead of +2, though that may still not be quite enough to make the Hub Town, well, not overpowered.

You can find a bunch of other small balance tweaks in the patch notes, including new city-state bonuses, pantheons, and religious beliefs, among other things.

Lastly, and perhaps most importantly to some, you can now issue a command to pet the scout unit’s dog.

Next steps

As far as I can tell, there are still two major traditional features fans are waiting on: autoexplore for scout units and hotseat multiplayer support. Firaxis says it’s working on both, but neither made it into 1.2.2. Players have also been asking for further UI overhauls. Firaxis says those are coming, too.

When Civilization VII launched, I wrote that I quite liked it, but I also pointed out bugs and balance problems and noted that it wouldn’t please traditionalists; the review suggested that, for some players, it might be better to wait. We did a follow-up article about a month in, interviewing the developers, but that was still during the “fix the things that are on fire” stage.

More than any previous update, today’s 1.2.2 is the first one that seems like a natural jumping-on point for people who have been taking a wait-and-see approach.

It’s quite common for strategy games like this to not fully hit their stride until weeks or even months of updates have rolled out. Civilization VII’s UI problems made it a particularly notable example of that trend, but the good news is that it’s also following the same path as games before it that got good post-launch support: slowly, it’s becoming a game a broader range of Civ fans can enjoy.

With 1.2.2 update, Civilization VII tries to win back traditionalists Read More »

microsoft-lays-out-its-path-to-useful-quantum-computing

Microsoft lays out its path to useful quantum computing


Its platform needs error correction that works with different hardware.

Some of the optical hardware needed to make Atom Computing’s machines work. Credit: Atom Computing

On Thursday, Microsoft’s Azure Quantum group announced that it has settled on a plan for getting error correction on quantum computers. While the company pursues its own hardware efforts, the Azure team is a platform provider that currently gives access to several distinct types of hardware qubits. So it has chosen a scheme that is suitable for several different quantum computing technologies (notably excluding its own). The company estimates that the system it has settled on can take hardware qubits with an error rate of about 1 in 1,000 and use them to build logical qubits where errors are instead 1 in 1 million.

While it’s describing the scheme in terms of mathematical proofs and simulations, it hasn’t shown that it works using actual hardware yet. But one of its partners, Atom Computing, is accompanying the announcement with a description of how its machine is capable of performing all the operations that will be needed.

Arbitrary connections

There are similarities and differences between what the company is talking about today and IBM’s recent update of its roadmap, which described another path to error-resistant quantum computing. In IBM’s case, it makes both the software stack that will perform the error correction and the hardware needed to implement it. It uses chip-based hardware, with the connections among qubits mediated by wiring that’s laid out when the chip is fabricated. Since error correction schemes require a very specific layout of connections among qubits, once IBM decides on a quantum error correction scheme, it can design chips with the wiring needed to implement that scheme.

Microsoft’s Azure, in contrast, provides its users with access to hardware from several different quantum computing companies, each based on different technology. Some of them, like Rigetti and Microsoft’s own planned processor, are similar to IBM’s in that they have a fixed layout during manufacturing, and so can only handle codes that are compatible with their wiring layout. But others, such as those provided by Quantinuum and Atom Computing, store their qubits in atoms that can be moved around and connected in arbitrary ways. Those arbitrary connections allow very different types of error correction schemes to be considered.

It can be helpful to think of this using an analogy to geometry. A chip is like a plane, where it’s easiest to form the connections needed for error correction among neighboring qubits; longer connections are possible, but not as easy. Things like trapped ions and atoms provide a higher-dimensional system where far more complicated patterns of connections are possible. (Again, this is an analogy. IBM is using three-dimensional wiring in its processing chips, while Atom Computing stores all its atoms in a single plane.)

Microsoft’s announcement is focused on the sorts of processors that can form the more complicated, arbitrary connections. And, well, it’s taking full advantage of that, building an error correction system with connections that form a four-dimensional hypercube. “We really have focused on the four-dimensional codes due to their amenability to current and near term hardware designs,” Microsoft’s Krysta Svore told Ars.

The code not only describes the layout of the qubits and their connections, but also the purpose of each hardware qubit. Some of them are used to hang on to the value of the logical qubit(s) stored in a single block of code. Others are used for what are called “weak measurements.” These measurements tell us something about the state of the ones that are holding on to the data—not enough to know their values (a measurement that would end the entanglement), but enough to tell if something has changed. The details of the measurement allow corrections to be made that restore the original value.

Microsoft’s error correction system is described in a preprint that the company recently released. It includes a family of related geometries, each of which provides different degrees of error correction, based on how many simultaneous errors they can identify and fix. The descriptions are about what you’d expect for complicated math and geometry—“Given a lattice Λ with an HNF L, the code subspace of the 4D geometric code C_Λ is spanned by the second homology H_2(T^4_Λ, F_2) of the 4-torus T^4_Λ”—but the gist is that all of them convert collections of physical qubits into six logical qubits that can be error corrected.

The more hardware qubits you add to host those six logical qubits, the greater error protection each of them gets. That becomes important because some more sophisticated algorithms will need more than the one-in-a-million error protection that Svore said Microsoft’s favored version will provide. That favorite is what’s called the Hadamard version, which bundles 96 hardware qubits to form six logical qubits, and has a distance of eight (distance being a measure of how many simultaneous errors it can tolerate). You can compare that with IBM’s announcement, which used 144 hardware qubits to host 12 logical qubits at a distance of 12 (so, more hardware, but more logical qubits and greater error resistance).
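To put those figures side by side, here is a small sketch in standard [[n, k, d]] notation (n physical qubits, k logical qubits, distance d). The parameters are the ones quoted above; the rule that a distance-d code corrects up to (d - 1) // 2 simultaneous errors is general coding theory, not a claim from either company.

```python
# Side-by-side comparison of the error correction configurations quoted
# above, in [[n, k, d]] terms: n physical qubits, k logical qubits,
# code distance d.
codes = {
    "Microsoft 4D Hadamard code": {"physical": 96,  "logical": 6,  "distance": 8},
    "IBM roadmap code":           {"physical": 144, "logical": 12, "distance": 12},
}

for name, c in codes.items():
    rate = c["logical"] / c["physical"]
    # A distance-d code can correct up to floor((d - 1) / 2) errors.
    correctable = (c["distance"] - 1) // 2
    print(f"{name}: [[{c['physical']}, {c['logical']}, {c['distance']}]], "
          f"encoding rate {rate:.3f}, corrects up to {correctable} errors")
```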

The other good stuff

On its own, a description of the geometry is not especially exciting. But Microsoft argues that this family of error correction codes has a couple of significant advantages. “All of these codes in this family are what we call single shot,” Svore said. “And that means that, with a very low constant number of rounds of getting information about the noise, one can decode and correct the errors. This is not true of all codes.”

Limiting the number of measurements needed to detect errors is important. For starters, measurements themselves can create errors, so making fewer makes the system more robust. In addition, in things like neutral atom computers, the atoms have to be moved to specific locations where measurements take place, and the measurements heat them up so that they can’t be reused until cooled. So, limiting the measurements needed can be very important for the performance of the hardware.

The second advantage of this scheme, as described in the draft paper, is the fact that you can perform all the operations needed for quantum computing on the logical qubits these schemes host. Just like in regular computers, all the complicated calculations performed on a quantum computer are built up from a small number of simple logical operations. But not every possible logical operation works well with any given error correction scheme. So it can be non-trivial to show that an error correction scheme is compatible with enough of the small operations to enable universal quantum computation.

So, the paper describes how some logical operations can be performed relatively easily, while a few others require manipulations of the error correction scheme in order to work. (These manipulations have names like lattice surgery and magic state distillation, which are good signs that the field doesn’t take itself that seriously.)

So, in sum, Microsoft feels that it has identified an error correction scheme that is fairly compact, can be implemented efficiently on hardware that stores qubits in photons, atoms, or trapped ions, and enables universal computation. What it hasn’t done, however, is show that it actually works. And that’s because it simply doesn’t have the hardware right now. Azure is offering trapped ion machines from IonQ and Quantinuum, but these top out at 56 qubits—well below the 96 needed for its favored version of these 4D codes. The largest machine it has access to is a 100-qubit system from a company called PASQAL, which barely fits the 96 qubits needed, leaving no room for error.

While it should be possible to test smaller versions of codes in the same family, the Azure team has already demonstrated its ability to work with error correction codes based on hypercubes, so it’s unclear whether there’s anything to gain from that approach.

More atoms

Instead, it appears to be waiting for another partner, Atom Computing, to field its next-generation machine, one it’s designing in partnership with Microsoft. “This first generation that we are building together between Atom Computing and Microsoft will include state-of-the-art quantum capabilities, will have 1,200 physical qubits,” Svore said. “And then the next upgrade of that machine will have upwards of 10,000. And so you’re looking at then being able to go to upwards of a hundred logical qubits with deeper and more reliable computation available.”

So, today’s announcement was accompanied by an update on progress from Atom Computing, focusing on a process called “midcircuit measurement.” Normally, during quantum computing algorithms, you have to resist performing any measurements of the value of qubits until the entire calculation is complete. That’s because quantum calculations depend on things like entanglement and each qubit being in a superposition between its two values; measurements can cause all that to collapse, producing definitive values and ending entanglement.

Quantum error correction schemes, however, require that some of the hardware qubits undergo weak measurements multiple times while the computation is in progress. Those are quantum measurements taking place in the middle of a computation—midcircuit measurements, in other words. To show that its hardware will be up to the task that Microsoft expects of it, the company decided to demonstrate midcircuit measurements on qubits implementing a simple error correction code.

The process reveals a couple of notable features that are distinctive to doing this with neutral atoms. To begin with, the atoms being used for error correction have to be moved to a location—the measurement zone—where they can be measured without disturbing anything else. Then, because the measurement typically heats the atom slightly, it has to be cooled back down afterward. Neither of these processes is perfect, so sometimes an atom gets lost and needs to be replaced with one from a reservoir of spares. Finally, the atom’s value needs to be reset, and it has to be sent back to its place in the logical qubit.

Testing revealed that about 1 percent of the atoms get lost each cycle, but the system successfully replaces them. In fact, they set up a system where the entire collection of atoms is imaged during the measurement cycle, and any atom that goes missing is identified by an automated system and replaced.

Overall, without all these systems in place, the fidelity of a qubit is about 98 percent in this hardware. With error correction turned on, even this simple logical qubit saw its fidelity rise to over 99.5 percent. All of which suggests the company’s next computer should be up to some significant tests of Microsoft’s error correction scheme.
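For a rough sense of what that improvement means, the sketch below converts the quoted fidelities into error rates. The fidelity figures come from the demonstration described above; the comparison itself is just arithmetic.

```python
# Converting the quoted fidelities into error rates. The fidelity figures
# are from Atom Computing's demonstration as described above; the rest is
# arithmetic.
raw_fidelity = 0.98         # physical qubit, no correction systems
corrected_fidelity = 0.995  # simple logical qubit, error correction on

raw_error = 1 - raw_fidelity              # 2 percent
corrected_error = 1 - corrected_fidelity  # 0.5 percent
print(f"Error rate: {raw_error:.1%} -> {corrected_error:.1%} "
      f"(at least a {raw_error / corrected_error:.0f}x reduction)")
```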

Waiting for the lasers

The key questions are when that machine will be released and when its successor, which should be capable of performing some real calculations, will follow. Those questions are hard to answer because, more so than some other quantum computing technologies, neutral atom computing depends on something that’s not made by the people who build the computers: lasers. Everything about this system—holding atoms in place, moving them around, measuring, performing manipulations—is done with a laser. The lower the noise of the laser (in terms of things like frequency drift and energy fluctuations), the better performance it’ll have.

So, while Atom can explain its needs to its suppliers and work with them to get things done, it has less control over its fate than some other companies in this space.

Photo of John Timmer

John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.

Microsoft lays out its path to useful quantum computing Read More »

new-dating-for-white-sands-footprints-confirms-controversial-theory

New dating for White Sands footprints confirms controversial theory

Some of the sediment layers contained the remains of ancient grass seeds mixed with the sediment. Bennett and his colleagues radiocarbon-dated seeds from the layer just below the oldest footprints and the layer just above the most recent ones. According to those 2021 results, the oldest footprints were made sometime after 23,000 years ago; the most recent ones were made sometime before 21,000 years ago.

At that time, the northern half of the continent lay beneath ice sheets several kilometers thick. The existence of 23,000-year-old footprints could only mean that people were already living in what’s now New Mexico before the ice sheets sealed off the southern half of the continent from the rest of the world for the next few thousand years.

Ancient human footprints found in situ at White Sands National Park in New Mexico. Credit: Jeffrey S. Pigati et al., 2023

Other researchers were skeptical of those results, pointing out that the aquatic plant analyzed (Ruppia cirrhosa) is prone to absorbing ancient carbon from groundwater, which could have skewed the findings and made the footprints seem older than they actually were. And the seed samples weren’t taken from the same sediment layers as the footprints.

So the same team followed up by radiocarbon-dating pollen sampled from the same layers as some of the footprints—those that weren’t too thin for sampling. This pollen came from pine, spruce, and fir trees, i.e., terrestrial plants, thereby addressing the issue of groundwater carbon seeping into samples. They also analyzed quartz grains taken from clay just above the lowest layer of footprints using a different method, optically stimulated luminescence dating. They published those findings, which agreed with their earlier estimate, in 2023.
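For context, here is a generic illustration of how a raw radiocarbon age falls out of the decay law (the team’s actual workflow also involves calibration curves, which this sketch omits):

```python
# Generic radiocarbon age calculation: solve N(t) = N0 * exp(-lam * t)
# for t, using carbon-14's half-life of 5,730 years. Real studies also
# calibrate raw ages against curves like IntCal to get calendar dates.
from math import log

HALF_LIFE = 5730.0        # carbon-14 half-life, in years
LAM = log(2) / HALF_LIFE  # decay constant

def radiocarbon_age(fraction_remaining):
    """Age in years from the measured C-14 fraction relative to modern."""
    return log(1 / fraction_remaining) / LAM

# A sample retaining about 6.2 percent of its original C-14 dates to
# roughly 23,000 years -- the age range at issue for the oldest prints.
print(f"{radiocarbon_age(0.062):,.0f} years")
```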

New dating for White Sands footprints confirms controversial theory Read More »

via-the-false-claims-act,-nih-puts-universities-on-edge

Via the False Claims Act, NIH puts universities on edge


Funding pause at U. Michigan illustrates uncertainty around new language in NIH grants.

University of Michigan students walk on the UM campus next to signage displaying the University’s “Core Values” on April 3, 2025 in Ann Arbor, Michigan. Credit: Bill Pugliano/Getty Images

Earlier this year, a biomedical researcher at the University of Michigan received an update from the National Institutes of Health. The federal agency, which funds a large swath of the country’s medical science, had given the green light to begin releasing funding for the upcoming year on the researcher’s multi-year grant.

Not long after, the researcher learned that the university had placed the grant on hold. The school’s lawyers, it turned out, were wrestling with a difficult question: whether to accept new terms in the Notice of Award, a legal document that outlines the grant’s terms and conditions.

Other researchers at the university were having the same experience. Indeed, Undark’s reporting suggests that the University of Michigan—among the top three university recipients of NIH funding in 2024, with more than $750 million in grants—had quietly frozen some, perhaps all, of its incoming NIH funding dating back to at least the second half of April.

The university’s director of public affairs, Kay Jarvis, declined to comment for this article or answer a list of questions from Undark, instead pointing to the institution’s research website.

In conversations with Michigan scientists, and in internal communications obtained by Undark, administrators explained the reason for the delays: University officials were concerned about new language in NIH grant notices. That language said that universities will be subject to liability under a Civil War-era statute called the False Claims Act if they fail to abide by civil rights laws and a January 20 executive order related to gender.

For the most part, public attention to NIH funding has focused on what the new Trump administration is doing on its end, including freezing and terminating grants at elite institutions for alleged Title VI and IX violations, and slashing funding for newly disfavored areas of research. The events in Ann Arbor show how universities themselves are struggling to cope with a wave of recent directives from the federal government.

The new terms may expose universities to significant legal risk, according to several experts. “The Trump administration is using the False Claims Act as a massive threat to the bottom lines of research institutions,” said Samuel Bagenstos, a law professor at the University of Michigan, who served as general counsel for the Department of Health and Human Services during the Biden administration. (Bagenstos said he has not advised the university’s lawyers on this issue.) That law entitles the government to collect up to three times the financial damage. “So potentially you could imagine the Trump administration seeking all the federal funds times three that an institution has received if they find a violation of the False Claims Act.”

Such an action, Bagenstos and another legal expert said, would be unlikely to hold up in court. But the possibility, he said, is enough to cause concern for risk-averse institutions.

The grant pauses unsettled the affected researchers. One of them noted that the university had put a hold on a grant that supported a large chunk of their research program. “I don’t have a lot of money left,” they said.

The researcher worried that if funds weren’t released soon, personnel would have to be fired and medical research halted. “There’s a feeling in the air that somebody’s out to get scientists,” said the researcher, reflecting on the impact of all the changes at the federal level. “And it could be your turn tomorrow for no clear reason.” (The researcher, like other Michigan scientists interviewed for this story, spoke on condition of anonymity for fear of retaliation.)

Bagenstos said some other universities had also halted funding—a claim Undark was unable to confirm. At Michigan, at least, money is now flowing: On Wednesday, June 11, just hours after Undark sent a list of questions to the university’s public affairs office, some researchers began receiving emails saying their funding would be released. And research administrators received a message stating that the university would begin releasing the more than 270 awards that it had placed on hold.

The federal government distributes tens of billions of dollars each year to universities through NIH funding. In the past, the terms of those grants have required universities to comply with civil rights laws. More recently, though, the scope of those expectations has expanded. Multiple recent award notices viewed by Undark now contain language referring to a January 20 executive order that states the administration “will defend women’s rights and protect freedom of conscience by using clear and accurate language and policies that recognize women are biologically female, and men are biologically male.” The notices also contain four bullet points, one of which asks the grant recipient—meaning the researcher’s institution—to acknowledge that “a knowing false statement” regarding compliance is subject to liability under the False Claims Act.

Alongside this change, on April 21, the agency issued a policy requiring universities to certify that they will not participate in discriminatory DEI activities or boycotts of Israel, noting that false statements would be subject to penalties under the False Claims Act. (That measure was rescinded in early June, reinstated, and then rescinded again while the agency awaits further White House guidance.) Additionally, in May, an announcement from the Department of Justice encouraged use of the False Claims Act in civil rights enforcement.

Some experts said that signing onto FCA terms could put universities in a vulnerable position, not because they aren’t following civil rights laws, but because the new grant language is vague and seemingly ripe for abuse.

The False Claims Act says someone who knowingly submits a false claim to the government can be held liable for triple damages. In the case of a major research institution like the University of Michigan, worst-case scenarios could range into the billions of dollars.

It’s not just the dollar amount that may cause schools to act in a risk-averse way, said Bagenstos. The False Claims Act also contains what’s known as a “qui tam” provision, which allows private entities to file a lawsuit on behalf of the United States and then potentially take a piece of the recovery money. “The government does not have the resources to identify and pursue all cases of legitimate fraud” in the country, said Bagenstos, so generally the provision is a useful one. But it can be weaponized when “yoked to a pernicious agenda of trying to suppress speech by institutions of higher learning, or simply to try to intimidate them.”

Avoiding the worst-case scenario might seem straightforward enough: Just follow civil rights laws. But in reality, it’s not entirely clear where a university’s responsibility starts and stops. For example, an institution might officially adopt policies that align with the new executive orders. But if, say, a student group, or a sociology department, steps out of bounds, then the university might be understood to not be in compliance—particularly by a less-than-friendly federal administration.

University attorneys may also balk at the ambiguity and vagueness of terms like “gender ideology” and “DEI,” said Andrew Twinamatsiko, a director of the Center for Health Policy and the Law at the O’Neill Institute at Georgetown Law. Litigation-averse universities may end up rolling back their programming, he said, because they don’t want to run afoul of the government’s overly broad directives.

“I think this is a time that calls for some courage,” said Bagenstos. If every university decides the risks are too great, then the current policies will prevail without challenge, he said, even though some are legally unsound. And the bar for False Claims Act liability is actually quite high, he pointed out: There’s a requirement that the person knowingly made a false statement or deliberately ignored facts. Universities are actually well-positioned to prevail in court, said Bagenstos and other legal experts. The issue is that they don’t want to engage in drawn-out and potentially costly litigation.

One possibility might be for a trade group, such as the Association of American Universities, to mount the legal challenge, said Richard Epstein, a libertarian legal scholar. In his view, the new NIH terms are unconstitutional because such conditions on spending, which he characterized as “unrelated to scientific endeavors,” need to be authorized by Congress.

The NIH did not respond to repeated requests for comment.

Some people expressed surprise at the insertion of the False Claims Act language.

Michael Yassa, a professor of neurobiology and behavior at the University of California, Irvine, said that he wasn’t aware of the new terms until Undark contacted him. The NIH-supported researcher and study-section chair started reading from a recent Notice of Award during the interview. “I can’t give you a straight answer on this one,” he said, and after further consideration, added, “Let me run this by a legal team.”

Andrew Miltenberg, an attorney in New York City who’s nationally known for his work on Title IX litigation, was more pointed. “I don’t actually understand why it’s in there,” he said, referring to the new grant language. “I don’t think it belongs in there. I don’t think it’s legal, and I think it’s going to take some lawsuits to have courts interpret the fact that there’s no real place for it.”

This article was originally published on Undark. Read the original article.

Via the False Claims Act, NIH puts universities on edge Read More »