Features

What’s wrong with AAA games? The development of the next Battlefield has answers.


EA insiders describe stress and setbacks in a project that’s too big to fail.

A marketing image for Battlefield depicting soldiers and jets

After the lukewarm reception of Battlefield 2042, EA is doubling down.

It’s been 23 years since the first Battlefield game, and the video game industry is nearly unrecognizable to anyone who was immersed in it then. Many people who loved the games of that era have since become frustrated with where AAA (big budget) games have ended up.

Today, publisher EA is in full production on the next Battlefield title—but sources close to the project say it has faced culture clashes, ballooning budgets, and major disruptions that have left many team members fearful that parts of the game will not be finished to players’ satisfaction in time for launch during EA’s fiscal year.

They also say the company has made major structural and cultural changes to how Battlefield games are created to ensure it can release titles of unprecedented scope and scale. This is all to compete with incumbents like the Call of Duty games and Fortnite, even though no prior Battlefield has achieved anywhere close to that level of popular and commercial success.

I spoke with current and former EA employees who work or have recently worked directly on the game—they span multiple studios, disciplines, and seniority levels and all agreed to talk about the project on the condition of anonymity. Asked to address the reporting in this article, EA declined to comment.

According to these first-hand accounts, the changes have led to extraordinary stress and long hours. Every employee I spoke to across several studios either took exhaustion leave themselves or directly knew staffers who did. Two people who had worked on other AAA projects within EA or elsewhere in the industry said this project had more people burning out and needing to take leave than they’d ever seen before.

Each of the sources I spoke with shared sincere hopes that the game will still be a hit with players, pointing to its strong conceptual start and the talent, passion, and pedigree of its development team. Whatever the end result, the inside story of the game’s development illuminates why the medium and the industry are in the state they’re in today.

The road to Glacier

To understand exactly what’s going on with the next Battlefield title—codenamed Glacier—we need to rewind a bit.

In the early 2010s, Battlefield 3 and Battlefield 4 expanded the franchise audience to more directly compete with Call of Duty, the heavy hitter at the time. Developed primarily by EA-owned, Sweden-based studio DICE, the Battlefield games mixed the franchise’s promise of combined arms warfare and high player counts with Call of Duty’s faster pace and greater platform accessibility.

This was a golden age for Battlefield. However, 2018’s Battlefield V launched to a mixed reception, and EA began losing players’ attention in an expanding industry.

Battlefield 3, pictured here, kicked off the franchise’s golden age. Credit: EA

Instead, the hot new online shooters were Overwatch (2016), Fortnite (2017), and a resurgent Call of Duty. Fortnite was driven by a popular new gameplay mode called Battle Royale, and while EA attempted a Battle Royale mode in Battlefield V, it didn’t achieve the desired level of popularity.

After V, DICE worked on a Battlefield title that was positioned as a throwback to the glory days of 3 and 4. That game would be called Battlefield 2042 (after the future year in which it was set), and it would launch in 2021.

The launch of Battlefield 2042 is where Glacier’s development story begins. Simply put, the game was not fun enough, and Battlefield 2042 launched as a dud.

Don’t repeat past mistakes

Players were disappointed—but so were those who worked on 2042. Sources tell me that prior to launch, Battlefield 2042 “massively missed” its alpha target—a milestone by which most or all of the foundational features of the game are meant to be in place. Because of this, the game’s final release would need to be delayed in order to deliver on the developers’ intent (and on players’ expectations).

“Realistically, they have to delay the game by at least six months to complete it. Now, they eventually only delayed it by, I think, four or five weeks, which from a development point of view means very little,” said one person who worked closely with the project at the time.

Developers at DICE had hoped for more time. Morale fell, but the team marched ahead to the game’s lukewarm launch.

Ultimately, EA made back some ground with what the company calls “live operations”—additional content and updates in the months following launch—but the game never fulfilled its ambitions.

Plans were already underway for the next Battlefield game, so a postmortem was performed on 2042. It concluded that the problems had been in execution, not vision. New processes were put into place so that issues could be identified earlier and milestones like the alpha wouldn’t be missed.

To help achieve this, EA hired three industry luminaries to lead Glacier, all of them based in the United States.

The franchise leadership dream team

2021 saw EA bring on Byron Beede as general manager for Battlefield; he had previously been general manager for both Call of Duty (including the Warzone Battle Royale) and the influential shooter Destiny. EA also hired Marcus Lehto—co-creator of Halo—as creative chief of a newly formed Seattle studio called Ridgeline Games, which would lead the development of Glacier’s single-player campaign.

Finally, there was Vince Zampella, one of the leaders of the team that initially created Call of Duty in 2003. He joined EA in 2010 to work on other franchises, but in 2021, EA announced that Zampella would oversee Battlefield moving forward.

In the wake of these changes, some prominent members of DICE departed, including General Manager Oskar Gabrielson and Creative Director Lars Gustavsson, who had been known by the nickname “Mr. Battlefield.” With this changing of the guard, EA was ready to place a bigger bet than ever on the next Battlefield title.

100 million players

While 2042 struggled, competitors Call of Duty and Fortnite were posting astonishing player and revenue numbers, thanks in large part to the popularity of their Battle Royale modes.

EA’s executive leadership believed Battlefield had the potential to stand toe to toe with them, if the right calls were made and enough was invested.

A lofty player target was set for Glacier: 100 million players over a set period of time that included post-launch.

Fortnite characters looking across the many islands and vast realm of the game.

Fortnite’s huge success has publishers like EA chasing the same dollars. Credit: Epic Games

“Obviously, Battlefield has never achieved those numbers before,” one EA employee told me. “It’s important to understand that over about that same period, 2042 has only gotten 22 million,” another said. Even 2016’s Battlefield 1—the most successful game in the franchise by numbers—had achieved “maybe 30 million plus.”

Of course, most previous Battlefield titles had been premium releases, with an up-front purchase cost and no free-to-play mode, whereas successful competitors like Fortnite and Call of Duty made their Battle Royale modes freely available, monetizing users with in-game purchases and season passes that unlocked post-launch content.

It was thought that if Glacier did the same, it could achieve comparable numbers, so a free-to-play Battle Royale mode was made a core offering for the title, alongside a six-hour single-player campaign, traditional Battlefield multiplayer modes like Conquest and Rush, a new F2P mode called Gauntlet, and a community content mode called Portal.

The most expensive Battlefield ever

All this meant that Glacier would have a broader scope than its predecessors. Developers say it has the largest budget of any Battlefield title to date.

The project targeted a budget of more than $400 million back in early 2023—already more than originally planned.

However, major setbacks significantly disrupted production in 2023 (more on that in a moment) and hundreds of additional developers were brought onto Glacier from various EA-owned studios to get things back on track, significantly increasing the cost. Multiple team members with knowledge of the project’s finances told me that the current projections are now well north of that $400 million amount.

Skepticism in the ranks

Despite the big ambitions of the new leadership team and EA executives, “very few people” working in the studios believed the 100 million target was achievable, two sources told me. Many of those who had worked on Battlefield for a long time at DICE in Stockholm were particularly skeptical.

“Among the things that we are predicting is that we won’t have to cannibalize anyone else’s sales,” one developer said. “That there’s just such an appetite out there for shooters of this kind that we will just naturally be able to get the audience that we need.”

Regarding the lofty player and revenue targets, one source said that “nothing in the market research or our quality deliverables indicates that we would be anywhere near that.”

“I think people are surprised that they actually worked on a next Battlefield game and then increased the ambitions to what they are right now,” said another.

In 2023, a significant disruption to the project put one game mode in jeopardy, foreshadowing a more troubled development than anyone initially imagined.

Ridgeline implodes

Battlefield games have a reputation for middling single-player campaigns, and Battlefield 2042 didn’t include one at all. But part of this big bet on Glacier was the idea of offering the complete package, so Ridgeline Games scaled up while working on a campaign EA hoped would keep Battlefield competitive with Call of Duty, which usually has included a single-player campaign in its releases.

The studio worked on the campaign for about two years while it was also scaling and hiring talent to catch up to established studios within the Battlefield family.

It didn’t work out. In February 2024, Ridgeline was shuttered, Halo luminary Marcus Lehto left the company, and the remaining studios were left to pick up the pieces. At a review held not long before the closure, Glacier’s top leadership was dissatisfied with the progress on display, and the call was made.

Sources in EA teams outside Ridgeline told me that there weren’t proper check-ins and internal reviews on the progress, obscuring the true state of the project until the fateful review.

On the other hand, those closer to Ridgeline described a situation in which the team couldn’t possibly complete its objectives, as it was expected to hire and scale up from zero while also meeting the same milestones as established studios with resources already in place. “They kept reallocating funds—essentially staff months—out of our budget,” one person told me. “And, you know, we’re sitting there trying to adapt to doing more with less.”

A Battlefield logo with a list of studios beneath it

A marketing image from EA showing now-defunct Ridgeline Games on the list of groups involved. Credit: EA

After the shuttering of Ridgeline, ownership of single-player shifted to three other EA studios: Criterion, DICE, and Motive. But those teams had a difficult road ahead, as “there was essentially nothing left that Ridgeline had spent two years working on that they could pick up on and build, so they had to redo essentially everything from scratch within the same constraints of when the game had to release.”

Single-player was two years behind. As of late spring, it was the only game mode that had failed to reach alpha, well over a year after the initial overall alpha target for the project.

Multiple sources said its implosion was symptomatic of some broader cultural and process problems that affected the rest of the project, too.

Culture shock

Speaking with people who have worked or currently work at DICE in Sweden, the tension between some at that studio and the new, US-based leadership team was obvious—and to a degree, that’s expected.

DICE had “the pride of having started Battlefield and owned that IP,” but now the studio was just “supporting it for American leadership,” said one person who worked there. Further, “there’s a lot of distrust and disbelief… when it comes to just operating toward numbers that very few people believe in apart from the leadership.”

But the tensions appear to go deeper than that. Two other major factors were at play: scaling pains as the scope of the project expanded and differences in cultural values between US leadership and the workers in Europe.

“DICE being originally a Swedish studio, they are a bit more humble. They want to build the best game, and they want to achieve the greatest in terms of the game experience,” one developer told me. “Of course, when you’re operated by EA, you have to set financial expectations in order to be as profitable as possible.”

That tension wasn’t new. But before 2042 failed to meet expectations, DICE Stockholm employees say they were given more leeway to set the vision for the game, as well as greater influence on timeline and targets.

Some EU-based team members were vocally dismayed at how top-down directives from far-flung offices, along with the US company’s emphasis on quarterly profits, affected Glacier’s development far more than previous Battlefield titles.

This came up less in talking to US-based staff, but everyone I spoke with on both continents agreed on one thing: Growing pains accompanied the transition from a production environment where one studio leads and others offer support to a new setup with four primary studios—plus outside support from all over EA—and all of it helmed by LA-based leadership.

EA is not alone in adopting this approach; it’s also used by competitor Activision-Blizzard on the Call of Duty franchise (though it’s worth noting that a big hit like Epic Games’ Fortnite has a very different structure).

Whereas publishers like EA and Activision-Blizzard used to house several studios, each of which worked on its own AAA game, they now increasingly make bigger bets on singular games-as-a-service offerings, with several of their studios working in tandem on a single project.

“Development of games has changed so much in the last 10 to 15 years,” said one developer. The new arrangement excites investors and shareholders, who can imagine returns from the next big unicorn release, but it can be a less creatively fulfilling way to work, as directives come from the top down, and much time is spent on dealing with inter-studio process. Further, it amplifies the effects of failures, with a higher human cost to people working on projects that don’t meet expectations.

It has also made the problems that affected Battlefield 2042’s development more difficult to avoid.

Clearing the gates

EA studios use a system of “gates” to set the pace of development. Projects have to meet certain criteria to pass each gate.

For gate one, teams must have a clear sense of what they want to make and some proof of concept showing that this vision is achievable.

As they approach gate two, they’re building out and testing key technology, asking themselves if it can work at scale.

Gate three signifies full production. Glacier was expected to pass gate three in early 2023, but it was significantly delayed. When it did pass, some on the ground questioned whether it should have.

“I did not see robust budget, staff plan, feature list, risk planning, et cetera, as we left gate three,” said one person. In the way EA usually works, these things would all be expected at this stage.

As the project approached gate three and then alpha, several people within the organization tried to communicate that the game wasn’t on footing as firm as the top-level planning suggested. One person attributed this to the lack of a single source of truth within the organization. While developers tracked issues and progress in one tool, others (including project leadership) leaned on other sources of information that weren’t as tied to on-the-ground reality when making decisions.

A former employee with direct knowledge of production plans told me that as gate three approached, prototypes of some important game features were not ready, but since there wasn’t time to complete proofs of concept, the decision was handed down to move ahead to production even though the normal prerequisites were not met.

“If you don’t have those things fleshed out when you’re leaving pre-pro[duction], you’re just going to be playing catch-up the entire time you’re in production,” this source said.

In some cases, employees who flagged the problems believed they were being punished. Two EA employees each told me they found themselves cut out of meetings once they raised concerns like this.

Gate three was ultimately declared clear, and as of late May 2025, alpha was achieved for everything except the single-player campaign. But I’m told that this occurred with some tasks still un-estimated and many discrepancies remaining, leaving the door open to problems and compromises down the road.

The consequences for players

Because of these issues, the majority of the people I spoke with said they expect planned features or content to be cut before the game actually launches—which is normal, to a degree. But these common game development problems can contribute to other aspects of modern AAA gaming that many consumers find frustrating.

First off, making major decisions so late in the process can lead to huge day-one patches. Players of all types of AAA games often take to Reddit and social media to malign day-one patches as a frustrating annoyance for modern titles.

Battlefield 2042 had a sizable day-one patch. When multiplayer RPG Anthem (another big investment by EA) launched to negative reviews, that was partly because critics and others with pre-launch access were playing a build that was weeks old; a day-one patch significantly improved some aspects of the game, but that came after the negative press began to pour out.

A player character confronts a monster in Anthem

Anthem, another EA project with a difficult development, launched with a substantial day-one patch. Credit: EA

Glacier’s late arrival at alpha and the teams’ problems with estimating the status of features could lead to a similarly significant day-one patch. That’s in part because EA has to deliver the work to external partners far in advance of the actual launch date.

“They have these external deadlines to do with the submissions into what EA calls ‘first-party’—that’s your PlayStation and Xbox submissions,” one person explained. “They have to at least have builds ready that they can submit.”

What ends up on the disc or what pre-loads from online marketplaces must be finalized long before the game’s actual release date. When a project is far behind or prone to surprises in the final stretch, those last few weeks are where a lot of vital work happens, so big launch patches become a necessity.

These struggles over content often lead to another pet peeve of players: planned launch content being held until later. “There’s a bit of project management within the Battlefield project that they can modify,” a former senior EA employee who worked on the project explained. “They might push it into Season 1 or Season 2.”

That way, players ultimately get the intended feature or content, but in some cases, they may end up paying more for it, as it ends up being part of a post-launch package like a battle pass.

These challenges are a natural extension of the fiscal-quarter-oriented planning that large publishers like EA adhere to. “The final timelines don’t change. The final numbers don’t change,” said one source. “So there is an enormous amount of pressure.”

A campaign conundrum

Single-player is also a problem. “Single-player in itself is massively late—it’s the latest part of the game,” I was told. “Without an enormous patch on day one or early access to the game, it’s unrealistic that they’re going to be able to release it to what they needed it to do.”

If the single-player mode is a linear, narrative campaign as originally planned, it may not be possible to delay missions or other content from the campaign to post-launch seasons.

“Single-player is secondary to multiplayer, so they will shift the priority to make sure that single-player meets some minimal expectations, however you want to measure that. But the multiplayer is the main focus,” an EA employee said.

“They might have to cut a part of the single-player out in order for the game to release with a single-player [campaign] on it,” they continued. “Or they would have to severely work through the summer and into the later part of this year and try to fix that.”

That—and the potential for a disappointing product—is a cost for players, but there are costs for the developers who work on the game, too.

Because timelines must be kept, and not everything can be cut or moved post-launch, it falls on employees to make up the gap. As we’ve seen in countless similar reports about AAA video game development before, that sometimes means longer hours and heavier stress.

AAA’s burnout problem

More than two decades ago, the spouse of an EA employee famously wrote an open letter to bring attention to the long hours and high stress developers there were facing.

Since then, some things have improved. People at all levels within EA are more conscious of the problems that were highlighted, and there have been efforts to mitigate some of them, like more comp time and mental health resources. However, many of those old problems linger in some form.

I heard several first-hand accounts of people working on Glacier who had to take stress, mental health, or exhaustion leave, ranging from a couple of weeks to several months.

“There’s like—I would hesitate to count—but a large number compared to other projects I’ve been on who have taken mental exhaustion leave here. Some as short as two weeks to a month, some as long as eight months and nine,” one staffer told me after saying they had taken some time themselves.

This was partly because of long hours that were required when working directly with studios in both the US and Europe—a symptom of the new, multi-studio structure.

“My day could start as early as 5:00 [am],” one person said. The first half of the day involved meetings with a studio in one part of the world while the second included meetings with a studio in another region. “Then my evenings would be spent doing my work because I’d be tied up juggling things all across the board and across time zones.”

This sort of workload was not limited to a brief, planned period of focused work, the employees said. Long hours were particularly an issue for those working in or closely with Ridgeline, the studio initially tasked with making the game’s single-player campaign.

From the beginning, members of the Ridgeline team felt they were expected to deliver work at a similar level to that of established studios like DICE or Ripple Effect before they were even fully staffed.

“They’ve done it before,” one person who was involved with Ridgeline said of DICE. “They’re a well-oiled machine.” But Ridgeline was “starting from zero” and was “expected to produce the same stuff.”

Within just six months of the starting line, some developers at Ridgeline said they were already feeling burnt out.

In the wake of the EA Spouse letter, EA developed resources for employees. But in at least some cases, they weren’t much help.

“I sought some, I guess, mental help inside of EA. From HR or within that organization of some sort, just to be able to express it—the difficulties that I experienced personally or from coworkers on the development team that had experienced this, you know, that had lived through that,” said another employee. “And the nature of that is there’s nobody to listen. They pretend to listen, but nobody ultimately listens. Very few changes are made on the back of it.”

This person went on to say that “many people” had sought similar help and felt the same way, as far back as the post-launch period for 2042 and as recently as a few months ago.

Finding solutions

There have been a lot of stories like this about the games industry over the years, and it can feel relentlessly grim to keep reading them—especially when they’re coming alongside frequent news of layoffs, including at EA. Problems are exposed, but solutions don’t get as much attention.

In that spirit, let’s wrap up by listening to what some in the industry have said about what doing things better could look like—with the admitted caveat that these proposals are still not always common practice in AAA development.

“Build more slowly”

When Swen Vincke—studio head for Larian Studios and game director for the runaway success Baldur’s Gate 3—accepted an award at the Game Developers Conference, he took his moment on stage to express frustration at publishers like EA.

“I’ve been fighting publishers my entire life, and I keep on seeing the same, same, same mistakes over and over and over,” he said. “It’s always the quarterly profits. The only thing that matters are the numbers.”

After the awards show, he took to X to clarify his statements, saying, “This message was for those who try to double their revenue year after year. You don’t have to do that. Build more slowly and make your aim improving the state of the art, not squeezing out the last drop.”

A man stands on stage giving a speech

Swen Vincke giving a speech at the 2024 Game Developers Choice Awards. Credit: Game Developers Conference

In planning projects like Glacier, publicly traded companies often pursue huge wins—and there’s even more pressure to do so if a competing company has already achieved big success with similar titles.

But going bigger isn’t always the answer, and many in the industry believe the “one big game” strategy is increasingly nonviable.

In this attention economy?

There may not be enough player time or attention to go around, given the numerous games-as-a-service titles that are as large in scope as Call of Duty games or Fortnite. Despite the recent success of new entrant Marvel Rivals, there have been more big AAA live service shooter flops than wins in recent years.

Just last week, a data-based report by prominent games marketing newsletter GameDiscoverCo came to a sobering realization. “Genres like Arena Shooter, Battle Royale, and Hero Shooter look amazing from a revenue perspective. But there’s only 29 games in all of Steam’s history that have grossed >$1m in those subgenres,” wrote GameDiscoverCo’s Simon Carless.

It gets worse. “Only Naraka Bladepoint, Overwatch 2 & Marvel Rivals have grossed >$25m and launched since 2020 in those subgenres,” Carless added. (It’s important to clarify that he is just talking about Steam numbers here, though.) That’s a stark counterpoint to reports that Call of Duty has earned more than $30 billion in lifetime revenue.

Employees of game publishers and studios are deeply concerned about this. In a 2025 survey of professional game developers, “one of the biggest issues mentioned was market oversaturation, with many developers noting how tough it is to break through and build a sustainable player base.”

Despite those headwinds, publishers like EA are making big bets in well-established spaces rather than placing a variety of smaller bets in newer areas ripe for development. Some of the biggest recent multiplayer hits on Steam have come from smaller studios that used creative ideas, fresh genres, strong execution, and the luck (or foresight) of reaching the market at exactly the right time.

That might suggest that throwing huge teams and large budgets up against well-fortified competitors is an especially risky strategy—hence some of the anxiety from the EA developers I spoke with.

Working smarter, not harder

That anxiety has led to steadily growing unionization efforts across the industry. From QA workers at Bethesda to more wide-ranging unions at Blizzard and CD Projekt Red, there’s been more movement on this front in the past two or three years than there had been in decades beforehand.

Unionization isn’t a cure-all, and it comes with its own set of new challenges—but it does have the potential to shift some of the conversations toward more sustainable practices, so that’s another potential part of the solution.

Insomniac Games CEO Ted Price spoke authoritatively on sustainability and better work practices for the industry way back at 2021’s Develop:Brighton conference:

I think the default is to brute force the problem—in other words, to throw money or people at it, but that can actually cause more chaos and affect well-being, which goes against that balance. The harder and, in my opinion, more effective solution is to be more creative within constraints… In the stress of hectic production, we often feel we can’t take our foot off the gas pedal—but that’s often what it takes.

That means publishers and studios should plan for problems and work from accurate data about where the team is at, but it also means having a willingness to give their people more time, provided the capital is available to do so.

Giving people what they need to do their jobs sounds like a simple solution to a complex problem, but it was at the heart of every conversation I had about Glacier.

Most EA developers—including leaders who are beholden to lofty targets—want to make a great game. “At the end of the day, they’re all really good people and they work really hard and they really want to deliver a good product for their customer,” one former EA developer assured me as we ended our call.

As for making the necessary shifts toward sustainability in the industry, “It’s kind of in the best interest of making the best possible game for gamers,” explained another. “I hope to God that they still achieve what they need to achieve within the timelines that they have, for the sake of Battlefield as a game to actually meet the expectations of the gamers and for people to maintain their jobs.”

Samuel Axon is the editorial lead for tech and gaming coverage at Ars Technica. He covers AI, software development, gaming, entertainment, and mixed reality. He has been writing about gaming and technology for nearly two decades at Engadget, PC World, Mashable, Vice, Polygon, Wired, and others. He previously ran a marketing and PR agency in the gaming industry, led editorial for the TV network CBS, and worked on social media marketing strategy for Samsung Mobile at the creative agency SPCSHP. He also is an independent software and game developer for iOS, Windows, and other platforms, and he is a graduate of DePaul University, where he studied interactive media and software development.


Android 16 review: Post-hype


Competent, not captivating

The age of big, exciting Android updates is probably over.

Android 16 on a Pixel

Android 16 is currently only available for Pixel phones. Credit: Ryan Whitwam

Google recently released Android 16, which brings a smattering of new features for Pixel phones, with promises of additional updates down the road. The numbering scheme has not been consistent over the years, and as a result, Android 16 is actually the 36th major release in a lineage that stretches back nearly two decades. In 2008, we didn’t fully understand how smartphones would work, so there was a lot of trial and error. In 2025, the formula has been explored every which way. Today’s smartphones run mature software, and that means less innovation in each yearly release. That trend is exemplified and amplified by Google’s approach to Android 16.

The latest release is perhaps the most humdrum version of the platform yet, but don’t weep for Google. The company has been working toward this goal for years: a world where the average phone buyer doesn’t need to worry about Android version numbers.

A little fun up front

When you install Android 16 on one of Google’s Pixel phones, you may need to check the settings to convince yourself that the update succeeded. Visually, the changes are so minuscule that you’ll only notice them if you’re obsessive about how Android works. For example, Google changed the style of icons in the overview screen and added a few more options to the overview app menus. There are a lot of these minor style tweaks; we expect more when Google releases Material 3 Expressive, but that’s still some way off.

There are some thoughtful UI changes, but again, they’re very minor and you may not even notice them at first. For instance, Google’s predictive back gesture, which allows the previous screen to peek out from behind the currently displayed one, now works with button navigation.

Apps targeting the new API (level 36) will now default to using edge-to-edge rendering, which removes the navigation background to make apps more immersive. Android apps have long neglected larger form factors because Google itself was neglecting those devices. Since the Android 12L release a few years ago, Google has been attempting to right that wrong. Foldable phones have suffered from many of the same issues with app scaling that tablets have, but all big-screen Android devices will soon benefit from adaptive apps. Previously, apps could completely ignore the existence of large screens and render a phone-shaped UI on a large screen.
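
To see what that looks like from the developer side, here’s a minimal sketch using the AndroidX activity helper. The layout and view IDs are hypothetical; per the platform change described above, apps targeting API level 36 get this behavior by default rather than via the explicit opt-in shown here.

```kotlin
import android.os.Bundle
import androidx.activity.ComponentActivity
import androidx.activity.enableEdgeToEdge
import androidx.core.view.ViewCompat
import androidx.core.view.WindowInsetsCompat

class MainActivity : ComponentActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        // Draw behind the status and navigation bars; on Android 16
        // (targetSdk 36) this is the default rather than an opt-in.
        enableEdgeToEdge()
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main) // hypothetical layout

        // Pad the root view so content isn't hidden under system bars.
        ViewCompat.setOnApplyWindowInsetsListener(findViewById(R.id.root)) { v, insets ->
            val bars = insets.getInsets(WindowInsetsCompat.Type.systemBars())
            v.setPadding(bars.left, bars.top, bars.right, bars.bottom)
            insets
        }
    }
}
```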

Advanced Protection is a great addition to Android, even if it’s not the most riveting. Credit: Ryan Whitwam

In Android 16, apps will automatically adapt to larger screens, saving you from having to tinker with the forced aspect ratio tools built into Google and Samsung devices. Don’t confuse this with tablet-style interfaces, though. Just because an app fills the screen, it’s no guarantee that it will look good. Most of the apps we’ve run on the Pixel 9 Pro Fold are still using stretched phone interfaces that waste space. Developers need to make adjustments to properly take advantage of larger screens. Will they? That’s yet another aspect of Android 16 that we hope will come later.

Security has been a focus in many recent Android updates. While not the most sexy improvement, the addition of Advanced Protection in Android 16 could keep many people from getting hit with malware, and it makes it harder for government entities to capture your data. This feature blocks insecure 2G connections, websites lacking HTTPS, and exploits over USB. It disables sideloading of apps, too, which might make some users wary. However, if you know someone who isn’t tech savvy, you should encourage them to enable Advanced Protection when (and if) they get access to Android 16. This is a great feature that Google should have added years ago.

The changes to notifications will probably make the biggest impact on your daily life. Whether you’re using Android or iOS, notification spam is getting out of hand. Every app seems to want our attention, and notifications can really pile up. Android 16 introduces a solid quality-of-life improvement by bundling notifications from each app. While notification bundles were an option before, they were primarily used for messaging, and not all developers bothered. Now, the notification shade is less overwhelming, and it’s easy to expand each block to triage individual items.
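
The opt-in mechanism that already existed is notification groups; what changes in Android 16 is that the shade bundles an app’s notifications even when developers never bothered. A minimal sketch of the long-standing pattern, assuming a notification channel named “updates” has already been created (the channel ID and group key here are made up, and posting requires the POST_NOTIFICATIONS permission on Android 13 and up):

```kotlin
import android.content.Context
import androidx.core.app.NotificationCompat
import androidx.core.app.NotificationManagerCompat

private const val CHANNEL_ID = "updates"            // hypothetical, pre-created channel
private const val GROUP_KEY = "com.example.UPDATES" // hypothetical group key

fun postBundledNotifications(context: Context) {
    val manager = NotificationManagerCompat.from(context)

    // Individual notifications that share a group key...
    val message = NotificationCompat.Builder(context, CHANNEL_ID)
        .setSmallIcon(android.R.drawable.ic_dialog_email)
        .setContentTitle("New reply")
        .setContentText("One of several updates from this app")
        .setGroup(GROUP_KEY)
        .build()

    // ...collapse under a single expandable summary entry in the shade.
    val summary = NotificationCompat.Builder(context, CHANNEL_ID)
        .setSmallIcon(android.R.drawable.ic_dialog_email)
        .setStyle(NotificationCompat.InboxStyle().setSummaryText("2 new updates"))
        .setGroup(GROUP_KEY)
        .setGroupSummary(true)
        .build()

    manager.notify(2, message)
    manager.notify(1, summary)
}
```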

Android 16’s progress notifications are partially implemented in the first release. Credit: Ryan Whitwam

Google has also added a new category of notifications that can show progress, similar to a feature on the iPhone. The full notification will include a live updating bar that can tell you exactly when your Uber will show up, for example. These notifications will come first to delivery and rideshare apps, but none of them are working yet. You can get a preview of how these notifications will work with the Android 16 easter egg, which sends a little spaceship rocketing toward a distant planet.

The progress notifications will also have a large status bar chip with basic information visible at all times. Tapping on it will expand the full notification. However, this is also not implemented in the first release of Android 16. Yes, this is a recurring theme with Google’s new OS.
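
For a sense of what this builds on: the closest shipping primitive today is the classic determinate progress notification, sketched below with made-up channel ID and strings. Android 16’s new progress-centric notifications layer live updates and the status bar chip on top of this basic model.

```kotlin
import android.content.Context
import androidx.core.app.NotificationCompat

// Classic determinate progress bar; Android 16's richer progress
// notifications extend this idea with live updates and a chip.
fun rideProgressNotification(context: Context, percent: Int) =
    NotificationCompat.Builder(context, "rides") // hypothetical channel
        .setSmallIcon(android.R.drawable.stat_sys_download)
        .setContentTitle("Driver en route")
        .setContentText("$percent% of the way to you")
        .setProgress(100, percent, false) // max, current, indeterminate
        .setOngoing(true)                 // not dismissible while active
        .build()
```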

More fun still to come

You may notice that none of the things we’ve discussed in Android 16 are exactly riveting—better security features and cleaner notifications are nice to have, but this is hardly a groundbreaking update. It might have been more exciting were it not for the revamped release schedule. This initial release isn’t even the complete Android 16: there will be a second Android 16 update later in the year, and some of the most interesting features aren’t arriving as part of either one.

Traditionally, Google has released new versions of Android in the fall, around the time new Pixel phones arrive. Android 15, for example, began its rollout in October 2024. Just eight months later, we’re on to Android 16. This is the first cycle in which Google will split its new version into two updates. Going forward, the bigger update will arrive in Q2, and the smaller one, which includes API and feature tweaks, will come at the end of the year.

Google has said the stylish but divisive Material 3 Expressive UI and the desktop windowing feature will come later. They’re currently in testing with the latest beta for Android 16 QPR1, which will become a Pixel Drop in September. It’s easy to imagine that with a single fall Android 16 release, both of these changes would have been included.

In the coming months, we expect to see some Google apps updated with support for Material 3, but the changes will be minimal unless you’re using a phone that runs Google’s Android theme. For all intents and purposes, that means a Pixel. Motorola has traditionally hewed closely to Google’s interface, while Samsung, OnePlus, and others forged their own paths. But even Moto has been diverging more as it focuses on AI. It’s possible that Google’s big UI shakeup will only affect Pixel users.

As for desktop windowing, that may have limited impact, too. On-device windowing will only be supported on tablets—even tablet-style foldables will be left out. We’ve asked Google to explain this decision and will report back if we get more details. Non-tablet devices will be able to project a desktop-style interface on an external display via USB video-out, but the feature won’t be available universally. Google tells Ars that it’s up to OEMs to support this feature. So even a phone that has video-out over USB may not have desktop windowing. Again, Pixels may be the best (or only) way to get Android’s new desktop mode.

The end of version numbers

There really isn’t much more to say about Android 16 as it currently exists. This update isn’t flashy, but it lays important groundwork for the future. The addition of Material 3 Expressive will add some of the gravitas we expect from major version bumps, but it’s important to remember that this is just Google’s take on Android—other companies have their own software interests, mostly revolving around AI. We’ll have to wait to see what Samsung, OnePlus, and others do with the first Android 16 release. The underlying software has been released in the Android Open Source Project (AOSP), but it will be a few months before other OEMs have updates.

In some ways, boring updates are exactly what Google has long wanted from Android. Consider the era when Android updates were undeniably exciting—a time when the addition of screenshots could be a headlining feature (Android 4.0 Ice Cream Sandwich) or when Google finally figured out how to keep runaway apps from killing your battery (Android 6.0 Marshmallow). But there was a problem with these big tentpole updates: Not everyone got them, and they were salty about it.

During the era of rapid software improvement, it took the better part of a year (or longer!) for a company like Samsung or LG to deploy new Android updates. Google would announce a laundry list of cool features, but only the tiny sliver of people using Nexus (and later Pixel) phones would see them. By the time a Samsung Galaxy user had the new version, it was time for Google to release another yearly update.

This “fragmentation” issue was a huge headache for Google, leading it to implement numerous platform changes over the years to take the pressure off its partners and app developers. There were simple tweaks like adding important apps, including Maps and the keyboard (later Gboard), to the Play Store so they could be updated regularly. On the technical side, initiatives like Project Mainline made the platform more modular so features could be added and improved outside of major updates. Google has also meticulously moved features into Play Services, which can deliver system-level changes without an over-the-air update (although there are drawbacks to that).

Android version numbers hardly matter anymore—it’s just Android. Credit: Ryan Whitwam

The overarching story of Android has been a retreat from monolithic updates, and that means there’s less to get excited about when a new version appears. Rather than releasing a big update rife with changes, Google has shown a preference for rolling out features via the Play Store and Play Services to the entire Android ecosystem. Experiences like Play Protect anti-malware, Google Play Games, Google Cast, Find My Device, COVID-19 exposure alerts, Quick Share, and myriad more were released to almost all Google-certified Android devices without system updates.

As more features arrive in dribs and drabs via Play Services and Pixel Drops, the numbered version changes are less important. People used to complain about missing out on the tentpole updates, but it’s quieter when big features are decoupled from version numbers. And that’s where we are—Android 15 or Android 16—the number is no longer important. You won’t notice a real difference, but the upshot is that most phones get new features faster than they once did. That was the cost to fix fragmentation.

Boring updates aren’t just a function of rearranging features. Even if all the promised upgrades were here now, Android 16 would still barely move the needle. Phones are now mature products with established usage paradigms. It’s been almost 20 years since the age of touchscreen smartphones began, and we’ve figured out how these things should work. It’s not just Android updates settling into prosaic predictability—Apple is running low on paradigm shifts, too. The release of iOS 26 will add some minor improvements to a few apps and make the interface more transparent with the controversial “Liquid Glass” UI. And that’s it.

Until there’s a marked change in form factors or capability, these flat glass slabs will look and work more or less as they do now (with a lot more AI slop, whether you like it or not). If you have a recent non-Pixel Android device, you’ll probably get Android 16 in the coming months, but it won’t change the way you use your phone.

Ryan Whitwam is a senior technology reporter at Ars Technica, covering the ways Google, AI, and mobile technology continue to change the world. Over his 20-year career, he’s written for Android Police, ExtremeTech, Wirecutter, NY Times, and more. He has reviewed more phones than most people will ever own. You can follow him on Bluesky, where you will see photos of his dozens of mechanical keyboards.


An exceedingly rare asteroid flyby will happen soon, but NASA may be left on the sidelines


“Nature is handing us an incredibly rare experiment.”

An illustration of the OSIRIS-Apex mission at Apophis. Credit: NASA

A little less than four years from now, a killer asteroid will narrowly fly past planet Earth. This will be a celestial event visible around the world—for a few weeks, Apophis will shine among the brightest objects in the night sky.

The near miss by the large Apophis asteroid in April 2029 offers NASA a golden—and exceedingly rare—opportunity to observe an object like this up close. Critically, the interaction between Apophis and Earth’s gravitational pull will offer scientists an unprecedented chance to study the interior of an asteroid.

This is fascinating for planetary science, but it also has serious implications for planetary defense. In the future, were such an asteroid on course to strike Earth, an effective plan to deflect it would depend on knowing what the interior looks like.

“This is a remarkable opportunity,” said Bobby Braun, who leads space exploration for the Johns Hopkins Applied Physics Laboratory, in an interview. “From a probability standpoint, there’s not going to be another chance to study a killer asteroid like this for thousands of years. Sooner or later, we’re going to need this knowledge.”

But we may not get it.

NASA has some options for tracking Apophis during its flyby. However, the most promising of these, a mission named OSIRIS-Apex that breathes new life into an old spacecraft that otherwise would drift into oblivion, is slated for cancellation by the Trump White House’s budget for fiscal year 2026.

Other choices, including dragging the dual Janus space probes out of storage and other concepts that were submitted to NASA a year ago as part of a call for ideas, have already been rejected or simply left on the table. As a result, NASA currently has no plans to study what will be the most important asteroid encounter since the formation of the space agency.

“The world is watching,” said Richard Binzel, an asteroid expert at the Massachusetts Institute of Technology. “NASA needs to step up and do their job.”

But will they?

A short history of planetary defense

For decades, nearly every public survey asking what NASA should work on has rated planetary defense at or near the very top of the space agency’s priorities. Yet for a long time, no part of NASA actually focused on finding killer asteroids or developing the technology to deflect them.

In authorization bills dating back to 2005, Congress began mandating that NASA “detect, track, catalog, and characterize” near-Earth objects that were 140 meters in diameter or larger. Congress established a goal of finding 90 percent of these by the year 2020. (We’ve blown past that deadline, obviously.)

NASA had been informally studying asteroids and comets for decades but did not focus on planetary defense until 2016, when the space agency established the Planetary Defense Coordination Office. In the decade since, NASA has made some progress, identifying more than 26,000 near-Earth objects, which are defined as asteroids and comets that come within 30 million miles of our planet’s orbit.

Moreover, NASA has finally funded a space mission designed specifically to look for near-Earth threats, NEO Surveyor, a space telescope with the goal of “finding asteroids before they find us.” The $1.2 billion mission is due to launch no earlier than September 2027.

NASA also funded the DART mission, which launched in 2021 and impacted a 160-meter asteroid named Dimorphos a year later to demonstrate the ability to make a minor deflection.

But in a report published this week, NASA’s Office of Inspector General found that despite these advances, the space agency’s approach to planetary defense still faces some significant challenges. These include a lack of resources, a need for better strategic planning, and competition with NASA’s more established science programs for limited funding.

A comprehensive plan to address planetary defense must include two elements, said Ed Lu, a former NASA astronaut who co-founded the B612 Foundation to protect Earth from asteroid impacts.

The first of these is the finding and detection of asteroid threats. That is being addressed both by the forthcoming NEO Surveyor and the recently completed Vera C. Rubin Observatory, which is likely to find thousands of new near-Earth threats. The challenge in the coming years will be processing all of this data, calculating orbits, and identifying threats. Lu said NASA must do a better job of being transparent in how it makes these calculations.

The second thing Lu urged NASA to do is develop a follow-up mission to DART. It was successful, he said, but DART was just an initial demonstration. Such a capability needs to be tested against a larger asteroid with different properties.

An asteroid that might look a lot like Apophis.

About Apophis

Astronomers using a telescope in Arizona found Apophis in 2004, and they were evidently fans of the television series Stargate SG-1, in which a primary villain who threatens civilization on Earth is named Apophis.

Because of its orbit, Apophis comes near Earth about every eight years. It is fairly large, about 370 meters across. This is not big enough to wipe out civilization on Earth, but it would cause devastating consequences across a large region, delivering roughly 300 times as much impact energy as the 1908 Tunguska event over Siberia. It will miss Earth by about 31,600 km (19,600 miles) on April 13, 2029.

“We like to say that’s because nature has a sense of humor,” said Binzel, the MIT asteroid scientist, of this date.

Astronomers estimate that an asteroid this large comes this close to Earth only about once every 7,500 years. It also appears to be a stony, non-metallic type of asteroid known as an ordinary chondrite. This is the most common type of asteroid in the Solar System.
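
A back-of-the-envelope check on that energy comparison (my own illustrative numbers, not figures from NASA or the sources here): assuming a 370-meter body, an ordinary-chondrite bulk density of roughly 2,600 kg/m³, and a typical impact speed of about 12.6 km/s, the kinetic energy works out to:

```latex
% Assumed values: rho = 2600 kg/m^3, r = 185 m, v = 12.6 km/s
m \approx \rho \cdot \tfrac{4}{3}\pi r^{3}
  \approx 2600 \cdot \tfrac{4}{3}\pi\,(185)^{3}
  \approx 6.9\times10^{10}\ \mathrm{kg}

E = \tfrac{1}{2} m v^{2}
  \approx \tfrac{1}{2}\,(6.9\times10^{10})\,(1.26\times10^{4})^{2}
  \approx 5.5\times10^{18}\ \mathrm{J}
  \approx 1300\ \mathrm{Mt\ TNT}
```

Set against common Tunguska estimates of a few megatons, that ratio lands in the few-hundreds range quoted above.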

Areas of the planet that will be able to see Apophis at its closest approach to Earth in April 2029. Credit: Rick Binzel

All of this is rather convenient for scientists hoping to understand more about potential asteroids that might pose a serious threat to the planet.

The real cherry on top with the forthcoming encounter is that Apophis will be perturbed by Earth’s gravitational pull.

“Nature is handing us an incredibly rare experiment where the Earth’s gravity is going to tug and stretch this asteroid,” Binzel said. “By seeing how the asteroid responds, we’ll know how it is put together, and knowing how an asteroid is put together is maybe the most important information we could have if humanity ever faces an asteroid threat.”

In nearly seven decades of spaceflight, humans have only ever probed the interior of three celestial bodies: the Earth, the Moon, and Mars. We’re now being offered the opportunity to probe a fourth, right on our doorstep.

But time is ticking.

Chasing Apophis

On paper, at least, NASA has a plan to rendezvous with Apophis. About three years ago, after a senior-level review, NASA extended the mission of the OSIRIS-REx spacecraft to rendezvous with Apophis.

As you may recall, this oddly named spacecraft collected a sample from another asteroid, Bennu, in October 2020. Afterward, a small return capsule departed from the main spacecraft and made its way back to Earth. Since then, an $800 million spacecraft specifically designed to fly near and touch an asteroid has been chilling in space.

So it made sense when NASA decided to fire up the mission, newly rechristened OSIRIS-Apex, and re-vector it toward Apophis. It has been happily flying toward such a rendezvous for a few years. The plan was for Apex to catch up to Apophis shortly after its encounter with Earth and study it for about 18 months.

“The most cost-efficient thing you can do in spaceflight is continue with a healthy spacecraft that is already operating in space,” Binzel said.

And that was the plan until the Trump administration released its budget proposal for fiscal year 2026. In its detailed budget information, the White House provided no real rationale for the cancellation, simply stating, “Operating missions that have completed their prime missions (New Horizons and Juno) and the follow-on mission to OSIRIS-REx, OSIRIS-Apophis Explorer, are eliminated.”

It’s unclear how much this would actually save. Apex is a pittance in NASA’s overall budget: the operating funds to keep the mission alive in 2024, for example, were $14.5 million, and annual costs would be similar through the end of the decade. That is less than one-thousandth of NASA’s budget.

“Apex is already on its way to reach Apophis, and to turn it off would be an incredible waste of resources,” Binzel said.

Congress, of course, ultimately sets the budget. It will have the final say. But it’s clear that NASA’s primary mission to study a once-in-a-lifetime asteroid is at serious risk.

So what are the alternatives?

Going international and into the private sector

NASA was not the only space agency targeting Apophis. Nancy Chabot, a planetary scientist at the Johns Hopkins University Applied Physics Laboratory, has been closely tracking other approaches.

The European Space Agency has proposed a mission named Ramses to rendezvous with the asteroid and accompany it as it flies by Earth. This mission would be valuable, conducting a thorough before-and-after survey of the asteroid’s shape, surface, orbit, rotation, and orientation.

It would need to launch by April 2028. Recognizing this short deadline, the space agency has directed European scientists and engineers to begin preliminary work on the mission. But a final decision to proceed and commit to the mission will not be made before the space agency’s ministerial meeting in November.

Artist’s impression of ESA’s Rapid Apophis Mission for Space Safety (Ramses). Credit: ESA

This is no sure thing. For example, Chabot said, in 2016, the Asteroid Impact Mission was expected to advance before European ministers decided not to fund it. It is also not certain that the Ramses mission would be ready to fly in less than three years, a short timeline for planetary science missions.

Japan’s space agency, JAXA, is also planning an asteroid mission named Destiny+ whose primary goal is flying to an asteroid named 3200 Phaethon. The mission has been delayed multiple times, so its launch is now being timed to permit a single flyby of Apophis in February 2029 on the way to its destination. While this mission is designed to deliver quality science, a flyby provides limited data. It is also unclear how close Destiny+ will actually get to Apophis, Chabot said.

There are also myriad other concepts, commercial and otherwise, to characterize Apophis before, during, and after its encounter with Earth. Ideally, scientists say, a mission would fly to the asteroid before April 2029 and scatter seismometers on the surface to collect data.

But all of this would require significant funding. If not from NASA, who? The uncertain future of NASA’s support for Apex has led some scientists to think about philanthropy.

For example, NASA’s Janus spacecraft have been mothballed for a couple of years, but they could be used for observational purposes if they had—say—a Falcon 9 to launch them at the appropriate time.

A new, private reconnaissance mission could probably be developed for $250 million or less, industry officials told Ars. There is still enough time, barely, for a private group to work with scientists to develop instrumentation that could be added to an off-the-shelf spacecraft bus to get out to Apophis before its Earth encounter.

Private astronaut Jared Isaacman, who has recently indicated a willingness to support robotic exploration in strategic circumstances, confirmed to Ars that several people have reached out about his interest in financially supporting an Apophis mission. “I would say that I’m in info-gathering mode and not really rushing into anything,” Isaacman said.

The problem is that, at this very moment, Apophis is rushing this way.


Eric Berger is the senior space editor at Ars Technica, covering everything from astronomy to private space to NASA policy, and author of two books: Liftoff, about the rise of SpaceX; and Reentry, on the development of the Falcon 9 rocket and Dragon. A certified meteorologist, Eric lives in Houston.

An exceedingly rare asteroid flyby will happen soon, but NASA may be left on the sidelines Read More »

the-axion-may-help-clean-up-the-messy-business-of-dark-matter

The axion may help clean up the messy business of dark matter


We haven’t found evidence of the theoretical particle, but it’s still worth investigating.

In recent years, a curious hypothetical particle called the axion, invented to address challenging problems with the strong nuclear force, has emerged as a leading candidate to explain dark matter. Although the potential for axions to explain dark matter has been around for decades, cosmologists have only recently begun to seriously search for them. Not only might they be able to resolve some issues with older hypotheses about dark matter, but they also offer a dizzying array of promising avenues for finding them.

But before digging into what the axion could be and why it’s so useful, we have to explore why the vast majority of physicists, astronomers, and cosmologists accept the evidence that dark matter exists and that it’s some new kind of particle. While it’s easy to dismiss the dark matter hypothesis as some sort of modern-day epicycle, the reality is much more complex (to be fair to epicycles, it was an excellent idea that fit the data extremely well for many centuries).

The short version is that nothing in the Universe adds up.

We have many methods available to measure the mass of large objects like galaxies and clusters. We also have various methods to assess the effects of matter in the Universe, like the details of the cosmic microwave background or the evolution of the cosmic web. There are two broad categories: methods that rely solely on estimating the amount of light-emitting matter and methods that estimate the total amount of matter, whether it’s visible or not.

For example, if you take a picture of a generic galaxy, you’ll see that most of the light-emitting matter is concentrated in the core. But when you measure the rotation rate of the galaxy and use that to estimate the total amount of matter, you get a much larger number, plus some hints that it doesn’t perfectly overlap with the light-emitting stuff. The same thing happens for clusters of galaxies—the dynamics of galaxies within a cluster suggest the presence of much more matter than what we can see, and the two types of matter don’t always align. When we use gravitational lensing to measure a cluster’s contents, we again see evidence for much more matter than is plainly visible.
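To make that concrete, here’s a minimal back-of-the-envelope sketch of the rotation-curve argument. For a star on a circular orbit, setting gravity equal to centripetal acceleration gives an enclosed mass of roughly v²r/G; the speed and radius below are illustrative round numbers for a Milky Way-like galaxy, not measurements.

```python
# Enclosed mass from a flat rotation curve: M(r) ~ v^2 * r / G.
# Round illustrative numbers for a Milky Way-like galaxy, not real data.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
KPC = 3.086e19     # one kiloparsec, m

v = 220e3          # orbital speed, m/s (stays roughly flat far from the core)
r = 50 * KPC       # a radius well beyond most of the visible starlight

M_enclosed = v**2 * r / G
print(f"enclosed mass ~ {M_enclosed / M_SUN:.1e} solar masses")
# ~5.6e11 solar masses, roughly ten times the mass in visible stars
```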

The tiny variations in the cosmic microwave background tell us about the influence of both matter that interacts with light and matter that doesn’t. It clearly shows that some invisible component dominated the early Universe. When we look at the large-scale structure, invisible matter rules the day. Matter that doesn’t interact with light can form structures much more quickly than matter that gets tangled up by interacting with itself. Without invisible matter, galaxies like the Milky Way can’t form quickly enough to match observations of the early Universe.

The calculations of Big Bang nucleosynthesis, which correctly predict the abundances of hydrogen and helium in the Universe, put strict constraints on how much light-emitting matter there can be, and that number simply isn’t large enough to accommodate all these disparate results.

Across cosmic scales in time and space, the evidence just piles up: There’s more stuff out there than meets the eye, and it can’t simply be dim-but-otherwise-regular matter.

Weakness of WIMPs

Since pioneering astronomer Vera Rubin first revealed dark matter in a big way in the 1970s, the astronomical community has tried every idea it could think of to explain these observations. One tantalizing possibility is that dark matter is the wrong approach entirely; instead, we’re misunderstanding gravity itself. But so far, half a century later, all attempts to modify gravity ultimately fail one observational test or another. In fact, the most popular modified gravity theory, known as MOND, still requires the existence of dark matter, just less of it.

As the evidence piled up for dark matter in the 1980s and ’90s, astronomers began to favor a particular explanation known as WIMPs, for weakly interacting massive particles. WIMPs weren’t just made up on the spot. They were motivated by particle physics and our attempts to create theories beyond the Standard Model. Many extensions to the Standard Model predicted the existence of WIMP-like particles that could be made in abundance in the early Universe, generating a population of heavy-ish particles that remained largely in the cosmic background.

WIMPs seemed like a good idea, as they could both explain the dark matter problem and bring us to a new understanding of fundamental physics. The idea is that we are swimming in an invisible sea of dark matter particles that almost always simply pass through us undetected. But every once in a while, a WIMP should interact via the weak nuclear force (hence the origin of its name) and give off a shower of byproducts. One problem: We needed to detect one of these rare interactions. So experiments sprang up around the world to catch an elusive dark matter candidate.

With amazing names like CRESST, SNOLAB, and XENON, these experiments have spent years searching for a WIMP to no avail. They’re not an outright failure, though; instead, with every passing year, we know more and more about what the WIMP can’t be—what mass ranges and interaction strengths are now excluded.

By now, that list of what the WIMP can’t be is rather long, and large regions within the space of possibilities are now hard-and-fast ruled out.

OK, that’s fine. I mean, it’s a huge bummer that our first best guess didn’t pan out, but nature is under no obligation to make this easy for us. Maybe the dark matter isn’t a WIMP at all.

More entities are sitting around the particle physics attic that we might be able to use to explain this deep cosmic mystery. And one of those hypothetical particles is called the axion.

Cleaning up with axions

It was the late 1970s, and physicist Frank Wilczek was shopping for laundry detergent. He found one brand standing out among the bottles: Axion. He thought that would make an excellent name for a particle.

He was right.

For decades, physicists had been troubled by a little detail of the theory used to explain the strong nuclear force, known as quantum chromodynamics. By all measurements, that force obeys charge-parity symmetry, which means if you take an interaction, flip all the charges around, and run it in a mirror, you’ll get the same result. But quantum chromodynamics doesn’t enforce that symmetry on its own.

It seemed to be a rather fine-tuned state of affairs, with the strong force unnaturally maintaining a symmetry when there was nothing in the theory to explain why.

In 1977, Roberto Peccei and Helen Quinn discovered an elegant solution: introducing a new field into the Universe could naturally enforce charge-parity symmetry in the equations of quantum chromodynamics. The next year, Wilczek and Steven Weinberg independently realized that this new field would imply the existence of a particle.

The axion.

Dark matter was just coming on the cosmic scene. Axions weren’t invented to solve that problem, but physicists very quickly realized that the complex physics of the early Universe could absolutely flood the cosmos with axions. What’s more, they would largely ignore regular matter and sit quietly in the background. In other words, the axion was an excellent dark matter candidate.

But axions were pushed aside as the WIMPs hypothesis gained more steam. Back-of-the-envelope calculations showed that the natural mass range of the WIMP would precisely match the abundances needed to explain the amount of dark matter in the Universe, with no other fine-tuning or adjustments required.

Never ones to let the cosmologists get in the way of a good time, the particle physics community kept up interest in the axion, finding different variations on the particle and devising clever experiments to see if the axion existed. One experiment requires nothing more than a gigantic magnet since, in an extremely strong magnetic field, axions can spontaneously convert into photons.

To date, no hard evidence for the axion has shown up. But WIMPs have proven to be elusive, so cosmologists are showing more love to the axion and identifying surprising ways that it might be found.

A sloshy Universe

Axions are tiny, even for subatomic particles. The lightest known particle is the neutrino, which weighs no more than 0.086 electron-volts (or eV). Compare that to, say, the electron, which weighs over half a million eV. The exact mass of the axion isn’t known, and there are many models and versions of the particle, but it can have a mass all the way down to a trillionth of an eV… and even lower.

In fact, axions belong to a much broader class of “ultra-light” dark matter particle candidates, which can have masses down to 10^-24 eV. This is multiple billions of times lighter than the WIMPs—and indeed most of the particles of the Standard Model.

That means axions and their friends act nothing like most of the particles of the Standard Model.

First off, it may not even be appropriate to refer to them as particles. They have so little mass that their de Broglie wavelength—the size of the quantum wave associated with every particle—can stretch to macroscopic proportions. In some cases, this wavelength can be a few meters across. In others, it’s comparable to a star or a solar system. In still others, a single axion “particle” can stretch across an entire galaxy.

In this view, the individual axion particles would be subsumed into a larger quantum wave, like an ocean of dark matter so large and vast that it doesn’t make sense to talk about its individual components.
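For a sense of scale, here’s a quick sketch of that de Broglie wavelength, λ = h/(mv), for two candidate masses. The ~200 km/s velocity is a typical speed for matter bound in a galactic halo, and both masses are illustrative picks from the ranges quoted above, not measured axion properties.

```python
# de Broglie wavelength lambda = h / (m * v) for ultra-light particles.
H = 6.626e-34         # Planck constant, J*s
EV_TO_KG = 1.783e-36  # 1 eV/c^2 expressed in kilograms

def de_broglie(mass_ev, velocity=200e3):
    """Wavelength in meters for a particle of the given mass (in eV)."""
    return H / (mass_ev * EV_TO_KG * velocity)

for mass_ev in (1e-12, 1e-22):
    print(f"m = {mass_ev:.0e} eV -> lambda ~ {de_broglie(mass_ev):.1e} m")
# 1e-12 eV gives ~2e9 m, a stellar scale;
# 1e-22 eV gives ~2e19 m, roughly a kiloparsec -- a galactic-core scale.
```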

And because axions are bosons, they can synchronize their quantum wave nature, becoming a distinct state of matter: a Bose-Einstein condensate. In a Bose-Einstein condensate, most of the particles share the same low-energy state. When this happens, the de Broglie wavelength is larger than the average separation between the particles, and the waves of the individual particles all add up together, creating, in essence, a super-particle.

This way, we may get axion “stars”—clumps of axions acting as a single particle. Some of these axion stars may be a few thousand kilometers across, wandering through interstellar space. Others may be the size of galactic cores, which might explain an issue with the traditional WIMP picture.

The best description of dark matter in general is that it is “cold,” meaning that the individual particles do not move fast compared to the speed of light. This allows them to gravitationally interact and form the seeds of structures like galaxies and clusters. But this process is a bit too efficient. According to simulations, cold dark matter tends to form more small, sub-galactic clumps than we observe, and it tends to make the cores of galaxies much, much denser than we see.

Axions, and ultra-light dark matter in general, can provide a solution here because they would operate in two modes. At large scales, they can act like regular cold dark matter. But inside galaxies, they can condense, forming tight clumps. Critically, these clumps have uniform densities within them. This smooths out the distribution of axions within galaxies, preventing the formation of smaller clumps and ultra-dense cores.

A messy affair

Over the decades, astronomers and physicists have found an astounding variety of ways that axions might reveal their presence in the Universe. Because of their curious ability to transmute into photons in the presence of strong magnetic fields, any place that features strong fields—think neutron stars or even the solar corona—could produce extra radiation due to axions. That makes them excellent hunting grounds for the particles.

Axion stars—also sometimes known provocatively as dark stars—would be all but invisible under most circumstances. That is, until they destabilize in a cascading chain reaction of axion-to-photon conversion and blow themselves up.

Even the light from distant galaxies could betray the existence of axions. If they exist in a dense swarm surrounding a galaxy, their conversion to photons will contribute to the galaxy’s light, creating a signal that the James Webb Space Telescope can pick up.

To date, despite all these ideas, there hasn’t been a single shred of solid evidence for the existence of axions, which naturally drops them down a peg or two on the credibility scale. But that doesn’t mean that axions aren’t worth investigating further. The experiments conducted so far only place limits on what properties they might have; there’s still plenty of room for viable axion and axion-like candidates, unlike their WIMPy cousins.

There’s definitely something funny going on with the Universe. The dark matter hypothesis—that there is a large, invisible component to matter in the Universe—isn’t an especially elegant idea, but it’s the one that best fits the widest range of available evidence. For a while, we thought we knew what the identity of that matter might be, and we spent decades (and small fortunes) on that search.

But while WIMPs were the mainstay hypothesis, that didn’t snuff out alternative paths. Dozens of researchers have investigated modified forms of gravity, with equally little success. And a small cadre has kept the axion flame alive. It’s a good thing, too, since their obscure explorations of the corners of particle physics laid the groundwork to flesh out axions into a viable competitor to WIMPs.

No, we haven’t found any axions. And we still don’t know what the dark matter is. But it’s only by pushing forward—advancing new ideas, testing them against the reality of observations, and, when they fail, trying again—that we will come to a new understanding. Axions may or may not be dark matter; the best we can say is that they are promising. But who wouldn’t want to live in a Universe filled with dark stars, invisible Bose-Einstein condensates, and strange new particles?


The axion may help clean up the messy business of dark matter Read More »

curated-realities:-an-ai-film-festival-and-the-future-of-human-expression

Curated realities: An AI film festival and the future of human expression


We saw 10 AI films and interviewed Runway’s CEO as well as Hollywood pros.

A still from Total Pixel Space, the Grand Prix winner at AIFF 2025.

Last week, I attended a film festival dedicated to shorts made using generative AI. Dubbed AIFF 2025, it was an event precariously balancing between two different worlds.

The festival was hosted by Runway, a company that produces models and tools for generating images and videos. In panels and press briefings, a curated lineup of industry professionals made the case for Hollywood to embrace AI tools. In private meetings with industry professionals, though, I gained a strong sense that there is already a widening philosophical divide within the film and television business.

I also interviewed Runway CEO Cristóbal Valenzuela about the tightrope he walks as he pitches his products to an industry that has deeply divided feelings about what role AI will have in its future.

To unpack all this, it makes sense to start with the films, partly because the film that was chosen as the festival’s top prize winner says a lot about the issues at hand.

A festival of oddities and profundities

Since this was the first time the festival has been open to the public, the crowd was a diverse mix: AI tech enthusiasts, working industry creatives, and folks who enjoy movies and who were curious about what they’d see—as well as quite a few people who fit into all three groups.

The scene at the entrance to the theater at AIFF 2025 in Santa Monica, California.

The films shown were all short, and most would be more at home at an art film fest than something more mainstream. Some shorts featured an animated aesthetic (including one inspired by anime) and some presented as live action. There was even a documentary of sorts. The films could be made entirely with Runway or other AI tools, or those tools could simply be a key part of a stack that also includes more traditional filmmaking methods.

Many of these shorts were quite weird. Most of us have seen by now that AI video-generation tools excel at producing surreal and distorted imagery—sometimes whether the person prompting the tool wants that or not. Several of these films leaned into that limitation, treating it as a strength.

Representing that camp was Vallée Duhamel’s Fragments of Nowhere, which visually explored the notion of multiple dimensions bleeding into one another. Cars morphed into the sides of houses, and humanoid figures, purported to be inter-dimensional travelers, moved in ways that defied anatomy. While I found this film visually compelling at times, I wasn’t seeing much in it that I hadn’t already seen from dreamcore or horror AI video TikTok creators like GLUMLOT or SinRostroz in recent years.

More compelling were shorts that used this propensity for oddity to generate imagery that was curated and thematically tied to some aspect of human experience or identity. For example, More Tears than Harm by Herinarivo Rakotomanana was a rotoscope animation-style “sensory collage of childhood memories” of growing up in Madagascar. Its specificity and consistent styling lent it a credibility that Fragments of Nowhere didn’t achieve. I also enjoyed Riccardo Fusetti’s Editorial on this front.

More Tears Than Harm, an unusual animated film at AIFF 2025.

Among the 10 films in the festival, two clearly stood above the others in my impressions—and they ended up being the Grand Prix and Gold prize winners. (The judging panel included filmmakers Gaspar Noé and Harmony Korine, Tribeca Enterprises CEO Jane Rosenthal, IMAX head of post and image capture Bruce Markoe, Lionsgate VFX SVP Brianna Domont, Nvidia developer relations lead Richard Kerris, and Runway CEO Cristóbal Valenzuela, among others).

Runner-up Jailbird was the aforementioned quasi-documentary. Directed by Andrew Salter, it was a brief piece that introduced viewers to a program in the UK that places chickens in human prisons as companion animals, to positive effect. Why make that film with AI, you might ask? Well, AI was used to depict the experience from the chicken’s point of view, achieving shots that wouldn’t otherwise be doable for a small-budget film. The crowd loved it.

Jailbird, the runner-up at AIFF 2025.

Then there was the Grand Prix winner, Jacob Adler’s Total Pixel Space, which was, among other things, a philosophical defense of the very idea of AI art. You can watch Total Pixel Space on YouTube right now, unlike some of the other films. I found it strangely moving, even as I saw its selection as the festival’s top winner with some cynicism. Of course they’d pick that one, I thought, although I agreed it was the most interesting of the lot.

Total Pixel Space, the Grand Prix winner at AIFF 2025.

Total Pixel Space

Even though it risked navel-gazing and self-congratulation in this venue, Total Pixel Space was filled with compelling imagery that matched the themes, and it touched on some genuinely interesting ideas—at times, it seemed almost profound, didactic as it was.

“How many images can possibly exist?” the film’s narrator asks. To answer that, it explains the concept of total pixel space, which actually reflects how image-generation tools work:

Pixels are the building blocks of digital images—tiny tiles forming a mosaic. Each pixel is defined by numbers representing color and position. Therefore, any digital image can be represented as a sequence of numbers…

Just as we don’t need to write down every number between zero and one to prove they exist, we don’t need to generate every possible image to prove they exist. Their existence is guaranteed by the mathematics that defines them… Every frame of every possible film exists as coordinates… To deny this would be to deny the existence of numbers themselves.

The nine-minute film demonstrates that the number of possible images or films is greater than the number of atoms in the universe and argues that photographers and filmmakers may be seen as discovering images that already exist in the possibility space rather than creating something new.
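The counting itself is simple enough to check. Here’s a short sketch using the commonly cited ~10^80 estimate for atoms in the observable universe; the resolutions and bit depth below are arbitrary choices for illustration.

```python
import math

# Each pixel of a 24-bit RGB image takes one of 2**24 values, so a
# width-by-height image has (2**24)**(width*height) possibilities.
def log10_image_count(width, height, bits_per_pixel=24):
    """log10 of the number of distinct images at this resolution."""
    return width * height * bits_per_pixel * math.log10(2)

for w, h in ((10, 10), (1920, 1080)):
    print(f"{w}x{h}: ~10^{log10_image_count(w, h):,.0f} possible images")

# Even a 10x10 image allows ~10^722 possibilities, dwarfing the ~10^80
# atoms in the observable universe; a single 1080p frame allows roughly
# 10^15,000,000 possible images.
```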

Within that framework, it’s easy to argue that generative AI is just another way for artists to “discover” images.

The balancing act

“We are all—and I include myself in that group as well—obsessed with technology, and we keep chatting about models and data sets and training and capabilities,” Runway CEO Cristóbal Valenzuela said to me when we spoke the next morning. “But if you look back and take a minute, the festival was celebrating filmmakers and artists.”

I admitted that I found myself moved by Total Pixel Space’s articulations. “The winner would never have thought of himself as a filmmaker, and he made a film that made you feel something,” Valenzuela responded. “I feel that’s very powerful. And the reason he could do it was because he had access to something that just wasn’t possible a couple of months ago.”

First-time and outsider filmmakers were the focus of AIFF 2025, but Runway works with established studios, too—and those relationships have an inherent tension.

The company has signed deals with companies like Lionsgate and AMC Networks. In some cases, it trains on data provided by those companies; in others, it embeds within them to try to develop tools that fit how they already work. That’s not something competitors like OpenAI are doing yet, so that, combined with a head start in video generation, has allowed Runway to grow and stay competitive so far.

“We go directly into the companies, and we have teams of creatives that are working alongside them. We basically embed ourselves within the organizations that we’re working with very deeply,” Valenzuela explained. “We do versions of our film festival internally for teams as well so they can go through the process of making something and seeing the potential.”

Founded in 2018 at New York University’s Tisch School of the Arts by two Chileans and one Greek co-founder, Runway has a very different story than its Silicon Valley competitors. It was one of the first to bring an actually usable video-generation tool to the masses. Runway also contributed in foundational ways to the popular Stable Diffusion model.

Though it is vastly outspent by competitors like OpenAI, it has taken a hands-on approach to working with existing industries. You won’t hear Valenzuela or other Runway leaders talking about the imminence of AGI or anything so lofty; instead, it’s all about selling the product as something that can solve existing problems in creatives’ workflows.

Still, an artist’s mindset and relationships within the industry don’t negate some fundamental conflicts. There are multiple intellectual property cases involving Runway and its peers, and though the company hasn’t admitted it, there is evidence that it trained its models on copyrighted YouTube videos, among other things.

Cristóbal Valenzuela speaking on the AIFF 2025 stage. Credit: Samuel Axon

Valenzuela suggested that studios are worried about liability, not underlying principles, though, saying:

Most of the concerns on copyright are on the output side, which is like, how do you make sure that the model doesn’t create something that already exists or infringes on something. And I think for that, we’ve made sure our models don’t and are supportive of the creative direction you want to take without being too limiting. We work with every major studio, and we offer them indemnification.

In the past, he has also defended Runway by saying that what it’s producing is not a re-creation of what has come before. He sees the tool’s generative process as distinct—legally, creatively, and ethically—from simply pulling up assets or references from a database.

“People believe AI is sort of like a system that creates and conjures things magically with no input from users,” he said. “And it’s not. You have to do that work. You still are involved, and you’re still responsible as a user in terms of how you use it.”

He seemed to share this defense of AI as a legitimate tool for artists with conviction, but given that he’s been pitching these products directly to working filmmakers, he was also clearly aware that not everyone agrees with him. There is not even a consensus among those in the industry.

An industry divided

While in LA for the event, I visited separately with two of my oldest friends. Both of them work in the film and television industry in similar disciplines. They each asked what I was in town for, and I told them I was there to cover an AI film festival.

One immediately responded with a grimace of disgust, “Oh, yikes, I’m sorry.” The other responded with bright eyes and intense interest and began telling me how he already uses AI in his day-to-day to do things like extend shots by a second or two for a better edit, and expressed frustration at his company for not adopting the tools faster.

Neither is alone in their attitudes. Hollywood is divided—and not for the first time.

There have been seismic technological changes in the film industry before. There was the transition from silent films to talkies, obviously; moviemaking transformed into an entirely different art. Numerous old jobs were lost, and numerous new jobs were created.

Later, there was the transition from film to digital projection, which may be an even tighter parallel. It was a major disruption, with some companies and careers collapsing while others rose. There were people saying, “Why do we even need this?” while others believed it was the only sane way forward. Some audiences declared the quality worse, and others said it was better. There were analysts arguing it could be stopped, while others insisted it was inevitable.

IMAX’s head of post production, Bruce Markoe, spoke briefly about that history at a press mixer before the festival. “It was a little scary,” he recalled. “It was a big, fundamental change that we were going through.”

People ultimately embraced it, though. “The motion picture and television industry has always been very technology-forward, and they’ve always used new technologies to advance the state of the art and improve the efficiencies,” Markoe said.

When asked whether he thinks the same thing will happen with generative AI tools, he said, “I think some filmmakers are going to embrace it faster than others.” He pointed to pre-visualization as a particularly valuable use for AI tools and noted that some people are already working that way, but he said it will take time for others to get comfortable.

And indeed, many, many filmmakers are still loudly skeptical. “The concept of AI is great,” The Mitchells vs. the Machines director Mike Rianda said in a Wired interview. “But in the hands of a corporation, it is like a buzzsaw that will destroy us all.”

Others are interested in the technology but are concerned that it’s being brought into the industry too quickly, with insufficient planning and protections. That includes Crafty Apes Senior VFX Supervisor Luke DiTomasso. “How fast do we roll out AI technologies without really having an understanding of them?” he asked in an interview with Production Designers Collective. “There’s a potential for AI to accelerate beyond what we might be comfortable with, so I do have some trepidation and am maybe not gung-ho about all aspects of it.”

Others remain skeptical that the tools will be as useful as some optimists believe. “AI never passed on anything. It loved everything it read. It wants you to win. But storytelling requires nuance—subtext, emotion, what’s left unsaid. That’s something AI simply can’t replicate,” said Alegre Rodriquez, a member of the Emerging Technology committee at the Motion Picture Editors Guild.

The mirror

Flying back from Los Angeles, I considered two key differences between this generative AI inflection point for Hollywood and the silent/talkie or film/digital transitions.

First, neither of those transitions involved an existential threat to the technology on the basis of intellectual property and copyright. Valenzuela talked about what matters to studio heads—protection from liability over the outputs. But the countless creatives who are critical of these tools also believe they should be consulted and even compensated for their work’s use in the training data for Runway’s models. In other words, it’s not just about the outputs, it’s also about the sourcing. As noted before, there are several cases underway. We don’t know where they’ll land yet.

Second, there’s a more cultural and philosophical issue at play, which Valenzuela himself touched on in our conversation.

“I think AI has become this sort of mirror where anyone can project all their fears and anxieties, but also their optimism and ideas of the future,” he told me.

You don’t have to scroll for long to come across techno-utopians declaring with no evidence that AGI is right around the corner and that it will cure cancer and save our society. You also don’t have to scroll long to encounter visceral anger at every generative AI company from people declaring the technology—which is essentially just a new methodology for programming a computer—fundamentally unethical and harmful, with apocalyptic societal and economic ramifications.

Amid all those bold declarations, this film festival put the focus on the on-the-ground reality. First-time filmmakers who might never have previously cleared Hollywood’s gatekeepers are getting screened at festivals because they can create competitive-looking work with a fraction of the crew and hours. Studios and the people who work there are saying they’re saving time, resources, and headaches in pre-viz, editing, visual effects, and other work that’s usually done under immense time and resource pressure.

“People are not paying attention to the very huge amount of positive outcomes of this technology,” Valenzuela told me, pointing to those examples.

In this online discussion ecosystem that elevates outrage above everything else, that’s likely true. Still, there is a sincere and rigorous conviction among many creatives that their work is contributing to this technology’s capabilities without credit or compensation and that the structural and legal frameworks to ensure minimal human harm in this evolving period of disruption are still inadequate. That’s why we’ve seen groups like the Writers Guild of America West support the Generative AI Copyright Disclosure Act and other similar legislation meant to increase transparency about how these models are trained.

The philosophical question with a legal answer

The winning film argued that “total pixel space represents both the ultimate determinism and the ultimate freedom—every possibility existing simultaneously, waiting for consciousness to give it meaning through the act of choice.”

In making this statement, the film suggested that creativity, above all else, is an act of curation. It’s a claim that nothing, truly, is original. It’s a distillation of human expression into the language of mathematics.

To many, that philosophy rings undeniably true: Every possibility already exists, and artists are just collapsing the waveform to the frame they want to reveal. To others, there is more personal truth to the romantic ideal that artwork is valued precisely because it did not exist until the artist produced it.

All this is to say that the debate about creativity and AI in Hollywood is ultimately a philosophical one. But it won’t be resolved that way.

The industry may succumb to litigation fatigue and a hollowed-out workforce—or it may instead find its way to fair deals, new opportunities for fresh voices, and transparent training sets.

For all this lofty talk about creativity and ideas, the outcome will come down to the contracts, court decisions, and compensation structures—all things that have always been at least as big a part of Hollywood as the creative work itself.


Samuel Axon is the editorial lead for tech and gaming coverage at Ars Technica. He covers AI, software development, gaming, entertainment, and mixed reality. He has been writing about gaming and technology for nearly two decades at Engadget, PC World, Mashable, Vice, Polygon, Wired, and others. He previously ran a marketing and PR agency in the gaming industry, led editorial for the TV network CBS, and worked on social media marketing strategy for Samsung Mobile at the creative agency SPCSHP. He also is an independent software and game developer for iOS, Windows, and other platforms, and he is a graduate of DePaul University, where he studied interactive media and software development.

Curated realities: An AI film festival and the future of human expression Read More »

how-a-grad-student-got-lhc-data-to-play-nice-with-quantum-interference

How a grad student got LHC data to play nice with quantum interference


New approach is already having an impact on the experiment’s plans for future work.

The ATLAS particle detector of the Large Hadron Collider (LHC) at CERN, the European Organization for Nuclear Research, in Geneva, Switzerland. Credit: EThamPhoto/Getty Images

Measurements at the Large Hadron Collider have been stymied by one of the most central phenomena of the quantum world. But now, a young researcher has championed a new method to solve the problem using deep neural networks.

The Large Hadron Collider is one of the biggest experiments in history, but it’s also one of the hardest to interpret. Unlike seeing an image of a star in a telescope, saying anything at all about the data that comes out of the LHC requires careful statistical modeling.

“If you gave me a theory [that] the Higgs boson is this way or that way, I think people imagine, ‘Hey, you built the experiment, you should be able to tell me what you’re going to see under various hypotheses!’” said Daniel Whiteson, a professor at the University of California, Irvine. “But we don’t.”

One challenge with interpreting LHC data is interference, a core implication of quantum mechanics. Interference allows two possible events to inhibit each other, weakening the likelihood of seeing the result of either. In the presence of interference, physicists needed to use a fuzzier statistical method to analyze data, losing the data’s full power and increasing its uncertainty.

However, a recent breakthrough suggests a different way to tackle the problem. The ATLAS collaboration, one of two groups studying proton collisions at the LHC, released two papers last December that describe new ways of exploring data from their detector. One describes how to use a machine learning technique called Neural Simulation-Based Inference to maximize the potential of particle physics data. The other demonstrates its effectiveness with the ultimate test: re-doing a previous analysis with the new technique and seeing dramatic improvement.

The papers are the culmination of a young researcher’s six-year quest to convince the collaboration of the value of the new technique. Its success is already having an impact on the experiment’s plans for future work.

Making sense out of fusing bosons

Each particle collision at the LHC involves many possible pathways in which different particles combine to give rise to the spray of debris that experimenters see. In 2017, David Rousseau at IJCLab in Orsay, a member of the ATLAS collaboration, asked one of his students, Aishik Ghosh, to improve his team’s ability to detect a specific pathway. That particular pathway is quite important since it’s used to measure properties of the Higgs boson, a particle (first measured in 2012) that helps explain the mass of all other fundamental particles.

It was a pretty big ask. “When a grad student gets started in ATLAS, they’re a tiny cog in a giant, well-oiled machine of 3,500 physicists, who all seem to know exactly what they’re doing,” said Ghosh.

The pathway Ghosh was asked to study occurs via several steps. First, the two colliding protons each emit a W boson, a particle associated with the weak nuclear force. These two bosons fuse together, changing their identity to form a Higgs boson. The Higgs boson then decays, forming a pair of Z bosons, another particle associated with the weak force. Finally, those Z bosons themselves each decay into a lepton, like an electron, and its antimatter partner, like a positron.

A Feynman diagram for the pathway studied by Aishik Ghosh. Credit: ATLAS

Measurements like the one Ghosh was studying are a key way of investigating the properties of the Higgs boson. By precisely measuring how long it takes the Higgs boson to decay, physicists could find evidence of it interacting with new, undiscovered particles that are too massive for the LHC to produce directly.

Ghosh started on the project, hoping to find a small improvement in the collaboration’s well-tested methods. Instead, he noticed a larger issue. The goal he was given, of detecting a single pathway by itself, didn’t actually make sense.

“I was doing that and I realized, ‘What am I doing?’ There’s no clear objective,” said Ghosh.

The problem was quantum interference.

How quantum histories interfere

One of the most famous demonstrations of the mysterious nature of quantum mechanics is called the double-slit experiment. In this demonstration, electrons are shot through a screen with two slits that allow them to pass through to a photographic plate on the other side. With one slit covered, the electrons form a pattern centered on the opening. The photographic plate lights up bright right across from the slit and dims further away from it.

With both slits open, you would expect the pattern to simply get brighter as more electrons reach the photographic plate. Instead, something stranger happens. The two slits do not give rise to two nice bright peaks; you see a rippling pattern in which some areas get brighter while others get dimmer, even though the dimmer areas should, in principle, be easier for electrons to reach.

The effect happens even if the electrons are shot at the screen one by one to stop them from influencing each other directly. It’s as if each electron carries with it two possible histories, one in which it goes through one slit and another where it goes through the other before both end up at the same place. These two histories interfere with each other so that some destinations become less likely instead of more likely.

Results of the double-slit experiment. Credit: Jordgette (CC BY-SA 3.0)

For electrons in the double-slit experiment, the two different histories are two different paths through space. For a measurement at the Large Hadron Collider, the histories are more abstract—paths that lead through transformations of fields. One history might be like the pathway Ghosh was asked to study, in which two W bosons fuse to form a Higgs boson before the Higgs boson splits into two Z bosons. But in another history, the two W bosons might fuse and immediately split into two Z bosons without ever producing a Higgs.

Both histories have the same beginning, with two W bosons, and the same end, with two Z bosons. And just as the two histories of electrons in the double-slit experiment can interfere, so can the two histories for these particles.

Another possible history for colliding particles at the Large Hadron Collider, which interferes with the measurement Ghosh was asked to do. Credit: ATLAS

That interference makes the effect of the Higgs boson much more challenging to spot. ATLAS scientists wanted to look for two pairs of electrons and positrons, which would provide evidence that two Z bosons were produced. They would classify their observations into two types: observations that are evidence for the signal they were looking for (that of a decaying Higgs boson) and observations of events that generate this pattern of particles without the Higgs boson acting as an intermediate (the latter are called the background). But the two types of observations, signal and background, interfere. With a stronger signal, corresponding to more Higgs bosons decaying, you might observe more pairs of electrons and positrons… but if these events interfere, you also might see those pairs disappear.
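In the language of quantum amplitudes, the rate ATLAS measures comes from the squared magnitude of the sum of the two histories, not the sum of their separate rates. Schematically:

$$\lvert A_{\text{signal}} + A_{\text{background}} \rvert^{2} = \lvert A_{\text{signal}} \rvert^{2} + \lvert A_{\text{background}} \rvert^{2} + 2\,\mathrm{Re}\!\left(A_{\text{signal}}\, A_{\text{background}}^{*}\right)$$

The final cross-term is the interference, and it can be negative, so strengthening the signal amplitude can actually reduce the total number of events observed. That is why simply sorting events into “signal” and “background” bins breaks down.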

Learning to infer

In traditional approaches, those disappearances are hard to cope with, even when using methods that already incorporate machine learning.

One of the most common uses of machine learning is classification—for example, distinguishing between pictures of dogs and cats. You train the machine on pictures of cats and pictures of dogs, and it tells you, given a picture, which animal is the most likely match. Physicists at the LHC were already using this kind of classification method to characterize the products of collisions, but it functions much worse when interference is involved.

“If you have something that disappears, you don’t quite know what to train on,” said David Rousseau. “Usually, you’re training signal versus background, exactly like you’re training cats versus dogs. When there is something that disappears, you don’t see what you trained on.”

At first, Ghosh tried a few simple tricks, but as time went on, he realized he needed to make a more fundamental change. He reached out to others in the community and learned about a method called Neural Simulation-Based Inference, or NSBI.

In older approaches, people had trained machine learning models to classify observations into signal and background, using simulations of particle collisions to make the training data. Then they used that classification to infer the most likely value of a number, like the amount of time it takes a Higgs boson to decay, based on data from an actual experiment. Neural Simulation-Based Inference skips the classification and goes directly to the inference.

Instead of trying to classify observations into signal and background, NSBI uses simulations to teach an artificial neural network to guess a formula called a likelihood ratio. Someone using NSBI would run several simulations that describe different situations, such as letting the Higgs boson decay at different rates, and then check how many of each type of simulation yielded a specific observation. The fraction of these simulations with a certain decay rate would provide the likelihood ratio, a method for inferring which decay rate is more likely given experimental evidence. If the neural network is good at guessing this ratio, it will be good at finding how long the Higgs takes to decay.

Because NSBI doesn’t try to classify observations into different categories, it handles quantum interference more effectively. Instead of trying to find the Higgs based on a signal that disappears, it examines all the data, trying to guess which decay time is the most likely.
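The core trick can be demonstrated in a few lines: a classifier trained to distinguish simulations from two hypotheses implicitly learns their likelihood ratio, since for balanced training data its output odds p/(1 − p) approximate p(x|H1)/p(x|H0). Here’s a minimal, self-contained toy version using NumPy and scikit-learn; the one-dimensional Gaussian “simulations” and the network size are illustrative stand-ins, not ATLAS’s actual pipeline.

```python
# Toy neural likelihood-ratio estimation (the idea underlying NSBI).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 50_000

# Simulated observable under two hypotheses (say, two Higgs decay rates).
x_h0 = rng.normal(0.0, 1.0, n)   # hypothesis 0
x_h1 = rng.normal(0.5, 1.0, n)   # hypothesis 1

X = np.concatenate([x_h0, x_h1]).reshape(-1, 1)
y = np.concatenate([np.zeros(n), np.ones(n)])

clf = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=200).fit(X, y)

# With balanced classes, p / (1 - p) estimates p(x|H1) / p(x|H0).
p = clf.predict_proba([[0.25]])[0, 1]
print(f"estimated likelihood ratio at x = 0.25: {p / (1 - p):.2f}")
# The exact ratio at the midpoint x = 0.25 is 1.0, so a well-trained
# network should print a value close to 1.
```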

Ghosh tested the method, which showed promising results on test data, and presented the results at a conference in 2019. But if he was going to convince the ATLAS collaboration that the method was safe to use, he still had a lot of work ahead of him.

Shifting the weight on ATLAS’ shoulders

Experiments like ATLAS have high expectations attached to them. A collaboration of thousands of scientists, ATLAS needs to not only estimate the laws of physics but also have a clear idea of just how uncertain those estimates are. At the time, NSBI hadn’t been tested in that way.

“None of this has actually been used on data,” said Ghosh. “Nobody knew how to quantify the uncertainties. So you have a neural network that gives you a likelihood. You don’t know how good the likelihood is. Is it well-estimated? What if it’s wrongly estimated just in some weird corner? That would completely bias your results.”

Checking those corners was too big a job for a single PhD student and too complex to complete within a single PhD degree. Ghosh would have to build a team, and he would need time to build that team. That’s tricky in the academic world, where students go on to short-term postdoc jobs with the expectation that they quickly publish new results to improve their CVs for the next position.

“We’re usually looking to publish the next paper within two to three years—no time to overhaul our methods,” said Ghosh. Fortunately, Ghosh had support. After receiving his PhD under Rousseau, he went to work with Daniel Whiteson, who encouraged him to pursue his ambitious project.

“I think it’s really important that postdocs learn to take those risks because that’s what science is,” Whiteson said.

Ghosh gathered his team. Another student of Rousseau’s, Arnaud Maury, worked to calibrate the machine’s confidence in its answers. A professor at the University of Massachusetts, Rafael Coelho Lopes de Sa, joined the project. His student Jay Sandesara would have a key role in getting the calculation to work at full scale on a computer cluster. IJCLab emeritus RD Schaffer and University of Liège professor Gilles Loupe provided cross-checks and advice.

The team wanted a clear demonstration that their method worked, so they took an unusual step. They took data that ATLAS had already analyzed and performed a full analysis using their method instead, showing that it could pass every check the collaboration could think of. They would publish two papers, one describing the method and the other giving the results of their upgraded analysis. Zach Marshall, who was the computing coordinator for ATLAS at the time, helped get the papers through, ensuring that they were vetted by experts in multiple areas.

“It was a very small subset of our community that had that overlap between this technical understanding and the physics analysis experience and understanding that were capable of really speaking to whether that paper was sufficient and intelligible and useful. So we really had to make sure that we engaged that little group of humans by name,” said Marshall.

The new method showed significant improvements, getting a much more precise result than the collaboration’s previous analysis. That improvement, and the thorough checks, persuaded ATLAS to use NSBI more broadly going forward. It will give them much more precision than they expected, using the Higgs boson to search for new particles and clarify our understanding of the quantum world. When ATLAS discusses its future plans, it makes projections of the precision it expects to reach in the future. But those plans are now being upended.

“One of the fun things about this method that Aishik pushed hard is each time it feels like now we do that projection—here’s how well we’ll do in 15 years—we absolutely crush those projections,” said Marshall. “So we are just now having to redo a set of projections because we matched our old projections for 15 years out already today. It’s a very fun problem to have.”

How a grad student got LHC data to play nice with quantum interference Read More »

study:-meta-ai-model-can-reproduce-almost-half-of-harry-potter-book

Study: Meta AI model can reproduce almost half of Harry Potter book


Harry Potter and the Copyright Lawsuit

The research could have big implications for generative AI copyright lawsuits.

Meta CEO Mark Zuckerberg. Credit: Andrej Sokolow/picture alliance via Getty Images

In recent years, numerous plaintiffs—including publishers of books, newspapers, computer code, and photographs—have sued AI companies for training models using copyrighted material. A key question in all of these lawsuits has been how easily AI models produce verbatim excerpts from the plaintiffs’ copyrighted content.

For example, in its December 2023 lawsuit against OpenAI, The New York Times Company produced dozens of examples where GPT-4 exactly reproduced significant passages from Times stories. In its response, OpenAI described this as a “fringe behavior” and a “problem that researchers at OpenAI and elsewhere work hard to address.”

But is it actually a fringe behavior? And have leading AI companies addressed it? New research—focusing on books rather than newspaper articles and on different companies—provides surprising insights into this question. Some of the findings should bolster plaintiffs’ arguments, while others may be more helpful to defendants.

The paper was published last month by a team of computer scientists and legal scholars from Stanford, Cornell, and West Virginia University. They studied whether five popular open-weight models—three from Meta and one each from Microsoft and EleutherAI—were able to reproduce text from Books3, a collection of books that is widely used to train LLMs. Many of the books are still under copyright.

This chart illustrates their most surprising finding:

The chart shows how easy it is to get a model to generate 50-token excerpts from various parts of Harry Potter and the Sorcerer’s Stone. The darker a line is, the easier it is to reproduce that portion of the book.

Each row represents a different model. The three bottom rows are Llama models from Meta. And as you can see, Llama 3.1 70B—a mid-sized model Meta released in July 2024—is far more likely to reproduce Harry Potter text than any of the other four models.

Specifically, the paper estimates that Llama 3.1 70B has memorized 42 percent of the first Harry Potter book well enough to reproduce 50-token excerpts at least half the time. (I’ll unpack how this was measured in the next section.)

Interestingly, Llama 1 65B, a similar-sized model released in February 2023, had memorized only 4.4 percent of Harry Potter and the Sorcerer’s Stone. This suggests that despite the potential legal liability, Meta did not do much to prevent memorization as it trained Llama 3. At least for this book, the problem got much worse between Llama 1 and Llama 3.

Harry Potter and the Sorcerer’s Stone was one of dozens of books tested by the researchers. They found that Llama 3.1 70B was far more likely to reproduce popular books—such as The Hobbit and George Orwell’s 1984—than obscure ones. And for most books, Llama 3.1 70B memorized more than any of the other models.

“There are really striking differences among models in terms of how much verbatim text they have memorized,” said James Grimmelmann, a Cornell law professor who has collaborated with several of the paper’s authors.

The results surprised the study’s authors, including Mark Lemley, a law professor at Stanford. (Lemley used to be part of Meta’s legal team, but in January, he dropped them as a client after Facebook adopted more Trump-friendly moderation policies.)

“We’d expected to see some kind of low level of replicability on the order of 1 or 2 percent,” Lemley told me. “The first thing that surprised me is how much variation there is.”

These results give everyone in the AI copyright debate something to latch onto. For AI industry critics, the big takeaway is that—at least for some models and some books—memorization is not a fringe phenomenon.

On the other hand, the study only found significant memorization of a few popular books. For example, the researchers found that Llama 3.1 70B only memorized 0.13 percent of Sandman Slim, a 2009 novel by author Richard Kadrey. That’s a tiny fraction of the 42 percent figure for Harry Potter.

This could be a headache for law firms that have filed class-action lawsuits against AI companies. Kadrey is the lead plaintiff in a class-action lawsuit against Meta. To certify a class of plaintiffs, a court must find that the plaintiffs are in largely similar legal and factual situations.

Divergent results like these could cast doubt on whether it makes sense to lump J.K. Rowling, Kadrey, and thousands of other authors together in a single mass lawsuit. And that could work in Meta’s favor, since most authors lack the resources to file individual lawsuits.

The broader lesson of this study is that the details will matter in these copyright cases. Too often, online discussions have treated “do generative models copy their training data or merely learn from it?” as a theoretical or even philosophical question. But it’s a question that can be tested empirically—and the answer might differ across models and across copyrighted works.

It’s common to talk about LLMs predicting the next token. But under the hood, what the model actually does is generate a probability distribution over all possibilities for the next token. For example, if you prompt an LLM with the phrase “Peanut butter and,” it will respond with a probability distribution that might look like this made-up example:

  • P(“jelly”) = 70 percent
  • P(“sugar”) = 9 percent
  • P(“peanut”) = 6 percent
  • P(“chocolate”) = 4 percent
  • P(“cream”) = 3 percent

And so forth.

After the model generates a list of probabilities like this, the system will select one of these options at random, weighted by their probabilities. So 70 percent of the time the system will generate “Peanut butter and jelly.” Nine percent of the time, we’ll get “Peanut butter and sugar.” Six percent of the time, it will be “Peanut butter and peanut.” You get the idea.
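That sampling step is easy to mimic in a few lines; here’s a tiny sketch using the made-up probabilities above (a real model’s vocabulary would contain tens of thousands of entries, not six).

```python
import random

# Made-up next-token distribution for the prompt "Peanut butter and".
next_token_probs = {
    "jelly": 0.70, "sugar": 0.09, "peanut": 0.06,
    "chocolate": 0.04, "cream": 0.03,
    "<everything else>": 0.08,  # rest of the vocabulary, lumped together
}

# Pick one token at random, weighted by its probability.
token = random.choices(list(next_token_probs),
                       weights=list(next_token_probs.values()))[0]
print("Peanut butter and", token)
```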

The study’s authors didn’t have to generate multiple outputs to estimate the likelihood of a particular response. Instead, they could calculate probabilities for each token and then multiply them together.

Suppose someone wants to estimate the probability that a model will respond to “My favorite sandwich is” with “peanut butter and jelly.” Here’s how to do that:

  • Prompt the model with “My favorite sandwich is,” and look up the probability of “peanut” (let’s say it’s 20 percent).
  • Prompt the model with “My favorite sandwich is peanut,” and look up the probability of “butter” (let’s say it’s 90 percent).
  • Prompt the model with “My favorite sandwich is peanut butter” and look up the probability of “and” (let’s say it’s 80 percent).
  • Prompt the model with “My favorite sandwich is peanut butter and” and look up the probability of “jelly” (let’s say it’s 70 percent).

Then we just have to multiply the probabilities like this:

0.2 × 0.9 × 0.8 × 0.7 = 0.1008

So we can predict that the model will produce “peanut butter and jelly” about 10 percent of the time, without actually generating 100 or 1,000 outputs and counting how many of them were that exact phrase.
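In code, that chain-rule calculation is just a running product of the per-step probabilities (the numbers are the made-up ones from the example above):

```python
import math

# Per-token probabilities from the made-up example, in order:
# P("peanut"|prompt), P("butter"|...), P("and"|...), P("jelly"|...)
step_probs = [0.2, 0.9, 0.8, 0.7]

print(math.prod(step_probs))  # ~0.1008, i.e., about 10 percent

# Real analyses sum log-probabilities instead, so 50-token sequences
# don't underflow to zero:
log_p = sum(math.log(p) for p in step_probs)
print(math.exp(log_p))        # same answer: ~0.1008
```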

This technique greatly reduced the cost of the research, allowed the authors to analyze more books, and made it feasible to precisely estimate very low probabilities.

For example, the authors estimated that it would take more than 10 quadrillion samples to exactly reproduce some 50-token sequences from some books. Obviously, it wouldn’t be feasible to actually generate that many outputs. But it wasn’t necessary: the probability could be estimated just by multiplying the probabilities for the 50 tokens.

A key thing to notice is that probabilities can get really small really fast. In my made-up example, the probability that the model will produce the four tokens “peanut butter and jelly” is just 10 percent. If we added more tokens, the probability would get even lower. If we added 46 more tokens, the probability could fall by several orders of magnitude.

For any language model, the probability of generating any given 50-token sequence “by accident” is vanishingly small. If a model generates 50 tokens from a copyrighted work, that is strong evidence that the tokens “came from” the training data. This is true even if it only generates those tokens 10 percent, 1 percent, or 0.01 percent of the time.

The study authors took 36 books and divided each of them into overlapping 100-token passages. Using the first 50 tokens as a prompt, they calculated the probability that the next 50 tokens would be identical to the original passage. They counted a passage as “memorized” if the model had a greater than 50 percent chance of reproducing it word for word.

This definition is quite strict. For a 50-token sequence to have a probability greater than 50 percent, the average token in the passage needs a probability of at least 98.5 percent! Moreover, the authors only counted exact matches. They didn’t try to count cases where—for example—the model generates 48 or 49 tokens from the original passage but got one or two tokens wrong. If these cases were counted, the amount of memorization would be even higher.
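That 98.5 percent figure follows directly from the 50 percent threshold: the per-token probability has to average at least 0.5^(1/50). A quick check:

```python
# A 50-token passage counts as "memorized" if the model reproduces it
# with probability > 0.5, so the geometric-mean token probability must
# exceed 0.5 ** (1/50).
n = 50
print(0.5 ** (1 / n))   # ~0.9862, the "at least 98.5 percent" figure
print(0.985 ** n)       # ~0.47: just below the memorization bar
print(0.987 ** n)       # ~0.52: clears it
```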

This research provides strong evidence that significant portions of Harry Potter and the Sorcerer’s Stone were copied into the weights of Llama 3.1 70B. But this finding doesn’t tell us why or how this happened. I suspect that part of the answer is that Llama 3 70B was trained on 15 trillion tokens—more than 10 times the 1.4 trillion tokens used to train Llama 1 65B.

The more times a model is trained on a particular example, the more likely it is to memorize that example. Perhaps Meta had trouble finding 15 trillion distinct tokens, so it trained on the Books3 dataset multiple times. Or maybe Meta added third-party sources—such as online Harry Potter fan forums, consumer book reviews, or student book reports—that included quotes from Harry Potter and other popular books.

I’m not sure that either of these explanations fully fits the facts. The fact that memorization was a much bigger problem for the most popular books does suggest that Llama may have been trained on secondary sources that quote these books rather than the books themselves. There are likely orders of magnitude more online discussions of Harry Potter than of Sandman Slim.

On the other hand, it’s surprising that Llama memorized so much of Harry Potter and the Sorcerer’s Stone.

“If it were citations and quotations, you’d expect it to concentrate around a few popular things that everyone quotes or talks about,” Lemley said. The fact that Llama 3 memorized almost half the book suggests that the entire text was well represented in the training data.

Or there could be another explanation entirely. Maybe Meta made subtle changes in its training recipe that accidentally worsened the memorization problem. I emailed Meta for comment last week but haven’t heard back.

“It doesn’t seem to be all popular books,” Mark Lemley told me. “Some popular books have this result and not others. It’s hard to come up with a clear story that says why that happened.”

Broadly speaking, there are three theories for how a large language model could infringe copyright:

  1. Training on a copyrighted work is inherently infringing because the training process involves making a digital copy of the work.
  2. The training process copies information from the training data into the model, making the model a derivative work under copyright law.
  3. Infringement occurs when a model generates (portions of) a copyrighted work.

A lot of discussion so far has focused on the first theory because it is the most threatening to AI companies. If the courts uphold this theory, most current LLMs would be illegal, whether or not they have memorized any training data.

The AI industry has some pretty strong arguments that using copyrighted works during the training process is fair use under the 2015 Google Books ruling. But the fact that Llama 3.1 70B memorized large portions of Harry Potter could color how the courts consider these fair use questions.

A key part of fair use analysis is whether a use is “transformative”—whether a company has made something new or is merely profiting from the work of others. The fact that language models are capable of regurgitating substantial portions of popular works like Harry Potter, 1984, and The Hobbit could cause judges to look at these fair use arguments more skeptically.

Moreover, one of Google’s key arguments in the books case was that its system was designed to never return more than a short excerpt from any book. If the judge in the Meta lawsuit wanted to distinguish Meta’s arguments from the ones Google made in the books case, he could point to the fact that Llama can generate far more than a few lines of Harry Potter.

The new study “complicates the story that the defendants have been telling in these cases,” co-author Mark Lemley told me. “Which is ‘we just learn word patterns. None of that shows up in the model.’”

But the Harry Potter result creates even more danger for Meta under that second theory—that Llama itself is a derivative copy of Rowling’s book.

“It’s clear that you can in fact extract substantial parts of Harry Potter and various other books from the model,” Lemley said. “That suggests to me that probably for some of those books there’s something the law would call a copy of part of the book in the model itself.”

The Google Books precedent probably can’t protect Meta against this second legal theory because Google never made its books database available for users to download—Google almost certainly would have lost the case if it had done that.

In principle, Meta could still convince a judge that copying 42 percent of Harry Potter was allowed under the flexible, judge-made doctrine of fair use. But it would be an uphill battle.

“The fair use analysis you’ve gotta do is not just ‘is the training set fair use,’ but ‘is the incorporation in the model fair use?’” Lemley said. “That complicates the defendants’ story.”

Grimmelmann also said there’s a danger that this research could put open-weight models in greater legal jeopardy than closed-weight ones. The Cornell and Stanford researchers could only do their work because the authors had access to the underlying model—and hence to the token probability values that allowed efficient calculation of probabilities for sequences of tokens.

Most leading labs, including OpenAI, Anthropic, and Google, have increasingly restricted access to these so-called logits, making it more difficult to study these models.

Moreover, if a company keeps model weights on its own servers, it can use filters to try to prevent infringing output from reaching the outside world. So even if the underlying OpenAI, Anthropic, and Google models have memorized copyrighted works in the same way as Llama 3.1 70B, it might be difficult for anyone outside the company to prove it.

This kind of filtering also makes it easier for companies with closed-weight models to invoke the Google Books precedent. In short, copyright law might create a strong disincentive for companies to release open-weight models.

“It’s kind of perverse,” Mark Lemley told me. “I don’t like that outcome.”

On the other hand, judges might conclude that it would be bad to effectively punish companies for publishing open-weight models.

“There’s a degree to which being open and sharing weights is a kind of public service,” Grimmelmann told me. “I could honestly see judges being less skeptical of Meta and others who provide open-weight models.”

Timothy B. Lee was on staff at Ars Technica from 2017 to 2021. Today, he writes Understanding AI, a newsletter that explores how AI works and how it’s changing our world. You can subscribe here.

Photo of Timothy B. Lee

Timothy is a senior reporter covering tech policy and the future of transportation. He lives in Washington DC.

Study: Meta AI model can reproduce almost half of Harry Potter book Read More »

framework-laptop-12-review:-i’m-excited-to-see-what-the-2nd-generation-looks-like

Framework Laptop 12 review: I’m excited to see what the 2nd generation looks like


How much would you pay for personality?

A sturdy, thoughtful, cute design that just can’t compete in its price range.

Framework’s Laptop 12 has a lot of personality, but also a lot of shortcomings. Credit: Andrew Cunningham

“What’s this purple laptop? It’s cool.”

Over a decade-plus of doing gadget reviews and review-adjacent things, my wife (and, lately, my 5-year-old) have mostly stopped commenting on the ever-shifting selection of laptops I have in my bag or lying around the house at any given time. Maybe she can’t tell them apart, or maybe she just figures there isn’t that much to say about whatever black or silver metal slab I’m carrying around. Either way, they practically never elicit any kind of response, unless there are just too many of them sitting out in too many places.

But she did ask about the Framework Laptop 12, the third and latest major design in Framework’s slowly expanding lineup of modular, repairable, upgradeable laptops. With its five two-toned color options and sturdy plastic exterior, it’s definitely more approachable and friendly-looking than the Laptop 13 or Laptop 16, both metal slabs with a somewhat less-finished and prototype-y look to them. But it retains the features that a certain kind of PC geek likes about Framework’s other laptops—user-customizable and swappable ports, an easy-to-open design, first-class Linux support, and the promise of future upgrades that improve its performance and other specs.

Look and feel

The Laptop 12 stacked atop the Laptop 13. Credit: Andrew Cunningham

Plastic gets a bad rap, and there are indeed many subpar plastic gadgets out there. When done poorly, plastic can look and feel cheap, resulting in less durable devices that show more wear over time.

But well-done plastic can still feel solid and high-quality, in addition to being easier to make in different colors. Framework says the Laptop 12’s chassis is a combination of ABS plastic and TPU plastic (a more flexible, rubberized material), molded over a metal inner structure. The result is something that can probably take the shock of a drop or a fall better than many aluminum-and-glass laptops can, without feeling overly cheap or chintzy.

The five two-tone color options—the boring, businesslike black and gray, plus purple-and-gray lavender, pink-and-baby-blue bubblegum, and green sage—are the most fun thing about the laptop, and the lavender and bubblegum colors are particularly eye-catching.

Keyboard and trackpad. Only the lavender and gray laptops get a color-matched trackpad; the keyboard and deck are always different shades of gray. Credit: Andrew Cunningham

Matching other components to the exterior of the system can be a bit of a crapshoot, though. The screwdriver and spudger that Framework provides for upgrading and repairing all of its systems do match the color of the laptop, and the two-tone styluses for the touchscreens will also match the laptops when they’re made available for purchase in the coming months.

The lavender option is the only one that can also be configured with a color-matched lavender trackpad—the only other trackpad option is gray, and the keyboard deck and the keyboard itself are all gray no matter what color laptop you pick. This is presumably meant to limit the number of different trackpad options that Framework has to manufacture and stock, but it is too bad that the laptop’s keyboard and palm rest aren’t as colorful as the rest of it.

The Laptop 12 also uses Framework’s still-unique Expansion Card system for customizing the built-in ports. These are all 10 Gbps USB 3.2 Gen 2 ports rather than the Thunderbolt ports on the Intel versions of the Laptop 13, but all four support the same speeds, all four support charging, and all four support display output, so you really can put whatever port you want wherever you want it.

A downside of the Laptop 12 is that, as of this writing, only the USB-C Expansion Modules are available in color-matched versions. If you want USB-A, HDMI, DisplayPort, or any other kind of port on your system, you’ll get the silver modules that were designed to match the finish on the Framework Laptops 13 and 16, so you’ll have to put up with at least one mismatched port on your otherwise adorable system.

Only the USB-C Expansion Cards are available in lavender, which can make for goofy-looking mismatches. But I do prefer the Framework 16-style retention switches to the Framework Laptop 13’s retention buttons, which you need to hold down as you pull out the Expansion Card. Credit: Andrew Cunningham

Once you get past the adorable design, the Expansion Modules, and the sturdy construction, the system’s downsides start to become more apparent. The 12.2-inch, 1920×1200 touchscreen gets plenty bright and has a respectable contrast ratio (440 nits and 1,775:1 in our testing, respectively). But it’s surrounded by thick black bezels on all sides, particularly on the bottom—it does seem that either a larger screen or a slightly smaller laptop design would be possible if so much space weren’t wasted by these thick borders.

The display has good viewing angles but a distinctly mediocre color gamut, covering around 60 percent of the sRGB color space (compared to the high 90s for the Laptop 13 and most midrange to high-end IPS screens in other laptops). This is low enough that most colors appear slightly muted and washed out—reds most noticeably, though greens aren’t much better. You definitely don’t need a colorimeter to see the difference here.

Framework’s color-matched stylus isn’t ready yet, but you won’t need to wait for one if you want to use a pen with this touchscreen. Both the Universal Stylus Initiative (USI) 2.0 and Microsoft Pen Protocol (MPP) 2.0 specs are supported, so the Surface Pen, a bunch of Lenovo styluses, and any number of inexpensive third-party Amazon styluses will all work just fine. That said, the screen can only support one of those stylus specs at a time—MPP is on by default, and you can swap between them in the BIOS settings.

The webcam and mic have locks to disable them so that the OS can’t see or use them. Credit: Andrew Cunningham

The keyboard feels mostly fine, with good key spacing and a nice amount of travel. I noticed that I was occasionally missing letters the first couple of days I used the laptop—I was pressing the keys, but they intermittently didn’t register. That got better as I adjusted to the system. The trackpad is also unremarkable in a good way. Finger tracking and multi-touch gestures all worked as intended.

But the keyboard lacks a backlight, and it doesn’t have the fingerprint sensor you get with the Laptop 13. With no fingerprint sensor and no IR webcam, there are no biometric authentication options available for use with Windows Hello, so you’ll either need a PIN or a password to unlock your laptop every time you want to use it. Either omission would be sort of annoying in a laptop in this price range (we complained about the lack of keyboard backlight in the $700 Surface Laptop Go 2 a few years ago), but to be missing both is particularly frustrating in a modern system that costs this much.

Repairs and upgrades

We’ve been inside the Framework Laptop 13 enough times that we don’t do deep dives into its insides anymore, but as a new (and, in some ways, more refined) design, the Laptop 12 warrants a closer look this time around.

Framework’s pack-in Torx screwdriver is still the only tool you need to work on the Laptop 12. Undo the eight captive screws on the bottom of the laptop, and you’ll be able to lift away the entire keyboard and trackpad area to expose all of the other internal components, including the RAM, SSD, battery, and the motherboard itself.

The motherboard is quite a bit smaller than the Framework Laptop 13 board, and the two are definitely not interchangeable. Framework has never said otherwise, but it’s worth highlighting that these are two totally separate models that will have their own distinct components and upgrade paths—that goes for parts like the speakers and battery, too.

Laptop 12 motherboard on top, Laptop 13 motherboard on bottom. Credit: Andrew Cunningham

As a result of that reduction in board space, the Laptop 12 can only fit a single DDR5 RAM slot, which reduces memory bandwidth and limits your RAM capacity to 48GB. It also uses shorter M.2 2230 SSDs, like the Surface lineup or the Steam Deck. Unlike a few years ago, these SSDs are now readily available at retail, and it’s also easy to buy warranty-less ones on eBay or elsewhere that have been pulled from OEM systems. But they’re still a bit more expensive than the more common M.2 2280 size, and you have fewer options overall.

Framework has already published a guide on setting up the DIY Edition of the laptop and a few repair guides for common components. Guides for replacing bigger or more complex parts, like the display or the webcam, are still listed as “coming soon.”

Performance and battery life

I could politely describe the Laptop 12’s 2.5-year-old 13th-gen Intel Core processor as “mature.” This generation of Intel chips has stuck around for a lot longer than usual, to the point that Intel recently acknowledged that it has been dealing with shortages. They’re appealing to PC companies because they still offer decent everyday performance for basic computing without the additional costs imposed by things like on-package memory or having some or all of the chip manufactured outside of Intel’s own factories.

The upside of a slightly older processor is a more stable computing experience, in both Windows and Linux, since the companies and communities involved have had more time to add support and work out bugs; I had none of the sleep-and-wake issues or occasional video driver crashes I had while testing the Ryzen AI 300 version of the Framework Laptop 13.

The downside, of course, is that performance is pretty unexciting. These low-power U-series 12th- and 13th-gen Intel chips remain capable when it comes to day-to-day computing, but they fall far behind the likes of Intel and AMD’s newer chips, the Qualcomm Snapdragon chips in the Microsoft Surface and other Copilot+ PCs, or the Apple M4 in the MacBook Air.

And while none of these chips are really intended for gaming laptops, the Laptop 12 isn’t even a great fit for that kind of casual Steam Deck-y 3D gaming that most Framework Laptop 13 models can handle. Technically, this is the same basic Intel Iris Xe GPU that the first few generations of Framework Laptop 13 used, which is not exciting as integrated GPUs go but is at least still minimally capable. But because the Laptop 12 only has a single RAM slot instead of two, memory bandwidth is halved, which makes the GPU identify itself as “Intel UHD Graphics” to the device manager and drags down performance accordingly. (This is something these GPUs have always done, but they usually ship in systems that either have two RAM slots or soldered-down memory, so it usually doesn’t come up.)

Framework has tuned these chips to consume the same amount of power in both the “Balanced” and “Best Performance” power modes in Windows, with a 15 W sustained power limit and a 40 W limit for shorter, bursty workloads. This keeps the laptop feeling nice and responsive for day-to-day use and helps keep a lid on power usage for battery life reasons, but it also limits its performance for extended CPU-intensive workloads like our Handbrake video encoding test.

The Laptop 12 takes a lot longer to accomplish these tasks than some other laptops we’ve tested with similar chips, either because of the lower memory bandwidth or because Best Performance mode doesn’t let the chip consume a bunch of extra power. I’m not inclined to complain too much about this because it’s not the kind of thing you really buy an ultraportable laptop to do, but as with light gaming, it’s worth noting that the Laptop 12 doesn’t hit that same “usable for these workloads in a pinch” balance that the Laptop 13 does.

The Laptop 12’s battery life is decent relative to most Laptop 13s. Credit: Andrew Cunningham

The Core i5 version of the Laptop 12 lasted around 10 hours in the PCMark Modern Office battery life test, which isn’t stunning but is a step up from what the fully specced versions of the Framework Laptop 13 can offer. It will be just fine for a long flight or a full day of work or school. Our Framework reviews often complain about battery life, but I don’t think it will be an issue here for most users.

About that price

In some ways, the Laptop 12 is trying to be a fundamentally different laptop from the Laptop 13. For all the Laptop 13’s upgrades over the years, it has never had a touchscreen option, stylus support, or a convertible hinge.

But in most of the ways that count, the Laptop 12 is meant to be an “entry-level, lower-cost laptop,” which is how Framework CEO Nirav Patel has positioned it in the company’s announcement blog posts and videos. It features a slightly smaller, lower-resolution, less colorful screen with a lower refresh rate; a non-backlit keyboard; and considerably weaker processors. It also lacks both a fingerprint reader and a face-scanning webcam for Windows Hello.

The issue is that these cost-cutting compromises come at a price that’s a bit outside of what you’d expect of a “budget” laptop.

The DIY Edition of the Laptop 12 we’re evaluating here—a version that ships with the Windows license and all the components you need but which you assemble yourself—will run you at least $1,176, depending on the Expansion Modules you choose for your ports. That includes 16GB of DDR5 RAM and a 1TB M.2 2230 SSD, plus the Core i5-1334U processor option (two P-cores, eight E-cores). If you stepped down to a 500GB SSD instead, that’s still $1,116. A pre-built edition—only available in black, but with identical specifications—would run you $1,049.

The Laptop 13 compared to the Laptop 12. The Laptop 12 is missing quite a few quality-of-life things and has worse performance, but it isn’t all that much cheaper. Credit: Andrew Cunningham

This puts the Framework Laptop 12 in the same general price range as Apple’s MacBook Air, Microsoft’s 13-inch Surface Laptop, and even many editions of the Framework Laptop 13. And the Laptop 12 is charming, but its day-to-day user experience falls well short of any of those devices.

You can make it cheaper! Say you go for the Core i3-1315U version (two P-cores, four E-cores) instead, and you buy your own 16GB stick of DDR5 RAM (roughly $50 instead of $80) and 1TB SSD ($70 or $80 for a decent one, instead of $159). Say you have plenty of USB-C chargers at home so you don’t need to pay $55 for Framework’s version, and say you run Linux or ChromeOS, or you already have a Windows 11 product key, or you’ve bought your own Windows 11 key from one of those gray-market key-selling sites (as little as $10).

Now we’re talking about a PC that’s a little under $700, which is closer to “reasonable” for a brand-new touchscreen PC. But the laptop’s old CPU and poky performance also mean it’s competing with a wide swath of refurbished, used, and closeout-priced older PCs from other manufacturers.

In December, for example, I bought an SSD-less Lenovo ThinkPad L13 Yoga Gen 3 from eBay for around $300, with around a year left on its warranty. After I’d added an SSD and reinstalled Windows—no additional cost because it had a valid Windows license already—I ended up with a PC with the same screen resolution and similar specs but with a better-quality display with smaller bezels that made the screen larger without making the laptop larger; a faster GPU configuration; a backlit keyboard; and a fingerprint reader.

I know it’s not possible for everyone to just go out and buy a laptop like this. The boring black outline of a midrange ThinkPad is also the polar opposite of the Framework Laptop 12, but it’s an example of what a tech-savvy buyer can find on the secondhand market when looking for a cost-effective alternative to what Framework is offering here.

A good laptop, but not a good value

The Framework Laptop 12. Credit: Andrew Cunningham

There are plenty of factors beyond Framework’s control that contribute to the Laptop 12’s price, starting with on-again-off-again global trade wars and the uncertainty that comes with them. There’s also Framework’s status as a niche independent PC company rather than a high-volume behemoth. When you ship the number of computers that Apple does, it’s almost certainly easier to make a $999 laptop that is both premium and profitable.

But whatever the reason, I can’t escape the feeling that the Laptop 12 was meant to be cheaper than it has ended up being. The result is a computer with many of the compromises of an entry-level system, but without a matching entry-level price tag. It’s hard to put a price on some of the less-tangible benefits of a Framework laptop, like ease of repairs and the promise of future upgrades, but my gut feeling is that the Framework Laptop 13 falls on the “right” side of that line, and the Laptop 12 doesn’t.

I am charmed by the Laptop 12. It’s cute and functional, and it stands out among high-end aluminum slabs. It adds some subtle refinement to elements of the original Framework Laptop 13 design, including some things I hope end up making it into some future iteration of its design—softer corners, more color options, and an easier-to-install keyboard and trackpad. And it’s far from a bad performer for day-to-day desktop use; it’s just that the old, poky processor limits its capabilities compared to other PCs that don’t cost that much more than it does.

I probably wouldn’t recommend this over the Laptop 13 for anyone interested in what Framework is doing, unless a touchscreen is a make-or-break feature, and even then, I’d encourage people to take a good, long look at Microsoft, Lenovo, Dell, or HP’s convertible offerings first. But I hope that Framework does what it’s done for the Laptop 13 over the last four or so years: introduce updated components, iterate on different elements of the design, and gradually bring the price down into a more reasonable range through refurbished and factory-second parts. As a $1,000-ish computer, this leaves a lot to be desired. But as the foundation for a new Framework platform, it has enough promise to be interesting.

The good

  • Eye-catching, colorful, friendly design that stands out among metal slabs.
  • Simple to build, repair, and upgrade.
  • Dual-plastic design over a metal frame is good for durability.
  • First convertible touchscreen in a Framework laptop.
  • Customizable ports.
  • Decent performance for everyday computing.
  • Respectable battery life.

The bad

  • Old, slow chip isn’t really suitable for light gaming or heavy productivity work that the larger Framework Laptop 13 can do.
  • Pre-built laptop only comes in boring black.
  • Mediocre colors and large bezels spoil the screen.
  • Keyboard sometimes felt like it was missing keystrokes until I had adjusted to compensate.

The ugly

  • It’s just too expensive for what it is. It looks and feels like a lower-cost laptop, but without a dramatically lower price than the nicer, faster Framework 13.

Photo of Andrew Cunningham

Andrew is a Senior Technology Reporter at Ars Technica, with a focus on consumer tech including computer hardware and in-depth reviews of operating systems like Windows and macOS. Andrew lives in Philadelphia and co-hosts a weekly book podcast called Overdue.

Framework Laptop 12 review: I’m excited to see what the 2nd generation looks like Read More »

the-macbook-air-is-the-obvious-loser-as-the-sun-sets-on-the-intel-mac-era

The MacBook Air is the obvious loser as the sun sets on the Intel Mac era


In the end, Intel Macs have mostly gotten a better deal than PowerPC Macs did.

For the last three years, we’ve engaged in some in-depth data analysis and tea-leaf reading to answer two questions about Apple’s support for older Macs that still use Intel chips.

First, was Apple providing fewer updates and fewer years of software support to Macs based on Intel chips as it worked to transition the entire lineup to its internally developed Apple Silicon? And second, how long could Intel Mac owners reasonably expect to keep getting updates?

The answer to the first question has always been “it depends, but generally yes.” And this year, we have a definitive answer to the second question: For the bare handful of Intel Macs it supports, macOS 26 Tahoe will be the final new version of the operating system to support any of Intel’s chips.

To its credit, Apple has clearly spelled this out ahead of time rather than pulling the plug on Intel Macs with no notice. The company has also said that it plans to provide security updates for those Macs for two years after Tahoe is replaced by macOS 27 next year. These Macs aren’t getting special treatment—this has been Apple’s unspoken, unwritten policy for macOS security updates for decades now—but setting aside its usual “we don’t comment on our future plans” stance to give people a couple of years of predictability is something we’ve been pushing Apple to do for a long time.

With none of the tea leaf reading left to do, we can now present a fairly definitive look at how Apple has handled the entire Intel transition, compare it to how the PowerPC-to-Intel switch went two decades ago, and predict what it might mean about support for Apple Silicon Macs.

The data

We’ve assembled an epoch-spanning spreadsheet of every PowerPC or Intel Mac Apple has released since the original iMac kicked off the modern era of Apple back in 1998. On that list, we’ve recorded the introduction date for each Mac, the discontinuation date (when it was either replaced or taken off the market), the version of macOS it shipped with, and the final version of macOS it officially supported.

For those macOS versions, we’ve recorded the dates they received their last major point update—these are the feature-adding updates these releases get while they’re Apple’s latest and greatest version of macOS, as macOS 15 Sequoia is right now. Apple releases security-only patches and Safari browser updates for old macOS versions for another two years after replacing them, so we’ve also recorded the dates that those Macs would have received their final security update. For Intel Macs that are still receiving updates (versions 13, 14, and 15) and macOS 26 Tahoe, we’ve extrapolated end-of-support dates based on Apple’s past practices.

A 27-inch iMac model. It’s still the only Intel Mac without a true Apple Silicon replacement. Credit: Andrew Cunningham

We’re primarily focusing on two time spans: from the date of each Mac’s introduction to the date it stopped receiving major macOS updates, and from the date of each Mac’s introduction to the date it stopped receiving any updates at all. We consider any Macs inside either of these spans to be actively supported; Macs that are no longer receiving regular updates from Apple will gradually become less secure and less compatible with modern apps as time passes. We measure by years of support rather than number of releases, which controls for Apple’s transition to a once-yearly release schedule for macOS back in the early 2010s.
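
For the curious, the span arithmetic here is simple date subtraction. A minimal sketch follows; the dates shown are illustrative placeholders, not values from our spreadsheet.

```python
# Support-window arithmetic, as described above: years from a Mac's
# introduction to its final major update. Dates are illustrative only.
from datetime import date

def years_between(start: date, end: date) -> float:
    return (end - start).days / 365.25

introduced = date(2017, 12, 14)        # hypothetical introduction date
last_major_update = date(2025, 9, 15)  # hypothetical final major update
print(f"{years_between(introduced, last_major_update):.1f} years of updates")
```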

We’ve also tracked the time between each Mac model’s discontinuation and when it stopped receiving updates. This is how Apple determines which products go on its “vintage” and “obsolete” hardware lists, which determine the level of hardware support and the kinds of repairs that the company will provide.

We have lots of detailed charts, but here are some highlights:

  • For all Mac models tracked, the average Mac receives about 6.6 years of macOS updates that add new features, plus another two years of security-only updates.
  • If you only count the Intel era, the average is around seven years of macOS updates, plus two years of security-only patches.
  • Most (though not all) Macs released since 2016 come in lower than either of these averages, indicating that Apple has been less generous to most Intel Macs since the Apple Silicon transition began.
  • The three longest-lived Macs are still the mid-2007 15- and 17-inch MacBook Pros, the mid-2010 Mac Pro, and the mid-2007 iMac, which received new macOS updates for around nine years after their introduction (and security updates for around 11 years).
  • The shortest-lived Mac is still the late-2008 version of the white MacBook, which received only 2.7 years of new macOS updates and another 3.3 years of security updates from the time it was introduced. (Late PowerPC-era and early Intel-era Macs are all pretty bad by modern standards.)

The charts

If you bought a Mac any time between 2016 and 2020, you’re generally settling for fewer years of software updates than you would have gotten in the recent past. If you bought a Mac released in 2020, the tail end of the Intel era when Apple Silicon Macs were around the corner, your reward is the shortest software support window since 2006.

There are outliers in either direction. The sole iMac Pro, introduced in 2017 as Apple tried to regain some of its lost credibility with professional users, will end up with 7.75 years of updates plus another two years of security updates when all is said and done. Buyers of 2018–2020 MacBook Airs and the two-port version of the 2020 13-inch MacBook Pro, however, are treated pretty poorly, getting not quite 5.5 years of updates (plus two years of security patches) on average from the date they were introduced.

That said, most Macs usually end up getting a little over six years of macOS updates and two more years of security updates. If that’s a year or two lower than the recent past, it’s also not ridiculously far from the historical average.

If there’s something to praise here, it’s that Apple doesn’t seem to treat any of its Macs differently based on how much they cost. Now that we have a complete overview of the Intel era, breaking out the support timelines by model rather than by model year shows that a Mac mini doesn’t get dramatically more or less support than an iMac or a Mac Pro, despite costing a fraction of the price. A MacBook Air doesn’t receive significantly more or less support than a MacBook Pro.

These are just averages, and some models are lucky while others are not. The no-adjective MacBook that Apple has sold on and off since 2006 is also an outlier, with fewer years of support on average than the other Macs.

If there’s one overarching takeaway, it’s that you should buy new Macs as close to the date of their introduction as possible if you want to maximize your software support window. Especially for Macs that were sold continuously for years and years—the 2013 and 2019 Mac Pro, the 2018 Mac mini, the non-Retina 2015 MacBook Air that Apple sold some version of for over four years—buying them toward the end of their retail lifecycle means settling for fewer years of updates than you would have gotten if you had waited for the introduction of a new model. And that’s true even though Apple’s hardware support timelines are all calculated from the date of last availability rather than the date of introduction.

It just puts Mac buyers in a bad spot when Apple isn’t prompt with hardware updates, forcing people to either buy something that doesn’t fully suit their needs or settle for something older that will last for fewer years.

What should you do with an older Intel Mac?

The big question: If your Intel Mac is still functional but Apple is no longer supporting it, is there anything you can do to keep it both secure and functional?

All late-model Intel Macs officially support Windows 10, but that OS has its own end-of-support date looming in October 2025. Windows 11 can be installed if you bypass its system requirements; this can work well, but it requires additional fiddling whenever it’s time to install major updates. Consumer-focused Linux distributions like Ubuntu, Mint, or Pop!_OS may work, depending on your hardware, but they come with a steep learning curve for non-technical users. Google’s ChromeOS Flex may also work, but ChromeOS is more functionally limited than most other operating systems.

The OpenCore Legacy Patcher provides one possible stay of execution for Mac owners who want to stay on macOS for as long as they can. But it faces two steep uphill climbs in macOS Tahoe. First, as Apple has removed more Intel Macs from the official support list, it has removed more of the underlying code from macOS that is needed to support those Macs and other Macs with similar hardware. This leaves more for the OpenCore Legacy Patcher team to patch in from older OSes, and this kind of forward-porting can leave hardware and software partly functional or non-functional.

Second, there’s the Apple T2 to consider. The Macs with a T2 treat it as a load-bearing co-processor, responsible for crucial operating system functions such as enabling Touch ID, serving as an SSD controller, encoding and decoding videos, communicating with the webcam and built-in microphone, and other operations. But Apple has never opened the T2 up to anyone, and it remains a bit of a black box for both the OpenCore/Hackintosh community and folks who would run Linux-based operating systems like Ubuntu or ChromeOS on that hardware.

The result is that the 2018 and 2019 MacBook Airs that didn’t support macOS 15 Sequoia last year never had support for them added to the OpenCore Legacy Patcher, because the T2 chip simply won’t communicate with a system booted using OpenCore firmware. Some T2 Macs don’t have this problem. But if yours does, it’s unlikely that anyone will be able to do anything about it, and your software support will end when Apple says it does.

Does any of this mean anything for Apple Silicon Mac support?

Late-model Intel MacBook Airs have fared worse than other Macs in terms of update longevity. Credit: Valentina Palladino

It will likely be at least two or three years before we know for sure how Apple plans to treat Apple Silicon Macs. Will the company primarily look at specs and technical capabilities, as it did from the late-’90s through to the mid-2010s? Or will Apple mainly stop supporting hardware based on its age, as it has done for more recent Macs and most current iPhones and iPads?

The three models to examine for this purpose are the first ones to shift to Apple Silicon: the M1 versions of the MacBook Air, Mac mini, and 13-inch MacBook Pro, all launched in late 2020. If these Macs are dropped in, say, 2027 or 2028’s big macOS release, but other, later M1 Macs like the iMac stay supported, it means Apple is likely sticking to a somewhat arbitrary age-based model, with certain Macs cut off from software updates that they are perfectly capable of running.

But it’s our hope that all Apple Silicon Macs have a long life ahead of them. The M2, M3, and M4 have all improved on the M1’s performance and other capabilities, but the M1 Macs are far more capable than the Intel ones they supplanted, the M1 was used widely across Mac models for years, and Mac owners often pay far more for their devices than iPhone and iPad owners do. We’d love to see macOS return to the longer-tail software support it provided in the late ’00s and mid-2010s, when models could expect seven or eight all-new macOS versions and another two years of security updates afterward.

All signs point to Apple using the launch date of any given piece of hardware as the determining factor for continued software support. But that isn’t how it has always been, nor is it how it always has to be.

Photo of Andrew Cunningham

Andrew is a Senior Technology Reporter at Ars Technica, with a focus on consumer tech including computer hardware and in-depth reviews of operating systems like Windows and macOS. Andrew lives in Philadelphia and co-hosts a weekly book podcast called Overdue.

The MacBook Air is the obvious loser as the sun sets on the Intel Mac era Read More »

nintendo-switch-2:-the-ars-technica-review

Nintendo Switch 2: The Ars Technica review


Nintendo’s overdue upgrade is a strong contender, even amid competition from handheld PCs.

Maybe not the best showcase of the hardware, but squeezing 40+ years of Nintendo history into a single image was too compelling. Credit: Kyle Orland

When Nintendo launched the Switch in 2017, the sheer novelty of the new hardware brought the company a lot of renewed attention. After the market disaster of the Wii U’s homebound “second screen” tablet, Nintendo exploited advances in system-on-a-chip miniaturization to create something of a minimum viable HD-capable system that could work as both a lightweight handheld and a slightly underpowered TV-based console. That unique combination, and Nintendo’s usual selection of first-party system sellers, set the console apart from what the rest of the gaming market was offering at the time.

Eight years later, the Switch 2 launched into a transformed gaming hardware market that the original Switch played a large role in shaping, one full of portable gaming consoles that can optionally be connected to a TV. That includes full-featured handheld gaming PCs like the Steam Deck and its many imitators, but also streaming-focused Android-based gaming handhelds and retro-focused emulation machines on the cheaper end. Even Microsoft is preparing to get in on the act, streamlining the Windows gaming experience for an Asus-powered handheld gaming PC that hides the Windows desktop.

Mario is excited! Are you? Credit: Kyle Orland

Those market changes make the Switch 2 a lot less of a novelty than its predecessor. As its name implies, it is essentially a direct sequel to the original Switch hardware, with improvements to the physical hardware and internal architecture. Rather than shaking things up with a new concept, Nintendo seems to be saying, “Hey, you liked the Switch? Here’s the same thing, but moreso.”

That “moreso” will surely be enough for players who complained about the Switch’s increasingly obvious struggles to play graphically demanding games in the last few years. But in a gaming world full of capable and usable handheld PCs, a “more of the same” Switch 2 might be a bit of a tougher sell.

Joyful Joy-Cons

Let’s start with one feature that the Switch line still can boast over most of its handheld gaming competition: the removable Joy-Cons. The new magnetic slotting system for these updated controllers on the Switch 2 is a sheer joy to use, allowing for easy and quick one-handed removal as well as a surprisingly secure portable mode connection. After a week spent snapping them on and off dozens of times, I still can’t get over how great the design feels.

The new Joy-Cons also ameliorate what was probably the largest complaint about the ones on the Switch: their size. Everything from the overall footprint to the buttons and joystick has been expanded to feel much more appropriate in larger hands. The days of average adults having to awkwardly scrunch their fingers around a Switch Joy-Con in each hand can be relegated to the past, where they belong.

Holding a single Joy-Con in two hands is still not ideal, but it works in a pinch.

Like the Switch before it, the removable Joy-Cons can also be used separately, essentially offering baseline purchasers two controllers for the price of one. The added size helps make holding an individual Joy-Con horizontally in two hands much more comfortable, especially when it comes to tapping the expanded shoulder buttons on the controllers’ inner edge. But the face buttons and joystick are still a bit too cramped and oddly placed to make this a preferred way to play for long stretches.

Still, for situations where you happen to have other players around—especially young children who might not mind the smaller-than-standard size—it’s nice to have a feasible multiplayer option without needing to invest in new controllers. And the Switch 2’s seamless compatibility with your old Switch controllers (in tabletop or docked mode, at least) provides even more control flexibility and value for upgraders.

Control compromises

The main problem with the Switch 2 Joy-Cons continues to be their thinness, which is practically unchanged from the original Switch. That’s handy for keeping the overall system profile nice and trim in portable mode, but it means the Joy-Cons are missing the bulbous, rounded palm grips you see on handhelds like the Steam Deck and standard console controllers dating back to the original PlayStation.

Without this kind of grip, the thin, rounded bottom corner of the Joy-Cons ends up wedged oddly between the fleshy parts of your palm. Your free fingers, meanwhile, are either awkwardly wrapped around the edge of the loose Joy-Cons or uncomfortably perched to support the flat back of a portable system that’s a noticeable 34 percent heavier than the original Switch. And while an included Joy-Con holster helps add these rounded grips for tabletop or docked play, the “flat finger” problem is unavoidable when playing the system in portable mode.

The included grip gives your palms a comfortable place to rest when holding the Joy-Cons.

After spending a week with the Joy-Cons, I started to notice a few other compromises. Despite the added size, the face buttons are still slightly smaller than you’ll find on other controllers, meaning they can dig into the pad of your thumb when held down for extended periods. The shoulder buttons, which have also been expanded from the original Switch, still lack the increased travel and sensitivity of the analog triggers that are standard on nearly every competing controller. And the positioning of the right joystick encroaches quite close to the buttons just above it, making it easy to accidentally nudge the stick when pressing the lower B button.

Those kinds of control compromises help keep the portable Switch 2 notably smaller and lighter than most of its handheld PC competition. But they also mean my Switch 2 will probably need something like the Nyxi Hyperion Pro, which I’ve come to rely on to make portable play on the original Switch much more comfortable.

Improvements inside and out

Unlike the controllers, the screen on the Switch 2 is remarkably low on compromises. The full 1080p, 7.9-inch display supports HDR and variable refresh rates up to 120 Hz, making it a huge jump over both the original Switch and most of the screens you’ll find on competing handheld gaming PCs (or even some standard HDTVs when it comes to the maximum frame rate). While the screen lacks the deep blacks of a true OLED display, I found that the overall brightness (which reportedly peaks at about 450 nits) makes the difference hard to notice.

The bigger, brighter, sharper screen on the Switch 2 (top) is a huge improvement over the first Switch. Credit: Kyle Orland

The custom Nvidia processor inside the Switch 2 is also a welcome improvement over a Tegra processor that was already underpowered for the Switch in 2017. We’ve covered in detail how much of a difference this makes for Switch titles that have been specially upgraded to take advantage of that extra power, fixing fuzzy graphics and frame rate issues that were common on Nintendo’s previous system. It’s hard to imagine going back after seeing Tears of the Kingdom running in a silky-smooth 60 fps or enjoying the much sharper textures and resolution of portable No Man’s Sky on the Switch 2.

Link’s Awakening, Switch 1, docked. Andrew Cunningham

However, the real proof of the Switch 2’s improved power can be seen in early third-party ports like Cyberpunk 2077, Split Fiction, Hitman World of Assassination, and Street Fighter VI, which would have required significant visual downgrades to even run on the original Switch. To my eye, the visual impact of these ports is roughly comparable to what you’d get on a PS4 Pro (in handheld mode) or an Xbox Series S (in docked mode). In the medium term, that should be more than enough performance for all but the most determined pixel-counters, given the distinctly diminishing graphical returns we’re seeing from more advanced (and more expensive) hardware like the PS5 Pro.

The Switch 2 delivers a perfectly fine-looking version of Cyberpunk 2077 Credit: CD Projekt Red

The biggest compromise for all this extra power comes in the battery life department. Games like Mario Kart World or Cyberpunk 2077 can take the system from a full charge to completely drained in somewhere between 2 and 2.5 hours. This time span increases significantly for less demanding games like old-school 2D classics and can be slightly extended if you reduce the screen brightness. Still, it’s a bit grating to need to rely on an external battery pack just to play Mario Kart World for an entire cross-country flight.

Externally, the Switch 2 is full of tiny but welcome improvements, like an extra upper edge USB-C port for more convenient charging and a thin-but-sturdy U-shaped stand for tabletop play. Internally, the extremely welcome high-speed storage helps cut initial load times on games like Mario Kart 8 roughly in half (16.5 seconds on the Switch versus 8.5 seconds on the Switch 2 in our testing).

The embedded stand on the Switch 2 (right) is a massive improvement for tabletop mode play. Credit: Kyle Orland

But the 256GB of internal storage included in the Switch 2 is also laughably small, considering that individual digital games routinely require downloads of 50GB to 70GB. That’s especially true in a world where many third-party games are only available as Game Key Cards, which still require that the full game be downloaded. Most Switch 2 customers should budget $50 or more for a MicroSD Express card to add at least 256GB of additional storage.

Those Nintendo gimmicks

Despite the “more of the same” overall package, there are a few small areas where the Switch 2 does something truly new. Mouse mode is the most noticeable of these, letting you transform a Joy-Con into a PC-style mouse simply by placing it on its edges against most flat-ish surfaces. We tested this mode on surfaces ranging from a hard coffee table to a soft pillow-top mattress and this reviewer’s hairy thighs and found the mouse mode was surprisingly functional in every test. While the accuracy and precision fall off on the squishier and rounder of those tested surfaces, it’s something of a marvel that it works at all.

A bottom-up look at the awkward claw-like grip required for mouse mode. Credit: Kyle Orland

Unfortunately, the ergonomics of mouse mode still leave much to be desired. This again comes down to the thinness of the Joy-Cons, which don’t have the large, rounded palm rest you’d expect from a good PC mouse. That means getting a good sense of control in mouse mode requires hooking your thumb, ring finger, and pinky finger into a weird modified claw-like grip around the Joy-Con, a pose that becomes uncomfortable after even moderate use. A holster that lets the Joy-Con slot into a more traditional mouse shape could help with this problem; failing that, mouse mode seems destined to remain a little-used gimmick.

GameChat is the Switch 2’s other major “new” feature, letting you communicate with friends directly through the system’s built-in microphone (which works rather well even across a large and noisy living room) or an optional webcam (many standard USB cameras we tested worked just fine). It’s a welcome and simple way to connect with other players without having to resort to Discord or the bizarre external smartphone app Nintendo relied on for voice chat on the original Switch.

In most ways, it feels like GameChat is just playing catch-up to the kind of social sharing features competitors like Microsoft were already including in their consoles back in 2005. However, we appreciate GameChat’s ability to easily share a live view of your screen with friends, even if the low-frame-rate video won’t give Twitch streams a run for their money.

Those kinds of complaints can also apply to GameShare, which lets Switch 2 owners stream video of their game with a second player, allowing them to join in the game from a secondary Switch or Switch 2 console (either locally or remotely). The usability of this feature seems heavily dependent on the wireless environment in the players’ house, ranging from smooth but grainy to unplayably laggy. And the fact that GameShare only works with specially coded games is a bit annoying when Steam Remote Play offers a much more generalized remote co-op solution on PC.

The best of both worlds?

This is usually the point in a console review where I warn you that buying a console at or near launch is a poor value proposition, as you’ll never pay more for a system with fewer games. That’s not necessarily true these days. The original Switch never saw an official price drop in its eight years on the market, and price increases are becoming increasingly common for some video game hardware. If you think you’re likely to ever be in the market for a Switch 2, now might be the best time to pull the trigger.

Mario Kart World offers plenty to see and do until more must-have games come to the Switch 2 library. Credit: Nintendo

That said, there’s not all that much to do with a brand new Switch 2 unit at the moment. Mario Kart World is being positioned as the major system seller at launch, revitalizing an ultra-popular, somewhat stale series with a mixed bag of bold new ideas. Nintendo’s other first-party launch title, the $10 Switch 2 Welcome Tour, is a tedious affair that offers a few diverting minigames amid dull slideshows and quizzes full of corny PR speak.

The rest of the Switch 2’s launch library is dominated by ports of games that have been available on major non-Switch platforms for anywhere from months to years. That’s nice if the Switch has been your only game console during that time or if you’ve been looking for an excuse to play these titles in full HD on a beautiful portable screen. For many gamers, though, these warmed-over re-releases won’t be that compelling.

Other than that, there are currently only the barest handful of completely original launch titles that require the Switch 2, none of which really provide a meaningful reason to upgrade right away. For now, once you tire of Mario Kart, you’ll be stuck replaying your old Switch games (often with welcome frame rate and resolution improvements) or checking out a trio of emulated GameCube games available to Switch Online Expansion Pack subscribers (they look and play just fine).

Looking to the future, the promise of further Nintendo first-party games is, as usual, the primary draw for the company’s hardware. In the near term, games like Donkey Kong Bananza, Pokémon Legends Z-A, and Metroid Prime 4 (which will also be available on the older Switch with less wow-inducing performance) are the biggest highlights in the pipeline. Projecting a little further out, the Switch 2 will be the only way to legitimately play Mario and Zelda adventures that seem highly likely to be can’t-miss classics, given past performance.

From top: Switch 2, Steam Deck OLED, Lenovo Legion Go S. Two of these three can play your entire Steam library. One of these three can play the new Mario Kart… Credit: Kyle Orland

Nintendo aside, the Switch 2 seems well-positioned to receive capable, portable-ready ports of some of the more demanding third-party games in the foreseeable future. Already, we’ve seen Switch 2 announcements for catalog titles like Elden Ring and future releases like 007 First Light, as well as a handful of third-party exclusives like FromSoft’s vampire-filled Duskbloods.

Those are pretty good prospects for a $450 portable/TV console hybrid. But even with a bevy of ports and exclusives, it could be hard for the Switch 2’s library to compete with the tens of thousands of games available on any handheld PC worth its salt. You’ll pay a bit more for one of those portables if you’re looking for something that matches the quality of the Switch 2’s screen and processor—for the moment, at least. But the PC ecosystem’s wider software selection and ease of customization might make that investment worth it for gamers who don’t care too much about Nintendo’s first-party efforts.

If you found yourself either regularly using or regularly coveting a Switch at any point over the last eight years, the Switch 2 is an obvious and almost necessary upgrade. If you’ve resisted the siren song for this long, though, you can probably continue to ignore Nintendo’s once-novel hardware line.

Photo of Kyle Orland

Kyle Orland has been the Senior Gaming Editor at Ars Technica since 2012, writing primarily about the business, tech, and culture behind video games. He has journalism and computer science degrees from University of Maryland. He once wrote a whole book about Minesweeper.


how-to-draft-a-will-to-avoid-becoming-an-ai-ghost—it’s-not-easy

How to draft a will to avoid becoming an AI ghost—it’s not easy


Why requests for “no AI resurrections” will probably go ignored.

Proton beams capturing the ghost of OpenAI to suck it into a trap where it belongs

All right! This AI is TOAST! Credit: Aurich Lawson

As artificial intelligence has advanced, AI tools have emerged that make it easy to create digital replicas of lost loved ones, even without the knowledge or consent of the person who died.

Trained on the data of the dead, these tools, sometimes called grief bots or AI ghosts, may be text-, audio-, or even video-based. Chatting provides what some mourners feel is a close approximation to ongoing interactions with the people they love most. But the tech remains controversial, perhaps complicating the grieving process while threatening to infringe upon the privacy of the deceased, whose data could still be vulnerable to manipulation or identity theft.

Because of suspected harms and perhaps a general revulsion at the idea, not everybody wants to become an AI ghost.

After a realistic video simulation was recently used to provide a murder victim’s impact statement in court, Futurism summed up social media backlash, noting that the use of AI was “just as unsettling as you think.” And it’s not the first time people have expressed discomfort with the growing trend. Last May, The Wall Street Journal conducted a reader survey seeking opinions on the ethics of so-called AI resurrections. Responding, a California woman, Dorothy McGarrah, suggested there should be a way to prevent AI resurrections in your will.

“Having photos or videos of lost loved ones is a comfort. But the idea of an algorithm, which is as prone to generate nonsense as anything lucid, representing a deceased person’s thoughts or behaviors seems terrifying. It would be like generating digital dementia after your loved ones’ passing,” McGarrah said. “I would very much hope people have the right to preclude their images being used in this fashion after death. Perhaps something else we need to consider in estate planning?”

For experts in estate planning, the question may start to arise as more AI ghosts pop up. For now, though, writing “no AI resurrections” into a will remains a complicated process, experts suggest, and such requests may not be honored unless laws change to reinforce a culture of respecting the wishes of people who feel uncomfortable with the idea of haunting their favorite people through AI simulations.

Can you draft a will to prevent AI resurrection?

Ars contacted several law associations to find out if estate planners are seriously talking about AI ghosts. Only the National Association of Estate Planners and Councils responded; it connected Ars to Katie Sheehan, an expert in the estate planning field who serves as a managing director and wealth strategist for Crestwood Advisors.

Sheehan told Ars that very few estate planners are prepared to answer questions about AI ghosts. She said not only does the question never come up in her daily work, but it’s also “essentially uncharted territory for estate planners since AI is relatively new to the scene.”

“I have not seen any documents drafted to date taking this into consideration, and I review estate plans for clients every day, so that should be telling,” Sheehan told Ars.

Although Sheehan has yet to see a will attempting to prevent AI resurrection, she told Ars that there could be a path to make it harder for someone to create a digital replica without consent.

“You certainly could draft into a power of attorney (for use during lifetime) and a will (for use post death) preventing the fiduciary (attorney in fact or executor) from lending any of your texts, voice, image, writings, etc. to any AI tools and prevent their use for any purpose during life or after you pass away, and/or lay the ground rules for when they can and cannot be used after you pass away,” Sheehan told Ars.

“This could also invoke issues with contract, property and intellectual property rights, and right of publicity as well if AI replicas (image, voice, text, etc.) are being used without authorization,” Sheehan said.

And there are likely more protections for celebrities than for everyday people, Sheehan suggested.

“As far as I know, there is no law” preventing unauthorized non-commercial digital replicas, Sheehan said.

Widely adopted by states, the Revised Uniform Fiduciary Access to Digital Assets Act—which governs who gets access to online accounts of the deceased, like social media or email accounts—could be helpful but isn’t a perfect remedy.

That law doesn’t directly “cover someone’s AI ghost bot, though it may cover some of the digital material some may seek to use to create a ghost bot,” Sheehan said.

“Absent any law” blocking non-commercial digital replicas, Sheehan expects that people’s requests for “no AI resurrections” will likely “be dealt with in the courts and governed by the terms of one’s estate plan, if it is addressed within the estate plan.”

Those potential fights seemingly could get hairy, as “it may be some time before we get any kind of clarity or uniform law surrounding this,” Sheehan suggested.

In the future, Sheehan said, requests prohibiting digital replicas may eventually become “boilerplate language in almost every will, trust, and power of attorney,” just as instructions on digital assets are now.

As “all things AI become more and more a part of our lives,” Sheehan said, “some aspects of AI and its components may also be woven throughout the estate plan regularly.”

“But we definitely aren’t there yet,” she said. “I have had zero clients ask about this.”

Requests for “no AI resurrections” will likely be ignored

Whether loved ones would—or even should—respect requests blocking digital replicas appears to be debatable. But at least one person who built a grief bot wished he’d done more to get his dad’s permission before moving forward with his own creation.

A computer science professor at the University of Washington Bothell, Muhammad Aurangzeb Ahmad, was one of the earliest AI researchers to create a grief bot, which he built more than a decade ago after his father died. Having seen what an incredible grandfather his dad was, he wanted his future kids to be able to interact with him.

When Ahmad started his project, there was no ChatGPT or other advanced AI model to serve as the foundation, so he had to train his own model based on his dad’s data. Putting immense thought into the effort, Ahmad decided to close off the system from the rest of the Internet so that only his dad’s memories would inform the model. To prevent unauthorized chats, he kept the bot on a laptop that only his family could access.
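For a sense of what a closed-off, pre-LLM grief bot can look like under the hood, here is a minimal sketch, emphatically not Ahmad’s actual system: a word-level Markov chain that learns only from a local file of messages and never touches the network. The file name is hypothetical.

```python
import random
from collections import defaultdict

# A minimal sketch, not Ahmad's actual system: a word-level Markov chain
# "grief bot" that learns only from a local corpus and never touches the
# Internet. The corpus file name below is hypothetical.
def train(lines):
    """Map each word to the words observed to follow it."""
    model = defaultdict(list)
    for line in lines:
        words = line.split()
        for a, b in zip(words, words[1:]):
            model[a].append(b)
    return model

def reply(model, seed, max_words=30):
    """Generate a reply by random-walking the learned word transitions."""
    word, out = seed, [seed]
    for _ in range(max_words - 1):
        followers = model.get(word)
        if not followers:
            break
        word = random.choice(followers)
        out.append(word)
    return " ".join(out)

# Usage:
#   with open("dads_messages.txt") as f:   # hypothetical local file
#       bot = train(f.readlines())
#   print(reply(bot, "Remember"))
```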

Ahmad was so intent on building a digital replica that felt just like his dad that it didn’t occur to him until after his family started using the bot that he had never asked his dad whether this was what he wanted. Over time, he realized that the bot was biased toward his own view of his dad, perhaps even feeling off to his siblings, who had slightly different relationships with their father. It’s unclear whether his dad would similarly view the bot as preserving just one side of him.

Ultimately, Ahmad didn’t regret building the bot, and he told Ars he thinks his father “would have been fine with it.”

But he did regret not getting his father’s consent.

For people creating bots today, seeking consent may be appropriate if there’s any chance the bot may be publicly accessed, Ahmad suggested. He told Ars that he would never have been comfortable with the idea of his dad’s digital replica being publicly available because the question of an “accurate representation” would come even more into play, as malicious actors could potentially access it and sully his dad’s memory.

Today, anybody can use ChatGPT’s model to freely create a similar bot with their own loved one’s data. And a wide range of grief tech services have popped up online, including HereAfter AI, SeanceAI, and StoryFile, Axios noted in an October report detailing the latest ways “AI could be used to ‘resurrect’ loved ones.” As this trend continues “evolving very fast,” Ahmad told Ars that estate planning is probably the best way to communicate one’s AI ghost preferences.
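The modern version can be just a few lines. Below is a hedged sketch of the prompt-a-general-purpose-model approach the article describes; the model name and corpus file are placeholders, and this is illustrative rather than any grief-tech vendor’s actual implementation.

```python
# Hedged sketch of the "prompt a general-purpose model" approach; the model
# name and corpus file are placeholders, and this is illustrative, not any
# grief-tech vendor's actual product. Requires an OpenAI API key.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("moms_messages.txt") as f:   # hypothetical local corpus
    persona = f.read()

response = client.chat.completions.create(
    model="gpt-4o-mini",               # placeholder model choice
    messages=[
        {"role": "system",
         "content": "Respond in the voice and style of the person whose "
                    "messages follow. Messages:\n" + persona},
        {"role": "user", "content": "Hi Mom, I miss you."},
    ],
)
print(response.choices[0].message.content)
```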

But in a recently published article on “The Law of Digital Resurrection,” law professor Victoria Haneman warned that “there is no legal or regulatory landscape against which to estate plan to protect those who would avoid digital resurrection, and few privacy rights for the deceased. This is an intersection of death, technology, and privacy law that has remained relatively ignored until recently.”

Haneman agreed with Sheehan that “existing protections are likely sufficient to protect against unauthorized commercial resurrections”—like when actors or musicians are resurrected for posthumous performances. However, she thinks that for personal uses, digital resurrections may best be blocked not through estate planning but by passing a “right to deletion” that would focus on granting the living or next of kin the rights to delete the data that could be used to create the AI ghost rather than regulating the output.

A “right to deletion” could help people fight inappropriate uses of their loved ones’ data, whether AI is involved or not. After her article was published, a lawyer reached out to Haneman about a client’s deceased grandmother whose likeness was used to create a meme of her dancing in a church. The grandmother wasn’t a public figure, and the client had no idea “why or how somebody decided to resurrect her deceased grandmother,” Haneman told Ars.

Although Haneman sympathized with the client, “if it’s not being used for a commercial purpose, she really has no control over this use,” Haneman said. “And she’s deeply troubled by this.”

Haneman’s article offers a rare deep dive into the legal topic. It sensitively maps out the vague territory of digital rights of the dead and explains how those laws—or the lack thereof—interact with various laws dealing with death, from human remains to property rights.

In it, Haneman also points out that, on balance, the rights of the living typically outweigh the rights of the dead, and even specific instructions on how to handle human remains aren’t generally considered binding. Some requests, like organ donation that can benefit the living, are considered critical, Haneman noted. But there are mixed results on how courts enforce other interests of the dead—like a famous writer’s request to destroy all unpublished work or a pet lover’s insistence on having their cat or dog destroyed at death.

She told Ars that right now, “a lot of people are like, ‘Why do I care if somebody resurrects me after I’m dead?’ You know, ‘They can do what they want.’ And they think that, until they find a family member who’s been resurrected by a creepy ex-boyfriend or their dead grandmother’s resurrected, and then it becomes a different story.”

Existing law may protect “the privacy interests of the loved ones of the deceased from outrageous or harmful digital resurrections of the deceased,” Haneman noted, but in the case of the dancing grandma, her meme may not be deemed harmful, no matter how much it troubles the grandchild to see her grandma’s memory warped.

Limited legal protections may not matter so much if, culturally, communities end up developing a distaste for digital replicas, particularly if it becomes widely viewed as disrespectful to the dead, Haneman suggested. Right now, however, society is more fixated on solving other problems with deepfakes rather than clarifying the digital rights of the dead. That could be because few people have been impacted so far, or it could also reflect a broader cultural tendency to ignore death, Haneman told Ars.

“We don’t want to think about our own death, so we really kind of brush aside whether or not we care about somebody else being digitally resurrected until it’s in our face,” Haneman said.

Over time, attitudes may change, especially if the so-called “digital afterlife industry” takes off. And there is some precedent that the law could be changed to reinforce any culture shift.

“The throughline revealed by the law of the dead is that a sacred trust exists between the living and the deceased, with an emphasis upon protecting common humanity, such that data afforded no legal status (or personal data of the deceased) may nonetheless be treated with dignity and receive some basic protections,” Haneman wrote.

An alternative path to prevent AI resurrection

Preventing yourself from becoming an AI ghost seemingly now falls in a legal gray zone that policymakers may need to address.

Haneman calls for a solution that doesn’t depend on estate planning, which she warned “is a structurally inequitable and anachronistic approach that maximizes social welfare only for those who do estate planning.” More than 60 percent of Americans die without a will, often including “those without wealth,” as well as women and racial minorities who “are less likely to die with a valid estate plan in effect,” Haneman reported.

“We can do better in a technology-based world,” Haneman wrote. “Any modern framework should recognize a lack of accessibility as an obstacle to fairness and protect the rights of the most vulnerable through approaches that do not depend upon hiring an attorney and executing an estate plan.”

Rather than twist the law to “recognize postmortem privacy rights,” Haneman advocates for a path for people resistant to digital replicas that focuses on a right to delete the data that would be used to create the AI ghost.

“Put simply, the deceased may exert control over digital legacy through the right to deletion of data but may not exert broader rights over non-commercial digital resurrection through estate planning,” Haneman recommended.

Sheehan told Ars that a right to deletion would likely involve estate planners, too.

“If this is not addressed in an estate planning document and not specifically addressed in the statute (or deemed under the authority of the executor via statute), then the only way to address this would be to go to court,” Sheehan said. “Even with a right of deletion, the deceased would need to delete said data before death or authorize his executor to do so post death, which would require an estate planning document, statutory authority, or court authority.”

Haneman agreed that for many people, estate planners would still be involved, recommending that “the right to deletion would ideally, from the perspective of estate administration, provide for a term of deletion within 12 months.” That “allows the living to manage grief and open administration of the estate before having to address data management issues,” Haneman wrote, and perhaps adequately balances “the interests of society against the rights of the deceased.”

To Haneman, it’s also the better solution for the people left behind because “creating a right beyond data deletion to curtail unauthorized non-commercial digital resurrection creates unnecessary complexity that overreaches, as well as placing the interests of the deceased over those of the living.”

Future generations may be raised with AI ghosts

If the dystopia some experts paint comes true, Big Tech companies may one day profit by targeting grieving individuals to seize the data of the dead, which could be more easily abused since it’s granted fewer rights than the data of the living.

Perhaps in that future, critics suggest, people will be tempted into free trials in moments when they’re missing their loved ones most, then forced to either pay a subscription to continue accessing the bot or else perhaps be subjected to ad-based models where their chats with AI ghosts may even feature ads in the voices of the deceased.

Today, even in a world where AI ghosts aren’t yet compelling ad clicks, experts have warned that interacting with them could cause mental health harms, New Scientist reported, especially if the digital afterlife industry isn’t carefully designed. Some people may end up stuck maintaining an AI ghost if it’s left behind as a gift, and ethicists suggested that the emotional weight of that could eventually take a negative toll. While saying goodbye is hard, letting go is considered a critical part of healing during the mourning process, and AI ghosts may make that harder.

But the bots can be a helpful tool to manage grief, some experts suggest, provided that their use is limited to allow for a typical mourning process or combined with therapy from a trained professional, Al Jazeera reported. Ahmad told Ars that working on his bot has not only kept his father close to him but also helped him think more deeply about relationships and memory.

Haneman noted that people have many ways of honoring the dead. Some erect statues, and others listen to saved voicemails or watch old home movies. For some, just “smelling an old sweater” is a comfort. And creating digital replicas, as creepy as some people might find them, is not that far off from these traditions, Haneman said.

“Feeding text messages and emails into existing AI platforms such as ChatGPT and asking the AI to respond in the voice of the deceased is simply a change in degree, not in kind,” Haneman said.

For Ahmad, the decision to create a digital replica of his dad was a learning experience, and his experience suggests why any family or loved one weighing the option should think it through carefully before starting the process.

In particular, he warns families to be careful about introducing young kids to grief bots, as they may not be able to grasp that the bot is not a real person. When he initially saw his young kids growing confused about whether their grandfather was alive or not—the introduction of the bot was complicated by the early stages of the pandemic, a time when they met many relatives virtually—he decided to restrict access to the bot until they were older. For a time, the bot only came out for special events like birthdays.

He also realized that introducing the bot forced him to have conversations about life and death with his kids at ages younger than he remembered fully understanding those concepts in his own childhood.

Now, Ahmad’s kids are among the first to be raised among AI ghosts, and Ahmad continually updates his father’s digital replica to enhance the family’s experience. He is currently most excited about recent audio advancements that make it easier to add a voice element. He hopes that within the next year, he might be able to use AI to finally nail down his South Asian father’s accent, which up to now has always sounded “just off.” For others working in this space, the next frontier is realistic video or even augmented reality tools, Ahmad told Ars.

To this day, the bot retains sentimental value for Ahmad, but, as Haneman suggested, the bot was not the only way he memorialized his dad. He also created a mosaic, and while his father never saw it, either, Ahmad thinks his dad would have approved.

“He would have been very happy,” Ahmad said.

There’s no way to predict how future generations may view grief tech. But while Ahmad said he’s not sure he’d be interested in an augmented reality interaction with his dad’s digital replica, kids raised seeing AI ghosts as a natural part of their lives may not be as hesitant to embrace them or even build new features. Talking to Ars, Ahmad fondly remembered that his young daughter once saw he was feeling sad and came up with her own AI idea to help her dad feel better.

“It would be really nice if you can just take this program and we build a robot that looks like your dad, and then add it to the robot, and then you can go and hug the robot,” she said, according to her father’s memory.

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.


she-was-a-disney-star-with-platinum-records,-but-bridgit-mendler-gave-it-up-to-change-the-world

She was a Disney star with platinum records, but Bridgit Mendler gave it up to change the world


“The space industry has a ground bottleneck, and the problem is going to get worse.”

The Northwood Space team is all smiles after the first successful test of “Frankie.” Clockwise, from lower left: Shaurya Luthra, Marvin Shu, Josh Lehtonen, Thomas Row, Dan Meinzer, Griffin Cleverly, Bridgit Mendler. Credit: Shaurya Luthra

Bridgit Mendler was not in Hollywood anymore. Instead, she found herself in rural North Dakota, where the stars sparkled overhead rather than on the silver screen. And she was freezing.

When her team tumbled out of their rental cars after midnight, temperatures had already plummeted into the 40s. Howling winds carried their breath away before it could fog the air. So it was with no small sense of urgency that the group scrambled to assemble a jury-rigged antenna to talk to a spacecraft that would soon come whizzing over the horizon. A few hours later, the rosy light of dawn shone on the faces of a typically scrappy space startup: mostly male, mostly disheveled.

Then there was Mendler, the former Disney star and pop music sensation—and she was running the whole show.

Mendler followed an improbable path from the Disney Channel to North Dakota. She was among the brightest adolescent stars born in the early 1990s, along with Ariana Grande, Demi Lovato, and Selena Gomez, who gained fame as teenagers on the Disney Channel and Nickelodeon by enthralling Gen Z. During the first decade of the new millennium, before the rise of Musical.ly and then TikTok, television still dominated the attention of young children. And they were watching the Disney Channel in droves.

Like many of her fellow teenage stars, Mendler parlayed television fame into pop stardom, scoring a handful of platinum records. But in her mid-20s, Mendler left that world behind and threw herself into academia. She attended some of the country’s top universities and married an aerospace engineer. A couple of years ago, the two of them founded a company to address what they believed was a limiting factor in the space economy: transferring data from orbit.

Their company, Northwood Space, employed just six people when it deployed to North Dakota last October. But the team already had real hardware. On the windswept plain, they unpacked and assembled “Frankie,” their cobbled-together phased-array antenna, affectionately named after Mary Shelley’s masterpiece Frankenstein.

“We had the truck arrive at two o’clock in the morning,” Mendler said. “Six hours later, we were operational. We started running passes. We were able to transmit to a satellite on our first try.” The team had been up all night by then. “I guess that’s when my Celsius addiction kind of kicked in,” she said.

Guzzling energy drinks isn’t the healthiest activity, but it fits with the high-energy, frenetic rush of building a space startup. To survive without a billionaire’s backing, startups must stay lean and move quickly. And it’s not at all clear that Northwood will survive, as most space startups fail due to a lack of funding, long technology horizons, or regulatory hurdles. So within a year of seriously beginning operations, it’s notable that Northwood was already in the field, testing hardware and finding modest success.

From a technological standpoint, a space mission must usually complete three functions. A spacecraft must launch into orbit. It must deploy its solar panels, begin operations, and collect data. Finally, it must send its data back. If satellite data does not return to Earth in a timely manner, it’s worthless. This process is far more difficult than one might think—and not that many people think about it. “Ground stations,” Mendler acknowledges, are some of the most “unsexy and boring problems” in the space industry.

The 32-year-old Mendler now finds herself exactly where she wants to be. The life she has chosen—leading a startup in gritty El Segundo, California, delving into regulatory minutiae, and freezing in rural North Dakota to tackle “boring” problems—lies a world away from a seemingly glamorous life in the entertainment industry. That’s just fine with her.

“When I was growing up, I always said I wanted to be everything,” she said. “So in a certain sense, maybe I wouldn’t be surprised about where I ended up. But I would certainly be happy.”

Good Luck Charlie

Mendler may have wanted to be everything, but in her early years, what she most wanted to be was an actor. In 2001, when Mendler was eight, her parents moved across the country from Washington, DC, to the Bay Area. Her father designed fuel-efficient automobile engines, and her mother was an architect doing green design. Her mom, working from home, enrolled Mendler in an acting camp to help fill the days.

Mendler caught the bug. Although her parents were supportive of these dreams, they told her she would have to work to make it happen.

“We still had the Yellow Pages at the time, and so my little kid self was just flipping through the Yellow Pages trying to figure out how to get an agent,” she said. “And it was a long journey. Something that people outside of acting maybe don’t realize is that you encounter a shit ton of rejection. And so my introduction to acting was a ton of rejection in the entertainment industry. But I was like, ‘I’m gonna freaking figure this out.’”

After three years, Mendler began to get voice-acting roles in small films and video games. In November 2006, she appeared on television for the first time in an episode of the soap opera General Hospital. Another three years would pass before she had a real breakthrough, appearing as a recurring character on Wizards of Waverly Place, a Disney Channel show starring Selena Gomez. She played a vampire girlfriend.

Mendler starred as “Teddy” in the Disney Channel show Good Luck Charlie. Here, she’s sharing a moment with her sister, “Charlie.” Credit: Adam Taylor/Disney Channel via Getty Images

Mendler impressed enough in this role to be offered the lead in a new sitcom on Disney Channel, Good Luck Charlie, playing the older sister to a toddler named Charlie. In this role, Mendler made a video diary for Charlie, offering advice on how to be a successful teenager. The warm-hearted series ran for four years. Episodes regularly averaged more than 5 million viewers.

My two daughters were among them. They were a decade younger than Mendler, who was 18 when the first episodes aired in 2010. I would sometimes watch the show with my girls. Mendler’s character was endearing, and her advice to Charlie, I believe, helped my own younger daughters anticipate their teenage years. A decade and a half later, my kids still look up to her not just for being on television but for everything else she has accomplished.

As her star soared on the Disney Channel, Mendler moved into music. She recorded gold and platinum records, including her biggest hit, “Ready or Not,” in 2012.

Prominent childhood actors have always struggled with the transition to adulthood. Disney stars like Lindsay Lohan and Demi Lovato developed serious substance abuse problems, while others, such as Miley Cyrus and Selena Gomez, abruptly adopted new, much more mature images that contrasted sharply with their characters on children’s TV shows.

Mendler chose a different path.

Making an impact

As a pre-teen, Mendler would lie in bed at night listening to her mom working upstairs in the kitchen. They lived in a small house amid the redwoods north of Sausalito, California. When Mendler awoke some mornings, her mom would still be tapping away at her architectural designs. “That’s kind of how I viewed work,” Mendler said.

One of her favorite books as a kid was Miss Rumphius, about a woman who spread lupine seeds (also known as bluebonnets) along the coast of Maine to make the countryside more beautiful. The picture book offered an empowering message: Every person has a choice about how to make an impact on the world.

This environment shaped Mendler. She saw her mom work all night, saw experimental engines built by her dad scattered around the house, and had conversations around the dinner table about the future and how she could find her place in it. As she aged into adulthood, performing before thousands of people on stage and making TV shows and movies, Mendler felt like she was missing something. In her words, life in Los Angeles felt “anemic.” She had always liked to create things herself, and she wasn’t doing that.

“The niche that I had wedged myself into was not allowing me to have my own voice and perspective,” she said. “I wound up going down a path where I was more the vessel for other people’s creations, and I wondered what it would be like to be a little bit more in charge of my voice than I was in Hollywood.”

So Mendler channeled her inner nerd. She began to bring textbooks on game theory to the set of movies and TV shows. She took a few college courses. When a topic intrigued her, she would email an author or professor or reach out to them on Twitter.

Her interest was turbocharged when she neared her 25th birthday. Throughout the mid-2010s, Mendler continued to act and release music. One day, while filming a movie called Father of the Year in Massachusetts for Netflix, she had a day off. Her uncle took Mendler to visit the famed Media Lab at the Massachusetts Institute of Technology. This research lab brings together grad students, researchers, and entrepreneurs from various disciplines to develop technology—things like socially engaging robots and biologically inspired engineering. It was a vibrant meeting space for brilliant minds who wanted to build a better future.

“I knew right then I needed to go there,” she said. “I needed to find a way.”

But there was a problem: the Media Lab only offered graduate programs, and Mendler didn’t have an undergraduate degree. She’d only taken a handful of college courses. Officials at MIT told her that if she could build her own things, they would consider admitting her to the program. So she threw herself into learning how to code, working on starter projects in HTML, JavaScript, CSS, and Python. It worked.

In 2018, Mendler posted on Twitter that she was starting a graduate program at MIT to focus on better understanding social media. “As an entertainer, for years I struggled with social media because I felt like there was a more loving and human way to connect with fans. That is what I’m going to study,” she wrote. “Overall, I just hope that this time can be an adventure, and I have a thousand ideas I want to share with you so please stay tuned!”

That fall she did, in fact, start working on social media. Mendler was fascinated with it—Twitter in particular—and its role as the new public square. But at the Media Lab, there are all manner of interdisciplinary groups. The one right next to Mendler, for example, was focused on space.

Pop startup

In the months before she left Los Angeles for MIT, Mendler’s life changed in an important way. Through friends, she met an aerospace engineer named Griffin Cleverly. Southern California is swarming with aerospace engineers, but it’s perhaps indicative of how little the worlds of Hollywood and Hawthorne overlap that Cleverly was the first rocket scientist Mendler had ever met.

“The conversations we had were totally different,” she said. “He has so many thoughts about so many things, both in aerospace and other topics.”

They hit it off. Not long after Mendler left for the MIT Media Lab, Cleverly followed her to Massachusetts, first applying himself to different projects at the lab before taking a job working on satellites for Lockheed Martin. The two married a year later, in 2019.

By the next spring, Mendler was finishing her master’s thesis at MIT on using technology to help resolve conflicts. Then the world shut down due to the COVID-19 pandemic. She and Cleverly suddenly had a lot of time on their hands.

They retreated to a lake house owned by Mendler’s family in rural New Hampshire. The house had been in the family since just after World War II, and the couple decided to experiment with antennas to see what they could do. They would periodically mask up and drive to a Home Depot in nearby Concord for supplies. They built different kinds of antennas, including parabolic and helical designs, to see how far away they could communicate.

Mendler gave up a successful career in music and acting to earn a master’s degree at MIT.

As they experimented, Mendler and Cleverly began to think about the changing nature of the space industry. At the time, SpaceX’s Starlink constellation was just coming online to deliver broadband around the world. The company’s Falcon 9 launches were ramping up. Satellites were becoming smaller and cheaper, constellations were proliferating, and companies like K2 were seeking to mass-produce them.

Mendler and Cleverly believed that the volume of data coming down from space was about to explode—and that existing commercial networks weren’t capable of handling it all.

“The space industry has been on even-keeled growth for a long time,” Cleverly said. “But what happens when you hit that hockey stick across the industry? Launch seemed like it was getting taken care of. Mass manufacturing of satellites appeared to be coming. We saw these trends and were trying to understand how the industry was going to react to them. When we looked at the ground side, it wasn’t clear that anyone really was thinking about the ramifications there.”

As the pandemic waned, the couple resumed more normal lives. Mendler continued her studies at MIT, but she was now thoroughly hooked on space. Her husband excelled at working with technology to communicate with satellites, so Mendler focused on the non-engineering side of the space industry. “With space, so many folks focus on how complicated it is from an engineering perspective, and for good reason, because there are massive engineering problems to solve,” she said. “But these are also really operationally complex problems.”

For example, ground systems that communicate with satellites as they travel around the world operate in different jurisdictions, necessitating contracts and transactions in many countries. Issues with liability, intellectual property, insurance, and regulations abound. So Mendler decided that the next logical step after MIT was to attend law school. Because she lacked an undergraduate degree, most schools wouldn’t admit her. But Harvard University has an exception for exceptional students.

“Harvard was one of the few schools that admitted me,” she said. “I ended up going to law school because I was curious about understanding the operational aspects of working in space.”

These were insanely busy years. In 2022, when she began law school, Mendler was still conducting research at MIT. She soon got an internship at the Federal Communications Commission that gave her a broader view of the space industry from a regulatory standpoint. And in August 2022, she and Cleverly, alongside a software expert from Capella Space named Shaurya Luthra, founded Northwood Space.

So Bridgit Mendler, while studying at MIT and Harvard simultaneously, added a new title to her CV: chief executive officer.

Wizards of Waverly Space

Initially, the founders of Northwood Space did little more than study the market and write a few research papers, assessing the demand for sending data down to Earth, whether there would be customers for a new commercial network to download this data, and if affordable technology solutions could be built for this purpose. After about a year, they were convinced.

“Here’s the vision we ended up with,” Mendler said. “The space industry has a ground bottleneck, and the problem is going to get worse. So let’s build a network that can address that bottleneck and accelerate space capabilities. The best way to go about that was building capacity.”

If you’re like most people, you don’t spend much time pondering how data gets to and from space. To the extent one thinks about Starlink, it’s probably the satellite trains and personal dishes that spring to mind. But SpaceX has also had to build large ground stations around the world, known as gateways, to pipe data into space from the terrestrial Internet. Most companies lack the resources to build global gateways, so they use a shared commercial network. This has drawbacks, though.

Getting data down in a timely manner is not a trivial problem. From the earliest days of NASA through commercial operations today, operators on Earth generally do not maintain continual contact with satellites in space. For spacecraft in a polar orbit, contact might be made several times a day, with data lagging by perhaps 30 minutes, or as much as 90 minutes in some cases.

This is not great. Let’s say you want to use satellite imagery to fight wildfires. Data on the spread of a wildfire can help operators on the ground deploy resources to fight it. But for this information to be useful in real time, it must be downlinked within minutes of its collection. The existing infrastructure incurs delays that make most currently collected data non-actionable for firefighters. So the first problem Northwood wants to solve is persistence, with a network of ground stations around the world that would allow operators to continually connect with their satellites.
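To see why persistence is so hard to buy with a handful of sites, consider a toy model. The sketch below is a back-of-envelope Monte Carlo, not Northwood’s actual coverage math, and the pass counts are assumptions: it scatters each station’s daily passes at random and measures the worst wait between downlink opportunities.

```python
import random

# Toy Monte Carlo, not Northwood's model: scatter a day's downlink passes
# at random and measure the worst gap between opportunities. The pass
# count per station is a rough assumption for a mid-latitude site.
PASSES_PER_STATION_PER_DAY = 5

def mean_worst_gap_minutes(stations, trials=2000):
    """Average worst gap between passes over a 1,440-minute day."""
    total = 0.0
    for _ in range(trials):
        passes = sorted(random.uniform(0, 1440)
                        for _ in range(stations * PASSES_PER_STATION_PER_DAY))
        gaps = [b - a for a, b in zip(passes, passes[1:])]
        gaps.append(1440 - passes[-1] + passes[0])  # wrap past midnight
        total += max(gaps)
    return total / trials

for n in (1, 4, 16):
    print(f"{n:2d} station(s): worst wait ~{mean_worst_gap_minutes(n):4.0f} min")
```

Even this crude model shows the worst-case wait collapsing as stations are added, which is the basic argument for a global network.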

After persistence, the next problem faced by satellite operators is constraints on bandwidth. Satellites collect reams of data in orbit and must either process it on board or throw a lot of it away.

Mendler said that within three years, Northwood aims to build a shared network capable of linking to 500 spacecraft at a time. This may not sound like a big deal, but it’s larger than every commercially available shared ground network and the US government’s Satellite Control Network combined. And these tracking centers took decades to build. Each of Northwood’s sites, spread across six continents, is intended to download far more data than can be brought down on commercial networks today, the equivalent of streaming tens of thousands of Blu-ray discs from space concurrently.
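That comparison is easy to sanity-check. The figures in this quick calculation are assumptions for illustration, not Northwood’s published specifications:

```python
# Back-of-envelope check on the Blu-ray comparison above; these figures
# are assumptions, not Northwood's published specifications.
BLU_RAY_STREAM_MBPS = 40      # high-end Blu-ray video bitrate, in Mbps
streams = 20_000              # "tens of thousands" of concurrent streams

aggregate_gbps = streams * BLU_RAY_STREAM_MBPS / 1_000
print(f"~{aggregate_gbps:,.0f} Gbps per site")   # -> ~800 Gbps
```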

“Our job is to figure out how to most efficiently deliver those capabilities,” Mendler said. “We’re asking, how can we reliably deliver a new standard of connectivity to the industry, at a viable price point?”

With these aims in mind, Mendler and Cleverly got serious about their startup in the fall of 2023.

Frankie goes from Hollywood

Over the previous decade, SpaceX had revolutionized the rocket industry, and a second generation of private launch companies was maturing. Some, like Rocket Lab, were succeeding. Others, such as Virgin Orbit, had gone bankrupt. There were important lessons in these ashes for a space startup CEO.

Among the most critical for Mendler was keeping costs low. Virgin Orbit’s payroll had approached 700 people to support a rocket capable of generating only limited revenue; that kind of payroll growth was a ticket to insolvency. She also recognized SpaceX’s relentless push to build things in-house and rapidly prototype hardware through iterative design as key to that company’s success.

By the end of 2023, Mendler was raising the company’s initial funding, a seed round worth $6.3 million. Northwood emerged from “stealth mode” in February 2024 and set about hiring a small team. Early that summer, it began pulling together components to build Frankie, a prototype for the team’s first product—modular phased-array antennas. Northwood put Frankie together in four months.

“Our goal was to build things quickly,” Mendler said. “That’s why the first thing we did after raising our seed round was to build something and put it in the field. We wanted to show people it was real.”

Unlike a parabolic dish antenna—think a DirecTV satellite dish or the large ground-based antennas that Ellie Arroway uses in Contact—phased-array antennas are electronically steerable. Instead of needing to point directly at their target to collect a signal, phased-array antennas produce a beam of radio waves that can “point” in different directions without moving the antenna. The technology is decades old, but its use in commercial applications has been limited because it’s more difficult to work with than parabolic dishes. In theory, however, phased-array antennas should let Northwood build more capable ground stations, pulling down vastly more data within a smaller footprint. In business terms, the technology is “scalable.”
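The steering trick is, at bottom, trigonometry: delay each element’s signal so the wavefronts add up in the chosen direction. Here’s an illustrative numpy sketch with made-up parameters, not Northwood’s design, showing a 16-element array’s beam peak landing at the commanded angle with nothing moving:

```python
import numpy as np

# Illustrative sketch with made-up parameters, not Northwood's design:
# steer an N-element linear array by applying a per-element phase ramp.
N = 16                    # number of antenna elements
spacing = 0.5             # element spacing in wavelengths
steer_deg = 30.0          # commanded beam direction off boresight

theta = np.radians(np.linspace(-90, 90, 721))   # observation angles
n = np.arange(N)[:, None]

# Phase progression that makes the elements add coherently toward `steer_deg`
phase = 2 * np.pi * spacing * n * (np.sin(theta) - np.sin(np.radians(steer_deg)))
array_factor = np.abs(np.exp(1j * phase).sum(axis=0)) / N

peak = np.degrees(theta[np.argmax(array_factor)])
print(f"beam peak at {peak:.1f} degrees")   # ~30.0, with no moving parts
```

Applying several independent sets of phase weights to the same elements is what lets a single aperture form multiple beams, and thus talk to multiple satellites, at once.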

But before a technology can scale, it must work.

In late September 2024, the company’s six engineers, a business development director, and Mendler packed Frankie into a truck and sent it rolling off to the Dakotas. They soon followed, flying commercial to Denver and then into Devils Lake Regional Airport. On the first day of October, the party checked into Spirit Lake Casino.

That night, they drove out to a rural site owned by Planet Labs, nearly an hour away, that has a small network station to communicate with its Earth-imaging satellites. This site consisted of two large antennas, a small operations shed for the networking equipment, and a temporary trailer. The truck hauling Frankie arrived at 2 am local time.

The company’s antenna, “Frankie,” arrives early on October 2 and the team begins to unload it. Credit: Bridgit Mendler

Before sunrise, as the team completed setup, Mendler went into the nearest town, Maddock. The village has one main establishment, Harriman’s Restaurant & Bobcat Bar. The protean facility also serves as an opera house, community library, and meeting place. When Mendler went to the restaurant’s counter and ordered eight breakfast burritos, she attracted notice. But the locals were polite.

When she returned with breakfast, the team gathered in the small Planet Labs trailer on the windswept site. There were no lights, so they carried their portable floodlights inside. The space lacked room for chairs, so they huddled around one another in what they affectionately began referring to as the “food closet.” At least it kept them out of the wind.

The team had some success on the first morning, as Frankie communicated with a SkySat flying overhead, a Planet satellite a little larger than a mini refrigerator. First contact came at 7:34 am, and they had some additional successes throughout the day. But communication remained one-way, from the ground to space. For satellite telemetry, tracking, and command—TT&C in industry parlance—they needed to close the loop. But Frankie could not receive a clear X-band signal from space; it was coming in too weak.

“While we could command the satellite, we could not receive the acknowledgments of the command,” Mendler said.

The best satellite passes were clumped during the overnight hours. So over the next few days, the team napped in their rental cars, waiting to see if Frankie could hear satellites calling home. But as the days ticked by, they had no luck. Time was running out.

Solving their RF problems

As the Northwood engineers troubleshot the problem with low signal power, they realized that with some minor changes, they could probably boost the signal. But this would require reconfiguring and calibrating Frankie.
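Why do a few decibels matter so much? A rough link budget shows how thin the margins are. The numbers below are generic assumptions rather than Frankie’s actual parameters:

```python
import math

# Rough X-band link budget with generic assumptions, not Frankie's actual
# parameters: a few dB either way decides whether a downlink decodes.
def fspl_db(dist_km, freq_ghz):
    """Free-space path loss in dB."""
    return 20 * math.log10(dist_km) + 20 * math.log10(freq_ghz) + 92.45

eirp_dbm   = 40.0     # satellite transmit EIRP (10 dBW, assumed)
rx_gain_db = 35.0     # ground antenna receive gain (assumed)
losses_db  = 2.0      # pointing and atmospheric losses (assumed)
bw_hz      = 10e6     # receiver bandwidth (assumed)

# Thermal noise floor (kTB) plus an assumed 2 dB receiver noise figure
noise_dbm = -174 + 10 * math.log10(bw_hz) + 2.0

for slant_km in (600, 1200, 2000):
    rx_dbm = eirp_dbm + rx_gain_db - fspl_db(slant_km, 8.1) - losses_db
    print(f"{slant_km:5d} km slant range: SNR ~ {rx_dbm - noise_dbm:5.1f} dB")
```

Under these assumptions, the margin is comfortably positive for a pass high overhead but goes negative near the horizon, where slant range, and therefore path loss, is greatest—roughly the knife edge the team described.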

The team scrambled to make these changes on the afternoon of October 4, before four passes in a row that night starting at 3 am. This was one of their last, best chances to make things work. After implementing the fix, the bedraggled Northwood team ate a muted dinner at their casino hotel before heading back out to the ground station. There, they waited in nervous silence for the first pass of the night.

When the initial satellite passed overhead, the space-to-ground power finally reached the requisite level. But Northwood could not decode the message due to a coaxial cable being plugged into the wrong port.

Then they missed the second pass because an inline amplifier was mistakenly switched off.

The third satellite pass failed due to a misrouted switch in Planet’s radio-frequency equipment.

So they were down to the final pass. But this time, there were no technical snafus. The peak of the signal came in clean and, to the team’s delight, with an even higher signal-to-noise ratio than anticipated. Frankie had done it. High fives and hugs all around. The small team crashed that morning before successfully repeating the process the next day.

After that, it was time to celebrate, Dakota style. The team decamped to Harriman’s, where Mendler’s new friend Jim Walter, the proprietor, served them shots. After a while, he disappeared into the basement and returned with Bobcat Bar T-shirts he wanted them to have as mementos. Later that night, the Northwood team played blackjack at the casino and lost their money at the slot machines.

Yet in the bigger picture, they had gambled and won. Mendler wanted to build fast, to show the world that her company had technical chops. They had thrown Frankie together and rushed headlong into the rough-and-tumble countryside, plugged in the antenna, and waited to see what happened. A lot of bad things could have happened, but instead, the team hit the jackpot.

“We were able to go from the design to actually build and deploy in that four-month time period,” Mendler said. “That resulted in a lot of different customers knocking down our door and helping to shape requirements for this next version of the system that we’re going to be able to start demoing soon. So in half a year, we radically revised our product, and we will begin actually putting them out in the field and operating this year. Time is very much at the forefront of our mind.”

Can ground stations fly high?

The fundamental premise behind Northwood is that a bottleneck constrains the ability to bring down data from space and that a lean, new-space approach can disrupt the existing industry. But is this the case?

“The demand for ground-based connectivity is rising,” said Caleb Henry, director of research at Quilty Space. “And your satellites are only as effective as your gateways.”

This trend is being driven not only by the rise of satellites in general but also by higher-resolution imaging satellites like Planet’s Pelican satellites or BlackSky’s Gen-3 satellites. There has also been a corresponding increase in the volume of data from synthetic aperture radar satellites, Henry said. Recent regulatory filings, such as this one in the United Kingdom, underscore the notion that data bottlenecks persist. However, Henry said it’s not clear whether this growth in data will be linear or exponential.

The idea of switching from large, single-dish antennas to phased arrays is not new, but adoption has been slow. This is partly because there are questions about how expensive it would be to build large, capable phased-array antennas to talk to satellites hundreds of miles away—and how energy-intensive they would be to operate.

Commercial satellite operators currently have a limited number of options for communicating with the ground. A Norwegian company, Kongsberg Satellite Services (or KSAT), has the largest network of ground stations. Other players include the Swedish Space Corporation, Leaf Space in Italy, Atlas Space Operations in Michigan, and more. Some of these companies have experimented with phased-array antennas, Henry said, but no one has made the technology the backbone of its network.

By far the largest data operator in low-Earth orbit, SpaceX, chose dish-based gateways for its ground stations around the world that talk to Starlink satellites. (The individual user terminals are phased-array antennas, however.)

Like reuse in the launch industry, a switch to phased-array antennas is potentially disruptive. Large dishes can only communicate with a single satellite at a time, whereas phased-array antennas can make multiple connections. This allows an operator to pack much more power into a smaller footprint on the ground. But as with SpaceX and reuse, the existing ground station operators seem to be waiting to see if anyone else can pull it off.

“The industry just has not trusted that the level of disruption phased-array antennas can bring is worth the cost,” Henry said. “Reusability wasn’t trusted, either, because no one could do it affordably and effectively.”

So can Northwood Space do it? One of the very first investors in SpaceX, the Founders Fund, believes so. It participated in the seed round for Northwood and again in a Series A round, valued at $30 million, which closed in April.

When Mendler first approached the fund about 18 months ago, it was an easy decision, said Delian Asparouhov, a partner at the fund.

“We probably only discussed it for about 15 minutes,” Asparouhov said. “Bridgit was perfect for this. I think we met on a Tuesday and had a term sheet signed on a Thursday night. It happened that fast.”

The Founders Fund had been studying the idea for a while. Rockets, satellites, and reentry vehicles get all of the attention, but Asparouhov said there is a huge need for ground systems and that phased-array technology can unlock a future of abundant data from space. His own company, Varda Space, is only able to communicate with its spacecraft for about 35 minutes every two hours. Varda vehicles conduct autonomous manufacturing in space, and continuous data about their health and the work on board would be incredibly helpful.

“Infrastructure is not sexy,” Asparouhov said. “We needed someone who could turn that into a compelling story.”

Mendler, with her novel background, was the person. But she’s not just an eloquent spokesperson for the industry, he said. Building a company is hard, from finding facilities to navigating legal work to staffing up. Mendler appears to be acing these tasks. “Run through the LinkedIn of the team she’s recruited,” he said. “You’ll see that she’s knocked it out of the park.”

Ready or not

At Northwood, Mendler has entered a vastly different world from the entertainment industry or academia. She consults with fast-talking venture capitalists, foreign regulators, lawyers, rocket scientists, and occasionally the odd space journalist. It’s a challenging environment usually occupied by hotshot engineers—often arrogant, hard-charging men.

Mendler stands out in this setting. But her life has always been about thriving in tough environments.

Whatever happens, she has already achieved success in one important way. As an actor and singer, Mendler often felt as though she was dancing to someone else’s tune. No longer. At Northwood, she holds the microphone, but she is also a director and producer. If she fails—and let’s be honest, most new space companies do fail—it will be on her own terms.

Several weeks ago, Mendler was sitting at home, watching the movie Meet the Robinsons with her 6-year-old son. One of the main themes of the animated Disney film is that one should “keep moving forward” in life and that it’s possible to build a future that is optimistic for humanity—say, Star Trek rather than The Terminator or The Matrix.

“It shows you what the future could look like,” Mendler said of the movie. “And it gave me a little sad feeling, because it is so optimistic and beautiful. I think people can get discouraged by a dystopian outlook about what the future can look like. We need to remember we can build something positive.”

She will try to do just that.

Eric Berger is the senior space editor at Ars Technica, covering everything from astronomy to private space to NASA policy, and author of two books: Liftoff, about the rise of SpaceX; and Reentry, on the development of the Falcon 9 rocket and Dragon. A certified meteorologist, Eric lives in Houston.
