Features


The NPU in your phone keeps improving—why isn’t that making AI better?


Shrinking AI for your phone is no simple matter.


The NPU in your phone might not be doing very much. Credit: Aurich Lawson | Getty Images


Almost every technological innovation of the past several years has been laser-focused on one thing: generative AI. Many of these supposedly revolutionary systems run on big, expensive servers in a data center somewhere, but at the same time, chipmakers are crowing about the power of the neural processing units (NPUs) they have brought to consumer devices. Every few months, it’s the same thing: This new NPU is 30 or 40 percent faster than the last one. That’s supposed to let you do something important, but no one really gets around to explaining what that is.

Experts envision a future of secure, personal AI tools with on-device intelligence, but does that match the reality of the AI boom? AI on the “edge” sounds great, but almost every AI tool of consequence is running in the cloud. So what’s that chip in your phone even doing?

What is an NPU?

Companies launching a new product often get bogged down in superlatives and vague marketing speak, so they do a poor job of explaining technical details. It’s not clear to most people buying a phone why they need the hardware to run AI workloads, and the supposed benefits are largely theoretical.

Many of today’s flagship consumer processors are systems-on-a-chip (SoC) because they incorporate multiple computing elements—like CPU cores, GPUs, and imaging controllers—on a single piece of silicon. This is true of mobile parts like Qualcomm’s Snapdragon or Google’s Tensor, as well as PC components like the Intel Core Ultra.

The NPU is a newer addition to chips, but it didn’t just appear one day—there’s a lineage that brought us here. NPUs are good at what they do because they emphasize parallel computing, something that’s also important in other SoC components.

Qualcomm devotes significant time during its new product unveilings to talk about its Hexagon NPUs. Keen observers may recall that this branding has been reused from the company’s line of digital signal processors (DSPs), and there’s a good reason for that.

“Our journey into AI processing started probably 15 or 20 years ago, wherein our first anchor point was looking at signal processing,” said Vinesh Sukumar, Qualcomm’s head of AI products. DSPs are architecturally similar to NPUs but much simpler, with a focus on processing audio (e.g., speech recognition) and modem signals.




The NPU is one of multiple components in modern SoCs. Credit: Qualcomm

As the collection of technologies we refer to as “artificial intelligence” developed, engineers began using DSPs for more types of parallel processing, like long short-term memory (LSTM) networks. Sukumar explained that as the industry became enamored with convolutional neural networks (CNNs), the technology underlying applications like computer vision, DSPs became focused on matrix functions, which are essential to generative AI processing as well.

While there is an architectural lineage here, it’s not quite right to say NPUs are just fancy DSPs. “If you talk about DSPs in the general term of the word, yes, [an NPU] is a digital signal processor,” said MediaTek Assistant Vice President Mark Odani. “But it’s all come a long way and it’s a lot more optimized for parallelism, how the transformers work, and holding huge numbers of parameters for processing.”

Despite being so prominent in new chips, NPUs are not strictly necessary for running AI workloads on the “edge,” a term that differentiates local AI processing from cloud-based systems. CPUs are slower than NPUs but can handle some light workloads without using as much power. Meanwhile, GPUs can often chew through more data than an NPU, but they use more power to do it. And there are times you may want to do that, according to Qualcomm’s Sukumar. For example, running AI workloads while a game is running could favor the GPU.

“Here, your measurement of success is that you cannot drop your frame rate while maintaining the spatial resolution, the dynamic range of the pixel, and also being able to provide AI recommendations for the player within that space,” says Sukumar. “In this kind of use case, it actually makes sense to run that in the graphics engine, because then you don’t have to keep shifting between the graphics and a domain-specific AI engine like an NPU.”

Livin’ on the edge is hard

Unfortunately, the NPUs in many devices sit idle (and not just during gaming). The mix of local versus cloud AI tools favors the latter because that’s the natural habitat of LLMs. AI models are trained and fine-tuned on powerful servers, and that’s where they run best.

A server-based AI, like the full-fat versions of Gemini and ChatGPT, is not resource-constrained like a model running on your phone’s NPU. Consider the latest version of Google’s on-device Gemini Nano model, which has a context window of 32k tokens. That is a more than 2x improvement over the last version. However, the cloud-based Gemini models have context windows of up to 1 million tokens, meaning they can process much larger volumes of data.

Both cloud-based and edge AI hardware will continue getting better, but the balance may not shift in the NPU’s favor. “The cloud will always have more compute resources versus a mobile device,” said Google’s Shenaz Zack, senior product manager on the Pixel team.

“If you want the most accurate models or the most brute force models, that all has to be done in the cloud,” Odani said. “But what we’re finding is that, in a lot of the use cases where there’s just summarizing some text or you’re talking to your voice assistant, a lot of those things can fit within three billion parameters.”

Squeezing AI models onto a phone or laptop involves some compromise—for example, by reducing the parameters included in the model. Odani explained that cloud-based models run hundreds of billions of parameters, the weighting that determines how a model processes input tokens to generate outputs. You can’t run anything like that on a consumer device right now, so developers have to vastly scale back the size of models for the edge. Odani says MediaTek’s latest ninth-generation NPU can handle about 3 billion parameters—a difference of roughly two orders of magnitude.

The amount of memory available in a phone or laptop is also a limiting factor, so mobile-optimized AI models are usually quantized. That means the model’s estimation of the next token runs with less precision. Let’s say you want to run one of the larger open models, like Llama or Gemma 7B, on your device. The de facto standard is FP16, known as half-precision. At that level, a model with 7 billion parameters will lock up 13 or 14 gigabytes of memory. Stepping down to FP4, which uses a quarter of the bits per parameter, brings the size of the model in memory down to a few gigs.
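To put rough numbers on that memory math, here is a minimal back-of-the-envelope sketch. It assumes the weights are the only cost—real deployments also need room for the KV cache, activations, and runtime overhead—and it uses 2 bytes per parameter for FP16 and half a byte for FP4.

```python
# Rough weight-memory estimate: parameter count x bytes per parameter.
# Illustrative only; ignores KV cache, activations, and runtime overhead.
BYTES_PER_PARAM = {"FP16": 2.0, "FP8": 1.0, "FP4": 0.5}

def weight_footprint_gib(num_params: float, precision: str) -> float:
    """Approximate size of the model weights in GiB."""
    return num_params * BYTES_PER_PARAM[precision] / 2**30

for precision in ("FP16", "FP8", "FP4"):
    print(f"7B model at {precision}: ~{weight_footprint_gib(7e9, precision):.1f} GiB")
# FP16: ~13.0 GiB -- the "13 or 14 gigabytes" cited above
# FP4:  ~3.3 GiB  -- the "few gigs" that fits in a phone's memory budget
```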

“When you compress to, let’s say, between three and four gigabytes, it’s a sweet spot for integration into memory constrained form factors like a smartphone,” Sukumar said. “And there’s been a lot of investment in the ecosystem and at Qualcomm to look at various ways of compressing the models without losing quality.”

It’s difficult to create a generalized AI with these limitations for mobile devices, but computers—and especially smartphones—are a wellspring of data that can be pumped into models to generate supposedly helpful outputs. That’s why most edge AI is geared toward specific, narrow use cases, like analyzing screenshots or suggesting calendar appointments. Google says its latest Pixel phones run more than 100 AI models, both generative and traditional.

Even AI skeptics can recognize that the landscape is changing quickly. In the time it takes to shrink and optimize AI models for a phone or laptop, new cloud models may appear that make that work obsolete. This is also why third-party developers have been slow to utilize NPU processing in apps. They either have to plug into an existing on-device model, which involves restrictions and rapidly moving development targets, or deploy their own custom models. Neither is a great option currently.

A matter of trust

If the cloud is faster and easier, why go to the trouble of optimizing for the edge and burning more power with an NPU? Leaning on the cloud means accepting a level of dependence and trust in the people operating AI data centers that may not always be appropriate.

“We always start off with user privacy as an element,” said Qualcomm’s Sukumar. He explained that the best inference is not general in nature—it’s personalized based on the user’s interests and what’s happening in their lives. Fine-tuning models to deliver that experience calls for personal data, and it’s safer to store and process that data locally.

Even when companies say the right things about privacy in their cloud services, those assurances are far from guarantees. The helpful, friendly vibe of general chatbots also encourages people to divulge a lot of personal information, and if that assistant is running in the cloud, your data is there as well. OpenAI’s copyright fight with The New York Times could lead to millions of private chats being handed over to the publisher. The explosive growth and uncertain regulatory framework of gen AI make it hard to know what’s going to happen to your data.

“People are using a lot of these generative AI assistants like a therapist,” Odani said. “And you don’t know one day if all this stuff is going to come out on the Internet.”

Not everyone is so concerned. Zack claims Google has built “the world’s most secure cloud infrastructure,” allowing it to process data where it delivers the best results. Zack uses Video Boost and Pixel Studio as examples of this approach, noting that Google’s cloud is the only way to make these experiences fast and high-quality. The company recently announced its new Private AI Compute system, which it claims is just as safe as local AI.

Even if that’s true, the edge has other advantages—edge AI is just more reliable than a cloud service. “On-device is fast,” Odani said. “Sometimes I’m talking to ChatGPT and my Wi-Fi goes out or whatever, and it skips a beat.”

The services hosting cloud-based AI models aren’t just a single website—the Internet of today is massively interdependent, with content delivery networks, DNS providers, hosting, and other services that could degrade or shut down your favorite AI in the event of a glitch. When Cloudflare suffered a self-inflicted outage recently, ChatGPT users were annoyed to find their trusty chatbot was unavailable. Local AI features don’t have that drawback.

Cloud dominance

Everyone seems to agree that a hybrid approach is necessary to deliver truly useful AI features (assuming those exist), sending data to more powerful cloud services when necessary—Google, Apple, and every other phone maker does this. But the pursuit of a seamless experience can also obscure what’s happening with your data. More often than not, the AI features on your phone aren’t running in a secure, local way, even when the device has the hardware to do that.

Take, for example, the new OnePlus 15. This phone has Qualcomm’s brand-new Snapdragon 8 Elite Gen 5, which has an NPU that is 37 percent faster than the last one, for whatever that’s worth. Even with all that on-device AI might, OnePlus is heavily reliant on the cloud to analyze your personal data. Features like AI Writer and the AI Recorder connect to the company’s servers for processing, a system OnePlus assures us is totally safe and private.

Similarly, Motorola released a new line of foldable Razr phones over the summer that are loaded with AI features from multiple providers. These phones can summarize your notifications using AI, but you might be surprised how much of it happens in the cloud unless you read the terms and conditions. If you buy the Razr Ultra, that summarization happens on your phone. However, the cheaper models with less RAM and NPU power use cloud services to process your notifications. Again, Motorola says this system is secure, but a more secure option would have been to re-optimize the model for its cheaper phones.

Even when an OEM focuses on using the NPU hardware, the results can be lacking. Look at Google’s Daily Hub and Samsung’s Now Brief. These features are supposed to chew through all the data on your phone and generate useful recommendations and actions, but they rarely do anything aside from showing calendar events. In fact, Google has temporarily removed Daily Hub from Pixels because the feature did so little, and Google is a pioneer in local AI with Gemini Nano. Google has actually moved some parts of its mobile AI experience from local to cloud-based processing in recent months.

Those “brute force” models appear to be winning, and it doesn’t hurt that companies also get more data when you interact with their private computing cloud services.

Maybe take what you can get?

There’s plenty of interest in local AI, but so far, that hasn’t translated to an AI revolution in your pocket. Most of the AI advances we’ve seen so far depend on the ever-increasing scale of cloud systems and the generalized models that run there. Industry experts say that extensive work is happening behind the scenes to shrink AI models to work on phones and laptops, but it will take time for that to make an impact.

In the meantime, local AI processing is out there in a limited way. Google still makes use of the Tensor NPU to handle sensitive data for features like Magic Cue, and Samsung really makes the most of Qualcomm’s AI-focused chipsets. While Now Brief is of questionable utility, Samsung is cognizant of how reliance on the cloud may impact users, offering a toggle in the system settings that restricts AI processing to run only on the device. This limits the number of available AI features, and others don’t work as well, but you’ll know none of your personal data is being shared. No one else offers this option on a smartphone.




Samsung offers an easy toggle to disable cloud AI and run all workloads on-device. Credit: Ryan Whitwam

Samsung spokesperson Elise Sembach said the company’s AI efforts are grounded in enhancing experiences while maintaining user control. “The on-device processing toggle in One UI reflects this approach. It gives users the option to process AI tasks locally for faster performance, added privacy, and reliability even without a network connection,” Sembach said.

Interest in edge AI might be a good thing even if you don’t use it. Planning for this AI-rich future can encourage device makers to invest in better hardware—like more memory to run all those theoretical AI models.

“We definitely recommend our partners increase their RAM capacity,” said Sukumar. Indeed, Google, Samsung, and others have boosted memory capacity in large part to support on-device AI. Even if the cloud is winning, we’ll take the extra RAM.


Ryan Whitwam is a senior technology reporter at Ars Technica, covering the ways Google, AI, and mobile technology continue to change the world. Over his 20-year career, he’s written for Android Police, ExtremeTech, Wirecutter, NY Times, and more. He has reviewed more phones than most people will ever own. You can follow him on Bluesky, where you will see photos of his dozens of mechanical keyboards.



“Players are selfish”: Fallout 2’s Chris Avellone describes his game design philosophy


Avellone recaps his journey from learning on a TRS-80 to today.

Chris Avellone, storied game designer. Credit: Chris Avellone

Chris Avellone wants you to have a good time.

People often ask creatives—especially those in careers some dream of entering—“how did you get started?” Video game designers are no exception, and Avellone says that one of the most important keys to his success was one he learned early in his origin story.

“Players are selfish,” Avellone said, reflecting on his time designing the seminal computer roleplaying game Planescape: Torment. “The more you can make the experience all about them, the better. So Torment became that. Almost every single thing in the game is about you, the player.”

The true mark of a successful game is when players really enjoy themselves, and serving that essential egotism is one of the fundamental laws of game design.

It’s a lesson he learned long before he became an internationally renowned game designer, before Fallout 2 and Planescape: Torment were twinkles in the eyes of Avellone and his co-workers at Interplay. Avellone’s first introduction to building fictional worlds came not from the digital realm but from the analog world of pen and paper roleplaying games.

Table-top takeaways

Avellone discovered Dungeons and Dragons at the tender young age of nine, and it was a formative influence on his creative life and imagination.

“Getting exposed to the idea of Dungeons and Dragons early was a wake-up call,” he told me. “‘Oh, wow, it’s like make believe with rules!’—like putting challenges on your imagination where not everything was guaranteed to succeed, and that made it more fun. However, what I noticed is that I wasn’t usually altering the systems drastically; it was more using them as a foundation for the content.”


As is so often the case with RPG developer origin stories, it began with Dungeons & Dragons. Credit: Scott Swigart (CC BY 2.0)

At first, Avellone wasn’t interested in engineering the games and stories himself. He wanted a more passive role, but life had different ideas.

“I never started out with a desire to be the game master,” Avellone remembered. “I wanted to be one of the players, but once it became clear that nobody else in my friend circle really wanted to be a game master—to be fair, it was a lot of work—I bit the bullet and tried my hand at it. Over time, I discovered I really enjoyed helping tell an interactive story with the players.”

That revelation, that he preferred being the one crafting the world and guiding the experience, led to some early experiments away from the table as well.

“I never pursued programming for a career, which is probably to the benefit of the world and engineering everywhere,” he joked. But he did start tinkering very young, inspired by the fantasy text adventure games he played as a kid. “I wanted to construct adventure games in the vein of the Scott Adams games… so I attempted to learn basic coding on the TRS-80 in order to do so. The results were a steaming, buggy mess, but [the experience] did give insights into how games operate under the hood.”

It was a different era, however, bereft of many of the resources that aspiring young game developers have at their fingertips today.

“It being the early ’80s, there wasn’t much access to Internet forums and online training courses like today,” Avellone said. “It was mostly book learning from various programming manuals available on order or from the library. These programming attempts were always solo endeavors at fantasy-style sword and sorcery adventures, and I definitely would have benefited from a community or at least one other person of skill who I could ask questions.”

Despite all of his remarkable successes in the space, Avellone didn’t originally dream of creating video games.

“Designing computer games was something I sort of fell into,” he told me. “The idea of a game designer was an almost unheard of career at the time and wasn’t even on my radar. I wanted to write pen and paper modules, adventure and character books, and comic books. As it turned out, though, that can be a miserable way to try and make a living, so when an opportunity came to work in the computer game industry, I took it with the expectation that I’d still use my off time to pursue comics, [pen and paper] writing, etc. But like with game mastering, I found computer game design and narrative design to be fun in itself, and it ended up being the bulk of my career. I did get the opportunity to write modules and comic books later on, but writing for games became my focus, as it was akin to being a virtual game master.”

Like many of the engineers and developers of that era, toiling in their garages and quietly building the future of computing, young Chris Avellone used other creators’ work as a foundation.

“One technique I tried was dissecting existing game engines,” he recalls, “more like an adventure game framework, and then finding ways to alter the content layer to create the game. But the attempts rarely compiled without a stream of errors.”

The shine moment

Every failure was an opportunity to learn, however, and like his experiences telling collaborative stories with his friends in Dungeons and Dragons, they taught him a number of lessons that would serve him later in his career. In our interview, he returned again and again to the player-first mentality that drives his design ethos.

First and foremost, a designer needs to “understand your players and understand why they are there,” Avellone said. “What is their power fantasy?”

Beyond that, every player, whether in a video game or a tabletop roleplaying adventure, should have an opportunity to stand in the spotlight.

“That shine moment is important because it gives everyone the chance to be a hero and to make a difference,” he explained. “The best adventures are the ones where you can point to how each player was instrumental in its success because of how they designed or role-played their character.”

And players should be able to get to that moment in the way they want, not the one most convenient to you, the game master or designer.

“Not everyone plays the way you do,” Avellone said, “and your job as game master is not to dictate how they choose to play or force them into a certain game mode. If a player is a min-maxer who doesn’t care much for the story, that shouldn’t be a problem. If the player is a heavy role-player, they should have some meat for their interactions. This applies strongly to digital game design. If players want to skip dialogue and story points, that’s how they choose to play the game, and they shouldn’t be crushingly penalized for their play style. It’s not your story, it should be a shared experience between the developer and player.”

A core part of his design philosophy, this was a takeaway from pen-and-paper games that Avellone has deployed throughout his career in video games.

“The first application was Planescape: Torment,” Avellone remembered.

Working on Planescape: Torment

It was 1995. Interplay had recently acquired the Planescape license from TSR, the company behind Dungeons and Dragons (later absorbed by Wizards of the Coast). Interplay was looking for ideas for a video game adaptation and brought in Avellone for an interview. At the time, he was writing for Hero Games, a tabletop RPG publisher. Avellone was hired onto the project as a junior designer after he sold the idea of a game where death was only the beginning.

That idea—the springboard that launched a successful, decades-spanning career—originated in Avellone’s frustration with save scumming, the process of repeatedly reloading save games to achieve the best result.

“Save scumming in RPGs up to that point felt like a waste of everyone’s time,” Avellone said. “If you died, you either reloaded or you quit. If they quit, you might lose them permanently. So I felt if you removed the middleman and just automatically respawned the character in interesting places and ways, that could keep the experience seamless and keep the flow of the adventure going. This didn’t quite work, because players were so used to save scumming and would still feel they had failed in some way. I was fighting typical gaming conventions and gaming habits at that point.”

That idea of death being just another narrative element rather than a fail state is emblematic of another pillar of Avellone’s design philosophy, also drawn from pen-and-paper games: Regardless of what happens, the story must go on.

“Let the dice fall where they may,” Avellone explained. “It will result in more interesting gaming stories. This was a hard one for me initially, because I would get so locked into a certain character, NPC, or letting a PC survive, that I would fight random chance to keep my story or their arc intact. This was a mistake and a huge missed opportunity. If the players have no fear of death or annoying adversaries who never seem to die because you are fudging the dice rolls to prevent them from being killed, then it undermines much of the drama, and it undermines their eventual success.”


Avellone is known for many classics, but among hardcore RPG fans, Planescape: Torment stands particularly tall. Credit: Beamdog

After Planescape: Torment, which received nearly universal critical acclaim, Avellone continued to evolve best practices for giving players what they wanted. He eventually landed on the idea that player input could be useful even before development begins.

“I would often do pre-game interviews with different players,” he recounted, “to get a sense of where they hoped their character arc would go, how they wanted to play.”

Lessons from Fallout Van Buren

Avellone expanded that process dramatically for Fallout Van Buren, Interplay’s vision for Fallout 3. He and the team built a Fallout tabletop roleplaying game to playtest some of the systems that would be implemented in the (ultimately canceled) video game.

“For the Fallout pen-and-paper we were doing for Fallout Van Buren, for example, doing those examinations proved helpful because there were so many different character builds—including ghouls and super mutants, as well as new archetypes like Science Boy—that you wanted to make sure you were creating an experience where everyone had the chance to shine.”

Though Van Buren never saw the light of day, Avellone has said that some of the elements from that design found their way into the wildly popular Fallout: New Vegas, a project for which Avellone served as senior designer (as well as project director for much of the DLC).

Another lesson he learned at the table is that you should never honor a player’s accomplishment with a reward if you plan to immediately snatch it away.

“Don’t give, then take away,” Avellone warns. “One of the worst mistakes I made was after an excruciatingly long treasure hunt for one of the biggest hoards in the world, I took away all the unique items the characters had struggled to win at the start of the very next adventure. While I knew they would get the items back, the players didn’t, and that almost caused a mutiny.”


A screenshot from Fallout Van Buren. Credit: No Mutants Allowed

I asked Avellone if his earliest experience playing with other people’s code or sitting around rolling dice with his friends had a throughline to his work today. It was clear in his answer, and throughout our interview, that the little boy who fell in love with architecting worlds of fantasy and adventure in his imagination is still very much alive in the seasoned developer building digital worlds for players today. The core idea persists: It’s all about the players, about their connection to your story and your world.

“It still has a strong impact on my game design today,” he told me. “It’s still important to me to see the range of archetypes and builds a player can make. How to make that feel important in a unique way, and how to structure plots and interactions so you try and keep the character goals so they cater to the player’s selfishness. Instead of some outward, forced goal you place on the player… find a way to make the internal player motivation match the goals in-game, and that makes for a stronger experience.”

Avellone carries that philosophy forward into his current project. He recently signed on to help develop the inaugural project at Republic Games, the studio founded by video game writer Adam Williams, formerly of Quantic Dream. The studio is developing a dystopian fantasy game that revolves around a scrappy rebellion fighting to overthrow brutal, tyrannical oppression.

“Some discussions at Republic Games have fallen back on old RPG designs in the past,” he teased, “as some older designs seemed relevant examples for how to solve a potential arc and direction in the game… but I’ll share that story after the game comes out.”



After 40 years of adventure games, Ron Gilbert pivots to outrunning Death


Escaping from Monkey Island

Interview: Storied designer talks lost RPG, a 3D Monkey Island, “Eat the Rich” philosophy.

Gilbert, seen here circa 2017 promoting the release of point-and-click adventure throwback Thimbleweed Park. Credit: Getty Images

If you know the name Ron Gilbert, it’s probably for his decades of work on classic point-and-click adventure games like Maniac Mansion, Indiana Jones and the Last Crusade, the Monkey Island series, and Thimbleweed Park. Given that pedigree, October’s release of the Gilbert-designed Death by Scrolling—a rogue-lite action-survival pseudo-shoot-em-up—might have come as a bit of a surprise.

In an interview from his New Zealand home, though, Gilbert noted that his catalog also includes some reflex-based games—Humungous Entertainment’s Backyard Sports titles and 2010’s Deathspank, for instance. And Gilbert said his return to action-oriented game design today stemmed from his love for modern classics like Binding of Isaac, Nuclear Throne, and Dead Cells.

“I mean, I’m certainly mostly known for adventure games, and I have done other stuff, [but] it probably is a little bit of a departure for me,” he told Ars. “While I do enjoy playing narrative games as well, it’s not the only thing I enjoy, and just the idea of making one of these kind of started out as a whim.”

Gilbert’s lost RPG

After spending years focused on adventure game development with 2017’s Thimbleweed Park and then 2022’s Return to Monkey Island, Gilbert said that he was “thinking about something new” for his next game project. But the first “new” idea he pursued wasn’t Death by Scrolling, but what he told Ars was “this vision for this kind of large, open world-type RPG” in the vein of The Legend of Zelda.

After hiring an artist and designer and spending roughly a year tinkering with that idea, though, Gilbert said he eventually realized his three-person team was never going to be able to realize his grand vision. “I just [didn’t] have the money or the time to build a big open-world game like that,” he said. “You know, it’s either a passion project you spent 10 years on, or you just need a bunch of money to be able to hire people and resources.”

And Gilbert said that securing that “bunch of money” to build out a top-down action-RPG in a reasonable time frame proved harder than he expected. After pitching the project around the industry, he found that “the deals that publishers were offering were just horrible,” a problem he blames in large part on the genre he was focusing on.

“Doing a pixelated old-school Zelda thing isn’t the big, hot item, so publishers look at us, and they didn’t look at it as ‘we’re gonna make $100 million and it’s worth investing in,’” he said. “The amount of money they’re willing to put up and the deals they were offering just made absolutely no sense to me to go do this.”

While crowdfunding helped Thimbleweed Park years ago, Gilbert says Kickstarter is “basically dead these days as a way of funding games.”


For point-and-click adventure Thimbleweed Park, Gilbert got around a similar problem in part by going directly to fans of the genre, raising $600,000 of a $375,000 goal via crowdfunding. But even then, Gilbert said that private investors needed to provide half of the game’s final budget to get it over the finish line. And while Gilbert said he’d love to revisit the world of Thimbleweed Park, “I just don’t know where I’d ever get the money. It’s tougher than ever in some ways… Kickstarter is basically dead these days as a way of funding games.”

Compared to the start of his career, Gilbert said that today’s big-name publishers “are very analytics-driven. The big companies, it’s like they just have formulas that they apply to games to try to figure out how much money they could make, and I think that just in the end you end up giving a whole lot of games that look exactly the same as last year’s games, because that makes some money.

“When we were starting out, we couldn’t do that because we didn’t know what made this money, so it was, yeah, it was a lot more experimenting,” he continued. “I think that’s why I really enjoy the indie game market because it’s kind of free of a lot of that stuff that big publishers bring to it, and there’s a lot more creativity and you know, strangeness, and bizarreness.”

Run for it

After a period where Gilbert said he “was kind of getting a little down” about the failure of his action-RPG project, he thought about circling back to a funny little prototype he developed as part of a 2019 game design meet-up organized by Spry Fox’s Daniel Cook. That prototype—initially simply called “Runner”—focused on outrunning the bottom of a continually scrolling screen, picking up ammo-limited weapons to fend off enemies as you did.

While the prototype initially required players to aim at those encroaching enemies as they ran, Gilbert said that the design “felt like cognitive overload.” So he switched to an automatic aiming and firing system, an idea he says was part of the prototype long before it became popularized by games like Vampire Survivors. And while Gilbert said he enjoyed Vampire Survivors, he added that the game’s style was “a little too much ‘ADHD’ for me. I look at those games and it’s like, wow, I feel like I’m playing a slot machine at some level. The flashing and upgrades and this and that… it’s a little too much.”

The 2019 “Runner” prototype that would eventually become Death by Scrolling.

But Gilbert said his less frenetic “Runner” prototype “just turned out to be a lot of fun, and I just played it all the time… It was really fun for groups of people to play, because one person will play and other people would kind of be laughing and cheering as you, you know, escape danger in the nick of time.”

Gilbert would end up using much of the art from his scrapped RPG project to help flesh out the “Runner” prototype into what would eventually become Death by Scrolling. But even late in the game’s development, Gilbert said the game was missing a unifying theme. “There was no reason initially for why you were doing any of this. You were just running, you know?”

That issue didn’t get solved until the last six months of development, when Gilbert hit on the idea of running through a repeating purgatory and evading Death, in the form of a grim reaper that regularly emerges to mercilessly hunt you down. While you can use weapons to temporarily stun Death, there’s no way to completely stop his relentless pursuit before the end of a stage.

That grim reaper really puts the Death in Death by Scrolling.


“Because he can’t be killed and because he’s an instant kill for you, it’s a very unique thing you really kinda need to avoid,” Gilbert said. “You’re running along, getting gold, gaining gems, and then, boom, you hear that [music], and Death is on the screen, and you kind of panic for a moment until you orient yourself and figure out where he is and where he’s coming from.”

Is anyone reading this?

After spending so much of his career on slow-burn adventure games, Gilbert admitted there were special challenges to writing for an action game—especially one where the player is repeating the same basic loop over and over. “It’s a lot harder because you find very quickly that a lot of players just don’t care about your story, right? They’re there to run, they’re there to shoot stuff… You kind of watch them play, and they’re just kind of clicking by the dialogue so fast that they don’t even see it.”

Surprisingly, though, Gilbert said he’s seen that skip-the-story behavior among adventure game players, too. “Even in Thimbleweed Park and Monkey Island, people still kind of pound through the dialogue,” he said. “I think if they think they know what they need to do, they just wanna skip through the dialogue really fast.”

As a writer, Gilbert said it’s “frustrating” to see players doing the equivalent of “sitting down to watch a movie and just fast forwarding through everything except the action parts.” In the end, though, he said, a game developer has to accept that not everyone is playing for the same reasons.



Believe it or not, some players just breeze past quality dialogue like this. Credit: LucasArts

“There’s a certain percentage of people who will follow the story and enjoy it, and that’s OK,” he said. “And everyone else, if they skip the story, it’s got to be OK. You need to make sure you don’t embed things deep in the story that are critical for them to understand. It’s a little bit like really treating the story as truly optional.”

Those who do pay attention to the story in Death by Scrolling will come across what Gilbert said he hoped was a less-than-subtle critique of the capitalist system. That critique is embedded in the gameplay systems, which require you to collect more and more gold—and not just two pennies on your eyes—to pay a newly profit-focused River Styx ferryman that has been acquired by Purgatory Inc.

“It’s purgatory taken over by investment bankers,” Gilbert said of the conceit. “I think a lot of it is looking at the world today and realizing capitalism has just taken over, and it really is the thing that’s causing the most pain for people. I just wanted to really kind of drive that point in the game, in a kind of humorous, sarcastic way, that this capitalism is not good.”

While Gilbert said he’s always harbored these kinds of anti-capitalist feelings “at some level,” he said that “certainly recent events and recent things have gotten me more and more jumping on the ‘Eat the Rich’ bandwagon.” Though he didn’t detail which “recent events” drove that realization, he did say that “billionaires and all this stuff… I think are just causing more harm than good.”

Is the point-and-click adventure doomed?

Despite his history with point-and-click adventures, and the relative success of Thimbleweed Park less than 10 years ago, Gilbert says he isn’t interested in returning to the format popularized by LucasArts’ classic SCUMM Engine games. That style of “use verb on noun” gameplay is now comparable to a black-and-white silent movie, he said, and will feel similarly dated to all but a niche of aging, nostalgic players.

“You do get some younger people that do kind of enjoy those games, but I think it’s one of those things that when we’re all dead, it probably won’t be the kind of thing that survives,” he said.

Gilbert says modern games like Lorelei and the Laser Eyes show a new direction for adventure games without the point-and-click interface.


But while the point-and-click interface might be getting long in the tooth, Gilbert said he’s more optimistic about the future of adventure games in general. He points to recent titles like Blue Prince and Lorelei and the Laser Eyes as examples of how clever designers can create narrative-infused puzzles using modern techniques and interfaces. “I think games like that are kind of the future for adventure games,” he said.

If corporate owner Disney ever gave him another chance to return to the Monkey Island franchise, Gilbert said he’d like to emulate those kinds of games by having players “go around in a true 3D world, rather than as a 2D point-and-click game… I don’t really know how you would do the puzzle solving in [that] way, and so that’s very interesting to me, to be able to kind of attack that problem of doing it in a 3D world.”

After what he said was a mixed reception to the gameplay changes in Return to Monkey Island, though, Gilbert allowed that franchise fans might not be eager for an even greater departure from tradition. “Maybe Monkey Island isn’t the right game to do as an adventure game in a 3D world, because there are a lot of expectations that come with it,” he said. “I mean if I was to do that, you just ruffle even more feathers, right? There’s more people that are very attached to Monkey Island, but more in its classic sense.”

Looking over his decades-long career, though, Gilbert also noted that the skills needed to promote a new game today are very different from those he used in the 1980s. “Back then, there were a handful of print magazines, and there were a bunch of reporters, and you had sent out press releases… That’s just not the way it works today,” he said. Now, the rise of game streamers and regular YouTube game development updates has forced game makers to be good on camera, much like MTV did for a generation of musicians, Gilbert said.

“The [developers] that are successful are not necessarily the good ones, but the good ones that also present well on YouTube,” he said. “And you know, I think that’s kind of a problem, that’s a gate now… In some ways, I think it’s too bad because as a developer, you have to be a performer. And I’m not a performer, right? If I was making movies, I would be a director, not an actor.”


Kyle Orland has been the Senior Gaming Editor at Ars Technica since 2012, writing primarily about the business, tech, and culture behind video games. He has journalism and computer science degrees from University of Maryland. He once wrote a whole book about Minesweeper.



We put the new pocket-size vinyl format to the test—with mixed results


is that a record in your pocket?

It’s a fun new format, but finding a place in the market may be challenging.

A 4-inch Tiny Vinyl record. Credit: Chris Foresman


We recently looked at Tiny Vinyl, a new miniature vinyl single format developed through a collaboration between a toy industry veteran and the world’s largest vinyl record manufacturer. The 4-inch singles are pressed in a process nearly identical to the one used for standard 12-inch LPs or 7-inch singles, except everything is smaller. They have a standard-size spindle hole and play at 33⅓ RPM, and they hold up to four minutes of music per side.

Several smaller bands, like The Band Loula and Rainbow Kitten Surprise, and some industry veterans like Blake Shelton and Melissa Etheridge, have already experimented with the format. But Tiny Vinyl partnered with US retail giant Target for its big coming-out party this fall, with 44 exclusive titles launching throughout the end of this year.

Tiny Vinyl supplied a few promotional copies of releases from former America’s Got Talent finalist Grace VanderWaal, The Band Loula, country pop stars Florida Georgia Line, and jazz legends the Vince Guaraldi Trio so I could get a first-hand look at how the records actually play. I tested these titles as well as several others I picked up at retail, playing them on an Audio Technica LP-120 direct drive manual turntable connected to a Yamaha S-301 integrated amplifier and playing through a pair of vintage Klipsch kg4 speakers.

I also played them on a Crosley portable suitcase-style turntable, and for fun, I tried playing them on the miniature RSD3 turntable made for 3-inch singles to see what’s possible with a variety of hardware.

Tiny Vinyl releases cover several genres, including hip-hop, rock, country, pop, indie, and show tunes. Credit: Chris Foresman

Automatic turntables need not apply

First and foremost, I’ll note that the 4-inch diameter is essentially the same size as the label on a standard 12-inch LP. So any sort of automatic turntable won’t really work for 4-inch vinyl; most aren’t equipped to set the stylus down at anything other than the 12-inch or 7-inch position, and even if they could, the automatic return would kick in before the arm ever reached the grooves where the music starts. Some automatic turntables allow switching to a manual mode; those that don’t simply cannot play Tiny Vinyl records.

But if you have a turntable with a fully manual tonearm—including a wide array of DJ-style direct-drive turntables and audiophile belt-drive models like those from Fluance, U-turn, or Pro-ject—you’re in luck. The tonearm can be placed on these records, and they will track the grooves well.

Lining up the stylus can be a challenge with such small records, but once it’s in place, the stylus on my LP-120—a nude elliptical—tracked well. I also tried a few listens with a standard conical stylus since that’s what would be most common across a variety of low- and mid-range turntables. The elliptical stylus tracked slightly better in my experience; higher-end styli may track the extremely fine grooves even better but would probably be overkill, given that the physical limitations of the format introduce some distortion that would likely be more apparent with such gear.

While Tiny Vinyl will probably appeal most to pop music fans, I played a variety of music styles, including rock, country, dance pop, hip-hop, jazz, and even showtunes. The main sonic difference I noted when a direct comparison was available was that the Tiny Vinyl version of a track tended to sound quieter than the same track playing on a 12-inch LP at the same volume setting on the amplifier.

This Kacey Musgraves Tiny Vinyl includes songs from her album Deeper Well. Credit: Chris Foresman

It’s not unusual for different records to be mastered at different volumes; making the overall sound quieter means smaller modulations in the groove, so they can be placed closer together. This is true for any album with a side running longer than about 22 minutes, but it’s especially important to maintain the four-minute runtime on Tiny Vinyl. (This is also why the last song or two on many LP sides tend to be quieter or slower songs; it’s easier for these songs to sound good at the center of the record, where linear tracking speed decreases.)
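To illustrate that last point, here’s a rough sketch of the tracking-speed math, using ballpark groove radii that are my own assumptions rather than measurements of any particular pressing. At a constant 33⅓ RPM, the linear speed of the groove under the stylus is simply 2πr times revolutions per second, so a 4-inch record’s outermost groove already moves more slowly than the innermost groove of a 12-inch LP.

```python
import math

# Linear groove speed at constant rotation: v = 2 * pi * r * (RPM / 60).
# The groove radii below are rough assumptions, not measured values.
RPM = 100 / 3  # 33 1/3 RPM

def groove_speed_ips(radius_inches: float) -> float:
    """Linear tracking speed in inches per second at a given groove radius."""
    return 2 * math.pi * radius_inches * RPM / 60

for label, radius in [
    ("12-inch LP, outer groove (~5.75 in)", 5.75),
    ("12-inch LP, inner groove (~2.4 in)", 2.4),
    ("4-inch single, outer groove (~1.9 in)", 1.9),
]:
    print(f"{label}: ~{groove_speed_ips(radius):.1f} in/s")
# Lower linear speed means less groove length per second of music, which is
# why quieter mastering and end-of-side distortion come with the format.
```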

That said, most of the songs I listened to tended to have a slight but audible increase in distortion as the grooves approached the physical limits of alignment for the stylus. This was usually only perceptible in the last several seconds of a song, which more discerning listeners would likely find objectionable. But sound quality overall is still comparable to typical vinyl records. It won’t compare to the most exacting pressings from the likes of Mobile Fidelity Labs, for instance, but then again, the sort of audiophile who would pay for the equipment to get the most out of such records probably won’t buy Tiny Vinyl in the first place, except perhaps as a conversation piece.

I also tried playing the Tiny Vinyl records on a Crosley suitcase-style turntable since it has a manual tonearm. The model I have on hand has an Audio Technica AT3600L cartridge and stereo speakers, so it’s a bit nicer than the entry-level Cruiser models you’ll typically find at malls or department stores. But these are extremely popular first turntables for a lot of young people, so it seemed reasonable to consider how Tiny Vinyl sounds on these devices.

Unfortunately, I couldn’t play Tiny Vinyl on this turntable. Despite the manual tonearm and an option to turn off the platter’s auto-start and stop, the Crosley’s platter is designed for 7-inch and 12-inch vinyl—the Tiny Vinyl I tried wouldn’t even spin without the addition of a slipmat of some kind.

Once I got it spinning, though, the tonearm simply would not track beyond the first couple of grooves before hitting some physical limitation of its gimbal. Since many suitcase-style turntables share designs and parts, I suspect this would be a problem for most of the Crosley, Victrola, or other brands you might find at a big-box retailer.

Some releases really take advantage of the extra real estate of the gatefold jacket and printed inner sleeve. Credit: Chris Foresman

Additionally, I compared the Tiny Vinyl version of the classic track “Linus and Lucy” from A Charlie Brown Christmas with a 2012 pressing of the full album, as well as with the 2019 3-inch version using an adapter, all on the LP-120, to give readers the best comparison across formats.

Again, the LP version of the seminal soundtrack from A Charlie Brown Christmas sounded bright and noticeably louder than its 4-inch counterpart. No major surprises here. And of course, the LP includes the entire soundtrack, so if you’re a big fan of the film or the kind of contemplative, piano-based jazz that Vince Guaraldi is famous for, you’ll probably spring for the full album.

The 3-inch version of “Linus and Lucy” unsurprisingly sounds fairly comparable to the Tiny Vinyl version, with a much quieter playback at the same amplifier settings. But it also sounds a lot noisier, likely due to the differences in materials used in manufacturing.

Though 3-inch records can be played on standard turntables, as I did here, they’re designed to go hand-in-hand with one of the many Crosley RSD3 variants released in the last five years or with the Crosley Mini Cruiser turntable. If you manage to pick up an original 8ban player, you could get the original lo-fi, “noisy analog” sound that Bandai had intended as well. That’s really part of the 3-inch vinyl aesthetic.

Newer 3-inch vinyl singles are coming with a standard spindle hole, which makes them easier to play on standard turntables. It also means there are now adapters for the tiny spindle to fit these holes, so you can technically put a 4-inch single on them. But due to the design of the tonearm and its rest, the stylus won’t swing out to the edge of Tiny Vinyl; instead, you can only play starting at grooves around the 3-inch mark. It’s a little unfortunate because it would otherwise be fun to play these miniature singles on hardware that is a little more right-sized ergonomically.

Big stack of tiny records. Credit: Chris Foresman

Four-inch Tiny Vinyl singles, on the other hand, are intended to be played on standard turntables, and they do so fairly well as long as you can manually place the tonearm and it isn’t otherwise physically prevented from tracking the grooves. I didn’t expect the sound to compare to a quality 12-inch pressing, and it doesn’t. But it still sounds good. And especially if your available space is at a premium, you might consider a Tiny Vinyl with the most well-known and popular tracks from a certain album or artist (like these songs from A Charlie Brown Christmas) over a full album that may cost upward of $35.

Fun for casual listeners, not for audiophiles

Overall, Tiny Vinyl still offers much of the visceral experience of playing standard vinyl records—the cover art, the liner notes, handling the record as you place it on the turntable—just in miniature. The cost is less than a typical LP, and the weight is significantly less, so there are definite benefits for casual listeners. On the other hand, serious collectors will gravitate toward 12-inch albums and—perhaps less so—7-inch singles. Ironically, the casual listeners the format would most likely appeal to are the least likely to have the equipment to play it. That will limit Tiny Vinyl’s mass-market appeal outside of just being a cool thing to put on the shelf that technically could be played on a turntable.

The Good:

  • Small enough to easily fit in a jacket pocket or the like
  • Uses fewer resources to make and ship
  • With the gatefold jacket, printed inner sleeve, and color vinyl options, these look as cool as most full-size albums
  • Plays fine on manual turntables

The Bad:

  • Sound quality is (unsurprisingly) compromised
  • Price isn’t lower than typical 7-inch singles

The Ugly:

  • Won’t work on automatic-only turntables, like the very popular AT-LP60 series or the very popular suitcase-style turntables that are often an inexpensive “first” turntable for many



Vision Pro M5 review: It’s time for Apple to make some tough choices


A state of the union from someone who actually sort of uses the thing.

The M5 Vision Pro with the Dual Knit Band. Credit: Samuel Axon

With the recent releases of visionOS 26 and newly refreshed Vision Pro hardware, it’s an ideal time to check in on Apple’s Vision Pro headset—a device I was simultaneously amazed and disappointed by when it launched in early 2024.

I still like the Vision Pro, but I can tell it’s hanging on by a thread. Content is light, developer support is tepid, and while Apple has taken action to improve both, it’s not enough, and I’m concerned it might be too late.

When I got a Vision Pro, I used it a lot: I watched movies on planes and in hotel rooms, I walked around my house placing application windows and testing out weird new ways of working. I tried all the neat games and educational apps, and I watched all the immersive videos I could get ahold of. I even tried my hand at developing my own applications for it.

As the months went on, though, I used it less and less. The novelty wore off, and as cool as it remained, practicality beat coolness. By the time Apple sent me the newer model a couple of weeks ago, I had only put the original one on a few times in the prior couple of months. I had mostly stopped using it at home, but I still took it on trips as an entertainment device for hotel rooms now and then.

That’s not an uncommon story. You even see it in the subreddit for Vision Pro owners, which ought to be the home of the device’s most dedicated fans. Even there, people say, “This is really cool, but I have to go out of my way to keep using it.”

Perhaps it would have been easier to bake it into my day-to-day habits if developer and content creator support had been more robust, a classic chicken-and-egg problem.

After a few weeks of using the new Vision Pro hardware refresh daily, it’s clear to me that the platform needs a bigger rethink. As a fan of the device, I’m concerned it won’t get that, because all the rumors point to Apple pouring its future resources into smart glasses, which, to me, are a completely different product category.

What changed in the new model?

For many users, the most notable change here will be something you can buy separately (albeit at great expense) for the old model: A new headband that balances the device’s weight on your head better, making it more comfortable to wear for long sessions.

Dubbed the Dual Knit Band, it comes with an ingeniously simple adjustment knob that can be used to tighten or loosen either the band that goes across the back of your head (similar to the old band) or the one that wraps around the top.

It’s well-designed, and it will probably make the Vision Pro easier to use for many people who found the old model to be too uncomfortable—even though this model is slightly heavier than its predecessor.

The band fit is adjusted with this knob. You can turn it to loosen or tighten one strap, then pull it out and turn it again to adjust the other. Credit: Samuel Axon

I’m one of the lucky few who never had any discomfort problems with the Vision Pro, but I know a bunch of folks who said the pressure the device put on their foreheads was unbearable. That’s exactly what this new band remedies, so it’s nice to see.

The M5 chip offers more than just speed

Whereas the first Vision Pro had Apple’s M2 chip—which was already a little behind the times when it launched—the new one adds the M5. It’s much faster, especially for graphics-processing and machine-learning tasks. We’ve written a lot about the M5 in our articles on other Apple products if you’re interested to learn more about it.

Functionally, this means a lot of little things are a bit faster, like launching certain applications or generating a Persona avatar. I’ll be frank: I didn’t notice any difference that significantly impacted the user experience. I’m not saying I couldn’t tell it was faster sometimes. I’m just saying it wasn’t faster in a way that’s meaningful enough to change any attitudes about the device.

It’s most noticeable with games—both native mixed-reality Vision Pro titles and the iPad versions of demanding games that you can run on a virtual display on the device. In many cases, demanding 3D games look and run better. The M5 also supports more recent graphics features like ray tracing and mesh shading, though very few games actually use them, even among the iPad versions.

All this is to say that while I always welcome performance improvements, they are definitely not enough to convince an M2 Vision Pro owner to upgrade, and they won’t tip the scales for anyone who has been on the fence about buying one of these things.

The main perk of the new chip is improved efficiency, which is the driving force behind modestly increased battery life. When I first took the M2 Vision Pro on a plane, I tried watching 2021’s Dune. I made it through the movie, but just barely; the battery ran out during the closing credits. It’s not a short movie, but there are longer ones.

Now, the new headset can easily get another 30 to 60 minutes, depending on what you’re doing, which finally puts it in “watch any movie you want” territory.

Given how short battery life was in the original version, even a modest bump like that makes a big difference. That improvement, the marginally wider field of view (about 10 percent), and the new 120 Hz maximum refresh rate for passthrough are the best things about the new hardware. These are nice-to-haves, but they’re not transformational by any means.

We already knew the Vision Pro offered excellent hardware (even if it’s overkill for most users), but the platform’s appeal is really driven by software. Unfortunately, this is where things are running behind expectations.

For content, it’s quality over quantity

When the first Vision Pro launched, I was bullish about the promise of the platform—but a lot of that was contingent on a strong content cadence and third-party developer support.

And as I’ve written since, the content cadence for the first year was a disappointment. Whereas I expected weekly episodes of Apple’s Immersive Videos in the TV app, those short videos arrived with gaps of several months. There’s an enormous wealth of great immersive content outside of Apple’s walled garden, but Apple didn’t seem interested in making that easily accessible to Vision Pro owners. Third-party apps did some of that work, but they lagged behind those on other platforms.

The first-party content cadence picked up after the first year, though. Plus, Apple introduced the Spatial Gallery, a built-in app that aggregates immersive 3D photos and the like. It’s almost TikTok-like in that it lets you scroll through short-form content that leverages what makes the device unique, and it’s exactly the sort of thing that the platform so badly needed at launch.

The Spatial Gallery is sort of like a horizontally scrolling TikTok for 3D photos and video. Credit: Samuel Axon

The content that is there—whether in the TV app or the Spatial Gallery—is fantastic. It’s beautifully, professionally produced stuff that really leans on the hardware. For example, there is an autobiographical film focused on U2’s Bono that does some inventive things with the format that I had never seen or even imagined before.

Bono, of course, isn’t everybody’s favorite, but if you can stomach the film’s bloviating, it’s worth watching just with an eye to what a spatial video production can or should be.

I still think there’s significant room to grow, but the content situation is better than ever. It’s not enough to keep you entertained for hours a day, but it’s enough to make putting on the headset for a bit once a week or so worth it. That wasn’t there a year ago.

The software support situation is in a similar state.

App support is mostly frozen in the year 2024

Many of us have a suite of go-to apps that are foundational to our individual approaches to daily productivity. For me, primarily a macOS user, they are:

  • Firefox
  • Spark
  • Todoist
  • Obsidian
  • Raycast
  • Slack
  • Visual Studio Code
  • Claude
  • 1Password

As you can see, I don’t use most of Apple’s built-in apps—no Safari, no Mail, no Reminders, no Passwords, no Notes… no Spotlight, even. All that may be atypical, but it has never been a problem on macOS, nor has it been on iOS for a few years now.

Impressively, almost all of these are available on visionOS—but only because it can run iPad apps as flat, virtual windows. Firefox, Spark, Todoist, Obsidian, Slack, 1Password, and even Raycast are all available as supported iPad apps, but surprisingly, Claude isn’t, even though there is a Claude app for iPads. (ChatGPT’s iPad app works, though.) VS Code isn’t available, of course, but I wasn’t expecting it to be.

Not a single one of these applications has a true visionOS app. That’s too bad, because I can think of lots of neat things spatial computing versions could do. Imagine browsing your Obsidian graph in augmented reality! Alas, I can only dream.

You can tell the native apps from the iPad ones: The iPad ones have rectangular icons nested within circles, whereas the native apps fill the whole circle. Credit: Samuel Axon

If you’re not as much of a productivity software geek as I am and you use Apple’s built-in apps, things look a little better. Surprisingly, though, there are still a few apps you would expect to have really cool spatial computing features—like Apple Maps—that don’t. Maps, too, is just an iPad app.

Even if you set productivity aside and focus on entertainment, there are still frustrating gaps. Almost two years later, there is still no Netflix or YouTube app. There are decent-enough third-party options for YouTube, but you have to watch Netflix in a browser, which is lower-quality than in a native app and looks horrible on one of the Vision Pro’s big virtual screens.

To be clear, there is a modest trickle of interesting spatial app experiences coming in—most of them games, educational apps, or cool one-off ideas that are fun to check out for a few minutes.

All this is to say that nothing has really changed since February 2024. There was an influx of apps at launch that included a small number of show-stoppers (mostly educational apps), but the rest ranged from “basically the iPad app but with one or two throwaway tech-demo-style spatial features you won’t try more than once” to “basically the iPad app but a little more native-feeling” to “literally just the iPad app.” As far as support from popular, cross-platform apps, it’s mostly the same list today as it was then.

Its killer app is that it’s a killer monitor

Even though Apple hasn’t made a big leap forward in developer support, it has made big strides in making the Vision Pro a nifty companion to the Mac.

From the start, it has had a feature that lets you simply look at a Mac’s built-in display, tap your fingers, and launch a large, resizable virtual monitor. I have my own big, multi-monitor setup at home, but I have used the Vision Pro this way sometimes when traveling.

I had some complaints at the start, though. It could only do one monitor, and that monitor was limited to 60 Hz and a standard widescreen resolution. That’s better than just using a 14-inch MacBook Pro screen, but it’s a far cry from the sort of high-end setup a $3,500 price tag suggests. Furthermore, it didn’t allow you to switch audio between the two devices.

Thanks to both software and hardware updates, that has all changed. visionOS now supports three different monitor sizes: the standard widescreen aspect ratio, a wider one that resembles a standard ultra-wide monitor, and a gigantic, ultra-ultra-wide wrap-around display that I can assure you will leave no one wanting for desktop space. It looks great. Problem solved! Likewise, it will now transfer your Mac audio to the Vision Pro or its Bluetooth headphones automatically.

All of that works not just on the new Vision Pro, but also on the M2 model. Only the new M5 model addresses the last of my complaints: You can now achieve refresh rates higher than 60 Hz for that virtual monitor. Apple says it goes “up to 120 Hz,” but there’s no available tool for measuring exactly where it’s landing. Still, I’m happy to see any improvement here.

This is the standard width for the Mac monitor feature… Credit: Samuel Axon

Through a series of updates, Apple has turned a neat proof-of-concept feature into something that is genuinely valuable—especially for folks who like ultra-wide or multi-monitor setups but have to travel a lot (like myself) or who just don’t want to invest in the display hardware at home.

You can also play your Mac games on this monitor. I tried playing No Man’s Sky and Cyberpunk 2077 on it with a controller, and it was a fantastic experience.

This, alongside spatial video and watching movies, is the Vision Pro’s current killer app and one of the main areas where Apple has clearly put a lot of effort into improving the platform.

Stop trying to make Personas happen

Strangely, another area where Apple has invested quite a bit to make things better is in the Vision Pro’s usefulness as a communications and meetings device. Personas—the 3D avatars of yourself that you create for Zoom calls and the like—were absolutely terrible when the M2 Vision Pro came out.

There is also EyeSight, which uses your Persona to show a simulacrum of your eyes to people around you in the real world, letting them know you are aware of your surroundings and even allowing them to follow your gaze. I understand the thought behind this feature—Apple doesn’t want mixed reality to be socially isolating—but it sometimes puts your eyes in the wrong place, it’s kind of hard to see, and it honestly seems like a waste of expensive hardware.

I’m pleased to report that Personas are drastically improved, primarily via software updates. Mine now actually looks like me, and it moves more naturally, too.

I joined a FaceTime call with Apple reps where they showed me how Personas float and emote around each other, and how we could look at the same files and assets together. It was indisputably cool and way better than before, thanks to the improved Personas.

I can’t say as much for EyeSight, which looks the same. It’s hard for me to fathom that Apple has put multiple sensors and screens on this thing to support this feature.

In my view, dropping EyeSight would be the single best thing Apple could do for this headset. Most people don’t like it, and most people don’t want it, yet there is no question that its inclusion adds a not-insignificant amount to both the price and the weight, the product’s two biggest barriers to adoption.

Likewise, Personas are theoretically cool, and it is a novel and fun experience to join a FaceTime call with people and see how it works and what you could do. But it’s just that: a novel experience. Once you’ve done it, you’ll never feel the need to do it again. I can barely imagine anyone who would rather show up to a call as a Persona than take the headset off for 30 minutes to dial in on their computer.

Much of this headset is dedicated to this idea that it can be a device that connects you with others, but maintaining that priority is simply the wrong decision. Mixed reality is isolating, and Apple is treating that like a problem to be solved, but I consider that part of its appeal.

If this headset were capable of out-in-the-world AR applications, I would not feel that way, but the Vision Pro doesn’t support any application that would involve taking it outside the home into public spaces. A lot of the cool, theoretical AR uses I can think of would involve that, but still no dice here.

The metaverse (it’s telling that this is the first time I’ve typed that word in at least a year) already exists: It’s on our phones, in Instagram and TikTok and WeChat and Fortnite. It doesn’t need to be invented, and it doesn’t need a new, clever approach to finally make it take off. It has already been invented. It’s already in orbit.

Like the iPad and the Apple Watch before it, the Vision Pro needs to stop trying to be a general-purpose device and instead needs to lean into what makes it special.

In doing so, it will become a better user experience, and it will get lighter and cheaper, too. There’s real potential there. Unfortunately, Apple may not go that route if leaks and insider reports are to be believed.

There’s still a ways to go, so hopefully this isn’t a dead end

The M5 Vision Pro was the first of four planned new releases in the product line, according to generally reliable industry analyst Ming-Chi Kuo. Next up, he predicted, would be a full Vision Pro 2 release with a redesign, and a Vision Air, a cheaper, lighter alternative. Those would all precede true smart glasses many years down the road.

I liked that plan: keep the full-featured Vision Pro for folks who want the most premium mixed reality experience possible (but maybe drop EyeSight), and launch a cheaper version to compete more directly with headsets like Meta’s Quest line or the newly announced Steam Frame VR headset from Valve, along with planned competitors from Google, Samsung, and others.

True augmented reality glasses are an amazing dream, but there are serious optics and user experience problems that we’re still a ways from solving before those glasses can truly replace the smartphone, as Tim Cook once predicted they would.

All that said, it looks like that plan has been called into question. A Bloomberg report in October claimed that Apple CEO Tim Cook had told employees that the company was redirecting resources from future passthrough HMD products to accelerate work on smart glasses.

Let’s be real: It’s always going to be a once-in-a-while device, not a daily driver. For many people, that would be fine if it cost $1,000. At $3,500, it’s still a nonstarter for most consumers.

I believe there is room for this product in the marketplace. I still think it’s amazing. It’s not going to be as big as the iPhone, or probably even the iPad, but it has already found a small audience that could grow significantly if the price and weight could come down. Removing all the hardware related to Personas and EyeSight would help with that.

I hope Apple keeps working on it. When Apple released the Apple Watch, it wasn’t entirely clear what its niche would be in users’ lives. The answer (health and fitness) became crystal clear over time, and the other ambitions of the device faded away while the company began building on top of what was working best.

You see Apple doing that a little bit with the expanded Mac spatial display functionality. That can be the start of an intriguing journey. But writers have a somewhat crass phrase: “kill your darlings.” It means that you need to be clear-eyed about your work and unsentimentally cut anything that’s not working, even if you personally love it—even if it was the main thing that got you excited about starting the project in the first place.

It’s past time for Apple to start killing some darlings with the Vision Pro, but I truly hope it doesn’t go too far and kill the whole platform.


Samuel Axon is the editorial lead for tech and gaming coverage at Ars Technica. He covers AI, software development, gaming, entertainment, and mixed reality. He has been writing about gaming and technology for nearly two decades at Engadget, PC World, Mashable, Vice, Polygon, Wired, and others. He previously ran a marketing and PR agency in the gaming industry, led editorial for the TV network CBS, and worked on social media marketing strategy for Samsung Mobile at the creative agency SPCSHP. He also is an independent software and game developer for iOS, Windows, and other platforms, and he is a graduate of DePaul University, where he studied interactive media and software development.


“go-generate-a-bridge-and-jump-off-it”:-how-video-pros-are-navigating-ai

“Go generate a bridge and jump off it”: How video pros are navigating AI


I talked with nine creators about economic pressures and fan backlash.

Credit: Aurich Lawson | Getty Images

In 2016, the legendary Japanese filmmaker Hayao Miyazaki was shown a bizarre AI-generated video of a misshapen human body crawling across a floor.

Miyazaki declared himself “utterly disgusted” by the technology demo, which he considered an “insult to life itself.”

“If you really want to make creepy stuff, you can go ahead and do it,” Miyazaki said. “I would never wish to incorporate this technology into my work at all.”

Many fans interpreted Miyazaki’s remarks as rejecting AI-generated video in general. So they didn’t like it when, in October 2024, filmmaker PJ Accetturo used AI tools to create a fake trailer for a live-action version of Miyazaki’s animated classic Princess Mononoke. The trailer earned him 22 million views on X. It also earned him hundreds of insults and death threats.

“Go generate a bridge and jump off of it,” said one of the funnier retorts. Another urged Accetturo to “throw your computer in a river and beg God’s forgiveness.”

Someone tweeted that Miyazaki “should be allowed to legally hunt and kill this man for sport.”

PJ Accetturo is a director and founder of Genre AI, an AI ad agency. Credit: PJ Accetturo

The development of AI image and video generation models has been controversial, to say the least. Artists have accused AI companies of stealing their work to build tools that put people out of a job. Using AI tools openly is stigmatized in many circles, as Accetturo learned the hard way.

But as these models have improved, they have sped up workflows and afforded new opportunities for artistic expression. Artists without AI expertise might soon find themselves losing work.

Over the last few weeks, I’ve spoken to nine actors, directors, and creators about how they are navigating these tricky waters. Here’s what they told me.

Actors have emerged as a powerful force against AI. In 2023, SAG-AFTRA, the Hollywood actors’ union, had its longest-ever strike, partly to establish more protections for actors against AI replicas.

Actors have lobbied to regulate AI in their industry and beyond. One actor I talked with, Erik Passoja, has testified before the California Legislature in favor of several bills, including measures for greater protections against pornographic deepfakes. SAG-AFTRA endorsed SB 1047, an AI safety bill regulating frontier models. The union also organized against the proposed moratorium on state AI bills.

A recent flashpoint came in September, when Deadline Hollywood reported that talent agencies were interested in signing “AI actress” Tilly Norwood.

Actors weren’t happy. Emily Blunt told Variety, “This is really, really scary. Come on agencies, don’t do that.”

Natasha Lyonne, star of Russian Doll, posted on an Instagram Story: “Any talent agency that engages in this should be boycotted by all guilds. Deeply misguided & totally disturbed.”

The backlash was partly specific to Tilly Norwood—Lyonne is no AI skeptic, having cofounded an AI studio—but it also reflects a set of concerns around AI common to many in Hollywood and beyond.

Here’s how SAG-AFTRA explained its position:

Tilly Norwood is not an actor, it’s a character generated by a computer program that was trained on the work of countless professional performers — without permission or compensation. It has no life experience to draw from, no emotion and, from what we’ve seen, audiences aren’t interested in watching computer-generated content untethered from the human experience. It doesn’t solve any “problem” — it creates the problem of using stolen performances to put actors out of work, jeopardizing performer livelihoods and devaluing human artistry.

This statement reflects three broad criticisms that come up over and over in discussions of AI art:

Content theft: Most leading AI video models have been trained on broad swathes of the Internet, including images and films made by artists. In many cases, companies have not asked artists for permission to use this content, nor compensated them. Courts are still working out whether this is fair use under copyright law. But many people I talked to consider AI companies’ training efforts to be theft of artists’ work.

Job loss: If AI tools can make passable video quickly or drastically speed up editing tasks, that potentially takes jobs away from actors or film editors. While past technological advancements have also eliminated jobs—the adoption of digital cameras drastically reduced the number of people cutting physical film—AI could have an even broader impact.

Artistic quality: A lot of people told me they just didn’t think AI-generated content could ever be good art. Tess Dinerstein stars in vertical dramas—episodic programs optimized for viewing on smartphones. She told me that AI is “missing that sort of human connection that you have when you go to a movie theater and you’re sobbing your eyes out because your favorite actor is talking about their dead mom.”

The concern about theft is potentially solvable by changing how models are trained. Around the time Accetturo released the “Princess Mononoke” trailer, he called for generative AI tools to be “ethically trained on licensed datasets.”

Some companies have moved in this direction. For instance, independent filmmaker Gille Klabin told me he “feels pretty good” using Adobe products because the company trains its AI models on stock images that it pays royalties for.

But the other two issues—job losses and artistic integrity—will be harder to finesse. Many creators—and fans—believe that AI-generated content misses the fundamental point of art, which is about creating an emotional connection between creators and viewers.

But while that point is compelling in theory, the details can be tricky.

Dinerstein, the vertical drama actress, told me that she’s “not fundamentally against AI”—she admits “it provides a lot of resources to filmmakers” in specialized editing tasks—but she takes a hard stance against it on social media.

“It’s hard to ever explain gray areas on social media,” she said, and she doesn’t want to “come off as hypocritical.”

Even though she doesn’t think that AI poses a risk to her job—“people want to see what I’m up to”—she does fear people (both fans and vertical drama studios) making an AI representation of her without her permission. And she has found it easiest to just say, “You know what? Don’t involve me in AI.”

Others see it as a much broader issue. Actress Susan Spano told me it was “an issue for humans, not just actors.”

“This is a world of humans and animals,” she said. “Interaction with humans is what makes it fun. I mean, do we want a world of robots?”

It’s relatively easy for actors to take a firm stance against AI because they inherently do their work in the physical world. But things are more complicated for other Hollywood creatives, such as directors, writers, and film editors. AI tools can genuinely make them more productive, and they’re at risk of losing work if they don’t stay on the cutting edge.

So the non-actors I talked to took a range of approaches to AI. Some still reject it. Others have used the tools reluctantly and tried to keep their heads down. Still others have openly embraced the technology.

Kavan Cardoza is a director and AI filmmaker. Credit: Phantom X

Take Kavan Cardoza, for example. He worked as a music video director and photographer for close to a decade before getting his break into filmmaking with AI.

After the image model Midjourney was first released in 2022, Cardoza started playing around with image generation and later video generation. Eventually, he “started making a bunch of fake movie trailers” for existing movies and franchises. In December 2024, he made a fan film in the Batman universe that “exploded on the Internet,” before Warner Bros. took it down for copyright infringement.

Cardoza acknowledges that he re-created actors from earlier Batman movies “without their permission.” But he insists he wasn’t “trying to be malicious or whatever. It was truly just a fan film.”

Whereas Accetturo received death threats, the response to Cardoza’s fan film was quite positive.

“Every other major studio started contacting me,” Cardoza said. He set up an AI studio, Phantom X, with several of his close friends. Phantom X started by making ads (where AI video is catching on quickest), but Cardoza wanted to focus back on films.

In June, Cardoza made a short film called Echo Hunter, a blend of Blade Runner and The Matrix. Some shots look clearly AI-generated, but Cardoza used motion-capture technology from Runway to put the faces of real actors into his AI-generated world. Overall, the piece pretty much hangs together.

Cardoza wanted to work with real actors because their artistic choices can help elevate the script he’s written: “There’s a lot more levels of creativity to it.” But he needed SAG-AFTRA’s approval to make a film that blends AI techniques with the likenesses of SAG-AFTRA actors. To get it, he had to promise not to reuse the actors’ likenesses in other films.

In Cardoza’s view, AI is “giving voices to creators that otherwise never would have had the voice.”

But Cardoza isn’t wedded to AI. When an interviewer asked him whether he’d make a non-AI film if required to, he responded, “Oh, 100 percent.” Cardoza added that if he had the budget to do it now, “I’d probably still shoot it all live action.”

He acknowledged to me that there will be losers in the transition—“there’s always going to be changes”—but he compares the rise of AI with past technological developments in filmmaking, like the rise of visual effects. This created new jobs making visual effects digitally, but reduced jobs making elaborate physical sets.

Cardoza expressed interest in reducing the amount of job loss. In another interview, Cardoza said that for his film project, “we want to make sure we include as many people as possible,” not just actors, but sound designers, script editors, and other specialized roles.

But he believes that eventually, AI will get good enough to do everyone’s job. “Like I say with tech, it’s never about if, it’s just when.”

Accetturo’s entry into AI was similar. He told me that he worked for 15 years as a filmmaker, “mostly as a commercial director and former documentary director.” During the pandemic, he “raised millions” for an animated TV series, but it got caught up in development hell.

AI gave him a new chance at success. Over the summer of 2024, he started playing around with AI video tools. He realized that he was in the sweet spot to take advantage of AI: experienced enough to make something good, but not so established that he was risking his reputation. After Google released Veo 3 in May 2025, Accetturo released a fake medicine ad that went viral. His studio now produces ads for prominent companies like Oracle and Popeyes.

Accetturo says the backlash against him has subsided: “It truly is nothing compared to what it was.” And he says he’s committed to working on AI: “Everyone understands that it’s the future.”

Between the anti- and pro-AI extremes, there are a lot of editors and artists quietly using AI tools without disclosing it. Unsurprisingly, it’s difficult to find people who will speak about this on the record.

“A lot of people want plausible deniability right now,” according to Ryan Hayden, a Hollywood talent agent. “There is backlash about it.”

But if editors don’t use AI tools, they risk becoming obsolete. Hayden says that he knows a lot of people in the editing field trying to master AI because “there’s gonna be a massive cut” in the total number of editors. Those who know AI might survive.

As one comedy writer involved in an AI project told Wired, “We wanted to be at the table and not on the menu.”

Clandestine AI usage extends into the upper reaches of the industry. Hayden knows an editor who works with a major director who has directed $100 million films. “He’s already using AI, sometimes without people knowing.”

Some artists feel morally conflicted but don’t think they can effectively resist. Vinny Dellay, a storyboard artist who has worked on Marvel films and Super Bowl ads, released a video detailing his views on the ethics of using AI as a working artist. Dellay said that he agrees that “AI being trained off of art found on the Internet without getting permission from the artist, it may not be fair, it may not be honest.” But refusing to use AI products won’t stop their general adoption. Believing otherwise is “just being delusional.”

Instead, Dellay said that the right course is to “adapt like cockroaches after a nuclear war.” If they’re lucky, using AI in storyboarding workflows might even “let a storyboard artist pump out twice the boards in half the time without questioning all your life’s choices at 3 am.”

Gille Klabin is an independent writer, director, and visual effects artist. Credit: Gille Klabin

Gille Klabin is an indie director and filmmaker currently working on a feature called Weekend at the End of the World.

As an independent filmmaker, Klabin can’t afford to hire many people. There are many labor-intensive tasks—like making a pitch deck for his film—that he’d otherwise have to do himself. An AI tool “essentially just liberates us to get more done and have more time back in our life.”

But he’s careful to stick to his own moral lines. Any time he mentioned using an AI tool during our interview, he’d explain why he thought that was an appropriate choice. He said he was fine with AI use “as long as you’re using it ethically in the sense that you’re not copying somebody’s work and using it for your own.”

Drawing these lines can be difficult, however. Hayden, the talent agent, told me that as AI tools make low-budget films look better, it gets harder to make high-budget films, which employ the most people at the highest wage levels.

If anything, Klabin’s AI uptake is limited more by the current capabilities of AI models. Klabin is an experienced visual effects artist, and he finds AI products to generally be “not really good enough to be used in a final project.”

He gave me a concrete example. Rotoscoping is a process in which you trace out the subject of the shot so you can edit the background independently. It’s very labor-intensive—one has to edit every frame individually—so Klabin has tried using Runway’s AI-driven rotoscoping. While it can make for a decent first pass, the result is just too messy to use as a final project.

Klabin sent me this GIF of a series of rotoscoped frames from his upcoming movie. While the model does a decent job of identifying the people in the frame, its boundaries aren’t consistent from frame to frame. The result is noisy.

Current AI tools are full of these small glitches, so Klabin only uses them for tasks that audiences don’t see (like creating a movie pitch deck) or in contexts where he can clean up the result afterward.

Stephen Robles reviews Apple products on YouTube and other platforms. He uses AI in some parts of the editing process, such as removing silences or transcribing audio, but doesn’t see it as disruptive to his career.

Stephen Robles is a YouTuber, podcaster, and creator covering tech, particularly Apple. Credit: Stephen Robles

“I am betting on the audience wanting to trust creators, wanting to see authenticity,” he told me. AI video tools don’t really help him with that and can’t replace the reputation he’s sought to build.

Recently, he experimented with using ChatGPT to edit a video thumbnail (the image used to advertise a video). He got a couple of negative reactions about his use of AI, so he said he “might slow down a little bit” with that experimentation.

Robles didn’t seem as concerned about AI models stealing from creators like him. When I asked him about how he felt about Google training on his data, he told me that “YouTube provides me enough benefit that I don’t think too much about that.”

Professional thumbnail artist Antioch Hwang has a similarly pragmatic view toward using AI. Some channels he works with have audiences that are “very sensitive to AI images.” Even using “an AI upscaler to fix up the edges” can provoke strong negative reactions. For those channels, he’s “very wary” about using AI.

Antioch Hwang is a YouTube thumbnail artist. Credit: Antioch Creative

But for most channels he works for, he’s fine using AI, at least for technical tasks. “I think there’s now been a big shift in the public perception of these AI image generation tools,” he told me. “People are now welcoming them into their workflow.”

He’s still careful with his AI use, though, because he thinks that having human artistry helps in the YouTube ecosystem. “If everyone has all the [AI] tools, then how do you really stand out?” he said.

Recently, top creators have started using more rough-looking thumbnails for their videos. AI has made polished thumbnails too easy to create, so some of them are using what Hwang would call “poorly made thumbnails” to help their videos stand out.

Hwang told me something surprising: even as AI makes it easier for creators to make thumbnails themselves, business has never been better for thumbnail artists, even at the lower end. He said that demand has soared because “AI as a whole has lowered the barriers for content creation, and now there’s more creators flooding in.”

Still, Hwang doesn’t expect the good times to last forever. “I don’t see AI completely taking over for the next three-ish years. That’s my estimated timeline.”

Everyone I talked to had different answers to when—if ever—AI would meaningfully disrupt their part of the industry.

Some, like Hwang, were pessimistic. Actor Erik Passoja told me he thought the big movie studios—like Warner Bros. or Paramount—would be gone in three to five years.

But others were more optimistic. Tess Dinerstein, the vertical drama actor, said, “I don’t think that verticals are ever going to go fully AI.” Even if it becomes technologically feasible, she argued, “that just doesn’t seem to be what the people want.”

Gille Klabin, the independent filmmaker, thought there would always be a place for high-quality human films. If someone’s work is “fundamentally derivative,” then they are at risk. But he thinks the best human-created work will still stand out. “I don’t know how AI could possibly replace the borderline divine element of consciousness,” he said.

The people who were most bullish on AI were, if anything, the least optimistic about their own career prospects. “I think at a certain point it won’t matter,” Kavan Cardoza told me. “It’ll be that anyone on the planet can just type in some sentences” to generate full, high-quality videos.

This might explain why Accetturo has become something of an AI evangelist; his newsletter tries to teach other filmmakers how to adapt to the coming AI revolution.

AI “is a tsunami that is gonna wipe out everyone,” he told me. “So I’m handing out surfboards—teaching people how to surf. Do with it what you will.”

Kai Williams is a reporter for Understanding AI, a Substack newsletter founded by Ars Technica alum Timothy B. Lee. His work is supported by a Tarbell Fellowship. Subscribe to Understanding AI to get more from Tim and Kai.


stoke-space-goes-for-broke-to-solve-the-only-launch-problem-that-“moves-the-needle”

Stoke Space goes for broke to solve the only launch problem that “moves the needle”


“Does the world really need a 151st rocket company?”

Stoke Space’s full-flow staged combustion is tested in Central Washington in 2024. Credit: Stoke Space

LAUNCH COMPLEX 14, Cape Canaveral, Fla.—The platform atop the hulking steel tower offered a sweeping view of Florida’s rich, sandy coastline and brilliant blue waves beyond. Yet as captivating as the vista might be for an aspiring rocket magnate like Andy Lapsa, it also had to be a little intimidating.

To his right, at Launch Complex 13 next door, a recently returned Falcon 9 booster stood on a landing pad. SpaceX has landed more than 500 large orbital rockets. And next to SpaceX sprawled the launch site operated by Blue Origin. Its massive New Glenn rocket is also reusable, and founder Jeff Bezos has invested tens of billions of dollars into the venture.

Looking to the left, Lapsa saw a graveyard of sorts for commercial startups. Launch Complex 15 was leased to a promising startup, ABL Space, two years ago. After two failed launches, ABL Space pivoted away from commercial launch. Just beyond lies Launch Complex 16, from which Relativity Space aims to launch. The company has already burned through $4 billion in its efforts to reach orbit. Had billionaire Eric Schmidt not stepped in earlier this year, Relativity would have gone bankrupt.

Andy Lapsa may be a brainy rocket scientist, but he is not a billionaire. Far from it.

“When you start a company like this, you have no idea how far you’re going to be able to make it, you know?” he admitted.

Lapsa and another aerospace engineer, Tom Feldman, founded Stoke Space a little more than five years ago. Both had worked the better part of a decade at Blue Origin and decided they wanted to make their mark on the industry. It was not an easy choice to start a rocket company at a time when there were dozens of other entrants in the field.

Andy Lapsa speaks at the Space Economy Summit in November 2025. Credit: The Economist Group

“It was a huge question in my head: Does the world really need a 151st rocket company?” he said. “And in order for me to say yes to that question, I had to very systematically go through all the other players, thinking about the economics of launch, about the business plan, about the evolution of these companies over time. It was very non-intuitive to me to start another launch company.”

So why did he do it?

I traveled to Florida in November to answer this question and to see if the world’s 151st rocket company had any chance of success.

Launch Complex 14

It takes a long time to build a launch site. Probably longer than you might think.

Lapsa and Feldman spent much of 2020 working on the basic design of a rocket that would eventually be named Nova and deciding whether they could build a business around it. In December of that year, they closed their seed round of funding, raising $9.1 million. After this, finding somewhere to launch from became a priority.

They zeroed in on Cape Canaveral because it’s where the majority of US launch companies and customers are, as well as the talent to assemble and launch rockets. They learned in 2021 that the US Space Force was planning to lease an old pad, Space Launch Complex 14, to a commercial company. This was not just a good location to launch from; it was truly a historic location—John Glenn launched into orbit from here in 1962 aboard the Friendship 7 spacecraft. It was retired in 1967 and designated a National Historic Landmark.

But in recent years, the Space Force has sought to support the flourishing US commercial space industry, and it offered up Launch Complex 14 for lease. The competition opened in 2021, and Stoke Space won the lease a year later. Then began the long and arduous process of conducting an Environmental Assessment. It took nearly two years, and it was not until October 20, 2024, that Stoke was allowed to break ground.

None of the structures on the site were usable; aside from the historic blockhouse dating to the Mercury program, everything had to be demolished and cleared before work could begin.

As we walked the large ring encompassing the site, Lapsa explained that all of the tanks and major hardware needed to support a Nova launch were now on site. There is a large launch tower, as well as a launch mount upon which the rocket will be stood up. The company has mostly turned toward integrating all of the ground infrastructure and wiring up the site. A nearby building to assemble rockets and process payloads is well underway.

Lapsa seemed mostly relieved. “A year ago, this was my biggest concern,” he said.

He need not have worried. A few months before the company completed its environmental permitting, a tall, lanky, thickly bearded engineer named Jonathan Lund hired on. A Stanford graduate who got his start with the US Army Corps of Engineers, Lund worked at SpaceX during the second half of the 2010s, helping to lead the reconstruction of one launch pad, the crew tower project at Launch Complex 39A, and a pad at Vandenberg Space Force Base. He also worked on multiple landing sites for the Falcon 9 rocket. Lund arrived to lead the development of Stoke’s site.

This is Lund’s fifth launch pad. Each one presents different challenges. In Florida, for example, the water table lies only a few feet below the ground. But for most rockets, including Nova, a large trench must be dug to allow flames from the rocket engines to be carried away from the vehicle at ignition and liftoff. As we stood in this massive flame diverter, there were a few indications of water seeping in.

Still, the company recently completed a major milestone by testing the water suppression system, which dampens the energy of a rocket at liftoff to protect the launch pad. Essentially, the plume from the rocket’s engines flows downward where it meets a sheet of water, turning it into steam. This creates an insulating barrier of sorts.

Water suppression test at LC-14 complete. ✅ Flowed the diverter and rain birds in a “launch like” scenario. pic.twitter.com/rs1lEloPul

— Stoke Space (@stoke_space) October 21, 2025

The water comes from large pipes running down the flame diverter, each of which has hundreds of holes not unlike a garden sprinkler hose. Lund said the pipes and the frame they rest on were built near where we stood.

“We fabricated these pieces on site, at the north end of the flame trench,” Lund explained. “Then we built this frame in Cocoa Beach and shipped it in four different sections and assembled it on site. Then we set the frame on the ramp, put together this surface (with the pipes), and then Egyptian-style we slide it down the ramp right into position. We used some old-school methods, but simple sometimes works best. Nothing fancy.”

At this point, Lapsa interrupted. “I was pretty nervous,” he said. “The way you’re describing this sounded good on a PowerPoint. But I wasn’t sure it actually would work.”

But it did.

Waiting on Nova

So if the pad is rounding into shape, how’s that rocket coming?

It sounds like Stoke Space is doing the right things. Earlier this year, the company shipped a full-scale version of its second stage to its test site at Moses Lake in central Washington. There, it underwent qualification testing, during which the vehicle is loaded with cryogenic fuels on multiple occasions, pressurized, and put through other exercises. Lapsa said that testing went well.

The company also built a stubby version of its first stage. The tanks and domes had full-size diameters, but the stage was not its full height. That vehicle also underwent qualification testing and passed.

The company has begun building flight hardware for the first Nova rocket. The vehicle’s software is maturing. Work is well underway on the development of an automated flight termination system. “Having a team that’s been through this cycle many times, it’s something we started putting attention on very early,” Lapsa said. “It’s on a good path as well.”

And yet the final, frenetic months leading to a debut launch are crunch time for any rocket company: the first assembly of the full vehicle, the first time test-firing it all. Things will inevitably go wrong. The question is how bad the problems will be.

For as long as I’ve known Lapsa, he has been cagey about launch dates for Stoke. This is smart because in reality, no one knows. And seasoned industry people (and journalists) know that projected launch dates for new rockets are squishy. The most precise thing Lapsa will say is that Stoke is targeting “next year” for Nova’s debut.

The company has a customer for the first flight. If all goes well, its first mission will sail to the asteroid belt. Asteroid mining startup AstroForge has signed on for Nova 1.

Stoke Space isn’t shooting for the Moon. It’s shooting for something roughly 1,000 times farther.

Too good to believe it’s true?

Stoke Space is far from the first company to start with grand ambitions. And when rocket startups think too big, it can be their undoing.

A little more than a decade ago, Firefly Space Systems in Texas based the design of its Alpha rocket on an aerospike engine, a technology that had never been flown to space before. Although this was theoretically a more efficient engine design, it also brought more technical risk and proved a bridge too far. By 2017, the company was bankrupt. When Ukrainian investor Max Polyakov rescued Firefly later that year, he demanded that Alpha have a more conventional rocket engine design.

Around the same time that Firefly struggled with its aerospike engine, another launch company, Relativity Space, announced its intent to 3D-print the entirety of its rockets. The company finally launched its Terran 1 rocket after eight years. But it struggled with additively manufacturing rockets. Relativity was on the brink of bankruptcy before a former Google executive, Eric Schmidt, stepped in to rescue the company financially. Relativity is now focused on a traditionally manufactured rocket, the Terran R.

Stoke Space’s Hopper 2 takes to the skies in September 2023 in Moses Lake, Washington. Credit: Stoke Space

So what to make of Stoke Space, which has an utterly novel design for its second stage? The stage is powered by a ring of 24 thrusters, an engine collectively named Andromeda. Stoke has also eschewed a tile-based heat shield to protect the vehicle during atmospheric reentry in favor of a regeneratively cooled design.

In this, there are echoes of Firefly, Relativity, and other companies with grand plans that had to be abandoned in favor of simpler designs to avoid financial ruin. After all, it’s hard enough to reach orbit with a conventional rocket.

But the company has already done a lot of testing of this design. Its first iteration of Andromeda even completed a hop test back in 2023.

“Andromeda is wildly new,” Lapsa said. “But the question of can it work, in my opinion, is a resounding yes.”

The engineering team had all manner of questions when designing Andromeda several years ago. How will all of those thrusters and their plumbing interact with one another? Will there be feedback? Is the heat shield idea practical?

“Those are the kind of unknowns that we knew we were walking into from an engineering perspective,” Lapsa said. “We knew there should be an answer in there, but we didn’t know exactly what it would be. It’s very hard to model all that stuff in the transient. So you just had to get after it, and do it, and we were able to do that. So can it work? Absolutely yes. Will it work out of the box? That’s a different question.”

First stage, too

Stoke’s ambitions did not stop with the upper stage. Early on, Lapsa, Feldman, and the small engineering team also decided to develop a full-flow staged combustion engine. This, Lapsa acknowledges, was a “risky” decision for the company. But it was a necessary one, he believes.

Full-flow staged combustion engines had been tested before this decade but were never flown. From an engineering standpoint, they are significantly more complex than a traditional staged combustion engine in that the oxidizer and fuel—which begin as cryogenic liquids—arrive in the combustion chamber in a fully gaseous state. This interaction between two gases is more efficient and produces less wear and tear on the turbines within the engine.

“You want to get the highest efficiency you can without driving the turbine temperature to a place where you have a short lifetime,” Lapsa said. “Full-flow is the right answer for that. If you do anything else, it’s a distraction.”

Stoke Space successfully tests its advanced full-flow staged combustion rocket engine, designed to power the Nova launch vehicle’s first stage. Credit: Stoke Space

It was also massively unproven. When Stoke Space was founded in 2020, no full-flow staged combustion engine had ever gotten close to space. SpaceX was developing the Raptor engine using the technology, but it would not make its first “spaceflight” until the spring of 2023 on the Super Heavy rocket that powers Starship. Multiple Raptors failed shortly after ignition.

But for a company choosing full reusability of its rocket, as SpaceX sought to do with Starship, there ultimately is no choice.

“Anything you build for full and rapid reuse needs to find margin somewhere in the system,” Lapsa said. “And really that’s fuel efficiency. It makes fuel efficiency a very strong, very important driver.”

In June 2024, Stoke Space announced it had just completed a successful hot fire test of its full-flow staged combustion engine for Nova’s first stage. The propulsion team had, Lapsa said at the time, “worked tirelessly” to reach that point.

Not just another launch company?

Stoke Space got to the party late. After SpaceX’s success with the first Falcon 9 in 2010, a wave of new companies entered the field over the next decade. They were drawing down billions in venture capital funding, and some were starting to go public at huge valuations through mergers with special purpose acquisition companies. But by 2020, the market seemed saturated. The gold rush for new launch companies was nearing the cops-arrive-to-bust-up-the-festivities stage.

Every new company seemed to have its own spin on how to conquer low-Earth orbit.

“There were a lot of other business plans being proposed and tried,” Lapsa said. “There were low-cost, mass-produced disposable rockets. There were rockets under the wings of aircraft. There were rocket engine companies that were going to sell to 150 launch companies. All of those ideas raised big money and deserve to be considered. The question is, which one is the winner in the end?”

And that’s the question he was trying to answer in his own mind. He was in his 30s. He had a family. And he was looking to commit his best years, professionally, to solving a major launch problem.

“What’s the thing that fundamentally moves the needle on what’s out there already today?” he said. “The only thing, in my opinion, is rapid reuse. And once you get it, the economics are so powerful that nothing else matters. That’s the thing I couldn’t get out of my head. That’s the only problem I wanted to work on, and so we started a company in order to work on it.”

Stoke was one of many launch companies five years ago. But in the years since, the field has narrowed considerably. Some promising companies, such as Virgin Orbit and ABL Space, launched a few times and folded. Others never made it to the launch pad. Today, by my count, there are fewer than 10 serious commercial launch companies in the United States, Stoke among them. The capital markets seem convinced. In October, Stoke announced a massive $510 million Series D funding round. That is a lot of money at a time when raising funding for a launch firm is challenging.

So Stoke has the money it needs. It has a team of sharp engineers and capable technicians. It has a launch pad and qualified hardware. That’s all good because this is the point in the journey for a launch startup where things start to get very, very difficult.


Eric Berger is the senior space editor at Ars Technica, covering everything from astronomy to private space to NASA policy, and author of two books: Liftoff, about the rise of SpaceX; and Reentry, on the development of the Falcon 9 rocket and Dragon. A certified meteorologist, Eric lives in Houston.


oneplus-15-review:-the-end-of-range-anxiety

OnePlus 15 review: The end of range anxiety


It keeps going and going and…

OnePlus delivers its second super-fast phone of 2025.

The OnePlus 15 represents a major design change. Credit: Ryan Whitwam

OnePlus got its start courting the enthusiast community by offering blazing-fast phones for a low price. While the prices aren’t quite as low as they once were, the new OnePlus 15 still delivers on value. Priced at $899, this phone sports the latest and most powerful Snapdragon processor, the largest battery in a mainstream smartphone, and a super-fast screen.

The OnePlus 15 still doesn’t deliver the most satisfying software experience, and the camera may actually be a step back for the company, but the things OnePlus gets right are very right. It’s a fast, sleek phone that runs for ages on a charge, and it’s a little cheaper than the competition. But its shortcomings make it hard to recommend this device over the latest from Google or Samsung—or even the flagship phone OnePlus released 10 months ago.

US buyers have time to mull it over, though. Because of the recent government shutdown, Federal Communications Commission approval of the OnePlus 15 has been delayed. The company says it will release the phone as soon as it can, but there’s no exact date yet.

A sleek but conventional design

After a few years of phones with a distinctly “OnePlus” look, the OnePlus 15 changes up the formula by looking more like everything else. The overall shape is closer to that of phones from Samsung, Apple, and Google than to that of the OnePlus 13. That said, the OnePlus 15 is extremely well-designed, and it’s surprisingly lightweight (211g) for how much power it packs. It’s sturdy, offering full IP69K sealing, and it uses the latest Gorilla Glass Victus 2 on the screen. An ultrasonic fingerprint scanner under the display works just as well as any other flagship phone’s fingerprint unlock.

Specs at a glance: OnePlus 15
SoC: Snapdragon 8 Elite Gen 5
Memory: 12GB, 16GB
Storage: 256GB, 512GB
Display: 2772 x 1272 6.78″ OLED, 1-165 Hz
Cameras: 50 MP primary, f/1.8, OIS; 50 MP ultrawide, f/2.0; 50 MP 3.5x telephoto, OIS, f/2.8; 32 MP selfie, f/2.4
Software: Android 16, four years of OS updates, six years of security patches
Battery: 7,300 mAh, 100 W wired charging (80 W with included plug), 50 W wireless charging
Connectivity: Wi-Fi 7, NFC, Bluetooth 6.0, sub-6 GHz 5G, USB-C 3.2 Gen 1
Measurements: 161.4 x 76.7 x 8.1 mm; 211 g

OnePlus managed to cram a 7,300 mAh battery in this phone without increasing the weight compared to last year’s model. Flagship phones like the Samsung Galaxy S25 Ultra and Pixel 10 Pro XL are at 5,000 mAh or a little more, and they weigh the same or a bit more. Adding almost 50 percent capacity on top of that without making the phone ungainly is an impressive feat of engineering.

The display is big, bright, and fast. Credit: Ryan Whitwam

That said, this is still a very large phone. The OLED screen measures 6.78 inches and has a resolution of 1272 x 2772. That’s a little lower than last year’s phone, which almost exactly matched the Galaxy S25 Ultra’s 1440p screen. Even looking at the OP13 and OP15 side-by-side, the difference in display resolution is negligible. You might notice the increased refresh rate, though. During normal use, the OnePlus 15 can hit 120 Hz (or as low as 1 Hz to save power), but in supported games, it can reach 165 Hz.

While the phone’s peak brightness is a bit lower than last year’s phone (3,600 vs. 4,500 nits), that’s not the full-screen brightness you’ll see day to day. The standard high-brightness mode (HBM) rating is a bit higher at 1,800 nits, which is even better than what you’ll get on phones like the Galaxy S25 Ultra. The display is not just readable outside—it looks downright good.

OnePlus offers the phone in a few colors, but the differences are more significant than in your average smartphone lineup. The Sand Storm unit we’ve tested is a light tan color that would be impossible to anodize. Instead, this version of the phone uses a finish known as micro-arc oxidation (MAO), which is supposedly even more durable than PVD titanium. OnePlus says this is the first phone with this finish, but that’s not quite accurate: The 2012 HTC One S also had an MAO finish, and it was known to chip over time. OnePlus says its take on MAO is more advanced and was tested with a device known as a nanoindenter that can assess the mechanical properties of a material with microscopic precision.

The OnePlus 15 looks nice, but it also looks more like everything else. It does have an IR blaster, though. Credit: Ryan Whitwam

Durability aside, the MAO finish feels very interesting—it’s matte and slightly soft to the touch but cool like bare metal. It’s very neat, but it’s probably not neat enough to justify an upgrade if you’re looking at the base model. You can only get Sand Storm with the upgraded $999 model, which has 512GB of storage and 16GB of RAM.

The Sand Storm variant also has a fiberglass back panel rather than the glass used on other versions of the phone. All colorways have the same squircle camera module in the corner, sporting three large-ish sensors. Unlike some competing devices, the camera bump isn’t too prominent. So the phone almost lies flat—it still rocks a bit when sitting on a table, but not as much as phones like the Galaxy S25 Ultra.

For years, OnePlus set itself apart with the alert slider, but this is the company’s first flagship phone to drop that feature. Instead, you get a configurable action button similar to the iPhone. By default, the “Plus Key” connects to the Plus Mind AI platform, allowing you to take screenshots and record voice notes to load them instantly into the AI. More on that later.

The Plus Key (bottom) has replaced the alert slider (top). We don’t like this. Credit: Ryan Whitwam

You can remap the key to control ring mode, the flashlight, or several other features. However, the button feels underutilized, and the default behavior is odd. You don’t exactly need an entire physical control to take screenshots when that’s already possible by holding the power and volume-down buttons, as on any other phone. The alert slider will be missed.

Software and AI

The OnePlus 15 comes with OxygenOS 16, which is based on Android 16. The software is essentially the same as what you’d find on OnePlus and Oppo phones in China but with the addition of Google services. The device inherits some quirks from the Chinese version of the software, known as ColorOS. Little by little, the international OxygenOS has moved closer to the software used in China. For example, OnePlus is very invested in slick animations in OxygenOS, which can be a bit distracting at times.

Some things that should be simple often take multiple confirmation steps in OxygenOS. Case in point: Removing an app from your home screen requires a long-press and two taps, and OnePlus chose to separate icon colors and system colors in the labyrinthine theming menu. There are also so many little features vying for your attention that it takes a day or two just to encounter all of them and tap through the on-screen tutorials.

Plus Mind aims to organize your data in screenshots and voice notes. Credit: Ryan Whitwam

OnePlus has continued aping the iPhone to an almost embarrassing degree with this phone. There are Dynamic Island-style notifications for Android’s live alerts, which look totally alien in this interface. The app drawer also has a category view like iOS, but the phone doesn’t know what most of our installed apps are. Thus, “Other” becomes the largest category, making this view rather useless.

OnePlus was a bit slower than most to invest in generative AI features, but there are plenty baked into the OnePlus 15. The most prominent AI feature is Mind Space, which lets you save voice notes and screenshots with the Plus Key; they become searchable after being processed with AI. This is most similar to Nothing’s Essential Space. Google’s Pixel Screenshots app doesn’t do voice, but it offers a more conversational interface that can pull information from your screens rather than just find them, which is all Mind Space can do.

While OnePlus has arguably the most capable on-device AI hardware in the Snapdragon 8 Elite Gen 5, it’s not relying on it for much AI processing. Only some content from Plus Mind is processed locally; the rest is uploaded to the company’s Private Computing Cloud. Features like AI Writer and the AI Recorder operate entirely in this cloud system. There’s also an AI universal search feature that sends information to the cloud, but this is thankfully disabled by default. OnePlus says it has full control of these servers, noting that encryption prevents anyone, even OnePlus itself, from accessing your data.

The categorized app drawer is bad at recognizing apps. Credit: Ryan Whitwam

So OnePlus is at least saying the right things about privacy—Google has a similar pitch for its new private AI cloud compute environment. Regardless of whether you believe that, though, there are other drawbacks to leaning so heavily on the cloud. Features that run workloads in the Private Computing Cloud will have more latency and won’t work without a solid internet connection. It also just seems like a bit of a waste not to take advantage of Qualcomm’s super-powerful on-device capabilities.

AI features on the OnePlus 15 are no more or less useful than the versions on other current smartphones. If you want a robot to write Internet comments for you, the OnePlus 15 can do that just fine. If you don’t want to use AI on your phone, you can remap the Plus Key to something else and ignore the AI-infused stock apps. There are plenty of third-party alternatives that don’t have AI built in.

OnePlus doesn’t have the best update policy, but it’s gotten better over time. The OnePlus 15 is guaranteed four years of OS updates and six years of security patches. The market leaders are Google and Samsung, which offer seven years of full support.

Performance and battery

There are no two ways about it: The OnePlus 15 is a ridiculously fast phone. This is the first Snapdragon 8 Elite Gen 5 device we’ve tested, and it definitely puts Qualcomm’s latest silicon to good use. This chip has eight Oryon CPU cores, with clock speeds as high as 4.6 GHz. It’s almost as fast as the Snapdragon X Elite laptop chips.

Even though OnePlus has some unnecessarily elaborate animations, you never feel like you’re waiting on the phone to catch up. Every tap is detected accurately, and app launches are near instantaneous. The Gen 5 is faster than last year’s flagship processor, but don’t expect the OnePlus 15 to run at full speed indefinitely.

In our testing, the phone pulls back 10 to 20 percent under thermal load to manage heat. The OP15 has a new, larger vapor chamber that seems to keep the chipset sufficiently cool during extended gaming sessions. That heat has to go somewhere, though. The phone gets noticeably toasty in the hand during sustained use.

The OnePlus 15 behaves a bit differently in benchmark apps, maintaining high speeds longer to attain higher scores. This tuning reveals just how much heat an unrestrained Snapdragon 8 Elite Gen 5 can produce. After running flat-out for 20 minutes, the phone loses only a little additional speed, but the case gets extremely hot. Parts of the phone reached a scorching 130° Fahrenheit, which is hot enough to burn your skin after about 30 seconds. During a few stress tests, the phone completely closed all apps and disabled functions like the LED flash to manage heat.

The unthrottled benchmarks do set a new record. The OnePlus 15 tops almost every test; Apple’s iPhone 17 Pro eked out the only win, in Geekbench single-core. Snapdragon has always fallen short in single-core throughput in past Apple-Qualcomm matchups, but it wins on multicore performance.

The Snapdragon chip uses a lot of power when it’s cranked up, but the OnePlus 15 has battery to spare. The 7,300 mAh silicon-carbon cell is enormous compared to the competition, which hovers around 5,000 mAh in other big phones. This is one of the very few smartphones that you don’t have to charge every night. In fact, making it through two or three days with this device is totally doable. And that’s without toggling on the phone’s battery-saving mode.

OnePlus also shames the likes of Google and Samsung when it comes to charging speed. The phone comes with a charger in the box—a rarity these days. This adapter can charge the phone at an impressive 80 W, and OnePlus will offer a 100 W charger on its site. With the stock charger, you can completely charge the massive battery in a little over 30 minutes. It almost doesn’t matter that the battery is so big because a few minutes plugged in gives you more than enough to head out the door. Just plug the phone in while you look for your keys, and you’re good to go. The phone also supports 50 W wireless charging with a OnePlus dock, but that’s obviously not included.
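
As a rough sanity check on that claim (assuming a typical lithium-ion nominal voltage of about 3.85 V, which OnePlus doesn’t publish, and ignoring conversion losses):

    7.3\ \text{Ah} \times 3.85\ \text{V} \approx 28\ \text{Wh} \qquad \frac{28\ \text{Wh}}{80\ \text{W}} \approx 0.35\ \text{h} \approx 21\ \text{min}

A full charge in a little over 30 minutes is consistent with that figure once you account for the charger tapering power near the end of the cycle.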

There is somehow a 7,300 mAh battery in there. Credit: Ryan Whitwam

Unfortunately, only chargers and cables compatible with Oppo’s SuperVOOC system will reach these speeds. It’s nice to see one in the box because spares will cost you the better part of $100. Even if you aren’t using an official OnePlus charger/cable, a standard USB-PD plug can still hit 36 W, which is faster than phones like the Pixel 10 Pro and Galaxy S25 and about the same as the iPhone 17.

Cameras

OnePlus partnered with imaging powerhouse Hasselblad on its last several flagship phones, but that pairing is over with the launch of the OnePlus 15. The phone maker is now going it alone, swapping Hasselblad’s processing for a new imaging engine called DetailMax. The hardware is changing, too.

The OnePlus 15 camera setup is a slight downgrade from the 13. Credit: Ryan Whitwam

The OnePlus 15 has new camera sensors, though the megapixel counts are unchanged. There’s a 50 MP primary wide-angle, a 50 MP telephoto with 3.5x effective zoom, and a 50 MP ultrawide with support for macro shots. There’s also a 32 MP selfie camera peeking through the OLED.

Each of these sensors is slightly smaller than its counterpart in last year’s phone. That means less light collection, but good processing can make up for minor physical changes like that. The processing is the problem, though.

Taking photos with the OnePlus 15 can be frustrating because the image processing misses as much as it hits. The colors, temperature, dynamic range, and detail are not very consistent. Images taken in similar conditions of similar objects—even those taken one after the other—can have dramatically different results. Color balance is also variable across the three rear sensors.

Bright outdoor light, fast movement. Credit: Ryan Whitwam

To be clear, some of the photos we’ve taken on the OnePlus 15 are great. These are usually outdoor shots, where the phone has plenty of light. It’s not bad at capturing motion in these instances, and photos are sharp as long as the frame isn’t too busy. However, DetailMax has a tendency to oversharpen, which obliterates fine details and makes images look the opposite of detailed. This is much more obvious in dim lighting, where longer exposures lead to blurry subjects more often than not.

Adding any digital zoom to your framing is generally a bad idea on the OnePlus 15. The processing just doesn’t have the capacity to clean up those images like a Google Pixel or even a Samsung Galaxy. The telephoto lens is good for getting closer to your subject, but the narrow aperture and smaller pixels make it tough to rely on indoors. Again, outdoor images are substantially better.

Shooting landscapes with the ultrawide is a good experience. The oversharpening isn’t as apparent in bright outdoor conditions, and there’s very little edge distortion, though the field of view is narrower than on the OnePlus 13’s ultrawide camera, which helps explain that. Macro shots are accomplished with this same lens, and the results are better than you’ll get with any dedicated macro lens on a phone. That said, blurriness and funky processing creep in often enough that backing up and shooting a normal photo can serve you better, particularly if there isn’t much light.

A tale of two flagships

The OnePlus 15 is not the massive leap you might expect from skipping a number. The formula is largely unchanged from its last few devices—it’s blazing fast and well-built, but everything else is something of an afterthought.

You probably won’t be over the moon for the OnePlus 15, but it’s a good, pragmatic choice. It runs for days on a charge, you barely have to touch it with a power cable to get a full day’s use, and it manages that incredible battery life while being fast as hell. Honestly, it’s a little too fast in benchmarks, with the frame reaching borderline dangerous temperatures. The phone might get a bit warm in games, but it will maintain frame rates better than anything else on the market, up to 165 fps in titles that support its ultra-fast screen.

The OnePlus 13 (left) looked quite different compared to the 15 (right). Credit: Ryan Whitwam

However, the software can be frustrating at times, with inconsistent interfaces and unnecessarily arduous usage flows. OnePlus is also too dependent on sending your data to the cloud for AI analysis. You can avoid that by simply not using OnePlus’ AI features, and luckily, doing so is pretty easy.

It’s been less than a year since the OnePlus 13 arrived, but the company really wanted to be the first to get the new Snapdragon in everyone’s hands. So here we are with a second 2025 OnePlus flagship. If you have the OnePlus 13, there’s no reason to upgrade. That phone is arguably better, even though it doesn’t have the latest Snapdragon chip or an enormous battery. It still lasts more than long enough on a charge, and the cameras perform a bit better. You also can’t argue with that alert slider.

The Good

  • Incredible battery life and charging speed
  • Great display
  • Durable design, cool finish on Sand Storm colorway
  • Blazing fast

The Bad

  • Lots of AI features that run in the cloud
  • Cameras a step down from OnePlus 13
  • OxygenOS is getting cluttered
  • RIP the alert slider
  • Blazing hot

Ryan Whitwam is a senior technology reporter at Ars Technica, covering the ways Google, AI, and mobile technology continue to change the world. Over his 20-year career, he’s written for Android Police, ExtremeTech, Wirecutter, NY Times, and more. He has reviewed more phones than most people will ever own. You can follow him on Bluesky, where you will see photos of his dozens of mechanical keyboards.

OnePlus 15 review: The end of range anxiety

i’ve-already-been-using-a-“steam-machine”-for-months,-and-i-think-it’s-great

I’ve already been using a “Steam Machine” for months, and I think it’s great


or, “the impatient person’s guide to buying a Steam Machine”

With a little know-how, you can get yourself a Steam Machine right this minute.

I started trying to install SteamOS on other PCs basically as soon as Valve made it possible. Credit: Andrew Cunningham

Valve’s second big foray into first-party PC hardware isn’t a sequel to the much-imitated Steam Deck portable, but rather a desktop computer called the Steam Machine. And while it could go on your desk, Valve clearly intends for it to fit in an entertainment center under a TV—next to, or perhaps even instead of, a game console like the Xbox or PlayStation 5.

I am pretty sure this idea could work, and it’s because I’ve already been experimenting with what is essentially a “Steam Machine” underneath my own TV for months, starting in May when Valve began making it possible to install SteamOS on certain kinds of generic PC hardware.

Depending on what it costs—and we can only guess what it will cost—the Steam Machine could be a good fit for people who just want to plug a more powerful version of the Steam Deck experience into their TVs. But for people who like tinkering or who, like me, have been messing with miniature TV-connecting gaming PCs for years and are simply tired of trying to make Windows workable, the future promised by the Steam Machine is already here.

My TV PC setup

I had always been sort of TV PC-curious, but I can trace my current setup to December 2018, when, according to a Micro Center receipt in my inbox, I built a $504.51 PC in a tiny InWin Chopin case centered on an AMD Ryzen 5 2400G processor.

At the time, the Ryzen brand was only a couple of years old, and the 2400G had impressed reviewers by combining a competent-enough quad-core CPU with a usably performant integrated GPU. And the good news was: It worked! It was nowhere near as good as the graphical experience that, say, a PlayStation 4 could provide, but it worked well for older and indie games, while also giving me access to a TV-connected computer for the occasions when I wanted to stream things from a browser, or participate in a living room-scale Zoom call (something that would become the box’s main job during the pandemic-induced isolation of 2020 and early 2021).

(This PC evolved over time and currently uses a Ryzen 8700G processor, which includes AMD’s best CPU and integrated GPU for socketed desktop motherboards. I did this to get more stable 1080p performance in more games, but I would not recommend this build to most people right now—more on that in a bit.)

The main problem was Windows, which was not and still is not particularly well-optimized for controller-driven living room use. What I really wanted was a startup process that felt more or less like a game console: hit the power button, and automatically get launched into a gamepad-navigable interface that would let me launch and play things without touching a mouse or keyboard.

There are third-party apps like Launchbox that make a go of providing this functionality for people more interested in emulation or who own games from multiple PC storefronts. What I eventually settled on was a sort of hacky fix that allowed my user account to log in automatically, and then automatically launch Steam in Big Picture Mode.
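
The article doesn’t spell out the exact mechanism, but here’s a minimal sketch of one way to set this up on Windows, assuming automatic sign-in is already enabled (via netplwiz, for example) and that the installed Steam client has registered the steam:// URI handler; the file name is arbitrary.

    # Drop a batch file into the current user's Startup folder so that Steam's
    # Big Picture interface opens automatically after login.
    import os
    from pathlib import Path

    startup = Path(os.environ["APPDATA"]) / "Microsoft/Windows/Start Menu/Programs/Startup"
    launcher = startup / "big_picture.bat"

    # "start" hands the URI to its registered handler (the Steam client),
    # which opens directly into Big Picture Mode.
    launcher.write_text('@echo off\nstart "" "steam://open/bigpicture"\n')
    print(f"Wrote {launcher}")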

This worked… fine—except when I needed to interact with a mouse and keyboard to install driver updates, or when some component of the Windows UI would steal focus from the Big Picture Mode window and make it impossible to use the controller to navigate.

So when reports indicated that Valve was working on a SteamOS version that would run on more hardware, I was immediately interested. SteamOS was designed to boot right into its gaming interface, and the desktop mode was its own separate thing that you needed to open up manually—ideal for my usage model, since I didn’t want to give up the desktop mode but also didn’t need to use it often. But I did run into some bumps during the installation process, which I’ll share here in case it helps you avoid them.

SteamOS or Bazzite

Bazzite’s desktop mode wallpaper. A community-supported alternative to SteamOS, Bazzite offers much wider hardware compatibility but can have rough edges. Credit: Bazzite

I had trouble using Valve’s official restore image (SteamOS version 3.7.7, from this support page) to get newer hardware working, which may be one reason why that language was softened. It was no problem to install official first-party SteamOS on slightly older hardware, like the Ryzen 7040 version of the Framework Laptop 13 or an older Acer laptop with a Ryzen 6000-series processor installed. But trying to install the software on newer hardware failed no matter what I tried. Those systems included the Ryzen AI 300 version of the Framework Laptop; a socket AM5 testbed desktop with a dedicated Radeon RX 7800 XT GPU; and, to my great disappointment, my TV desktop’s Ryzen 7 8700G.

There’s very little information out there about installing or troubleshooting SteamOS on generic hardware, but if you poke around on Reddit about much of anything, you’ll quickly meet a specific Type of Guy who believes that anyone with hardware compatibility issues should just use Bazzite, a community-developed alternate operating system that attempts to provide a SteamOS-ish alternative with wider hardware support (including for Intel and Nvidia hardware, which isn’t likely to be supported by the official SteamOS any time soon).

And so Bazzite I tried! When I indicated that I used an AMD GPU and wanted to boot into the SteamOS interface, Bazzite offered me the exact same image it provides for the Steam Deck and other handhelds, and it installed on my Ryzen desktop with minimal fuss.

Bazzite also came painfully close to what I wanted it to be, in terms of user experience—a desktop mode to boot into on the occasions I needed one, but otherwise I could just fire up the Xbox controller I had paired to the PC and jump right into a game.

But Bazzite was sunk by the same kind of bugs and edge cases that often chase me away from Linux operating systems when I try them. The main issue was that the system would periodically boot into desktop mode without asking (usually this seemed to happen when the Steam client software needed an update, but I can’t say for sure). Restarting the system would usually boot it back into the SteamOS interface—but I’d need to log in all over again, and the OS would switch Bluetooth off by default. Not only did I have to dig out a keyboard and mouse to solve this problem, but I also had to use a wired keyboard until I could get Bluetooth turned back on.

By the time this had happened twice, I was sure it wasn’t a fluke; by the time it had happened four or five times, I was determined to blow the entire operating system away and try again. And I was particularly interested in trying actual, for-real SteamOS again, just in case a new Bazzite install would have the same problems as the one I was already using.

After some digging, I found this directory. If you look through those folders, you’ll see OS images for various versions of SteamOS, including newer versions of SteamOS 3.7 (the “stable” version you’ll find on the Deck) and builds of both SteamOS 3.8 and 3.9 (the Deck will pull these down if you switch from the “stable” OS channel to “main”). Not all of those folders include the repair image you need to wipe a device and install SteamOS, but a few do—this one, dated October 27, is the most recent as of this writing.

Those newer versions of the operating system include changes that expand SteamOS’s hardware support, most notably a step up from Linux kernel version 6.11 to version 6.16. And it was that steamdeck-repair-main-20251027.1000-3.8.0.img.zip file that I was finally able to flash to a USB drive and install on my TV desktop using Valve’s instructions.

It has only been a week or so since then, but at least so far I’m finally getting what I wanted: the same experience as on my Deck, just on my TV, with hardware that is somewhat better-suited for a larger and higher-resolution screen (and that’s the main reason to do this, rather than use a docked Steam Deck for everything).

The SteamOS experience

The “console-like experience” designed for the Steam Deck also works well with a TV and a gamepad. Credit: Valve

Once the OS is installed and is up and running, anyone who has used a Steam Deck will find it instantly familiar, and all you’ll need to do to get going is connect or pair a gamepad and/or a keyboard and mouse.

Most of the bugs and quirks I’ve run into stem from the fact that this software was developed for standalone handheld gaming consoles first and foremost. There are multiple settings toggles—including those for adaptive brightness and HDMI-CEC—that serve a purpose on the Steam Deck but just don’t function on a desktop, where these features usually aren’t present or aren’t supported.

SteamOS is also pretty hit or miss about selecting the correct resolution and refresh rate for a connected display. Navigate to Settings, then Display, and turn off the “Automatically Set Resolution” toggle, and you’ll see a full list of supported resolutions and refresh rates to pick from. You may also want to scroll down and change the “Maximum Game Resolution” from “Native” to the actual native resolution of your screen, since I occasionally encountered games that wouldn’t offer resolutions supported by the display I was using.

Similarly, you may need to navigate to the Audio settings and switch output devices if you’re sending audio over HDMI. I also needed to turn the audio output volume up to around 80 percent before the sound coming out of my Steam Machine would match the volume of all the other boxes connected to my TV.

And if you’ve never used SteamOS before, it’s worth reading up on some of its limitations. While its compatibility with Windows games is quite good, Valve’s Proton compatibility layer is in continuous development, and not every game will play perfectly or play at all. Games that use anti-cheat software are still broadly incompatible with SteamOS, since many anti-cheat programs hook into the Windows kernel in ways that are impossible to translate or emulate. And while it’s possible to run games from other storefronts like Epic or GOG, it’s best done with third-party software like the Heroic Games Launcher, adding an extra layer of complexity.

And although SteamOS includes a useful desktop mode, it’s really not meant to be used as a day-to-day workhorse operating system—security features like “using a password to log in” are off by default in the interest of expediency, and you need to open your system to bootloader tampering just to install it. It’s fine for installing and running the odd desktop app every once in a while, but I’d hesitate to trust it with anything sensitive.

Finally, while our tests have shown that SteamOS generally performs at least as well as, if not better than, Windows running on the same hardware, the first-party version of SteamOS is still made with handhelds and other low-power hardware in mind. In my limited testing of SteamOS on desktops with both integrated graphics and more powerful dedicated GPUs, I’ve generally found that those observations hold up. But I’ve only tested on a narrow range of hardware, and you could easily encounter a setup where SteamOS just doesn’t run games as well as Windows does.

Rolling your own Steam Machine

A Ryzen 7 8700G-based “Steam Machine,” in an InWin Chopin Max case. I enjoy PC building, but the economics of this box aren’t great for most people. Credit: Andrew Cunningham

Say you’re interested in having a Steam Machine, you don’t want to wait for Valve, and you don’t just happen to have a spare ideally configured AMD-based PC to sacrifice to the testing gods.

I am more or less happy with my custom-built mini ITX Steam Machine, but I find it difficult to recommend this hardware combination to basically anybody at this point. For me, it scratched a PC-building itch, and the potential for future upgradability is mildly interesting to me. But given the high cost of AMD’s Socket AM5 platform and spiking costs for RAM and SSDs, it’s going to be difficult to put together an 8700G-focused system in an InWin Chopin for less than $800. And that’s a whole lot to pay for a years-old Radeon 780M GPU.

For a more budget-friendly Steam Machine, consider the range of no-name mini PCs available on Amazon and some other places. We’ve dabbled with systems from manufacturers like Aoostar, Beelink, Bosgame, and GMKtec before and come away conditionally impressed by the ratio of utility-to-performance, and YouTubers like RetroGameCorps and ETA Prime periodically cover new ones and generally have positive things to say. You’re rolling the dice on long-term reliability and support, but it’s also tough to argue with the convenience of the form factor or the pricing compared to a custom-built system.

If you’re going this route, we have some general recommendations and performance numbers, based on testing of similar chips in other laptops and desktops. Note that the Ryzen 6800U/Radeon 680M system is an Acer Swift Edge 16 laptop with 16GB of soldered DDR5, while the Ryzen 7840U/Radeon 780M system is a Framework Laptop 13 with non-soldered DDR5. Performance may differ by a few frames per second in either direction depending on your hardware configuration. The Ryzen 7700X/Radeon RX 7600 system is a custom-built testbed desktop similar to the one we use for testing CPUs and GPUs; based on hardware alone, we’d expect the real Steam Machine to perform near or slightly below that system.

A handful of numbers from a single game, to show relative performance differences between some integrated and low-end dedicated AMD GPUs. Credit: Andrew Cunningham

In the $350 to $400 range, look for PCs with a Ryzen 6800-series chip in them, like the 6800H or 6850H (here’s one from GMKtec for $385, and one from Beelink for $379). These processors come with a Radeon 680M integrated GPU, with 12 compute units (CUs) based on the RDNA2 architecture. These boxes will offer performance slightly superior to the actual Steam Deck, which uses eight RDNA2 CUs and squeezes them into a system with a small power envelope.

If you can spend around $500, that generally seems to get you the best performance for the price right now. Look for processors in the Ryzen 7040 or 8040 series, or the Ryzen 250 series (here’s one for $490 from GMKtec, one for $499 from Bosgame, and one for $449 from Aoostar). These chips all offer broadly similar combinations of eight Zen 4-based CPU cores and a Radeon 780M GPU with 12 CUs based on the RDNA3 architecture.

In a mini desktop, this GPU can come pretty close to doubling the performance of the Steam Deck, though it will still fall short of most dedicated graphics cards. It’s similar to the performance level of the non-Extreme version of the Ryzen Z2 chip for competing handhelds. The 780M is also the same GPU that comes with the Ryzen 8700G desktop chip I use, and I’ve found that it gets you decent 1080p performance in many games.

The GPU is the most important thing to focus on in these systems, since it’s going to have the most impact on the way games actually run. But keep an eye on RAM and storage, too; a 1TB SSD is obviously preferable to a 500GB SSD. And while most of these come with a healthy 32GB of RAM by default, pay attention to the type of RAM. If it just says “DDR5,” that’s most likely to be socketed RAM that’s a bit slower, but which you can upgrade yourself if you want. If it comes with LPDDR5X, that’s going to be soldered down, but also a bit faster, maximizing your graphics performance.
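
The reason the memory type matters is bandwidth, which integrated GPUs lean on heavily. As a rough, illustrative comparison (these are common speed grades rather than the specs of any particular mini PC, and both assume a 128-bit bus):

    5600\ \text{MT/s} \times 16\ \text{bytes} = 89.6\ \text{GB/s} \quad (\text{dual-channel DDR5-5600})
    7500\ \text{MT/s} \times 16\ \text{bytes} = 120\ \text{GB/s} \quad (\text{LPDDR5X-7500})

More bandwidth generally translates directly into higher frame rates for a GPU that shares system memory.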

The Steam Deck is a useful benchmark here, because it’s a fixed hardware platform that’s popular enough that PC game developers sometimes go out of their way to target. Games often include Steam Deck-specific graphics presets, which are a useful starting point when you’re fiddling with settings.

I would generally try to avoid systems with Ryzen AI 300-series chips in them—their Radeon 890M GPUs are faster, but they can also be twice as expensive as the Radeon 780M boxes. I’d also stay away from anything with Ryzen 5000 or 3000-series chips, or Ryzen 7030-series chips. The price tags on these $200 to $300 systems are tempting, and they will probably run SteamOS, but their older Vega-based GPUs will fall far short of the Steam Deck’s GPU, let alone the Radeon 680M or 780M.

The Framework Desktop is a compelling alternative to the actual Steam Machine, if you don’t mind paying for it. Credit: Andrew Cunningham

OK, but what if you have more money to spend, and you’re more interested in 1440p or 4K gaming performance (roughly what Valve is targeting with the actual Steam Machine)? I think that the Framework Desktop is a surprisingly good fit here; $1,200 will get you a console-sized PC with an eight-core Zen 5 CPU, a Radeon 8050S GPU with 32 CUs based on the RDNA 3.5 architecture (the Steam Machine has 28 RDNA3 CUs), 32GB of RAM, and a 1TB SSD.  I can confirm firsthand that SteamOS 3.8/3.9 installs and runs just fine.

This desktop is probably a bit more expensive than the Steam Machine will end up being, but it’s impossible to say how much more expensive before Valve actually puts out a price.

The TV PC is ready for its close-up

TV-connected PCs have historically been a niche thing. They’re expensive, they’re finicky, and purpose-built game consoles have always provided a more pleasant and seamless experience for people who just want to do everything with a controller from the couch.

But the TV PC could finally be ready for its moment. In SteamOS, Valve has created a pretty good, pretty widely compatible Windows substitute that buries a lot of the PC’s complexity (without totally removing it, for the people who want it sometimes). Like the Nintendo Switch, Valve has crafted a user interface that feels good to use on a handheld screen and on a TV from 10 feet away.

And this is happening at the same time as a weird detente in the console wars, where Sony seems to be embracing PC ports and easing up on exclusive releases at the same time as Microsoft seems, for all intents and purposes, to be winding down the Xbox hardware operation in favor of Windows. Valve is way out in front of Microsoft on its console-style PC interface at the same time as the PC is becoming a sort of universally compatible über-console.

I’m kind of the ideal audience for the Steam Machine; nearly all my PC games are on Steam, I play practically nothing that requires anti-cheat software, and I play mostly graphically undemanding indie games rather than GPU-bruising AAA titles. So, you know, take my enthusiasm for the concept with a grain of salt.

But as someone who has already functionally been living with a Steam Machine for months, I think that Valve’s new hardware could do for living room PCs what the Steam Deck has done for handhelds: define and expand a product category that others have tried and failed to crack. This year, my Steam Machine has ably kept up with me as I’ve played Silksong, UFO 50, Dave the Diver, both HD-2D Dragon Quest remakes, part of a bad-guy run through Baldur’s Gate III, some multiplayer Vampire Survivors experimentation, several Jackbox Party Pack sessions, and more besides. I’ve never been less tempted to buy a PlayStation 5.

Andrew is a Senior Technology Reporter at Ars Technica, with a focus on consumer tech including computer hardware and in-depth reviews of operating systems like Windows and macOS. Andrew lives in Philadelphia and co-hosts a weekly book podcast called Overdue.

I’ve already been using a “Steam Machine” for months, and I think it’s great

us-spy-satellites-built-by-spacex-send-signals-in-the-“wrong-direction”

US spy satellites built by SpaceX send signals in the “wrong direction”


Spy satellites emit surprising signals

It seems the US didn’t coordinate Starshield’s unusual spectrum use with other countries.

Image of a Starshield satellite from SpaceX’s website. Credit: SpaceX

About 170 Starshield satellites built by SpaceX for the US government’s National Reconnaissance Office (NRO) have been sending signals in the wrong direction, a satellite researcher found.

The SpaceX-built spy satellites are helping the NRO greatly expand its satellite surveillance capabilities, but the purpose of these signals is unknown. The signals are sent from space to Earth in a frequency band that’s allocated internationally for Earth-to-space and space-to-space transmissions.

There have been no public complaints of interference caused by the surprising Starshield emissions. But the researcher who found them says they highlight a troubling lack of transparency in how the US government manages the use of spectrum and a failure to coordinate spectrum usage with other countries.

Scott Tilley, an engineering technologist and amateur radio astronomer in British Columbia, discovered the signals in late September or early October while working on another project. He found them in various parts of the 2025–2110 MHz band, and from his location, he was able to confirm that 170 satellites were emitting the signals over Canada, the United States, and Mexico. Given the global nature of the Starshield constellation, the signals may be emitted over other countries as well.

“This particular band is allocated by the ITU [International Telecommunication Union], the United States, and Canada primarily as an uplink band to spacecraft on orbit—in other words, things in space, so satellite receivers will be listening on these frequencies,” Tilley told Ars. “If you’ve got a loud constellation of signals blasting away on the same frequencies, it has the potential to interfere with the reception of ground station signals being directed at satellites on orbit.”

In the US, users of the 2025–2110 MHz portion of the S-Band include NASA and the National Oceanic and Atmospheric Administration (NOAA), as well as nongovernmental users like TV news broadcasters that have vehicles equipped with satellites to broadcast from remote locations.

Experts told Ars that the NRO likely coordinated with the US National Telecommunications and Information Administration (NTIA) to ensure that signals wouldn’t interfere with other spectrum users. A decision to allow the emissions wouldn’t necessarily be made public, they said. But conflicts with other governments are still possible, especially if the signals are found to interfere with users of the frequencies in other countries.

Surprising signals

Scott Tilley and his antennas. Credit: Scott Tilley

Tilley previously made headlines in 2018 when he located a satellite that NASA had lost contact with in 2005. For his new discovery, Tilley published data and a technical paper describing the “strong wideband S-band emissions,” and his work was featured by NPR on October 17.

Tilley’s technical paper said emissions were detected from 170 satellites out of the 193 known Starshield satellites. Emissions have since been detected from one more satellite, making it 171 out of 193, he told Ars. “The apparent downlink use of an uplink-allocated band, if confirmed by authorities, warrants prompt technical and regulatory review to assess interference risk and ensure compliance” with ITU regulations, Tilley’s paper said.

Tilley said he uses a mix of omnidirectional antennas and dish antennas at his home to receive signals, along with “software-defined radios and quite a bit of proprietary software I’ve written or open source software that I use for analysis work.” The signals did not stop when the paper was published. Tilley said the emissions are powerful enough to be received by “relatively small ground stations.”

Tilley’s paper said that Starshield satellites emit signals with a width of 9 MHz and signal-to-noise (SNR) ratios of 10 to 15 decibels. “A 10 dB SNR means the received signal power is ten times greater than the noise power in the same bandwidth,” while “20 dB means one hundred times,” Tilley told Ars.
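
In other words, the decibel figures describe a power ratio on a logarithmic scale:

    \text{SNR}_{\text{dB}} = 10 \log_{10}\!\left(\frac{P_{\text{signal}}}{P_{\text{noise}}}\right), \qquad 10\ \text{dB} \Rightarrow 10\times, \qquad 20\ \text{dB} \Rightarrow 100\times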

Other Starshield signals that were 4 or 5 MHz wide “have been observed to change frequency from day to day with SNR exceeding 20dB,” his paper said. “Also observed from time to time are other weaker wide signals from 2025–2110 MHz [that] may be artifacts or actual intentional emissions.”

The 2025–2110 MHz band is used by NASA for science missions and by other countries for similar missions, Tilley noted. “Any other radio activity that’s occurring on this band is intentionally limited to avoid causing disruption to its primary purpose,” he said.

The band is used for some fully terrestrial, non-space purposes. Mobile service is allowed in 2025–2110 MHz, but ITU rules say that “administrations shall not introduce high-density mobile systems” in these frequencies. The band is also licensed in the US for non-federal terrestrial services, including the Broadcast Auxiliary Service, Cable Television Relay Service, and Local Television Transmission Service.

While Earth-based systems using the band, such as TV links from mobile studios, have legal protection against interference, Tilley noted that “they normally use highly directional and local signals to link a field crew with a studio… they’re not aimed into space but at a terrestrial target with a very directional antenna.” A trade group representing the US broadcast industry told Ars that it hasn’t observed any interference from Starshield satellites.

“There without anybody knowing it”

Spectrum consultant Rick Reaser told Ars that Starshield’s space-to-Earth transmissions likely haven’t caused any interference problems. “You would not see this unless you were looking for it, or if it turns out that your receiver looks for everything, which most receivers aren’t going to do,” he said.

Reaser said it appears that “whatever they’re doing, they’ve come up with a way to sort of be there without anybody knowing it,” or at least until Tilley noticed the signals.

“But then the question is, can somebody prove that that’s caused a problem?” Reaser said. Other systems using the same spectrum in the correct direction probably aren’t pointed directly at the Starshield satellites, he said.

Reaser’s extensive government experience includes managing spectrum for the Defense Department, negotiating a spectrum-sharing agreement with the European Union, and overseeing the development of new signals for GPS. Reaser said that Tilley’s findings are interesting because the signals would be hard to discover.

“It is being used in the wrong direction, if they’re coming in downlink, that’s supposed to be an uplink,” Reaser said. As for what the signals are being used for, Reaser said he doesn’t know. “It could be communication, it could be all sorts of things,” he said.

Tilley’s paper said the “results raise questions about frequency-allocation compliance and the broader need for transparent coordination among governmental, commercial, and scientific stakeholders.” He argues that international coordination is becoming more important because of the ongoing deployment of large constellations of satellites that could cause harmful interference.

“Cooperative disclosure—without compromising legitimate security interests—will be essential to balance national capability with the shared responsibility of preserving an orderly and predictable radio environment,” his paper said. “The findings presented here are offered in that spirit: not as accusation, but as a public-interest disclosure grounded in reproducible measurement and open analysis. The data, techniques, and references provided enable independent verification by qualified parties without requiring access to proprietary or classified information.”

While Tilley doesn’t know exactly what the emissions are for, his paper said the “signal characteristics—strong, coherent, and highly predictable carriers from a large constellation—create the technical conditions under which opportunistic or deliberate PNT exploitation could occur.”

PNT refers to Positioning, Navigation, and Timing (PNT) applications. “While it is not suggested that the system was designed for that role, the combination of wideband data channels and persistent carrier tones in a globally distributed or even regionally operated network represents a practical foundation for such use, either by friendly forces in contested environments or by third parties seeking situational awareness,” the paper said.

Emissions may have been approved in secret

Tilley told us that a few Starshield satellites launched just recently, in late September, have not emitted signals while moving toward their final orbits. He said this suggests the emissions are for an “operational payload” and not merely for telemetry, tracking, and control (TT&C).

“This could mean that [the newest satellites] don’t have this payload or that the emissions are not part of TT&C and may begin once these satellites achieve their place within the constellation,” Tilley told Ars. “If these emissions are TT&C, you would expect them to be active especially during the early phases of the mission, when the satellites are actively being tested and moved into position within the constellation.”

Whatever they’re for, Reaser said the emissions were likely approved by the NTIA and that the agency would likely have consulted with the Federal Communications Commission. For federal spectrum use, these kinds of decisions aren’t necessarily made public, he said.

“NRO would have to coordinate that through the NTIA to make sure they didn’t have an interference problem,” Reaser said. “And by the way, this happens a lot. People figure out a way [to transmit] on what they call a non-interference basis, and that’s probably how they got this approved. They say, ‘listen, if somebody reports interference, then you have to shut down.’”

Tilley said it’s clear that “persistent S-band emissions are occurring in the 2025–2110 MHz range without formal ITU coordination.” Claims that the downlink use was approved by the NTIA in a non-public decision “underscore, rather than resolve, the transparency problem,” he told Ars.

An NTIA spokesperson declined to comment. The NRO and FCC did not provide any comment in response to requests from Ars.

SpaceX just “a contractor for the US government”

Randall Berry, a Northwestern University professor of electrical and computer engineering, agreed with Reaser that it’s likely the NTIA approved the downlink use of the band and that this decision was not made public. Getting NTIA clearance is “the proper way this should be done,” he said.

“It would be surprising if NTIA was not aware, as Starshield is a government-operated system,” Berry told Ars. While NASA and other agencies use the band for Earth-to-space transmissions, “they may have been able to show that the Starshield space-to-Earth signals do not create harmful interference with these Earth-to-space signals,” he said.

There is another potential explanation that is less likely but more sinister. Berry said it’s possible that “SpaceX did not make this known to NTIA when the system was cleared for federal use.” Berry said this would be “surprising and potentially problematic.”

SpaceX rendering of a Starshield satellite. Credit: SpaceX

Tilley doesn’t think SpaceX is responsible for the emissions. While Starshield relies on technology built for the commercial Starlink broadband system of low Earth orbit satellites, Elon Musk’s space company made the Starshield satellites in its role as a contractor for the US government.

“I think [SpaceX is] just operating as a contractor for the US government,” Tilley said. “They built a satellite to the government specs provided for them and launched it for them. And from what I understand, the National Reconnaissance Office is the operator.”

SpaceX did not respond to a request for comment.

TV broadcasters conduct interference analysis

TV broadcasters with news trucks that use the same frequencies “protect their band vigorously” and would have reported interference if it was affecting their transmissions, Reaser said. This type of spectrum use is known as Electronic News Gathering (ENG).

The National Association of Broadcasters told Ars that it “has been closely tracking recent reports concerning satellite downlink operation in the 2025–2110 MHz frequency band… While it’s not clear that satellite downlink operations are authorized by international treaty in this range, such operations are uncommon, and we are not aware of any interference complaints related to downlink use.”

The NAB investigated after Tilley’s report. “When the Tilley report first surfaced, NAB conducted an interference analysis—based on some assumptions given that Starshield’s operating parameters have not been publicly disclosed,” the group told us. “That analysis found that interference with ENG systems is unlikely. We believe the proposed downlink operations are likely compatible with broadcaster use of the band, though coordination issues with the International Telecommunication Union (ITU) could still arise.”

Tilley said that a finding of interference being unlikely “addresses only performance, not legality… coordination conducted only within US domestic channels does not meet international requirements under the ITU Radio Regulations. This deployment is not one or two satellites, it is a distributed constellation of hundreds of objects with potential global implications.”

Canada agency: No coordination with ITU or US

When contacted by Ars, an ITU spokesperson said the agency is “unable to provide any comment or additional information on the specific matter referenced.” The ITU said that interference concerns “can be formally raised by national administrations” and that the ITU’s Radio Regulations Board “carefully examines the specifics of the case and determines the most appropriate course of action to address it in line with ITU procedures.”

The Canadian Space Agency (CSA) told Ars that its “missions operating within the frequency band have not yet identified any instances of interference that negatively impact their operations and can be attributed to the referenced emissions.” The CSA indicated that there hasn’t been any coordination with the ITU or the US over the new emissions.

“To date, no coordination process has been initiated for the satellite network in question,” the CSA told Ars. “Coordination of satellite networks is carried out through the International Telecommunication Union (ITU) Radio Regulation, with Innovation, Science and Economic Development Canada (ISED) serving as the responsible national authority.”

The European Space Agency also uses the 2025–2110 MHz band for TT&C. We contacted the agency but did not receive any comment.

The lack of coordination “remains the central issue,” Tilley told Ars. “This band is globally allocated for Earth-to-space uplinks and limited space-to-space use, not continuous space-to-Earth transmissions.”

NASA needs protection from interference

An NTIA spectrum-use report updated in 2015 said NASA “operates earth stations in this band for tracking and command of manned and unmanned Earth-orbiting satellites and space vehicles either for Earth-to-space links for satellites in all types of orbits or through space-to-space links using the Tracking Data and Relay Satellite System (TDRSS). These earth stations control ninety domestic and international space missions including the Space Shuttle, the Hubble Space Telescope, and the International Space Station.”

Additionally, the NOAA “operates earth stations in this band to control the Geostationary Operational Environmental Satellite (GOES) and Polar Operational Environmental Satellite (POES) meteorological satellite systems,” which collect data used by the National Weather Service. We contacted NASA and NOAA, but neither agency provided comment to Ars.

NASA’s use of the band has increased in recent years. The NTIA told the FCC in 2021 that 2025–2110 MHz is “heavily used today and require[s] extensive coordination even among federal users.” The band “has seen dramatically increased demand for federal use as federal operations have shifted from federal bands that were repurposed to accommodate new commercial wireless broadband operations.”

A 2021 NASA memo included in the filing said that NASA would only support commercial launch providers using the band if their use was limited to sending commands to launch vehicles for recovery and retrieval purposes. Even with that limit, commercial launch providers would cause “significant interference” for existing federal operations in the band if the commercial use isn’t coordinated through the NTIA, the memo said.

“NASA makes extensive use of this band (i.e., currently 382 assignments) for both transmissions from earth stations supporting NASA spacecraft (Earth-to-space) and transmissions from NASA’s Tracking and Data Relay Satellite System (TDRSS) to user spacecraft (space-to-space), both of which are critical to NASA operations,” the memo said.

In 2024, the FCC issued an order allowing non-federal space launch operations to use the 2025–2110 MHz band on a secondary basis. The allocation is “limited to space launch telecommand transmissions and will require commercial space launch providers to coordinate with non-Federal terrestrial licensees… and NTIA,” the FCC order said.

International non-interference rules

While US agencies may not object to the Starshield emissions, that doesn’t guarantee there will be no trouble with other countries. Article 4.4 of ITU regulations says that member nations may not assign frequencies that conflict with the Table of Frequency Allocations “except on the express condition that such a station, when using such a frequency assignment, shall not cause harmful interference to, and shall not claim protection from harmful interference caused by, a station operating in accordance with the provisions.”

Reaser said that under Article 4.4, entities that are caught interfering with other spectrum users are “supposed to shut down.” But if the Starshield users were accused of interference, they would probably “open negotiations with the offended party” instead of immediately stopping the emissions, he said.

“My guess is they were allowed to operate on a non-interference basis and if there is an interference issue, they’d have to go figure a way to resolve them,” he said.

Tilley told Ars that Article 4.4 allows for non-interference use domestically but “is not a blank check for continuous, global downlinks from a constellation.” In that case, “international coordination duties still apply,” he said.

Tilley pointed out that under the Convention on Registration of Objects Launched into Outer Space, states must report the general function of a space object. “Objects believed to be part of the Starshield constellation have been registered with UNOOSA [United Nations Office for Outer Space Affairs] under the broad description: ‘Spacecraft engaged in practical applications and uses of space technology such as weather or communications,’” his paper said.

Tilley told Ars that a vague description such as this “may satisfy the letter of filing requirements, but it contradicts the spirit” of international agreements. He contends that filings should at least state whether a satellite is for military purposes.

“The real risk is that we are no longer dealing with one or two satellites but with massive constellations that, by their very design, are global in scope,” he told Ars. “Unilateral use of space and spectrum affects every nation. As the examples of US and Chinese behavior illustrate, we are beginning from uncertain ground when it comes to large, militarily oriented mega-constellations, and, at the very least, this trend distorts the intent and spirit of international law.”

China’s constellation

Tilley said he has tracked China’s Guowang constellation and its use of “spectrum within the 1250–1300 MHz range, which is not allocated for space-to-Earth communications.” China, he said, “filed advance notice and coordination requests with the ITU for this spectrum but was not granted protection for its non-compliant use. As a result, later Chinese filings notifying and completing due diligence with the ITU omit this spectrum, yet the satellites are using it over other nations. This shows that the Chinese government consulted internationally and proceeded anyway, while the US government simply did not consult at all.”

By contrast, Canada submitted “an unusual level of detail” to the ITU for its military satellite Sapphire and coordinated fully with the ITU, he said.

Tilley said he reported his findings on Starshield emissions “directly to various western space agencies and the Canadian government’s spectrum management regulators” at the ISED.

“The Canadian government has acknowledged my report, and it has been disseminated within their departments, according to a senior ISED director’s response to me,” Tilley said, adding that he is continuing to collaborate “with other researchers to assist in the gathering of more data on the scope and impact of these emissions.”

The ISED told Ars that it “takes any reports of interference seriously and is not aware of any instances or complaints in these bands. As a general practice, complaints of potential interference are investigated to determine both the cause and possible resolutions. If it is determined that the source of interference is not Canadian, ISED works with its regulatory counterparts in the relevant administration to resolve the issue. ISED has well-established working arrangements with counterparts in other countries to address frequency coordination or interference matters.”

Accidental discovery

Antennas used by Scott Tilley. Credit: Scott Tilley

Tilley’s discovery of Starshield signals happened because of “a clumsy move at the keyboard,” he told NPR. “I was resetting some stuff, and then all of a sudden, I’m looking at the wrong antenna, the wrong band,” he said.

People using the spectrum for Earth-to-space transmissions generally wouldn’t have any reason to listen for transmissions on the same frequencies, Tilley told Ars. Satellites using 2025–2110 MHz for Earth-to-space transmissions have their downlink operations on other frequencies, he said.

“The whole reason why I publicly revealed this rather than just quietly sit on it is to alert spacecraft operators that don’t normally listen on this band… that they should perform risk assessments and assess whether their missions have suffered any interference or could suffer interference and be prepared to deal with that,” he said.

A spacecraft operator may not know “a satellite is receiving interference unless the satellite is refusing to communicate with them or asking for the ground station to repeat the message over and over again,” Tilley said. “Unless they specifically have a reason to look or it becomes particularly onerous for them, they may not immediately realize what’s going on. It’s not like they’re sitting there watching the spectrum to see unusual signals that could interfere with the spacecraft.”

While NPR paraphrased Tilley as saying that the transmissions could be “designed to hide Starshield’s operations,” he told Ars that this characterization is “maybe a bit strongly worded.”

“It’s certainly an unusual place to put something. I don’t want to speculate about what the real intentions are, but it certainly could raise a question in one’s mind as to why they would choose to emit there. We really don’t know and probably never will know,” Tilley told us.

How amateurs track Starshield

After finding the signals, Tilley determined they were being sent by Starshield satellites by consulting data collected by amateurs on the constellation. SpaceX launches the satellites into what Tilley called classified orbits, but the space company distributes some information that can be used to track their locations.

For safety reasons, SpaceX publishes “a notice to airmen and sailors that they’re going to be dropping boosters and debris in hazard areas… amateurs use those to determine the orbital plane the launch is going to go into,” Tilley said. “Once we know that, we just basically wait for optical windows when the lighting is good, and then we’re able to pick up the objects and start tracking them and then start cataloguing them and generating orbits. A group of us around the world do that. And over the last year and a half or so since they started launching the bulk of this constellation, the amateurs have amassed [a] considerable body of orbital data on this constellation.”

After accidentally discovering the emissions, Tilley said he used open source software to “compare the Doppler signal I was receiving to the orbital elements… and immediately started coming back with hits to Starshield and nothing else.” He said this means that “the tens of thousands of other objects in orbit didn’t match the radio Doppler characteristics that these objects have.”
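To give a rough sense of how that kind of Doppler matching works, the sketch below predicts the expected Doppler shift of a satellite downlink for a ground observer and compares it against measured frequency offsets. This is only an illustration, not Tilley’s actual toolchain: it assumes the open source skyfield library, and the orbital elements, observer location, center frequency, and “measured” offsets are placeholders.

```python
# Minimal sketch of Doppler matching a received signal against one candidate
# satellite's orbital elements. Assumes the open source `skyfield` library;
# the TLE, observer site, center frequency, and "measured" offsets below are
# placeholders for illustration only.
import numpy as np
from skyfield.api import EarthSatellite, load, wgs84

C_KM_S = 299_792.458            # speed of light in km/s
F_CENTER_HZ = 2.09e9            # hypothetical downlink center frequency (Hz)

ts = load.timescale()
observer = wgs84.latlon(49.0, -124.0, elevation_m=30)   # placeholder site

# Placeholder two-line element set for a single candidate object
TLE1 = "1 99999U 24001A   24200.50000000  .00000000  00000-0  00000-0 0  9995"
TLE2 = "2 99999  70.0000 120.0000 0001000   0.0000 180.0000 14.80000000    11"
sat = EarthSatellite(TLE1, TLE2, "CANDIDATE", ts)

def predicted_offset_hz(t):
    """Expected received-frequency offset (Hz) relative to the carrier."""
    topocentric = (sat - observer).at(t)
    r = topocentric.position.km           # observer-to-satellite vector (km)
    v = topocentric.velocity.km_per_s     # relative velocity (km/s)
    range_rate = np.dot(r, v) / np.linalg.norm(r)   # positive means receding
    return -range_rate / C_KM_S * F_CENTER_HZ

# Sample a pass every 10 seconds for 10 minutes near the TLE epoch.
times = [ts.utc(2024, 7, 18, 12, 0, s) for s in range(0, 600, 10)]

# Stand-in for frequency offsets actually measured off the air.
measured_offsets_hz = np.zeros(len(times))

residuals = [predicted_offset_hz(t) - m
             for t, m in zip(times, measured_offsets_hz)]
rms_hz = float(np.sqrt(np.mean(np.square(residuals))))
print(f"RMS Doppler residual vs. this candidate: {rms_hz:,.0f} Hz")
```

In practice, trackers run this comparison against every object in a catalog of orbital elements and keep the candidates whose predicted Doppler curve closely matches the recorded signal, which is why only the Starshield satellites came back as hits.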

Tilley is still keeping an eye on the transmissions. He told us that “I’m continuing to hear the signals, record them, and monitor developments within the constellation.”

Jon is a Senior IT Reporter for Ars Technica. He covers the telecom industry, Federal Communications Commission rulemakings, broadband consumer affairs, court cases, and government regulation of the tech industry.

Review: New Framework Laptop 16 takes a fresh stab at the upgradeable laptop GPU


framework laptop 16, take two

New components make it more useful and powerful but no less odd.

Credit: Andrew Cunningham

The original Framework Laptop 16 was trying to crack a problem that laptop makers have wrestled with on and off for years: Can you deliver a reasonably powerful, portable workstation and gaming laptop that supports graphics card upgrades just like a desktop PC?

Specs at a glance: Framework Laptop 16 (2025)
  • OS: Windows 11 25H2
  • CPU: AMD Ryzen AI 7 350 (4 Zen 5 cores, 4 Zen 5c cores)
  • RAM: 32GB DDR5-5600 (upgradeable)
  • GPU: AMD Radeon 860M (integrated) / Nvidia GeForce RTX 5070 Mobile (dedicated)
  • SSD: 1TB Western Digital Black SN770
  • Battery: 85 WHr
  • Display: 16-inch 2560×1600 165 Hz matte non-touchscreen
  • Connectivity: 6x recessed USB-C ports (2x USB 4, 4x USB 3.2) with customizable “Expansion Card” dongles
  • Weight: 4.63 pounds (2.1 kg) without GPU, 5.29 pounds (2.4 kg) with GPU
  • Price as tested: roughly $2,649 for the pre-built edition; $2,517 for the DIY edition with no OS

Even in these days of mostly incremental, not-too-exciting GPU upgrades, the graphics card in a gaming PC or graphics-centric workstation will still feel its age faster than your CPU will. And the chance to upgrade that one component for hundreds of dollars instead of spending thousands replacing the entire machine is an appealing proposition.

Upgradeable, swappable GPUs would also make your laptop more flexible—you can pick and choose from various GPUs from multiple vendors based on what you want and need, whether that’s raw performance, power efficiency, Linux support, or CUDA capabilities.

Framework’s first upgrade to the Laptop 16—the company’s first upgrade to any of its products aside from the original Laptop 13—gets us pretty close to that reality. The laptop can now support two interchangeable motherboards: one with an older AMD Ryzen 7040-series CPU and one with a new Ryzen AI 300-series CPU. And both motherboards can be used either with just an integrated GPU or with dedicated GPUs from both AMD and Nvidia.

The Nvidia GeForce 5070 graphics module is the most exciting and significant part of this batch of updates, but there are plenty of other updates and revisions to the laptop’s external and internal components, too. These upgrades don’t address all of our problems with the initial version of the laptop, but they do help quite a bit. And a steady flow of updates like these would definitely make the Laptop 16 a platform worth investing in.

Re-meet the Framework Laptop 16

Framework’s Laptop 13 stacked on top of the 16. Credit: Andrew Cunningham

Framework treats each of its laptops as a platform to be modified and built upon rather than something to be wholly redesigned and replaced every time it’s updated. So these reviews necessarily re-cover ground we have already covered—I’ve also reused some of the photos from last time, since this is quite literally the same laptop in most respects. I’ll point you to the earlier review for detailed notes on the build process and how the laptop is put together.

To summarize our high-level notes about the look, feel, and design of the Framework Laptop 16: While the Framework Laptop 13 can plausibly claim to be in the same size and weight class as portables like the 13-inch MacBook Air, the Framework Laptop 16 is generally larger and heavier than the likes of the 16-inch MacBook Pro or portable PC workstations like the Lenovo ThinkPad P1 or Dell 16 Premium. That’s doubly true once you actually add a dedicated graphics module to the Laptop 16—these protrude a couple of inches from the back of the laptop and add around two-thirds of a pound to its weight.

  • Framework Laptop 16 (no GPU): 0.71 x 14.04 x 10.63 inches (H x W x D), 4.63 lbs
  • Framework Laptop 16 (with GPU): 0.82 x 14.04 x 11.43 inches, 5.29 lbs
  • Apple 16-inch MacBook Pro: 0.66 x 14.01 x 9.77 inches, 4.7–4.8 lbs
  • Dell 16 Premium: 0.75 x 14.1 x 9.4 inches, 4.65 lbs
  • Lenovo ThinkPad P1 Gen 8: 0.39–0.62 x 13.95 x 9.49 inches, 4.06 lbs
  • HP ZBook X G1i: 0.9 x 14.02 x 9.88 inches, 4.5 lbs
  • Lenovo Legion Pro 5i Gen 10: 0.85–1.01 x 14.34 x 10.55 inches, 5.56 lbs
  • Razer Blade 16: 0.59–0.69 x 13.98 x 9.86 inches, 4.71 lbs

You certainly can find laptops from the major PC OEMs that come close to or even exceed the size and weight of the Laptop 16. But in most cases, you’ll find that comparably specced and priced laptops are an inch or two less deep and at least half a pound lighter than the Laptop 16 with a dedicated GPU installed.

But if you’re buying from Framework, you’re probably at least notionally interested in customizing, upgrading, and repairing your laptop over time, all things that Framework continues to do better than any other company.

The Laptop 16’s customizable keyboard deck is still probably its coolest feature—it’s a magnetically attached series of panels that allows you to remove and replace components without worrying about the delicate and finicky ribbon cables the Laptop 13 uses. Practically, the most important aspect of this customizable keyboard area is that it lets you decide whether you want to install a dedicated number pad or not; this also allows you to choose whether you want the trackpad to be aligned with the center of the laptop or with wherever the middle of the keyboard is.

It might look a little rough, but the customizable keyboard deck is still probably the coolest thing about the Laptop 16 in day-to-day use. Credit: Andrew Cunningham

But Framework also sells an assortment of other functional and cosmetic panels and spacers to let users customize the laptop to their liking. The coolest, oddest accessories are still probably the LED matrix spacers and the clear, legend-less keyboard and number pad modules. We still think this assortment of panels gives the system a vaguely unfinished look, but Framework is clearly going for function over form here.

The Laptop 16 also continues to use Framework’s customizable, swappable Expansion Card modules. In theory, these let you pick the number and type of ports your laptop has, as well as customize your port setup on the fly based on what you need. But as with all AMD Ryzen-based Framework Laptops, there are some limits to what each port can do.

According to Framework’s support page, there’s no single Expansion Card slot that is truly universal:

  • Ports 1 and 4 support full 40Gbps USB 4 transfer speeds, display outputs, and up to 240 W charging, but if you use a USB-A Expansion Card in those slots, you’ll increase power use and reduce battery life.
  • Ports 2 and 5 support display outputs, up to 240 W charging, and lower power usage for USB-A ports, but they top out at 10Gbps USB 3.2 transfer speeds. Additionally, port 5 (the middle port on the right side of the laptop, if you’re looking at it head-on) supports the DisplayPort 1.4 standard where the others support DisplayPort 2.1.
  • Ports 3 and 6 are limited to 10Gbps USB 3.2 transfer speeds and don’t support display outputs or charging.

The Laptop 16 also doesn’t include a dedicated headphone jack, so users will need to burn one of their Expansion Card slots to get one.

Practically speaking, most users will be able to come up with a port arrangement that fits their needs, and it’s still handy to be able to add and remove things like Ethernet ports, HDMI ports, or SD card readers on an as-needed basis. But choosing the right Expansion Card slot for the job will still require some forethought, and customizable ports aren’t as much of a selling point for a 16-inch laptop as they are for a 13-inch laptop (the Framework Laptop 13 was partly a response to laptops like the MacBook Air and Dell XPS 13 that only came with a small number of USB-C ports; larger laptops have mostly kept their larger number and variety of ports).

What’s new in 2025’s Framework Laptop 16?

An upgraded motherboard and a new graphics module form the heart of this year’s Laptop 16 upgrade. The motherboard steps up from AMD Ryzen 7040-series processors to AMD Ryzen AI 7 350 and Ryzen AI 9 HX 370 chips. These are the same processors Framework put into the Laptop 13 earlier this year, though they ought to be able to run a bit faster in the Laptop 16 due to its larger heatsink and dual-fan cooling system.

Along with an upgrade from Zen 4-based CPU cores to Zen 5 cores, the Ryzen AI series includes an upgraded neural processing unit (NPU) that is fast enough to earn Microsoft’s Copilot+ PC label. These PCs have access to a handful of unique Windows 11 AI and machine-learning features (yes, Recall, but not just Recall) that are processed locally rather than in the cloud. If you don’t care about these features, you can mostly just ignore them, but if you do care, this is the first version of the Laptop 16 to support them.

Most of the new motherboard’s other specs and features are pretty similar to the first-generation version; there are two SO-DIMM slots for up to 96GB of DDR5-5600, one M.2 2280 slot for the system’s main SSD, and one M.2 2230 slot for a secondary SSD. Wi-Fi 7 and Bluetooth connectivity are provided by an AMD RZ717 Wi-Fi card that can at least theoretically also be replaced with something faster down the line if you want.

The more exciting upgrade, however, may be the GeForce RTX 5070 GPU. This is the first time Framework has offered an Nvidia product—its other GPUs have all come from either Intel or AMD—and it gives the new Laptop 16 access to Nvidia technologies like DLSS and CUDA, as well as much-improved performance for games with ray-traced lighting effects.

Those hoping for truly high-end graphics options for the Laptop 16 will need to keep waiting, though. The laptop version of the RTX 5070 is actually the same chip as the desktop version of the RTX 5060, a $300 graphics card with 8GB of RAM. As much as it adds to the Laptop 16, it still won’t let you come anywhere near 4K in most modern games, and for some, it may even struggle to take full advantage of the internal 165 Hz 1600p screen. Professional workloads (including AI workloads) that require more graphics RAM will also find the mobile 5070 lacking.

Old 180 W charger on top, new 240 W charger on bottom. Credit: Andrew Cunningham

Other components have gotten small updates as well. For those who upgrade an existing Laptop 16 with the new motherboard, Framework is selling 2nd-generation keyboard and number pad components. But their main update over the originals is new firmware that “includes a fix to prevent the system from waking while carried in a bag.” Owners of the original keyboard can install a firmware update to get the same functionality (and make their input modules compatible with the new board).

Upgraders should also note that the original system’s 180 W power adapter has been replaced with a 240 W model, the maximum amount of power that current USB-C and USB-PD standards are capable of delivering. You can charge the laptop with just about any USB-C power brick, but anything lower than 240 W risks reducing performance (or having the battery drain faster than it can charge).

Finally, the laptop uses a second-generation 16-inch, 2560×1600, 165 Hz LCD screen. It’s essentially identical to the first-generation screen except that it formally supports G-Sync, Nvidia’s adaptive sync technology. The original screen can still be used with the new motherboard, but it only supports AMD’s FreeSync, and Framework told us a few months ago that the panel supplier had no experience providing consumer-facing firmware updates that might add G-Sync support to the old display. It’s probably not worth replacing the entire screen for, but it’s worth knowing about whether you’re upgrading the laptop or buying a new one.

Performance

Framework sent us the lower-end Ryzen AI 7 350 processor configuration for our new board, making it difficult to do straightforward apples-to-apples comparisons to the high-end Ryzen 9 7940HS in our first-generation Framework board. We did test the new chip, and you’ll see its results in our charts.

We’ve also provided numbers from the Ryzen AI 9 HX 370 in the Asus Zenbook S16 UM5606W to show approximately where you can expect the high-end Framework Laptop 16 configuration to land (Framework’s integrated graphics performance will be marginally worse since it’s using slower socketed RAM rather than LPDDR5X; other numbers may differ based on how each manufacturer has configured the chip’s power usage and thermal behavior). We’ve also included numbers from the same chip in the Framework Laptop 13, though Framework’s spec sheets indicate that the chips have different power limits and thus will perform differently.

We were able to test the new GeForce GPU in multiple configurations—both paired with the new Ryzen AI 7 350 processor and with the old Ryzen 9 7940HS chip. This should give anyone who bought the original Laptop 16 an idea of what kind of performance increase they can expect from the new GPU alone. In all, we’ve tested or re-tested:

  • The Ryzen 9 7940HS CPU from the first-generation Laptop 16 and its integrated Radeon 780M GPU
  • The Ryzen 9 7940HS and the original Radeon RX 7700S GPU module
  • The Ryzen 9 7940HS and the new GeForce RTX 5070 GPU module, for upgraders who only want to grab the new GPU
  • The Ryzen AI 7 350 CPU and the GeForce RTX 5070 GPU

We also did some light testing on the Radeon 860M integrated GPU included with the Ryzen AI 7 350.

All the Laptop 16 performance tests were run with Windows’ Best Performance power preset enabled, which will slightly boost performance at the expense of power efficiency.

Given all of those hardware combinations, we simply ran out of time to test the new motherboard with the old Radeon RX 7700S GPU, even though Framework continues to sell that module and it remains a realistic combination of components. But our RTX 5070 testing suggests that these GPUs will perform pretty much the same regardless of which CPU you pair them with.

If you’re buying the cheaper Laptop 16 with the Ryzen AI 7 350, the good news is that it generally performs at least as well as—and usually a bit better than—the high-end Ryzen 9 7940HS from the last-generation model. Performance is also pretty similar to the Ryzen AI 9 HX 370 in smaller, thinner laptops—the extra power and cooling capacity in the Laptop 16 is paying off here. People choosing between a PC and a Mac should note that none of these Ryzen chips come anywhere near the M4 Pro used in comparably priced 16-inch MacBook Pros, but that’s just where the PC ecosystem is these days.

How big an upgrade the GeForce 5070 will be depends on the game you’re playing. In titles like Borderlands 3 that naturally run a bit better on AMD’s GPUs, there’s not much of a difference at all. In games like Cyberpunk 2077 with heavy ray-tracing effects enabled, the mobile RTX 5070 can be nearly twice as fast as the RX 7700S.

Most games will fall somewhere in between those two extremes; our tests show that the improvements hover between 20 and 30 percent most of the time, just a shade less than the 30 to 40 percent improvement that Framework claimed in its original announcement.

Beyond raw performance, the other thing you get with an Nvidia GPU is access to a bunch of important proprietary technologies like DLSS upscaling and CUDA—these technologies are often better and more widely supported than the equivalent technologies that AMD’s or Intel’s GPUs use, thanks in part to Nvidia’s overall dominance of the dedicated GPU market.

In the tests we’ve run on them, the Radeon 860M and 890M are both respectable integrated GPUs (the lower-end 860M typically falls just short of last generation’s top-end 780M, but it’s very close). They’re never able to provide more than a fraction of the Radeon RX 7700S’s performance, let alone the RTX 5070, but they’ll handle a lot of lighter games at 1080p. I would not buy a system this large or heavy just to use it with an integrated GPU.

Better to be unique than perfect

It’s expensive and quirky, but the Framework Laptop 16 is worth considering because it’s so different from what most other laptop makers are doing. Credit: Andrew Cunningham

Our original Framework Laptop 16 review called it “fascinating but flawed,” and the parts that made it flawed haven’t really changed much over the last two years. It’s still relatively large and heavy; the Expansion Card system still makes less sense in a larger laptop than it does in a thin-and-light; the puzzle-like grid of input modules and spacers looks kind of rough and unfinished.

But the upgrades do help to shift things in the Laptop 16’s favor. Its modular and upgradeable design was always a theoretical selling point, but the laptop now actually offers options that other laptops don’t.

The presence of both AMD and Nvidia GPUs is a big step up in flexibility for both gaming and professional applications. The GeForce module is a better all-around choice, with slightly to significantly faster game performance and proprietary technologies like DLSS and CUDA, while the Radeon GPU is a cheaper option with better support for Linux.

Given their cost, I still wish that these GPUs were more powerful—they run between $350 and $449 for the Radeon RX 7700S and between $650 and $699 for the RTX 5070 (prices vary a bit and are cheaper when you buy them together with a new laptop rather than separately). You’ll basically always spend more for a gaming laptop than you will for a gaming desktop with similar or better performance, but that does feel like an awful lot to spend for GPUs that are still limited to 8GB of RAM.

Cost is a major issue for the Laptop 16 in general. You may save money in the long run by buying a laptop that you can replace piece-by-piece as you need to rather than all at once. But it’s not even remotely difficult to find similar specs from the major PC makers for hundreds of dollars less. We can’t vouch for the build quality or longevity of any of those PCs, but it does mean that you have to be willing to pay an awful lot just for Framework’s modularity and upgradeability. That’s true to some degree of the Laptop 13 as well, but the price gap between the 13 and competing systems isn’t as large as it is for the 16.

Whatever its lingering issues, the Framework Laptop 16 is still worth considering because there’s nothing else quite like it, at least if you’re in the market for something semi-portable and semi-powerful. The MacBook Pro exists if you want something more appliance-like, and there’s a whole spectrum of gaming and workstation PCs in between with all kinds of specs, sizes, and prices. To stand out from those devices, it’s probably better to be unique than to be perfect, and the reformulated Laptop 16 certainly clears that bar.

The good

  • Modular, repairable, upgradeable design that’s made to last
  • Cool, customizable keyboard deck
  • Nvidia GeForce GPU option gives the Laptop 16 access to some gaming and GPU computing features that weren’t usable with AMD GPUs
  • GPU upgrade can be added to first-generation Framework Laptop 16
  • New processors are a decent performance improvement and are worth considering for new buyers
  • Old Ryzen 7040-series motherboard is sticking around as an entry-level option, knocking $100 off the former base price ($1,299 and up for a barebones DIY edition, $1,599 and up for the cheapest pre-built)
  • Framework’s software support has gotten better in the last year

The bad

  • Big and bulky for the specs you get
  • Mix-and-match input modules and spacers give it a rough, unfinished sort of look
  • Ryzen AI motherboards are more expensive than the originals were when they launched

The ugly

  • It’ll cost you—the absolute bare minimum price for the Ryzen AI 7 350 and RTX 5070 combo is $2,149, and that’s without RAM, an SSD, or an operating system

Andrew is a Senior Technology Reporter at Ars Technica, with a focus on consumer tech including computer hardware and in-depth reviews of operating systems like Windows and macOS. Andrew lives in Philadelphia and co-hosts a weekly book podcast called Overdue.

You won’t believe the excuses lawyers have after getting busted for using AI


I got hacked; I lost my login; it was a rough draft; toggling windows is hard.

Credit: Aurich Lawson | Getty Images

Amid what one judge called an “epidemic” of fake AI-generated case citations bogging down courts, some common excuses are emerging from lawyers hoping to dodge the most severe sanctions for filings deemed misleading.

Using a database compiled by French lawyer and AI researcher Damien Charlotin, Ars reviewed 23 cases where lawyers were sanctioned for AI hallucinations. In many, judges noted that the simplest path to avoiding or diminishing sanctions was to admit AI use as soon as it was detected, act humble, self-report the error to relevant legal associations, and voluntarily take classes on AI and the law. But not every lawyer takes the path of least resistance, Ars’ review found; many instead offered excuses that no judge found credible, and some even lied about their AI use, judges concluded.

Since 2023—when fake AI citations started being publicized—the most popular excuse has been that the lawyer didn’t know AI was used to draft a filing.

Sometimes that means arguing that you didn’t realize you were using AI, as in the case of a California lawyer who got stung by Google’s AI Overviews, which he claimed he took for typical Google search results. Most often, lawyers using this excuse blame an underling, but clients have been blamed, too. A Texas lawyer was sanctioned this month after deflecting so much that the court eventually had to put his client on the stand, once he revealed that she had played a significant role in drafting the errant filing.

“Is your client an attorney?” the court asked.

“No, not at all your Honor, just was essentially helping me with the theories of the case,” the lawyer said.

Another popular dodge comes from lawyers who feign ignorance that chatbots are prone to hallucinating facts.

Recent cases suggest this excuse may be mutating into variants. Last month, a sanctioned Oklahoma lawyer admitted that he didn’t expect ChatGPT to add new citations when all he asked the bot to do was “make his writing more persuasive.” And in September, a California lawyer got in a similar bind—and was sanctioned a whopping $10,000, a fine the judge called “conservative.” That lawyer had asked ChatGPT to “enhance” his briefs, “then ran the ‘enhanced’ briefs through other AI platforms to check for errors,” neglecting to ever read the “enhanced” briefs.

Neither of those tired old excuses holds much weight today, especially in courts that have drawn up guidance to address AI hallucinations. But rather than quickly acknowledge their missteps, as courts are begging lawyers to do, several lawyers appear to have gotten desperate. Ars found a number of them blaming common tech issues for the fake cases in their filings.

When in doubt, blame hackers?

For an extreme case, look to a New York City civil court, where a lawyer, Innocent Chinweze, first admitted to using Microsoft Copilot to draft an errant filing, then bizarrely pivoted to claim that the AI citations were due to malware found on his computer.

Chinweze said he had created a draft with correct citations but then got hacked, allowing bad actors “unauthorized remote access” to supposedly add the errors in his filing.

The judge was skeptical, describing the excuse as an “incredible and unsupported statement,” particularly since there was no evidence of the prior draft existing. Instead, Chinweze asked to bring in an expert to testify that the hack had occurred, requesting to end the proceedings on sanctions until after the court weighed the expert’s analysis.

The judge, Kimon C. Thermos, didn’t have to weigh this argument, however, because after the court broke for lunch, the lawyer once again “dramatically” changed his position.

“He no longer wished to adjourn for an expert to testify regarding malware or unauthorized access to his computer,” Thermos wrote in an order issuing sanctions. “He retreated” to “his original position that he used Copilot to aid in his research and didn’t realize that it could generate fake cases.”

Possibly more galling to Thermos than the lawyer’s weird malware argument, though, was a document that Chinweze filed on the day of his sanctions hearing. That document included multiple summaries preceded by this text, the judge noted:

Some case metadata and case summaries were written with the help of AI, which can produce inaccuracies. You should read the full case before relying on it for legal research purposes.

Thermos admonished Chinweze for continuing to use AI recklessly. He blasted the filing as “an incoherent document that is eighty-eight pages long, has no structure, contains the full text of most of the cases cited,” and “shows distinct indications that parts of the discussion/analysis of the cited cases were written by artificial intelligence.”

Ultimately, Thermos ordered Chinweze to pay $1,000, the most typical fine lawyers received in the cases Ars reviewed. The judge then took an extra non-monetary step to sanction Chinweze, referring the lawyer to a grievance committee, “given that his misconduct was substantial and seriously implicated his honesty, trustworthiness, and fitness to practice law.”

Ars could not immediately reach Chinweze for comment.

Toggling windows on a laptop is hard

In Alabama, an attorney named James A. Johnson made an “embarrassing mistake,” he said, primarily because toggling windows on a laptop is hard, US District Judge Terry F. Moorer noted in an October order on sanctions.

Johnson explained that he had accidentally used an AI tool that he didn’t realize could hallucinate. It happened while he was “at an out-of-state hospital attending to the care of a family member recovering from surgery.” He rushed to draft the filing, he said, because he got a notice that his client’s conference had suddenly been “moved up on the court’s schedule.”

“Under time pressure and difficult personal circumstance,” Johnson explained, he decided against using Fastcase, a research tool provided by the Alabama State Bar, to research the filing. Working on his laptop, he opted instead to use “a Microsoft Word plug-in called Ghostwriter Legal” because “it appeared automatically in the sidebar of Word while Fastcase required opening a separate browser to access through the Alabama State Bar website.”

To Johnson, it felt “tedious to toggle back and forth between programs on [his] laptop with the touchpad,” and that meant he “unfortunately fell victim to the allure of a new program that was open and available.”

Moorer seemed unimpressed by Johnson’s claim that he understood tools like ChatGPT were unreliable but didn’t expect the same from other AI legal tools—particularly since “information from Ghostwriter Legal made it clear that it used ChatGPT as its default AI program,” Moorer wrote.

The lawyer’s client was similarly horrified, deciding to drop Johnson on the spot, even though that risked “a significant delay of trial.” Moorer noted that Johnson seemed shaken by his client’s abrupt decision, evidenced by “his look of shock, dismay, and display of emotion.”

Moorer further noted that Johnson had been paid using public funds while seemingly letting AI do his homework. “The harm is not inconsequential as public funds for appointed counsel are not a bottomless well and are [a] limited resource,” the judge wrote in justifying a more severe fine.

“It has become clear that basic reprimands and small fines are not sufficient to deter this type of misconduct because if it were, we would not be here,” Moorer concluded.

Ruling that Johnson’s reliance on AI was “tantamount to bad faith,” Moorer imposed a $5,000 fine. The judge also would have “considered potential disqualification, but that was rendered moot” since Johnson’s client had already dismissed him.

Asked for comment, Johnson told Ars that “the court made plainly erroneous findings of fact and the sanctions are on appeal.”

Plagued by login issues

As a lawyer in Georgia tells it, sometimes fake AI citations may be filed because a lawyer accidentally filed a rough draft instead of the final version.

Other lawyers claim they turn to AI as needed when they have trouble accessing legal tools like Westlaw or LexisNexis.

For example, in Iowa, a lawyer told an appeals court that she regretted relying on “secondary AI-driven research tools” after experiencing “login issues with her Westlaw subscription.” Although the court was “sympathetic to issues with technology, such as login issues,” the lawyer was sanctioned, primarily because she only admitted to using AI after the court ordered her to explain her mistakes. In her case, however, she got to choose between paying a minimal $150 fine or attending “two hours of legal ethics training particular to AI.”

Less sympathetic was a lawyer who got caught lying about the AI tool she blamed for inaccuracies, a Louisiana case suggested. In that case, a judge demanded to see the research history after a lawyer claimed that AI hallucinations came from “using Westlaw Precision, an AI-assisted research tool, rather than Westlaw’s standalone legal database.”

It turned out that the lawyer had outsourced the research, relying on a “currently suspended” lawyer’s AI citations, and had only “assumed” the lawyer’s mistakes were from Westlaw’s AI tool. It’s unclear what tool was actually used by the suspended lawyer, who likely lost access to a Westlaw login, but the judge ordered a $1,000 penalty after the lawyer who signed the filing “agreed that Westlaw did not generate the fabricated citations.”

Judge warned of “serial hallucinators”

Another lawyer, William T. Panichi in Illinois, has been sanctioned at least three times, Ars’ review found.

In response to his initial penalties ordered in July, he admitted to being tempted by AI while he was “between research software.”

In that case, the court was frustrated to find that the lawyer had contradicted himself, and it ordered more severe sanctions as a result.

Panichi “simultaneously admitted to using AI to generate the briefs, not doing any of his own independent research, and even that he ‘barely did any personal work [him]self on this appeal,’” the court order said, while also defending charging a higher fee—supposedly because this case “was out of the ordinary in terms of time spent” and his office “did some exceptional work” getting information.

The court deemed this AI misuse so bad that Panichi was ordered to disgorge a “payment of $6,925.62 that he received” in addition to a $1,000 penalty.

“If I’m lucky enough to be able to continue practicing before the appellate court, I’m not going to do it again,” Panichi told the court in July, just before getting hit with two more rounds of sanctions in August.

Panichi did not immediately respond to Ars’ request for comment.

When AI-generated hallucinations are found, penalties are often paid to the court, the other parties’ lawyers, or both, depending on whose time and resources were wasted fact-checking fake cases.

Lawyers seem more likely to argue against paying sanctions to the other parties’ attorneys, hoping to keep sanctions as low as possible. One lawyer even argued that “it only takes 7.6 seconds, not hours, to type citations into LexisNexis or Westlaw,” while seemingly neglecting the fact that she did not take those precious seconds to check her own citations.

The judge in the case, Nancy Miller, was clear that “such statements display an astounding lack of awareness of counsel’s obligations,” noting that “the responsibility for correcting erroneous and fake citations never shifts to opposing counsel or the court, even if they are the first to notice the errors.”

“The duty to mitigate the harms caused by such errors remains with the signor,” Miller said. “The sooner such errors are properly corrected, either by withdrawing or amending and supplementing the offending pleadings, the less time is wasted by everyone involved, and fewer costs are incurred.”

Texas US District Judge Marina Garcia Marmolejo agreed, explaining that even more time is wasted determining how other judges have responded to fake AI-generated citations.

“At one of the busiest court dockets in the nation, there are scant resources to spare ferreting out erroneous AI citations in the first place, let alone surveying the burgeoning caselaw on this subject,” she said.

At least one Florida court was “shocked, shocked” to find that a lawyer was refusing to pay what the other party’s attorneys said they were owed after misusing AI. The lawyer in that case, James Martin Paul, asked to pay less than a quarter of the fees and costs owed, arguing that Charlotin’s database showed he might otherwise owe penalties that “would be the largest sanctions paid out for the use of AI generative case law to date.”

But caving to Paul’s arguments “would only benefit serial hallucinators,” the Florida court found. Ultimately, Paul was sanctioned more than $85,000 for what the court said was “far more egregious” conduct than other offenders in the database, chastising him for “repeated, abusive, bad-faith conduct that cannot be recognized as legitimate legal practice and must be deterred.”

Paul did not immediately respond to Ars’ request to comment.

Michael B. Slade, a US bankruptcy judge in Illinois, seems to be done weighing excuses, calling on all lawyers to stop taking AI shortcuts that are burdening courts.

“At this point, to be blunt, any lawyer unaware that using generative AI platforms to do legal research is playing with fire is living in a cloud,” Slade wrote.

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.
