

Removing the weakest link in electrified, autonomous transport: humans


Hands-off charging could open the door to a revolution in autonomous freight.

An electric car charger plugs itself into a driverless cargo truck.

Driverless truck meets robot EV charger in Sweden as Einride and Rocsys work together. Credit: Einride and Rocsys

Thanks to our new global tariff war, the wild world of importing and exporting has been thrust to the forefront. There’s a lot of logistics involved in keeping your local Walmart stocked and your Amazon Prime deliveries arriving on time, and you might be surprised at how much of that world has already been automated.

While cars from autonomy providers like Waymo are still extremely rare on most stretches of open road, the process of loading and unloading cargo has become almost entirely automated at some major ports around the world. Likewise, there’s an increasing shift to electrify the various vehicles involved along the way, eliminating a significant source of global emissions.

But there’s been one sticking point in this automated, electrified logistical dream: plugging in. The humble act of charging still happens via human hands, but that’s changing. At a testing facility in Sweden, a company called Rocsys has demonstrated an automated charger that works with self-driving electric trucks from Einride in a hands-free and emissions-free partnership that could save time, money, and even lives.

People-free ports

Shipping ports are pretty intimidating places. Towering cranes stand 500 feet above the ground, swinging 30-ton cargo crates into the air, endlessly moving them from giant ships to holding pens before turning around to send off the next set of shipments.

A driverless truck heads out onto the road from a light industrial estate

This is Einride’s autonomous cargo truck. Credit: Einride

That cargo is then loaded onto container handlers that operate exclusively within the confines of the port, bringing the crates closer to the roads or rail lines that will take them further. They’re stacked again until the arrival of their next ride, semi-trucks for cargo about to hit the highway or empty rail cars for anything train-bound.

Believe it or not, that entire process happens autonomously at some of the most advanced ports in the world. “The APM terminal in Rotterdam port is, I would say, in the top three of the most advanced terminals in the world. It’s completely automated. There are hardly any people,” Crijn Bouman, the CEO and co-founder of Rocsys, said.

Eliminating the human factor at facilities like ports reduces cost and increases safety at a workplace that is, according to the CDC, five times more dangerous than average. But the one link in the chain that hasn’t been automated is recharging.

Those cargo haulers may be able to drive themselves to the charger, but they still can’t plug themselves in. They need a little help, and that’s where Rocsys comes in.

The person-free plug

The genesis of Rocsys came in 2017, when cofounder Bouman visited a fledgling robotaxi operator in the Bay Area.

“The vehicles were driving themselves, but after a couple of test laps, they would park themselves in the corner, and a person would walk over and plug them in,” Bouman said.

Bouman wouldn’t tell me which autonomy provider was operating the place, but he was surprised to see that the company was focused only on the wildly complex task of shuttling people from place to place on open roads. Meanwhile, the seemingly simple act of plugging and unplugging was handled exclusively by human operators.

A Rocsys charging robot extends its plug towards the EV charge port of a cargo truck.

No humans required. Credit: Einride and Rocsys

Fast-forward eight years, and Netherlands-based Rocsys now has more than 50 automated chargers deployed globally, with a goal of installing thousands more. While the company is targeting robotaxi operators for its automated charging solution, initial interest has come primarily from port and fleet operators as those businesses become increasingly electrified.

Bouman calls Rocsys’s roboticized charger a “robotic steward,” a charming moniker for an arm that sticks a plug in a hole. But it’s all more complicated than that, of course.

The steward relies on an AI- and vision-based system to move an arm holding the charger plug. That arm offers six degrees of freedom and, thanks to the wonders of machine learning, largely trains itself to interface with new cars and new chargers.

It can reach high and low enough and at enough angles to cover everything from consumer cars to commercial trucks. It even works with plugs of all shapes and sizes.

The biggest complication? Manual charging flaps on some consumer cars. This has necessitated a little digital extension to the steward’s robotic arm. “We’ll have sort of a finger assembly to open the charge port cover, connect the plug, and also the system can close it. So no change to the vehicle,” Bouman said.

A Rocsys charging robot extends its plug towards the EV charge port of a cargo truck.

Manually opening charge port covers complicates things a bit. Credit: Einride and Rocsys

That said, Bouman hopes manufacturers will ditch manual charge port covers and switch to powered, automatic ones in future vehicles.

Automating the autonomous trucks

Plenty of companies around the globe are promising to electrify trucking, from medium-duty players like Harbinger to the world’s largest piece of rolling vaporware, the Tesla Semi. Few are actually operating the things, though.

Stockholm-based Einride is one of those companies. Its electric trucks are making deliveries every day, taking things a step further by removing the driver from the equation.

The wild-looking, cab-less autonomous electric transport (AET) vehicles, which would not look out of place thundering down the highway in any science-fiction movie, are self-driving in most situations. But they do have a human backup in the form of operators at what Einride’s general manager of autonomous technology, Henrik Green, calls control towers.

Here, operators can oversee multiple trucks, ensuring safe operation and handling any unexpected happenings on the road. In this way, a single person can manage multiple trucks from afar, connecting to a given vehicle only when it requires manual intervention.

“The more vehicles we can use with the same workforce of people, the higher the efficiency,” he said.

Green said Einride has multiple remote control towers overseeing the company’s pilot deployments. Here in the US, Einride has been running a route at GE Appliances’ Selmer, Tennessee, facility, where autonomous forklifts load cargo onto the autonomous trucks for hands-off hauling of your next refrigerator.

A woman monitors a video feed of an autonomous truck. A younger woman stands to her side.

The trucks are overseen remotely. Credit: Einride

Right now, the AETs must be manually plugged in by an on-site operator. It’s a minor task, but Green said that automating this process could be game-changing.

“There are, surprisingly, a lot of trucks today that are standing still or running empty,” Green said. Part of this comes down to poor logistical planning, but a lot is due to the human factor. “With automated electric trucks, we can make the transportation system more sustainable, more efficient, more resilient, and absolutely more safe.”

Getting humans out of the loop could result in Einride’s machines operating 24/7, only pausing to top off their batteries.

Self-charging, self-driving trucks could also help open the door to longer-distance deliveries without having to saddle them with giant batteries. Even with regular charging stops, these trucks could operate at a higher utilization than human-driven machines, which can only run for as long as their operators are legally or physically able to.
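
To put rough numbers on that utilization gap, here is a minimal back-of-envelope sketch. The 11-hour figure is the standard US hours-of-service driving limit for a single human driver; the charging downtime is a purely hypothetical placeholder, not an Einride figure.

```python
# Back-of-envelope utilization comparison (illustrative assumptions only).
# The 11-hour cap is the US FMCSA daily driving limit for one driver; the
# charging downtime below is a hypothetical placeholder, not an Einride number.

HOURS_PER_DAY = 24

human_driving_hours = 11        # legal daily driving limit for a single driver
assumed_charging_hours = 4      # hypothetical: a few fast-charge stops per day

autonomous_hours = HOURS_PER_DAY - assumed_charging_hours

print(f"Human-driven truck:  {human_driving_hours} h/day "
      f"({human_driving_hours / HOURS_PER_DAY:.0%} utilization)")
print(f"Autonomous EV truck: {autonomous_hours} h/day "
      f"({autonomous_hours / HOURS_PER_DAY:.0%} utilization)")
print(f"Rough gain: {autonomous_hours / human_driving_hours:.1f}x more hours on the road")
```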

That could result in significant cost savings for businesses, and, since everything is electric, the environmental potential is strong, too.

“Around seven percent of the world’s global CO2 footprint today comes from land transportation, which is what we are addressing with electric heavy-duty transportation,” Green said.

Integrations and future potential

This first joining of a Rocsys robotic steward and an Einride AET took place at the AstaZero proving ground in Sandhult, Sweden, an automation test facility that has been a safe playground for driverless vehicles of all shapes and sizes for over a decade.

This physical connection between Rocsys and Einride is a small step, with one automated charger connected to one automated truck, compared to the nearly three million diesel-powered semis droning around our highways in the United States alone. But you have to start somewhere, and while bringing this technology to more open roads is the goal, closed logistics centers and ports are a great first step.

“The use case is simpler,” Bouman said. “There are no cats and dogs jumping, or children, or people on bicycles.”

And how complicated was it to connect Einride’s systems to those of the Rocsys robotic steward? Green said the software integration with the Rocsys system was straightforward but that “some adaptations” were required to make Einride’s machine compatible. “We had to make a ‘duct tape solution’ for this particular demo,” Green said.

Applying duct tape, at least, seems like a safe job for humans for some time to come.


Sierra made the games of my childhood. Are they still fun to play?


Get ready for some nostalgia.

My Ars colleagues were kicking back at the Orbital HQ water cooler the other day, and—as gracefully aging gamers are wont to do—they began to reminisce about classic Sierra On-Line adventure games. I was a huge fan of these games in my youth, so I settled in for some hot buttered nostalgia.

Would we remember the limited-palette joys of early King’s Quest, Space Quest, or Quest for Glory titles? Would we branch out beyond games with “Quest” in their titles, seeking rarer fare like Freddy Pharkas: Frontier Pharmacist? What about the gothic stylings of The Colonel’s Bequest or the voodoo-curious Gabriel Knight?

Nope. The talk was of acorns. [Bleeping] acorns, in fact.

The scene in question came from King’s Quest III, where our hero Gwydion must acquire some exceptionally desiccated acorns to advance the plot. It sounds simple enough. As one walkthrough puts it, “Go east one screen and north one screen to the acorn tree. Try picking up acorns until you get some dry ones. Try various spots underneath the tree.” Easy! And clear!

Except it wasn’t either one because the game rather notoriously won’t always give you the acorns, even when you enter the right command. This led many gamers to believe they were in the wrong spot, when in reality, they just had to keep entering the “get acorns” command while moving pixel by pixel around the tree until the game finally supplied them. One of our staffers admitted to having purchased the King’s Quest III hint book solely because of this “puzzle.” (The hint book, which is now online, says that players should “move around” the particular oak tree in question because “you can only find the right kind of acorns in one spot.”)

This wasn’t quite the “fun” I had remembered from these games, but as I cast my mind back, I dimly recalled similar situations. Space Quest II: Vohaul’s Revenge had been my first Sierra title. After my brother and I spent weeks on the game only to die repeatedly in some pitch-dark tunnels, we implored my dad to call Sierra’s 1-900 pay hint line. He thought about it. I could see it pained him because he had never before (and never since!) called a 1-900 number. In this case, the call cost a piratical 75 cents for the first minute and 50 cents for each additional minute. After listening to us whine for several days straight, my dad decided that his sanity was worth the fee, and he called.

Like the acorn example above, we had known what to do—we had just not done it to the game’s rather exacting standards. The key was to use a glowing gem as a light source, which my brother and I had long understood. The problem was the text parser, which demanded that we “put gem in mouth” to use the gem’s light in the tunnels. There was no other place to put the gem, no other way to hold or attach it. (We tried them all.) No other attempt to use the light of this shining crystal, no matter how clear, well-intentioned, or succinctly expressed, would work. You put the gem in your mouth, or you died in the darkness.

Returning from my reveries to the conversation at hand, I caught Ars Senior Editor Lee Hutchinson’s cynical remark that these kinds of puzzles were “the only way to make 2–3 hours of ‘game’ last for months.” This seemed rather shocking, almost offensive. How could one say such a thing about the games that colored my memories of childhood?

So I decided to replay Space Quest II for the first time in 35 years in an attempt to defend my own past.

Big mistake.

Space Quest II screenshot.

We’re not on Endor anymore, Dorothy.

Play it again, Sam

In my memory, the Space Quest series was filled with sharply written humor, clever puzzles, and enchanting art. But when I fired up the original version of the game, I found that only one of these was true. The art, despite its blockiness and limited colors, remained charming.

As for the gameplay, the puzzles were not so much “clever” as “infuriating,” “obvious,” or (more often) “rather obscure.”

Finding the glowing gem discussed above requires you to swim into one small spot of a multi-screen river, with no indication in advance that anything of importance is in that exact location. Trying to “call” a hunter who has captured you does nothing… until you do it a second time. And the less said about trying to throw a puzzle at a Labian Terror Beast, typing out various word permutations while death bears down upon you, the better.

The whole game was also filled with far more no-warning insta-deaths than I had remembered. On the opening screen, for instance, after your janitorial space-broom floats off into the cosmic ether, you can walk your character right off the edge of the orbital space station he is cleaning. The game doesn’t stop you; indeed, it kills you and then mocks you for “an obvious lack of common sense.” It then calls you a “wing nut” with an “inability to sustain life.” Game over.

The game’s third screen, which offers nothing to do beyond simply walking around, will also kill you in at least two different ways. Walk into the room still wearing your spacesuit and your boss will come over and chew you out. Game over.

If you manage to avoid that fate by changing into your indoor uniform first, it’s comically easy to tap the wrong arrow key and fall off the room’s completely guardrail-free elevator platform. Game over.

Space Quest II screenshot.

Do NOT touch any part of this root monster.

Get used to it because the game will kill you in so, so many ways: touching any single pixel of a root monster whose branches form a difficult maze; walking into a giant mushroom; stepping over an invisible pit in the ground; getting shot by a guard who zips in on a hovercraft; drowning in an underwater tunnel; getting swiped at by some kind of giant ape; not putting the glowing gem in your mouth; falling into acid; and many more.

I used the word “insta-death” above, but the game is not even content with this. At one key point late in the game, a giant Aliens-style alien stalks the hallways, and if she finds you, she “kisses” you. But then she leaves! You are safe after all! Of course, if you have seen the films, you will recognize that you are not safe, but the game lets you go on for a bit before the alien’s baby inevitably bursts from your chest, killing you. Game over.

This is why the official hint book suggests that you “save your game a lot, especially when it seems that you’re entering a dangerous area. That way, if you die, you don’t have to retrace your steps much.” Presumably, this was once considered entertaining.

When it comes to the humor, most of it is broad. (When you are told to “say the word,” you have to say “the word.”) Sometimes it is condescending. (“You quickly glance around the room to see if anyone saw you blow it.”) Or it might just be potty jokes. (Plungers, jock straps, toilet paper, alien bathrooms, and fouling one’s trousers all make appearances.)

My total gameplay time: a few hours.

“By Grabthar’s hammer!” I thought. “Lee was right!”

When I admitted this to him, Lee told me that he had actually spent time learning to speedrun the Space Quest games during the pandemic. “According to my notes, a clean run of SQ2 in ‘fast’ mode—assuming good typing skills—takes about 20 minutes straight-up,” he said. Yikes.

Space Quest II screenshot.

What a fiendish plot!

And yet

The past was a different time. Computer memory was small, graphics capabilities were low, and computer games had emerged from the “let them live just long enough to encourage spending another quarter” arcade model. Mouse adoption took a while; text parsers made sense even though they created plenty of frustration. So yes—some of these games were a few hours of gameplay stretched out with insta-death, obscure puzzles, and the sheer amount of time it took just to walk across the game’s various screens. (Seriously, “walking around” took a ridiculous amount of the game’s playtime, especially when a puzzle made you backtrack three screens, type some command, and then return.)

Space Quest II screenshot.

Let’s get off this rock.

Judged by current standards, the Sierra games are no longer what I would play for fun.

All the same, I loved them. They introduced me to the joy of exploring virtual worlds and to the power of evocative artwork. I went into space, into fairy tales, and into the past, and I did so while finding the games’ humor humorous and their plotlines compelling. (“An army of life insurance salesmen?” I thought at the time. “Hilarious and brilliant!”)

If the games can feel a bit arbitrary or vexing today, my child-self’s love of repetition let me treat them as engaging challenges rather than “unfair” design.

Replaying Space Quest II, encountering the half-remembered jokes and visual designs, brought back these memories. The novelist Thomas Wolfe knew that you can’t go home again, and it was probably inevitable that the game would feel dated to me now. But playing it again did take me back to that time before the Internet, when not even hint lines, insta-death, and EGA graphics could dampen the wonder of the new worlds computers were capable of showing us.

Space Quest II screenshot.

Literal bathroom humor.

Space Quest II, along with several other Sierra titles, is freely and legally available online at sarien.net—though I found many, many glitches in the implementation. Windows users can buy the entire Space Quest collection through Steam or Good Old Games. There’s even a fan remake that runs on macOS, Windows, and Linux.



Motorola Razr and Razr Ultra (2025) review: Cool as hell, but too much AI


The new Razrs are sleek, capable, and overflowing with AI features.

Razr Ultra and Razr (2025)

Motorola’s 2025 Razr refresh includes its first Ultra model. Credit: Ryan Whitwam

For phone nerds who’ve been around the block a few times, the original Motorola Razr is undeniably iconic. The era of foldables has allowed Motorola to resurrect the Razr in an appropriately flexible form, and after a few generations of refinement, the 2025 Razrs are spectacular pieces of hardware. They look great, they’re fun to use, and they just about disappear in your pocket.

The new Razrs also have enormous foldable OLEDs, along with external displays that are just large enough to be useful. Moto has upped its design game, offering various Pantone shades with interesting materials and textures to make the phones more distinctive. But Motorola’s take on mobile AI could use some work, as could its long-term support policy. Still, these might be the coolest phones you can get right now.

An elegant tactile experience

Many phone buyers couldn’t care less about how a phone’s body looks or feels—they’ll just slap it in a case and never look at it again. Foldables tend not to fit as well in cases, so the physical design of the Razrs is important. The good news is that Motorola has refined the foldable formula with an updated hinge and some very interesting material choices.

Razr Ultra back

The Razr Ultra is available with a classy wood back. Credit: Ryan Whitwam

The 2025 Razrs come in various colors, all of which have interesting material choices for the back panel. There are neat textured plastics, wood, vegan leather, and synthetic fabrics. We’ve got wood (Razr Ultra) and textured plastic (Razr) phones to test—they look and feel great. The Razr is very grippy, and the wooden Ultra looks ultra-stylish, though not quite as secure in the hand. The aluminum frames are also colored to match the back with a smooth matte finish. Motorola has gone to great lengths to make these phones feel unique without losing the premium vibe. It’s nice to see a phone maker do that without resorting to a standard glass sandwich body.

The buttons are firm and tactile, but we’re detecting just a bit of rattle in the power button. That’s also where you’ll find the fingerprint sensor. It’s reasonably quick and accurate, whether the phone is open or closed. The Razr Ultra also has an extra AI button on the opposite side, which is unnecessary, for reasons we’ll get to later. And no, you can’t remap it to something else.

Motorola Razr 2025

The Razrs have a variety of neat material options. Credit: Ryan Whitwam

The front of the flip on these phones features a big sheet of Gorilla Glass Ceramic, which is supposedly similar to Apple’s Ceramic Shield glass. That should help ward off scratches. The main camera sensors poke through this front OLED, which offers some interesting photographic options we’ll get to later. The Razr Ultra has a larger external display, clocking in at 4 inches. The cheaper Razr gets a smaller 3.6-inch front screen, but that’s still plenty of real estate, even with the camera lenses at the bottom.

Specs at a glance: 2025 Motorola Razrs
| | Motorola Razr ($699.99) | Motorola Razr+ ($999.99) | Motorola Razr Ultra ($1,299.99) |
|---|---|---|---|
| SoC | MediaTek Dimensity 7400X | Snapdragon 8s Gen 3 | Snapdragon 8 Elite |
| Memory | 8GB | 12GB | 16GB |
| Storage | 256GB | 256GB | 512GB, 1TB |
| Display | 6.9″ foldable OLED (120 Hz, 2640 x 1080), 3.6″ external (90 Hz) | 6.9″ foldable OLED (165 Hz, 2640 x 1080), 4″ external (120 Hz, 1272 x 1080) | 7″ foldable OLED (165 Hz, 2992 x 1224), 4″ external (165 Hz) |
| Cameras | 50 MP f/1.7 OIS primary; 13 MP f/2.2 ultrawide; 32 MP selfie | 50 MP f/1.7 OIS primary; 50 MP f/2.0 2x telephoto; 32 MP selfie | 50 MP f/1.8 OIS primary; 50 MP f/2.0 ultrawide + macro; 50 MP selfie |
| Software | Android 15 | Android 15 | Android 15 |
| Battery | 4,500 mAh, 30 W wired charging, 15 W wireless charging | 4,000 mAh, 45 W wired charging, 15 W wireless charging | 4,700 mAh, 68 W wired charging, 15 W wireless charging |
| Connectivity | Wi-Fi 6e, NFC, Bluetooth 5.4, sub-6 GHz 5G, USB-C 2.0 | Wi-Fi 7, NFC, Bluetooth 5.4, sub-6 GHz 5G, USB-C 2.0 | Wi-Fi 7, NFC, Bluetooth 5.4, sub-6 GHz 5G, USB-C 2.0 |
| Measurements | Open: 73.99 x 171.30 x 7.25 mm; closed: 73.99 x 88.08 x 15.85 mm; 188 g | Open: 73.99 x 171.42 x 7.09 mm; closed: 73.99 x 88.09 x 15.32 mm; 189 g | Open: 73.99 x 171.48 x 7.19 mm; closed: 73.99 x 88.12 x 15.69 mm; 199 g |

Motorola says the updated foldable hinge has been reinforced with titanium. This is the most likely point of failure for a flip phone, but the company’s last few Razrs already felt pretty robust. It’s good that Moto is still thinking about durability, though. The hinge is smooth, allowing you to leave the phone partially open, but there are magnets holding the two halves together with no gap when closed. The magnets also allow for a solid snap when you shut it. Hanging up on someone is so, so satisfying when you’re using a Razr flip phone.

Flip these phones open, and you get to the main event. The Razr has a 6.9-inch, 2640×1080 foldable OLED, and the Ultra steps up to 7 inches at an impressive 2992×1224. These phones have almost exactly the same dimensions, so the additional bit of Ultra screen comes from thinner bezels. Both phones are extremely tall when open, but they’re narrow enough to be usable in one hand. Just don’t count on reaching the top of the screen easily. While Motorola has not fully eliminated the display crease, it’s much smoother and less noticeable than it is on Samsung’s or Google’s foldables.

Motorola Razr Ultra

The Razr Ultra has a 7-inch foldable OLED. Credit: Ryan Whitwam

The Razr can hit 3,000 nits of brightness, and the $1,300 Razr Ultra tops out at 4,500 nits. Both are bright enough to be usable outdoors, though the Ultra is noticeably brighter. However, both suffer from the standard foldable drawbacks of having a plastic screen. The top layer of the foldable screen is a non-removable plastic protector, which has very high reflectivity that makes it harder to see the display. That plastic layer also means you have to be careful not to poke or scratch the inner screen. It’s softer than your fingernails, so it’s not difficult to permanently damage the top layer.

Too much AI

Motorola’s big AI innovation for last year’s Razr was putting Gemini on the phone, making it one of the first to ship with Google’s generative AI system. This time around, it has AI features based on Gemini, Meta Llama, Perplexity, and Microsoft Copilot. It’s hard to say exactly how much AI is worth having on a phone with the rapid pace of change, but Motorola has settled on the wrong amount. To be blunt, there’s too much AI. What is “too much” in this context? This animation should get the point across.

Moto AI

Motorola’s AI implementation is… a lot. Credit: Ryan Whitwam

The Ask and Search bar appears throughout the UI, including as a floating Moto AI icon. It’s also in the app drawer and is integrated with the AI button on the Razr Ultra. You can use it to find settings and apps, but it’s also a full LLM (based on Copilot) for some reason. Gemini is a better experience if you’re looking for a chatbot, though.

Moto AI also includes a raft of other features, like Pay Attention, which can record and summarize conversations similar to the Google recorder app. However, unlike that app, the summarizing happens in the cloud instead of locally. That’s a possible privacy concern. You also get Perplexity integration, allowing you to instantly search based on your screen contents. In addition, the Perplexity app is preloaded with a free trial of the premium AI search service.

There’s so much AI baked into the experience that it can be difficult to keep all the capabilities straight, and there are some more concerning privacy pitfalls. Motorola’s Catch Me Up feature is a notification summarizer similar to a feature of Apple Intelligence. On the Ultra, this feature works locally with a Llama 3 model, but the less powerful Razr can’t do that. It sends your notifications to a remote server for processing when you use Catch Me Up. Motorola says data is “anonymous and secure” and it does not retain any user data, but you have to put a lot of trust in a faceless corporation to send it all your chat notifications.

Razr Ultra and Razr (2025)

The Razrs have additional functionality if you prop them up in “tent” or “stand” mode. Credit: Ryan Whitwam

If you can look past Motorola’s frenetic take on mobile AI, the version of Android 15 on the Razrs is generally good. There are a few too many pre-loaded apps and experiences, but it’s relatively simple to debloat these phones. It’s quick, doesn’t diverge too much from the standard Android experience, and avoids duplicative apps.

We appreciate the plethora of settings and features for the external display. It’s a much richer experience than you get with Samsung’s flip phones. For example, we like how easy it is to type out a reply in a messaging app without even opening the phone. In fact, you can run any app on the phone without opening it, even though many of them won’t work quite right on a smaller square display. Still, it can be useful for chat apps, email, and other text-based stuff. We also found it handy for using smart home devices like cameras and lights. There are also customizable panels for weather, calendar, and Google “GameSnacks” games.

Razr Ultra and Razr (2025)

The Razr Ultra (left) has a larger screen than the Razr (right). Credit: Ryan Whitwam

Motorola promises three years of full OS updates and an additional year of security patches. This falls far short of the seven-year update commitment from Samsung and Google. For a cheaper phone like the Razr, four years of support might be fine, but it’s harder to justify that when the Razr Ultra costs as much as a Galaxy S25 Ultra.

One fast foldable, one not so much

Motorola is fond of saying the Razr Ultra is the fastest flip phone in the world, which is technically true. It has the Snapdragon 8 Elite chip with 16GB of RAM, but we expect to see the Elite in Samsung’s 2025 foldables later this year. For now, though, the Razr Ultra stands alone. The $700 Razr runs a MediaTek Dimensity 7400X, which is a distinctly midrange processor with just 8GB of RAM.

Razr geekbench

The Razr Ultra gets close to the S25. Credit: Ryan Whitwam

In daily use, neither phone feels slow. Side by side, you can see the Razr is slower to open apps and unlock, and the scrolling exhibits occasional jank. However, it’s not what we’d call a slow phone. It’s fine for general smartphone tasks like messaging, browsing, and watching videos. You may have trouble with gaming, though. Simple games run well enough, but heavy 3D titles like Diablo Immortal are rough with the Dimensity 7400X.

The Razr Ultra is one of the fastest Android phones we’ve tested, thanks to the Snapdragon chip. You can play complex games and multitask to your heart’s content without fear of lag. It does run a little behind the Galaxy S25 series in benchmarks, but it thankfully doesn’t get as toasty as Samsung’s phones.

We never expect groundbreaking battery life from foldables. The hinge takes up space, which limits battery capacity. That said, Motorola did fairly well cramming a 4,700 mAh battery in the Razr Ultra and a 4,500 mAh cell in the Razr.

Based on our testing, both of these phones should last you all day. The large external displays can help by giving you just enough information that you don’t have to use the larger, more power-hungry foldable OLED. If you’re playing games or using the main display exclusively, you may find the Razrs just barely make it to bedtime. However, no matter what you do, these are not multi-day phones. The base model Razr will probably eke out a few more hours, even with its smaller battery, due to the lower-power MediaTek processor. The Snapdragon 8 Elite in the Razr Ultra really eats into the battery when you take advantage of its power.

Motorola Razr Ultra

The Razrs are extremely pocketable. Credit: Ryan Whitwam

While the battery life is just this side of acceptable, the Razr Ultra’s charging speed makes this less of a concern. This phone hits an impressive 68 W, which is faster than the flagship phones from Google, Samsung, and Apple. Just a few minutes plugged into a compatible USB-C charger and you’ve got enough power that you can head out the door without worry. Of course, the phone doesn’t come with a charger, but we’ve tested a few recent models, and they all hit the max wattage.
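
For a rough sense of what 68 W buys you, here is a simple back-of-envelope calculation. The nominal cell voltage is an assumed typical value, not a Motorola spec, and real-world charging will add less than this because of conversion losses and the taper at higher states of charge.

```python
# Rough charging math for a 4,700 mAh pack at 68 W (illustrative only).
# Assumes a typical ~3.87 V nominal cell voltage (not a Motorola spec) and
# ignores conversion losses and the charge-rate taper near full.

capacity_mah = 4700
nominal_voltage_v = 3.87
pack_wh = capacity_mah / 1000 * nominal_voltage_v            # about 18 Wh

charge_power_w = 68
minutes_plugged_in = 10
energy_added_wh = charge_power_w * minutes_plugged_in / 60   # about 11 Wh

print(f"Pack energy: ~{pack_wh:.1f} Wh")
print(f"Added in {minutes_plugged_in} min: ~{energy_added_wh:.1f} Wh "
      f"(~{energy_added_wh / pack_wh:.0%} of the pack, best case)")
```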

OK cameras with super selfies

Camera quality is another area where foldable phones tend to compromise. The $1,300 Razr Ultra has just two sensors—a 50 MP primary sensor and a 50 MP ultrawide lens. The $700 Razr has a slightly different (and less capable) 50 MP primary camera and a 13 MP ultrawide. There are also selfie cameras peeking through the main foldable OLED panels—50 MP for the Ultra and 32 MP for the base model.

Motorola Razr 2025 in hand

The cheaper Razr has a smaller external display, but it’s still large enough to be usable. Credit: Ryan Whitwam

Motorola’s Razrs tend toward longer exposures compared to Pixels—they’re about on par with Samsung phones. That means capturing fast movement indoors is difficult, and you may miss your subject outside due to a perceptible increase in shutter lag compared to Google’s phones. Images from the base model Razr’s primary camera also tend to look a bit more overprocessed than they do on the Ultra, which leads to fuzzy details and halos in bright light.

Razr Ultra outdoors. Ryan Whitwam

That said, Motorola’s partnership with Pantone is doing some good. The colors in our photos are bright and accurate, capturing the vibe of the scene quite well. You can get some great photos of stationary or slowly moving subjects.

Razr 2025 indoor medium light. Ryan Whitwam

The 50 MP ultrawide camera on the Razr Ultra has a very wide field of view, but there’s little to no distortion at the edges. The colors are also consistent between the two sensors, but that’s not always the case for the budget Razr. Its ultrawide camera also lacks detail compared to the Ultra, which isn’t surprising considering the much lower resolution.

You should really only use the dedicated front-facing cameras for video chat. For selfies, you’ll get much better results by taking advantage of the Razr’s distinctive form factor. When closed, the Razrs let you take selfies with the main camera sensors, using the external display as the viewfinder. These are some of the best selfies you’ll get with a smartphone, and having the ultrawide sensor makes group shots excellent as well.

Flip phones are still fun

While we like these phones for what they are, they are objectively not the best value. Whether you’re looking at the Razr or the Razr Ultra, you can get more phone for the same money from other companies—more cameras, more battery, more updates—but those phones don’t fold in half. There’s definitely a cool-factor here. Flip phones are stylish, and they’re conveniently pocket-friendly in a world where giant phones barely fit in your pants. We also like the convenience and functionality of the external displays.

Motorola Razr Ultra

The Razr Ultra is all screen from the front. Credit: Ryan Whitwam

The Razr Ultra makes the usual foldable compromises, but it’s as capable a flip phone as you’ll find right now. It’s blazing fast, it has two big displays, and the materials are top-notch. However, $1,300 is a big ask.

Is the Ultra worth $500 more than the regular Razr? Probably not. Most of what makes the foldable Razrs worth using is present on the cheaper model. You still get the solid construction, cool materials, great selfies, and a useful (though slightly smaller) outer display. Yes, it’s a little slower, but it’s more than fast enough as long as you’re not a heavy gamer. Just be aware of the potential for Moto AI to beam your data to the cloud.

There is also the Razr+, which slots in between the models we have tested at $1,000. It’s faster than the base model and has the same large external display as the Ultra. This model could be the sweet spot if neither the base model nor the flagship does it for you.

The good

  • Sleek design with distinctive materials
  • Great performance from Razr Ultra
  • Useful external display
  • Big displays in a pocket-friendly package

The bad

  • Too much AI
  • Razr Ultra is very expensive
  • Only three years of OS updates, four years of security patches
  • Cameras trail the competition

Photo of Ryan Whitwam

Ryan Whitwam is a senior technology reporter at Ars Technica, covering the ways Google, AI, and mobile technology continue to change the world. Over his 20-year career, he’s written for Android Police, ExtremeTech, Wirecutter, NY Times, and more. He has reviewed more phones than most people will ever own. You can follow him on Bluesky, where you will see photos of his dozens of mechanical keyboards.


GM’s LMR battery breakthrough means more range at a lower cost

Kelty also believes it just makes sense to localize production. He pointed out that when consumer electronics with batteries took off, the supply chain developed around the customers in Southeast Asia. The customers, in that case, are the electronics manufacturers. He said the same thing makes sense in the United States.

There might be an inclination to give President Trump and his administration credit for this onshoring initiative, but the company has been working on localizing battery production for years. Even development on the LMR battery technology had been happening long before the current administration took over.

A battery technician at the General Motors Wallace Battery Cell Innovation Center takes a chemistry slurry sample. Credit: Steve Fecht for General Motors

That research and development of new technologies remains ongoing. In addition to testing battery cells in every known condition on Earth, GM can produce packs in production-ready format on site in Warren, just at a slower pace, to fine-tune the process and ensure a better-quality product. The company is currently working on a facility that will be able to make production-quality batteries at production speeds, so when a new line or a new plant is brought online somewhere else, all the kinks will already have been worked out.

GM’s LMR batteries feel like a logical evolution of the lithium-ion batteries that appear in EVs already. The company now has the facilities to build the highest-quality battery solution that it can. It’s also clear that the company has been working on this for quite some time.

If this all sounds like what Ford announced recently, it is. For its part, Ford says its research is not a lab experiment and that it will appear in vehicles before the end of the decade. While I can’t say who landed on the technology first, it’s clear that GM has a production plan and knows what specific products you’ll see it in to start.

A building with a truck in front of it.

General Motors Wallace Battery Cell Innovation Center focuses on advanced technical work for cutting-edge battery technology and prototyping full-size cells. Credit: General Motors

If LMR delivers on the promise, we’ll have a battery technology that delivers more range for less money. If there’s one takeaway from talking to the folks working on batteries in Warren, it’s that their guiding star is to make EVs affordable.

Kelty even challenged the room full of reporters. “Can anybody name a reason why you would not buy an EV if it’s price parity with ICE? I’ll argue it,” he said.

Kelty also hinted at some upcoming technology to help GM’s batteries work better in sub-optimal weather conditions, though he wouldn’t comment or elaborate on future products.

We’re still a couple of years away from production, but if General Motors can deliver on the tech, we’ll be one step closer to mainstream adoption.


The tinkerers who opened up a fancy coffee maker to AI brewing

(Ars contacted Fellow Products for comment on AI brewing and profile sharing and will update this post if we get a response.)

Opening up brew profiles

Fellow’s brew profiles are typically shared with buyers of its “Drops” coffees or between individual users through a phone app. Credit: Fellow Products

Aiden profiles are shared and added to Aiden units through Fellow’s brew.link service. But the profiles are not offered in an easy-to-sort database, nor are they easy to scan for details. So Aiden enthusiast and hobbyist coder Kevin Anderson created brewshare.coffee, which gathers both general and bean-based profiles, makes them easy to search and load, and adds optional but quite helpful suggested grind sizes.

As a non-professional developer jumping into a public offering, he had to work hard on data validation, backend security, and mobile-friendly design. “I just had a bit of an idea and a hobby, so I thought I’d try and make it happen,” Anderson writes. With his tool, brew links can be stored and shared more widely, which helped both Dixon and another AI/coffee tinkerer.

Gabriel Levine, director of engineering at retail analytics firm Leap Inc., lost his OXO coffee maker (aka the “Barista Brain”) to malfunction just before the Aiden debuted. The Aiden appealed to Levine as a way to move beyond his coffee rut—a “nice chocolate-y medium roast, about as far as I went,” he told Ars. “This thing that can be hyper-customized to different coffees to bring out their characteristics; [it] really kind of appealed to that nerd side of me,” Levine said.

Levine had also been doing AI stuff for about 10 years, or “since before everyone called it AI—predictive analytics, machine learning.” He described his career as “both kind of chief AI advocate and chief AI skeptic,” alternately driving real findings and talking down “everyone who… just wants to type, ‘how much money should my business make next year’ and call that work.” Like Dixon, Levine’s work and fascination with Aiden ended up intersecting.

The coffee maker with 3,588 ideas

The author’s conversation with the Aiden Profile Creator, which pulled in both brewing knowledge and product info for a widely available coffee.

Levine’s Aiden Profile Creator is a ChatGPT bot set up with a custom prompt and told to weight certain knowledge more heavily. What kind of prompt and knowledge? Levine didn’t want to give away his exact work, but he cited resources like the Specialty Coffee Association of America and James Hoffmann’s coffee guides as examples of what he fed it.

What it does with that knowledge is something of a mystery to Levine himself. “There’s this kind of blind leap, where it’s grabbing the relevant pieces of information from the knowledge base, biasing toward all the expert advice and extraction science, doing something with it, and then I take that something and coerce it back into a structured output I can put on your Aiden,” Levine said.
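
That last step, coercing a free-form model reply into something a machine will accept, is the unglamorous part. Here is a minimal sketch of what such a guardrail could look like, assuming a hypothetical JSON profile format; the field names and limits are invented for illustration, since Fellow’s actual profile schema and brew.link API are not publicly documented.

```python
import json

# Hypothetical guardrail for turning an LLM's free-form reply into a brew profile.
# Field names and limits are invented for illustration; Fellow's real profile
# schema and brew.link API are not publicly documented.

LIMITS = {"temperature_c": (80, 99), "ratio_g_per_l": (50, 75), "pulses": (1, 6)}

def coerce_profile(raw_llm_text: str) -> dict:
    """Parse the model's reply and clamp every field into a sane brewing range."""
    profile = json.loads(raw_llm_text)          # raises if the reply isn't JSON
    cleaned = {}
    for field, (low, high) in LIMITS.items():
        value = float(profile[field])           # raises if a required field is missing
        cleaned[field] = min(max(value, low), high)
    cleaned["name"] = str(profile.get("name", "Untitled profile"))[:40]
    return cleaned

# Example: pretend this string came back from the model.
reply = '{"name": "Washed Ethiopia", "temperature_c": 104, "ratio_g_per_l": 60, "pulses": 3}'
print(coerce_profile(reply))                    # temperature_c gets clamped to 99
```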

It’s a blind leap, but it has landed just right for me so far. I’ve made four profiles with Levine’s prompt based on beans I’ve bought: Stumptown’s Hundred Mile, a light-roasted batch from Jimma, Ethiopia, from Small Planes, Lost Sock’s Western House filter blend, and some dark-roast beans given as a gift. With the Western House, Levine’s profile creator said it aimed to “balance nutty sweetness, chocolate richness, and bright cherry acidity, using a slightly stepped temperature profile and moderate pulse structure.” The resulting profile has worked great, even if the chatbot named it “Cherry Timber.”

Levine’s chatbot relies on two important things: Dixon’s work in revealing Fellow’s Aiden API and his own workhorse Aiden. Every Aiden profile link is created on a machine, so every profile created by Levine’s chat is launched, temporarily, from the Aiden in his kitchen, then deleted. “I’ve hit an undocumented limit on the number of profiles you can have on one machine, so I’ve had to do some triage there,” he said. As of April 22, nearly 3,600 profiles had passed through Levine’s Aiden.

“My hope with this is that it lowers the bar to entry,” Levine said, “so more people get into these specialty roasts and it drives people to support local roasters, explore their world a little more. I feel like that certainly happened to me.”

Something new is brewing


Having admitted to myself that I find something generated by ChatGPT prompts genuinely useful, I’ve softened my stance slightly on LLM technology, if not the hype. Used within very specific parameters, with everything second-guessed, I’m getting more comfortable asking chat prompts for formatted summaries on topics with lots of expertise available. I do my own writing, and I don’t waste server energy on things I can, and should, research myself. I even generally resist calling language model prompts “AI,” given the term’s baggage. But I’ve found one way to appreciate its possibilities.

This revelation may not be new to someone already steeped in the models. But having tested—and tasted—my first big experiment while willfully engaging with a brewing bot, I’m a bit more awake.

This post was updated at 8:40 am with a different capture of a GPT-created recipe.


Doom: The Dark Ages review: Shields up!


Prepare to add a more defensive stance to the usual dodge-and-shoot gameplay loop.

There’s a reason that shield is so prominent in this image. Credit: Bethesda Game Studios

For decades now, you could count on there being a certain rhythm to a Doom game. From the ’90s originals to the series’ resurrection in recent years, the Doom games have always been about using constant, zippy motion to dodge through a sea of relatively slow-moving bullets, maintaining your distance while firing back at encroaching hordes of varied monsters. The specific guns and movement options you could call on might change from game to game, but the basic rhythm of that dodge-and-shoot gameplay never has.

Just a few minutes in, Doom: The Dark Ages throws out that traditional Doom rhythm almost completely. The introduction of a crucial shield adds a whole suite of new verbs to the Doom vocabulary; in addition to running, dodging, and shooting, you’ll now be blocking, parrying, and stunning enemies for counterattacks. In previous Doom games, standing still for any length of time often led to instant death. In The Dark Ages, standing your ground to absorb and/or deflect incoming enemy attacks is practically required at many points.

During a preview event earlier this year, the game’s developers likened this change to the difference between flying a fighter jet and piloting a tank. That’s a pretty apt metaphor, and it’s not exactly an unwelcome change for a series that might be in need of a shake-up. But it only works if you go in ready to play like a tank and not like the fighter jet that has been synonymous with Doom for decades.

Stand your ground

Don’t get me wrong, The Dark Ages still features its fair share of the Doom series’ standard position-based Boomer Shooter action. The game includes the usual stockpile of varied weapons—from short-range shotguns to long-range semi-automatics to high-damage explosives with dangerous blowback—and doles them out slowly enough that major new options are still being introduced well into the back half of the game.

But the shooting side has simplified a bit since Doom Eternal. Gone are the secondary weapon modes, grenades, chainsaws, and flamethrowers that made enemy encounters a complicated weapon and ammo juggling act. Gone too are the enemies that practically forced you to use a specific weapon to exploit their One True Weakness; I got by for most of The Dark Ages by leaning on my favored plasma rifle, with occasional switches to a charged steel ball-and-chain launcher for heavily armored enemies.

See green, get ready to parry… Credit: Bethesda Game Studios

In their place is the shield, which gives you ample (but not unlimited) ability to simply deflect enemy attacks damage-free. You can also throw the shield for a ranged attack that’s useful for blowing up frequent phalanxes of shielded enemies or freezing larger unarmored enemies in place for a safe, punishing barrage.

But the shield’s most important role comes when you stand face to face with a particularly punishing demon, waiting for a flash of green to appear on the screen. When that color appears, it’s your signal that the associated projectile and/or incoming melee attack can be parried by raising your shield just before it lands. A successful parry knocks that attack back entirely, returning projectiles to their source and/or temporarily deflecting the encroaching enemy themselves.

A well-timed, powerful parry is often the only reasonable option for attacks that are otherwise too quick or overwhelming to dodge effectively. The overall effect ends up feeling a bit like Doom by way of Mike Tyson’s Punch-Out!! Instead of dancing around a sea of hazards and looking for an opening, you’ll often find yourself just standing still for a few seconds, waiting to knock back a flash of green so you can have the opportunity to unleash your own counterattack. Various shield sigils introduced late in the game encourage this kind of conservative turtling strategy even more by adding powerful bonus effects to each successful parry.

The window for executing a successful parry is pretty generous, and the dramatic temporal slowdown and sound effects make each one feel like an impactful moment. But they start to feel less impactful as the game goes on, and battles often devolve into vast seas of incoming green flashes. There were countless moments in my Dark Ages playthrough where I found myself more or less pinned down by a deluge of green attacks, frantically clicking the right mouse button four or five times in quick succession to parry off threats from a variety of angles.

In between all the parrying, you do get to shoot stuff. Credit: Bethesda Game Studios

In between these parries, the game seems to go out of its way to encourage a more fast-paced, aggressive style of play. A targeted shield slam move lets you leap quickly across great distances to get up close and personal with enemy demons, at which point you can use one of a variety of melee weapons for some extremely satisfying, crunchy close quarters beatdowns (though these melee attacks are limited by their own slowly recharging ammo system).

You might absorb some damage in the process of going in for these aggressive close-up attacks, but don’t worry—defeated enemies tend to drop heaps of health, armor, and ammo, depending on the specific way they were killed. I’d often find myself dancing on the edge of critically low health after an especially aggressive move, only to recover just in time by finishing off a major demon. Doubling back for a shield slam on a far-off “fodder” enemy can also be an effective strategy for quickly escaping a sticky situation and grabbing some health in the process.

The back-and-forth tug between these aggressive encroachments and the more conservative parry-based turtling makes for some exciting moment-to-moment gameplay, with enough variety in the enemy mix to never feel too stale. Effectively managing your movement and attack options in any given firefight feels complex enough to be engaging without ever tipping into overwhelming, as well.

Even so, working through Doom: The Dark Ages, there was a part of me that missed the more free-form, three-dimensional acrobatics of Doom Eternal’s double jumps and air dashes. Compared to the almost balletic, improvisational movement of that game, The Dark Ages too often devolved into something akin to a simple rhythm game: wait for each green “note” to reach the bottom of the screen, then hit the button to activate your counterattack.

Stories and secrets

In between chapters, Doom: The Dark Ages breaks things up with some extremely ponderous cutscenes featuring a number of religious and political factions, both demon and human, jockeying for position and control in an interdimensional war. This mostly involves a lot of tedious standing around discussing the Heart of Argent (a McGuffin that’s supposed to grant the bearer the power of a god) and debating how, where, and when to deploy the Slayer (that’s you) as a weapon.

I watched these cutscenes out of a sense of professional obligation, but I tuned out at points and thus had trouble following the internecine intrigue that seemed to develop between factions whose motivations and backgrounds never seemed to be sufficiently explained or delineated. Most players who aren’t reviewing the game should feel comfortable skipping these scenes and getting back to the action as quickly as possible.

I hope you like red and black, because there’s a lot of it here… Credit: Bethesda Game Studios

The levels themselves are all dripping with the usual mix of Hellish symbology and red-and-black gore, with mood lighting so dark that it can be hard to see a wall right in front of your face. Design-wise, the chapters seem to alternate between Doom’s usual system of twisty enemy-filled corridors and more wide-open outdoor levels. The latter are punctuated by a number of large, open areas where huge groups of demons simply teleport in as soon as you set foot in the pre-set engagement zone. These battle arenas might have a few inclines or spires to mix things up, but for the most part, they all feel depressingly similar and bland after a while. If you’ve stood your ground in one canyon, you’ve stood your ground in them all.

Each level is also absolutely crawling with secret collectibles hidden in various nooks and crannies, which often tease you with a glimpse through a hole in some impassable wall or rock formation. Studying the map screen for a minute more often than not reveals the general double-back path you’ll need to follow to find the hidden entrance behind these walls, even as finding the precise path can involve solving some simple puzzles or examining your surroundings for one particularly well-hidden bit that will allow you to advance.

After all the enemies were cleared in one particularly vast open level, I spent a good half hour picking through every corner of the map until I tracked down the hidden pathways leading to every stray piece of gold and collectible trinket. It was fine as a change of pace—and lucrative in terms of upgrading my weapons and shield for later fights—but it felt kind of lonely and quiet compared to the more action-packed battles.

Don’t unleash the dragon

Speaking of changes of pace, by far the worst parts of Doom: The Dark Ages come when the game insists on interrupting the usual parry-and-shoot gameplay to put you in some sort of vehicle. This includes multiple sections where your quick-moving hero is replaced with a lumbering 30-foot-tall mech, which slouches pitifully down straight corridors toward encounters with equally large demons.

These mech battles play out as the world’s dullest fistfights, where you simply wail on the attack buttons while occasionally tapping the dodge button to step away from some incredibly slow and telegraphed counterattacks. I found myself counting the minutes until these extremely boring interludes were over.

Believe me, this is less exciting than it looks. Credit: Bethesda Game Studios

The sections where your Slayer rides a dragon for some reason are ever-so-slightly more interesting, if only because the intuitive, fast-paced flight controls can be a tad more exciting. Unfortunately, these sections don’t give you any thrilling dogfights or complex obstacle courses to take advantage of these controls, topping out instead in a few simplistic chase sequences where you take literally no incoming fire.

Between those semi-engaging chase sequences is a seemingly endless parade of showdowns with stationary turrets. These require your dragon to hover frustratingly still in mid-air, waiting patiently for an incoming energy attack to dodge, which in turn somehow powers up your gun enough to take out the turret in a counterattack. How anyone thought that this was the most engaging use of a seemingly competent third-person flight-combat system is utterly baffling.

Those too-frequent interludes aside, Doom: The Dark Ages is a more-than-suitable attempt to shake up the Doom formula with a completely new style of gameplay. While the more conservative, parry-based shield system takes some getting used to—and may require adjusting some of your long-standing Doom muscle memory in the process—it’s ultimately a welcome and engaging way to add new types of interaction to the long-running franchise.

Photo of Kyle Orland

Kyle Orland has been the Senior Gaming Editor at Ars Technica since 2012, writing primarily about the business, tech, and culture behind video games. He has journalism and computer science degrees from the University of Maryland. He once wrote a whole book about Minesweeper.

Doom: The Dark Ages review: Shields up! Read More »

spacex-pushed-“sniper”-theory-with-the-feds-far-more-than-is-publicly-known

SpaceX pushed “sniper” theory with the feds far more than is publicly known


“It came out of nowhere, and it was really violent.”

The Amos 6 satellite is lost atop a Falcon 9 rocket. Credit: USLaunchReport

The Amos 6 satellite is lost atop a Falcon 9 rocket. Credit: USLaunchReport

The rocket was there. And then it decidedly was not.

Shortly after sunrise on a late summer morning nearly nine years ago at SpaceX’s sole operational launch pad, engineers neared the end of a static fire test. These were still early days for their operation of a Falcon 9 rocket that used super-chilled liquid propellants, and engineers pressed to see how quickly they could complete fueling. This was because the liquid oxygen and kerosene fuel warmed quickly in Florida’s sultry air, and cold propellants were essential to maximizing the rocket’s performance.

On this morning, September 1, 2016, everything proceeded more or less nominally up until eight minutes before the ignition of the rocket’s nine Merlin engines. It was a stable point in the countdown, so no one expected what happened next.

“I saw the first explosion,” John Muratore, launch director for the mission, told me. “It came out of nowhere, and it was really violent. I swear, that explosion must have taken an hour. It felt like an hour. But it was only a few seconds. The second stage exploded in this huge ball of fire, and then the payload kind of teetered on top of the transporter erector. And then it took a swan dive off the top rails, dove down, and hit the ground. And then it exploded.”

The dramatic loss of the Falcon 9 rocket and its Amos-6 satellite, captured on video by a commercial photographer, came at a pivotal moment for SpaceX and the broader commercial space industry. It was SpaceX’s second rocket failure in a little more than a year, and it occurred as NASA was betting heavily on the company to carry its astronauts to orbit. SpaceX was not the behemoth it is today, a company valued at $350 billion. It remained vulnerable to the vicissitudes of the launch industry. This violent failure shook everyone, from the engineers in Florida to satellite launch customers to the suits at NASA headquarters in Washington, DC.

As part of my book on the Falcon 9 and Dragon years at SpaceX, Reentry, I reported deeply on the loss of the Amos-6 mission. In the weeks afterward, the greatest mystery was what had precipitated the accident. It was understood that a pressurized helium tank inside the upper stage had ruptured. But why? No major parts on the rocket were moving at the time of the failure. It was, for all intents and purposes, akin to an automobile idling in a driveway with half a tank of gasoline. And then it exploded.

This failure gave rise to one of the oddest—but also strangely compelling—stories of the 2010s in spaceflight. And we’re still learning new things today.

The “sniper” theory

The lack of a concrete explanation for the failure led SpaceX engineers to pursue hundreds of theories. One was the possibility that an outside “sniper” had shot the rocket. This theory appealed to SpaceX founder Elon Musk, who was asleep at his home in California when the rocket exploded. Within hours of hearing about the failure, Musk gravitated toward the simple answer of a projectile being shot through the rocket.

This is not as crazy as it sounds, and engineers at SpaceX other than Musk entertained the possibility, since some circumstantial evidence supported the notion of an outside actor. Most notably, the first rupture in the rocket occurred about 200 feet above the ground, on the side of the vehicle facing the southwest. In this direction, about one mile away, lay a building leased by SpaceX’s main competitor in launch, United Launch Alliance. A separate video indicated a flash on the roof of this building, now known as the Spaceflight Processing Operations Center. The timing of this flash matched the interval it would take a projectile to travel from the building to the rocket.

A sniper on the roof of a competitor’s building—forget the Right Stuff, this was the stuff of a Mission: Impossible or James Bond movie.

At Musk’s direction, SpaceX worked this theory both internally and externally. Within the company, engineers and technicians actually took pressurized tanks that stored helium—one of these had burst, leading to the explosion—and shot at them in Texas to determine whether they would explode and what the result looked like. Externally, they sent the site director for their Florida operations, Ricky Lim, to inquire whether he might visit the roof of the United Launch Alliance building.

SpaceX pursued the sniper theory for more than a month. A few SpaceX employees told me that they did not stop this line of inquiry until the Federal Aviation Administration sent the company a letter definitively saying that there was no gunman involved. It would be interesting to see this letter, so I submitted a Freedom of Information Act request to the FAA in the spring of 2023. Because the federal FOIA process moves slowly, I did not expect to receive a response in time for the book. But it was worth a try anyway.

No reply came in 2023 or early 2024, when the final version of my book was due to my editor. Reentry was published last September, and still nothing. However, last week, to my great surprise and delight, I got a response from the FAA. It was the very letter I requested, sent from the FAA to Tim Hughes, the general counsel of SpaceX, on October 13, 2016. And yes, the letter says there was no gunman involved.

However, there were other things I did not know—namely, that the FBI had also investigated the incident.

The ULA rivalry

One of the most compelling elements of this story is that it involves SpaceX’s heated rival, United Launch Alliance. For a long time, ULA had the upper hand, but in recent years, the rivalry has taken a dramatic turn. Now we know that David would grow up and slay Goliath: Between the final rocket ULA launched last year (the Vulcan test flight on October 4) and the first rocket the company launched this year (Atlas V, April 28), SpaceX launched 90 rockets.

Ninety.

But it was a different story in the summer of 2016 in the months leading up to the Amos 6 failure. Back then, ULA was launching about 15 rockets a year, compared to SpaceX’s five. And ULA was launching all of the important science missions for NASA and the critical spy satellites for the US military. They were the big dog, SpaceX the pup.

In the early days of the Falcon 9 rocket, some ULA employees would drive to where SpaceX was working on the first booster and jeer at their efforts. And the rivalry played out not just on the launch pad but in courtrooms and on Capitol Hill. After ULA won an $11 billion block buy contract from the US Air Force to launch high-value military payloads into the early 2020s, Musk sued in April 2014. He alleged that the contract had been awarded without a fair competition and said the Falcon 9 rocket could launch the missions at a substantially lower price. Taxpayers, he argued, were being taken for a ride.

Eventually, SpaceX and the Air Force resolved their claims. The Air Force agreed to open some of its previously awarded national security missions to competitive bids. Over time, SpaceX has overtaken ULA even in this arena. During the most recent round of awards, SpaceX won 60 percent of the contracts compared to ULA’s 40 percent.

So when SpaceX raised the possibility of a ULA sniper, it came at an incendiary moment in the rivalry, when SpaceX was finally putting forth a very serious challenge to ULA’s dominance and monopoly.

It is no surprise, therefore, that ULA told SpaceX’s Ricky Lim to get lost when he wanted to see the roof of their building in Florida.

“Hair-on-fire stuff”

NASA officials were also deeply concerned by the loss of the Falcon 9 rocket in September 2016.

The space agency spent much of the 2010s working with SpaceX and Boeing to develop, test, and fly spacecraft that could carry humans into orbit. These were difficult years for NASA, which had to rely on Russia to get its astronauts into space while struggling to balance costs with astronaut safety. Then rockets started blowing up.

Consider this sequence from mid-2015 to mid-2016. In June 2015, the second stage of a Falcon 9 rocket carrying a cargo version of the Dragon spacecraft into orbit exploded. Less than two weeks later, NASA named four astronauts to its “commercial crew” cadre from which the initial pilots of Dragon and Starliner spacecraft would be selected. Finally, a little more than a year after this, a second Falcon 9 rocket upper stage detonated.

Video of CRS-7 launch and failure.

Even as it was losing Falcon 9 rockets, SpaceX revealed that it intended to upend NASA’s long-standing practice of fueling a rocket and then, when the vehicle reached a stable condition, putting crew on board. Rather, SpaceX said it would put the astronauts on board before fueling. This process became known as “load and go.”

NASA’s safety community went nuts.

“When SpaceX came to us and said we want to load the crew first and then the propellant, mushroom clouds went off in our safety community,” Phil McAlister, the head of NASA’s commercial programs, told me for Reentry. “I mean, hair-on-fire stuff. It was just conventional wisdom that you load the propellant first and get it thermally stable. Fueling is a very dynamic operation. The vehicle is popping and hissing. The safety community was adamantly against this.”

Amos-6 compounded these concerns. That’s because the rocket was not shot by a sniper. After months of painful investigation and analysis, engineers determined the rocket was lost due to the propellant-loading process. In their goal of rapidly fueling the Falcon 9 rocket, the SpaceX teams had filled the pressurized helium tanks too quickly, heating the aluminum liner and causing it to buckle. In their haste to load super-chilled propellant onto the Falcon 9, SpaceX had found its speed limit.

At NASA, it was not difficult to visualize astronauts in a Dragon capsule sitting atop an exploding rocket during propellant loading rather than a commercial satellite.

Enter the FBI

We should stop and appreciate the crucible that SpaceX engineers and technicians endured in the fall of 2016. They were simultaneously attempting to tease out the physics of a fiendishly complex failure; prove to NASA their exploding rocket was safe; convince safety officials that even though they had just blown up their rocket by fueling it too quickly, load-and-go was feasible for astronaut missions; increase the cadence of Falcon 9 missions to catch and surpass ULA; and, oh yes, gently explain to the boss that a sniper had not shot their rocket.

So there had to be some relief when, on October 13, Hughes received that letter from Dr. Michael C. Romanowski, director of Commercial Space Integration at the FAA.

According to this letter (see a copy here), three weeks after the launch pad explosion, SpaceX submitted “video and audio” along with its analysis of the failure to the FAA. “SpaceX suggested that in the company’s view, this information and data could be indicative of sabotage or criminal activity associated with the on-pad explosion of SpaceX’s Falcon 9,” the letter states.

This is notable because it suggests that Musk directed SpaceX to elevate the “sniper” theory to the point that the FAA should take it seriously. But there was more. According to the letter, SpaceX reported the same data and analysis to the Federal Bureau of Investigation in Florida.

After this, the Tampa Field Office of the FBI and its Criminal Investigative Division in Washington, DC, looked into the matter. And what did they find? Nothing, apparently.

“The FBI has informed us that based upon a thorough and coordinated review by the appropriate Federal criminal and security investigative authorities, there were no indications to suggest that sabotage or any other criminal activity played a role in the September 1 Falcon 9 explosion,” Romanowski wrote. “As a result, the FAA considers this matter closed.”

The failure of the Amos-6 mission would turn out to be a low point for SpaceX. For a few weeks, there were non-trivial questions about the company’s financial viability. But soon, SpaceX would come roaring back. In 2017, the Falcon 9 rocket launched a record 18 times, surpassing ULA for the first time. The gap would only widen. Last year, SpaceX launched 137 rockets to ULA’s five.

With Amos-6, therefore, SpaceX lost the battle. But it would eventually win the war—without anyone firing a shot.

Photo of Eric Berger

Eric Berger is the senior space editor at Ars Technica, covering everything from astronomy to private space to NASA policy, and author of two books: Liftoff, about the rise of SpaceX; and Reentry, on the development of the Falcon 9 rocket and Dragon. A certified meteorologist, Eric lives in Houston.

SpaceX pushed “sniper” theory with the feds far more than is publicly known Read More »

redditor-accidentally-reinvents-discarded-’90s-tool-to-escape-today’s-age-gates

Redditor accidentally reinvents discarded ’90s tool to escape today’s age gates


The ’90s called. They want their flawed age verification methods back.

A boy’s head with a fingerprint revealing something unclear but perhaps evocative

Credit: Aurich Lawson | Getty Images

Credit: Aurich Lawson | Getty Images

Back in the mid-1990s, when The Net was among the top box office draws and Americans were just starting to flock online in droves, kids had to swipe their parents’ credit cards or find a fraudulent number online to access adult content on the web. But today’s kids—even in states with the strictest age verification laws—know they can just use Google.

Last month, a study analyzing the relative popularity of Google search terms found that age verification laws shift users’ search behavior. It’s impossible to tell if the shift represents young users attempting to circumvent the child-focused law or adult users who aren’t the actual target of the laws. But overall, enforcement causes nearly half of users to stop searching for popular adult sites that comply with the laws and instead search for a noncompliant rival (48 percent) or for virtual private network (VPN) services (34 percent), which are used to mask a location and circumvent age checks on preferred sites, the study found.

“Individuals adapt primarily by moving to content providers that do not require age verification,” the study concluded.

Although the Google Trends data prevented researchers from analyzing trends by particular age groups, the findings help confirm critics’ fears that age verification laws “may be ineffective, potentially compromise user privacy, and could drive users toward less regulated, potentially more dangerous platforms,” the study said.

The authors warn that lawmakers are not relying enough on evidence-backed policy evaluations to truly understand the consequences of circumvention strategies before passing laws. Internet law expert Eric Goldman recently warned in an analysis of age-estimation tech available today that this situation creates a world in which some kids are likely to be harmed by the laws designed to protect them.

Goldman told Ars that all of the age check methods carry the same privacy and security flaws, concluding that technology alone can’t solve this age-old societal problem. And logic-defying laws that push for them could end up “dramatically” reshaping the Internet, he warned.

Zeve Sanderson, a co-author of the Google Trends study, told Ars that “if you’re a policymaker, in addition to being potentially nervous about the more dangerous content, it’s also about just benefiting a noncompliant firm.”

“You don’t want to create a regulatory environment where noncompliance is incentivized or they benefit in some way,” Sanderson said.

Sanderson’s study pointed out that search data is only part of the picture. Some users may be using VPNs and accessing adult sites through direct URLs rather than through search. Others may rely on social media to find adult content, a 2025 conference paper noted, “easily” bypassing age checks on the largest platforms. VPNs remain the most popular circumvention method, a 2024 article in the International Journal of Law, Ethics, and Technology confirmed, “and yet they tend to be ignored or overlooked by statutes despite their popularity.”

While kids are ducking age gates and likely putting their sensitive data at greater risk, adult backlash may be peaking over the red wave of age-gating laws already blocking adults from visiting popular porn sites in several states.

Some states controversially started requiring ID checks to access adult content, which prompted Pornhub owner Aylo to swiftly block access to its sites in certain states. Pornhub instead advocates for device-based age verification, which it claims is a safer choice.

Aylo’s campaign has seemingly won over some states that either explicitly recommend device-based age checks or allow platforms to adopt whatever age check method they deem “reasonable.” Other methods could include app store-based age checks, algorithmic age estimation (based on a user’s web activity), face scans, or even tools that guess users’ ages based on hand movements.

On Reddit, adults have spent the past year debating the least intrusive age verification methods, as it appears inevitable that adult content will stay locked down, and they dread a future where more and more adult sites might ask for IDs. Additionally, critics have warned that showing an ID magnifies the risk of users publicly exposing their sexual preferences if a data breach or leak occurs.

To avoid that fate, at least one Redditor has attempted to reinvent the earliest age verification method, promoting a resurgence of credit card-based age checks that society discarded as unconstitutional in the early 2000s.

Under those systems, an entire industry of age verification companies emerged, selling passcodes to access adult sites for a supposedly nominal fee. The logic was simple: Only adults could buy credit cards, so only adults could buy passcodes with credit cards.

If “a person buys, for a nominal fee, a randomly generated passcode not connected to them in any way” to access adult sites, one Redditor suggested about three months ago, “there won’t be any way to tie the individual to that passcode.”

“This could satisfy the requirement to keep stuff out of minors’ hands,” the Redditor wrote in a thread asking how any site featuring sexual imagery could hypothetically comply with US laws. “Maybe?”
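
To make that idea concrete, here is a minimal sketch of such an anonymous passcode scheme. It is purely illustrative (the class and function names are invented for this example, and nothing like it was proposed in this exact form by the Redditor or built by any vendor): the seller records only that a passcode was issued after a card payment cleared, never who bought it.

```kotlin
import java.security.SecureRandom
import java.util.Base64

// Hypothetical sketch only: an age-gate passcode service that never links a
// code to the person who bought it. All names here are invented.
class AnonymousPasscodeService {
    private val random = SecureRandom()
    private val validCodes = mutableSetOf<String>()  // stores codes only, no identity

    // Called after a payment processor confirms an adult-only credit card
    // purchase; the payment record and the passcode are never stored together.
    fun issuePasscode(): String {
        val bytes = ByteArray(16)
        random.nextBytes(bytes)
        val code = Base64.getUrlEncoder().withoutPadding().encodeToString(bytes)
        validCodes.add(code)
        return code
    }

    // A participating site checks only that the code was issued; it learns
    // nothing about who purchased it.
    fun isValid(code: String): Boolean = code in validCodes
}
```

The privacy claim rests entirely on the promise that the payment record and the passcode are never stored together, which is exactly the kind of assurance the ’90s services asked customers to take on faith.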

Several users rushed to educate the Redditor about the history of age checks. Those grasping for purely technology-based solutions today could be propping up the next industry flourishing from flawed laws, they said.

And, of course, since ’90s kids easily ducked those age gates, too, history shows why investing millions to build the latest and greatest age verification systems probably remains a fool’s errand after all these years.

The cringey early history of age checks

The earliest age verification systems were born out of Congress’s “first attempt to outlaw pornography online,” the LA Times reported. That attempt culminated in the Communications Decency Act of 1996.

Although the law was largely overturned a year later, the million-dollar age verification industry was already entrenched, partly due to its intriguing business model. These companies didn’t charge adult sites any fee to add age check systems—which required little technical expertise to implement—and instead shared a big chunk of their revenue with porn sites that opted in. Some sites got 50 percent of revenues, estimated in the millions, simply for adding the functionality.

The age check business was apparently so lucrative that in 2000, one adult site, which was sued for distributing pornographic images of children, pushed fans to buy subscriptions to its preferred service as a way of helping to fund its defense, Wired reported. “Please buy an Adult Check ID, and show your support to fight this injustice!” the site urged users. (The age check service promptly denied any association with the site.)

In a sense, the age check industry incentivized adult sites’ growth, an American Civil Liberties Union attorney told the LA Times in 1999. In turn, that fueled further growth in the age verification industry.

Some services made their link to adult sites obvious, like Porno Press, which charged a one-time fee of $9.95 to access affiliated adult sites, a Congressional filing noted. But many others tried to mask the link, opting for names like PayCom Billing Services, Inc. or CCBill, as Forbes reported, perhaps enticing more customers by drawing less attention on a credit card statement. Other firms had names like Adult Check, Mancheck, and Adult Sights, Wired reported.

Of these firms, the biggest and most successful was Adult Check. At its peak popularity in 2001, the service boasted 4 million customers willing to pay “for the privilege of ogling 400,000 sex sites,” Forbes reported.

At the head of the company was Laith P. Alsarraf, the CEO of the Adult Check service provider Cybernet Ventures.

Alsarraf testified to Congress several times, becoming a go-to expert witness for lawmakers behind the 1998 Child Online Protection Act (COPA). Like the version of the CDA that prompted it, this act was ultimately deemed unconstitutional. And some judges and top law enforcement officers defended Alsarraf’s business model with Adult Check in court—insisting that it didn’t impact adult speech and “at most” posed a “modest burden” that was “outweighed by the government’s compelling interest in shielding minors” from adult content.

But his apparent conflicts of interest also drew criticism. One judge warned in 1999 that “perhaps we do the minors of this country harm if First Amendment protections, which they will with age inherit fully, are chipped away in the name of their protection,” the American Civil Liberties Union (ACLU) noted.

Summing up the seeming conflict, Ann Beeson, an ACLU lawyer, told the LA Times, “the government wants to shut down porn on the Net. And yet their main witness is this guy who makes his money urging more and more people to access porn on the Net.”

’90s kids dodged Adult Check age gates

Adult Check’s subscription costs varied, but the service predictably got more expensive as its popularity spiked. In 1999, customers could snag a “lifetime membership” for $76.95 or else fork over $30 every two years or $20 annually, the LA Times reported. Those were good deals compared to the significantly higher costs documented in the 2001 Forbes report, which noted a three-month package was available for $20, or users could pay $20 monthly to access supposedly premium content.

Among Adult Check’s customers were apparently some savvy kids who snuck through the cracks in the system. In various threads debating today’s laws, several Redditors have claimed that they used Adult Check as minors in the ’90s, either admitting to stealing a parent’s credit card or sharing age-authenticated passcodes with friends.

“Adult Check? I remember signing up for that in the mid-late 90s,” one commenter wrote in a thread asking if anyone would ever show ID to access porn. “Possibly a minor friend of mine paid for half the fee so he could use it too.”

“Those years were a strange time,” the commenter continued. “We’d go see tech-suspense-horror-thrillers like The Net and Disclosure where the protagonist has to fight to reclaim their lives from cyberantagonists, only to come home to send our personal information along with a credit card payment so we could look at porn.”

“LOL. I remember paying for the lifetime package, thinking I’d use it for decades,” another commenter responded. “Doh…”

Adult Check thrived even without age check laws

Sanderson’s study noted that today, minors’ “first exposure [to adult content] typically occurs between ages 11–13,” which is “substantially earlier than pre-Internet estimates.” Kids seeking out adult content may be in a period of heightened risk-taking or lack self-control, while others may be exposed without ever seeking it out. Some studies suggest that kids who are more likely to seek out adult content could struggle with lower self-esteem, emotional problems, body image concerns, or depressive symptoms. These potential negative associations with adolescent exposure to porn have long been the basis for lawmakers’ fight to keep the content away from kids—and even the biggest publishers today, like Pornhub, agree that it’s a worthy goal.

After parents got wise to ’90s kids dodging age gates, pressure predictably mounted on Adult Check to solve the problem, despite Adult Check consistently admitting that its system wasn’t foolproof. Alsarraf claimed that Adult Check developed “proprietary” technology to detect when kids were using credit cards or when multiple kids were attempting to use the same passcode at the same time from different IP addresses. He also claimed that Adult Check could detect stolen credit cards, bogus card numbers, card numbers “posted on the Internet,” and other fraud.

Meanwhile, the LA Times noted, Cybernet Ventures pulled in an estimated $50 million in 1999, ensuring that the CEO could splurge on a $690,000 house in Pasadena and a $100,000 Hummer. Although Adult Check was believed to be his most profitable venture at that time, Alsarraf told the LA Times that he wasn’t really invested in COPA passing.

“I know Adult Check will flourish,” Alsarraf said, “with or without the law.”

And he was apparently right. By 2001, subscriptions banked an estimated $320 million.

After the CDA and COPA were blocked, “many website owners continue to use Adult Check as a responsible approach to content accessibility,” Alsarraf testified.

While adult sites were likely just in it for the paychecks—which reportedly were dependably delivered—he positioned this ongoing growth as fueled by sites voluntarily turning to Adult Check to protect kids and free speech. “Adult Check allows a free flow of ideas and constitutionally protected speech to course through the Internet without censorship and unreasonable intrusion,” Alsarraf said.

“The Adult Check system is the least restrictive, least intrusive method of restricting access to content that requires minimal cost, and no parental technical expertise and intervention: It does not judge content, does not inhibit free speech, and it does not prevent access to any ideas, word, thoughts, or expressions,” Alsarraf testified.

Britney Spears aided Adult Check’s downfall

Adult Check’s downfall ultimately came in part thanks to Britney Spears, Wired reported in 2002. Spears went from Mickey Mouse Club child star to the “Princess of Pop” at 16 years old with her hit “Baby One More Time” in 1999, the same year that Adult Check rose to prominence.

Today, Spears is well-known for her activism, but in the late 1990s and early 2000s, she was one of the earliest victims of fake online porn.

Spears submitted documents in a lawsuit raised by the publisher of a porn magazine called Perfect 10. The publisher accused Adult Check of enabling the infringement of its content featured on the age check provider’s partner sites, and Spears’ documents helped prove that Adult Check was also linking to “non-existent nude photos,” allegedly in violation of unfair competition laws. The case was an early test of online liability, and Adult Check seemingly learned the hard way that the courts weren’t on its side.

That suit prompted an injunction blocking Adult Check from partnering with sites promoting supposedly illicit photos of “models and celebrities,” which it said was no big deal because it only comprised about 6 percent of its business.

However, after losing the lawsuit in 2004, Adult Check’s reputation took a hit, and it fell out of the pop lexicon. Although Cybernet Ventures continued to exist, Adult Check screening was dropped from sites, as it was no longer considered the gold standard in age verification. Perhaps more importantly, it was no longer required by law.

But although millions validated Adult Check for years, not everybody in the ’90s bought into Adult Check’s claims that it was protecting kids from porn. Some critics said it only provided a veneer of online safety without meaningfully impacting kids. Most of the country—more than 250 million US residents—never subscribed.

“I never used Adult Check,” one Redditor said in a thread pondering whether age gate laws might increase the risks of government surveillance. “My recollection was that it was an untrustworthy scam and unneeded barrier for the theater of legitimacy.”

Alsarraf keeps a lower profile these days and did not respond to Ars’ request to comment.

The rise and fall of Adult Check may have prevented more legally viable age verification systems from gaining traction. The ACLU argued that its popularity trampled the momentum of the “least restrictive” method for age checks available in the ’90s, a system called the Platform for Internet Content Selection (PICS).

Based on rating and filtering technology, PICS allowed content providers or third-party interest groups to create private rating systems so that “individual users can then choose the rating system that best reflects their own values, and any material that offends them will be blocked from their homes.”

However, like all age check systems, PICS was also criticized as being imperfect. Legal scholar Lawrence Lessig called it “the devil” because “it allows censorship at any point on the chain of distribution” of online content.

Although the age verification technology has changed, today’s lawmakers are stuck in the same debate decades later, with no perfect solutions in sight.

SCOTUS to rule on constitutionality of age gate laws

This summer, the Supreme Court will decide whether a Texas law blocking minors’ access to porn is constitutional. The decision could either stunt the momentum or strengthen the backbone of nearly 20 laws in red states across the country seeking to age-gate the Internet.

For privacy advocates opposing the laws, the SCOTUS ruling feels like a sink-or-swim moment for age gates, depending on which way the court swings. And it will come just as blue states like Colorado have recently begun pushing for age gates, too. Meanwhile, other laws increasingly seek to safeguard kids’ privacy and prevent social media addiction by also requiring age checks.

Since the 1990s, the US has debated how to best keep kids away from harmful content without trampling adults’ First Amendment rights. And while cruder credit card-based systems like Adult Check are no longer seen as viable, it’s clear that for lawmakers today, technology is still viewed as both the problem and the solution.

While lawmakers claim that the latest technology makes it easier than ever to access porn, advancements like digital IDs, device-based age checks, or app store age checks seem to signal salvation, making it easier to digitally verify user ages. And some artificial intelligence solutions have likely made lawmakers’ dreams of age-gating the Internet appear even more within reach.

Critics have condemned age gates as unconstitutionally limiting adults’ access to legal speech, at the furthest extreme accusing conservatives of seeking to censor all adult content online or expand government surveillance by tracking people’s sexual identity. (Goldman noted that “Russell Vought, an architect of Project 2025 and President Trump’s Director of the Office of Management and Budget, admitted that he favored age authentication mandates as a ‘back door’ way to censor pornography.”)

Ultimately, SCOTUS could end up deciding if any kind of age gate is ever appropriate. The court could perhaps rule that strict scrutiny, which requires a narrowly tailored solution to serve a compelling government interest, must be applied, potentially ruling out all of lawmakers’ suggested strategies. Or the court could decide that strict scrutiny applies but age checks are narrowly tailored. Or it could go the other way and rule that strict scrutiny does not apply, so all state lawmakers need to show is that their basis for requiring age verification is rationally connected to their interest in blocking minors from adult content.

Age verification remains flawed, experts say

If there’s anything the ’90s can teach lawmakers about age gates, it’s that creating an age verification industry dependent on adult sites will only incentivize the creation of more adult sites that benefit from the new rules. Back then, when age verification systems increased sites’ revenues, compliant sites were rewarded, but in today’s climate, it’s the noncompliant sites that stand to profit by not authenticating ages.

Sanderson’s study noted that Louisiana “was the only state that implemented age verification in a manner that plausibly preserved a user’s anonymity while verifying age,” which is why Pornhub didn’t block the state over its age verification law. But other states that Pornhub blocked passed copycat laws that “tended to be stricter, either requiring uploads of an individual’s government identification,” methods requiring providing other sensitive data, “or even presenting biometric data such as face scanning,” the study noted.

The technology continues evolving as the debate rages on. Some of the most popular platforms and biggest tech companies have been testing new age estimation methods this year. Notably, Discord is testing out face scans in the United Kingdom and Australia, and both Meta and Google are testing technology to supposedly detect kids lying about their ages online.

But a solution has not yet been found as parents and their lawyers circle social media companies they believe are harming their kids. In fact, the unreliability of the tech remains an issue for Meta, which is perhaps the most motivated to find a fix, having long faced immense pressure to improve child safety on its platforms. Earlier this year, Meta had to yank its age detection tool after the “measure didn’t work as well as we’d hoped and inadvertently locked out some parents and guardians who shared devices with their teens,” the company said.

On April 21, Meta announced that it started testing the tech in the US, suggesting the flaws were fixed, but Meta did not directly respond to Ars’ request to comment in more detail on updates.

Two years ago, Ash Johnson, a senior policy manager at the nonpartisan nonprofit think tank the Information Technology and Innovation Foundation (ITIF), urged Congress to “support more research and testing of age verification technology,” saying that the government’s last empirical evaluation was in 2014. She noted then that “the technology is not perfect, and some children will break the rules, eventually slipping through the safeguards,” but that lawmakers need to understand the trade-offs of advocating for different tech solutions or else risk infringing user privacy.

More research is needed, Johnson told Ars, while Sanderson’s study suggested that regulators should also conduct circumvention research or be stuck with laws that have a “limited effectiveness as a standalone policy tool.”

For example, while AI solutions are increasingly accurate—and in one Facebook survey were overwhelmingly more popular with users, Goldman’s analysis noted—the tech still struggles to tell a 17-year-old from an 18-year-old.

Like Aylo, ITIF recommends device-based age authentication as the least restrictive method, Johnson told Ars. Perhaps the biggest issue with that option, though, is that kids may have an easy time accessing adult content on devices shared with parents, Goldman noted.

Not sharing Johnson’s optimism, Goldman wrote that “there is no ‘preferred’ or ‘ideal’ way to do online age authentication.” Even a perfect system that accurately authenticates age every time would be flawed, he suggested.

“Rather, they each fall on a spectrum of ‘dangerous in one way’ to ‘dangerous in a different way,'” he wrote, concluding that “every solution has serious privacy, accuracy, or security problems.”

Kids at “grave risk” from uninformed laws

As a “burgeoning” age verification industry swells, Goldman wants to see more earnest efforts from lawmakers to “develop a wider and more thoughtful toolkit of online child safety measures.” They could start, he suggested, by consistently defining minors in laws so it’s clear who is being regulated and what access is being restricted. They could then provide education to parents and minors to help them navigate online harms.

Without such careful consideration, Goldman predicts a dystopian future prompted by age verification laws. If SCOTUS endorses them, users could become so accustomed to age gates that they start entering sensitive information into various web platforms without a second thought. Even the government knows that would be a disaster, Goldman said.

“Governments around the world want people to think twice before sharing sensitive biometric information due to the information’s immutability if stolen,” Goldman wrote. “Mandatory age authentication teaches them the opposite lesson.”

Goldman recommends that lawmakers start seeking an information-based solution to age verification problems rather than depending on tech to save the day.

“Treating the online age authentication challenges as purely technological encourages the unsupportable belief that its problems can be solved if technologists ‘nerd harder,'” Goldman wrote. “This reductionist thinking is a categorical error. Age authentication is fundamentally an information problem, not a technology problem. Technology can help improve information accuracy and quality, but it cannot unilaterally solve information challenges.”

Lawmakers could potentially minimize risks to kids by only verifying age when someone tries to access restricted content or “by compelling age authenticators to minimize their data collection” and “promptly delete any highly sensitive information” collected. That likely wouldn’t stop some vendors from collecting or retaining data anyway, Goldman suggested. But it could be a better standard to protect users of all ages from inevitable data breaches, since we know that “numerous authenticators have suffered major data security failures that put authenticated individuals at grave risk.”

“If the policy goal is to protect minors online because of their potential vulnerability, then forcing minors to constantly decide whether or not to share highly sensitive information with strangers online is a policy failure,” Goldman wrote. “Child safety online needs a whole-of-society response, not a delegate-and-pray approach.”

Photo of Ashley Belanger

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

Redditor accidentally reinvents discarded ’90s tool to escape today’s age gates Read More »

monty-python-and-the-holy-grail-turns-50

Monty Python and the Holy Grail turns 50


Ars staffers reflect upon the things they love most about this masterpiece of absurdist comedy.

King Arthur and his knights staring up at something.

Credit: EMI Films/Python (Monty) Pictures

Credit: EMI Films/Python (Monty) Pictures

Monty Python and the Holy Grail is widely considered to be among the best comedy films of all time, and it’s certainly one of the most quotable. This absurdist masterpiece sending up Arthurian legend turns 50 (!) this year.

It was partly Python member Terry Jones’ passion for the Middle Ages and Arthurian legend that inspired Holy Grail and its approach to comedy. (Jones even went on to direct a 2004 documentary, Medieval Lives.) The troupe members wrote several drafts beginning in 1973, and Jones and Terry Gilliam were co-directors—the first full-length feature for each, so filming was one long learning process. Reviews were mixed when Holy Grail was first released—much like they were for Young Frankenstein (1974), another comedic masterpiece—but audiences begged to differ. It was the top-grossing British film screened in the US in 1975. And its reputation has only grown over the ensuing decades.

The film’s broad cultural influence extends beyond the entertainment industry. Holy Grail has been the subject of multiple scholarly papers examining such topics as its effectiveness at teaching Arthurian literature or geometric thought and logic, the comedic techniques employed, and why the depiction of a killer rabbit is so fitting (killer rabbits frequently appear drawn in the margins of Gothic manuscripts). My personal favorite was a 2018 tongue-in-cheek paper on whether the Black Knight could have survived long enough to make good on his threat to bite King Arthur’s legs off (tl;dr: no).

So it’s not at all surprising that Monty Python and the Holy Grail proved to be equally influential and beloved by Ars staffers, several of whom offer their reminiscences below.

They were nerd-gassing before it was cool

The Monty Python troupe famously made Holy Grail on a shoestring budget—so much so that they couldn’t afford to have the knights ride actual horses. (There are only a couple of scenes featuring a horse, and apparently it’s the same horse.) Rather than throwing up their hands in resignation, that very real constraint fueled the Pythons’ creativity. The actors decided the knights would simply pretend to ride horses while their porters followed behind, banging halves of coconut shells together to mimic the sound of horses’ hooves—a time-honored Foley effect dating back to the early days of radio.

Being masters of absurdist humor, naturally, they had to call attention to it. Arthur and his trusty servant, Patsy (Gilliam), approach the castle of their first potential recruit. When Arthur informs the guards that they have “ridden the length and breadth of the land,” one of the guards isn’t having it. “What, ridden on a horse? You’re using coconuts! You’ve got two empty halves of coconut, and you’re bangin’ ’em together!”

That raises the obvious question: Where did they get the coconuts? What follows is one of the greatest examples of nerd-gassing yet to appear on film. Arthur claims he and Patsy found them, but the guard is incredulous since the coconut is tropical and England is a temperate zone. Arthur counters by invoking the example of migrating swallows. Coconuts do not migrate, but Arthur suggests they could be carried by swallows gripping a coconut by the husk.

The guard still isn’t having it. It’s a question of getting the weight ratios right, you see, to maintain air-speed velocity. Another guard gets involved, suggesting it might be possible with an African swallow, but that species is non-migratory. And so on. The two are still debating the issue as an exasperated Arthur rides off to find another recruit.

The best part? There’s a callback to that scene late in the film when the knights must answer three questions to cross the Bridge of Death or else be chucked into the Gorge of Eternal Peril. When it’s Arthur’s turn, the third question is “What is the air-speed velocity of an unladen swallow?” Arthur asks whether this is an African or a European swallow. This stumps the Bridgekeeper, who gets flung into the gorge. Sir Bedevere asks how Arthur came to know so much about swallows. Arthur replies, “Well, you have to know these things when you’re a king, you know.”

The plucky Black Knight (“It’s just a flesh wound!”) will always hold a special place in my heart, but that debate over air-speed velocities of laden versus unladen swallows encapsulates what makes Holy Grail a timeless masterpiece.

Jennifer Ouellette

A bunny out for blood

“Oh, it’s just a harmless little bunny, isn’t it?”

Despite their appearances, rabbits aren’t always the most innocent-looking animals. Recent reports of rabbit strikes on airplanes are the latest examples of the mayhem these creatures of chaos can inflict on unsuspecting targets.

I learned that lesson a long time ago, though, thanks partly to my way-too-early viewings of the animated Watership Down and Monty Python and the Holy Grail. There I was, about 8 years old and absent of paternal accompaniment, watching previously cuddly creatures bloodying each other and severing the heads of King Arthur’s retinue. While Watership Down’s animal-on-animal violence might have been a bit scarring at that age, I enjoyed the slapstick humor of the Rabbit of Caerbannog scene (many of the jokes my colleagues highlight went over my head upon my initial viewing).

Despite being warned of the creature’s viciousness by Tim the Enchanter, the Knights of the Round Table dismiss the Merlin stand-in’s fear and charge the bloodthirsty creature. But the knights quickly realize they’re no match for the “bad-tempered rodent,” which zips around in the air, goes straight for the throat, and causes the surviving knights to run away in fear. If Arthur and his knights possessed any self-awareness, they might have learned a lesson about making assumptions about appearances.

But hopefully that’s a takeaway for viewers of 1970s British pop culture involving rabbits. Even cute bunnies, as sweet as they may seem initially, can be engines of destruction: “Death awaits you all with nasty, big, pointy teeth.”

Jacob May

Can’t stop the music

The most memorable songs from Monty Python and the Holy Grail were penned by Neil Innes, who frequently collaborated with the troupe and appears in the film. His “Brave Sir Robin” amusingly parodied minstrel tales of valor by imagining all the torturous ways that one knight might die. Then there’s his “Knights of the Round Table,” the first musical number performed by the cast—if you don’t count the monk chants punctuated with slaps on the head with wooden planks. That song hilariously rouses not just wild dancing from knights but also claps from prisoners who otherwise dangle from cuffed wrists.

But while these songs have stuck in my head for decades, Monty Python’s Terry Jones once gave me a reason to focus on the canned music instead, and it weirdly changed the way I’ve watched the movie ever since.

Back in 2001, Jones told Billboard that an early screening for investors almost tanked the film. He claimed that after the first five minutes, the movie got no laughs whatsoever. For Jones, whose directorial debut could have died in that moment, the silence was unthinkable. “It can’t be that unfunny,” he told Billboard. “There must be something wrong.”

Jones soon decided that the soundtrack was the problem, immediately cutting the “wonderfully rich, atmospheric” songs penned by Innes that seemed to be “overpowering the funny bits” in favor of canned music.

Reading this prompted an immediate rewatch because I needed to know what the first bit was that failed to get a laugh from that fateful audience. It turned out to be the scene where King Arthur encounters peasants in a field who deny knowing that there even was a king. As usual, I was incapable of holding back a burst of laughter when one peasant woman grieves, “Well, I didn’t vote for you” while packing random clumps of mud into the field. It made me wonder if any song might have robbed me of that laugh, and that made me pay closer attention to how Jones flipped the script and somehow meticulously used the canned music to extract more laughs.

The canned music was licensed from a British sound library that helped the 1920s movie business evolve past silent films. They’re some of the earliest songs to summon emotion from viewers whose eyes were glued to a screen. In Monty Python and the Holy Grail, which features a naive King Arthur enduring his perilous journey on a wood stick horse, the canned music provides the most predictable soundtrack you could imagine that might score a child’s game of make-believe. It also plays the straight man by earnestly pulsing to convey deep trouble as knights approach the bridge of death or heavenly trumpeting the anticipated appearance of the Holy Grail.

It’s easy to watch the movie without noticing the canned music, as the colorful performances are Jones’ intended focus. Not relying on punchlines, the group couldn’t afford any nuance to be lost. But there is at least one moment where Jones obviously relies on the music to overwhelm the acting to compel a belly laugh. Just before “the most foul, cruel, bad-tempered rodent” appears, a quick surge of dramatic music that cuts out just as suddenly makes it all the more absurd when the threat emerges and appears to be an “ordinary rabbit.”

It’s during this scene, too, that King Arthur delivers a line that sums up how predictably odd but deceptively artful the movie’s use of canned music really is. When he meets Tim the Enchanter—who tries to warn the knights about the rabbit’s “pointy teeth” by evoking loud thunder rolls and waggling his fingers in front of his mouth—Arthur turns to the knights and says, “What an eccentric performance.”

Ashley Belanger

Thank the “keg rock conclave”

I tried to make music a big part of my teenage identity because I didn’t have much else. I was a suburban kid with a B-minus/C-plus average, no real hobbies, sports, or extra-curriculars, plus a deeply held belief that Nine Inch Nails, the Beastie Boys, and Aphex Twin would never get their due as geniuses. Classic Rock, the stuff jocks listened to at parties and practice? That my dad sang along to after having a few? No thanks.

There were cultural heroes, there were musty, overwrought villains, and I knew the score. Or so I thought.

I don’t remember exactly where I found the little fact that scarred my oppositional ego forever. It might have been Spin magazine, a weekend MTV/VH1 feature, or that Rolling Stone book about the ’70s (I bought it for the punks, I swear). But at some point, I learned that a who’s-who of my era’s played-out bands—Led Zeppelin, Pink Floyd, even Jethro (freaking) Tull—personally funded one of my favorite subversive movies. Jimmy Page and Robert Plant, key members of the keg-rock conclave, attended the premiere.

It was such a small thing, but it raised such big, naive, adolescent questions. Somebody had to pay for Holy Grail—it didn’t just arrive as something passed between nerds? People who make things I might not enjoy could financially support things I do enjoy? There was a time when today’s overcelebrated dinosaurs were cool and hip in the subculture? I had common ground with David Gilmour?

Ever since, when a reference to Holy Grail is made, especially to how cheap it looks, I think about how I once learned that my beloved nerds (or theater kids) wouldn’t even have those coconut horses were it not for some decent-hearted jocks.

Kevin Purdy

A masterpiece of absurdism

“I blow my nose at you, English pig-dog!” Credit: EMI Films/Python (Monty) Pictures

I was young enough that I’d never previously stayed awake until midnight on New Year’s Eve. My parents were off to a party, my younger brother was in bed, and my older sister had a neglectful attitude toward babysitting me. So I was parked in front of the TV when the local PBS station aired a double feature of The Yellow Submarine and The Holy Grail.

At the time, I probably would have said my mind was blown. In retrospect, I’d prefer to think that my mind was expanded.

For years, those films mostly existed as a source of one-line evocations of sketch comedy nirvana that I’d swap with my friends. (I’m not sure I’ve ever lacked a group of peers where a properly paced “With… a herring!” had meaning.) But over time, I’ve come to appreciate other ways that the films have stuck with me. I can’t say whether they set me on an aesthetic trajectory that has continued for decades or if they were just the first things to tickle some underlying tendencies that were lurking in my not-yet-fully-wired brain.

In either case, my brain has developed into a huge fan of absurdism, whether in sketch comedy, longer narratives like Arrested Development or the lyrics of Courtney Barnett. Or, let’s face it, any stream of consciousness lyrics I’ve been able to hunt down. But Monty Python remains a master of the form, and The Holy Grail’s conclusion in a knight bust remains one of its purest expressions.

A bit less obviously, both films are probably my first exposures to anti-plotting, where linearity and a sense of time were really beside the point. With some rare exceptions—the eating of Sir Robin’s minstrels, Ringo putting a hole in his pocket—the order of the scenes was completely irrelevant. Few of the incidents had much consequence for future scenes. Since I was unused to staying up past midnight at that age, I’d imagine the order of events was fuzzy already by the next day. By the time I was swapping one-line excerpts with friends, it was long gone. And it just didn’t matter.

In retrospect, I think that helped ready my brain for things like Catch-22 and its convoluted, looping, non-Euclidean plotting. The novel felt like a revelation when I first read it, but I’ve since realized it fits a bit more comfortably within a spectrum of works that play tricks with time and find clever connections among seemingly random events.

I’m not sure what possessed someone to place these two films together as appropriate New Year’s Eve programming. But I’d like to think it was more intentional than I had any reason to suspect at the time. And I feel like I owe them a debt.

John Timmer

A delightful send-up of autocracy

King Arthur attempting to throttle a peasant in the field

“See the violence inherent in the system!” Credit: Python (Monty) Pictures

What an impossible task to pick just a single thing I love about this film! But if I had to choose one scene, it would be when a lost King Arthur comes across an old woman—but oops, it’s actually a man named Dennis—and ends up in a discussion about medieval politics. Arthur explains that he is king because the Lady of the Lake conferred the sword Excalibur on him, signifying that he should rule as king of the Britons by divine right.

To this, Dennis replies, “Strange women lying in ponds distributing swords is no basis for a system of government. Supreme executive power derives from a mandate from the masses, not from some farcical aquatic ceremony.”

Even though it was filmed half a century ago, the scene offers a delightful send-up of autocracy. And not to be too much of a downer here, but all of us living in the United States probably need to be reminded that living in an autocracy would suck for a lot of reasons. So let’s not do that.

Eric Berger

Photo of Jennifer Ouellette

Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.

Monty Python and the Holy Grail turns 50 Read More »

ios-and-android-juice-jacking-defenses-have-been-trivial-to-bypass-for-years

iOS and Android juice jacking defenses have been trivial to bypass for years


SON OF JUICE JACKING ARISES

New ChoiceJacking attack allows malicious chargers to steal data from phones.

Credit: Aurich Lawson | Getty Images

Credit: Aurich Lawson | Getty Images

About a decade ago, Apple and Google started updating iOS and Android, respectively, to make them less susceptible to “juice jacking,” a form of attack that could surreptitiously steal data or execute malicious code when users plug their phones into special-purpose charging hardware. Now, researchers are revealing that, for years, the mitigations have suffered from a fundamental defect that has made them trivial to bypass.

“Juice jacking” was coined in a 2011 article on KrebsOnSecurity detailing an attack demonstrated at a Defcon security conference at the time. Juice jacking works by equipping a charger with hidden hardware that can access files and other internal resources of phones, in much the same way that a computer can when a user connects it to the phone.

An attacker would then make the chargers available in airports, shopping malls, or other public venues for use by people looking to recharge depleted batteries. While the charger was ostensibly only providing electricity to the phone, it was also secretly downloading files or running malicious code on the device behind the scenes. Starting in 2012, both Apple and Google tried to mitigate the threat by requiring users to click a confirmation button on their phones before a computer—or a computer masquerading as a charger—could access files or execute code on the phone.

The logic behind the mitigation was rooted in a key portion of the USB protocol that, in the parlance of the specification, dictates that a USB port can act as either a “host” device or a “peripheral” device at any given time, but not both. In the context of phones, this meant they could either (see the conceptual sketch after this list):

  • Host the device on the other end of the USB cord—for instance, if a user connects a thumb drive or keyboard. In this scenario, the phone is the host that has access to the internals of the drive, keyboard or other peripheral device.
  • Act as a peripheral device that’s hosted by a computer or malicious charger, which under the USB paradigm is a host that has system access to the phone.
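Reduced to pseudocode, that 2012-era mitigation amounts to gating the peripheral role’s data access behind a user tap. The Python sketch below is purely conceptual (none of these names correspond to real iOS or Android APIs), but it captures the trust model at issue: the phone assumes that the only thing capable of pressing “Allow” is the person holding it.

    # Conceptual model only; class and function names are illustrative, not real mobile OS APIs.
    def wait_for_user_tap(prompt):
        # Stand-in for the OS drawing a dialog and blocking until the screen is tapped.
        return input(f"{prompt} [allow/deny]: ").strip().lower()

    class Phone:
        def __init__(self):
            self.usb_role = "peripheral"        # the default when plugged into a host or charger
            self.data_access_granted = False

        def on_host_requests_data(self):
            # The post-2012 mitigation: surface a dialog and wait for a human decision.
            # ChoiceJacking's core finding is that "a human decision" can be forged by
            # input injected over a second channel.
            answer = wait_for_user_tap("Allow this computer to access your files?")
            self.data_access_granted = (answer == "allow")

    if __name__ == "__main__":
        phone = Phone()
        phone.on_host_requests_data()
        print("data access granted:", phone.data_access_granted)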

An alarming state of USB security

Researchers at the Graz University of Technology in Austria recently made a discovery that completely undermines the premise behind the countermeasure: it rests on the assumption that USB hosts can’t inject input that autonomously approves the confirmation prompt. Given the restriction against a USB device simultaneously acting as a host and peripheral, the premise seemed sound. The trust models built into both iOS and Android, however, present loopholes that can be exploited to defeat the protections. The researchers went on to devise ChoiceJacking, the first known attack to defeat juice-jacking mitigations.

“We observe that these mitigations assume that an attacker cannot inject input events while establishing a data connection,” the researchers wrote in a paper scheduled to be presented in August at the Usenix Security Symposium in Seattle. “However, we show that this assumption does not hold in practice.”

The researchers continued:

We present a platform-agnostic attack principle and three concrete attack techniques for Android and iOS that allow a malicious charger to autonomously spoof user input to enable its own data connection. Our evaluation using a custom cheap malicious charger design reveals an alarming state of USB security on mobile platforms. Despite vendor customizations in USB stacks, ChoiceJacking attacks gain access to sensitive user files (pictures, documents, app data) on all tested devices from 8 vendors including the top 6 by market share.

In response to the findings, Apple updated the confirmation dialogs in last month’s release of iOS/iPadOS 18.4 to require user authentication in the form of a PIN or password. While the researchers were investigating their ChoiceJacking attacks last year, Google independently updated its confirmation dialog with the release of Android 15 in November. The researchers say the new mitigation works as expected on fully updated Apple and Android devices. Given the fragmentation of the Android ecosystem, however, many Android devices remain vulnerable.

All three of the ChoiceJacking techniques defeat the original Android juice-jacking mitigations. One of them also works against those defenses in Apple devices. In all three, the charger acts as a USB host to trigger the confirmation prompt on the targeted phone.

The attacks then exploit various weaknesses in the OS that allow the charger to autonomously inject “input events” that can enter text or click buttons presented in screen prompts as if the user were doing so directly on the phone. In all three, the charger eventually gains two conceptual channels to the phone: (1) an input channel that lets it spoof user consent and (2) a file-access connection that can steal files.

An illustration of ChoiceJacking attacks. (1) The victim device is attached to the malicious charger. (2) The charger establishes an extra input channel. (3) The charger initiates a data connection. User consent is needed to confirm it. (4) The charger uses the input channel to spoof user consent. Credit: Draschbacher et al.

It’s a keyboard, it’s a host, it’s both

In the ChoiceJacking variant that defeats both Apple- and Google-devised juice-jacking mitigations, the charger starts out as a USB keyboard or a similar peripheral device. It sends keyboard input over USB, ranging from simple key presses, such as arrow up or down, to more complex key combinations that open system settings or the status bar.

The input establishes a Bluetooth connection to a second miniaturized keyboard hidden inside the malicious charger. The charger then uses USB Power Delivery, a standard available over USB-C that lets connected devices negotiate, through exchanged messages, which one supplies power and which one acts as the USB host. One of those messages triggers what the specification calls a Data Role Swap, flipping the host and device roles.

A simulated ChoiceJacking charger. Bidirectional USB lines allow for data role swaps. Credit: Draschbacher et al.

With the charger now acting as a USB host, it triggers the file access consent dialog. At the same time, the charger still controls the phone through the Bluetooth keyboard it paired earlier, which it uses to approve that consent dialog.

The full steps for the attack, provided in the Usenix paper, are as follows (a condensed pseudocode sketch appears after the list):

1. The victim device is connected to the malicious charger. The device has its screen unlocked.

2. At a suitable moment, the charger performs a USB PD Data Role (DR) Swap. The mobile device now acts as a USB host, the charger acts as a USB input device.

3. The charger generates input to ensure that BT is enabled.

4. The charger navigates to the BT pairing screen in the system settings to make the mobile device discoverable.

5. The charger starts advertising as a BT input device.

6. By constantly scanning for newly discoverable Bluetooth devices, the charger identifies the BT device address of the mobile device and initiates pairing.

7. Through the USB input device, the charger accepts the Yes/No pairing dialog appearing on the mobile device. The Bluetooth input device is now connected.

8. The charger sends another USB PD DR Swap. It is now the USB host, and the mobile device is the USB device.

9. As the USB host, the charger initiates a data connection.

10. Through the Bluetooth input device, the charger confirms its own data connection on the mobile device.
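Condensed into pseudocode, the sequence looks roughly like the sketch below. The MockCharger class is a placeholder for the researchers’ custom hardware rather than a real library or working exploit code, but it shows how the two Power Delivery role swaps bracket the Bluetooth pairing.

    # Conceptual outline of the published attack flow; every method is a logging stub.
    class MockCharger:
        def __getattr__(self, name):
            # Each "step" simply prints what the real charger firmware would do.
            return lambda *args: print(f"[charger] {name}{' ' + repr(args) if args else ''}")

    def keyboard_variant(charger):
        charger.pd_data_role_swap()             # step 2: phone becomes USB host, charger acts as a USB keyboard
        charger.inject_keys_enable_bluetooth()  # step 3
        charger.inject_keys_open_pairing()      # step 4: make the phone discoverable
        charger.advertise_bt_keyboard()         # step 5
        charger.scan_and_pair_with_phone()      # step 6
        charger.inject_keys_accept_pairing()    # step 7: approve the Yes/No dialog via USB input
        charger.pd_data_role_swap()             # step 8: charger becomes the USB host again
        charger.request_data_connection()       # step 9: triggers the consent prompt
        charger.bt_keys_confirm_prompt()        # step 10: approve the prompt over Bluetooth

    keyboard_variant(MockCharger())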

This technique works against all but one of the 11 phone models tested, with the holdout being an Android device running Vivo’s Funtouch OS, which doesn’t fully support the USB PD protocol. The attacks against the 10 remaining models take about 25 to 30 seconds to establish the Bluetooth pairing, depending on the phone model being hacked. The attacker then has read and write access to files stored on the device for as long as the phone remains connected to the charger.

Two more ways to hack Android

The two other members of the ChoiceJacking family work only against the juice-jacking mitigations that Google put into Android. In the first, the malicious charger invokes the Android Open Accessory Protocol (AOAP), which allows a USB host to act as an input device once it sends a special message that switches the connection into accessory mode.

The protocol specifically dictates that while in accessory mode, a USB host can no longer respond to other USB interfaces, such as the Picture Transfer Protocol for transferring photos and videos and the Media Transfer Protocol for transferring files in other formats. Despite the restriction, all of the Android devices tested violated the specification by accepting AOAP messages even when the USB host hadn’t first completed the switch into accessory mode. The charger can exploit this implementation flaw to autonomously complete the required user confirmations.
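To make the protocol in question concrete, here is roughly what the documented AOAP handshake looks like from the host side, sketched with the pyusb library. The vendor ID filter and accessory strings are placeholders, and the sketch shows only the standard handshake; the finding described above is that tested phones honored AOAP traffic even without that handshake completing as the spec requires.

    # Sketch of the documented AOAP handshake from the USB host's side.
    # Requires the pyusb package; the vendor ID below is a placeholder.
    import struct
    import usb.core

    ACCESSORY_GET_PROTOCOL = 51   # vendor control requests defined by the AOA spec
    ACCESSORY_SEND_STRING = 52
    ACCESSORY_START = 53

    dev = usb.core.find(idVendor=0x18d1)  # placeholder filter; real phones vary by manufacturer
    if dev is None:
        raise SystemExit("no matching Android device attached")

    # Ask the phone which AOA protocol version it speaks (2 indicates AOA 2.0, which adds HID support).
    version = struct.unpack("<H", bytes(dev.ctrl_transfer(0xC0, ACCESSORY_GET_PROTOCOL, 0, 0, 2)))[0]
    print("AOA protocol version:", version)

    # Identify the accessory to the phone; wIndex selects the string (0 = manufacturer, 1 = model).
    for index, value in enumerate([b"ExampleCo\0", b"DemoAccessory\0"]):
        dev.ctrl_transfer(0x40, ACCESSORY_SEND_STRING, 0, index, value)

    # Ask the phone to re-enumerate in accessory mode. Per the spec, accessory-mode
    # features should only be used after this step; the paper found tested devices
    # accepted AOAP messages even when it never happened.
    dev.ctrl_transfer(0x40, ACCESSORY_START, 0, 0, None)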

The remaining ChoiceJacking technique exploits a race condition in the Android input dispatcher by flooding it with a specially crafted sequence of input events. The dispatcher puts each event into a queue and processes them one by one. The dispatcher waits for all previous input events to be fully processed before acting on a new one.

“This means that a single process that performs overly complex logic in its key event handler will delay event dispatching for all other processes or global event handlers,” the researchers explained.

They went on to note, “A malicious charger can exploit this by starting as a USB peripheral and flooding the event queue with a specially crafted sequence of key events. It then switches its USB interface to act as a USB host while the victim device is still busy dispatching the attacker’s events. These events therefore accept user prompts for confirming the data connection to the malicious charger.”

The Usenix paper provides the following matrix showing which devices tested in the research are vulnerable to which attacks.

The susceptibility of tested devices to all three ChoiceJacking attack techniques. Credit: Draschbacher et al.

User convenience over security

In an email, the researchers said that the fixes provided by Apple and Google successfully blunt ChoiceJacking attacks in iPhones, iPads, and Pixel devices. Many Android devices made by other manufacturers, however, remain vulnerable because they have yet to update their devices to Android 15. Other Android devices—most notably those from Samsung running the One UI 7 software interface—don’t implement the new authentication requirement, even when running on Android 15. The omission leaves these models vulnerable to ChoiceJacking. In an email, principal paper author Florian Draschbacher wrote:

The attack can therefore still be exploited on many devices, even though we informed the manufacturers about a year ago and they acknowledged the problem. The reason for this slow reaction is probably that ChoiceJacking does not simply exploit a programming error. Rather, the problem is more deeply rooted in the USB trust model of mobile operating systems. Changes here have a negative impact on the user experience, which is why manufacturers are hesitant. [It] means for enabling USB-based file access, the user doesn’t need to simply tap YES on a dialog but additionally needs to present their unlock PIN/fingerprint/face. This inevitably slows down the process.

The biggest threat posed by ChoiceJacking is to Android devices that have been configured to enable USB debugging. Developers often turn on this option so they can troubleshoot problems with their apps, but many non-developers enable it so they can install apps from their computer, root their devices so they can install a different OS, transfer data between devices, and recover bricked phones. Turning it on requires a user to flip a switch in Settings > System > Developer options.

If a phone has USB Debugging turned on, ChoiceJacking can gain shell access through the Android Debug Bridge. From there, an attacker can install apps, access the file system, and execute malicious binary files. The level of access through the Android Debug Bridge is much higher than that through the Picture Transfer Protocol and Media Transfer Protocol, which only allow read and write access to files stored on the device.
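For a sense of what that access level means in practice, these are a handful of stock adb commands (standard Android developer tooling, with placeholder file names) that become available once a connected host has been authorized for USB debugging:

    adb devices                       # list devices that have authorized this host
    adb shell                         # open an interactive shell on the phone
    adb pull /sdcard/DCIM ./copied    # copy user files off the device
    adb install some-app.apk          # side-load an arbitrary app
    adb reboot bootloader             # reboot into the bootloader

None of this requires exploiting a bug; it is exactly what the debugging interface is designed to allow, which is why leaving it enabled raises the stakes of a ChoiceJacking-style attack.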

The vulnerabilities are tracked as:

    • CVE-2025-24193 (Apple)
    • CVE-2024-43085 (Google)
    • CVE-2024-20900 (Samsung)
    • CVE-2024-54096 (Huawei)

A Google spokesperson confirmed that the weaknesses were patched in Android 15 but didn’t speak to the installed base of Android devices from other manufacturers, which either don’t support the new OS or don’t implement the new authentication requirement it makes possible. Apple declined to comment for this post.

Word that juice-jacking-style attacks are once again possible on some Android devices and out-of-date iPhones is likely to breathe new life into the constant warnings from federal authorities, tech pundits, news outlets, and local and state government agencies that phone users should steer clear of public charging stations. Special-purpose cords that disconnect data access remain a viable mitigation, but the researchers noted that “data blockers also interfere with modern power negotiation schemes, thereby degrading charge speed.”

As I reported in 2023, these warnings are mostly scaremongering, and the advent of ChoiceJacking does little to change that, given that there are no documented cases of such attacks in the wild. That said, people using Android devices that don’t support Google’s new authentication requirement may want to refrain from public charging.

Photo of Dan Goodin

Dan Goodin is Senior Security Editor at Ars Technica, where he oversees coverage of malware, computer espionage, botnets, hardware hacking, encryption, and passwords. In his spare time, he enjoys gardening, cooking, and following the independent music scene. Dan is based in San Francisco. Follow him here on Mastodon and here on Bluesky. Contact him on Signal at DanArs.82.

iOS and Android juice jacking defenses have been trivial to bypass for years Read More »

in-the-age-of-ai,-we-must-protect-human-creativity-as-a-natural-resource

In the age of AI, we must protect human creativity as a natural resource


Op-ed: As AI outputs flood the Internet, diverse human perspectives are our most valuable resource.

Ironically, our present AI age has shone a bright spotlight on the immense value of human creativity as breakthroughs in technology threaten to undermine it. As tech giants rush to build newer AI models, their web crawlers vacuum up creative content, and those same models spew floods of synthetic media, risking drowning out the human creative spark in an ocean of pablum.

Given this trajectory, AI-generated content may soon exceed the entire corpus of historical human creative works, making the preservation of the human creative ecosystem not just an ethical concern but an urgent imperative. The alternative is nothing less than a gradual homogenization of our cultural landscape, where machine learning flattens the richness of human expression into a mediocre statistical average.

A limited resource

By ingesting billions of creations, chatbots learn to talk, and image synthesizers learn to draw. Along the way, the AI companies behind them treat our shared culture like an inexhaustible resource to be strip-mined, with little thought for the consequences.

But human creativity isn’t the product of an industrial process; it’s inherently throttled precisely because we are finite biological beings who draw inspiration from real lived experiences while balancing creativity with the necessities of life—sleep, emotional recovery, and limited lifespans. Creativity comes from making connections, and it takes energy, time, and insight for those connections to be meaningful. Until recently, a human brain was a prerequisite for making those kinds of connections, and there’s a reason why that is valuable.

Every human brain isn’t just a store of data—it’s a knowledge engine that thinks in a unique way, creating novel combinations of ideas. Instead of having one “connection machine” (an AI model) duplicated a million times, we have roughly 8 billion neural networks, each with a unique perspective. Relying on the cognitive diversity of human thought helps us escape the monolithic thinking that may emerge if everyone were to draw from the same AI-generated sources.

Today, the AI industry’s business models unintentionally echo the ways in which early industrialists approached forests and fisheries—as free inputs to exploit without considering ecological limits.

Just as pollution from early factories unexpectedly damaged the environment, AI systems risk polluting the digital environment by flooding the Internet with synthetic content. Like a forest that needs careful management to thrive or a fishery vulnerable to collapse from overexploitation, the creative ecosystem can be degraded even if the potential for imagination remains.

Depleting our creative diversity may become one of the hidden costs of AI, but that diversity is worth preserving. If we let AI systems deplete or pollute the human outputs they depend on, what happens to AI models—and ultimately to human society—over the long term?

AI’s creative debt

Every AI chatbot or image generator exists only because of human works, and many traditional artists argue strongly against current AI training approaches, labeling them plagiarism. Tech companies tend to disagree, although their positions vary. For example, in 2023, imaging giant Adobe took an unusual step by training its Firefly AI models solely on licensed stock photos and public domain works, demonstrating that alternative approaches are possible.

Adobe’s licensing model offers a contrast to companies like OpenAI, which rely heavily on scraping vast amounts of Internet content without always distinguishing between licensed and unlicensed works.

Photo of a mining dumptruck and water tank in an open pit copper mine.

OpenAI has argued that this type of scraping constitutes “fair use” and effectively claims that competitive AI models at current performance levels cannot be developed without relying on unlicensed training data, despite Adobe’s alternative approach.

The “fair use” argument often hinges on the legal concept of “transformative use,” the idea that using works for a fundamentally different purpose from creative expression—such as identifying patterns for AI—does not violate copyright. Generative AI proponents often argue that their approach mirrors how human artists learn from the world around them.

Meanwhile, artists are expressing growing concern about losing their livelihoods as corporations turn to cheap, instantaneously generated AI content. They also call for clear boundaries and consent-driven models rather than allowing developers to extract value from their creations without acknowledgment or remuneration.

Copyright as crop rotation

This tension between artists and AI reveals a deeper ecological perspective on creativity itself. Copyright’s time-limited nature was designed as a form of resource management, like crop rotation or regulated fishing seasons that allow for regeneration. Copyright expiration isn’t a bug; its designers hoped it would ensure a steady replenishment of the public domain, feeding the ecosystem from which future creativity springs.

On the other hand, purely AI-generated outputs cannot be copyrighted in the US, potentially brewing an unprecedented explosion in public domain content, although it’s content that contains smoothed-over imitations of human perspectives.

Treating human-generated content solely as raw material for AI training disrupts this ecological balance between “artist as consumer of creative ideas” and “artist as producer.” Repeated legislative extensions of copyright terms have already significantly delayed the replenishment cycle, keeping works out of the public domain for much longer than originally envisioned. Now, AI’s wholesale extraction approach further threatens this delicate balance.

The resource under strain

Our creative ecosystem is already showing measurable strain from AI’s impact, from tangible present-day infrastructure burdens to concerning future possibilities.

Aggressive AI crawlers already effectively function as denial-of-service attacks on certain sites, with Cloudflare documenting GPTBot’s immediate impact on traffic patterns. Wikimedia’s experience provides clear evidence of current costs: AI crawlers caused a documented 50 percent bandwidth surge, forcing the nonprofit to divert limited resources to defensive measures rather than to its core mission of knowledge sharing. As Wikimedia says, “Our content is free, our infrastructure is not.” Many of these crawlers demonstrably ignore established technical boundaries like robots.txt files.
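For reference, the technical boundary being ignored is typically nothing more than a short plain-text file at a site’s root. A robots.txt along the lines of the example below (the crawler names match those the vendors document publicly) is about all the standard itself offers, and compliance is entirely voluntary:

    # robots.txt: a request, not an enforcement mechanism
    User-agent: GPTBot
    Disallow: /

    User-agent: CCBot
    Disallow: /

    User-agent: *
    Allow: /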

Beyond infrastructure strain, our information environment also shows signs of degradation. Google has publicly acknowledged rising volumes of “spammy, low-quality,” often auto-generated content appearing in search results. A Wired investigation found concrete examples of AI-generated plagiarism sometimes outranking original reporting in search results. This kind of digital pollution led Ross Anderson of Cambridge University to compare it to filling oceans with plastic—it’s a contamination of our shared information spaces.

Looking to the future, more risks may emerge. Ted Chiang’s comparison of LLMs to lossy JPEGs offers a framework for understanding potential problems, as each AI generation summarizes web information into an increasingly “blurry” facsimile of human knowledge. The logical extension of this process—what some researchers term “model collapse“—presents a risk of degradation in our collective knowledge ecosystem if models are trained indiscriminately on their own outputs. (However, this differs from carefully designed synthetic data that can actually improve model efficiency.)

This downward spiral of AI pollution may soon resemble a classic “tragedy of the commons,” in which organizations act from self-interest at the expense of shared resources. If AI developers continue extracting data without limits or meaningful contributions, the shared resource of human creativity could eventually degrade for everyone.

Protecting the human spark

While AI models that simulate creativity in writing, coding, images, audio, or video can achieve remarkable imitations of human works, this sophisticated mimicry currently lacks the full depth of the human experience.

For example, AI models lack a body that endures the pain and travails of human life. They don’t grow over the course of a human lifespan in real time. When an AI-generated output happens to connect with us emotionally, it often does so by imitating patterns learned from a human artist who has actually lived that pain or joy.

A photo of a young woman painter in her art studio.

Even if future AI systems develop more sophisticated simulations of emotional states or embodied experiences, they would still fundamentally differ from human creativity, which emerges organically from lived biological experience, cultural context, and social interaction.

That’s because the world constantly changes. New types of human experience emerge. If an ethically trained AI model is to remain useful, researchers must train it on recent human experiences, such as viral trends, evolving slang, and cultural shifts.

Current AI solutions, like retrieval-augmented generation (RAG), address this challenge somewhat by retrieving up-to-date, external information to supplement their static training data. Yet even RAG methods depend heavily on validated, high-quality human-generated content—the very kind of data at risk if our digital environment becomes overwhelmed with low-quality AI-produced output.
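As a rough illustration of that dependency, a retrieval-augmented pipeline in its simplest form looks something like the sketch below. The retriever and the model call are toy stand-ins rather than any real API, but the structure makes the point: the quality of the final answer is bounded by the quality of the human-written documents in the corpus.

    # Minimal RAG-shaped sketch. search_corpus and generate are toy stand-ins,
    # not real library calls.
    def search_corpus(corpus, query, k=3):
        # Toy retriever: rank documents by how many query words they share.
        words = set(query.lower().split())
        ranked = sorted(corpus, key=lambda doc: len(words & set(doc.lower().split())), reverse=True)
        return ranked[:k]

    def generate(prompt):
        # Stand-in for a language model call; it just echoes what it was given.
        return "[model would answer based on]\n" + prompt

    def answer(question, corpus):
        # The model's static training data is supplemented with retrieved,
        # up-to-date, human-written text, the very resource at issue here.
        context = "\n".join(search_corpus(corpus, question))
        return generate("Context:\n" + context + "\n\nQuestion: " + question)

    corpus = [
        "A recent article explaining a new slang term and how it is used.",
        "An older document about an unrelated topic.",
    ]
    print(answer("What does the new slang term mean?", corpus))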

This need for high-quality, human-generated data is a major reason why companies like OpenAI have pursued media deals (including a deal signed with Ars Technica parent Condé Nast last August). Yet paradoxically, the same models fed on valuable human data often produce the low-quality spam and slop that floods public areas of the Internet, degrading the very ecosystem they rely on.

AI as creative support

When used carelessly or excessively, generative AI is a threat to the creative ecosystem, but we can’t wholly discount the tech as a tool in a human creative’s arsenal. The history of art is full of technological changes (new pigments, brushes, typewriters, word processors) that transform the nature of artistic production while augmenting human creativity.

Bear with me because there’s a great deal of nuance here that is easy to miss among today’s more impassioned reactions to people using AI as a blunt instrument for creating mediocrity.

While many artists rightfully worry about AI’s extractive tendencies, research published in Harvard Business Review indicates that AI tools can potentially amplify rather than merely extract creative capacity, suggesting that a symbiotic relationship is possible under the right conditions.

Inherent in this argument is the idea that the responsible use of AI is reflected in the skill of the user. You can use a paintbrush to paint a wall or paint the Mona Lisa. Similarly, generative AI can mindlessly fill a canvas with slop, or a human can use it to express their own ideas.

Machine learning tools (such as those in Adobe Photoshop) already help human creatives prototype concepts faster, iterate on variations they wouldn’t have considered, or handle some repetitive production tasks like object removal or audio transcription, freeing humans to focus on conceptual direction and emotional resonance.

These potential positives, however, don’t negate the need for responsible stewardship and respecting human creativity as a precious resource.

Cultivating the future

So what might a sustainable ecosystem for human creativity actually involve?

Legal and economic approaches will likely be key. Governments could legislate that AI training must be opt-in, or at the very least, provide a collective opt-out registry (as the EU’s “AI Act” does).

Other potential mechanisms include robust licensing or royalty systems, such as creating a royalty clearinghouse (like the music industry’s BMI or ASCAP) for efficient licensing and fair compensation. Those fees could help compensate human creatives and encourage them to keep creating well into the future.

Deeper shifts may involve cultural values and governance. Consider models like Japan’s “Living National Treasures,” where the government funds artisans to preserve vital skills and support their work. Could we establish programs that similarly support human creators while also designating certain works or practices as “creative reserves,” funding the further creation of certain creative works even if the economic market for them dries up?

Or a more radical shift might involve an “AI commons”—legally declaring that any AI model trained on publicly scraped data should be owned collectively as a shared public domain, ensuring that its benefits flow back to society and don’t just enrich corporations.

Photo of family Harvesting Organic Crops On Farm

Meanwhile, Internet platforms have already been experimenting with technical defenses against industrial-scale AI demands. Examples include proof-of-work challenges, slowdown “tarpits” (e.g., Nepenthes), shared crawler blocklists (“ai.robots.txt“), commercial tools (Cloudflare’s AI Labyrinth), and Wikimedia’s “WE5: Responsible Use of Infrastructure” initiative.

These solutions aren’t perfect, and implementing any of them would require overcoming significant practical hurdles. Strict regulations might slow beneficial AI development; opt-out systems burden creators, while opt-in models can be complex to track. Meanwhile, tech defenses often invite arms races. Finding a sustainable, equitable balance remains the core challenge. The issue won’t be solved in a day.

Invest in people

While navigating these complex systemic challenges will take time and collective effort, there is a surprisingly direct strategy that organizations can adopt now: investing in people. Don’t sacrifice human connection and insight to save money with mediocre AI outputs.

Organizations that cultivate unique human perspectives and integrate them with thoughtful AI augmentation will likely outperform those that pursue cost-cutting through wholesale creative automation. Investing in people acknowledges that while AI can generate content at scale, the distinctiveness of human insight, experience, and connection remains priceless.

Photo of Benj Edwards

Benj Edwards is Ars Technica’s Senior AI Reporter and founder of the site’s dedicated AI beat in 2022. He’s also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.

In the age of AI, we must protect human creativity as a natural resource Read More »

review:-ryzen-ai-cpu-makes-this-the-fastest-the-framework-laptop-13-has-ever-been

Review: Ryzen AI CPU makes this the fastest the Framework Laptop 13 has ever been


With great power comes great responsibility and subpar battery life.

The latest Framework Laptop 13, which asks you to take the good with the bad. Credit: Andrew Cunningham

The latest Framework Laptop 13, which asks you to take the good with the bad. Credit: Andrew Cunningham

At this point, the Framework Laptop 13 is a familiar face, an old friend. We have reviewed this laptop five other times, and in that time, the idea of a repairable and upgradeable laptop has gone from a “sounds great if they can pull it off” idea to one that’s become pretty reliable and predictable. And nearly four years out from the original version—which shipped with an 11th-generation Intel Core processor—we’re at the point where an upgrade will get you significant boosts to CPU and GPU performance, plus some other things.

We’re looking at the Ryzen AI 300 version of the Framework Laptop today, currently available for preorder and shipping in Q2 for people who buy one now. The laptop starts at $1,099 for a pre-built version and $899 for a RAM-less, SSD-less, Windows-less DIY version, and we’ve tested the Ryzen AI 9 HX 370 version that starts at $1,659 before you add RAM, an SSD, or an OS.

This board is a direct upgrade to Framework’s Ryzen 7040-series board from mid-2023, with most of the same performance benefits we saw last year when we first took a look at the Ryzen AI 300 series. It’s also, if this matters to you, the first Framework Laptop to meet Microsoft’s requirements for its Copilot+ PC initiative, giving users access to some extra locally processed AI features (including but not limited to Recall) with the promise of more to come.

For this upgrade, Ryzen AI giveth, and Ryzen AI taketh away. This is the fastest the Framework Laptop 13 has ever been (at least, if you spring for the Ryzen AI 9 HX 370 chip that our review unit shipped with). If you’re looking to do some light gaming (or non-Nvidia GPU-accelerated computing), the Radeon 890M GPU is about as good as it gets. But you’ll pay for it in battery life—never a particularly strong point for Framework, and less so here than in most of the Intel versions.

What’s new, Framework?

This Framework update brings the return of colorful translucent accessories, parts you can also add to an older Framework Laptop if you want. Credit: Andrew Cunningham

We’re going to focus on what makes this particular Framework Laptop 13 different from the past iterations. We talk more about the build process and the internals in our review of the 12th-generation Intel Core version, and we ran lots of battery tests with the new screen in our review of the Intel Core Ultra version. We also have coverage of the original Ryzen version of the laptop, with the Ryzen 7 7840U and Radeon 780M GPU installed.

Per usual, every internal refresh of the Framework Laptop 13 comes with another slate of external parts. Functionally, there’s not a ton of exciting stuff this time around—certainly nothing as interesting as the higher-resolution 120 Hz screen option we got with last year’s Intel Meteor Lake update—but there’s a handful of things worth paying attention to.

Functionally, Framework has slightly improved the keyboard, with “a new key structure” on the spacebar and shift keys that “reduce buzzing when your speakers are cranked up.” I can’t really discern a difference in the feel of the keyboard, so this isn’t a part I’d run out to add to my own Framework Laptop, but it’s a fringe benefit if you’re buying an all-new laptop or replacing your keyboard for some other reason.

Keyboard legends have also been tweaked; pre-built Windows versions get Microsoft’s dedicated (and, within limits, customizable) Copilot key, while DIY editions come with a Framework logo on the Windows/Super key (instead of the word “super”) and no Copilot key.

Cosmetically, Framework is keeping the dream of the late ’90s alive with translucent plastic parts, namely the bezel around the display and the USB-C Expansion Modules. I’ll never say no to additional customization options, though I still think that “silver body/lid with colorful bezel/ports” gives the laptop a rougher, unfinished-looking vibe.

Like the other Ryzen Framework Laptops (both 13 and 16), not all of the Ryzen AI board’s four USB-C ports support all the same capabilities, so you’ll want to arrange your ports carefully.

Framework’s recommendations for how to configure the Ryzen AI laptop’s expansion modules. Credit: Framework

Framework publishes a graphic to show you which ports do what; if you’re looking at the laptop from the front, ports 1 and 3 are on the back, and ports 2 and 4 are toward the front. Generally, ports 1 and 3 are the “better” ones, supporting full USB4 speeds instead of USB 3.2 and DisplayPort 2.0 instead of 1.4. But USB-A modules should go in ports 2 or 4 because they’ll consume extra power in bays 1 and 3. All four do support display output, though, which isn’t the case for the Ryzen 7040 Framework board, and all four continue to support USB-C charging.

The situation has improved from the 7040 version of the Framework board, where not all of the ports could do any kind of display output. But it still somewhat complicates the laptop’s customizability story relative to the Intel versions, where any expansion card can go into any port.

I will also say that this iteration of the Framework laptop hasn’t been perfectly stable for me. The problems are intermittent but persistent, despite using the latest BIOS version (3.03 as of this writing) and driver package available from Framework. I had a couple of total-system freezes/crashes, occasional problems waking from sleep, and sporadic rendering glitches in Microsoft Edge. These weren’t problems I’ve had with the other Ryzen AI laptops I’ve used so far or with the Ryzen 7040 version of the Framework 13. They also persisted across two separate clean installs of Windows.

It’s possible/probable that some combination of firmware and driver updates can iron out these problems, and they generally didn’t prevent me from using the laptop the way I wanted to use it, but I thought it was worth mentioning since my experience with new Framework boards has usually been a bit better than this.

Internals and performance

“Ryzen AI” is AMD’s most recent branding update for its high-end laptop chips, but you don’t actually need to care about AI to appreciate the solid CPU and GPU speed upgrades compared to the last-generation Ryzen Framework or older Intel versions of the laptop.

Our Framework Laptop board uses the fastest processor offering: a Ryzen AI 9 HX 370 with four of AMD’s Zen 5 CPU cores, eight of the smaller, more power-efficient Zen 5c cores, and a Radeon 890M integrated GPU with 16 of AMD’s RDNA 3.5 graphics cores.

There are places where the Intel Arc graphics in the Core Ultra 7/Meteor Lake version of the Framework Laptop are still faster than what AMD can offer, though your experience may vary depending on the games or apps you’re trying to use. Generally, our benchmarks show the Arc GPU ahead by a small amount, but it’s not faster across the board.

Relative to other Ryzen AI systems, the Framework Laptop’s graphics performance also suffers somewhat because socketed DDR5 DIMMs don’t run as fast as RAM that’s been soldered to the motherboard. This is one of the trade-offs you’re probably OK with making if you’re looking at a Framework Laptop in the first place, but it’s worth mentioning.

A few actual game benchmarks. Ones with ray-tracing features enabled tend to favor Intel’s Arc GPU, while the Radeon 890M pulls ahead in some other games.

But the new Ryzen chip’s CPU is dramatically faster at just about everything than both Meteor Lake and the older Ryzen 7 7840U in the previous Framework board. This is the fastest the Framework Laptop has ever been, and it’s not particularly close (but if you’re waffling between the Ryzen AI version, the older AMD version that Framework sells for a bit less money, or the Core Ultra 7 version, wait to see the battery life results before you spend any money). Power efficiency has also improved for heavy workloads, as demonstrated by our Handbrake video encoding tests—the Ryzen AI chip used a bit less power under heavy load and took less time to transcode our test video, so it uses quite a bit less power overall to do the same work.

Power efficiency tests under heavy load using the Handbrake transcoding tool. Test uses CPU for encoding and not hardware-accelerated GPU-assisted encoding.
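Ars doesn’t publish its exact encoding preset, so the command below is only a representative example of the kind of CPU-bound HandBrakeCLI run used for a test like this. Note the software x265 encoder rather than a hardware-accelerated one; the file names and quality value are placeholders.

    HandBrakeCLI --input source-video.mkv --output transcoded.mp4 --encoder x265 --quality 22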

We didn’t run specific performance tests on the Ryzen AI NPU, but it’s worth noting that this is also Framework’s first laptop with a neural processing unit (NPU) fast enough to support the full range of Microsoft’s Copilot+ PC features—this was one of the systems I used to test Microsoft’s near-final version of Windows Recall, for example. Intel’s other Core Ultra 100 chips, all 200-series Core Ultra chips other than the 200V series (codenamed Lunar Lake), and AMD’s Ryzen 7000- and 8000-series processors often include NPUs, but they don’t meet Microsoft’s performance requirements.

The Ryzen AI chips are also the only Copilot+ compatible processors on the market that Framework could have used while maintaining the Laptop’s current level of upgradeability. Qualcomm’s Snapdragon X Elite and Plus chips don’t support external RAM—at least, Qualcomm only lists support for soldered-down LPDDR5X in its product sheets—and Intel’s Core Ultra 200V processors use RAM integrated into the processor package itself. So if any of those features appeal to you, this is the only Framework Laptop you can buy to take advantage of them.

Battery and power

Battery tests. The Ryzen AI 300 doesn’t do great, though it’s similar to the last-gen Ryzen Framework.

When paired with the higher-resolution screen option and Framework’s 61 WHr battery, the Ryzen AI version of the laptop lasted around 8.5 hours in a PCMark Modern Office battery life test with the screen brightness set to a static 200 nits. This is a fair bit lower than the Intel Core Ultra version of the board, and it’s even worse when compared to what a MacBook Air or a more typical PC laptop will give you. But it’s holding roughly even with the older Ryzen version of the Framework board despite being much faster.

You can improve this situation somewhat by opting for the cheaper, lower-resolution screen; we didn’t test it with the Ryzen AI board, and Framework won’t sell you the lower-resolution screen with the higher-end chip. But for upgraders using the older panel, the higher-res screen reduced battery life by between 5 and 15 percent in past testing of older Framework Laptops. The slower Ryzen AI 5 and Ryzen AI 7 versions will also likely last a little longer, though Framework usually only sends us the highest-end versions of its boards to test.

A routine update

This combo screwdriver-and-spudger is still the only tool you need to take a Framework Laptop apart. Credit: Andrew Cunningham

It’s weird that my two favorite laptops right now are probably Apple’s MacBook Air and the Framework Laptop 13, but that’s where I am. They represent opposite visions of computing, each of which appeals to a different part of my brain: The MacBook Air is the personal computer at its most appliance-like, the thing you buy (or recommend) if you just don’t want to think about your computer that much. Framework embraces a more traditionally PC-like approach, favoring open standards and interoperable parts; the result is more complicated and chaotic but also more flexible. It’s the thing you buy when you like thinking about your computer.

Framework Laptop buyers continue to pay a price for getting a more repairable and modular laptop. Battery life remains OK at best, and Framework doesn’t seem to have substantially sped up its firmware or driver releases since we talked with them about it last summer. You’ll need to be comfortable taking things apart, and you’ll need to make sure you put the right expansion modules in the right bays. And you may end up paying more than you would to get the same specs from a different laptop manufacturer.

But what you get in return still feels kind of magical, and all the more so because Framework has now been shipping product for four years. The Ryzen AI version of the laptop is probably the one I’d recommend if you were buying a new one, and it’s also a huge leap forward for anyone who bought into the first-generation Framework Laptop a few years ago and is ready for an upgrade. It’s by far the fastest CPU (and, depending on the app, the fastest or second-fastest GPU) Framework has shipped in the Laptop 13. And it’s nice to at least have the option of using Copilot+ features, even if you’re not actually interested in the ones Microsoft is currently offering.

If none of the other Framework Laptops have interested you yet, this one probably won’t, either. But it’s yet another improvement in what has become a steady, consistent sequence of improvements. Mediocre battery life is hard to excuse in a laptop, but if that’s not what’s most important to you, Framework is still offering something laudable and unique.

The good

  • Framework still gets all of the basics right—a matte 3:2 LCD that’s pleasant to look at, a nice-feeling keyboard and trackpad, and a design
  • Fastest CPU ever in the Framework Laptop 13, and the fastest or second-fastest integrated GPU
  • First Framework Laptop to support Copilot+ features in Windows, if those appeal to you at all
  • Fun translucent customization options
  • Modular, upgradeable, and repairable—more so than with most laptops, you’re buying a laptop that can change along with your needs and which will be easy to refurbish or hand down to someone else when you’re ready to replace it
  • Official support for both Windows and Linux

The bad

  • Occasional glitchiness that may or may not be fixed with future firmware or driver updates
  • Some expansion modules are slower or have higher power draw if you put them in the wrong place
  • Costs more than similarly specced laptops from other OEMs
  • Still lacks certain display features some users might require or prefer—in particular, there are no OLED, touchscreen, or wide-color-gamut options

The ugly

  • Battery life remains an enduring weak point.

Photo of Andrew Cunningham

Andrew is a Senior Technology Reporter at Ars Technica, with a focus on consumer tech including computer hardware and in-depth reviews of operating systems like Windows and macOS. Andrew lives in Philadelphia and co-hosts a weekly book podcast called Overdue.

Review: Ryzen AI CPU makes this the fastest the Framework Laptop 13 has ever been Read More »