The Wheel of Time is back for season three, and so are our weekly recaps

Andrew Cunningham and Lee Hutchinson have spent decades of their lives with Robert Jordan and Brandon Sanderson’s Wheel of Time books, and they previously brought that knowledge to bear as they recapped each first season episode and second season episode of Amazon’s WoT TV series. Now we’re back in the saddle for season three—along with insights, jokes, and the occasional wild theory.

These recaps won’t cover every element of every episode, but they will contain major spoilers for the show and the book series. We’ll do our best to not spoil major future events from the books, but there’s always the danger that something might slip out. If you want to stay completely unspoiled and haven’t read the books, these recaps aren’t for you.

New episodes of The Wheel of Time season three will be posted for Amazon Prime subscribers every Thursday. This write-up covers the entire three-episode season premiere, which was released on March 13.

Lee: Welcome back! Holy crap, has it only been 18 months since we left our broken and battered heroes standing in tableaux, with the sign of the Dragon flaming above Falme? Because it feels like it’s been about ten thousand years.

Andrew: Yeah, I’m not saying I want to return to the days when every drama on TV had 26 hour-long episodes per season, but when you’re doing one eight-episode run every year-and-a-half-to-two-years, you really feel those gaps. And maybe it’s just [waves arms vaguely at The World], but I am genuinely happy to have this show back.

This season’s premiere simply whips, balancing big action set-pieces with smaller character moments in between, and the whole production seems to be hitting a confident stride. The cast has gelled; they know what book stuff they’re choosing to adapt and what they’re going to skip. I’m sure there will still be grumbles, but the show does finally feel like it’s become its own thing.

Rosamund Pike returns as Moiraine Damodred. Credit: Courtesy of Prime/Amazon MGM Studios

Lee: Oh yeah. The first episode hits the ground running, with explosions and blood and stolen ter’angreal. And we’ve got more than one episode to talk about—the gods of production at Amazon have given us a truly gigantic three-episode premiere, with each episode lasting more than an hour. Our content cup runneth over!

Trying to straight-up recap three hours of TV isn’t going to happen in the space we have available, so we’ll probably bounce around a bit. What I wanted to talk about first was exactly what you mentioned: unlike seasons one and two, this time, the show seems to have found itself and locked right in. To me, it feels kind of like Star Trek: The Next Generation’s third season versus its first two.

Andrew: That’s a good point of comparison. I feel like a lot of TV shows fall into one of two buckets: either it starts with a great first season and gradually falls off, or it gets off to a rocky start and finds itself over time. Fewer shows get to take the second path because a “show with a rocky start” often becomes a “canceled show,” but they can be more satisfying to watch.

The one Big Overarching Plot Thing to know for book readers is that they’re basically doing book 4 (The Shadow Rising) this season, with other odds and ends tucked in. So even if it gets canceled after this, at least they will have gotten to do what I think is probably the series’ high point.

Lee: Yep, we find out in our very first episode this season that we’re going to be heading to the Aiel Waste rather than the southern city of Tear, which is a significant re-ordering of events from the books. But unlike some of the previous seasons’ changes that feel like they were forced upon the show by outside factors (COVID, actors leaving, and so on), this one feels like it serves a genuine narrative purpose. Rand is reciting the Prophecies of the Dragon to himself, and he knows he needs the “People of the Dragon” to guarantee success in Tear. While he’s not exactly sure who the “People of the Dragon” might be, it’s obvious that Rand has no army yet. Maybe the Aiel can help?

Rand is doing all of this because both the angel and the devil on Rand’s shoulders—that’s the Aes Sedai Moiraine Damodred with cute blue angel wings and the Forsaken Lanfear in fancy black leather BDSM gear—want him wielding Callandor, The Sword That is Not a Sword (as poor Mat Cauthon explains in the Old Tongue). This powerful sa’angreal is located in the heart of the Stone of Tear (it’s the sword in the stone, get it?!), and its removal from the Stone is a major prophetic sign that the Dragon has indeed come again.

Book three is dedicated to showing how all that happens—but, like you said, we’re not in book three anymore. We’re gonna eat our book 4 dessert before our book 3 broccoli!

Natasha O’Keeffe as Lanfear. Credit: Courtesy of Prime/Amazon MGM Studios

Andrew: I like book 4 a lot (and I’d include 5 and 6 here too) because I think it’s when Robert Jordan was doing his best work balancing his worldbuilding and politicking with the early books’ action-adventure stuff, and including multiple character perspectives without spreading the story so thin that it could barely move forward. Book 3 was a stepping stone to this because the first two books had mainly been Rand’s, and we spend almost no time in Rand’s head in book 3. But you can’t do that in a TV show! So they’re mixing it up. Good! I am completely OK with this.

Lee: What did you think of Queen Morgase’s flashback introduction where we see how she won the Lion Throne of Andor (flanked by a pair of giant lions that I’m pretty sure came straight from Pier One Imports)? It certainly seemed a bit… evil.

Andrew: One of the bigger swerves that the show has taken with an established book character, I think! And well before she can claim to have been under the control of a Forsaken. (The other swerves I want to keep tabs on: Moiraine actively making frenemies with Lanfear to direct Rand, and Lan being the kind of guy who would ask Rand if he “wants to talk about it” when Rand is struggling emotionally. That one broke my brain; the books would be half as long as they are if men could openly talk to literally any other men about their states of mind.)

But I am totally willing to accept that Morgase change because the alternative is chapters and chapters of people yapping about consolidating political support and daes dae’mar and on and on. Bo-ring!

But speaking of Morgase and Forsaken, we’re starting to spend a little time with all the new baddies who got released at the end of last season. How do you feel about the ones we’ve met so far? I know we were generally supportive of the fact that the show is just choosing to have fewer of them in the first place.

Lee: Hah, I loved the contrast with Book Lan, who appears to only be capable of feeling stereotypically manly feelings (like rage, shame, or the German word for when duty is heavier than a mountain, which I’m pretty sure is something like “Bergpflichtenschwerengesellschaften”). It continues to feel like all of our main characters have grown up significantly from their portrayals on the page—they have sex, they use their words effectively, and they emotionally support each other like real people do in real life. I’m very much here for that particular change.

But yes, the Forsaken. We know from season two that we’re going to be seeing fewer than in the books—I believe we’ve got eight of them to deal with, and we meet almost all of them in our three-episode opening blast. I’m very much enjoying Moghedien’s portrayal by Laia Costa, but of course Lanfear is stealing the show and chewing all the scenery. It will be fascinating to see how the show lets the others loose—we know from the books that every one of the Forsaken has a role to play (including one specific Forsaken whose existence has yet to be confirmed but who figures heavily into Rand learning more about how the One Power works), and while some of those roles can be dropped without impacting the story, several definitely cannot.

And although Elaida isn’t exactly a Forsaken, it was awesome to see Shohreh Aghdashloo bombing around the White Tower looking fabulous as hell. Chrisjen Avasarala would be proud.

The boys, communicating and using their words like grown-ups. Credit: Courtesy of Prime/Amazon MGM Studios

Andrew: Maybe I’m exaggerating but I think Shohreh Aghdashloo’s actual voice goes deeper than Hammed Animashaun’s lowered-in-post-production voice for Loial. It’s an incredible instrument.

Meeting Morgase in these early episodes means we also meet Gaebril, and the show only fakes viewers out for a few scenes before revealing what book-readers know: that he’s the Forsaken Rahvin. But I really love how these scenes play, particularly his with Elayne. After one weird, brief look, they fall into a completely convincing chummy, comfortable stepdad-stepdaughter relationship, and right after that, you find out that, oops, nope, he’s been there for like 15 minutes and has successfully One Power’d everyone into believing he’s been in their lives for decades.

It’s something that we’re mostly told-not-shown in the books, and it really sells how powerful and amoral and manipulative all these characters are. Trust is extremely hard to come by in Randland, and this is why.

Lee: I very much liked the way Gaebril’s/Rahvin’s crazy compulsion comes off, and I also like the way Nuno Lopes is playing Gaebril. He seems perhaps a little bumbling, and perhaps a little self-effacing—truly, a lovable uncle kind of guy. The kind of guy who would say “thank you” to a servant and smile at children playing. All while, you know, plotting the downfall of the kingdom. In what is becoming a refrain, it’s a fun change from the books.

And along the lines of unassuming folks, we get our first look at a Gray Man and the hella creepy mechanism by which they’re created. I can’t recall in the books if Moghedien is explicitly mentioned as being able to fashion the things, but she definitely can in the show! (And it looks uncomfortable as hell. “Never accept an agreement that involves the forcible removal of one’s soul” is an axiom I try to live by.)

Olivia Williams as Queen Morgase Trakand and Shohreh Aghdashloo as Elaida do Avriny a’Roihan. Credit: Courtesy of Prime/Amazon MGM Studios

Andrew: It’s just one of quite a few book things that these first few episodes speedrun. Mat has weird voices in his head and speaks in tongues! Egwene and Elayne pass the Accepted test! (Having spent most of an episode on Nynaeve’s Accepted test last season, the show yada-yadas this a bit, showing us just a snippet of Egwene’s Rand-related trials and none of Elayne’s test at all.) Elayne’s brothers Gawyn and Galad show up, and everyone thinks they’re very hot, and Mat kicks their asses! The Black Ajah reveals itself in explosive fashion, and Siuan can only trust Elayne and Nynaeve to try and root them out! Min is here! Elayne and Aviendha kiss, making more of the books’ homosexual subtext into actual text! But for the rest of the season, we split the party in basically three ways: Rand, Egwene, Moiraine and company head with Aviendha to the Waste, so that Rand can make allies of the Aiel. Perrin and a few companions head home to the Two Rivers and find that things are not as they left them. Nynaeve and Elayne are both dealing with White Tower intrigue. There are other threads, but I think this sets up most of what we’ll be paying attention to this season.

As we try to wind down this talk about three very busy episodes, is there anything you aren’t currently vibing with? I feel like Josha Stradowski’s Rand is getting lost in the shuffle a bit, despite this nominally being his story.

Lee: I agree about Rand—but, hey, the same de-centering of Rand happened in the books, so at least there is symmetry. I think the things I’m not vibing with are at this point just personal dislikes. The sets still feel cheap. The costumes are great, but the Great Serpent rings are still ludicrously large and impractical.

I’m overjoyed the show is unafraid to shine a spotlight on queer characters, and I’m also desperately glad that we aren’t being held hostage by Robert Jordan’s kinks—like, we haven’t seen a single Novice or Accepted get spanked, women don’t peel off their tops in private meetings to prove that they’re women, and rather than titillation or weirdly uncomfortable innuendo, these characters are just straight-up screwing. (The Amyrlin even notes that she’s not sure the Novices “will ever recover” after Gawyn and Galad come to—and all over—town.)

If I had to pick a moment that I enjoyed the most out of the premiere, it would probably be the entire first episode—which in spite of its length kept me riveted the entire time. I love the momentum, the feeling of finally getting the show that I’d always hoped we might get rather than the feeling of having to settle.

How about you? Dislikes? Loves?

Ceara Coveney as Elayne Trakand and Ayoola Smart as Aviendha, and they’re thinking about exactly what you think they’re thinking about. Credit: Courtesy of Prime/Amazon MGM Studios

Andrew: Not a ton of dislikes, I am pretty in the tank for this at this point. But I do agree that some of the prop work is weird. The Horn of Valere in particular looks less like a legendary artifact and more like a decorative pitcher from a Crate & Barrel.

There were two particular scenes/moments that I really enjoyed. Rand and Perrin and Mat just hang out, as friends, for a while in the first episode, and it’s very charming. We’re told in the books constantly that these three boys are lifelong pals, but (to the point about Unavailable Men we were talking about earlier) we almost never get to see actual evidence of this, either because they’re physically split up or because they’re so wrapped up in their own stuff that they barely want to speak to each other.

I also really liked that brief moment in the first episode where a Black Ajah Aes Sedai’s Warder dies, and she’s like, “hell yeah, this feels awesome, this is making me horny because of how evil I am.” Sometimes you don’t want shades of gray—sometimes you just need some cartoonishly unambiguous villainy.

Lee: I thought the Black Ajah getting excited over death was just the right mix of cartoonishness and actual-for-real creepiness, yeah. These people have sold their eternal souls to the Shadow, and it probably takes a certain type. (Though, as book readers know, there are some surprising Black Ajah reveals yet to be had!)

We close out our three-episode extravaganza with Mat having his famous stick fight with Zoolander-esque male models Gawyn and Galad, Liandrin and the Black Ajah setting up shop (and tying off some loose ends) in Tanchico, Perrin meeting Faile and Lord Luc in the Two Rivers, and Rand in the Aiel Waste, preparing to do—well, something important, one can be sure.

We’ll leave things here for now. Expect us back next Friday to talk about episode four, which, based on the preview trailers already showing up online, will involve a certain city in the desert, wherein deep secrets will be revealed.

Mia dovienya nesodhin soende, Andrew!

Andrew: The Wheel weaves as the Wheel wills.

What is space war-fighting? The Space Force’s top general has some thoughts.


Controlling space means “employing kinetic and non-kinetic means to affect adversary capabilities.”

Members of the Space Force render a salute during a change of command ceremony July 2, 2024, as Col. Ramsey Horn took the helm of Space Delta 9, the unit that oversees orbital warfare operations at Schriever Space Force Base, Colorado. Credit: US Space Force / Dalton Prejeant

DENVER—The US Space Force lacks the full range of space weapons China and Russia are adding to their arsenals, and military leaders say it’s time to close the gap.

Gen. Chance Saltzman, the Space Force’s chief of space operations, told reporters at the Air & Space Forces Association Warfare Symposium last week that he wants to have more options to present to national leaders if an adversary threatens the US fleet of national security satellites used for surveillance, communication, navigation, missile warning, and perhaps soon, missile defense.

In prepared remarks, Saltzman outlined in new detail why the Space Force should be able to go on the offense in an era of orbital warfare. Later, in a roundtable meeting with reporters, he briefly touched on the how.

The Space Force’s top general has discussed the concept of “space superiority” before. This is analogous to air superiority—think of how US and allied air forces dominated the skies in wartime over the last 30 years in places like Iraq, the Balkans, and Afghanistan.

In order to achieve space superiority, US forces must first control the space domain by “employing kinetic and non-kinetic means to affect adversary capabilities through disruption, degradation, and even destruction, if necessary,” Saltzman said.

Kinetic? Imagine a missile or some other projectile smashing into an enemy satellite. Non-kinetic? This category involves jamming, cyberattacks, and directed-energy weapons, like lasers or microwave signals, that could disable spacecraft in orbit.

“It includes things like orbital warfare and electromagnetic warfare,” Saltzman said. These capabilities could be used offensively or defensively. In December, Ars reported on the military’s growing willingness to talk publicly about offensive space weapons, something US officials long considered taboo for fear of sparking a cosmic arms race.

Officials took this a step further at last week’s warfare symposium in Colorado. Saltzman said China and Russia, which military leaders consider America’s foremost strategic competitors, are moving ahead of the United States with technologies and techniques to attack satellites in orbit.

This new ocean

For the first time in more than a century, warfare is entering a new physical realm. By one popular measure, the era of air warfare began in 1911, when an Italian pilot threw bombs out of his airplane over Libya during the Italo-Turkish War. Some historians might trace airborne warfare to earlier conflicts, when reconnaissance balloons offered eagle-eyed views of battlefields and troop movements. Land and sea combat began in ancient times.

“None of us were alive when the other domains started being contested,” Saltzman said. “It was just natural. It was just a part of the way things work.”

Five years since it became a new military service, the Space Force is in an early stage of defining what orbital warfare actually means. First, military leaders had to stop treating space as a benign domain, where the only threats came from the harsh environment itself rather than from adversaries.

Artist’s illustration of a satellite’s destruction in space. Credit: Aerospace Corporation

“That shift from benign environment to a war-fighting domain, that was pretty abrupt,” Saltzman said. “We had to mature language. We had to understand what was the right way to talk about that progression. So as a Space Force dedicated to it, we’ve been progressing our vocabulary. We’ve been saying, ‘This is what we want to focus on.'”

“We realized, you know what, defending is one thing, but look at this architecture (from China). They’re going to hold our forces at risk. Who’s responsible for that? And clearly the answer is the Space Force,” Saltzman said. “We say, ‘OK, we’ve got to start to solve for that problem.'”

“Well, how do militaries talk about that? We talk about conducting operations, and that includes offense and defense,” he continued. “So it’s more of a maturation of the role and the responsibilities that a new service has, just developing the vocabulary, developing the doctrine, operational concepts, and now the equipment and the training. It’s just part of the process.”

Of course, this will all cost money. Congress approved a $29 billion budget for the Space Force in 2024, about $4 billion more than NASA received but just 3.5 percent of the Pentagon’s overall budget. Frank Kendall, secretary of the Air Force under President Biden, said last year that the Space Force’s budget is “going to need to double or triple over time” to fund everything the military needs to do in space.

The six types of space weapons

Saltzman said the Space Force sorts adversarial space weapons into six categories—three that are space-based and three that are ground-based.

“You have directed-energy, like lasers, you have RF (radio frequency) jamming capabilities, and you have kinetic, something that you’re trying to destroy physically,” Saltzman said. Each of these three types of weapons can be based on the ground or in space, which yields Saltzman’s six categories.

“We’re seeing in our adversary developmental capabilities, they’re pursuing all of those,” Saltzman said. “We’re not pursuing all of those yet.”

But Saltzman argued that maybe the United States should. “There are good reasons to have all those categories,” he said. Targeting an enemy satellite in low-Earth orbit, just a few hundred miles above the planet, requires a different set of weapons than a satellite parked more than 22,000 miles up—roughly 36,000 kilometers—in geosynchronous orbit.

China is at the pinnacle of the US military’s threat pyramid, followed by Russia and less sophisticated regional powers like North Korea and Iran.

“Really, what’s most concerning… is the mix of weapons,” Saltzman said. “They are pursuing the broadest mix of weapons, which means they’re going to hold a vast array of targets at risk if we can’t defeat them. So our focus out of the gate has been on resiliency of our architectures. Make the targeting as hard on the adversary as possible.”

Gen. Chance Saltzman, the chief of Space Operations, speaks at the Air & Space Forces Association’s Warfare Symposium on March 3, 2025. Credit: Jud McCrehin / Air & Space Forces Association

About a decade ago, the military recognized an imperative to transition to a new generation of satellites. Where they could, Pentagon officials replaced or complemented their fleets of a few large, multibillion-dollar satellites with constellations of far more numerous, cheaper, relatively expendable satellites. If an adversary took out just one of the military’s legacy satellites, commanders would feel the pain. But the destruction of multiple smaller satellites in the newer constellations wouldn’t have any meaningful effect.

That’s one of the reasons the military’s Space Development Agency has started launching a network of small missile-tracking satellites in low-Earth orbit, and it’s why the Pentagon is so interested in using services offered by SpaceX’s Starlink broadband constellation. The Space Force is looking at ways to revamp its architecture for space-based navigation by potentially augmenting or replacing existing GPS satellites with an array of positioning platforms in different orbits.

“If you can disaggregate your missions from a few satellites to many satellites, you change the targeting calculus,” Saltzman said. “If you can make things maneuverable, then it’s harder to target, so that is the initial effort that we invested heavily on in the last few years to make us more resilient.”

Now, Saltzman said, the Space Force must go beyond reshaping how it designs its satellites and constellations to respond to potential threats. These new options include more potent offensive and defensive weapons. He declined to offer specifics, but some options are better than others.

The cost of destruction

“Generally in a military setting, you don’t say, ‘Hey, here’s all the weapons, and here’s how I’m going to use them, so get ready,'” Saltzman said. “That’s not to our advantage… but I will generally [say] that I am far more enamored by systems that deny, disrupt, [and] degrade. There’s a lot of room to leverage systems focused on those ‘D words.’ The destroy word comes at a cost in terms of debris.”

A high-speed impact between an interceptor weapon and an enemy satellite would spread thousands of pieces of shrapnel across busy orbital traffic lanes, putting US and allied spacecraft at risk.

“We may get pushed into a corner where we need to execute some of those options, but I’m really focused on weapons that deny, disrupt, degrade,” Saltzman said.

This tenet of environmental stewardship isn’t usually part of the decision-making process for commanders in other military branches, like the Air Force or the Navy. “I tell my air-breathing friends all the time: When you shoot an airplane down, it falls out of your domain,” Saltzman said.

China now operates more than 1,000 satellites, and more than a third of these are dedicated to intelligence, surveillance, and reconnaissance missions. China’s satellites can collect high-resolution spy imagery and relay the data to terrestrial forces for military targeting. The Chinese “space-enabled targeting architecture” is “pretty impressive,” Saltzman said.

This slide from a presentation by Space Systems Command illustrates a few of the counter-space weapons fielded by China and Russia. Credit: Space Systems Command

“We have a responsibility not only to defend the assets in space but to protect the war-fighter from space-enabled attack,” said Lt. Gen. Doug Schiess, a senior official at US Space Command. “What China has done with an increasing launch pace is put up intelligence, surveillance, and reconnaissance satellites that can then target our naval forces, our land forces, and our air forces at much greater distance. They’ve essentially built a huge kill chain, or kill web, if you will, to be able to target our forces much earlier.”

China’s aerospace forces have either deployed or are developing direct-ascent anti-satellite missiles, co-orbital satellites, electronic warfare platforms like mobile jammers, and directed-energy, or laser, systems, according to a Pentagon report on China’s military and security advancements. These weapons can reach targets from low-Earth orbit all the way up to geosynchronous orbit.

In his role as a member of the Joint Chiefs of Staff, Saltzman advises the White House on military matters. Like most military commanders, he said he wants to offer his superiors as many options as possible. “The more weapons mix we have, the more options we can offer the president,” Saltzman said.

The US military has already demonstrated it can shoot down a satellite with a ground-based interceptor, and the Space Force is poised to field new ground-based satellite jammers in the coming months. The former head of the Space Force, Gen. Jay Raymond, told lawmakers in 2021 that the military was developing directed-energy weapons to assure dominance in space, although he declined to discuss details in an unclassified hearing.

So the Pentagon is working on at least three of the six space weapons categories identified by Saltzman. China and Russia appear to have the edge in space-based weapons, at least for now.

In the last several years, Russia has tested a satellite that can fire a projectile capable of destroying another spacecraft in orbit, an example of a space-based kinetic weapon. Last year, news leaked that US intelligence officials are concerned about Russian plans to put a nuclear weapon in orbit. China launched a satellite named Shijian-17 in 2016 with a robotic arm that could be used to grapple and capture other satellites in space. Then, in 2021, China launched Shijian-21, which docked with a defunct Chinese satellite to take over its maneuvering and move it to a different orbit.

There’s no evidence that the US Space Force has demonstrated kinetic space-based anti-satellite weapons, and Pentagon officials have roundly criticized the possibility of Russia placing a nuclear weapon in space. But the US military might soon develop space-based interceptors as part of the Trump administration’s “Golden Dome” missile defense shield. These interceptors might also be useful in countering enemy satellites during conflict.

The Sodium Guidestar at the Air Force Research Laboratory’s Starfire Optical Range in New Mexico. Researchers with AFRL’s Directed Energy Directorate use the Guidestar laser for real-time, high-fidelity tracking and imaging of satellites too faint for conventional adaptive optical imaging systems. Credit: US Air Force

The Air Force used a robotic arm on a 2007 technology demonstration mission to capture free-flying satellites in orbit, but this was part of a controlled experiment with a spacecraft designed for robotic capture. Several companies, such as Maxar and Northrop Grumman, are developing robotic arms that could grapple “non-cooperative” satellites in orbit.

While the destruction of an enemy satellite is likely to be the Space Force’s last option in a war, military commanders still want that option on the table. Schiess said the military “continues to have gaps” in this area.

“With destroy, we need that capability, just like any other domain needs that capability, but we have to make sure that we do that with responsibility because the space domain is so important,” Schiess said.

Matching the rhetoric of today

The reasoning behind the Space Force’s fresh candor about orbital warfare should be self-evident, according to Saltzman: “Why would you have a military space service if not to execute space control?”

This new comfort speaking about space weapons comes as the Trump administration strikes a more bellicose tone in foreign policy and national security. Pete Hegseth, Trump’s secretary of defense, has pledged to reinforce a “warrior ethos” in the US armed services.

Space Force officials are doing their best to match Hegseth’s rhetoric.

“Every guardian is a war-fighter, regardless of your functional specialty, and every guardian contributes to Space Force readiness,” Saltzman said. Guardian is the military’s term for a member of the Space Force, comparable to airmen, sailors, soldiers, and marines. “Whether you built the gun, pointed the gun, or pulled the trigger, you are a part of combat capability.”

Echoing Hegseth, the senior enlisted member of the Space Force, Chief Master Sgt. John Bentivegna, said he’s focused on developing a “war-fighter ethos” within the service. This involves training on scenarios of orbital warfare, even before the Space Force fields any next-generation weapons systems.

“As Gen. Saltzman is advocating for the money and the resources to get the kit, the culture, the space-minded war-fighter, that work has been going on and continues today,” Bentivegna said.

Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.

M4 Max and M3 Ultra Mac Studio Review: A weird update, but it mostly works

Comparing the M4 Max and M3 Ultra to high-end PC desktop processors.

As for the Intel and AMD comparisons, both companies’ best high-end desktop CPUs like the Ryzen 9 9950X and Core Ultra 285K are often competitive with the M4 Max’s multi-core performance, but are dramatically less power-efficient at their default settings.

Mac Studio or M4 Pro Mac mini?

The Mac Studio (bottom) and redesigned M4 Mac mini. Credit: Andrew Cunningham

Ever since Apple beefed up the Mac mini with Pro-tier chips, there’s been a pricing overlap around and just over $2,000 where the mini and the Studio are both compelling.

A $2,000 Mac mini comes with a fully enabled M4 Pro processor (14 CPU cores, 20 GPU cores), 512GB of storage, and 48GB of RAM, with 64GB of RAM available for another $200 and 10 gigabit Ethernet available for another $100. RAM is the high-end Mac mini’s main advantage over the Studio—the $1,999 Studio comes with a slightly cut-down M4 Max (also 14 CPU cores, but 32 GPU cores), 512GB of storage, and just 36GB of RAM.

In general, if you’re spending $2,000 on a Mac desktop, I would lean toward the Studio rather than the mini. You’re getting roughly the same CPU but a much faster GPU and more ports. You get less RAM, but depending on what you’re doing, there’s a good chance that 36GB is more than enough.

The only place where the mini is clearly better than the Studio once you’re above $2,000 is memory. If you want 64GB of RAM in your Mac, you can get it in the Mac mini for $2,200. The cheapest Mac Studio with 64GB of RAM also requires a processor upgrade, bringing the total cost to $2,700. If you need memory more than you need raw performance, or if you just need something that’s as small as it can possibly be, that’s when the high-end mini can still make sense.

A lot of power—if you need it

Apple’s M4 Max Mac Studio. Credit: Andrew Cunningham

Obviously, Apple’s hermetically sealed desktop computers have some downsides compared to a gaming or workstation PC, most notably that you need to throw out and replace the whole thing any time you want to upgrade literally any component.

Better than the real thing? Spark 2 packs 39 amp sims into $300 Bluetooth speaker


Digital amp modeling goes very, very portable.

The Spark 2 from Positive Grid looks like a miniature old-school amp, but it is, essentially, a computer with some knobs and a speaker. It has Bluetooth, USB-C, and an associated smartphone app. It needs firmware updates, which can brick the device—ask me how I found this out—and it runs code on DSP chips. New guitar tones can be downloaded into the device, where they run as software rather than as analog electrical circuits in an amp or foot pedal.

In other words, the Spark 2 is the latest example of the “software-ization” of music.

Forget the old image of a studio filled with a million-dollar, 48-track mixing board from SSL or API and bursting with analog amps, vintage mics, and ginormous plate reverbs. Studios today are far more likely to be digital, where people record “in the box” (i.e., they track and mix on a computer running software like Pro Tools or Logic Pro) using digital models of classic (and expensive) amplifiers, coded by companies like NeuralDSP and IK Multimedia. These modeled amp sounds are then run through convolution software that relies on digital impulse responses captured from different speakers and speaker cabinets. They are modified with effects like chorus and distortion, which are all modeled, too. The results can be world-class, and they’re increasingly showing up on records.
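
To make the convolution step described above concrete, here is a minimal sketch of running a dry guitar track through a speaker-cabinet impulse response in Python. It is a generic illustration rather than any particular company’s code; the filenames are placeholders, and it assumes NumPy, SciPy, and the soundfile library are available.

```python
# Minimal sketch of cabinet-IR convolution. Filenames are placeholders, and the
# files are assumed to be mono with matching sample rates.
import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

dry, sr = sf.read("di_guitar.wav")   # dry (or amp-modeled) guitar signal
ir, ir_sr = sf.read("cab_ir.wav")    # impulse response captured from a real cabinet
assert sr == ir_sr, "resample one of the files so the sample rates match"

# Convolving the signal with the IR imprints the cabinet/mic/room response on it.
wet = fftconvolve(dry, ir, mode="full")

# Normalize to avoid clipping, then write out the result.
peak = np.max(np.abs(wet))
if peak > 0:
    wet = wet / peak
sf.write("guitar_through_cab.wav", wet, sr)
```

Commercial plugins and hardware modelers do the same operation in real time, typically with partitioned, low-latency convolution, but the underlying math is the same.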

Once the sounds are recorded, a mixer will often use digital plugins to replicate studio gear like tape delays, FET compressors, and reverbs (which may be completely algorithmic or may rely on impulse responses captured from real halls, studios, plates, and spring reverbs). These days, even the microphones might be digitally modeled by companies like Slate, Antelope, and Universal Audio.

This has put incredible power into the hands of home musicians; for a couple of thousand bucks, most home studios can own models of gear that would have cost more than a house 20 years ago. But one downside of this shift to software is that all the annoying quirks of computing devices have followed.

Want to rock out to the classic Marshall tones found in Universal Audio’s “Lion” amp simulator plugin? Just plug your guitar into your audio interface, connect the interface to a computer via USB, launch a DAW, instantiate the plugin on a blank track, choose the correct input, activate input monitoring so you can hear the results of your jamming, and adjust your DAW’s buffer size to something small in an attempt to prevent latency. A problem with any item on that list means “no jamming for you.”

You may be prompted to update the firmware in your audio interface, or to update your operating system, or to update your DAW—or even its plugins. Oh, and did I mention that Universal Audio uses the truly terrible iLok DRM system and that if your Wi-Fi drops for even a few minutes, the plugins will deactivate? Also, you’ll need to run a constant companion app in the background called UA Connect, which itself can be prone to problems.

Assuming everything is up to date and working, you’re still tethered to your computer by a cable, and you have to make all your settings tweaks with a mouse. After a day of working on computers, this is not quite how I want to spend my “music time.”

But the upsides of digital modeling are just too compelling to return to the old, appliance-like analog gear. For one thing, the analog stuff is expensive. The Lion amp plugin mentioned above gives you not one but several versions of a high-quality Marshall head unit—each one costing thousands of dollars—but you don’t need to lift it (they’re heavy!), mic it (annoying!), or play it at absurdly low levels because your baby is sleeping upstairs. For under a hundred bucks, you can get that sound of an overdriven Marshall turned up to 75 percent and played through several different speaker cabinet options (each of these is also expensive!) right on your machine.

Or consider the Tone King Imperial Mk II, a $2,700, Fender-style amp built in the US. It sounds great. But NeuralDSP offers a stunning digital model for a hundred bucks—and it comes with compressor, overdrive, delay, and reverb pedals, to say nothing of a tuner, a doubler, a pitch-shifter, and a ton of great presets.

So I want the digital amp modeling, but I also want—sometimes, at least—the tactile simplicity of physical knobs and well-built hardware. Or I want to jack in and play without waking up a computer, logging in, launching apps, or using a mouse and an audio interface. Or I want to take my amp models to places where finicky computers aren’t always welcome, like the stage of a club.

Thanks to hardware like the Profiler from Kemper, the Helix gear from Line6, the Cortex pedalboards from NeuralDSP, or Tonex gear from IK Multimedia, this is increasingly common.

The Spark line from Positive Grid has carved out its own niche in this world by offering well-built little amps that run Positive Grid’s digital amp and effects simulations. (If you don’t want the hardware, the company sells its modeling software for PC and Mac under the “Bias” label.)

The Spark 2 is the latest in this line, and I’ve been putting it through its paces over the last couple of months.

Let’s cut right to the conclusion: The Spark 2 is a well-designed, well-built piece of gear. For $300, you get a portable, 50-watt practice amp and Bluetooth speaker that can store eight guitar tones onboard and download thousands more using a smartphone app. Its models aren’t, to my ears, the most realistic out there, but if you want a device to jack into and jam, to play along with backing tracks or loops, or to record some creative ideas, this fits the bill.

Photo of Spark 2.

Credit: Positive Grid

Good practice

Everything about the Spark 2 feels well-built. The unit is surprisingly solid, and it comes with a carrying strap for portability. If you want to truly live the wire-free lifestyle, you can buy a battery pack for $79 that gives you several hours of juice.

For a practice amp, the Spark 2 is also well-connected. It has Bluetooth for streaming audio—but it also has a 3.5 mm aux in jack. It has decent, if somewhat boxy-sounding, speakers, and they get quite loud—but it also has two quarter-inch line out jacks. It has a guitar input jack and a headphone jack. It can use a power supply or a battery. It can connect to a computer via USB, and you can even record that way if you don’t have another audio interface.

Most of the unit’s top is taken up with chunky knobs. These let you select one of the eight onboard presets or adjust model parameters like gain, EQ, modulation, delay, and reverb. There’s also a knob for blending your guitar audio with music played through the device.

Buttons provide basic access to a tuner and a looper, though the associated app unlocks more complex options.

So about that app. It’s not necessary to use the Spark 2, but you’ll need the app if you want to download or create new tones from the many pieces of modeled gear. Options here go far beyond what’s possible with the knobs atop the physical unit.

Spark models a chamber reverb, for instance, which is basically a reflective room into which a speaker plays sound that a microphone picks up. The Spark chamber lets you adjust the volume level of the reverb signal, the reflection time of the chamber, the “dwell” time of the sound in the room, the amount of sound damping, and whether the sound will have some of its lows or highs cut off. (This is common in reverbs to avoid excessive low-end “mud” or top-end “brightness” building up in the reverberating signal.) You’ll need the app to adjust most of these options; the “reverb” control on the Spark 2 simply changes the level.
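
As a rough illustration of what those low-cut, high-cut, and level controls amount to, here is a short sketch using generic SciPy filters. This is not Positive Grid’s actual DSP; the filename and cutoff frequencies are arbitrary placeholders.

```python
# Band-limiting a reverb return so lows ("mud") and highs ("fizz") don't build up.
# Not Positive Grid's actual DSP; filename and cutoffs are placeholders.
import soundfile as sf
from scipy.signal import butter, sosfilt

wet, sr = sf.read("reverb_return.wav")   # the reverberated ("wet") signal

# Second-order Butterworth band-pass: keep roughly 200 Hz to 8 kHz of the tail.
sos = butter(2, [200, 8000], btype="bandpass", fs=sr, output="sos")
trimmed = sosfilt(sos, wet, axis=0)

# The front-panel "reverb" knob is effectively just this output level.
level = 0.3
sf.write("reverb_trimmed.wav", level * trimmed, sr)
```

In the Spark app those same ideas are simply exposed as knobs; the broader point is that every “piece of gear” here is ultimately a signal-processing routine like this one.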

There’s a fair bit of modeled gear on offer: one noise gate, six compressors, 14 drive pedals, 39 amps, 13 EQ units, six delays, and nine reverbs. Most of these have numerous options. It is not nearly as overwhelming as a package like Amplitube for PCs and Macs, but it’s still a lot of stuff.

To run it all, Positive Grid has beefed up the computational power of the Spark series. The company told me that digital signal processing power has doubled since the original Spark lineup, which allows for “smoother transitions between tones, richer effects, and an expanded memory for presets and loops.” The system runs on an M7 chip “developed specifically for expanded processing power and precise tone reproduction,” and the extra power has allowed Positive Grid to run more complex models on-device, improving their preamp and amplifier sag modeling.

Despite the DSP increase, the results here just don’t compare with the sort of scary-precise tube amp and effects simulations you can run on a computer or a far more expensive hardware modeling rig. I could never get clean and “edge of breakup” tones to sound anything other than artificial, though some of the distortion sounds were quite good. Reverbs and delays also sounded solid.

But the Spark 2 wasn’t really designed for studio-quality recording, and Positive Grid is candid about this. The models running on the Spark 2 are inspired by the company’s computer work, but they are “optimized for an all-in-one, mobile-friendly playing experience,” I was told. The Spark 2 is meant for “practice, jamming, and basic recording,” and those looking for “studio-level control and complex setups” should seek out something else.

This tracks with my experience. Compared to a regular amp, the Spark 2 is crazy portable. When testing the unit, I would haul it between rooms without a second thought, searching for a place to play that wouldn’t annoy some member of my family. (Headphones? Never!) Thanks to the optional battery, I didn’t even need to plug it in. It was a simple, fun way to get some electric guitar practice in without using a screen or a computer, and its sound could fill an entire room. Compared to the weight and hassle of moving a “real” amp, this felt easy.

About that app

I’ve been talking about the Spark 2 and its screen-free experience, but of course you do need to use the app to unlock more advanced features and download new tones onto the hardware. So how good is the software?

For modifying the gear in your presets, the app works fine. Every piece of gear has a nice picture, and you just flick up or down to get a piece of equipment into or out of the effects chain. Changing parameters is simple, with large numbers popping up on screen whenever you touch a virtual control, and you can draw from a huge library of pre-made effect chains.

The app also features plenty of music it can play through the Spark 2, including backing tracks, tabbed songs, and the “groove looper,” giving you lots of options to work on your soloing. But it’s the artificial intelligence that Positive Grid is really pitching this time around.

You are legally required to shoehorn “AI” into every product launch now, and Positive Grid put its AI tools into the app. These include Smart Jam, which tries to adapt to your playing and accompany it in real time. The company tells me that Smart Jam was “trained on a combination of musical datasets that analyze chord structures, song patterns, and rhythmic elements,” but I could never get great results from it. Because the system doesn’t know what you’re going to play in advance, there was always a herky-jerky quality as it tried to adapt its backing track to my changing performance.

I had more success with Spark AI, which is a natural language tone-shaping engine. You tell the system what you’re looking for—the solo in “Stairway to Heaven,” perhaps—and it returns several presets meant to approximate that sound. It does work, I’ll say that. The system reliably gave me tone options that were, with a little imagination, identifiable as “in the ballpark” of what I asked for.

Perhaps the main barrier here is simply that the current Spark amp models aren’t always powerful enough to truly copy the sounds you might be looking for. Spark AI is a great way to pull up a tone that’s appropriate for whatever song you might be practicing, and to do so without forcing you to build it yourself out of pieces of virtual gear. In that sense, it’s a nice practice aid.

Rock on

As it’s pitched—a practice amp and Bluetooth speaker that costs $300—Spark 2 succeeds. It’s such a well-built and designed unit that I enjoyed using it every time I played, even if the tones couldn’t match a real tube amp or even top-quality models. And the portability was more useful than expected, even when just using it around the house.

As DSP chips grow ever more powerful, I’m looking forward to where modeling can take us. For recording purposes, some of the best models will continue to run on powerful personal computers. But for those looking to jam, or to play shows, or to haul a guitar to the beach for an afternoon, hardware products running modeling software offer incredible possibilities already—and they will “spark” even more creativity in the years to come.

iPhone 16e review: The most expensive cheap iPhone yet


The iPhone 16e rethinks—and prices up—the basic iPhone.

The iPhone 16e, with a notch and an Action Button. Credit: Samuel Axon

For a long time, the cheapest iPhones were basically just iPhones that were older than the current flagship, but last week’s release of the $600 iPhone 16e marks a big change in how Apple is approaching its lineup.

Rather than a repackaging of an old iPhone, the 16e is the latest main iPhone—that is, the iPhone 16—with a bunch of stuff stripped away.

There are several potential advantages to this change. In theory, it allows Apple to support its lower-end offerings for longer with software updates, and it gives entry-level buyers access to more current technologies and features. It also simplifies the marketplace of accessories and the like.

There’s bad news, too, though: Since it replaces the much cheaper iPhone SE in Apple’s lineup, the iPhone 16e significantly raises the financial barrier to entry for iOS (the SE started at $430).

We spent a few days trying out the 16e and found that it’s a good phone—it’s just too bad it’s a little more expensive than the entry-level iPhone should ideally be. In many ways, this phone solves more problems for Apple than it does for consumers. Let’s explore why.

A beastly processor for an entry-level phone

Like the 16, the 16e has Apple’s A18 chip, the most recent in the made-for-iPhone line of Apple-designed chips. There’s only one notable difference: This variation of the A18 has just four GPU cores instead of five. That will show up in benchmarks and in a handful of 3D games, but it shouldn’t make too much of a difference for most people.

It’s a significant step up over the A15 found in the final 2022 refresh of the iPhone SE, enabling a handful of new features like AAA games and Apple Intelligence.

The A18’s inclusion is good for both Apple and the consumer; Apple gets to establish a new, higher baseline of performance when developing new features for current and future handsets, and consumers likely get many more years of software updates than they’d get on the older chip.

The key example of a feature enabled by the A18 that Apple would probably like us all to talk about the most is Apple Intelligence, a suite of features utilizing generative AI to solve some user problems or enable new capabilities across iOS. By enabling these for the cheapest iPhone, Apple is making its messaging around Apple Intelligence a lot easier; it no longer needs to put effort into clarifying that you can use X feature with this new iPhone but not that one.

We’ve written a lot about Apple Intelligence already, but here’s the gist: There are some useful features here in theory, but Apple’s models are clearly a bit behind the cutting edge, and results for things like notifications summaries or writing tools are pretty mixed. It’s fun to generate original emojis, though!

The iPhone 16e can even use Visual Intelligence, which actually is handy sometimes. On my iPhone 16 Pro Max, I can point the rear camera at an object and press the camera button a certain way to get information about it.

I wouldn’t have expected the 16e to support this, but it does, via the Action Button (which was first introduced in the iPhone 15 Pro). This is a reprogrammable button that can perform a variety of functions, albeit just one at a time. Visual Intelligence is one of the options here, which is pretty cool, even though it’s not essential.

The screen is the biggest upgrade over the SE

Also like the 16, the 16e has a 6.1-inch display. The resolution’s a bit different, though; it’s 2,532 by 1,170 pixels instead of 2,556 by 1,179. It also has a notch instead of the Dynamic Island seen in the 16. All this makes the iPhone 16e’s display seem like a very close match to the one seen in 2022’s iPhone 14—in fact, it might literally be the same display.

I really missed the Dynamic Island while using the iPhone 16e—it’s one of my favorite new features added to the iPhone in recent years, as it consolidates what was previously a mess of notification schemes in iOS. Plus, it’s nice to see things like Uber and DoorDash ETAs and sports scores at a glance.

The main problem with losing the Dynamic Island is that we’re back to the old, slightly messy mix of notification approaches, and I guess Apple has to keep supporting the old ways for a while yet. That genuinely surprises me; I would have thought Apple would want to unify notifications and activities with the Dynamic Island just like the A18 allows the standardization of other features.

This seems to indicate that the Dynamic Island is a fair bit more expensive to include than the good old camera notch flagship iPhones had been rocking since 2017’s iPhone X.

That compromise aside, the display on the iPhone 16e is ridiculously good for a phone at this price point, and it makes the old iPhone SE’s small LCD display look like it’s from another eon entirely by comparison. It gets brighter for both HDR content and sunny-day operation; the blacks are inky and deep, and the contrast and colors are outstanding.

It’s the best thing about the iPhone 16e, even if it isn’t quite as refined as the screens in Apple’s current flagships. Most people would never notice the difference between the screens in the 16e and the iPhone 16 Pro, though.

There is one other screen feature I miss from the higher-end iPhones you can buy in 2025: Those phones can drop the display all the way down to 1 nit, which is awesome for using the phone late at night in bed without disturbing a sleeping partner. Like earlier iPhones, the 16e can only get so dark.

It gets quite bright, though; Apple claims it typically reaches 800 nits in peak brightness but that it can stretch to 1200 when viewing certain HDR photos and videos. That means it gets about twice as bright as the SE did.

Connectivity is key

The iPhone 16e supports the core suite of connectivity options found in modern phones. There’s Wi-Fi 6, Bluetooth 5.3, and Apple’s usual limited implementation of NFC.

There are three new things of note here, though, and they’re good, neutral, and bad, respectively.

USB-C

Let’s start with the good. We’ve moved from Apple’s proprietary Lightning port found in older iPhones (including the final iPhone SE) toward USB-C, now a near-universal standard on mobile devices. It allows faster charging and more standardized charging cable support.

Sure, it’s a bummer to start over if you’ve spent years buying Lightning accessories, but it’s absolutely worth it in the long run. This change means that the entire iPhone line has now abandoned Lightning, so all iPhones and Android phones will have the same main port for years to come. Finally!

The finality of this shift solves a few problems for Apple: It greatly simplifies the accessory landscape and allows the company to move toward producing a smaller range of cables.

Satellite connectivity

Recent flagship iPhones have gradually added a small suite of features that utilize satellite connectivity to make life a little easier and safer.

Among those is crash detection and roadside assistance. The former will use the sensors in the phone to detect if you’ve been in a car crash and contact help, and roadside assistance allows you to text for help when you’re outside of cellular reception in the US and UK.

There are also Emergency SOS and Find My via satellite, which let you communicate with emergency responders from remote places and allow you to be found.

Along with a more general feature that allows Messages via satellite, these features can greatly expand your options if you’re somewhere remote, though they’re not as easy to use and responsive as using the regular cellular network.

Where’s MagSafe?

I don’t expect the 16e to have all the same features as the 16, which is $200 more expensive. In fact, it has more modern features than I think most of its target audience needs (more on that later). That said, there’s one notable omission that makes no sense to me at all.

The 16e does not support MagSafe, a standard for connecting accessories to the back of the device magnetically, often while allowing wireless charging via the Qi standard.

Qi wireless charging is still supported, albeit at a slow 7.5 W, but there are no magnets, meaning a lot of existing MagSafe accessories are a lot less useful with this phone, if they’re usable at all. To be fair, the SE didn’t support MagSafe either, but every new iPhone design since the iPhone 12 way back in 2020 has—and not just the premium flagships.

It’s not like the MagSafe accessory ecosystem was some bottomless well of innovation, but that magnetic alignment is handier than you might think, whether we’re talking about making sure the phone locks into place for the fastest wireless charging speeds or hanging the phone on a car dashboard to use GPS on the go.

It’s one of those things where folks coming from much older iPhones may not care because they don’t know what they’re missing, but it could be annoying in households with multiple generations of iPhones, and it just doesn’t make any sense.

Most of Apple’s choices in the 16e seem to serve the goal of unifying the whole iPhone lineup to simplify the message for consumers and make things easier for Apple to manage efficiently, but the dropping of MagSafe is bizarre.

The only explanation I can come up with is that Apple plans to drop MagSafe from future flagship iPhones, too, and move toward something new. That seems unlikely to me right now, but I guess we’ll see.

The first Apple-designed cellular modem

We’ve been seeing rumors that Apple planned to drop third-party modems from companies like Qualcomm for years. As far back as 2018, Apple was poaching Qualcomm employees in an adjacent office in San Diego. In 2020, Apple SVP Johny Srouji announced to employees that work had begun.

It sounds like development has been challenging, but the first Apple-designed modem has arrived here in the 16e of all places. Dubbed the C1, it’s… perfectly adequate. It’s about as fast or maybe just a smidge slower than what you get in the flagship phones, but almost no user would notice any difference at all.

That’s really a win for Apple, which has struggled with a tumultuous relationship with its partners here for years and which has long run into space problems in its phones in part because the third-party modems weren’t compact enough.

This change may not matter much for the consumer beyond freeing up just a tiny bit of space for a slightly larger battery, but it’s another step in Apple’s long journey to ultimately and fully control every component in the iPhone that it possibly can.

Bigger is better for batteries

There is one area where the 16e is actually superior to the 16, much less the SE: battery life. The 16e reportedly has a 3,961 mAh battery, the largest in any of the many iPhones with roughly this size screen. Apple says it offers up to 26 hours of video playback, which is the kind of number you expect to see in a much larger flagship phone.

I charged this phone three times in just under a week of use, though I wasn’t heavily hitting 5G networks, playing many 3D games, or cranking the brightness way up all the time.

That’s a bit of a bump over the 16, but it’s a massive leap over the SE, which promised a measly 15 hours of video playback. Every single phone in Apple’s lineup now has excellent battery life by any standard.
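For a rough sense of what those numbers imply, here’s a quick back-of-the-envelope calculation. The nominal cell voltage below is an assumption (around 3.85 V is typical for modern phone batteries), not something Apple publishes, so treat the result as ballpark only:

```python
# Rough battery math using the figures above. The ~3.85 V nominal cell
# voltage is an assumption (typical for phone batteries), not an Apple spec.
capacity_mah = 3961
nominal_voltage_v = 3.85
energy_wh = capacity_mah / 1000 * nominal_voltage_v   # ~15.3 Wh
video_hours = 26                                      # Apple's claimed video playback
avg_draw_w = energy_wh / video_hours                  # ~0.6 W average draw

print(f"~{energy_wh:.1f} Wh battery, ~{avg_draw_w:.2f} W average draw during video")
```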

Quality over quantity in the camera system

The 16e’s camera system leaves the SE in the dust, but it’s no match for the more robust system found in the iPhone 16. Regardless, it’s far better than you’d typically expect from a phone at this price.

Like the 16, the 16e has a 48 MP “Fusion” wide-angle rear camera. It typically doesn’t take photos at the full 48 MP (though you can, at the cost of some color detail); 24 MP is the target. The 48 MP sensor also enables a 2x zoom that is nearly indistinguishable from optical zoom.
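That near-optical 2x zoom comes down to simple sensor-crop arithmetic, assuming the 2x mode is a straight center crop of the 48 MP sensor (which matches how Apple has described similar 2x modes on other 48 MP iPhones, but isn’t spelled out in the 16e’s spec sheet):

```python
# Sensor-crop arithmetic for the 2x zoom. Assumes a straight center crop
# of the 48 MP sensor, which is an assumption rather than a published spec.
full_sensor_mp = 48
zoom_factor = 2
# A 2x zoom halves the field of view in each dimension, keeping
# 1 / zoom_factor**2 of the pixels.
cropped_mp = full_sensor_mp / zoom_factor**2
print(f"2x crop still captures ~{cropped_mp:.0f} MP")   # ~12 MP
```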

Based on both the specs and photo comparisons, the main camera sensor in the 16e appears to me to be exactly the same as the one found in the 16. We’re just missing the ultra-wide lens (which allows more zoomed-out photos, ideal for groups of people in small spaces, for example) and several extra features like advanced image stabilization, the newest Photographic Styles, and macro photography.

The iPhone 16e takes excellent photos in bright conditions. Samuel Axon

That’s a lot of missing features, sure, but it’s wild how good this camera is for this price point. Even something like the Pixel 8a can’t touch it (though to be fair, the Pixel 8a is $100 cheaper).

Video capture is a similar situation: The 16e shoots at the same resolutions and framerates as the 16, but it lacks a few specialized features like Cinematic and Action modes. There’s also a front-facing camera with the TrueDepth sensor for Face ID in that notch, and it has comparable specs to the front-facing cameras we’ve seen in a couple of years of iPhones at this point.

If you were buying a phone for the cameras, this wouldn’t be the one for you. It’s absolutely worth paying another $200 for the iPhone 16 (or even just $100 for the iPhone 15 for the ultra-wide lens for 0.5x zoom; the 15 is still available in the Apple Store) if that’s your priority.

The iPhone 16’s macro mode isn’t available here, so ultra-close-ups look fuzzy. Samuel Axon

But for the 16e’s target consumer (mostly folks with an iPhone 11 or older or an iPhone SE who just want the cheapest functional iPhone they can get), it’s almost overkill. I’m not complaining, though it is a contributing factor to the phone’s cost compared to entry-level Android phones and Apple’s old iPhone SE.

RIP small phones, once and for all

In one fell swoop, the iPhone 16e’s replacement of the iPhone SE eliminates a whole range of legacy technologies that have held on at the lower end of the iPhone lineup for years. Gone are Touch ID, the home button, LCD displays, and Lightning ports—they’re replaced by Face ID, swipe gestures, OLED, and USB-C.

Newer iPhones have had most of those things for quite some time. The latest feature was USB-C, which came in 2023’s iPhone 15. The removal of the SE from the lineup catches the bottom end of the iPhone up with the top in these respects.

That said, the SE had maintained one positive differentiator, too: It was small enough to be used one-handed by almost anyone. With the end of the SE and the release of the 16e, the one-handed iPhone is well and truly dead. Of course, most people have been clear they want big screens and batteries above almost all else, so the writing had been on the wall for a while for smaller phones.

The death of the iPhone SE ushers in a new era for the iPhone with bigger and better features—but also bigger price tags.

A more expensive cheap phone

Assessing the iPhone 16e is a challenge. It’s objectively a good phone—good enough for the vast majority of people. It has a nearly top-tier screen (though it clocks in at 60Hz, while some Android phones close to this price point manage 120Hz), a camera system that delivers on quality even if it lacks special features seen in flagships, strong connectivity, and performance far above what you’d expect at this price.

If you don’t care about extra camera features or nice-to-haves like MagSafe or the Dynamic Island, it’s easy to recommend saving a couple hundred bucks compared to the iPhone 16.

My chief criticism of the 16e has less to do with the phone itself than with Apple’s overall lineup. The iPhone SE retailed for $430, nearly half the price of the 16. By making the 16e the new bottom of the lineup, Apple has significantly raised the financial barrier to entry for iOS.

Now, it’s worth mentioning that a pretty big swath of the target market for the 16e will buy it subsidized through a carrier, so they might not pay that much up front. I always recommend buying a phone directly if you can, though, as carrier subsidization deals are usually worse for the consumer.

The 16e’s price might push more people to go for the subsidy. Plus, it’s just more phone than some people need. For example, I love a high-quality OLED display for watching movies, but I don’t think the typical iPhone SE customer was ever going to care about that.

That’s why I believe the iPhone 16e solves more problems for Apple than it does for the consumer. In multiple ways, it allows Apple to streamline production, software support, and marketing messaging. It also drives up the average price per unit across the whole iPhone line and will probably encourage some people who would have spent $430 to spend $600 instead, possibly improving revenue. All told, it’s a no-brainer for Apple.

It’s just a mixed bag for the sort of no-frills consumer who wants a minimum viable phone and who for one reason or another didn’t want to go the Android route. The iPhone 16e is definitely a good phone—I just wish there were more options for that consumer.

The good

  • Dramatically improved display compared to the iPhone SE
  • Likely stronger long-term software support than most previous entry-level iPhones
  • Good battery life and incredibly good performance for this price point
  • A high-quality camera, especially for the price

The bad

  • No ultra-wide camera
  • No MagSafe
  • No Dynamic Island

The ugly

  • Significantly raises the entry price point for buying an iPhone

Photo of Samuel Axon

Samuel Axon is a senior editor at Ars Technica. He covers Apple, software development, gaming, AI, entertainment, and mixed reality. He has been writing about gaming and technology for nearly two decades at Engadget, PC World, Mashable, Vice, Polygon, Wired, and others. He previously ran a marketing and PR agency in the gaming industry, led editorial for the TV network CBS, and worked on social media marketing strategy for Samsung Mobile at the creative agency SPCSHP. He also is an independent software and game developer for iOS, Windows, and other platforms, and he is a graduate of DePaul University, where he studied interactive media and software development.


amd-radeon-rx-9070-and-9070-xt-review:-rdna-4-fixes-a-lot-of-amd’s-problems

AMD Radeon RX 9070 and 9070 XT review: RDNA 4 fixes a lot of AMD’s problems


For $549 and $599, AMD comes close to knocking out Nvidia’s GeForce RTX 5070.

AMD’s Radeon RX 9070 and 9070 XT are its first cards based on the RDNA 4 GPU architecture. Credit: Andrew Cunningham

AMD is a company that knows a thing or two about capitalizing on a competitor’s weaknesses. The company got through its early-2010s nadir partially because its Ryzen CPUs struck just as Intel’s current manufacturing woes began to set in, first with somewhat-worse CPUs that were great value for the money and later with CPUs that were better than anything Intel could offer.

Nvidia’s untrammeled dominance of the consumer graphics card market should also be an opportunity for AMD. Nvidia’s GeForce RTX 50-series graphics cards have given buyers very little to get excited about, with an unreachably expensive high-end 5090 refresh and modest-at-best gains from 5080 and 5070-series cards that are also pretty expensive by historical standards, when you can buy them at all. Tech YouTubers—both the people making the videos and the people leaving comments underneath them—have been almost uniformly unkind to the 50 series, hinting at consumer frustrations and pent-up demand for competitive products from other companies.

Enter AMD’s Radeon RX 9070 XT and RX 9070 graphics cards. These are aimed right at the middle of the current GPU market at the intersection of high sales volume and decent profit margins. They promise good 1440p and entry-level 4K gaming performance and improved power efficiency compared to previous-generation cards, with fixes for long-time shortcomings (ray-tracing performance, video encoding, and upscaling quality) that should, in theory, make them more tempting for people looking to ditch Nvidia.


RX 9070 and 9070 XT specs and speeds

RX 9070 XT: 64 RDNA4 compute units (4,096 stream processors), 2,970 MHz boost clock, 256-bit memory bus, 650GB/s memory bandwidth, 16GB GDDR6, 304 W total board power (TBP)
RX 9070: 56 RDNA4 compute units (3,584 stream processors), 2,520 MHz boost clock, 256-bit memory bus, 650GB/s memory bandwidth, 16GB GDDR6, 220 W TBP
RX 7900 XTX: 96 RDNA3 compute units (6,144 stream processors), 2,498 MHz boost clock, 384-bit memory bus, 960GB/s memory bandwidth, 24GB GDDR6, 355 W TBP
RX 7900 XT: 84 RDNA3 compute units (5,376 stream processors), 2,400 MHz boost clock, 320-bit memory bus, 800GB/s memory bandwidth, 20GB GDDR6, 315 W TBP
RX 7900 GRE: 80 RDNA3 compute units (5,120 stream processors), 2,245 MHz boost clock, 256-bit memory bus, 576GB/s memory bandwidth, 16GB GDDR6, 260 W TBP
RX 7800 XT: 60 RDNA3 compute units (3,840 stream processors), 2,430 MHz boost clock, 256-bit memory bus, 624GB/s memory bandwidth, 16GB GDDR6, 263 W TBP

AMD’s high-level performance promise for the RDNA 4 architecture revolves around big increases in performance per compute unit (CU). An RDNA 4 CU, AMD says, is nearly twice as fast in rasterized performance as RDNA 2 (that is, rendering without ray-tracing effects enabled) and nearly 2.5 times as fast as RDNA 2 in games with ray-tracing effects enabled. Performance for at least some machine learning workloads also goes way up—twice as fast as RDNA 3 and four times as fast as RDNA 2.

We’ll see this in more detail when we start comparing performance, but AMD seems to have accomplished this goal. Despite having 64 or 56 compute units (for the 9070 XT and 9070, respectively), the cards’ performance often competes with AMD’s last-generation flagships, the RX 7900 XTX and 7900 XT. Those cards came with 96 and 84 compute units, respectively. The 9070 cards are specced a lot more like last generation’s RX 7800 XT—including the 16GB of GDDR6 on a 256-bit memory bus, as AMD still isn’t using GDDR6X or GDDR7—but they’re much faster than the 7800 XT was.

AMD has dramatically increased the performance per compute unit for RDNA 4. Credit: AMD

The 9070 series also uses a new 4 nm manufacturing process from TSMC, an upgrade from the 7000 series’ 5 nm process (and the 6 nm process used for the separate memory controller dies in higher-end RX 7000-series models that used chiplets). AMD’s GPUs are normally a bit less efficient than Nvidia’s, but the architectural improvements and the new manufacturing process allow AMD to do some important catch-up.

Both of the 9070 models we tested were ASRock Steel Legend models, and the 9070 and 9070 XT had identical designs—we’ll probably see a lot of this from AMD’s partners since the GPU dies and the 16GB RAM allotments are the same for both models. Both use two 8-pin power connectors; AMD says partners are free to use the 12-pin power connector if they want, but given Nvidia’s ongoing issues with it, most cards will likely stick with the reliable 8-pin connectors.

AMD doesn’t appear to be making and selling reference designs for the 9070 series the way it did for some RX 7000 and 6000-series GPUs or the way Nvidia does with its Founders Edition cards. From what we’ve seen, 2 or 2.5-slot, triple-fan designs will be the norm, the way they are for most midrange GPUs these days.

Testbed notes

We used the same GPU testbed for the Radeon RX 9070 series as we have for our GeForce RTX 50-series reviews.

An AMD Ryzen 7 9800X3D ensures that our graphics cards will be CPU-limited as little as possible. An ample 1050 W power supply, 32GB of DDR5-6000, and an AMD X670E motherboard with the latest BIOS installed round out the hardware. On the software side, we use an up-to-date installation of Windows 11 24H2 and recent GPU drivers for older cards, ensuring that our tests reflect whatever optimizations Microsoft, AMD, Nvidia, and game developers have made since the last generation of GPUs launched.

We have numbers for all of Nvidia’s RTX 50-series GPUs so far, plus most of the 40-series cards, most of AMD’s RX 7000-series cards, and a handful of older GPUs from the RTX 30-series and RX 6000 series. We’ll focus on comparing the 9070 XT and 9070 to other 1440p-to-4K graphics cards since those are the resolutions AMD is aiming at.

Performance

At $549 and $599, the 9070 series is priced to match Nvidia’s $549 RTX 5070 and undercut the $749 RTX 5070 Ti. So we’ll focus on comparing the 9070 series to those cards, plus the top tier of GPUs from the outgoing RX 7000-series.

Some 4K rasterized benchmarks.

Starting at the top with rasterized benchmarks with no ray-tracing effects, the 9070 XT does a good job of standing up to Nvidia’s RTX 5070 Ti, coming within a few frames per second of its performance in all the games we tested (and scoring very similarly in the 3DMark Time Spy Extreme benchmark).

Both cards are considerably faster than the RTX 5070—between 15 and 28 percent for the 9070 XT and between 5 and 13 percent for the regular 9070 (our 5070 scored weirdly low in Horizon Zero Dawn Remastered, so we’d treat those numbers as outliers for now). Both 9070 cards also stack up well next to the RX 7000 series here—the 9070 can usually just about match the performance of the 7900 XT, and the 9070 XT usually beats it by a little. Both cards thoroughly outrun the old RX 7900 GRE, which was AMD’s $549 GPU offering just a year ago.

The 7900 XT does have 20GB of RAM instead of 16GB, which might help its performance in some edge cases. But 16GB is still perfectly generous for a 1440p-to-4K graphics card—the 5070 only offers 12GB, which could end up limiting its performance in some games as RAM requirements continue to rise.

On ray-tracing improvements

Nvidia got a jump on AMD when it introduced hardware-accelerated ray-tracing in the RTX 20-series in 2018. And while these effects were only supported in a few games at the time, many modern games offer at least some kind of ray-traced lighting effects.

AMD caught up a little when it began shipping its own ray-tracing support in the RDNA2 architecture in late 2020, but the issue since then has always been that AMD cards have taken a larger performance hit than GeForce GPUs when these effects are turned on. RDNA3 promised improvements, but our tests still generally showed the same deficit as before.

So we’re looking for two things with RDNA4’s ray-tracing performance. First, we want the numbers to be higher than they were for comparably priced RX 7000-series GPUs, the same thing we look for in non-ray-traced (or rasterized) rendering performance. Second, we want the size of the performance hit to go down. To pick an example: the RX 7900 GRE could compete with Nvidia’s RTX 4070 Ti Super in games without ray tracing, but it was closer to a non-Super RTX 4070 in ray-traced games. That deficit has helped keep AMD’s cards from being across-the-board competitive with Nvidia’s. Is that any different now?

Benchmarks for games with ray-tracing effects enabled. Both AMD cards generally keep pace with the 5070 in these tests thanks to RDNA 4’s improvements.

The picture our tests paint is mixed but tentatively positive. The 9070 series and RDNA4 post solid improvements in the Cyberpunk 2077 benchmarks, substantially closing the performance gap with Nvidia. In games where AMD’s cards performed well enough before—here represented by Returnal—performance goes up, but roughly proportionately with rasterized performance. And both 9070 cards still punch below their weight in Black Myth: Wukong, falling substantially behind the 5070 under the punishing Cinematic graphics preset.

So the benefits you see, as with any GPU update, will depend a bit on the game you’re playing. There’s also a possibility that game optimizations and driver updates made with RDNA4 in mind could boost performance further. We can’t say that AMD has caught all the way up to Nvidia here—the 9070 and 9070 XT are both closer to the GeForce RTX 5070 than the 5070 Ti, despite staying closer to the 5070 Ti in rasterized tests—but there is real, measurable improvement here, which is what we were looking for.

Power usage

The 9070 series’ performance increases are particularly impressive when you look at the power-consumption numbers. The 9070 comes close to the 7900 XT’s performance but uses 90 W less power under load. It beats the RTX 5070 most of the time but uses around 30 W less power.

The 9070 XT is a little less impressive on this front—AMD has set clock speeds pretty high, and this can increase power use disproportionately. The 9070 XT is usually 10 or 15 percent faster than the 9070 but uses 38 percent more power. The XT’s power consumption is similar to the RTX 5070 Ti’s (a GPU it often matches) and the 7900 XT’s (a GPU it always beats), so it’s not too egregious, but it’s not as standout as the 9070’s.
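For what it’s worth, those percentages fall straight out of the board-power figures in the spec table; the quick sketch below uses 12.5 percent as an assumed midpoint of the 10-to-15-percent performance gap we measured, so treat the perf-per-watt figure as a rough estimate:

```python
# Power and rough performance-per-watt comparison between the two cards,
# using the TBP figures from the spec list above. The 12.5 percent performance
# gap is an assumed midpoint of the 10-15 percent range we observed.
tbp_xt, tbp_base = 304, 220                      # watts
extra_power = tbp_xt / tbp_base - 1              # ~0.38 -> "38 percent more power"
perf_gain = 0.125
relative_perf_per_watt = (1 + perf_gain) / (1 + extra_power)   # ~0.81

print(f"{extra_power:.0%} more power, ~{relative_perf_per_watt:.0%} of the 9070's perf/W")
```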

AMD gives 9070 owners a couple of new toggles for power limits, though, which we’ll talk about in the next section.

Experimenting with “Total Board Power”

We don’t normally dabble much with overclocking when we review CPUs or GPUs—we’re happy to leave that to folks at other outlets. But when we review CPUs, we do usually test them with multiple power limits in place. Playing with power limits is easier (and occasionally safer) than actually overclocking, and it often comes with large gains to either performance (a chip that performs much better when given more power to work with) or efficiency (a chip that can run at nearly full speed without using as much power).

Initially, I experimented with the RX 9070’s power limits by accident. AMD sent me one version of the 9070 but exchanged it because of a minor problem the OEM identified with some units early in the production run. I had, of course, already run most of our tests on it, but that’s the way these things go sometimes.

By bumping the regular RX 9070’s TBP up just a bit, you can nudge it closer to 9070 XT-level performance.

The replacement RX 9070 card, an ASRock Steel Legend model, was performing significantly better in our tests, sometimes nearly closing the gap between the 9070 and the XT. It wasn’t until I tested power consumption that I discovered the explanation—by default, it was using a 245 W power limit rather than the AMD-defined 220 W limit. Usually, these kinds of factory tweaks don’t make much of a difference, but for the 9070, this power bump gave it a nice performance boost while still keeping it close to the 250 W power limit of the GeForce RTX 5070.

The 90-series cards we tested both add some power presets to AMD’s Adrenalin app in the Performance tab under Tuning. These replace and/or complement some of the automated overclocking and undervolting buttons that exist here for older Radeon cards. Clicking Favor Efficiency or Favor Performance can ratchet the card’s Total Board Power (TBP) up or down, limiting performance so that the card runs cooler and quieter or allowing the card to consume more power so it can run a bit faster.

The 9070 cards get slightly different performance tuning options in the Adrenalin software. These buttons mostly change the card’s Total Board Power (TBP), making it simple to either improve efficiency or boost performance a bit. Credit: Andrew Cunningham

For this particular ASRock 9070 card, the default TBP is set to 245 W. Selecting “Favor Efficiency” sets it to the default 220 W. You can double-check these values using an app like HWInfo, which displays both the current TBP and the maximum TBP in its Sensors Status window. Clicking the Custom button in the Adrenalin software gives you access to a Power Tuning slider, which for our card allowed us to ratchet the TBP up by up to 10 percent or down by as much as 30 percent.
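Put in concrete terms, for this card’s 245 W default, that slider works out to roughly the range below (other partner cards may ship with different defaults and limits):

```python
# TBP range allowed by the Power Tuning slider on our ASRock RX 9070 sample.
# The 245 W default and the +10/-30 percent limits are what we observed;
# other cards may expose different values.
default_tbp_w = 245
max_tbp_w = default_tbp_w * 1.10   # ~270 W
min_tbp_w = default_tbp_w * 0.70   # ~172 W
print(f"Adjustable TBP: roughly {min_tbp_w:.0f}-{max_tbp_w:.0f} W")
```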

This is all the firsthand testing we did with the power limits of the 9070 series, though I would assume that adding a bit more power also adds more overclocking headroom (bumping up the power limits is common for GPU overclockers no matter who makes your card). AMD says that some of its partners will ship 9070 XT models set to a roughly 340 W power limit out of the box but acknowledges that “you start seeing diminishing returns as you approach the top of that [power efficiency] curve.”

But it’s worth noting that the driver’s automated power presets give you an easy, set-it-and-forget-it way to find your own preferred balance of performance and power efficiency.

A quick look at FSR4 performance

There’s a toggle in the driver for enabling FSR 4 in FSR 3.1-supporting games. Credit: Andrew Cunningham

One of AMD’s headlining improvements to the RX 90-series is the introduction of FSR 4, a new version of its FidelityFX Super Resolution upscaling algorithm. Like Nvidia’s DLSS and Intel’s XeSS, FSR 4 can take advantage of RDNA 4’s machine learning processing power to do hardware-backed upscaling instead of taking a hardware-agnostic approach as the older FSR versions did. AMD says this will improve upscaling quality, but it also means FSR4 will only work on RDNA 4 GPUs.

The good news is that FSR 3.1 and FSR 4 are forward- and backward-compatible. Games that have already added FSR 3.1 support can automatically take advantage of FSR 4, and games that support FSR 4 on the 90-series can just run FSR 3.1 on older and non-AMD GPUs.

FSR 4 comes with a small performance hit compared to FSR 3.1 at the same settings, but better overall quality can let you drop to a faster preset like Balanced or Performance and end up with more frames-per-second overall. Credit: Andrew Cunningham

The only game in our current test suite to be compatible with FSR 4 is Horizon Zero Dawn Remastered, and we tested its performance using both FSR 3.1 and FSR 4. In general, we found that FSR 4 improved visual quality at the cost of just a few frames per second when run at the same settings—not unlike using Nvidia’s recently released “transformer model” for DLSS upscaling.

Many games will let you choose which version of FSR you want to use. But for FSR 3.1 games that don’t have a built-in FSR 4 option, there’s a toggle in AMD’s Adrenalin driver you can hit to switch to the better upscaling algorithm.

Even if they come with a performance hit, new upscaling algorithms can still improve performance by making the lower-resolution presets look better. We run all of our testing in “Quality” mode, which generally renders at two-thirds of native resolution and scales up. But if FSR 4 running in Balanced or Performance mode looks the same to your eyes as FSR 3.1 running in Quality mode, you can still end up with a net performance improvement in the end.
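For a concrete sense of what those presets mean at 4K, here’s a rough calculation. The two-thirds figure for Quality mode is described above; the Balanced and Performance scale factors below are the ratios older FSR versions have commonly used, so treat them as illustrative assumptions rather than confirmed FSR 4 values:

```python
# Approximate internal render resolutions for upscaler presets at 4K.
# Quality's two-thirds scale is described above; the Balanced and Performance
# factors are ratios older FSR versions have used (assumed, not confirmed for FSR 4).
native_w, native_h = 3840, 2160
presets = {"Quality": 1 / 1.5, "Balanced": 1 / 1.7, "Performance": 1 / 2.0}

for name, scale in presets.items():
    w, h = round(native_w * scale), round(native_h * scale)
    print(f"{name}: renders at ~{w}x{h}, upscaled to {native_w}x{native_h}")
```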

RX 9070 or 9070 XT?

Just $50 separates the advertised price of the 9070 from that of the 9070 XT, something both Nvidia and AMD have done in the past that I find a bit annoying. If you have $549 to spend on a graphics card, you can almost certainly scrape together $599 for a graphics card. All else being equal, I’d tell most people trying to choose one of these to just spring for the 9070 XT.

That said, availability and retail pricing for these might be all over the place. If your choices are a regular RX 9070 or nothing, or an RX 9070 at $549 and an RX 9070 XT at any price higher than $599, I would just grab a 9070 and not sweat it too much. The two cards aren’t that far apart in performance, especially if you bump the 9070’s TBP up a little bit, and games that are playable on one will be playable at similar settings on the other.

Pretty close to great

If you’re building a 1440p or 4K gaming box, the 9070 series might be the ones to beat right now. Credit: Andrew Cunningham

We’ve got plenty of objective data in here, so I don’t mind saying that I came into this review kind of wanting to like the 9070 and 9070 XT. Nvidia’s 50-series cards have mostly upheld the status quo, and for the last couple of years, the status quo has been sustained high prices and very modest generational upgrades. And who doesn’t like an underdog story?

I think our test results mostly justify my priors. The RX 9070 and 9070 XT are very competitive graphics cards, helped along by a particularly mediocre RTX 5070 refresh from Nvidia. In non-ray-traced games, both cards wipe the floor with the 5070 and come close to competing with the $749 RTX 5070 Ti. In games and synthetic benchmarks with ray-tracing effects on, both cards can usually match or slightly beat the similarly priced 5070, partially (if not entirely) addressing AMD’s longstanding performance deficit here. Neither card comes close to the 5070 Ti in these games, but they’re also not priced like a 5070 Ti.

Just as impressively, the Radeon cards compete with the GeForce cards while consuming similar amounts of power. At stock settings, the RX 9070 uses roughly the same amount of power under load as a 4070 Super but with better performance. The 9070 XT uses about as much power as a 5070 Ti, with similar performance before you turn ray-tracing on. Power efficiency was a small but consistent drawback for the RX 7000 series compared to GeForce cards, and the 9070 cards mostly erase that disadvantage. AMD is also less stingy with the RAM, giving you 16GB for the price Nvidia charges for 12GB.

Some of the old caveats still apply. Radeons still take a proportionally bigger performance hit from ray tracing than GeForce cards do. DLSS already looks pretty good and is widely supported, while FSR 3.1/FSR 4 adoption is still relatively low. Nvidia has a nearly monopolistic grip on the dedicated GPU market, which means many apps, AI workloads, and games support its GPUs best/first/exclusively. AMD is always playing catch-up to Nvidia in some respect, and Nvidia keeps progressing quickly enough that it feels like AMD never quite has the opportunity to close the gap.

AMD also doesn’t have an answer for DLSS Multi-Frame Generation. The benefits of that technology are fairly narrow, and you already get most of those benefits with single-frame generation. But it’s still a thing that Nvidia does that AMD doesn’t.

Overall, the RX 9070 cards are both awfully tempting competitors to the GeForce RTX 5070—and occasionally even the 5070 Ti. They’re great at 1440p and decent at 4K. Sure, I’d like to see them priced another $50 or $100 cheaper to well and truly undercut the 5070 and bring 1440p-to-4K performance to a sub-$500 graphics card. It would be nice to see AMD undercut Nvidia’s GPUs as ruthlessly as it undercut Intel’s CPUs nearly a decade ago. But these RDNA4 GPUs have way fewer downsides than previous-generation cards, and they come at a moment of relative weakness for Nvidia. We’ll see if the sales follow.

The good

  • Great 1440p performance and solid 4K performance
  • 16GB of RAM
  • Decisively beats Nvidia’s RTX 5070, including in most ray-traced games
  • RX 9070 XT is competitive with RTX 5070 Ti in non-ray-traced games for less money
  • Both cards match or beat the RX 7900 XT, AMD’s second-fastest card from the last generation
  • Decent power efficiency for the 9070 XT and great power efficiency for the 9070
  • Automated options for tuning overall power use to prioritize either efficiency or performance
  • Reliable 8-pin power connectors available in many cards

The bad

  • Nvidia’s ray-tracing performance is still usually better
  • At $549 and $599, pricing matches but doesn’t undercut the RTX 5070
  • FSR 4 isn’t as widely supported as DLSS and may not be for a while

The ugly

  • Playing the “can you actually buy these for AMD’s advertised prices” game

Photo of Andrew Cunningham

Andrew is a Senior Technology Reporter at Ars Technica, with a focus on consumer tech including computer hardware and in-depth reviews of operating systems like Windows and macOS. Andrew lives in Philadelphia and co-hosts a weekly book podcast called Overdue.


ai-versus-the-brain-and-the-race-for-general-intelligence

AI versus the brain and the race for general intelligence


Intelligence, ±artificial

We already have an example of general intelligence, and it doesn’t look like AI.

There’s no question that AI systems have accomplished some impressive feats, mastering games, writing text, and generating convincing images and video. That’s gotten some people talking about the possibility that we’re on the cusp of AGI, or artificial general intelligence. While some of this is marketing fanfare, enough people in the field are taking the idea seriously that it warrants a closer look.

Many arguments come down to the question of how AGI is defined, which people in the field can’t seem to agree upon. This contributes to estimates of its advent that range from “it’s practically here” to “we’ll never achieve it.” Given that range, it’s impossible to provide any sort of informed perspective on how close we are.

But we do have an existing example of AGI without the “A”—the intelligence provided by the animal brain, particularly the human one. And one thing is clear: The systems being touted as evidence that AGI is just around the corner do not work at all like the brain does. That may not be a fatal flaw, or even a flaw at all. It’s entirely possible that there’s more than one way to reach intelligence, depending on how it’s defined. But at least some of the differences are likely to be functionally significant, and the fact that AI is taking a very different route from the one working example we have is likely to be meaningful.

With all that in mind, let’s look at some of the things the brain does that current AI systems can’t.

Defining AGI might help

Artificial general intelligence hasn’t really been defined. Those who argue that it’s imminent are either vague about what they expect the first AGI systems to be capable of or simply define it as the ability to dramatically exceed human performance at a limited number of tasks. Predictions of AGI’s arrival in the intermediate term tend to focus on AI systems demonstrating specific behaviors that seem human-like. The further one goes out on the timeline, the greater the emphasis on the “G” of AGI and its implication of systems that are far less specialized.

But most of these predictions are coming from people working in companies with a commercial interest in AI. It was notable that none of the researchers we talked to for this article were willing to offer a definition of AGI. They were, however, willing to point out how current systems fall short.

“I think that AGI would be something that is going to be more robust, more stable—not necessarily smarter in general but more coherent in its abilities,” said Ariel Goldstein, a researcher at Hebrew University of Jerusalem. “You’d expect a system that can do X and Y to also be able to do Z and T. Somehow, these systems seem to be more fragmented in a way. To be surprisingly good at one thing and then surprisingly bad at another thing that seems related.”

“I think that’s a big distinction, this idea of generalizability,” echoed neuroscientist Christa Baker of NC State University. “You can learn how to analyze logic in one sphere, but if you come to a new circumstance, it’s not like now you’re an idiot.”

Mariano Schain, a Google engineer who has collaborated with Goldstein, focused on the abilities that underlie this generalizability. He mentioned both long-term and task-specific memory and the ability to deploy skills developed in one task in different contexts. These are limited-to-nonexistent in existing AI systems.

Beyond those specific limits, Baker noted that “there’s long been this very human-centric idea of intelligence that only humans are intelligent.” That’s fallen away within the scientific community as we’ve studied more about animal behavior. But there’s still a bias to privilege human-like behaviors, such as the human-sounding responses generated by large language models.

The fruit flies that Baker studies can integrate multiple types of sensory information, control four sets of limbs, navigate complex environments, satisfy their own energy needs, produce new generations of brains, and more. And they do all of that with brains that contain fewer than 150,000 neurons, far fewer than the number of artificial neurons in current large language models.

These capabilities are complicated enough that it’s not entirely clear how the brain enables them. (If we knew how, it might be possible to engineer artificial systems with similar capacities.) But we do know a fair bit about how brains operate, and there are some very obvious ways that they differ from the artificial systems we’ve created so far.

Neurons vs. artificial neurons

Most current AI systems, including all large language models, are based on what are called neural networks. These were intentionally designed to mimic how some areas of the brain operate, with large numbers of artificial neurons taking an input, modifying it, and then passing the modified information on to another layer of artificial neurons. Each of these artificial neurons can pass the information on to multiple instances in the next layer, with different weights applied to each connection. In turn, each of the artificial neurons in the next layer can receive input from multiple sources in the previous one.

After passing through enough layers, the final layer is read and transformed into an output, such as the pixels in an image that correspond to a cat.
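To make that description concrete, here’s a minimal sketch of the kind of layered network being described, written in plain Python/NumPy rather than any particular AI framework. The layer sizes and weights are arbitrary placeholders; the point is simply that every artificial neuron applies the same weighted-sum-plus-nonlinearity operation, in contrast to the specialized biological neurons discussed below.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, weights, biases):
    """One layer of identical artificial neurons: weighted sum, then a nonlinearity."""
    return np.maximum(0, inputs @ weights + biases)   # ReLU activation

# Toy network with arbitrary sizes: 8 inputs -> 16 hidden units -> 4 outputs.
w1, b1 = rng.normal(size=(8, 16)), np.zeros(16)
w2, b2 = rng.normal(size=(16, 4)), np.zeros(4)

x = rng.normal(size=(1, 8))    # a single input example
hidden = layer(x, w1, b1)      # every artificial neuron applies the same operation
output = hidden @ w2 + b2      # the final layer is read off as the network's output
print(output)
```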

While that system is modeled on the behavior of some structures within the brain, it’s a very limited approximation. For one, all artificial neurons are functionally equivalent—there’s no specialization. In contrast, real neurons are highly specialized; they use a variety of neurotransmitters and take input from a range of extra-neural sources like hormones. Some specialize in sending inhibitory signals while others activate the neurons they interact with. Different physical structures allow them to make different numbers of connections.

In addition, rather than simply forwarding a single value to the next layer, real neurons communicate through an analog series of activity spikes, sending trains of pulses that vary in timing and intensity. This allows for a degree of non-deterministic noise in communications.

Finally, while organized layers are a feature of a few structures in brains, they’re far from the rule. “What we found is it’s—at least in the fly—much more interconnected,” Baker told Ars. “You can’t really identify this strictly hierarchical network.”

With near-complete connection maps of the fly brain becoming available, she told Ars that researchers are “finding lateral connections or feedback projections, or what we call recurrent loops, where we’ve got neurons that are making a little circle and connectivity patterns. I think those things are probably going to be a lot more widespread than we currently appreciate.”

While we’re only beginning to understand the functional consequences of all this complexity, it’s safe to say that it allows networks composed of actual neurons far more flexibility in how they process information—a flexibility that may underlie how these neurons get re-deployed in a way that these researchers identified as crucial for some form of generalized intelligence.

But the differences between neural networks and the real-world brains they were modeled on go well beyond the functional differences we’ve talked about so far. They extend to significant differences in how these functional units are organized.

The brain isn’t monolithic

The neural networks we’ve generated so far are largely specialized systems meant to handle a single task. Even the most complicated tasks, like the prediction of protein structures, have typically relied on the interaction of only two or three specialized systems. In contrast, the typical brain has a lot of functional units. Some of these operate by sequentially processing a single set of inputs in something resembling a pipeline. But many others can operate in parallel, in some cases without any input activity going on elsewhere in the brain.

To give a sense of what this looks like, let’s think about what’s going on as you read this article. Doing so requires systems that handle motor control, which keep your head and eyes focused on the screen. Part of this system operates via feedback from the neurons that are processing the read material, causing small eye movements that help your eyes move across individual sentences and between lines.

Separately, there’s part of your brain devoted to telling the visual system what not to pay attention to, like the icon showing an ever-growing number of unread emails. Those of us who can read a webpage without even noticing the ads on it presumably have a very well-developed system in place for ignoring things. Reading this article may also mean you’re engaging the systems that handle other senses, getting you to ignore things like the noise of your heating system coming on while remaining alert for things that might signify threats, like an unexplained sound in the next room.

The input generated by the visual system then needs to be processed, from individual character recognition up to the identification of words and sentences, processes that involve systems in areas of the brain involved in both visual processing and language. Again, this is an iterative process, where building meaning from a sentence may require many eye movements to scan back and forth across a sentence, improving reading comprehension—and requiring many of these systems to communicate among themselves.

As meaning gets extracted from a sentence, other parts of the brain integrate it with information obtained in earlier sentences, which tends to engage yet another area of the brain, one that handles a short-term memory system called working memory. Meanwhile, other systems will be searching long-term memory, finding related material that can help the brain place the new information within the context of what it already knows. Still other specialized brain areas are checking for things like whether there’s any emotional content to the material you’re reading.

All of these different areas are engaged without you being consciously aware of the need for them.

In contrast, something like ChatGPT, despite having a lot of artificial neurons, is monolithic: No specialized structures are allocated before training starts. That’s in sharp contrast to a brain. “The brain does not start out as a bag of neurons and then as a baby it needs to make sense of the world and then determine what connections to make,” Baker noted. “There are already a lot of constraints and specifics that are already set up.”

Even in cases where it’s not possible to see any physical distinction between cells specialized for different functions, Baker noted that we can often find differences in what genes are active.

In contrast, pre-planned modularity is relatively new to the AI world. In software development, “This concept of modularity is well established, so we have the whole methodology around it, how to manage it,” Schain said. “It’s really an aspect that is important for maybe achieving AI systems that can then operate similarly to the human brain.” There are a few cases where developers have enforced modularity on systems, but Goldstein said these systems need to be trained with all the modules in place to see any gain in performance.

None of this is saying that a modular system can’t arise within a neural network as a result of its training. But so far, we have very limited evidence that they do. And since we mostly deploy each system for a very limited number of tasks, there’s no reason to think modularity will be valuable.

There is some reason to believe that this modularity is key to the brain’s incredible flexibility. The region that recognizes emotion-evoking content in written text can also recognize it in music and images, for example. But the evidence here is mixed. There are some clear instances where a single brain region handles related tasks, but that’s not consistently the case; Baker noted that, “When you’re talking humans, there are parts of the brain that are dedicated to understanding speech, and there are different areas that are involved in producing speech.”

This sort of re-use would also provide an advantage in terms of learning, since behaviors developed in one context could potentially be deployed in others. But as we’ll see, the differences between brains and AI when it comes to learning are far more comprehensive than that.

The brain is constantly training

Current AIs generally have two states: training and deployment. Training is where the AI learns its behavior; deployment is where that behavior is put to use. This isn’t absolute, as the behavior can be tweaked in response to things learned during deployment, like finding out it recommends eating a rock daily. But for the most part, once the weights among the connections of a neural network are determined through training, they’re retained.

That may be starting to change a bit, Schain said. “There is now maybe a shift in similarity where AI systems are using more and more what they call the test time compute, where at inference time you do much more than before, kind of a parallel to how the human brain operates,” he told Ars. But it’s still the case that neural networks are essentially useless without an extended training period.

In contrast, a brain doesn’t have distinct learning and active states; it’s constantly in both modes. In many cases, the brain learns while doing. Baker described that in terms of learning to take jumpshots: “Once you have made your movement, the ball has left your hand, it’s going to land somewhere. So that visual signal—that comparison of where it landed versus where you wanted it to go—is what we call an error signal. That’s detected by the cerebellum, and its goal is to minimize that error signal. So the next time you do it, the brain is trying to compensate for what you did last time.”
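Baker’s jump-shot example is essentially an online error-correction loop: each attempt produces an error signal, and the next attempt is adjusted to shrink it. Here’s a toy sketch of that idea; the learning rate and the “aim” model are invented for illustration and aren’t meant as a model of the cerebellum.

```python
# Toy "learn while doing" loop: after every attempt, the observed error is used
# to adjust the next attempt. All numbers are invented for illustration.
target = 10.0         # where we want the ball to land
bias = 3.0            # hidden systematic error in the throw
aim_offset = 0.0      # internal correction that gets tuned on every attempt
learning_rate = 0.5

for attempt in range(1, 6):
    landed = target + bias - aim_offset    # where this throw actually lands
    error = landed - target                # the error signal from watching the ball
    aim_offset += learning_rate * error    # compensate a little more next time
    print(f"Attempt {attempt}: landed at {landed:.2f} (error {error:+.2f})")
```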

It makes for very different learning curves. An AI is typically not very useful until it has had a substantial amount of training. In contrast, a human can often pick up basic competence in a very short amount of time (and without massive energy use). “Even if you’re put into a situation where you’ve never been before, you can still figure it out,” Baker said. “If you see a new object, you don’t have to be trained on that a thousand times to know how to use it. A lot of the time, [if] you see it one time, you can make predictions.”

As a result, while an AI system with sufficient training may ultimately outperform the human, the human will typically reach a high level of performance faster. And unlike an AI, a human’s performance doesn’t remain static. Incremental improvements and innovative approaches are both still possible. This also allows humans to adjust to changed circumstances more readily. An AI trained on the body of written material up until 2020 might struggle to comprehend teen-speak in 2030; humans could at least potentially adjust to the shifts in language. (Though maybe an AI trained to respond to confusing phrasing with “get off my lawn” would be indistinguishable.)

Finally, since the brain is a flexible learning device, the lessons learned from one skill can be applied to related skills. So the ability to recognize tones and read sheet music can help with the mastery of multiple musical instruments. Chemistry and cooking share overlapping skillsets. And when it comes to schooling, learning how to learn can be used to master a wide range of topics.

In contrast, it’s essentially impossible to use an AI model trained on one topic for much else. The biggest exceptions are large language models, which seem to be able to solve problems on a wide variety of topics if they’re presented as text. But here, there’s still a dependence on sufficient examples of similar problems appearing in the body of text the system was trained on. To give an example, something like ChatGPT can seem to be able to solve math problems, but it’s best at solving things that were discussed in its training materials; giving it something new will generally cause it to stumble.

Déjà vu

For Schain, however, the biggest difference between AI and biology is in terms of memory. For many AIs, “memory” is indistinguishable from the computational resources that allow it to perform a task and was formed during training. For the large language models, it includes both the weights of connections learned then and a narrow “context window” that encompasses any recent exchanges with a single user. In contrast, biological systems have a lifetime of memories to rely on.

“For AI, it’s very basic: It’s like the memory is in the weights [of connections] or in the context. But with a human brain, it’s a much more sophisticated mechanism, still to be uncovered. It’s more distributed. There is the short term and long term, and it has to do a lot with different timescales. Memory for the last second, a minute and a day or a year or years, and they all may be relevant.”
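A rough way to picture the contrast Schain describes: for a deployed language model, “memory” is nothing more than the frozen weights plus whatever fits in the context window passed along with each request. The sketch below is schematic Python (the `generate` function is a hypothetical stand-in, not a real API), just to show that anything that scrolls out of the window is simply gone.

```python
# Schematic sketch of LLM-style "memory": frozen weights plus a sliding context
# window. `generate` is a hypothetical stand-in, not a real library call.

MAX_CONTEXT_ITEMS = 2   # tiny window so the truncation is easy to see

def generate(weights, context):
    # Stand-in for a model call: it can only "remember" what's in `context`.
    return f"reply based on: {context}"

weights = "frozen parameters learned during training"
history = []

for turn in ["first fact", "second fact", "third fact", "question about the first fact"]:
    history.append(turn)
    context = history[-MAX_CONTEXT_ITEMS:]   # anything older silently falls out
    print(generate(weights, context))
```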

This lifetime of memories can be key to making intelligence general. It helps us recognize the possibilities and limits of drawing analogies between different circumstances or applying things learned in one context versus another. It provides us with insights that let us solve problems that we’ve never confronted before. And, of course, it also ensures that the horrible bit of pop music you were exposed to in your teens remains an earworm well into your 80s.

The differences between how brains and AIs handle memory, however, are very hard to describe. AIs don’t really have distinct memory, while the brain’s use of memory during any task more sophisticated than navigating a maze is so poorly understood that it’s difficult to discuss at all. All we can really say is that there are clear differences there.

Facing limits

It’s difficult to think about AI without recognizing the enormous energy and computational resources involved in training one. And in this case, it’s potentially relevant. Brains have evolved under enormous energy constraints and continue to operate using well under the energy that a daily diet can provide. That has forced biology to figure out ways to optimize its resources and get the most out of the resources it does commit to.

In contrast, the story of recent developments in AI is largely one of throwing more resources at them. And plans for the future seem to (so far at least) involve more of this, including larger training data sets and ever more artificial neurons and connections among them. All of this comes at a time when the best current AIs are already using three orders of magnitude more neurons than we’d find in a fly’s brain and have nowhere near the fly’s general capabilities.

It remains possible that there is more than one route to those general capabilities and that some offshoot of today’s AI systems will eventually find a different route. But if it turns out that we have to bring our computerized systems closer to biology to get there, we’ll run into a serious roadblock: We don’t fully understand the biology yet.

“I guess I am not optimistic that any kind of artificial neural network will ever be able to achieve the same plasticity, the same generalizability, the same flexibility that a human brain has,” Baker said. “That’s just because we don’t even know how it gets it; we don’t know how that arises. So how do you build that into a system?”

Photo of John Timmer

John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.


11-standouts-from-steam-next-fest’s-thousands-of-free-game-demos

11 standouts from Steam Next Fest’s thousands of free game demos


Let Ars help you find some needles in Steam’s massive haystack of game trials.

If you head over to the Steam Next Fest charts right now, Valve will offer you a glimpse of the 2,228 games offering free downloadable demos as part of the event through Sunday, March 3. That is way too many games to effectively evaluate in such a short time, even with the massive resources of the Ars Orbiting HQ.

But we haven’t let that stop us from trying. With the assistance of some early access provided by Valve and game publishers, we’ve spent the last few days playing dozens and dozens of the most promising Next Fest demos in an attempt to pull out some interesting-looking needles from Valve’s massive haystack. Below are the results of that search—a varied list of 11 titles we think are worth investing some time (and zero dollars of money) into a demo download.

But this is just a starting point. Please use the comments below to share any other diamonds in the rough you think your fellow Ars readers need to know about.

Afterblast

Developer: Lumino Games

Planned release date: April 2025

Popular Steam tags: FPS, Online Co-Op, Action Roguelike, Roguelite

Steam page

Start with the roguelike, room-based shooting action of Returnal. Move it to a first-person perspective and add in the double-jump-and-dash movement system of the new Doom games. Throw in a Halo Infinite-style grappling hook that can be used for traversal or combat. The result would come pretty close to Afterblast, a twitch-action shooter that shines from the jump in this fast-paced demo.

This is the kind of game where halting your movement for even a split second often means being instantly overwhelmed by enemies that swarm from all angles. Rather than being hard to handle, though, the game’s zippy movement makes it feel relatively simple to jump and dash from ledge to ledge, avoiding bullet-hell style projectile patterns as you do.

The grappling hook is by far the most satisfying part of the demo, though. Beyond jumping out of the way of opposing fire, you can also use it to drag yourself over to stunned enemies, exploding them into piles of goo and collectible items. Bouncing from enemy to enemy in this way, with a few well-placed jumps and dashes in between, felt like being a kid bouncing on a trampoline.

The Afterblast demo provides a feel for the game’s randomized item system, which gives access to automated drones, powerful grenades, and other superpowers that should make each run feel unique. We can’t wait to see more.

-Kyle Orland

Castle V Castle

Developer: Nopopo

Planned release date: “Coming soon”

Popular Steam tags: Card Battler, Roguelite, Card Game, PvE

Steam page

If you remember cult-classic Flash game Castle Wars, you already know the basics of Castle V Castle. You and your opponent take turns using a handful of cards to either build up your own castle or break down the castle on the other side until one castle has been reduced to rubble. You then improve your deck and do it all again.

Playing effectively means figuring out when to attack, when to defend, and when to use cards to bulk up the various resources needed to play future cards. Strong play requires thinking a few moves ahead, both to avoid being left with a hand of unplayable cards and to counter or reflect potential incoming attacks from your opponent (or deny them the resources they might need).

An extremely clean black and white interface and quick, amusing animations make this a very easy game to pick up and play. But it’s the strategy of taking on short-term risks to get long-term rewards that will keep you coming back for round after round. And while the demo’s extremely punishing daily challenges are good for some continued longevity, we’re hoping the final game will add online multiplayer battles rather than just letting us beat down the AI over and over again.

-Kyle Orland

Dragonkin: The Banished

Developer: Eko Software

Planned release date: March 6, 2025 (Early Access)

Popular Steam tags: Action, Hack and Slash, Adventure, RPG, Loot

Steam page

If you’ve grown tired of Diablo IV but are looking for another dark, mouse-based action RPG to scratch the same itch, Dragonkin: The Banished should be right up your alley. Just like in Diablo, the name of the game here is clicking to move and attack swarms of enemies in dark isometric dungeons, throwing in some magical attacks with the number keys on your keyboard as needed.

The Dragonkin demo leads you through a lot of extremely expository and overwrought cutscenes, broken up by short playable vignettes that introduce you to the main character classes: the heavy, the archer, the fire mage, etc. These introductory characters seem extremely overpowered for the early game, cutting through enemy grunts like butter while barely taking a scratch from underpowered opposition attacks. But it gives a good feel for the wide variety of available attacks, from radiating lightning to a satisfying bull charge.

It’s only after this introduction that you’re thrown into a more standard challenge, clearing out a dungeon with a low-level character that needs all the help they can get. While the moment-to-moment gameplay will be familiar to Diablo heads, a few small touches like a handy dash-dodge maneuver make it all lean a bit more towards the “action” side of “action RPG.”

If you’re looking for pyrotechnic explosions, cinematic cut scenes, and plenty of things to click on, you could definitely do a lot worse.

-Kyle Orland

Glum

Developer: CinderCat Games

Planned release date: 2025

Popular Steam tags: Adventure, Funny, Singleplayer

Steam page

First-person shooters are obviously all about the shooting—it’s right there in the name, after all. But the most satisfying part of these games is often running right up to an enemy and pounding them with a suspiciously powerful close-range melee attack.

Glum takes this satisfying melee bit and makes it the focus of the whole game. Instead of fists or a bludgeoning weapon, though, your main melee weapon in Glum is a steel-toed boot. You’ll see that boot hovering menacingly and incongruously in front of you as you zip around vaguely medieval-themed rooms, charging back with a bent knee for perfectly timed strikes as soon as an enemy gets in your face.

The Glum demo already takes this “first-person booter” concept in some interesting directions, providing plenty of barrels and other heavy objects you can convert into single-use projectile weapons with a kick from the right angle. You can also kick off of walls and other angled surfaces to fly through the air in some surprisingly satisfying and floaty platforming, which is key to finding the many secrets hidden in the demo’s tutorial-esque rooms.

But the most satisfying part is still kicking an enemy directly and watching the ragdoll corpse fly off a wall and back toward your boot, where you can send it flying into other encroaching foes. The light-hearted comedy action is especially welcome in an industry that sometimes seems full of first-person games that are too full of themselves.

-Kyle Orland

Guntouchables

Developer: Game Swing

Planned release date: “To be announced”

Popular Steam tags: Action Roguelike, Multiplayer, Post-apocalyptic

Steam page

Seeing and hearing the pre-game logo screen for Ghost Ship Games immediately puts my brain into Deep Rock Galactic mode: headset on, workday stress abandoned, beverage ready. Guntouchables, published by Ghost Ship and developed by Game Swing, could readily fit into that slot. It’s a multiplayer-first overhead shooter in which you and up to three other sausage-y folks with tiny stick legs grab loot, kill baddies, and get the heck out of there. If it’s been a moment since you and your squad have shot some things together, send them a message and link the demo.

A horde of mutants has overrun the world, and you and your fellow redneck preppers are having your moment. You pick a weapon, like the precise but slow hunting rifle or the pray-and-spray SMG, and a character, each with their own secondary weapons and skill trees. The game shows you a map with stuff to grab or destroy and the car you need to reach to escape. Then you’re off, moving with the WASD keys and aiming in a circle with the mouse. The demo wasn’t ready for controller play yet, but I could see it coming in the future (along with another justification for the trackpads on the Steam Deck).

I cajoled a friend into playing, and we had a great time, both blasting and winning, but also explaining to each other just how dumb that last move was. The game could do more to help you quickly identify and distinguish among your teammates, as they’re currently all green names and health bars. And an optional tutorial mission would go a long way to help explain weapon and item mechanics that we had to test out live. But it’s a demo, and the core experience—shoot, run, grab, swarm coming, panic—already feels plenty strong. There’s a goofy, lightly icky charm to the voice-overs and visuals, and the upgrade paths are pleasantly addictive. It’s a ridiculous game with a ridiculous name, and I recommend it.

-Kevin Purdy

Hyper Empire

Developer: Fair Weather Studios

Planned release date: Q1 2025

Popular Steam tags: 4X, Turn-Based Strategy, Auto Battler, Strategy

Steam page

When you play an RPG or strategy game, do you spend hours just staring at the tech tree, trying to figure out the upgrade path that will maximize your power going forward? If so, you’ll probably love Hyper Empire, a super-condensed 4X space simulation where a good 80 percent of the game is spent staring at a menu screen and deciding how best to spend your limited resources.

That’s actually more interesting than it might initially sound. You need to spend money on a fleet of ships to defend your carrier from potential attack, of course. But spending too much on powerful ships means you can’t invest in the outpost stations and tools that will bring in even more resources (and more powerful ships) later in the run. The balance between short-term risk and long-term reward seems well-tuned for those who like to agonize over every potential decision.

When battles inevitably happen, they play out as automated orgies of interstellar explosions that can be a joy to watch, especially once you hit a critical mass of defensive ships. And while random happenings can influence your resources between those battles, none of them have seemed too impactful in the demo so far.

The biggest problem with Hyper Empire right now is that the demo caps out at 30 turns, right when the “just one more turn” resource-building loop is starting to get good. We can’t wait to continue to juggle a bunch of numbers like an intergalactic accountant in the full game.

-Kyle Orland

Monaco 2

Developer: Pocketwatch Games

Planned release date: 2025

Popular Steam tags: Co-op, Heist, Indie, Arcade, Top-Down, Loot

Steam page

It’s been well over a decade since the first Monaco wowed us with its ultra-stylish overhead “heist simulator” gameplay. This long-delayed sequel keeps the same basic find-the-MacGuffin-and-escape gameplay, but now with vibrant 3D graphics and more complex, multi-floor building layouts.

As with the first Monaco, this is a stealth game that doesn’t absolutely require stealth. Sure, it’s easier if you sneak by the guards and cameras without raising the alarm, using handy sightlines and quiet movements to avoid detection. But if you’re found, the game quickly transitions into something of a 3D game of Pac-Man, where you have to outrun the guards and use hidden corridors or hidey-holes to outsmart them.

The Monaco 2 demo includes four classes of thieves, each with their own unique way of distracting or avoiding the guards. I especially liked the socialite, who uses a toy poodle to charm nearby guards into ignoring her, and the tech specialist, who can use a drone to interact with doors and items while he hides in relative safety.

The updated 3D viewpoint loses some of the simplistic charm of the original’s overhead perspective. Still, this modernized version of the classic stealth game is incredibly easy to pick up and play, especially with a few friends in co-op mode.

-Kyle Orland

Monster Train 2

Developer: Shiny Shoe

Planned release date: “Coming Soon”

Popular Steam tags: Strategy, Card Game, Roguelike, Demons, PvP

Steam page

Monster Train 2 is a lot more Monster Train. Given that the original is in my top five for all-time Steam hours played, I’m happy about that. Just 30 minutes into testing it, I had to tell myself, “No, this really is the last round,” and physically walk away to enforce it. Well, the last round, and then some upgrade shopping. OK, one more and then no more.

Monster Train 2 is, like the original, an amalgam of turn-based tactics and roguelike deckbuilding, with a heaven-versus-hell backstory that is arch, goofy, and entirely skippable. Enemies enter your train on the bottom of three decks and fight their way upward, turn by turn. Your card deck has hellish monsters that you place across three levels of your train and spells that can damage, buff your monsters, and debuff their misguided angels. The music is high-energy melodic metal, the art pops off the screen, and the challenge is largely the same: balancing momentary threats against the need to prepare for future baddies.

Besides new monsters, cards, and clans, the sequel adds some new things, all of which might add up to a bit too much to manage for some folks. Hero-type creatures can have abilities with cooldowns. New card types include equipment you can put on creatures and abilities you can apply to train floors. You can customize your train and upgrade its core pyre with abilities. This demo had me forgetting monster abilities and feeling overwhelmed with where to focus my upgrades. And yet I had a good time, and I’ll probably learn a new approach to turn actions over time. Hell, after all, devours the indolent.

-Kevin Purdy

Reignbreaker

Developer: Studio Fizbin

Planned release date: March 18, 2025

Popular Steam tags: Indie, Hack and Slash, Action Roguelike, Combat

Steam page

The surface similarities between Reignbreaker and Hades are hard to ignore. But Reignbreaker‘s flavor of isometric run-and-gun-and-bash gameplay sets itself apart instantly with an extremely compelling steampunk aesthetic, full of clanging metal sound effects and relentless robotic enemies. That extends to the art direction, with thick outlines making it easy to pick out the color-coded hazards from the dull blues and grays of the metal-and-stone backgrounds.

Reignbreaker also stands out for some extremely chunky-feeling melee attacks and a javelin that can be used for powerful ranged projectiles or slammed down as a temporary turret. Tight controls make it a joy to dash between enemy projectiles as you wait for the opportune moment to go in for the kill. The Reignbreaker demo also shows off a few of the powerful bosses that will require most players to acquire a few permanent power-ups before making it too deep into the game’s randomized corridors.

This is one to keep an eye on as Hades 2 continues to barrel through Early Access toward its eventual final release.

-Kyle Orland

Shuffle Tactics

Developer: Club Sandwich

Planned release date: “Coming soon”

Popular Steam tags: Singleplayer, Roguelike, Tactical RPG, Isometric

Steam page

Like any good game with the word “Tactics” in its title, Shuffle Tactics is all about positioning. Move your units on a grid to maximize the damage they can inflict on enemies while minimizing the counterattacks that will inevitably come on their next turn. You know the drill.

But Shuffle Tactics adds just a hint of Slay the Spire into the mix, limiting your actions to those drawn from a deck of cards that you can build and modify between battles. As the game goes on, that means you might not be sure what options will be available on your next turn, forcing some quick improvisation if you want to maximize your chances.

I’m already a big fan of the demo’s satisfying sword-throwing mechanic, which lets you toss your melee weapon for a ranged attack and then call it back for more damage when it returns to your hand. I’m also enamored with the game’s evocative pixel-art animations, which make every movement and attack a joy to watch.

My biggest problem is that the demo gets very difficult very quickly; I had quite a few runs fall apart incredibly early when faced with an unavoidable “Elite” matchup that I wasn’t yet powerful enough to conquer. Hopefully, the developer can work out the balancing issues before launch because this mix of tactical strategy and card-based luck is a match made in heaven.

-Kyle Orland

Squirreled Away

Developer: Far Seas

Planned release date: “Coming soon”

Popular Steam tags: Exploration, Third Person, Cute, Relaxing

Steam page

“Be the squirrel” is Far Seas’ description of its upcoming game, and while that’s a big promise, Squirreled Away seems on track to deliver. You’re not just a squirrel, mind you, but a tool-crafting, home-building, achievement-unlocking squirrel, owing to the demands of the gaming format. But when moving around, you get to experience the manic, sticky-pawed, and often weightless nature of squirreldom. Why do squirrels run corkscrews around the perimeter of a tree instead of climbing straight up? Maybe because, like me, they’re trying to keep up with their camera view on their right controller stick while moving with the left.

Squirreled Away‘s demo gives you a taste of its core mechanics, like stashing away items in a cache for winter or building an axe out of a pebble and twig so you can break larger branches into sticks for a fellow squirrel. The look, feel, and sound of the experience are decidedly calm, with single-instrument melodies lilting in and out as you scamper about, gather resources, and unlock quests and areas. You have health and stamina bars, but the game is gentle if you run them out, sending you back to safety or reminding you to eat some food.

It feels like a more kinetic Animal Crossing, with friendly animals and low-stakes challenges. Except that at any time, you could bail on your tasks, scamper upward, and leap from one far-out branch to another, living out the daydreams of anybody who works next to a window overlooking a tree.

-Kevin Purdy

Kyle Orland has been the Senior Gaming Editor at Ars Technica since 2012, writing primarily about the business, tech, and culture behind video games. He has journalism and computer science degrees from the University of Maryland. He once wrote a whole book about Minesweeper.

reddit-mods-are-fighting-to-keep-ai-slop-off-subreddits-they-could-use-help.

Reddit mods are fighting to keep AI slop off subreddits. They could use help.


Mods ask Reddit for tools as generative AI gets more popular and inconspicuous.

Redditors in a treehouse with a NO AI ALLOWED sign

Credit: Aurich Lawson (based on a still from Getty Images)

Like it or not, generative AI is carving out its place in the world. And some Reddit users are definitely in the “don’t like it” category. While some subreddits openly welcome AI-generated images, videos, and text, others have responded to the growing trend by banning most or all posts made with the technology.

To better understand the reasoning and obstacles associated with these bans, Ars Technica spoke with moderators of subreddits that totally or partially ban generative AI. Almost all these volunteers described moderating against generative AI as a time-consuming challenge they expect to get more difficult as time goes on. And most are hoping that Reddit will release a tool to help their efforts.

It’s hard to know how much AI-generated content is actually on Reddit, and getting an estimate would be a large undertaking. Image library Freepik has analyzed the use of AI-generated content on social media but leaves Reddit out of its research because “it would take loads of time to manually comb through thousands of threads within the platform,” spokesperson Bella Valentini told me. For its part, Reddit doesn’t publicly disclose how many Reddit posts involve generative AI use.

To be clear, we’re not suggesting that Reddit has a large problem with generative AI use. By now, many subreddits seem to have agreed on their approach to AI-generated posts, and generative AI has not superseded the real, human voices that have made Reddit popular.

Still, mods largely agree that generative AI will likely get more popular on Reddit over the next few years, making generative AI modding increasingly important to both moderators and general users. Generative AI’s rising popularity has also had implications for Reddit the company, which in 2024 started licensing Reddit posts to train the large language models (LLMs) powering generative AI.

(Note: All the moderators I spoke with for this story requested that I use their Reddit usernames instead of their real names due to privacy concerns.)

No generative AI allowed

When it comes to anti-generative AI rules, numerous subreddits have zero-tolerance policies, while others permit posts that use generative AI if it’s combined with human elements or is executed very well. These rules task mods with identifying posts using generative AI and determining if they fit the criteria to be permitted on the subreddit.

Many subreddits have rules against posts made with generative AI because their mod teams or members consider such posts “low effort” or believe AI runs counter to the subreddit’s mission of providing real human expertise and creations.

“At a basic level, generative AI removes the human element from the Internet; if we allowed it, then it would undermine the very point of r/AskHistorians, which is engagement with experts,” the mods of r/AskHistorians told me in a collective statement.

The subreddit’s goal is to provide historical information, and its mods think generative AI could make information shared on the subreddit less accurate. “[Generative AI] is likely to hallucinate facts, generate non-existent references, or otherwise provide misleading content,” the mods said. “Someone getting answers from an LLM can’t respond to follow-ups because they aren’t an expert. We have built a reputation as a reliable source of historical information, and the use of [generative AI], especially without oversight, puts that at risk.”

Similarly, Halaku, a mod of r/wheeloftime, told me that the subreddit’s mods banned generative AI because “we focus on genuine discussion.” Halaku believes AI content can’t facilitate “organic, genuine discussion” and “can drown out actual artwork being done by actual artists.”

The r/lego subreddit banned AI-generated art because it caused confusion in online fan communities and retail stores selling Lego products, r/lego mod Mescad said. “People would see AI-generated art that looked like Lego on [I]nstagram or [F]acebook and then go into the store to ask to buy it,” they explained. “We decided that our community’s dedication to authentic Lego products doesn’t include AI-generated art.”

Not all of Reddit is against generative AI, of course. Subreddits dedicated to the technology exist, and some general subreddits permit the use of generative AI in some or all forms.

“When it comes to bans, I would rather focus on hate speech, Nazi salutes, and things that actually harm the subreddits,” said 3rdusernameiveused, who moderates r/consoom and r/TeamBuilder25, which don’t ban generative AI. “AI art does not do that… If I was going to ban [something] for ‘moral’ reasons, it probably won’t be AI art.”

“Overwhelmingly low-effort slop”

Some generative AI bans are reflective of concerns that people are not being properly compensated for the content they create, which is then fed into LLM training.

Mod Mathgeek007 told me that r/DeadlockTheGame bans generative AI because its members consider it “a form of uncredited theft,” adding:

You aren’t allowed to sell/advertise the workers of others, and AI in a sense is using patterns derived from the work of others to create mockeries. I’d personally have less of an issue with it if the artists involved were credited and compensated—and there are some niche AI tools that do this.

Other moderators simply think generative AI reduces the quality of a subreddit’s content.

“It often just doesn’t look good… the art can often look subpar,” Mathgeek007 said.

Similarly, r/videos bans most AI-generated content because, according to its announcement, the videos are “annoying” and “just bad video” 99 percent of the time. In an online interview, r/videos mod Abrownn told me:

It’s overwhelmingly low-effort slop thrown together simply for views/ad revenue. The creators rarely care enough to put real effort into post-generation [or] editing of the content [and] rarely have coherent narratives [in] the videos, etc. It seems like they just throw the generated content into a video, export it, and call it a day.

An r/fakemon mod told me, “I can’t think of anything more low-effort in terms of art creation than just typing words and having it generated for you.”

Some moderators say generative AI helps people spam unwanted content on a subreddit, including posts that are irrelevant to the subreddit and posts that attack users.

“[Generative AI] content is almost entirely posted for purely self promotional/monetary reasons, and we as mods on Reddit are constantly dealing with abusive users just spamming their content without regard for the rules,” Abrownn said.

A moderator of the r/wallpaper subreddit, which permits generative AI, disagrees. The mod told me that generative AI “provides new routes for novel content” in the subreddit and questioned concerns about generative AI stealing from human artists or offering lower-quality work, saying those problems aren’t unique to generative AI:

Even in our community, we observe human-generated content that is subjectively low quality (poor camera/[P]hotoshopping skills, low-resolution source material, intentional “shitposting”). It can be argued that AI-generated content amplifies this behavior, but our experience (which we haven’t quantified) is that the rate of such behavior (whether human-generated or AI-generated content) has not changed much within our own community.

But we’re not a very active community—[about] 13 posts per day … so it very well could be a “frog in boiling water” situation.

Generative AI “wastes our time”

Many mods are confident in their ability to effectively identify posts that use generative AI. A bigger problem is how much time it takes to identify these posts and remove them.

The r/AskHistorians mods, for example, noted that all bans on the subreddit (including bans unrelated to AI) have “an appeals process,” and “making these assessments and reviewing AI appeals means we’re spending a considerable amount of time on something we didn’t have to worry about a few years ago.”

They added:

Frankly, the biggest challenge with [generative AI] usage is that it wastes our time. The time spent evaluating responses for AI use, responding to AI evangelists who try to flood our subreddit with inaccurate slop and then argue with us in modmail [direct messages to a subreddit’s mod team], and discussing edge cases could better be spent on other subreddit projects, like our podcast, newsletter, and AMAs, … providing feedback to users, or moderating input from users who intend to positively contribute to the community.

Several other mods I spoke with agree. Mathgeek007, for example, named “fighting AI bros” as a common obstacle. And for r/wheeloftime moderator Halaku, the biggest challenge in moderating against generative AI is “a generational one.”

“Some of the current generation don’t have a problem with it being AI because content is content, and [they think] we’re being elitist by arguing otherwise, and they want to argue about it,” they said.

A couple of mods noted that it’s less time-consuming to moderate subreddits that ban generative AI than it is to moderate those that allow posts using generative AI, depending on the context.

“On subreddits where we allowed AI, I often take a bit longer time to actually go into each post where I feel like… it’s been AI-generated to actually look at it and make a decision,” explained N3DSdude, a mod of several subreddits with rules against generative AI, including r/DeadlockTheGame.

MyarinTime, a moderator for r/lewdgames, which allows generative AI images, highlighted the challenges of identifying human-prompted generative AI content versus AI-generated content prompted by a bot:

When the AI bomb started, most of those bots started using AI content to work around our filters. Most of those bots started showing some random AI render, so it looks like you’re actually talking about a game when you’re not. There’s no way to know when those posts are legit games unless [you check] them one by one. I honestly believe it would be easier if we kick any post with [AI-]generated image… instead of checking if a button was pressed by a human or not.

Mods expect things to get worse

Most mods told me it’s pretty easy for them to detect posts made with generative AI, pointing to the distinct tone and favored phrases of AI-generated text. A few said that AI-generated video is harder to spot but still detectable. But as generative AI gets more advanced, moderators are expecting their work to get harder.

In a joint statement, r/dune mods Blue_Three and Herbalhippie said, “AI used to have a problem making hands—i.e., too many fingers, etc.—but as time goes on, this is less and less of an issue.”

R/videos’ Abrownn also wonders how easy it will be to detect AI-generated Reddit content “as AI tools advance and content becomes more lifelike.”

Mathgeek007 added:

AI is becoming tougher to spot and is being propagated at a larger rate. When AI style becomes normalized, it becomes tougher to fight. I expect generative AI to get significantly worse—until it becomes indistinguishable from ordinary art.

Moderators currently use various methods to fight generative AI, but they’re not perfect. r/AskHistorians mods, for example, use “AI detectors, which are unreliable, problematic, and sometimes require paid subscriptions, as well as our own ability to detect AI through experience and expertise,” while N3DSdude pointed to tools like Quid and GPTZero.

To manage current and future work around blocking generative AI, most of the mods I spoke with said they’d like Reddit to release a proprietary tool to help them.

“I’ve yet to see a reliable tool that can detect AI-generated video content,” Abrownn said. “Even if we did have such a tool, we’d be putting hundreds of hours of content through the tool daily, which would get rather expensive rather quickly. And we’re unpaid volunteer moderators, so we will be outgunned shortly when it comes to detecting this type of content at scale. We can only hope that Reddit will offer us a tool at some point in the near future that can help deal with this issue.”

A Reddit spokesperson told me that the company is evaluating what such a tool could look like. But Reddit doesn’t have a rule banning generative AI overall, and the spokesperson said the company doesn’t want to release a tool that would hinder expression or creativity.

For now, Reddit seems content to rely on moderators to remove AI-generated content when appropriate. Reddit’s spokesperson added:

Our moderation approach helps ensure that content on Reddit is curated by real humans. Moderators are quick to remove content that doesn’t follow community rules, including harmful or irrelevant AI-generated content—we don’t see this changing in the near future.

Making a generative AI Reddit tool wouldn’t be easy

Reddit is handling the evolving concerns around generative AI as it has handled other content issues, including by leveraging AI and machine learning tools. Reddit’s spokesperson said that this includes testing tools that can identify AI-generated media, such as images of politicians.

But making a proprietary tool that allows moderators to detect AI-generated posts won’t be easy, if it happens at all. The current tools for detecting generative AI are limited in their capabilities, and as generative AI advances, Reddit would need to provide tools that are more advanced than the AI-detecting tools that are currently available.

That would require a good deal of technical resources and would also likely present notable economic challenges for the social media platform, which only became profitable last year. And as noted by r/videos moderator Abrownn, tools for detecting AI-generated video still have a long way to go, making a Reddit-specific system especially challenging to create.

But even with a hypothetical Reddit tool, moderators would still have their work cut out for them. And because Reddit’s popularity is largely due to its content from real humans, that work is important.

Since Reddit’s inception, that has meant relying on moderators, which Reddit has said it intends to keep doing. As r/dune mods Blue_Three and Herbalhippie put it, it’s in Reddit’s “best interest that much/most content remains organic in nature.” After all, Reddit’s profitability has a lot to do with how much AI companies are willing to pay to access Reddit data. That value would likely decline if Reddit posts became largely AI-generated themselves.

But providing the technology to ensure that generative AI isn’t abused on Reddit would be a large challenge. For now, volunteer laborers will continue to bear the brunt of generative AI moderation.

Advance Publications, which owns Ars Technica parent Condé Nast, is the largest shareholder of Reddit.

Scharon is a Senior Technology Reporter at Ars Technica writing news, reviews, and analysis on consumer gadgets and services. She’s been reporting on technology for over 10 years, with bylines at Tom’s Hardware, Channelnomics, and CRN UK.

after-50-years,-ars-staffers-pick-their-favorite-saturday-night-live-sketches

After 50 years, Ars staffers pick their favorite Saturday Night Live sketches


“Do not taunt Happy Fun Ball.”

American musician Stevie Wonder (left) appears on an episode of ‘Saturday Night Live’ with comedian and actor Eddie Murphy, New York, New York, May 6, 1983. Credit: Anthony Barboza/Getty Images

The venerable late-night sketch comedy show Saturday Night Live is celebrating its 50th anniversary season this year. NBC will air a special on Sunday evening featuring current and former cast members.

I’ve long been a big fan of the show, since I was a kid in the late 1980s watching cast members such as Phil Hartman, Dana Carvey, and Jan Hooks. By then, the show was more than a decade old. It had already spawned huge Hollywood stars like Chevy Chase and Eddie Murphy and had gone through some near-death experiences as it struggled to find its footing.

The show most definitely does not appeal to some people. When I asked the Ars editorial team to share their favorite sketches, a few writers told me they had never found Saturday Night Live funny, hadn’t watched it in decades, or just did not get the premise of the show. Others, of course, love the show’s ability to poke fun at the cultural and political zeitgeist of the moment.

With the rise of the Internet, Saturday Night Live has become much more accessible. If you don’t care to watch live on Saturday night or record the show, its sketches are available on YouTube within a day or two. Not all of the show’s 10,000-odd sketches from the last five decades are available online, but many of them are.

With that said, here are some of our favorites!

Celebrity Hot Tub Party (Season 9)

Saturday Night Live has a thing for hot tubs, and it starts here, with the greatest of all hot tub parties.

Should you get in the water? Will it make you sweat?

Good god!

Celebrity Hot Tub.

—Ken Fisher

Papyrus (Season 43)

Some of SNL’s best skits satirize cultural touchstones that seem like they’d be way too niche but actually resonate broadly with its audience—like Font Snobs, i.e., those people who sneer at fonts like Comic Sans (you know who you are) in favor of more serious options like the all-time favorite Helvetica. (Seriously, Helvetica has its own documentary.)

In “Papyrus,” host Ryan Gosling played Steven, a man who becomes obsessed with the fact that the person who designed the Avatar logo chose to use Papyrus. “Was it laziness? Was it cruelty?” Why would any self-respecting graphic designer select the same font one sees all over in “hookah bars, Shakira merch, [and] off-brand teas”? The skit is played straight as a tense psychological thriller and ends with a frustrated Steven screaming, “I know what you did!” in front of the graphic designer’s house while the designer smirks in triumph.

There was even a sequel last year in which Gosling’s Steven is in a support group and seems to have recovered from the trauma of seeing the hated font everywhere—as long as he avoids triggers. Then he learns that the font for Avatar: The Way of Water is just Papyrus in bold.

So begins an elaborate plot to infiltrate a graphic designer awards event to confront his tormentor head-on. The twist: Steven achieves a personal epiphany instead and confronts the root of his trauma: the fact that he was never able to understand his father, Jonathan WingDings. “My dad was so hard to read,” a weeping Steven laments as he finally gets some much-needed closure. Like most sequels, it doesn’t quite capture the magic of the original, but it’s still a charming addition to the archive.

Papyrus.

—Jennifer Ouellette

Washington’s Dream (Season 49)

The only SNL skit known and loved by all my kids. Nate Bargatze is George Washington, who explains his dream of “liberty” to soldiers in his revolutionary army. Washington’s future America is heavy on bizarre weights, measures, and rules, though not quite so concerned about things like slavery.

Washington’s Dream.

—Nate Anderson

Commercial parodies

I’ve always been partial to SNL‘s commercial parodies, probably because I saw way too many similar (but earnest) commercials while watching terrestrial TV growing up.

The other good thing about the commercial format is that it’s hard to make them longer than about two minutes, so they don’t outstay their welcome like some other SNL sketches.

It’s hard to pick just one, so I’ll give a trio, along with the bits I think about and/or quote regularly.

Old Glory Insurance: “I don’t even know why the scientists make them!” (Season 21)

Old Glory Insurance.

First Citywide Change Bank: “All the time, our customers ask us, ‘How do you make money doing this?’ The answer is simple: volume.” (Season 14)

First CityWide Change Bank.

Happy Fun Ball: “Do not taunt Happy Fun Ball” (Season 16)

Happy Fun Ball.

—Kyle Orland

Anything with Phil Hartman (Seasons 12 to 20)

Phil Hartman was a regular on Saturday Night Live throughout my high school and college years, and it was nice to know that on the rare Saturday night when I did not have a date or plans, he and the cast would be on television to provide entertainment. He was the “glue” guy during his time on the show, playing a variety of roles and holding the show together.

Here are some of his most memorable sketches, at least to me.

Anal Retentive Chef. Hartman acts as Gene, who is… well, anal retentive. He appeared in five different skits over the years. This is the first one. (Season 14)

The Anal Retentive Chef.

Hartman had incredible range. During his first year on the show, he played President Reagan, who at the time had acquired the reputation of becoming doddering and forgetful. However, as Hartman clearly shows us in this sketch, that is far from reality. (Season 12)

President Reagan, Mastermind.

And here he is a few years later, during the first year of President Clinton’s term in office. This skit also features Chris Farley, who was memorable in almost everything he appeared in. “Do you mind if I wash it down?” (Season 18)

President Bill Clinton at McDonald’s.

Kyle has noted commercial parodies above, and there are many good ones. Hartman often appeared in these because he did such a good job of playing the “straight man” character in comedy, the generally normal person in contrast to all of the wackiness happening in a scene. One of Hartman’s most famous commercials is for Colon Blow cereal. However, my favorite is this zany commercial for Jiffy Pop… Airbags. (Season 17)

Jiffy Pop Airbag.

—Eric Berger

Motherlover (Season 34)

The Lonely Island (an American comedy trio, formed by Andy Samberg, Jorma Taccone, and Akiva Schaffer, which wrote comedy music videos) had bigger, more viral hits, but nothing surpasses the subversiveness of “to me, you’re like a brother, so be my motherlover.”

Motherlover.

—Jacob May

More Cowbell (Season 25)

This classic sketch gets featured on almost all SNL “best of” lists; “more cowbell” even made it into the dictionary. It’s a sendup of VH1’s “Behind the Music,” focused on the recording of Blue Oyster Cult’s 1976 hit “Don’t Fear the Reaper,” which features a distinctive percussive cowbell in the background. Will Ferrell is perfection as fictional cowbell player Gene Frenkle, whose overly enthusiastic playing is a distraction to his bandmates. But Christopher Walken’s “legendary” (and fictional) producer Bruce Dickinson loves the cowbell, encouraging Gene to “really explore the studio space” with each successive take. “I gotta have more cowbell, baby!”

Things escalate as Gene’s playing first becomes too flamboyant, and then passive-aggressive, until the band works through its tensions and decides to embrace the cowbell after all. The comic timing is spot on, and the cast doesn’t let the joke run too long (a common flaw in lesser SNL skits). Ferrell’s physical antics and Walken’s brilliantly deadpan delivery—”I got a fever and the only prescription is more cowbell!”—have the cast on the verge of breaking character throughout. It deserves its place in the pantheon of SNL‘s best.

More Cowbell.

—Jennifer Ouellette

The Californians (Season 37-present day)

I was going to go with Old Glory Insurance as my favorite SNL skit, but since Kyle already grabbed that one, I have to fall back on some of my runners-up. And although the Microsoft Robots and Career Day and even good ol’ Jingleheimer Junction almost topped my list, ultimately, I have to give it up to the recurring SNL skit that has probably given me more joy than anything the show has done since John Belushi’s samurai librarian. I am speaking of The Californians.

This fake soap opera, featuring a cast of perpetually blonde, perpetually unfaithful, perpetually directions-obsessed California stereotypes, hits me just right. The elements that get repeated in every skit (including and especially Fred Armisen’s inevitable “WHATAREYUUUUDUUUUUUUINGHERE” or the locally produced furniture that everyone makes a point of using in the second act) are the kind of absurdities that get funnier over time, and it’s awesome to see guest stars try on the hyper-SoCal accent that is mandatory for all characters in the Californians’ universe.

Special props to Kristen Wiig, too—she’s inevitably hilarious, but her incredulous line reading when Mick Jagger shows up as Stuart’s long-absent father (“STUART! You never told me you had a dad!”) can and will fully send me into doubled-over hysterics every single time.

The Californians.

—Lee Hutchinson

What’s the fuss about?

In more than 20 years of living in the United States, few things still remain as far outside my cultural frame of reference as SNL. Whenever someone makes an unintelligible joke in Slack (or IRC before it) and everyone laughs, it invariably turns out to be some SNL thing that anyone who grew up here instinctively understands.

To me, it was always just *crickets*.

—Jonathan Gitlin

Black Jeopardy (Season 42)

Kenan Thompson was the show’s first cast member born after SNL‘s premiere in 1975, and after joining the show in 2003, he has become its longest-running cast member. Whenever he is on screen, you know you’re about to see something hilarious. One of his best roles on SNL has become the “game show host,” with long-running bits on Family Feud and the absurdly hilarious Black Jeopardy. The most famous of these latter skits occurred in 2016, when Tom Hanks appeared. If you haven’t watched it, you really must.

Black Jeopardy.

—Eric Berger

Josh Acid (Season 15)

One of my favorite SNL sketches (and perhaps one of the most underrated) is an Old West send-up featuring a sheriff named “Josh Acid” (played by Mel Gibson during his hosting appearance in 1989), who keeps two bottles of acid in holsters instead of the standard six-shooter revolvers.

The character is a hero in his town, but when he throws acid on people, their skin melts, and they die a horrible, gruesome death. The townspeople witness one such death and say it’s “gross.” In response, the main character cites Jim Bowie using a Bowie knife and says, “I use acid because that’s my name.” At one point, Kevin Nealon, as the bartender, says the town is grateful he’s cleaned up the place, but “it’s just that we’re not sure which is worse: lawlessness, or having to watch people die horribly from acid.”

Later, when a woman asks Josh to choose between her or acid, he says, “Frida, I took a job, and that job’s not done until every criminal in this territory is either behind bars or melted down.”

The sketch is just absurdly ridiculous in a delightful way, and it gleefully subverts the stoic nobility of the stereotypical Western hero, which is a trope baby boomers grew up with on TV. If I were to stretch, I’d also say it works because it lampoons the idea that some methods of legally or rightfully killing someone are more honorable and socially acceptable than others.

I can’t find the sketch on YouTube, but I did find a copy on TikTok.

—Benj Edwards

Hidden Camera Commercials (Season 17)

For me—and, I suspect, most people—there are several “golden ages” of SNL. But if I had to pick just one, it would be the Chris Farley era. The crown jewel of Farley’s SNL tenure was certainly the Bob Odenkirk-penned “Van Down by the River.” Today, though, I’d like to highlight a deeper cut: a coffee commercial in which Farley’s character is told he is drinking decaf coffee instead of regular. Instead of being delighted that he can’t tell the difference in taste, he gets… ANGRY.

Farley’s incredulous “what?” and dawning rage at being deceived never fail to make me laugh.

Hidden Camera Commercials.

—Aaron Zimmerman

Wake Up and Smile (Season 21)

SNL loves to take a simple idea and repeat it—sometimes without enough progression. But “Wake Up and Smile” stands out by following its simple idea (perky morning show hosts are lost without their teleprompters) into an incredibly dark place. In six minutes, you can watch the polished veneer of civilization collapse into tribal violence, all within the absurdist confines of a vapid TV show. In the end, everyone wakes from their temporary dystopian dreamland. Well, except for the weatherman.

Wake Up and Smile.

—Nate Anderson

Thanks, Nate, and everyone who contributed. Indeed, one of the joys of watching the show live is that you never know when a sketch is going to go dark or very, very dark.

Eric Berger is the senior space editor at Ars Technica, covering everything from astronomy to private space to NASA policy, and author of two books: Liftoff, about the rise of SpaceX; and Reentry, on the development of the Falcon 9 rocket and Dragon. A certified meteorologist, Eric lives in Houston.

centurylink-nightmares:-users-keep-asking-ars-for-help-with-multi-month-outages

CenturyLink nightmares: Users keep asking Ars for help with multi-month outages


More CenturyLink horror stories

Three more tales of CenturyLink failing to fix outages until hearing from Ars.

Horror poster take on the classic White Zombie about Century Link rendering the internet powerless

Credit: Aurich Lawson | White Zombie (Public Domain)

CenturyLink hasn’t broken its annoying habit of leaving customers without service for weeks or months and repeatedly failing to show up for repair appointments.

We’ve written about CenturyLink’s failure to fix long outages several times in the past year and a half. In each case, desperate customers contacted Ars because the telecom provider didn’t reconnect their service. And each time, CenturyLink finally sprang into action and fixed the problems shortly after hearing from an Ars reporter.

Unfortunately, it keeps happening, and CenturyLink (also known as Lumen) can’t seem to explain why. In just the last two months, we heard from CenturyLink customers in three states who were without service for periods ranging from three weeks to more than four months.

In early December, we heard from John in Boulder, Colorado, who preferred that we not publish his last name. John said he and his wife had been without CenturyLink phone and DSL Internet service for over three weeks.

“There’s no cell service where we live, so we have to drive to find service… We’ve scheduled repairs [with CenturyLink] three different times, but each time nobody showed up, emailed, or called,” he told us. They pay $113 a month for phone and DSL service, he said.

John also told us his elderly neighbors were without service. He read our February 2024 article about a 39-day outage in Oregon and wondered if we could help. We also published an August 2023 article about CenturyLink leaving an 86-year-old woman in Minnesota with no Internet service for a month and a May 2024 article about CenturyLink leaving a couple in Oregon with no service for two months, then billing them for $239.

We contacted CenturyLink about the outages affecting John and his neighbor, providing both addresses to the company. Service for both was fixed several hours later. Suddenly, a CenturyLink “repair person showed up today, replaced both the modem and the phone card in the nearest pedestal, and we are reconnected to the rest of the world,” John told us.

John said he also messaged a CenturyLink technician whose contact information he had saved from a previous visit for a different matter. It turned out this technician had been promoted to area supervisor, so John’s outreach to him may also have contributed to the belated fix. However it happened, CenturyLink confirmed to Ars that service was restored for both John and his neighbor on the same day.

“Good news, we were able to restore service to both customers today,” a company spokesperson told us. “One had a modem issue, which needed to be replaced, and the other had a problem with their line.”

What were you waiting for?

After getting confirmation that the outages were fixed, we asked the CenturyLink spokesperson whether the company has “a plan to make sure that customer outages are always fixed when a customer contacts the company instead of waiting for a reporter to contact the company on the customer’s behalf weeks later.”

Here is the answer we got from CenturyLink: “Restoring customer service is a priority, and we apologized for the delay. We’re looking at why there was a repair delay.”

It appears that nothing has changed. Even as John’s problem was fixed, CenturyLink users in other states suffered even longer outages, and no one showed up for scheduled repair appointments. These outages weren’t fixed until late January—and only after the customers contacted us to ask for help.

Karen Kurt, a resident of Sheridan, Oregon, emailed us on January 23 to report that she had been without CenturyLink DSL Internet service since November 4, 2024. One of her neighbors was also suffering through the months-long outage.

“We have set up repair tickets only to have them voided and/or canceled,” Kurt told us. “We have sat at home on the designated repair day from 8–5 pm, and no one shows up.” Kurt’s CenturyLink phone and Internet service costs $172.04 a month, according to a recent bill she provided us. Kurt said she also has frequent CenturyLink phone outages, including some stretches that occurred during the three-month Internet outage.

Separately, a CenturyLink customer named David Stromberg in Bellevue, Washington, told us that his phone service had been out since September 16. He repeatedly scheduled repair appointments, but the scheduled days went by with no repairs. “Every couple weeks, they do this and the tech doesn’t show up,” he said.

“Quick” fixes

As far as we can tell, there weren’t any complex technical problems preventing CenturyLink from ending these outages. Once the public relations department heard from Ars, CenturyLink sent technicians to each area, and the customers had their services restored.

On the afternoon of January 24, we contacted CenturyLink about the outage affecting Kurt and her neighbor. CenturyLink restored service for both houses less than three hours later, finally ending outages that lasted over 11 weeks.

On Sunday, January 26, we informed CenturyLink’s public relations team about the outage affecting Stromberg in Washington. Service was restored about 48 hours later, ending the phone outage that lasted well over four months.

As we’ve done in previous cases, we asked CenturyLink why the outages lasted so long and why the company repeatedly failed to show up for repair appointments. We did not receive any substantive answer. “Services have been restored, and appropriate credits will be provided,” the CenturyLink spokesperson replied.

Stromberg said getting the credit wasn’t so simple. “We contacted them after service was restored. They credited the full amount, but it took a few phone calls. They also gave us a verbal apology,” he told us. He said they pay $80.67 a month for CenturyLink phone service and that they get Internet access from Comcast.

Kurt said she had to call CenturyLink each month the outage dragged on to obtain a bill credit. Though the outage is over, she said her Internet access has been unreliable since the fix, with webpages often taking painfully long times to load.

Kurt has only a 1.5Mbps DSL connection, so it’s not a modern Internet connection even on a good day. CenturyLink told us it found no further problems on its end, so it appears that Kurt is stuck with what she has for now.

Desperation

“We are just desperate,” Kurt told us when she first reached out. Kurt, a retired teacher, said she and her husband were driving to a library to access the Internet and help grandchildren with schoolwork. She said there’s no reliable cell service in the area and that they are on a waiting list for Starlink satellite service.

Kurt said her husband once suggested they switch to a different Internet provider, and she pointed out that there aren’t any better options. On the Starlink website, entering their address shows they are in an area labeled as sold out.

Although repair appointments came and went without a fix, Kurt said she received emails from CenturyLink falsely claiming that service had been restored. Kurt said she spoke with technicians doing work nearby and asked if CenturyLink is trying to force people to drop the service because it doesn’t want to serve the area anymore.

Kurt said a technician replied that there are some areas CenturyLink doesn’t want to serve anymore but that her address isn’t on that list. A technician explained that they have too much work, she said.

CenturyLink has touted its investments in modern fiber networks but hasn’t upgraded the old copper lines in Kurt’s area and many others.

“This is DSL. No fiber here!” Kurt told us. “Sometimes when things are congested, you can make a sandwich while things download. I have been told that is because this area is like a glass of water. At first, there were only a few of us drinking out of the glass. Now, CenturyLink has many more customers drinking out of that same glass, and so things are slower/congested at various times of the day.”

Kurt said the service tends to work better in mid-morning, early afternoon, after 9 pm on weeknights, and on weekends. “Sometimes pages take a bit of time to load. That is especially frustrating while doing school work with my grandson and granddaughter,” she said.

CenturyLink Internet even slower than expected

After the nearly three-month outage ended, Kurt told us on January 27 that “many times, we will get Internet back for two or three days, only to lose it again.” This seemed to be what happened on Sunday, February 2, when Kurt told us her Internet stopped working again and that she couldn’t reach a human at CenturyLink. She restarted the router but could not open webpages.

We followed up with CenturyLink’s public relations department again, but this time, the company said its network was performing as expected. “We ran a check and called Karen regarding her service,” CenturyLink told us on February 3. “Everything looks good on our end, with no problems reported since the 24th. She mentioned that she could access some sites, but the speed seemed really slow. We reminded her that she has a 1.5Mbps service. Karen acknowledged this but felt it was slower than expected.”

Kurt told us that her Internet is currently slower than it was before the outage. “Before October, at least the webpages loaded,” she said. Now, “the pages either do not load, continue to attempt to load, or finally time out.”

While Kurt is suffering from a lack of broadband competition, municipalities sometimes build public broadband networks when private companies fail to adequately serve their residents. ISPs such as CenturyLink have lobbied against these efforts to expand broadband access.

In May 2024, we wrote about how public broadband advocates say they’ve seen a big increase in opposition from “dark money” groups that don’t have to reveal their donors. At the time, CenturyLink did not answer questions about specific donations but defended its opposition to government-operated networks.

“We know it will take everyone working together to close the digital divide,” CenturyLink told us then. “That’s why we partner with municipalities on their digital inclusion efforts by providing middle-mile infrastructure that supports last-mile networks. We have and will continue to raise legitimate concerns when government-owned networks create an anti-competitive environment. There needs to be a level playing field when it comes to permitting, right-of-way fees, and cross subsidization of costs.”

Stuck with CenturyLink

Kurt said that CenturyLink has set a “low bar” for its service, and it isn’t even meeting that low standard. “I do not use the Internet a lot. I do not use the Internet for gaming or streaming things. The Internet here would never be able to do that. But I do expect the pages to load properly and fully,” she said.

Kurt said she and her husband live in a house they built in 2007 and originally were led to believe that Verizon service would be available. “Prior to purchasing the property, we did our due diligence and sought out all utility providers… Verizon insisted it was their territory on at least two occasions,” she said.

But when it was time to install phone and Internet lines, it turned out Verizon didn’t serve the location, she said. This is another problem we’ve written about multiple times—ISPs incorrectly claiming to offer service in an area, only to admit they don’t after a resident moves in. (Verizon sold its Oregon wireline operations to Frontier in 2010.)

“We were stuck with CenturyLink,” and “CenturyLink did not offer Internet when we first built this home,” Kurt said. They subscribed to satellite Internet offered by WildBlue, which was acquired by ViaSat in 2009. They used satellite for several years until they could get CenturyLink’s DSL Internet.

Now they’re hoping to replace CenturyLink with Starlink, which uses low-Earth orbit satellites that offer faster service than older satellite services. They’re on the waiting list for Starlink and are interested in Amazon’s Kuiper satellite service, which isn’t available yet.

“We are hoping one of these two vendors will open up a spot for us and we can move our Internet over to satellite,” Kurt said. “We have also heard that Starlink and Amazon are going to be starting up phone service as well as Internet. That would truly be a gift to us. If we could move all of our services over to something reliable, our life would be made so much easier.”

Not enough technicians for copper network

John, the Colorado resident who had a three-week CenturyLink outage, said his default DSL speed is 10Mbps downstream and 2Mbps upstream. He doubled that by getting a second dedicated line to create a bonded connection, he said.

When John set up repair appointments during the outage, the “dates came and went without the typical ‘your tech’s on their way’ email, without anyone showing up,” he said. John said he repeatedly called CenturyLink and was told there was a bad cable that was being fixed.

“Every time I called, I’d get somebody who said that it was a bad cable and it was being fixed. Every single time, they’d say it would be fixed by 11 pm the following day,” he said. “It wasn’t, so I’d call again. I asked to talk with a supervisor, but that was always denied. Every time, they said they’d expedite the request. The people I talked with were all very nice and very apologetic about our outage, but they clearly stayed in their box.”

John still had the contact information for the CenturyLink technician who set up his bonded connection and messaged him around the same time he contacted Ars. When a CenturyLink employee finally showed up to fix the problem, he “found that our DSL was out because our modem was bad, and the phone was out because there was a bad dial-tone card in the closest pedestal. It took this guy less than an hour to get us back working—and it wasn’t a broken cable,” John said.

John praised CenturyLink’s local repair team but said his requests for repairs apparently weren’t routed to the right people. A CenturyLink manager told John that the local crew never got the repair ticket from the phone-based customer service team, he said.

The technician who fixed the service offered some insight into the local problems, John told us. “He said that in the mountains of western Boulder County, there are a total of five techs who know how to work with copper wire,” John told us. “All the other employees only work with fiber. CenturyLink is losing the people familiar with copper and not replacing them, even though copper is what the west half of the county depends on.”

Lumen says it has 1.08 million fiber broadband subscribers and 1.47 million “other broadband subscribers,” defined as “customers that primarily subscribe to lower speed copper-based broadband services marketed under the CenturyLink brand.”

John doesn’t know whether his copper line will ever be upgraded to fiber. His house is 1.25 miles from the nearest fiber box. “I wonder if they’ll eventually replace lines like the one to our house or if they’ll drop us as customers when the copper line eventually degrades to the point it’s not usable,” he said.

Jon is a Senior IT Reporter for Ars Technica. He covers the telecom industry, Federal Communications Commission rulemakings, broadband consumer affairs, court cases, and government regulation of the tech industry.

the-severance-writer-and-cast-on-corporate-cults,-sci-fi,-and-more

The Severance writer and cast on corporate cults, sci-fi, and more

The following story contains light spoilers for season one of Severance but none for season two.

The first season of Severance walked the line between science-fiction thriller and Office Space-like satire, using a clever conceit (characters can’t remember what happens at work while at home, and vice versa) to open up new storytelling possibilities.

It hinted at additional depths, but it’s really season 2’s expanded worldbuilding that begins to uncover additional themes and ideas.

After watching the first six episodes of season two and speaking with the series’ showrunner and lead writer, Dan Erickson, as well as a couple of members of the cast (Adam Scott and Patricia Arquette), I see a show that’s about more than critiquing corporate life. It’s about all sorts of social mechanisms of control. It’s also a show with a tremendous sense of style and deep influences in science fiction.

Corporation or cult?

When I started watching season 2, I had just finished watching two documentaries about cults—The Vow, about a multi-level marketing and training company that turned out to be a sex cult, and Love Has Won: The Cult of Mother God, about a small, Internet-based religious movement that believed its founder was the latest human form of God.

There were hints of cult influences in the Lumon corporate structure in season 1, but without spoiling anything, season 2 goes much deeper into them. As someone who has worked at a couple of very large media corporations, I enjoyed Severance’s send-up of corporate culture. And as someone who has worked in tech startups—both good and dysfunctional ones—and who grew up in a radical religious environment, I now enjoy its send-up of cult social dynamics and power plays.

Employees watch a corporate propaganda video

Lumon controls what information is presented to its employees to keep them in line. Credit: Apple

When I spoke with showrunner Dan Erickson and actor Patricia Arquette, I wasn’t surprised to learn that it wasn’t just me—the influence of stories about cults on season 2 was intentional.

Erickson explained:

I watched all the cult documentaries that I could find, as did the other writers, as did Ben, as did the actors. What we found as we were developing it is that there’s this weird crossover. There’s this weird gray zone between a cult and a company, or any system of power, especially one where there is sort of a charismatic personality at the top of it like Kier Eagan. You see that in companies that have sort of a reverence for their founder.

Arquette also did some research on cults. “Very early on when I got the pilot, I was pretty fascinated at that time with a lot of cult documentaries—Wild Wild Country, and I don’t know if you could call it a cult, but watching things about Scientology, but also different military schools—all kinds of things like that with that kind of structure, even certain religions,” she recalled.
