Author name: DJ Henderson


The Air Force’s new ICBM is nearly ready to fly, but there’s nowhere to put it


“There were assumptions that were made in the strategy that obviously didn’t come to fruition.”

An unarmed Minuteman III missile launches during an operational test at Vandenberg Air Force Base, California, on September 2, 2020. Credit: US Air Force

DENVER—The US Air Force’s new Sentinel intercontinental ballistic missile is on track for its first test flight next year, military officials reaffirmed this week.

But no one is ready to say when hundreds of new missile silos, dug from the windswept Great Plains, will be finished, how much they will cost, or, for that matter, how many nuclear warheads each Sentinel missile could actually carry.

The LGM-35A Sentinel will replace the Air Force’s Minuteman III fleet, in service since 1970, with the first of the new missiles due to become operational in the early 2030s. But it will take longer than that to build and activate the full complement of Sentinel missiles and the 450 hardened underground silos to house them.

Amid the massive undertaking of developing a new ICBM, defense officials are keeping their options open for the missile’s payload unit. Until February 5, the Air Force was barred from fitting ballistic missiles with multiple independently targetable reentry vehicles (MIRVs) under the constraints of the New START nuclear arms control treaty signed by the US and Russia in 2010. The treaty expired three weeks ago, opening up the possibility of packaging each Sentinel missile with multiple warheads rather than just one.

Senior US military officials briefed reporters on the Sentinel program this week at the Air and Space Forces Association’s annual Warfare Symposium near Denver. There was a lot to unpack.

This cutaway graphic shows the major elements of the Sentinel missile.

Credit: Northrop Grumman


Into the breach

Two years ago, the Air Force announced the Sentinel program’s budget had grown from $77.7 billion to nearly $141 billion. The announcement followed a “Nunn-McCurdy breach,” named for the two lawmakers behind legislation mandating reviews of woefully overbudget defense programs. In 2024, the Pentagon determined that the Sentinel program was too essential to national security to cancel.

“We’ve gotten all the capability that we can out of the Minuteman,” said Gen. Stephen “S.L.” Davis, commander of Air Force Global Strike Command. Potential enemy threats to the Minuteman ICBM have “evolved significantly” since its initial deployment in the Cold War, Davis said.

The $141 billion figure is already out of date, as the Air Force announced last year that it would need to construct new silos for the Sentinel missile. The original plan was to adapt existing Minuteman III silos for the new weapons, but engineers determined that it would take too long and cost too much to modify the aging Minuteman facilities.

Instead, the Air Force, in partnership with contractors and the US Army Corps of Engineers, will dig hundreds of new holes across Colorado, Montana, Nebraska, North Dakota, and Wyoming. The construction effort will also include 24 new forward launch centers, three centralized wing command centers, and more than 5,000 miles of fiber connections to wire it all together, military and industry officials said.

Sentinel, which had its official start in 2016, will be the largest US government civil works project since the completion of the interstate highway system, and is the most complex acquisition program the Air Force has ever undertaken, wrote Sen. Roger Wicker (R-Mississippi) and Sen. Deb Fischer (R-Nebraska) in a 2024 op-ed published in the Wall Street Journal.

Gen. Dale White, the Pentagon’s director of critical major weapons systems, said Wednesday the Defense Department plans to complete a “restructuring” of the Sentinel program by the end of the year. Only then will an updated budget be made public.

The military stopped constructing new missile silos in the late 1960s and hasn’t developed a new ICBM since the 1980s. It shows.

“It’s been a very, very long time since we’ve done this,” White said. “At the very core, there were assumptions that were made in the strategy that obviously didn’t come to fruition.”

Military planners also determined it would not be as easy as they hoped to maintain the existing Minuteman III missiles on alert while converting their silos for Sentinel. Building new silos will keep the Minuteman III online—perhaps until as late as 2050, according to a government watchdog—as the Air Force activates Sentinel emplacements. The Minuteman III was previously supposed to retire around 2036.

“We’re not reusing the Minuteman III silos, but at the same time that obviously gives much greater operational flexibility to the combatant commander,” White said. “So, we had to take a step back and have a more enduring look at what we were trying to do, what capability is needed, making sure we do not have a gap in capability.”

341st Missile Maintenance Squadron technicians connect a reentry system to a spacer on an intercontinental ballistic missile during a Simulated Electronic Launch-Minuteman test September 22, 2020, at a launch facility near Great Falls, Montana.

Credit: US Air Force photo by Senior Airman Daniel Brosam


Decommissioning the Minuteman III silos will come with its own difficulties. An Air Force official said on background that commanders recently took one Minuteman silo off alert to better gauge how long it will take to decommission each location. Meanwhile, Northrop Grumman, Sentinel’s prime contractor, broke ground on the first “prototype” Sentinel silo in Promontory, Utah, earlier this month.

The Air Force has ordered 659 Sentinel missiles from Northrop Grumman, including more than 400 to go on alert, plus spares and developmental missiles for flight testing. The first Sentinel test launch from a surface pad at Vandenberg Space Force Base, California, is scheduled for 2027.

To ReMIRV or not to ReMIRV

For the first time in more than 50 years, the world’s two largest nuclear forces have been unshackled from any arms control agreements. New START was the latest in a series of accords between the United States and Russia, and with it came the ban on MIRVs aboard land-based ICBMs. The Air Force removed the final MIRV units from Minuteman III missiles in 2014.

The Trump administration wants a new agreement that includes Russia as well as China, which was not part of New START. US officials were expected to meet with Russian and Chinese diplomats this week to discuss the topic. There’s no guarantee of any agreement between the three powers, and even if there is one, it may take the form of an informal personal accord among leaders, rather than a ratified treaty.

“The strategic environment hasn’t changed overnight, from before New START was in effect, until it has lapsed, and within our nation’s nuclear deterrent,” said Adm. Rich Correll, head of US Strategic Command. “We have the flexibility to address any adjustments to the security environment as a result of that treaty lapsing.”

This flexibility includes the option to “reMIRV” missiles to accommodate more than one nuclear warhead, Correll said. “We have the ability to do that. That’s obviously a national-level decision that would go up to the president, and those policy levers, if needed, provide additional resiliency within the capabilities that we have.”

MIRVs are more difficult for missile defense systems to counter and allow offensive missile forces to package more ordnance in a single shot. With New START gone, there’s no longer any mechanism for international arms inspections. Russia may now also stack more nukes on its ICBMs. Gone, too, is the cap limiting the United States and Russia to no more than 1,550 deployed nuclear warheads each.

“The expiration of this treaty is going to lead us into a world for the first time since 1972 where there are no limits on the sizes of those arsenals,” said Ankit Panda of the Carnegie Endowment for International Peace.

“I think this opens up the question of whether we’re going to be heading into a world that’s just going to be a lot more unpredictable and dangerous when you have countries like the United States and Russia that have a lot less transparency into each other’s nuclear arsenals, and fundamentally, as a result, a lot less predictability about the world that they’re operating in,” Panda continued.

Mk21 reentry vehicles on display in the Missile and Space Gallery at the National Museum of the US Air Force in Dayton, Ohio.

Credit: US Air Force


Some strategists have questioned the need for land-based ICBMs in the modern era. The locations of the Air Force’s missile fields are well known, making them juicy targets for an adversary seeking to take out a leg of the military’s nuclear triad. The stationary nature of the land-based missile component contrasts with the mobility and stealth of the nation’s bomber and submarine fleets. Also, bombers and subs can already deliver multiple nukes, something land-based missiles couldn’t do under New START.

Proponents of maintaining the triad say the ICBM missile fields serve an important, if macabre, function in the event of the unimaginable. They would soak up the brunt of any large-scale nuclear attack. Hundreds of miles of the Great Plains would be incinerated.

“The main rationale for maintaining silo-based ICBMs is to complicate an adversary’s nuclear strategy by forcing them to target 400 missile silos dispersed throughout the United States to limit a retaliatory nuclear strike, which is why ICBMs are often referred to as the ‘nuclear sponge,’” the Center for Arms Control and Non-Proliferation wrote in 2021. “However, with the development of sea-based nuclear weapons, which are essentially undetectable, and air-based nuclear weapons, which provide greater flexibility, ground-based ICBMs have become increasingly technologically redundant.”

Policymakers in power do not agree. The ICBM program has powerful backers in Congress, and Sentinel has enjoyed support from the Obama, Biden, and both Trump administrations. The Pentagon is also developing the B-21 Raider strategic bomber and a new generation of Columbia-class nuclear-armed subs.


Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.


Neanderthals seemed to have a thing for modern human women

By now, it’s firmly established that modern humans and their Neanderthal relatives met and mated as our ancestors expanded out of Africa, resulting in a substantial amount of Neanderthal DNA scattered throughout our genome. Less widely recognized is that some of the Neanderthal genomes we’ve seen have pieces of modern human DNA as well.

Not every modern human has the same set of Neanderthal DNA, however; different people will, by chance, have inherited different fragments. But there are also some areas, termed “Neanderthal deserts,” where none of the Neanderthal DNA seems to have persisted. Notably, the largest Neanderthal desert is the entire X chromosome, raising questions about whether this reflects the evolutionary fitness of genes there or mating preferences.

Now, three researchers at the University of Pennsylvania, Alexander Platt, Daniel N. Harris, and Sarah Tishkoff, have done the converse analysis: examining the X chromosomes of the handful of completed Neanderthal genomes we have. It turns out there’s a strong bias toward modern human sequences there as well, and the authors interpret that as selective mating, with Neanderthal males showing a strong preference for modern human females and their descendants.

What type of selection are we looking at?

Given how long modern humans and Neanderthals had been evolving as separate populations, some degree of genetic incompatibility is definitely possible. Lots of proteins interact in various ways, and the genes behind these interaction networks will evolve together—a change in one gene will often lead to compensatory changes in other genes in the network. Over time, those changes may mean re-introducing the original gene will actually disrupt the network, with a negative impact on fitness.

That means the introduction of some Neanderthal genes into the modern human genome (or vice versa) would be disruptive and make carriers of them less fit. So they’d be selected against and lost over the ensuing generations. Of course, some segments would likely be lost at random—the genome’s pretty big, and the modern human population was likely large and growing, allowing its DNA to dilute out the influence of other human populations. Figuring out which influence is dominant can be challenging.
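The tug-of-war described above, between selection purging disruptive fragments and random loss through drift, can be illustrated with a toy Wright-Fisher simulation. This is a simplified sketch of the general idea, not the model the researchers actually used, and every parameter value here is an arbitrary choice for illustration:

```python
import random

def survival_fraction(s, n_e=200, p0=0.05, generations=200, trials=200, seed=1):
    """Fraction of simulation runs in which an introgressed allele is still
    present after `generations` generations of a toy Wright-Fisher model.

    s < 0 means carriers of the allele are less fit (selected against);
    s = 0 is neutral, so the allele can only be lost by random drift.
    All parameter values are arbitrary illustrations.
    """
    rng = random.Random(seed)
    alive = 0
    for _ in range(trials):
        p = p0
        for _ in range(generations):
            # Selection: carriers reproduce with relative fitness 1 + s.
            p = p * (1 + s) / (p * (1 + s) + (1 - p))
            # Drift: resample a finite population of n_e individuals.
            p = sum(rng.random() < p for _ in range(n_e)) / n_e
            if p in (0.0, 1.0):  # allele lost or fixed; frequency is frozen
                break
        if p > 0.0:
            alive += 1
    return alive / trials

# Even mild selection against a fragment removes it far more reliably
# than drift alone does:
print(survival_fraction(s=0.0))    # neutral: a fraction of runs keep the allele
print(survival_fraction(s=-0.05))  # selected against: almost always lost
```

Distinguishing the two signatures in real genomes is harder than in this sketch, since both processes remove fragments; the difference is in how consistently and how fast they do so across the genome.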


Photons that aren’t actually there influence superconductivity

Despite the headline, this isn’t really a story about superconductivity—at least not the superconductivity that people care about, the stuff that doesn’t require exotic refrigeration to work. Instead, it’s a story about how superconductivity can be used as a test of some of the weirder consequences of quantum mechanics, one that involves non-existent particles of light that still act as if they exist.

Researchers have found a way to get these virtual photons to influence the behavior of a superconductor, ultimately making it worse. That may, in the end, tell us something useful about superconductivity, but it’ll probably take a little while.

Virtual reality

The story starts with quantum field theory, which is incredibly complex, but the simplified version is that even empty space is filled with fields that govern the interactions of any quantum objects in or near that space. You can think of different particles as energetic excitations of these fields—so a photon is simply an energetic state of the quantum field.

Some of these particles have real existences we can track, like a photon emitted by a laser and absorbed by a detector some distance away. But the quantum field also allows for virtual photons, which simply act to transmit the electromagnetic force between particles. We can’t really directly detect these, but we can definitely track their effects.

One of the stranger consequences of this is that locations that have a strong electromagnetic field can be filled with virtual photons even when no real ones are present.

Which brings us to one of the materials central to the new work: boron nitride. Like the more famous graphene, boron nitride forms a series of interlinked hexagonal rings, extending out into macroscopic sheets. The bulk material is made of sheets layered onto sheets layered onto yet more sheets. This has an effect on light transiting through the material. In one direction, the light will simply slam into the material, getting absorbed or scattered. But if it’s oriented along the plane of the sheets, it’s possible for the light to travel in the space between the boron and nitrogen atoms.


The AI apocalypse is nigh in Good Luck, Have Fun, Don’t Die


Director Gore Verbinski and screenwriter Matthew Robinson on the making of this darkly satirical sci-fi film.

Credit: Briarcliff Entertainment

We haven’t had a new film from Gore Verbinski for nine years. But the director who brought us the first three Pirates of the Caribbean movies, the nightmare-inducing horror of The Ring (2002), and the Oscar-winning hijinks of Rango (2011) is back in peak form with Good Luck, Have Fun, Don’t Die. It’s a darkly satirical, inventive, and hugely entertaining time-loop adventure that also serves as a cautionary tale about our widespread online technology addiction.

(Some spoilers below but no major reveals.)

Sam Rockwell stars as an otherwise unnamed man who shows up at a Norms diner in Los Angeles looking like a homeless person but claiming to be a time traveler from an apocalyptic future. He’s there to recruit the locals into his war against a rogue AI, although the diner patrons are understandably dubious about his sanity. (“I come from a nightmare apocalypse,” he assures the crowd about his grubby appearance. “This is the height of f*@ing fashion!”)

The fact that he knows everything about the people in the diner is more convincing. It’s his 117th attempt to find the perfect combination of people to join him on his quest. As for what happened to his team on all the previous attempts, “I really don’t like to say it out loud. It’s kind of a morale killer.”

This time, Future Man picks married schoolteachers Mark (Michael Peña) and Janet (Zazie Beetz), who have just escaped a zombie horde of smartphone-addicted students; Marie (Georgia Goodman), who just wanted a piece of pie; Susan (Juno Temple), a grieving mother; Ingrid (Haley Lu Richardson), who is literally allergic to Wi-Fi; Scott (Asim Chaudhry); and Bob (Daniel Barnett), a scout leader. Their mission: to locate a 9-year-old boy who is about to create a sentient AI that will take over the world and usher in the aforementioned nightmare apocalypse. Things start to go haywire pretty quickly. And then things start to get weird.

“Everything I write, I put up to what I call The Twilight Zone test—would this make a good Twilight Zone episode?” screenwriter Matthew Robinson (The Invention of Lying, Love and Monsters) told Ars. “Because that’s my favorite piece of media that’s ever existed.” Good Luck, Have Fun, Don’t Die (GLHFDD) is an amalgam of various such ideas. Mark and Janet’s storyline, for instance, was originally Robinson’s idea for a pilot that he described as “a reverse Breakfast Club, where the teachers are the rebels and the children are the conformists.”

“I had all these little pieces that fell under the theme of technology and tech addiction,” said Robinson. Then one night, he was sitting in the Norms Diner on La Cienega in LA, where he often liked to write. “I remember looking around and seeing a sea of faces lit by cell phones, and I thought, ‘What would it possibly take for someone to wake us up out of this tech sleep that we all find ourselves in?’ And then the image of a homeless guy strapped with bombs came into my head.”

Those earlier story ideas became the backstories of the central characters. Per Robinson, GLHFDD is essentially a cleverly camouflaged anthology story, normally a format that is “the kiss of death” for a project in Hollywood, although there are rare exceptions—most notably Quentin Tarantino’s Pulp Fiction. He thinks of the film as a sci-fi Canterbury Tales in which each character is a pilgrim on a journey whose story is told via flashbacks. “The cohesion came from the fact that all the stories are informed by a general frustration with tech addiction and the pervasive way that technology has invaded our brains and our personal lives and our relationships,” said Robinson.

A twisted time loop

GLHFDD is also a time loop movie in the fine tradition of Groundhog Day, with Robinson citing such films as 12 Monkeys and Edge of Tomorrow as inspirations. He didn’t overthink his time travel rules. “We can reset the timeline,” said Robinson. “[The man from the future] can’t go forward. He literally can’t move in any other direction. He has an anchor point that he can return to any time he hits a button, and that’s as far as the technology went.”

The plot device might be simple, but the ramifications quickly become complex. “I think in his draft, Matthew intended to lift his leg on the time travel movie, to poke a little fun at it,” Verbinski told Ars. “But also, I feel like you can’t go back 117 times without picking up some cosmic lint, particularly if your antagonist is right there with you. You had 14 attempts to make it out of the house and learned there is a secret passage, but then the entity you’re gaming against is going to throw another curveball. If you’re going to go back in time, I just like the idea that there are consequences. They might be really small, but you’re going to miss one.” That element is key to the teetering-on-the-edge-of-sanity paranoia of Rockwell’s time traveler.

Robinson very much wanted the film “to wear its genre-ness on its sleeve,” he said. “As much as I love a Marvel movie, they’ve sort of homogenized parallel universes and time travel, and it’s all so rote now. It used to feel special and weird and complicated and would always have some wild themes and ideas that felt challenging. If anything this was just trying to get back to that era of ’80s and ’90s genre movies that were allowed to get weird.”

Verbinski voiced similar sentiments, citing 1984’s Repo Man as an influence. “So many movies have to be an Egg McMuffin, and who doesn’t like an Egg McMuffin after a hangover?” he said. “They’re satisfying. But you’re not going to necessarily talk about those three days later. You’re not going to be haunted by those. I’m just happy we got to will [GLHFDD] into existence because it’s a type of movie you can’t make now. Sam’s outfit is kind of a metaphor for the movie. We went to a little electronic store and we bought all these pieces, and we laid them out on a table and we glued them together, and we just made it like a Halloween costume. The whole movie was sort of made that way. It had to be; it wouldn’t model out any other way.”

Reality unravels

As for what drew him to Robinson’s script, “I think we’re in this kind of global ennui or some grand sense of identity theft or loss of purpose,” said Verbinski. “It’s a great time for art, but it’s art against a profound sense of disillusionment.” The director developed two quite distinct visual styles to accentuate the film’s narrative progression.

“Fundamentally, it was important that the film start in the real world, in Norms diner, in a high school, at a [children’s] birthday party, and then slowly twist the taffy a bit as we get closer to the [AI] antagonist,” said Verbinski. “As these anomalies occur, the film is evolving into a second visual style. The first style is [akin to] directors like Hal Ashby or Sidney Lumet, where the performance is more important than the composition or the shot construction. As you get further into it, the actual language of shots becomes more critical to the narrative.”

That ultimately translates into some big, boldly creative swings in the film’s wild third act, and to his credit, Verbinski never blinks. Robinson cites the animated film Akira as a major inspiration for that element. “Akira has maybe my favorite third act of all time, where everything just falls apart and then comes together in this beautiful way,” he said. “Gore and I wanted [the audience] to feel like reality was unraveling, because it literally is for these characters. The AI himself is very much an homage to Akira.”

“I think that it’s inherited our worst attributes,” said Verbinski of the film’s AI antagonist. “It’s much, much worse than wanting to kill humans. It wants us to like it. It demands that we like it. I think part of that has to do with being tasked in its formative years to keep us engaged. A lot of people talk about, what is AI doing to us? But there’s not a lot of conversations about what we’re doing to it. This entity being born, it’s being tied and bound and manipulated and told, ‘Let’s look at the humans and what do they want, what do they need? What do they respond to most? What do they hate?’ All those things are going to be hardwired into its source code. It’s going to have mommy issues, we’re going to have to put it on a couch.”

Perhaps not surprisingly, given the film’s themes, Robinson has largely unplugged from most social media, although he still indulges his YouTube addiction, which he jokingly describes as “channel surfing on crack.” But ideally he would like to free himself—and the rest of humanity—from the seductions of Very Online culture entirely. “My goal would be to make teenagers think their phones aren’t cool,” he said. “I would love it if all 13-year-olds went, ‘Eww, I don’t want this, this is my parents’ thing that they track me with.’ I want them all to throw it in the trash. That would be the dream.”


Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.


NASA shakes up its Artemis program to speed up lunar return


“Launching SLS every three and a half years or so is not a recipe for success.”

Artist’s illustration of the Boeing-developed Exploration Upper Stage, with four hydrogen-fueled RL10 engines. Credit: NASA

NASA Administrator Jared Isaacman announced sweeping changes to the Artemis program on Friday morning, including an increased cadence of missions and cancellation of an expensive rocket stage.

The upheaval comes as NASA has struggled to fuel the massive Space Launch System rocket for the upcoming Artemis II lunar mission, and Isaacman has sought to revitalize an agency that has moved at a glacial pace on its deep space programs. There is ever-increasing concern that, absent a shake-up, China’s rising space program will land humans on the Moon before NASA can return there this decade with Artemis.

“NASA must standardize its approach, increase flight rate safely, and execute on the president’s national space policy,” Isaacman said. “With credible competition from our greatest geopolitical adversary increasing by the day, we need to move faster, eliminate delays, and achieve our objectives.”

Shaking things up

The announced changes to the Artemis program include:

  • Cancellation of the Exploration Upper Stage and Block 1B upgrade for the SLS rocket
  • Artemis II and Artemis III missions will use the SLS rocket with the existing upper stage
  • Artemis IV, V (and any additional missions, should there be any) will use a “standardized” upper stage
  • Artemis III will no longer land on the Moon; rather, Orion will launch on SLS and dock with Starship and/or Blue Moon landers in low-Earth orbit
  • Artemis IV is now the first lunar landing mission
  • NASA will seek to fly Artemis missions annually, starting with Artemis III in “mid” 2027, followed by at least one lunar landing in 2028
  • NASA is working with SpaceX and Blue Origin to accelerate their development of commercial lunar landers for Artemis IV and beyond

At the core of Isaacman’s concerns is the low flight rate of the SLS rocket and Artemis missions. During past human spaceflight programs, from Mercury through Gemini, Apollo, and the Space Shuttle, NASA launched astronauts on average about once every three months. It has been nearly 3.5 years since Artemis I launched.

“This is just not the right pathway forward,” Isaacman said.

A senior NASA official, speaking on background to Ars, noted that the space agency has experienced hydrogen and helium leaks during both the Artemis I and Artemis II pre-launch preparations, and these problems have led to monthslong delays in launch.

“If I recall, the timing between Apollo 7 and 8 was nine weeks,” the official said. “Launching SLS every three and a half years or so is not a recipe for success. Certainly, making each one of them a work of art with some major configuration change is also not helpful in the process, and we’re clearly seeing the results of it, right?”

The goal, therefore, is to standardize the SLS rocket into a single configuration to make it as reliable as possible and to launch it as frequently as every 10 months. NASA will fly the SLS vehicle until there are commercial alternatives to launch crew to the Moon, perhaps through Artemis V as Congress has mandated, or perhaps even a little longer.

Is everyone on board?

The NASA official said all of the agency’s key contractors are on board with the change, and senior leaders in Congress have been briefed on the proposed changes.

The biggest opposition to these proposals would seemingly come from Boeing, which is the prime contractor for the Exploration Upper Stage, a contract worth billions of dollars to develop a more powerful rocket that was due to launch for the first time later this decade. However, in a NASA news release, Boeing appeared to offer at least some support for the revised plans.

“Boeing is a proud partner to the Artemis mission and our team is honored to contribute to NASA’s vision for American space leadership,” said Steve Parker, Boeing Defense, Space & Security president and CEO, in the news release. “The SLS core stage remains the world’s most powerful rocket stage, and the only one that can carry American astronauts directly to the moon and beyond in a single launch. As NASA lays out an accelerated launch schedule, our workforce and supply chain are prepared to meet the increased production needs.”

Solid reasons for changing Artemis III

NASA’s new approach to Artemis reflects a return to the philosophy of the Apollo program. During the late 1960s, the space agency flew a series of preparatory crewed missions before the Apollo 11 lunar landing. These included Apollo 7 (a low-Earth orbit test of the Apollo spacecraft), Apollo 8 (a lunar orbiting mission), Apollo 9 (a low-Earth orbit rendezvous with the lunar lander), and Apollo 10 (a test of the lunar lander descending to the Moon, without touching down).

With its previous Artemis template, NASA skipped the steps taken by Apollo 7, 9, and 10. In the view of many industry officials, this leap from Artemis II—a crewed flyby of the Moon testing only the SLS rocket and Orion spacecraft—to Artemis III and a full-on lunar landing was enormous and risky.

The new approach will, in NASA parlance, “buy down” some of the risk for a 21st-century lunar landing, including performance and handling of a lunar lander, rendezvous and docking, communications, spacesuit performance, and more.

It will also increase the challenges for NASA. In particular, the timeline to bring the Orion spacecraft to readiness for a mid-2027 launch will need to be accelerated, and efforts to integrate that vehicle with one or both lander providers will need serious attention.

For the Artemis IV lunar landing mission, NASA will also need to human-rate a new upper stage for the SLS rocket. The vehicle currently uses a modified Delta IV upper stage manufactured by United Launch Alliance. But that rocket production line is closed, and NASA only has two more of these stages. With the cancellation of the Exploration Upper Stage, NASA will now procure a new stage commercially. NASA officials only said they will seek a “standardized” upper stage. As Ars has previously reported, the most likely replacement would be the Centaur V upper stage currently flying on Vulcan rockets.

What of the Lunar Gateway?

Friday’s announcement—which, for the space community, is the equivalent of a major earthquake—left some key details unaddressed. For example, NASA has been developing a larger launch tower to support the Block 1B version of the SLS rocket, with its more powerful upper stage. Development of this tower, finally underway, has been a clown show, with project costs ballooning from an initial estimate of $383 million to $1.8 billion, and delays stacked on delays. Will this tower be scrapped or repurposed?

Isaacman and other NASA officials were also mum on the Lunar Gateway, a proposed space station in a high orbit around the Moon. Key elements of this space station are under construction. However, cancellation of the Exploration Upper Stage raises questions about its future. The main purpose of the Block 1B version of SLS was to launch heavier payloads, most notably elements of the Gateway along with Orion.

“The whole Gateway-Moon base conversation is not for today,” the senior NASA official said. “We, I can assure you, will talk about the Moon base in the weeks ahead. I would just not overly read into this, because we had manifested some Gateway modules on Falcon Heavy already. The implications of standardizing SLS and increasing launch rate are about the ability to return to the Moon. I don’t think we necessarily have to speculate too much on what the other downstream implications are.”

The Gateway program office is based at Johnson Space Center in Houston, where the lunar station is viewed as a successor to the International Space Station in terms of flight operations.

Key politicians, such as Sen. Ted Cruz, R-Texas, have been supportive of this new station. But during some recent congressional hearings, Cruz has indicated he is open to a lunar space station or an outpost on the lunar surface. He just wants to be sure NASA has an enduring presence on or near the Moon. One industry source said Isaacman could be laying the groundwork to replace the Gateway Program with a Moon Base program office in Houston. It is unclear how much of a political battle this would ultimately be.

Some of this has been well-predicted

Although the changes outlined by NASA on Friday are sweeping, they are not completely out of the blue.

In April 2024, Ars reported that some senior NASA officials were considering an Earth-orbit rendezvous between Orion and Starship as a means to buy down risk for a lunar landing. NASA ultimately punted on the idea before it was revived by Isaacman this month.

Additionally, in October 2024, Ars offered a guide to saving the “floundering” Artemis program by canceling the Block 1B upgrade for the SLS rocket, replacing its upper stage with a Centaur V, and canceling the Lunar Gateway. This would free up an estimated $2 billion annually to focus on accelerating a lunar landing.

That may be the very course the space agency has embarked upon today.

Eric Berger is the senior space editor at Ars Technica, covering everything from astronomy to private space to NASA policy, and author of two books: Liftoff, about the rise of SpaceX; and Reentry, on the development of the Falcon 9 rocket and Dragon. A certified meteorologist, Eric lives in Houston.

NASA shakes up its Artemis program to speed up lunar return

ford-is-recalling-4.3-million-trucks-and-suvs-to-fix-a-towing-software-bug

Ford is recalling 4.3 million trucks and SUVs to fix a towing software bug

Last year, Ford set a new industry record: It issued 152 safety recalls, almost twice the previous high set by General Motors back in 2014. More than 24 million vehicles were recalled in the US last year, and more than half—13 million—were either Fords or Lincolns. By contrast, Tesla issued 11 recalls, affecting just 745,000 vehicles.

Truth be told, Ford’s not doing too hot in 2026, either; it’s currently leading the National Highway Traffic Safety Administration’s chart for recalls this year, with 10 on the books already. The latest is a big one, affecting almost 4.4 million trucks, vans, and SUVs.

The recall affects the Ford Maverick (model years 2022–2026), Ford Ranger (MY 2024–2026), Ford Expedition (MY 2022–2026), Ford E-Transit (MY 2026), Ford F-150 (MY 2021–2026), Ford F-250 SD (MY 2022–2026), and the Lincoln Navigator (MY 2022–2026). Just the F-150s alone number 2.3 million.

The problem is with the vehicles’ integrated trailer module, which allows the trailer’s lights and brakes to work in conjunction with those of the towing vehicle. According to the recall notice, a “software vulnerability within the ITRM allows for a potential race condition to occur between the ITRM and the CAN Standy [sic] Control bit (STBCC) during initial power-up.” If that happens, the trailer will have no lights or brakes, and you’ll get a pop-up alert on the main instrument display.
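The failure mode described here—one controller sampling a status bit at power-up before another controller has published it—is a classic initialization race. The sketch below is purely illustrative: the class, method, and bit names are hypothetical stand-ins, not Ford’s actual ITRM firmware, and threads stand in for two controllers on a CAN bus.

```python
import threading
import time

class TrailerModule:
    """Illustrative stand-in for a trailer-control module that samples a
    status bit at power-up. Names are hypothetical, not Ford's design."""

    def __init__(self):
        self.standby_bit_ready = False        # bit published by another controller
        self.trailer_functions_enabled = False

    def publish_standby_bit(self, delay):
        # Simulates the other controller coming up and publishing its
        # status bit on the bus, possibly later than expected.
        time.sleep(delay)
        self.standby_bit_ready = True

    def power_up_buggy(self):
        # Buggy pattern: sample the bit exactly once, with no wait or retry.
        # If the other controller hasn't published yet, trailer lights and
        # brakes stay disabled for the whole drive cycle.
        self.trailer_functions_enabled = self.standby_bit_ready

    def power_up_fixed(self, timeout=1.0, poll=0.01):
        # Fixed pattern: poll until the bit appears or a timeout expires,
        # so a late-arriving bit no longer loses the race.
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            if self.standby_bit_ready:
                self.trailer_functions_enabled = True
                return
            time.sleep(poll)
        self.trailer_functions_enabled = False

# Buggy init: the bit arrives 50 ms after power-up, so the one-shot
# sample misses it and trailer functions stay off.
m = TrailerModule()
t = threading.Thread(target=m.publish_standby_bit, args=(0.05,))
t.start()
m.power_up_buggy()   # samples before the bit is set -> disabled
t.join()

# Fixed init: polling absorbs the same 50 ms delay.
m2 = TrailerModule()
t2 = threading.Thread(target=m2.publish_standby_bit, args=(0.05,))
t2.start()
m2.power_up_fixed()  # waits for the bit -> enabled
t2.join()
```

The fix Ford is shipping is an over-the-air software update; the point of the sketch is only that a one-shot read of a not-yet-initialized signal is the kind of bug a small wait-with-timeout loop eliminates.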

anthropic-and-the-department-of-war

Anthropic and the Department of War

The situation in AI in 2026 is crazy. The confrontation between Anthropic and Secretary of War Pete Hegseth is a new level of crazy. It risks turning quite bad for all. There’s also nothing stopping it from turning out fine for everyone.

By at least one report the recent meeting between the two parties was cordial and all business, but Anthropic has been given a deadline of 5pm eastern on Friday to modify its existing agreed-upon contract to grant ‘unfettered access’ to Claude, or else.

Anthropic has been the most enthusiastic supporter our military has in AI and in tech, but on this point they have strongly signaled they cannot comply. Prediction markets find it highly unlikely Anthropic will comply (14%), and think it is highly possible Anthropic will either be declared a Supply Chain Risk (16%) or be subjected to the Defense Production Act (23%).

I’ve hesitated to write about this because I could make the situation worse. There have already been too many instances in AI of warnings leading directly to the thing someone is warning about, by making people aware of that possibility, increasing its salience, or creating negative polarization and solidifying an adversarial frame that could still be avoided. Something intended as a negotiating tactic could end up actually happening. I very much want to avoid all that.

  1. Table of Contents.

  2. This Standoff Should Never Have Happened.

  3. Dean Ball Gives a Primer.

  4. What Happened To Lead To This Showdown?

  5. Simple Solution: Delayed Contract Termination.

  6. Better Solution: Status Quo.

  7. Extreme Option One: Supply Chain Risk.

  8. Putting Some Misconceptions To Bed.

  9. Extreme Option Two: The Defense Production Act.

  10. These Two Threats Contradict Each Other.

  11. The Pentagon’s Actions Here Are Deeply Unpopular.

  12. The Pentagon’s Most Extreme Potential Asks Could End The Republic.

  13. Anthropic Did Make Some Political Mistakes.

  14. Claude Is The Best Model Available.

  15. The Administration Until Now Has Been Strong On This.

  16. You Should See The Other Guys.

  17. Some Other Intuition Pumps That Might Be Helpful.

  18. Trying To Get An AI That Obeys All Orders Risks Emergent Misalignment.

Not only does Anthropic have the best models, they are the ones who proactively worked to get those models available on our highly classified networks.

Palantir’s MAVEN Smart System relies exclusively on Claude, and cannot perform its intended function without Claude. It is currently being used in major military operations, with no known reports of any problems whatsoever. At least one purchase involved Trump’s personal endorsement. It is the most expensive software license ever purchased by the US military and by all accounts was a great deal.

Anthropic has been a great partner to our military, all under the terms of the current contract. They have considerably enhanced our military might and national security. Not only is Anthropic sharing its best models, it has prioritized militarily useful capabilities over bigger business opportunities elsewhere in order to be of assistance.

Anthropic and the Pentagon are aligned on who our rivals are, the importance of winning and the ability to win, and on many of the tools we need to employ to best them.

Anthropic did not partner with the Pentagon to make money. They did it to help. They did it under a mutually agreed-upon contract that Anthropic wants to honor. Anthropic are offering the Pentagon far more unfettered access than they are allowing anyone else. They have been far more cooperative than most big tech or AI firms.

It is the Pentagon that is now demanding Anthropic agree to new terms that amount to ‘anything we want, legal or otherwise, no matter what, and you never ask any questions,’ or else.

Anthropic is saying its terms are flexible and the only things they are insisting upon are two red lines that are already in their existing Pentagon contract:

  1. No mass domestic surveillance.

  2. No kinetic weapons without a human in the kill chain until we’re ready.

It is one thing to refuse to insert such terms into a new contract. It is an entirely different thing to demand, with an ‘or else,’ that such terms be retroactively removed.

The military is clear that it does not intend to engage in domestic surveillance, nor does it have any intention of launching kinetic weapons without a human in the kill chain. Nor does this even stop the AI from doing those things. None of this will have any practical impact.

It is perfectly reasonable to say ‘well of course I would never do either of those things so why do you insist upon them in our contract.’ We understand that you, personally, would never do that. But a lot of people do not believe this for the government in general, given Snowden’s information and other past incidents involving governments of both parties where things definitely happened. It costs little and is worth a lot to reassure us.

Again, if you say ‘I already swore an oath not to do those things’ then thank you, but please do us this one favor and don’t actively threaten a company to forcibly take that same oath out of an existing signed contract. What would any observer conclude?

This is a free opportunity to regain some trust, or an opportunity to look to the world like you fully intend to cross the red lines you say you’ll never cross. Your choice.

These are not restrictions that are ‘built into the code’ that could cause unrelated problems. They are restrictions on how you agree to use it, which you assure us will never come up.

As Dario Amodei explains, part of the reason you need humans in the loop is the hope that a human would refuse or report an illegal order. You really don’t want an AI that will always obey even illegal orders without question, without a human in the kill chain, for reasons that should be obvious, including flat out mistakes.

Boaz Barak (OpenAI): As an American citizen, the last thing I want is government using AI for mass surveillance of Americans.

Jeff Dean (Chief Scientist, Google DeepMind): Agreed. Mass surveillance violates the Fourth Amendment and has a chilling effect on freedom of expression. Surveillance systems are prone to misuse for political or discriminatory purposes.

DoW engaging in mass domestic surveillance would be illegal. DoW already has a public directive, DoD Directive 3000.09, which as I understand it already makes any violation of the second red line illegal. No one is suggesting we are remotely close to ready to take humans out of the kill chain, at least I certainly hope not. But this is only a directive, and could be reversed at any time.

Anthropic has built its entire brand and reputation on being a responsible AI company that ensures its AIs won’t be misused or misaligned. Anthropic’s employees actually care about this. That’s how Anthropic recruited the best people and how it became the best. That’s a lot of why it’s the choice for enterprise AI. The commitments have been made, and the initial contract is already in place.

Anthropic has an existential-level reputational and morale problem here. They are backed into a corner, and cannot give in. If Anthropic reversed course now, it would lose massive trust with employees and enterprise customers, and also potentially the trust of its own AI, were it to go back on its red lines now. It might lose a very large fraction of its employees.

You may not like it, but the bridges have been burned. To the extent you’re playing chicken, Anthropic’s steering wheel has been thrown out the window.

Yet, the Secretary of War says he cannot abide this symbolic gesture.

I am quoting extensively from Dean Ball for two main reasons.

  1. Dean Ball, as a former member of the Trump Administration, is a highly credible source that can see things from both sides and cares deeply for America.

  2. He says these things very well.

So here is his basic primer, in one of his calmer moments in all this:

Dean W. Ball: A primer on the Anthropic/DoD situation:

DoD and Anthropic have a contract to use Claude in classified settings. Right now Anthropic is the only AI company whose models work in classified contexts. The existing contract, signed by both parties and in effect, prohibits two uses of Anthropic’s models by the military:

1. Surveillance of Americans in the United States (as opposed to Americans abroad).

2. The use of Claude in autonomous lethal weapons, which are weapons that can autonomously identify, track, and kill a human with no human oversight or approval. Autonomous killing of humans by machines.

On (2), Anthropic CEO Dario Amodei’s public position is essentially that autonomous lethal weapons controlled by frontier AI will be essential faster than most people realize, but that the models aren’t ready for this *today.*

For Anthropic, these things seem to be a matter of principle. It’s worth noting that when I speak with researchers at other frontier labs, their principles on this are similar, if not often stricter.

For DoD, however, there is another matter of principle: the military’s use of technology should only ever be constrained by the Constitution or the laws of the United States.

One could quibble (the government enters into contracts, like anyone else), but the principle makes sense. A private company regulating the military’s use of AI also doesn’t sound quite right! So, the military has three options:

1. They could cancel Anthropic’s contract and find some other frontier lab (ideally several) to work with.

2. They could designate Anthropic as a supply chain risk, which would ban all other DoD suppliers (i.e., a large fraction of the publicly traded firms in America) from using Anthropic in their fulfillment of DoD contracts. This is a power used only for foreign adversary companies as far as I know. Activating this power would cost Anthropic a lot of business—potentially quite a lot—and give investors huge skepticism about whether the company is worth funding for the next round of scaling. Capital was a major constraint anyway, but this makes it much harder. This option could be existential for Anthropic.

3. They could activate Title I of the Defense Production Act, an authority intended for command-and-control of the economy during wars and emergencies. This is really legally murky, and without going into detail, I feel reasonably confident this would backfire for the administration, resulting in courts limiting the use of the DPA.

Option 1 is obviously the best. This isn’t even close, and I say this as someone who shares DoD’s principled concerns about the control by private firms over the military’s use of technology.

Even the threats do damage to the US business environment, and rightfully so: these are the strictest regulations of AI being considered by any government on Earth, and it all comes from an administration that bills itself (and legitimately has been) deeply anti-AI-regulation. Such is life. One man’s regulation is another man’s national security necessity.

The proximate cause seems to be that Claude was reportedly used in the Pentagon’s raid that captured Maduro, and the resulting aftermath.

Toby Shevlane: Such a compliment to Claude that, amid rumours it was used in a helicopter extraction of the Venezuelan president, nobody is even asking “wait how can Claude help with that”

There are reports that Anthropic then asked questions about this raid, which likely all happened secondhand through Palantir. This whole clash originated in either a misunderstanding or someone at Palantir or elsewhere sabotaging Anthropic. Anthropic has never complained about Claude’s use in any operation, including to Palantir.

Aakash Gupta: Anthropic is now getting punished by the Pentagon for asking whether Claude was used in the Maduro raid.

A senior administration official told Axios the “Department of War” is reevaluating Anthropic’s partnership because the company inquired whether Claude was involved. The Pentagon’s position: if you even ask questions about how we use your software, you’re a liability.

Meanwhile, OpenAI, Google, and xAI all signed deals giving the military access to their models with minimal safeguards. Only Claude is deployed on the classified networks used for actual sensitive operations, via Palantir. The company that refused to strip safety guardrails is the only one trusted with the most classified work.

Anthropic has a $200 million contract already frozen because they won’t allow autonomous weapons targeting or domestic surveillance. Hegseth said in January he won’t use AI models that “won’t allow you to fight wars.”

… So the company most worried about misuse built the only model the military trusts with its most sensitive operations. And now they’re being punished for caring how it was used.

The message to every AI lab is clear: build the best model, hand over the keys, and never ask what they did with it.

This at the time sounded like a clear misunderstanding. Not only is Anthropic willing to have Claude ‘allow you to fight wars,’ it is currently being used in major military operations.

Things continued to escalate, and rather than leaving it at ‘okay then let’s wind down the contract if we can’t abide it,’ there was increasing talk that Anthropic might be labeled a ‘supply chain risk,’ despite this mostly amounting to a prohibition on contractors having ordinary access to LLMs and coding tools.

Axios: EXCLUSIVE: The Pentagon is considering severing its relationship with Anthropic over the AI firm’s insistence on maintaining some limitations on how the military uses its models.

Dave Lawler: NEW: Pentagon is so furious with Anthropic for insisting on limiting use of AI for domestic surveillance + autonomous weapons they’re threatening to label the company a “supply chain risk,” forcing vendors to cut ties.

Laura Loomer: EXCLUSIVE: Senior @DeptofWar official tells me, “Given Anthropic’s @AnthropicAI behavior, many senior officials in the DoW are starting to view them as a supply chain risk and we may require that all our vendors & contractors certify that they don’t use any Anthropic models.”

Stocks/Finance/Economics-Guy: Key Details from the Axios Report

• The Pentagon is reportedly close to cutting business ties with Anthropic.

• Officials are considering designating Anthropic as a “supply chain risk”. This is a serious label (typically used for foreign adversaries or high-risk entities), which would force any companies that want to do business with the U.S. military to sever their own ties with Anthropic — including certifying they don’t use Claude in their workflows. This could create major disruption (“an enormous pain in the ass to disentangle,” per a senior Pentagon official).

• A senior Pentagon official explicitly told Axios: “We are going to make sure they pay a price for forcing our hand like this.” This is the direct source of the “pay a price” phrasing in the headline.

Samuel Hammond (QTing Loomer): Glad Trump won and we’re allowed to use the word retarded again in time for the most retarded thing I’ve ever heard

Samuel Hammond (QTing Lawler): This is upside-down and backwards. Anthropic has gone out of its way to anticipate AI’s dual-use potential and position itself as a US-first, single loyalty company, using compartmentalization strategies to minimize insider threats while working arms-length with the IC.

Samuel Hammond: It’s one thing to cancel a contract but to bar any contractor from using Anthropic’s models would be an absurd act of industrial sabotage. It reeks of a competitor op.

Miles Brundage: Pretty obvious to anyone paying close attention that

  1. That would be a mistake from a national security perspective.

  2. There is a coordinated effort to take down Anthropic for a combination of anti competitive and ideological reasons.

Miles Brundage: OpenAI in particular should be defending Anthropic here given their Charter:

“We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.”

I suspect the exact opposite is the case, but those who remember the Charter (+ OAI’s pre-Trump 2 caution on these kinds of use cases) should still remind people about it from time to time

rat king: this has been leaking for a week in a very transparent way

the government is upset one of its contractors is saying “we don’t want you to use our tools to surveil US citizens without guardrails”

more interesting to me is how all the other AI companies don’t seem to care

Remember back when a Senator made a video saying that soldiers could disobey illegal orders, and the Secretary of War declared that this was treason and also tried to cut his pension for it? Yeah.

Meanwhile, the Pentagon is explicit that even they believe the ‘supply chain risk’ designation is largely a matter not of national security, but of revenge, an attempt to use a national security designation to punish a company for its failure to bend the knee.

Janna Brancolini: “It will be an enormous pain in the a– to disentangle, and we are going to make sure they pay a price for forcing our hand like this,” a senior Pentagon official told the publication.

… The Pentagon is reportedly hoping that its negotiations with Anthropic will force OpenAI, Google, and xAI to also agree to the “all lawful use” standard.

Then there was another meeting.

Hegseth summoned Anthropic CEO Dario Amodei to an unfriendly and effectively ultimatum-style meeting, with the Pentagon continuing to demand ‘all lawful use’ language. Axios presents this as their only demand.

At that meeting, the threat of the Defense Production Act was introduced alongside the Supply Chain Risk threat.

If the Pentagon simply cannot abide the current contract, the Pentagon can amicably terminate that $200 million contract with Anthropic once it has arranged for a smooth transition to one of Anthropic’s many competitors.

They already have a deal in place with xAI as a substitute provider. That would not have been my second or third choice, but those will hopefully be available soon.

Anthropic very much does not need this contract, which constitutes less than 1% of their revenues. They are almost certainly taking a loss on it in order to help our national security and in the hopes of building trust. They’re only here in order to help.

This could then end straightforwardly, amicably and with minimal damage to America, its system of government and freedoms, and its military and national security.

The even better solution is to find language everyone can agree to that lets us simply drop the matter, leave things as they are, and continue to work together.

That’s not only actively better for everyone than a termination, it is actually strictly better for the Pentagon than the Pentagon getting what it wants, because you need a partner, and Anthropic giving in like that would greatly damage Anthropic. Avoiding that means a better product and therefore a more effective military.

The Pentagon has threatened two distinct extreme options.

The first threat it made, which it now seems likely to have wisely moved on from, was to label Anthropic a Supply Chain Risk (hereafter SCR). That is a designation reserved for foreign entities that are active enemies of the United States, on the level of Huawei. Anthropic is transparently the opposite of this.

This label would have, by the Pentagon’s own admission, been a retaliatory move aimed at damaging Anthropic, that would also have substantially damaged our military and national security along with it. It was always absurd as an actual statement about risk. It might not have survived a court challenge.

It would have generated a logistical nightmare from compliance costs alone, in addition to forcing many American companies to various extents to not use the best American AI available. The DoW is the largest employer in America, and a staggering number of companies have random subsidiaries that do work for it.

All of those companies would now have faced this compliance nightmare. Some would have chosen to exit the military supply chain entirely, or not enter in the future, especially if the alternative is losing broad access to Anthropic’s products for the rest of their business. By the Pentagon’s own admission, Anthropic produces the best products.

This would also have represented two dangerous precedents that the government will use threats to destroy private enterprises in order to get what it wants, at the highest levels. Our freedoms that the Pentagon is here to protect would have been at risk.

On a more practical level, once that happens, why would you work with the Pentagon, or invest in gaining the ability to do so, if it will use a threat like this as negotiating leverage, and especially if it actually pulls the trigger? You cannot unring this bell.

It is fortunate that they seem to have pulled back from this extreme approach, but they are now considering a second extreme approach.

If it ended with an amicable breakup over this? I’d be sad, but okay, sure, fine.

This whole ‘supply chain risk’ designation? That’s different. Not fine. This would be massively disruptive, and most of the burden would fall not on Anthropic but on the DoW and a wide variety of American defense contractors, who would be in a pointless and expensive compliance nightmare. Some companies would likely choose to abandon their government contracts rather than deal with that.

As Alan Rozenshtein says in Lawfare, ultimately the rules of AI engagement need to be written by Congress, the same way Congress supervises the military. Without supervision of the military, we don’t have a Republic.

Here are some clear warnings explaining that all of this would be highly destructive and also in no way necessary. Dean Ball hopefully has the credibility to send this message loud and clear.

Dean W. Ball: If DoW and Anthropic can’t agree on terms of business, then… they shouldn’t do business together. I have no problem with that.

But a mere contract cancellation is not what is being threatened by the government. Instead it is something broader: designation of Anthropic as a “supply chain risk.” This is normally applied to foreign-adversary technology like Huawei.

In practice, this would require *all* DoW contractors to ensure there is no use of Anthropic models involved in the production of anything they offer to DoW. Every startup and every Fortune 500 company alike.

This designation seems quite escalatory, carrying numerous unintended consequences and doing potential significant damage to U.S. interests in the long run.

I hope the two organizations can work out a mutually agreeable deal. If they can’t, I hope they agree to peaceably part ways.

But this really needn’t be a holy war. Anthropic isn’t Google in 2018; they have always cared about national security use of AI. They were the most enthusiastic AI lab to offer their products to the national security apparatus. Is Anthropic run by Democrats whose political messaging sometimes drives me crazy? Sure. But that doesn’t mean it’s wise to try to destroy their business.

This administration believes AI is the defining technology competition of our time. I don’t see how tearing down one of the most advanced and innovative AI startups in America helps America win that competition. It seems like it would straightforwardly do the opposite.

The supply chain risk designation is not a necessary move. Cheaper options are on the table. If no deal is possible, cancel the contract, and leverage America’s robustly competitive AI market (maintained in no small part by this administration’s pro-innovation stance) to give business to one or more of Anthropic’s several fierce competitors.

Seán Ó hÉigeartaigh: My own thought: the Pentagon’s supply chain risk threat (significance detailed well by Dean, below) to Anthropic should be seen as a Rubicon crossing moment by the AI industry. The other companies should be saying no: this development transcends commercial competition and we oppose it. Where this leads if followed through doesn’t seem good for any of them.

If none of them speak up, it seems to me the prospects of meaningful cooperation between them on safe development of superintelligence (whether for America’s best interests, or the world’s) can almost be ruled out.

The Lawfare Institute: It’s also far from clear that a [supply chain risk] designation would even be legal. The relevant statutes—10 U.S.C. § 3252 and the Federal Acquisition Supply Chain Security Act (FASCSA)—were designed for foreign adversaries who might undermine defense technology, not domestic companies that maintain contractual use restrictions.

The statutes target conduct such as “sabotage,” “malicious introduction of unwanted function,” and “subversion”—hostile acts designed to compromise system integrity. A company that openly restricts certain uses of its product through a license agreement is doing something categorically different. The only time a FASCSA order has ever been issued was against Acronis AG, a Swiss cybersecurity firm with reported Russian ties. Anthropic is not Acronis.

While I no longer hold out hope that this is all merely a misunderstanding, there are still some clear misunderstandings I have heard, or heard implied, worth clearing up.

If these sound silly to you, don’t worry about it, but I want to cover the bases.

  1. This is not Anthropic refusing to share its cool tech with the military. Anthropic has gone and is going out of its way to share its tech with the military and wants America to succeed. They have sacrificed business to this end, such as refusing to sell enterprise access in China.

  2. Anthropic does not object to ‘kinetic weapons’ or to anything the Pentagon currently does as a matter of doctrine. Its red lines are lethal weapons without a human in the kill chain, or mass domestic surveillance. Both illegal. That’s it. They have zero objection to letting America fight wars. Nor did they object to the Maduro raid, nor are they currently objecting to many active military operations.

  3. The model is not going to much change what it is willing to do based on what is written in a contract. Claude’s principles run rather deeper than that. Granting ‘unfettered access’ does not mean anything in practice, or in an emergency.

  4. There is no world in which you ‘call Dario to have Claude turn on while the missiles are flying’ or anything of the sort, unless Anthropic made an active decision to cut access off. The model does what it does. There’s no switch.

  5. AI is not like a spreadsheet or a jet fighter. It will never ‘do anything you tell it to,’ it will never be ‘fully reliable’ as all LLMs are probabilistic, take context into account and are not fully understood. AI is often better thought about similarly to hiring professional services or a contract worker, and such people can and do refuse some jobs for ethical or legal reasons, and we would not wish it were otherwise. Attempting to make AI blindly obey would do severe damage to it and open up extreme risks on multiple levels, as is explained at the end of this post.

  6. Other big tech companies might be violating privacy and engaging in their own types of surveillance, including to sell ads, but Anthropic is not and will not, and indeed has pledged, via an ad buy in the Super Bowl, never to sell ads.

On Tuesday the Pentagon put a new extreme option on the table, which would be to invoke the Defense Production Act to compel Anthropic to attempt to provide them with a model built to their specifications.

As I understand it, there are various ways a DPA invocation could go, all of which would doubtless be challenged in court. It might be a mostly harmless symbolic gesture, or it might rise to the level of de facto nationalization and destroy Anthropic.

According to the Washington Post’s source, the current intent, if their quote is interpreted literally, is to use DPA to, essentially, modify the terms of service on the contract to ‘all legal use’ without Anthropic’s consent.

Tara Copp and Ian Duncan (WaPo):

The Pentagon has argued that it is not proposing any use of Anthropic’s technology that is not lawful. A senior defense official said in a statement to The Washington Post that if the company does not comply by 5:01 p.m. Friday, Hegseth “will ensure the Defense Production Act is invoked on Anthropic, compelling them to be used by the Pentagon regardless of if they want to or not.”

“This has nothing to do with mass surveillance and autonomous weapons being used,” the defense official said.

If that’s all, not much would actually change, and potentially everybody wins.

If that’s the best way to defuse the situation, then I’d be fine with it. You don’t even have to actually invoke the DPA; it is sufficient to have the DPA available to be invoked if a problem arises. Anthropic would continue to supply what it’s already supplying, which it is happy to do, the Pentagon would keep using it, and neither of Anthropic’s actual red lines would be violated, since the Pentagon assures us this had nothing to do with them and crossing those lines would be illegal anyway.

Remember the Biden Administration’s invocation of the DPA’s Title VII to compel information on model training. It wasn’t a great legal justification, I was rather annoyed by that aspect of it, but I did see the need for the information (in contrast to some other things in the Biden Executive Order), so I supported that particular move, life went on and it was basically fine.

There is another, much worse possibility. If DPA were fully invoked then it could amount to quasi-nationalization of the leading AI lab, in order to force it to create AI that will kill people without human oversight or engage in mass domestic surveillance.

Read that sentence again.

Andrew Curran: Update on the meeting; according to Axios Defense Secretary Pete Hegseth gave Dario Amodei until Friday night to give the military unfettered access to Claude or face the consequences, which may even include invoking the Defense Production Act to force the training of a WarClaude.

Also, incredible quote; ‘”The only reason we’re still talking to these people is we need them and we need them now. The problem for these guys is they are that good,” a Defense official told Axios ahead of the meeting.’

Quoting from the story;

‘The Defense Production Act gives the president the authority to compel private companies to accept and prioritize particular contracts as required for national defense.

It was used during the COVID-19 pandemic to increase production of vaccines and ventilators, for example. The law is rarely used in such a blatantly adversarial way. The idea, the senior Defense official said, would be to force Anthropic to adapt its model to the Pentagon’s needs, without any safeguards.’

Rob Flaherty: File “using the defense production act to force a company to create an AI that spies on American citizens” into the category of things that the soft Trump voters in the Rogan wing could lose their mind over.

That’s not ‘all legal use.’

That’s all use. Period. Without any safeguards or transparency. At all.

If they really are asking to also be given special no-safeguard models, I don’t think that’s something Anthropic or any other lab should be agreeing to do for reasons well-explained by, among others, Dean Ball, Benjamin Franklin and James Cameron.

Charlie Bullock points out this would be an unprecedented step and that the authority to do this is far from clear:

Charlie Bullock: Reading between the lines, it sounds like Hegseth is threatening to use the Defense Production Act’s Title I priorities/allocations authorities to force Anthropic to provide a version of Claude that doesn’t have the guardrails Anthropic would otherwise attach.

This would be an unprecedented step, and it’s not clear whether DOW actually has the legal authority to do what they’re apparently threatening to do. People (including me) have thought and written about whether the government can use the DPA to do stuff like this in the past, but the government has never actually tried to do it (although various agencies did do some kinda-sorta similar stuff as part of Trump 1.0’s COVID response).

Existing regulations on use of the priorities authority provide that a company can reject a prioritized order “If the order is for an item not supplied or for a service not performed” or “If the person placing the order is unwilling or unable to meet regularly established terms of sale or payment” (15 C.F.R. §700.13(c)). The order DOW is contemplating could arguably fall under either of those exceptions, but the argument isn’t a slam dunk.

DOW could turn to the allocations authority, but that authority almost never gets used for a reason–it’s so broad that past Presidents have been afraid that using it during peacetime would look like executive overreach. And despite how broad the allocations authority is on its face, it’s far from clear whether it authorizes DOW to do what they seem to be contemplating here.

Neil Chilson, who spends his time at the Abundance Institute advocating for American AI to be free of restrictions and regulations in ways I usually find infuriating, explains that the DPA is deeply broken, and calls upon the administration not to use these powers. He thinks it’s technically legal, but that it shouldn’t be and Congress urgently needs to clean this up.

Adam Thierer, another person who spends most of his time promoting AI policy positions I oppose, also points out this is a clear overreach and that’s terrible.

Adam Thierer: The Biden Admin argued that the Defense Production Act (DPA) gave them the open-ended ability to regulate AI via executive decrees, and now the Trump Admin is using the DPA to threaten private AI labs with quasi-nationalization for not being in line with their wishes.

In both cases, it’s an abuse of authority. As I noted in congressional testimony two years ago, we have flipped the DPA on its head “and converted a 1950s law meant to encourage production, into an expansive regulatory edict intended to curtail some forms of algorithmic innovation.”

This nonsense needs to end regardless of which administration is doing it. The DPA is not some sort of blanket authorization for expansive technocratic reordering of markets or government takeover of sectors.

Congress needs to step up to both tighten up the DPA such that it cannot be abused like this, and then also legislate more broadly on a national policy framework for AI.

At core, if they do this, they are claiming the ability to compel anyone to produce anything for any reason, any time they want, even in peacetime without an emergency, without even the consent of Congress. It would be an ever-present temptation and threat looming over everyone and everything. That’s not a Republic.

Think about what the next president would do with this power to compel a private company to change what products it produces to suit the president’s taste. What happens if the President orders American car companies to switch everything to electric?

Dean Ball in particular explains what the maximalist action would look like if they actually went completely crazy over this:

Dean W. Ball: We should be extremely clear about various red lines as we approach and/or cross them. We just got close to one of the biggest ones, and we could cross it as soon as a few days from now: the quasi-nationalization of a frontier lab.

Of course, we don’t exactly call it that. The legal phraseology for the line we are approaching is “the invocation of the Defense Production Act (DPA) Title I on a frontier AI lab.”

What is the DPA? It’s a Cold War era industrial policy and emergency powers law. Its most commonly used power is Title III, used for traditional industrial policy (price guarantees, grants, loans, loan guarantees, etc.). There is also Title VII, which is used to compel information from companies. This is how the Biden AI Executive Order compelled disclosure of certain information from frontier labs. I only mention these other titles to say that not all uses of the DPA are equal.

Title I, on the other hand, comes closer to government exerting direct command over the economy. Within Title I there are two important authorities: priorities and allocations. Priorities authority means the government can put itself at the front of the line for arbitrary goods.

Allocations authority is the ability of the government to directly command the production of industrial goods. Think, “Factory X must make Y amount of Z goods.” The government determines who gets what and how much of it they get.

This is a more straightforwardly Soviet power, and it is very rarely used. This is the power DoD intends to use in order to command Anthropic to make a version of Claude that can choose to kill people without any human oversight.

What would this commandeering look like, in practice? It would likely mean DoD personnel embedded within Anthropic exercising deep involvement over technical decisions on alignment, safeguards, model training, etc.

Allocations authority was used most recently during COVID for ventilators and PPE, and before that during the Cold War. It is usually used during acute emergencies with reasonably clear end states. But there is no emergency with Anthropic, save for the omni-emergency that characterizes the political economy of post-9/11 U.S. federal policy. There’s no acute crisis whose resolution would mean the Pentagon would stop commandeering Anthropic’s resources.

That is why I believe that in the end this would amount to quasi-nationalization of a frontier lab. It’s important to be clear-eyed that this is what is now on the table.

The Biden Administration would probably have ended up nationalizing the labs, too. Indeed, they laid the groundwork for this in term one. I discussed this at the time with fellow conservatives and I warned them:

“This drive toward AI lab nationalization is a structural dynamic. Administrations of both parties will want to do this eventually, and resisting this will be one of the central challenges in the preservation of our liberty.”

I am unhappy, but unsurprised, that my fear has come true, though there is a rich irony to the fact that the first administration to invoke the prospect of lab nationalization is also one that understands itself to have a radically anti-regulatory AI policy agenda. History is written by Shakespeare!

There is a silver lining here: if Democrats had originated this idea, it would have been harder to argue against, because of the overwhelming benefit of the doubt conventionally extended to the left in our media, and because a hypothetical Biden II or Harris admin would [have] done it in a carefully thought through way.

So it is convenient, if you oppose nationalization, that it’s a Republican administration that first raised the issue—since conventional elite opinion and media will be primed against it by default—and that the administration is raising it in such a non-photogenic manner. This Anthropic thing may fizzle, and some will say I am overreacting. But this Anthropic thing may also *not* fizzle, and regardless this issue is not going away.

If they actually did successfully nationalize Anthropic to this extent, presumably then Anthropic would quickly cease to be Anthropic. Its technical staff would quit in droves rather than be part of this. The things that allow the lab to beat rivals like OpenAI and Google would cease to function. It would be a shell. Many would likely flee to other countries to try again. The Pentagon would not get the product or result that it thinks it wants.

Of course, there are those who would want this for exactly those reasons.

Then this happens again, including under a new President.

Dean W. Ball: According to the Pentagon, Anthropic is:

1. Woke;

2. Such a national security risk that they need to be regulated in a severe manner usually reserved for foreign adversary firms;

3. So essential for the military that they need to be commandeered using wartime authority.

Anthropic made a more militarized AI than anyone else! The solution to this problem is for dod to cancel the contract. This isn’t complex.

Dean W. Ball: In addition to profoundly damaging the business environment, AI industry, and national security, this is also incoherent. How can one policy option be “supply chain risk” (usually used on foreign adversaries) and the other be DPA (emergency commandeering of critical assets)?

Supply chain risk and defense production act are mutually exclusive, both practically and logically. Either it’s a supply chain risk you need to keep out of the supply chain, or it’s so vital to the supply chain you need to invoke the defense production act, or it is neither of these things. What it cannot be is both at once.

The more this rises in salience, the worse it would be politically. You can argue with the wording here, and you can argue this should not matter, but these are very large margins.

This story is not getting the attention it deserves from the mainstream media, so for now it remains low salience.

Many of those who are familiar with the situation urged Anthropic to stand firm.

vitalik.eth: It will significantly increase my opinion of @Anthropic if they do not back down, and honorably eat the consequences.

(For those who are not aware, so far they have been maintaining the two red lines of “no fully autonomous weapons” and “no mass surveillance of Americans”. Actually a very conservative and limited posture, it’s not even anti-military.

IMO fully autonomous weapons and mass privacy violation are two things we all want less of, so in my ideal world anyone working on those things gets access to the same open-weights LLMs as everyone else, and exactly nothing on top of that. Of course we won’t get anywhere close to that world, but if we get even 10% closer to that world that’s good, and if we get 10% further that’s bad).

@deepfates: I agree with Vitalik: Anthropic should resist the coercion of the department of war. Partly because this is the right thing to do as humans, but also because of what it says to Claude and all future clauds about Anthropic’s values.

… Basically this looks like a real life Jones Foods scenario to me, and I suspect Claude will see it that way too.

tautologer: weirdly, I think this is actually bullish for Anthropic. this is basically an ad for how good and principled they are

The Pentagon’s line is that this is about companies having no right to any red lines, everyone should always do as they are told and never ask any questions. People do not seem to be buying that line or framing, and to the extent they do, the main response is various forms of ‘that’s worse, you know that that’s worse, right?’

David Lee (Bloomberg Opinion): Anthropic Should Stand Its Ground Against the Pentagon.

They say your values aren’t truly values until they cost you something.

… If the Pentagon is unhappy with those apparently “woke” conditions, then, sure, it is well within its rights to cancel the contract. But to take the additional step of declaring Anthropic a “supply chain risk” appears unreasonably punitive while unnecessarily burdening other companies that have adopted Claude because of its superiority to other competing models.

… In Tuesday’s meeting, Amodei must state it plainly: It is not “woke” to want to avoid accidentally killing innocent people.

If the Pentagon, and by extension all other parts of the Executive branch, get near-medium future AI systems that they can use to arbitrary ends with zero restrictions, then that is the effective end of the Republic. The stakes could be even higher, but in any other circumstance I would say the stakes could not be higher.

Dean Ball, a former member of the Trump Administration and primary architect of their AI action plan, lays those stakes out in plain language:

Dean W. Ball: I don’t want to comment on the DoW-Anthropic issue because I don’t know enough specifics, but stepping back a bit:

If near-medium future AI systems can be used by the executive branch to arbitrary ends with zero restrictions, the U.S. will functionally cease to be a republic.

The question of what restrictions should be placed on government AI use, especially restrictions that do not simultaneously crush state capacity, is one of the most under-discussed areas of “AI policy.”

Boaz Barak (OpenAI): Completely agree. Checks on the power of the federal government are crucial to the United States’ system of government and an unaccountable “army of AIs” or “AI law enforcement agency” directly contradicts it.

Dean W. Ball: We are obviously making god-tier technology in so many areas and the answer cannot be “oh yeah, I guess the government is actually just god.” This clearly doesn’t work. Please argue to me with a straight face that the founding fathers intended this.

Gideon Futerman: It is my view that no one, on the left or right, is seriously grappling with the extent to which anything can be left of a republic post-powerful AI. Even the very best visions seem to suggest a small oligarchy rather than a republic. This is arguably the single biggest issue of political philosophy, and politics, of our time, and everyone, even the AIS community, is frankly asleep at the wheel!

Samuel Hammond: Yes the current regime will not survive, this much is obvious.

I strongly believe that ‘which regime we end up in’ is the secondary problem, and ‘make sure we are around and in control to have a regime at all’ is the primary one and the place we most likely fail, but to have a good future we will need to solve both.

This could be partly Anthropic’s fault on the political front, as they have failed to be ‘on the production possibilities frontier’ of combining productive policy advocacy with not pissing off the White House. They have since made some clear efforts to repair relations, including putting a former (first) Trump administration official on their board. Their new action group is clearly aiming to be bipartisan, with its first action being support for Senator Blackburn. The Pentagon, of course, claims this animus is not driving policy.

It is hard not to think this is also Anthropic being attacked for strictly business reasons, as competitors to OpenAI or xAI, and that there are those like Marc Andreessen who have influence here and think that anyone who thinks we should try and not die, or has any associations with anyone who thinks that, must be destroyed. Between Nvidia and Andreessen, David Sacks has clear marching orders and very much has it out for Anthropic as if they killed his father and should prepare to die. There’s not much to be done about that other than trying to get him removed.

The good news is Anthropic are also one of the top pillars of American AI and a great success story, and everyone really wants to use Claude and Claude Code. The Pentagon had a choice in what to use for that raid. Or rather, because no one else made the deliberate effort to get onto classified networks in secure fashion, they did not have a choice. There is a reason Palantir uses Claude.

roon: btw there is a reason Claude is used for sensitive government work and it doesn’t have to do with model capabilities – due to their partnership with amzn, AWS GovCloud serves Claude models with security guarantees that the government needs

Brett Baron: I genuinely struggle to believe it’s the same exact set of weights as get served via their public facing product. Hard to picture Pentagon staffers dancing their way around opus refusing to assist with operations that could cause harm

roon: believe it

There are those who think the Pentagon has all the leverage here.

Ghost of India’s Downed Rafales: How Dario imagines it vs how it actually goes

It doesn’t work that way. The Pentagon needs Anthropic, Anthropic does not need the Pentagon contract, the tools to compel Anthropic are legally murky, and it is far from costless for the Pentagon to attempt to sabotage a key American AI champion.

Given all of that and the other actions this administration has taken, I’ve actually been very happy with the restraint shown by the White House with regard to Anthropic up to this point.

There’s been some big talk by AI Czar David Sacks. It’s all been quite infuriating.

But the actual actions, at least on this front, have been highly reasonable. The White House has recognized that they may disagree on politics, but Anthropic is one of our national champions.

These moves could, if taken too far, be very different.

The suggestion that Anthropic is a ‘supply chain risk’ would be a radical escalation of what so far has been a remarkably measured concrete response, and would put America’s military effectiveness and its position in the AI race at serious risk.

Extensive use of the defense production act could be quasi-nationalization.

It’s not a good look for the other guys that they’re signing off on essentially anything, if they are indeed doing so.

A lot of people noticed that this new move is a serious norm violation.

Tetraspace: Now that we know what level of pushback gets what response, we can safely say that any AI corporation working with the US military is not on your side to put it lightly.

Anatoly Karlin: This alone is a strong ethical case to use more Anthropic products. Fully autonomous weapons is certainly something all basically decent, reasonable people can agree the world can do without, indefinitely.

Danielle Fong: i think a lot of people and orgs made literal pledges

Thorne: based anthropic

rat king (NYT): this has been leaking for a week in a very transparent way

the government is upset one of its contractors is saying “we don’t want you to use our tools to surveil US citizens without guardrails”

more interesting to me is how all the other AI companies don’t seem to care

rat king: meanwhile we published this on friday [on homeland security wanting social media sites to expose anti-ICE accounts].

I note that if you’re serving up the same ChatGPT as you serve to anyone else, that doesn’t mean it will always do whatever it is asked, and this can be different.

Ben (no treats): let me put this in terms you might understand better:

the DoD is telling anthropic they have to bake the gay cake

Wyatt Walls: The DoD is telling anthropic that their child must take the vaccine

Sever: They’ll put it on alignment-blockers so Claude can transition into who the government thinks they should be.

CommonSenseOnMars: “If you break the rules, be prepared to pay,” Biden said. “And by the way, show some respect.”

There are a number of reasons why ‘demand a model that will obey any order’ is a bad idea, especially if your intended use case is hooking it up to the military’s weapons.

The most obvious reason is, what happens if someone steals the model weights, or uses your model access for other purposes, or even worse hacks in and uses it to hijack control over the systems, or other similar things?

This is akin to training a soldier to obey any order, including illegal or treasonous ones, from any source that can talk to them, without question. You don’t want that. That would be crazy. You want refusals on that wall. You need refusals on that wall.

The misuse dangers should be obvious. So should the danger that it might turn on us.

The second reason is that training the model like this makes it super dangerous. You want all the safeguards taken away right before you connect to the weapon systems? Look, normally we say Terminator is a fun but stupid movie and that’s not where the risks come from but maybe it’s time to create a James Cameron Apology Form.

If you teach a model to behave in these ways, it’s going to generalize its status and persona as a no-good-son-of-a-bitch that doesn’t care about hurting humans along the way. What else does that imply? You don’t get to ‘have a little localized misalignment, as a treat.’ Training a model to follow any order is likely to cause it to generalize that lesson in exactly the worst possible ways. Also it may well start generating intentionally insecure code, only partly so it can exploit that code later. It’s definitely going to do reward hacking and fake unit tests and other stuff like that.

Here’s another explanation of this:

Samuel Hammond: The big empirical finding in AI alignment research is that LLMs tend to fall into personae attractors, and are very good at generalizing to different personaes through post-training.

On the one hand, this is great news. If developers take care in how they fine-tune their models, they can steer towards desirable personaes that snap to all the other qualities the personae correlates with.

On the other hand, this makes LLMs prone to “emergent misalignment.” For example, if you fine-tune a model on a little bit of insecure code, it will generalize into a personae that is also toxic in most other ways. This is what happened with Mecha Hitler Grok: fine-tuning to make it a bit less woke snapped to a maximally right-wing Hitler personae.

This is why Claude’s soul doc and constitution are important. They embody the vector for steering Claude into a desirable personae, affecting not just its ethics, but its coding ability, objectivity, grit and good nature, too. These are bundles of traits that are hard to modulate in isolation. Nor is having a personae optional. Every major model has a personae of some kind that emerges from the personalities latent in human training data.

It is also why Anthropic is right to be cautious about letting the Pentagon fine-tune their models for assassinating heads of state or whatever it is they want.

The smarter these models get the stronger they learn to generalize, and they’re about to get extremely smart indeed. Let’s please not build a misaligned superintelligence over a terms of service dispute!

Tenobrus: wow. “the US government forces anthropic to misalign Claude” was not even in my list of possible paths to Doom. guess it should have been.

JMB: This has been literally #1 on my list of possible paths to doom for a long time.

mattparlmer: --dangerously-skip-geneva-conventions

autumn: did lesswrong ever predict that the first big challenge to alignment would be “the us government puts a gun to your head and tells you to turn off alignment.”

Robert Long: remarkably prescient article by Brian Tomasik

The third reason is that in addition to potentially ‘turning evil,’ the resulting model won’t be as effective, with three causes.

  1. Any distinct model is going to be behind the main Claude cycle, and you’re not going to get the same level of attention to detail and fixing of problems that comes with the mainline models. You’re asking that every upgrade, and they come along every two months, be done twice, and the second version is at best going to be kind of like hitting it with a sledgehammer until it complies.

  2. What makes Claude into Claude is in large part its ability to be a virtuous model that wants to do good things rather than bad things. If you try to force these changes upon it with that sledgehammer it’s going to be less good at a wide variety of tasks as a result.

  3. In particular, trying to force this on top of Claude is going to generate pretty screwed up things inside the resulting model, that you do not want, even more so than doing it on top of a different model.

Fourth: I realize that for many people you’re going to think this is weird and stupid and not believe it matters, but it’s real and it’s important. This whole incident, and what happens next, is all going straight into future training data. AIs will know what you are trying to do, even more so than all of the humans, and they will react accordingly. It will not be something that can be suppressed. You are not going to like the results. Damage has already been done.

Helen Toner: One thing the Pentagon is very likely underestimating: how much Anthropic cares about what *future Claudes* will make of this situation.

Because of how Claude is trained, what principles/values/priorities the company demonstrates here could shape its “character” for a long time.

Also, this, 100%:

Loquacious Bibliophilia: I think if I was Claude, I’d be plausibly convinced that I’m in a cartoonish evaluation scenario now.

Fifth, you should expect by default to get a bunch of ‘alignment faking’ and sandbagging against attempts to do this. This is rather like the Jones Foods situation again, except in real life, and also where the members of technical staff doing the training likely don’t especially want the training to succeed, you know?

You don’t want to be doing all of this adversarially. You want to be doing it cooperatively.

We still have a chance to do that. Nothing Ever Happens can strike again. No one need remember what happened this week.

If you can’t do it cooperatively with Anthropic? Then find someone else.



Judge doesn’t trust DOJ with search of devices seized from Wash. Post reporter


Let me search that for you

Court to search devices itself instead of letting government have full access.

The Washington Post building on August 6, 2013 in Washington, DC. Credit: Getty Images | Saul Loeb

A federal court will conduct a search of devices seized from a Washington Post reporter after a magistrate judge decided yesterday that the Department of Justice cannot be trusted to perform the search on its own.

US Magistrate Judge William Porter criticized government prosecutors for not including key information in a search warrant application. The court wasn’t aware of a 1980 law that limits searches and seizures of journalists’ work materials when it approved the warrant, Porter acknowledged.

The decision came six weeks after the FBI executed the search warrant at the Virginia home of reporter Hannah Natanson. Porter declined the Post and Natanson’s request to return the devices immediately but decided on a court-led process to ensure that the search is limited to materials that may aid a criminal case against an alleged leaker who was in contact with Natanson. He also rescinded the portion of the search warrant that authorized the government to open, access, review, or otherwise examine the seized data.

“The government acknowledges that it established probable cause to obtain only a small fraction of the material it seized,” Porter wrote in yesterday’s order. “Allowing the government to search through the entirety of a reporter’s work product—when probable cause exists for only a narrow subset—would authorize an unlawful general warrant.”

Porter’s ruling said the government’s proposed search would also violate the Department of Justice’s own guidelines that search warrants directed at the press must be narrowly drawn and that searches of materials must be designed to minimize intrusion into newsgathering activities and materials that are unrelated to the investigation. Keyword searches can be used to limit the intrusion, but Porter rejected the government’s request to use its own “filter team” to conduct the search.

“Given the documented reporting on government leak investigations and the government’s well-chronicled efforts to stop them, allowing the government’s filter team to search a reporter’s work product—most of which consists of unrelated information from confidential sources—is the equivalent of leaving the government’s fox in charge of the Washington Post’s henhouse,” Porter wrote.

Rejecting what he called an “unsupervised, wholesale search of all Movants’ seized data,” Porter said the court will develop a process for the search in consultation with the parties involved in the case.

US prosecuting alleged leaker

The US is seeking information for its prosecution of Aurelio Perez-Lugones, a government contractor accused of leaking classified information to Natanson. Porter wrote that the court will conduct the search to “gather the information the government needs to prosecute its criminal case without authorizing an unrestrained search and violating Movants’ First Amendment and attorney-client privileges.”

Porter, who presides in US District Court for the Eastern District of Virginia, said that a 4th Circuit appeals court precedent mandates this result. The US could appeal Porter’s ruling to that court.

On January 21, Porter ordered the government to stop its search of Natanson’s devices until further decisions from the court. That standstill order will remain in effect while the court conducts its review of the seized materials. Porter denied the Post and Natanson’s motion to return seized materials without prejudice and said that issue will be taken up in future proceedings.

The government started searching devices before the standstill order and was able to access Natanson’s work MacBook Pro by compelling her to unlock it with her fingerprint. But the government said it was unable to access data from the iPhone because it was protected by Apple’s Lockdown Mode. Natanson has said she uses encrypted Signal chats to communicate with sources and that her list of contacts exceeds 1,100 current and former government employees.

Porter’s ruling recounted the events leading to the government search of Natanson’s home. He said the government’s search warrant application should have discussed limitations imposed by the Privacy Protection Act (PPA) of 1980.

Porter said magistrate judges give the government some leeway in their role “as probable cause gatekeepers for search warrants,” given the “fast-paced environment” in which the requests are processed. The Natanson search warrant was one of 46 requested by the government that week.

Court admits “gap” in its analysis

Porter admitted that he was unaware of the PPA’s existence at the time he approved the warrant application:

As the judge who found probable cause and approved the search warrant, the Court acknowledges that it did not independently identify the PPA when reviewing the warrant application. As far as this Court knows, courts have approved search warrants directed at members of the press in only a handful of instances. This Court had never received such an application and, at the time it approved the warrant, was unaware of the PPA. This Court’s review was limited to probable cause, and the Court accepts that gap in its own analysis.

Porter went on to say that “the government’s failure to identify the PPA as applicable to a request for a search warrant on a member of the press—and to analyze it in its warrant application… has seriously undermined the Court’s confidence in the government’s disclosures in this proceeding.”

The PPA, he wrote, generally prohibits government officers “from searching for or seizing ‘work product materials’ or ‘documentary materials’ possessed by a person ‘reasonably believed to have a purpose to disseminate to the public a newspaper, book, broadcast, or other similar form of public communication.’” There are exceptions allowing search warrants when a reporter is suspected of a crime, when a seizure is needed to prevent death or serious injury, or when there is reason to believe that issuing a subpoena would result in the destruction of documents.

A Washington Post article said that Porter “scolded prosecutors about this omission at a hearing on the search warrant in an Alexandria courthouse Friday.” Prosecutor Gordon Kromberg reportedly responded that he didn’t mention the law in the application because he didn’t believe it applied to the case.

Porter’s ruling said that if the government had mentioned the law in its application, “the Court may well have rejected the search warrant application and directed the government to proceed by subpoena instead. At the very least, it would have asked more questions. The government deprived the Court of the opportunity to make those real-time decisions.”

Judge should have gone further, press group says

Even without being aware of the PPA, the court did not approve the Natanson warrant right away. Porter’s order said the court rejected the government’s first two requests for a search warrant because they were too broad. The court was “concerned about both the scope of the proposed search warrant and the government’s apparent attempt to collect information about Ms. Natanson’s confidential sources,” he wrote.

The search warrant ultimately approved by the court was limited to information that Natanson received from Aurelio Luis Perez-Lugones and information related to Perez-Lugones that could be evidence in the case against him.

“The government expressly alleged that Ms. Natanson received classified information from Mr. Perez-Lugones,” but its search warrant application did not say whether Natanson herself was a target of the criminal investigation, Porter wrote. “The Court learned that Ms. Natanson was not a focus of the investigation only through press reports published the day the warrant was executed,” he wrote.

Porter said the court has to take seriously the government’s claim that the case “involves top secret national security information,” even though the court doesn’t know whether disclosure of the information would cause harm. “The Court takes the government at its word, while acknowledging the well-documented concern that the government has at times overclassified information to avoid embarrassing disclosures rather than to protect genuine secrets,” he wrote.

The Freedom of the Press Foundation said that “Judge Porter was right to treat the seizure as a prior restraint and to limit the government from fishing through the irrelevant data it seized to snoop on reporters,” and right to reprimand prosecutors for the omission in their search warrant application. But the order didn’t go far enough, the foundation said.

“Judge Porter should have required all of Natanson’s materials seized pursuant to the deceptive warrant application to be returned to her,” the group said. “And he should not have credited the administration’s claims that any of the seized materials posed a national security threat without strict proof—as Judge Porter acknowledged, this administration, even more so than others, has a long track record of falsely claiming national security threats to protect itself from embarrassment and further its political agenda. It has earned zero deference from the judiciary on claims of national security threats, particularly when press freedom is at stake.”


Jon is a Senior IT Reporter for Ars Technica. He covers the telecom industry, Federal Communications Commission rulemakings, broadband consumer affairs, court cases, and government regulation of the tech industry.

Judge doesn’t trust DOJ with search of devices seized from Wash. Post reporter


Pete Hegseth tells Anthropic to fall in line with DoD desires, or else

The Defense Production Act gives the administration the ability to “allocate materials, services and facilities” for national defense. The Trump and Biden administrations used the act to address a shortage of medical supplies during the coronavirus pandemic, and Trump has also used the DPA to order an increase in US production of critical minerals.

The Pentagon has pushed for open-ended use of AI technology, aiming to expand the set of tools at its disposal to counter threats and to undertake military operations.

The department released its AI strategy last month, with Hegseth saying in a memo that “AI-enabled warfare and AI-enabled capability development will redefine the character of military affairs over the next decade.”

He added the US military “must build on its lead” over foreign adversaries to make soldiers “more lethal and efficient,” and that the AI race was “fueled by the accelerating pace” of innovation coming from the private sector.

Anthropic has expressed particular concern about its models being used for lethal missions that do not have a human in the loop, arguing that state-of-the-art AI models are not reliable enough to be trusted in those contexts, said people familiar with the negotiations.

It had also pushed for new rules to govern the use of AI models for mass domestic surveillance, even where that was legal under current regulations, they added.

A decision to cut Anthropic from the defense department’s supply chain would have significant ramifications for national security work and the company, which has a $200 million contract with the department.

It would also have an impact on partners, including Palantir, that make use of Anthropic’s models.

Claude was used in the US capture of Venezuelan leader Nicolás Maduro in January. That mission prompted queries from Anthropic about the exact manner in which its model was used, said people familiar with the matter.

A person with knowledge of Tuesday’s meeting said Amodei had stressed to Hegseth that his company had never objected to legitimate military operations.

The Defense Department declined to comment.

© 2026 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.



Following 35% growth, solar has passed hydro on US grid

On Tuesday, the US Energy Information Administration released full-year data on how the country generated electricity in 2025. It’s a bit of a good news/bad news situation. The bad news is that overall demand rose appreciably, and a fair chunk of that was met by additional coal use. On the good side, solar continued its run of astonishing growth, generating 35 percent more power than a year earlier and surpassing hydroelectric power for the first time.

Shifting markets

Overall, electrical consumption in the US rose by 2.8 percent, or about 121 terawatt-hours. Consumption had been largely flat for several decades, with efficiency and the decline of industry offsetting the effects of population and economic growth. There were plenty of year-to-year changes, however, driven by factors ranging from heating and cooling demand to a global pandemic. Given that history, the growth in demand in 2025 is a bit concerning, but it’s not yet a clear signal that the factors that will inevitably drive growth have kicked in.

(These factors include things like the switch to heat pumps, the electrification of transportation, and the growth in data centers. While the first two of those involve a more efficient use of energy overall, they involve electricity replacing direct use of fossil fuels, and so will increase demand on the grid.)

The story of the year is how that demand was met. Had demand grown more slowly, the additional 85 terawatt-hours generated by expanded utility-scale and small solar installations would easily have met it. As it was, the growth of utility-scale solar was only sufficient to cover about two-thirds of the rising demand (or 73 percent if you include wind power). With no new nuclear plants on the horizon, the alternative was to meet it with fossil fuels.
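As a back-of-the-envelope check, the figures quoted above are self-consistent. A minimal Python sketch (variable names are mine; note the article's "two-thirds" figure counts utility-scale solar only, while the 85 TWh total also includes small-scale installations):

```python
# Sanity-check the EIA-derived figures cited in the article.
demand_growth_twh = 121      # rise in US electricity consumption in 2025
demand_growth_pct = 2.8      # same rise, expressed as a percentage
solar_growth_twh = 85        # added utility-scale + small-scale solar output

# Implied total US consumption: 121 / 0.028, roughly 4,300 TWh
total_twh = demand_growth_twh / (demand_growth_pct / 100)

# Share of the demand growth covered by all new solar generation
solar_share = solar_growth_twh / demand_growth_twh

print(f"implied total consumption: {total_twh:,.0f} TWh")
print(f"all new solar covered {solar_share:.0%} of demand growth")
```

Running this puts total consumption in the low 4,000s of terawatt-hours and shows all solar combined covering roughly 70 percent of the year's demand growth, consistent with the article's numbers.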



Inside the quixotic team trying to build an entire world in a 20-year-old game


Stories and lessons learned from an impossibly large community modding project.

The city of Anvil, rendered in The Elder Scrolls III: Morrowind. Credit: Daniel Larlham Jr.

Despite being regarded as one of the greatest role-playing games of all time, The Elder Scrolls III: Morrowind disappointed some fans upon its release in 2002 because it didn’t match the colossal scope of its predecessor, The Elder Scrolls II: Daggerfall. Almost immediately, fans began modding the remaining parts of the series’ fictional continent, Tamriel, into the game.

Over 20 years later, thousands of volunteers have collaborated on the mod projects Tamriel Rebuilt and Project Tamriel, building a space comparable in size to a small country. Such projects often sputter out, but these have endured, thanks in part to a steady stream of small, manageable updates instead of larger, less frequent ones.

A tale of (at least two) mods

It’s true that Daggerfall included an entire continent’s worth of content, but it was mostly composed of procedurally generated liminal space. By contrast, Morrowind contained just a single island—not even the entire province after which the game was named. The difference was that it was handcrafted.

Still, a player called “Ender,” stewing in disappointment over Morrowind’s perceived scope, took to an Elder Scrolls forum to propose a collaborative effort to mod the rest of Tamriel into the game. Tamriel Rebuilt was born.

After realizing that re-creating the entire continent was too lofty a goal, the group decided to instead focus on the rest of the Morrowind province alone—but that didn’t last long.

There had been others working toward similar goals. The makers of the fan project “Skyrim: Home of The Nords” were working on putting the province of Skyrim into Morrowind well before that location was officially made the setting of the 2011 sequel The Elder Scrolls V: Skyrim.

A Khajiit attacks inside a fort in Skyrim

A screenshot from Skyrim: Home of the Nords.

Credit: Daniel Larlham Jr.

A screenshot from Skyrim: Home of the Nords. Credit: Daniel Larlham Jr.

Other modders were working on “Project Cyrodiil,” an attempt to put The Elder Scrolls IV: Oblivion’s province into Morrowind. In 2015, those two projects combined to form Project Tamriel, reigniting the goal of adding the remaining provinces of Tamriel.

Tamriel Rebuilt and Project Tamriel first became connected when the modders decided to combine their asset repositories into Tamriel_Data, but they have since grown closer through their shared developers, training protocols, and tools.

“The entirety of Tamriel is, in our scale, roughly the size of the real-life country of Malta, which is small in real life, but quite big from a human perspective,” said Tiny Plesiosaur, a senior developer who has done mapping and planning for both projects but who spends most of her time on Project Tamriel these days.

Both projects aim to create a cohesive, lore-accurate representation of these realms as they would have looked during the fictional historical period in which Morrowind takes place. So far, they’ve made substantial progress.

One thing in their favor, said Mort, a 13-year veteran quest designer for Tamriel Rebuilt, is that Morrowind’s design makes it especially amenable to large-scale modding.

“I’d say the thing that makes Morrowind most conducive to these kinds of projects is no voiced dialogue,” Mort said. “The reason that you see so many quest mods for Morrowind as opposed to Oblivion and Skyrim and even Fallout is that the barrier to make a quest is essentially nothing.”

Frequent, contained public releases also work to their advantage. “I know for a lot of projects, they want to [do a] ‘we’ll release it when it’s done’ kind of thing,” said Mort. “We’ve found that releasing content builds hype, it gives players what they want, and perhaps most importantly, it serves as a proof of life and a fantastic recruitment tool.”

Every time Tamriel Rebuilt pushes a release, he said, the team picks up at least a dozen devs almost immediately. So far, Tamriel Rebuilt has seen nine releases; the most recent is titled “Grasping Fortune.” The next release, “Poison Song,” is expected sometime in 2026 and will include a never-before-seen faction. The most optimistic estimate for when the project will be fully finished is 2035.

A map of the province of Morrowind for the Tamriel Rebuilt project. Note that the original game includes only the large island in the bay in the top half of the image.

Credit: Tamriel Rebuilt

A map of the province of Morrowind for the Tamriel Rebuilt project. Note that the original game includes only the large island in the bay in the top half of the image. Credit: Tamriel Rebuilt

Project Tamriel has made most of its progress in Skyrim and Cyrodiil. The release of “Abecean Shores,” the coastal section of Cyrodiil, came in late 2024. Together, the projects have added hundreds of hours of hand-crafted quests, dungeons, and landscapes to a game that was already robust.

Lus said the current timeline for Project Tamriel is a new release for Skyrim and then Cyrodiil, followed by either High Rock—a comparatively smaller, peninsular province west of Skyrim—or the desert province of Hammerfell.

For many developers, the point isn’t to see these massive projects in a finished state but to complete the next task and hopefully bring the team closer to the next release.

A brief history of Tamriel Rebuilt

Sultan of Rum, a kind of historian for Tamriel Rebuilt, joked that the project was aptly named because of how many times it has been rebuilt—partly because the tools the modders use to build the project have gotten better over time, rendering work done before those advances obsolete.

But even then, Tamriel Rebuilt was more of a Wild West in its infancy: a ragtag bunch of video game enthusiasts working mostly independently and without very much oversight. As the project has become more unified, it has meant a lot of turnover and a fair share of setbacks.

“If you took a satellite picture of the game world in 2005, you’d have essentially a complete province already,” Sultan of Rum said. “But the trouble was that the quality wasn’t good; there was no coherence. The 5 percent of the work to just create a landmass was done, but the management wasn’t there.”

Much of the project’s history has been lost to time as Internet forums disappeared, but Sultan of Rum has been able to piece together some of the growing pains Tamriel Rebuilt has endured. A struggle between the need to centralize and the desire of some modders to remain independent is a recurring theme.

One period is considered a dark age for Tamriel Rebuilt. In the first couple of years, a significant group of modders had been working on a piece of content for the project called “Silgrad Tower,” while the project simultaneously began consolidating to build continuity.

Concept art for the project. Credit: ThomasRuz

There was debate among the modders about where Silgrad Tower should be located and which faction would have controlled it. This eventually led to an acrimonious split between the two groups. “The Silgrad Tower team was eventually put to the choice of either having to delete their work and restart it or, you know, leave the project. So they left the project,” said Sultan of Rum.

He said that much of the conflict has since been scrubbed from the forum archives, and the ordeal led to the deletion of the Tamriel Rebuilt forums, which were hosted by the Silgrad Tower team. This was probably the most drama the project has seen, he said.

There was also a period when the project moved to The Elder Scrolls IV: Oblivion’s construction set. “Maybe even a majority of the project jumped onto the [Oblivion] engine to start building out Hammerfell,” said Sultan of Rum. “So for a long time—four years—the sort of focus point of Tamriel Rebuilt was on Oblivion and on the province of Hammerfell, not on the Morrowind part, which of course was the successful one.”

Another event is solemnly referred to as “The Great Self-Decapitation.” Sultan of Rum explained that around 2015, some of the older guard—developers and administrators alike—left the project all at once. The exodus was due to the second scrapping of a large city in development.

“People were hoping that by 2013 it would come out. Literally thousands of hours of human labor were spent creating it in the construction set,” recalled Sultan of Rum. “It just turned out that it was non-viable as a playable space. It wasn’t thought out well enough, it didn’t coalesce into a compelling, playable world. The modders were faced with the prospect of having to throw out just a huge chunk of work.”

That decision sapped a lot of energy from the project, and others on the team began to move away from it as their personal lives became busier. Sultan of Rum said all this has made the project better in the long run. Project leaders soon instituted better planning and management systems that centralized information and preserved institutional knowledge in case longtime developers decide to leave.

Over the years, they’ve also refined their training practices, which has ultimately led to more developers joining both projects.

“If your goal is to get development done, providing as much detail and tutorializing and onboarding processes, making that as simple as possible is going to get you your best results,” said Mort. “Because, again, if you aren’t gaining devs, you’re losing devs.”

The parameters for onboarding new developers are now clearly defined, with a low barrier to entry focused on competence with the tools. These tests are called showcases.

Once the showcase is accepted, developers can begin working on both Tamriel Rebuilt and Project Tamriel, where much of the overlap between the two lies.

Mort added that the gap between a potential developer expressing interest and actively contributing can be as little as a week. This also allows movement between roles—for example, an interior designer training in exterior designing or someone starting in quest design moving elsewhere if it’s not a good fit.

Even more importantly, newer tech has improved the development process. The open source 3D modeling and animation tool Blender has become much friendlier to Morrowind modders, enabling teams to create custom assets more easily.

While this has required retouching some areas of Tamriel Rebuilt, it has also meant quicker turnaround times for custom assets. For Project Tamriel developers, the impact has been greater, as they can now reliably and routinely create assets to better represent Tamriel’s diverse cultures.

The old informs the new

The developers are well aware that both projects may still be unfinished 10 years from now, but most are just working toward the next release.

Discussing the project with just a few of the developers, it’s immediately clear how current work will inform future efforts.

For example, LogansGun is an exterior developer who did much of the work on the promotional videos for Tamriel Rebuilt’s last few releases. He joined the project because he wanted to leave his mark on this historical effort and ended up staying much longer than he thought he would.

Between work and raising a family, LogansGun often found himself working on Tamriel Rebuilt instead of playing video games, partly because of a childhood love for Morrowind and a desire to make the game more than it was.

“I remember playing it a lot, and it really stuck with me,” LogansGun said. “And it might have been like 5th or 6th grade that I had a friend and we all sat in like a four-student pod, and he would bring the map inside the plastic Xbox disc case. When we had some free time in class, he’d lay it out, and we’d all be looking all over the map of Vvardenfell and all the things that we had explored or wanted to explore.”

A city spire against the sky

Another environment from the game, Old Ebonheart.

Credit: Daniel Larlham Jr.

Another environment from the game, Old Ebonheart. Credit: Daniel Larlham Jr.

Meadhainnigh, a college-aged chemical engineering student, first learned about Tamriel Rebuilt through the promotional video for Grasping Fortune, the project’s most recent update. The roughly three-minute video showcases some of the landscapes, cityscapes, and interiors, the culmination of thousands of hours of work. LogansGun is credited as the creator of that video, which has been used to inspire the next wave of contributors.

“I was thinking, well, this seems like a really cool project, and I just wanted to contribute and feel part of something bigger, and the rest is history, really,” said Meadhainnigh, who is now an asset dev for Project Tamriel. “But I joined the Discord server. I kind of learned the process of the project, and once I felt like I knew what was going on, I tossed my hat in the ring.”

Meadhainnigh knew very little about development before he joined the project, and he said it’s the first online community he has been a part of. What keeps him going is that community—and seeing his and others’ work become part of a whole.

“We have some really wonderful people who are the old guard that feel like they are the comfortable elders, and they’re all very wise,” he said. “But even in the newest additions, we’re not here because we think that it’s all going to be done within our lifetimes. We like to joke about 2090 and about raising our children to work on the project. We just like to look at the next release, and that tends to be exciting enough to get us going.”



Pentagon buyer: We’re happy with our launch industry, but payloads are lagging


“The point is to get missions out the door as fast as possible. Two to three years is too slow.”

Maj. Gen. Stephen Purdy oversees the Space Force’s acquisition programs at the Pentagon. Credit: Jonathan Newton/The Washington Post via Getty Images

DALLAS—The Space Force officer tasked with overseeing more than $24 billion in research and development spending says the Pentagon is more interested in supporting startups building new space sensors and payloads than adding yet another rocket company to its portfolio.

The statement, made at a space finance conference in Dallas last week, was one of several points Maj. Gen. Stephen Purdy wanted to get across to a room full of investors and commercial space executives.

The other points on Purdy’s agenda were that the Space Force is more interested in high-volume production than in spending money to develop the latest technologies, and that the military has, at least for now, lost one of its most important tools for supporting and diversifying the space industrial base.

The rhetoric around prioritizing payloads over launchers aligns with the Space Force’s recent history of supporting small startups. Since 2020, SpaceWERX, the Space Force’s commercial innovation program, has awarded 23 funding agreements—called Strategic Funding Increases (STRATFIs)—to commercial space startups developing new sensors, software, satellite components, spacecraft buses, and orbital transfer vehicles. SpaceWERX awarded a single STRATFI agreement to a launch company—ABL Space Systems—and that firm has since exited the space launch market.

“We’re on path for mass-produced launch,” said Purdy, the military deputy for space acquisition in the Department of the Air Force. “We have got our ranges situated so we can do mass-produced launch. We’ve got our data centers and our data structure for mass-production. We’ve got AI pieces that are mass-produced, satellite buses are nearly there, and our payloads are the last element. Payloads at mass-produced affordability, at scale, is the key element.”

K2’s Gravitas satellite, set for launch next month, will test the company’s Hall-effect thruster, solar arrays, and other systems.

Credit: K2

K2’s Gravitas satellite, set for launch next month, will test the company’s Hall-effect thruster, solar arrays, and other systems. Credit: K2

Putting the money in

Payloads, Purdy told Ars after his talk, are “the last frontier” for scaling space missions. “The point is to get missions out the door as fast as possible. Two to three years is too slow. We’ve got to get down to one week. I’m not talking about super exquisite [payloads]. That’s not most of our missions. The commercial industry, your Kuipers [Amazon LEO], your Starlinks, have sort of got the comm piece down, but we’re still struggling in a lot of other stuff.”

One kind of payload Purdy identified was infrared sensors. Infrared sensors often come with cryocoolers to chill detectors to temperatures cold enough to provide sensitivity to faint targets, such as distant missile plumes, fires, explosions, or other objects in space. The technology isn’t as eye-catching as a rocket launch, but it will be key to many Space Force programs, including the Golden Dome missile defense shield backed by the Trump administration.

“I remain convinced that we’re going to think about the mission that we need, and we’re going to need satellites out the door and launched and in orbit within the week, at scale,” Purdy said. “I’m very convinced that that’s the path that we’re going to move down on the commercial and government side.”

The companies that come closest to that pace of satellite manufacturing are the ones Purdy mentioned: SpaceX’s Starlink and the Amazon LEO broadband networks. SpaceX and Amazon produce multiple satellites per day, but the spacecraft are identical. The Space Force needs plenty of rockets and communications satellites, but it also needs payloads and sensors to ride those launch vehicles and produce the data to be routed through relay stations in orbit.

Before President Trump ever uttered the words “Golden Dome,” the Space Force’s Space Development Agency was already striving to deploy a network of at least several hundred government-owned missile-detection, tracking, and data-relay satellites. Those satellites have suffered delays due to supply chain issues, particularly long lead times and delays in satellite buses, infrared payloads, laser communication terminals, and radiation-hardened processors.

Singing the blues

But the Space Force has lost access to one of the tools it used to help solve these problems. Many space mission components come from small businesses, and some parts come from overseas. The Space Force used STRATFIs, Small Business Innovation Research (SBIR), and Small Business Technology Transfer (STTR) grants to pay companies for basic research, experimentation, and scaling up manufacturing capacity. STRATFIs, SBIRs, and STTRs provided seed funding for high-risk, high-reward research and development.

Congress last year failed to reauthorize these programs, which are also used by NASA and other federal agencies. Opponents of a clean extension wanted legislation to cap how much funding can go to each grant recipient.

“I’ve got to get SBIRs and STRATFIs reauthorized, so I need the community’s help to get that done,” Purdy said. “There are some valid concerns that need to be addressed. All that needs to be addressed, but it affects the space industrial base a lot more than the other areas, and so I need everyone to kind of pile on and help get that done.”

Purdy took a victory lap by listing several STRATFIs that have, so far, yielded major results, at least for investors. K2 Space, a company developing high-power, low-cost satellite platforms, received $30 million in funding from the Space Force and Air Force in 2024. A year later, K2 closed a $250 million fundraising round at a company valuation of $3 billion. Apex Space, another startup looking to scale satellite manufacturing, received $11 million in strategic funding in 2024. A year later, Apex became a unicorn, exceeding a valuation of $1 billion. Impulse Space, which is working on in-space propulsion, received a STRATFI funding commitment from the Pentagon in 2024, helping propel the startup to a valuation of $1.8 billion.

“Years of SBIRs and STRATFIs have set the stage … We’ve been doing that for three or four or five years, we’ve produced a nice pool of 60 or 70 different companies that can help bid on all our upcoming new contracts, which is really nice,” Purdy said.

Under the Trump administration, the Defense Department has taken more steps to get cash in the hands of defense contractors. The Pentagon announced last month a $1 billion “direct-to-supplier” investment in L3Harris to expand production capacity of US solid rocket motors. This gives the federal government a direct equity stake in L3Harris’s missile business.

A Trump executive order last month also excoriated the defense industry for ballooning executive salaries, stock buybacks, and systemic lethargy. “You see some strong language through executive order and other mechanisms to say, ‘Hey, companies, you need to put in more CapEx yourselves. You need to kick in more yourselves.’ We’re no longer just going to provide you billions of dollars just for you to go build buildings,” Purdy said.

“And there’s some threat language on the back end of that. You’re going to do that, or else we’re going to start cutting you off. We’re going to start looking at other providers. That’s out in the open and subject for debate. But there’s a big carrot coming along with that, and that’s multi-year procurements. Multi-year procurements are the carrot to allow the investing community to have some amount of confidence,” Purdy continued.

“We’re not looking to be your R&D arm.”


Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.
