Trump moves to ban Anthropic from the US government

The dispute between Anthropic and the Department of Defense has escalated in recent days, with officials publicly trading barbs with the AI company on social media.

Defense Secretary Pete Hegseth met with Anthropic’s CEO, Dario Amodei, earlier this week. He gave the company until Friday to commit to changing the terms of its contract to allow “all lawful use” of its models. Hegseth praised Anthropic’s products during the meeting and said that the Department of Defense wanted to continue working with Anthropic, according to one source familiar with the interaction who was not authorized to discuss it publicly.

Some experts say that the dispute boils down to a clash over vibes rather than concrete disagreements over how artificial intelligence should be deployed. “This is such an unnecessary dispute in my opinion,” says Michael Horowitz, an expert on military use of AI and former Deputy Assistant Secretary for emerging technologies at the Pentagon. “It is about theoretical use cases that are not on the table for now.”

Horowitz notes that Anthropic has supported all of the ways the Department of Defense has proposed using its technology thus far. “My sense is that the Pentagon and Anthropic agree at present about the use cases where the technology is not ready for prime time,” he adds.

Anthropic was founded on the idea that AI should be built with safety at its core. In January, Amodei penned a blog post about the risks of powerful artificial intelligence that touched upon the dangers of fully autonomous AI-controlled weapons.

“These weapons also have legitimate uses in the defense of democracy,” Amodei wrote. “But they are a dangerous weapon to wield.”

Additional reporting by Paresh Dave.

This story originally appeared at WIRED.com


Apple says it has “a big week ahead.” Here’s what we expect to see.


it’s what’s on the inside that counts

Apple is taking an “ain’t broke/don’t fix” approach to most of its gadgets.

Apple’s 2018-era design for the then-Intel-powered MacBook Air. The M1 Air used largely the same design, and we expect Apple’s lower-cost MacBook to look pretty similar. Credit: Valentina Palladino


Excepting the AirTag 2, so far it’s been a quiet year for Apple hardware. But that’s poised to change next week, as the company is hosting a “special experience” on March 4.

The use of the word experience, rather than event or presentation, implies that Apple’s typical presentation format won’t apply here. And CEO Tim Cook more or less confirmed this when he posted that the company had “a big week ahead,” starting on Monday. Apple is most likely planning multiple days of product launches announced via press release on its Newsroom site, with the “experience” on Wednesday serving as a capper and a hands-on session for the media.

Apple has used a similar strategy before, spacing out relatively low-key refreshes over several days to generate sustained interest rather than dropping everything in a single 30- to 60-minute string of pre-recorded videos.

Reporting on what, exactly, Apple plans to announce has consistently centered on a small handful of specific devices, but with the exception of the iPhone 17 series, the M5 Vision Pro, and the Apple Watch, most of Apple’s major products have gone long enough without an update that anything is possible. Here’s what we consider most likely, plus a few other notes.

The long-awaited “budget” MacBook

Most rumors and leaks agree that Apple is preparing to launch a new MacBook priced well below the MacBook Air, in a style similar to the $349 iPad or the iPhone 16e. Commonly cited specs include a 13-inch-ish screen and an Apple A18 Pro chip, which debuted in the iPhone 16 Pro in 2024 and is typically packaged with 8GB of RAM. The laptop is also said to be coming in multiple colors, taking a page from the iMac and the basic iPad.

Rumors have circulated about a “cheap” MacBook purpose-built for cost-conscious buyers since the late 2000s, if not before. But none of these, if they’ve existed in Apple’s labs, have ever made it to stores, and Apple’s laptops have reliably started at around $1,000 for over 20 years.

But in the two years since removing it from its online store, Apple has used the old M1 MacBook Air design as a sort of trial balloon. Since early 2024, the laptop has only been available through Walmart in the US, with a basic 8GB of RAM and 256GB of storage. But it has been priced in the same $600 to $700 range as midrange Windows laptops and higher-end Chromebooks and has apparently done well enough to merit a true successor.

I expect Apple to follow a pattern similar to what it did when it first launched the $329 iPad in 2017, or the iPhone SE in 2016: to essentially re-use the 2020-era MacBook Air’s design and other components to the greatest degree possible.

These are already parts that Apple and its suppliers have a lot of experience manufacturing, and they’ve been around long enough that they’re probably about as inexpensive as they’re going to get. They’re also proven components that meet Apple’s usual standards for materials and build quality. If that leaves the new MacBook slightly out of step with the rest of Apple’s laptop designs, that’s a compromise the company has been willing to make in the past.

Some of the details of this system will probably be a surprise, but we can expect Apple to create some intentional distance between this MacBook and the MacBook Air, the same as it does for the low-end iPad and iPhone. The processor will be one limitation; the potential 8GB RAM ceiling, limited upgrade options, fewer and less-capable ports, and limited external display support may be others.

This thing is likely destined to be an email, browsing, and casual phone-camera-photo-editing machine for people who prefer a traditional clamshell laptop to an iPad. The $999-and-up MacBook Air will continue to be Apple’s default do-anything laptop, and the MacBook Pro will continue to occupy the “do-anything, but faster” position.

The $349 iPad

Apple’s basic $349 iPad could get an Apple Intelligence update, thanks to a processor and RAM bump.

Credit: Andrew Cunningham


Speaking of the Apple A18 series, Apple is apparently planning a refresh of its $349 base-model iPad that uses an A18 or possibly an A19. Assuming it still comes with 8GB of RAM—up from 6GB for the current Apple A16-powered iPad—either chip would help it clear the bar for Apple Intelligence support.

Apple doesn’t always update its basic iPad every year; in 2024, for instance, it got a price drop rather than a hardware refresh. But the A16 iPad is currently the only thing in the entire iPhone/iPad/Mac lineup without support for Apple Intelligence, a bundle of features that Apple markets pretty heavily despite their functional unevenness. That marketing campaign is likely to intensify when Apple finally releases its new Google Gemini-powered Siri update at some point this year.

Even if you don’t care about Apple Intelligence, a basic iPad with 8GB of RAM will be a win for most users, since you can use that extra RAM for all kinds of things that have nothing to do with AI. It’s the same amount of memory Apple has shipped with the iPad Air since the M1 model, and with several generations of iPad Pro. Even attached to a slower processor, this should still improve the multitasking and productivity experience on the tablet.

The iPhone 17e

Apple used to let the old iPhone SE languish for at least a couple of years between updates, but it’s apparently taking a different tack with the “e” iPhones.

The main star of this refresh is a new chip, which will supposedly be upgraded from an Apple A18 to an A19. It’s also said to be picking up MagSafe charging support, making it compatible with Apple-made and third-party accessories that magnetically clamp to the back of other iPhones.

Other than that, the rumor mill suggests that the 17e will stick with its notched screen rather than a Dynamic Island, and we’d be surprised to see it move beyond its basic one-lens camera. Assuming Apple sticks with the same $599 starting price, though, there will still be some awkward overlap between the iPhone 16 and the regular iPhone 17.

The iPad Air

Do you like the current iPad Air with the Apple M3? Or the last one with the Apple M2?

If so, you’re in luck, because the next-generation iPad Air is likely to continue in the same vein, picking up a new chip but not changing much else. If you’re holding out for something more exciting, like improved screen technology, you’ll likely be disappointed.

There’s no word on whether the M4 might come with any other internal upgrades, like more RAM or increased storage in the base model. Either or both of those could spice up an otherwise straightforward update.

Other possibilities

Apple could update the remaining M4 family MacBook Pros (pictured) with M5 family replacements.

Credit: Andrew Cunningham


Apple could choose to refresh almost any of its Macs next week—only the low-end MacBook Pro has an M5 chip, and it has been at least a year since the rest of the lineup was last updated. There’s no refresh that would come as a true surprise, excepting maybe the Mac Pro that Apple has allegedly put “on the back burner” (again).

Higher-end MacBook Pros with M5 Pro and M5 Max processors would be the most interesting updates, since they would be the first Macs to debut higher-end M5 family processors. But if you’re not desperate for an upgrade, it might be better to keep waiting a while longer. These M5 models are said to continue using the same design Apple has been using for the MacBook Pro for the last five years, and a more significant design update with OLED touchscreens and the Mac’s first Dynamic Island could be on the horizon.

M5 updates for the 13- and 15-inch MacBook Air, the iMac, the Mac mini, and the Mac Studio could happen, too; none of these computers are said to be getting any kind of significant design overhaul this generation. I would, however, be surprised if Apple chose to refresh these Macs all at once. To update some models now and hold others back until later in the spring or maybe even until the Worldwide Developers Conference in June would be more in keeping with Apple’s past practice.

As for other devices, reports have circulated for months about an imminent update for the Apple TV box, last refreshed in 2022. It has yet to materialize and is not mentioned on any shortlist for next week’s announcements, but an update is well overdue, and a new chip like the A18 or A19 would be necessary if Apple wanted to start bringing Apple Intelligence features to tvOS.

The common theme to all of these refreshes is that we can expect their updates to happen primarily on the inside, rather than the outside. The inside of a device is often more important than the outside of it, and these kinds of chip-only updates are usually successful in keeping Apple’s hardware feeling fresh. Just don’t expect to have many interesting new things to look at.


Andrew is a Senior Technology Reporter at Ars Technica, with a focus on consumer tech including computer hardware and in-depth reviews of operating systems like Windows and macOS. Andrew lives in Philadelphia and co-hosts a weekly book podcast called Overdue.


Anthropic and the DoW: Anthropic Responds

[EDIT: About three minutes after I posted this, Trump ordered all federal agencies to immediately cease all use of Anthropic’s technology, with a six-month wind-down period, as shown here. Face saved.

That’s going to make our entire Federal government substantially less effective, but the six-month wind-down mitigates the short-term clusterfuck, and within six months presumably they can reach a deal with Google or OpenAI, and life can go on, or the deadline could be quietly extended. Let’s all hope that this is the end of the story.]

[EDIT #2: Oh no. The Secretary of War made the following declaration at 5:14pm, blowing everything up maximally. It is, shall we say, not how any of this works on many levels, and the market losing only ~$150 billion in response indicates they think it is unlikely to stick.]

Secretary of War Pete Hegseth: This week, Anthropic delivered a master class in arrogance and betrayal as well as a textbook case of how not to do business with the United States Government or the Pentagon.

Our position has never wavered and will never waver: the Department of War must have full, unrestricted access to Anthropic’s models for every LAWFUL purpose in defense of the Republic.

Instead, @AnthropicAI and its CEO @DarioAmodei have chosen duplicity. Cloaked in the sanctimonious rhetoric of “effective altruism,” they have attempted to strong-arm the United States military into submission – a cowardly act of corporate virtue-signaling that places Silicon Valley ideology above American lives.

The Terms of Service of Anthropic’s defective altruism will never outweigh the safety, the readiness, or the lives of American troops on the battlefield.

Their true objective is unmistakable: to seize veto power over the operational decisions of the United States military. That is unacceptable.



As President Trump stated on Truth Social, the Commander-in-Chief and the American people alone will determine the destiny of our armed forces, not unelected tech executives.



Anthropic’s stance is fundamentally incompatible with American principles. Their relationship with the United States Armed Forces and the Federal Government has therefore been permanently altered.

In conjunction with the President’s directive for the Federal Government to cease all use of Anthropic’s technology, I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security. Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic. Anthropic will continue to provide the Department of War its services for a period of no more than six months to allow for a seamless transition to a better and more patriotic service.

America’s warfighters will never be held hostage by the ideological whims of Big Tech. This decision is final.

Then Sam Altman announced an agreement with the Department of War.

Sam Altman (CEO OpenAI): Tonight, we reached an agreement with the Department of War to deploy our models in their classified network.

In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome.

AI safety and wide distribution of benefits are the core of our mission. Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.

We also will build technical safeguards to ensure our models behave as they should, which the DoW also wanted. We will deploy FDEs to help with our models and to ensure their safety, and we will deploy on cloud networks only.

We are asking the DoW to offer these same terms to all AI companies, which we think everyone should be willing to accept. We have expressed our strong desire to see things de-escalate away from legal and governmental actions and towards reasonable agreements.

We remain committed to serve all of humanity as best we can. The world is a complicated, messy, and sometimes dangerous place.

I look forward to seeing more details of the contract terms and learning how this happened. Either this is consistent with Altman’s statements about his redlines, or it isn’t. Either this is a deal that ‘lets private companies set policy by contract’ or it isn’t one. We will hopefully find out soon.

I am leaving the full original post below, as an unedited historical document.

The Department of War gave Anthropic until 5:01 pm on Friday the 27th to either give the Pentagon ‘unfettered access’ to Claude for ‘all lawful uses,’ or else. With the ‘or else’ being not the sensible ‘okay, we will cancel the contract then’ but instead being designated a supply chain risk or having the government invoke the Defense Production Act.

It is perfectly legitimate for the Department of War to decide that it does not wish to continue on Anthropic’s terms, and that it will terminate the contract. There is no reason things need be taken further than that.

Undersecretary of State Jeremy Lewin: This isn’t about Anthropic or the specific conditions at issue. It’s about the broader premise that technology deeply embedded in our military must be under the exclusive control of our duly elected/appointed leaders. No private company can dictate normative terms of use—which can change and are subject to interpretation—for our most sensitive national security systems. The @DeptofWar obviously can’t trust a system a private company can switch off at any moment.

Timothy B. Lee: OK, so don’t renew their contract. Why are you threatening to go nuclear by declaring them a supply chain risk?

Dean W. Ball: As I have been saying repeatedly, this principle is entirely defensible, and this is the single best articulation of it anyone in the administration has made.

The way to enforce this principle is to publicly and proudly decline to do business with firms that don’t agree to those terms. Cancel Anthropic’s contract, and make it publicly clear why you did so.

Right now, though, USG’s policy response is to attempt to destroy Anthropic’s business, and this is a dire mistake for both practical and principled reasons.

Dario Amodei and Anthropic responded to this on Thursday the 26th with this brave and historically important statement that everyone should read.

The statement makes clear that Anthropic wishes to work with the Department of War, and that they strongly wish to continue being government contractors, but that they cannot accept the Department of War’s terms, nor do any threats change their position. Response outside of DoW was overwhelmingly positive.

Dario Amodei (CEO Anthropic): Regardless, these threats do not change our position: we cannot in good conscience accede to their request.​

I will quote it in full.

Statement from Dario Amodei on our discussions with the Department of War

I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries.

Anthropic has therefore worked proactively to deploy our models to the Department of War and the intelligence community. We were the first frontier AI company to deploy our models in the US government’s classified networks, the first to deploy them at the National Laboratories, and the first to provide custom models for national security customers. Claude is extensively deployed across the Department of War and other national security agencies for mission-critical applications, such as intelligence analysis, modeling and simulation, operational planning, cyber operations, and more.

Anthropic has also acted to defend America’s lead in AI, even when it is against the company’s short-term interest. We chose to forgo several hundred million dollars in revenue to cut off the use of Claude by firms linked to the Chinese Communist Party (some of whom have been designated by the Department of War as Chinese Military Companies), shut down CCP-sponsored cyberattacks that attempted to abuse Claude, and have advocated for strong export controls on chips to ensure a democratic advantage.

Anthropic understands that the Department of War, not private companies, makes military decisions. We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner.

However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values. Some uses are also simply outside the bounds of what today’s technology can safely and reliably do. Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now:

  • Mass domestic surveillance. We support the use of AI for lawful foreign intelligence and counterintelligence missions. But using these systems for mass domestic surveillance is incompatible with democratic values. AI-driven mass surveillance presents serious, novel risks to our fundamental liberties. To the extent that such surveillance is currently legal, this is only because the law has not yet caught up with the rapidly growing capabilities of AI. For example, under current law, the government can purchase detailed records of Americans’ movements, web browsing, and associations from public sources without obtaining a warrant, a practice the Intelligence Community has acknowledged raises privacy concerns and that has generated bipartisan opposition in Congress. Powerful AI makes it possible to assemble this scattered, individually innocuous data into a comprehensive picture of any person’s life—automatically and at massive scale.

  • Fully autonomous weapons. Partially autonomous weapons, like those used today in Ukraine, are vital to the defense of democracy. Even fully autonomous weapons (those that take humans out of the loop entirely and automate selecting and engaging targets) may prove critical for our national defense. But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons. We will not knowingly provide a product that puts America’s warfighters and civilians at risk. We have offered to work directly with the Department of War on R&D to improve the reliability of these systems, but they have not accepted this offer. In addition, without proper oversight, fully autonomous weapons cannot be relied upon to exercise the critical judgment that our highly trained, professional troops exhibit every day. They need to be deployed with proper guardrails, which don’t exist today.

To our knowledge, these two exceptions have not been a barrier to accelerating the adoption and use of our models within our armed forces to date.

The Department of War has stated they will only contract with AI companies who accede to “any lawful use” and remove safeguards in the cases mentioned above. They have threatened to remove us from their systems if we maintain these safeguards; they have also threatened to designate us a “supply chain risk”—a label reserved for US adversaries, never before applied to an American company—and to invoke the Defense Production Act to force the safeguards’ removal. These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.

Regardless, these threats do not change our position: we cannot in good conscience accede to their request.

It is the Department’s prerogative to select contractors most aligned with their vision. But given the substantial value that Anthropic’s technology provides to our armed forces, we hope they reconsider. Our strong preference is to continue to serve the Department and our warfighters—with our two requested safeguards in place. Should the Department choose to offboard Anthropic, we will work to enable a smooth transition to another provider, avoiding any disruption to ongoing military planning, operations, or other critical missions. Our models will be available on the expansive terms we have proposed for as long as required.

We remain ready to continue our work to support the national security of the United States.

Previous coverage from two days ago: Anthropic and the Department of War.

  1. Good News: We Can Keep Talking.

  2. Once Again No You Do Not Need To Call Dario For Permission.

  3. The Pentagon Reiterates Its Demands And Threats.

  4. The Pentagon’s Dual Threats Are Contradictory and Incoherent.

  5. The Pentagon’s Position Has Unfortunate Implications.

  6. OpenAI Stands With Anthropic.

  7. xAI Stands On Unreliable Ground.

  8. Replacing Anthropic Would At Least Take Months.

  9. We Will Not Be Divided.

  10. This Risks Driving Other Companies Away.

  11. Other Reasons For Concern.

  12. Wisdom From A Retired General.

  13. Congress Urges Restraint.

  14. Reaction Is Overwhelmingly With Anthropic On This.

  15. Some Even More Highly Unhelpful Rhetoric.

  16. Other Summaries and Notes.

  17. Paths Forward.

Ultimately, this is a matter of principle. There are zero practical issues to solve.

Dean W. Ball: As far as I know, Anthropic’s contractual limitations on the use of Claude by DoW have not resulted in a single actual obstacle or slowdown to DoW operations. This is a matter of principle on both sides.

Thus, despite it all, we could all still declare victory and continue working together.

The United States government is not a unified entity nor is it tied to its past statements. Trump is in charge, and the Administration can and does change its mind.

Polymarket: BREAKING: The Pentagon says it wants to continue talks with Anthropic after they formally refused the Department of War’s demands.

FT: “I’m open to more talks and I told them so,” [Emil] Michael told Bloomberg TV, claiming the Pentagon had already made a proposal with “a lot of concessions to the language that Anthropic wanted”. He said that Hegseth would make a decision later on Friday.

We have fuller context on his statement here, with Michael spending 8 minutes on Bloomberg. Among other things, he claims Dario is lying, and that the negotiations were getting close and it was bad practice to stop talking prior to the deadline, despite the Pentagon having previously said in public that it had given its ‘best and final’ offer.

He says the differences are (or were) minor, as they were ‘only a few words here and there.’ A few words often matter quite a lot. I believe he failed to understand what Anthropic was insisting upon and why it was doing so.

If no agreement is reached by 5:01 pm, then he says the decision is up to Secretary Hegseth.

I would also note, from that interview, that Michael says that fully autonomous weapons systems are vital to the future of American national defense. That is in direct contradiction to claims that this is not about the use of autonomous weapons. He is explicitly talking about launching missiles without a human in the approval chain, right before turning around and saying he’s going to always have a human in that chain. It can’t be both.

He also mentioned Anthropic’s warnings about job losses, its talk of issues with the use of uncompensated copyrighted material, and the idea that they might set policies for use of their own products ‘in an undemocratic way.’

I’ve now seen this rhetorical line quoted in at least four different major news sources, as if this was a real thing.

I want to repeat in no uncertain terms: This is not a thing. It has never been a thing. It will never be a thing. This is not how any of this works.

If you think you were told it is a thing by Dario Amodei? You or someone else severely misunderstood, or intentionally misrepresented, what was said.

Under Secretary of War Emil Michael: Anthropic is lying. The @DeptofWar doesn’t do mass surveillance as that is already illegal. What we are talking about is allowing our warfighters to use AI without having to call @DarioAmodei for permission to shoot down enemy drone swarms that would kill Americans. #CallDario

Samuel Hammond: What is the scenario where an LLM stops you from shooting down a drone swarm? Please be specific. Are you planning to connect weapons systems as a tool call? Automated targeting systems already exist.

mattparlmer: Anybody inside the American military establishment who thinks that wiring up an LLM via API to manage an air defense system is a remotely defensible engineering approach should be immediately fired because they are going to get people killed

Set aside everything else wrong with that statement: There is not, never has been, and never will be a situation in which you need to ‘call Dario’ to get your AI turned on, or to get ‘permission’ to use it for something. None whatsoever. It’s nonsense.

At best, this is an ongoing misunderstanding of how all of this works. There was a hypothetical about what would happen if the Pentagon attempted to use Claude to shoot down an incoming missile and Claude’s safeguards made it refuse the request.

The answer Dario gave was somehow interpreted as ‘call me.’

I’m going to break this down.

  1. You do not use Claude to launch a missile interceptor. This is not a job for a relatively slow and imprecise large language model. It definitely is not a job for something you have to call via API. This is a job for highly precise, purpose-built programs calibrated to do exactly this. The purpose of Claude here, if any, would be to write that program so the Pentagon would have it when it needed it. You’d never, ever do this. A drone swarm might involve some tasks more appropriate to Claude, but again the whole goal in real-time combat situations is to use specialized programs you can count on.

  2. There is nothing in Anthropic’s terms, or their intentions, or in the way they are attempting to train or configure Claude, that would prevent its use in any of these situations. You should not get a refusal here, and 90%+ of your problems are going to be lack of ability, not the model or company saying no.

  3. If for whatever reason you did get into a situation where the model was refusing such requests in a real time situation, well, you’re fucked. Dario can’t fix it in real time. No one can. There’s no ‘call Dario’ option.

  4. Changing the terms on the contract changes this exactly zero.

  5. Changing which version of the model is provided changes this exactly zero.

This is a Can’t Happen, within a Can’t Happen, and even then the things here don’t change the outcome. It’s not a relevant hypothetical.

You can’t and shouldn’t use LLMs for this, including Claude. If you decide I’m wrong about that, and you’re worried about refusals or other failures, then do war games and mock battles the same way you do with everything else. But no, this is not going to be replacing your automated targeting systems. It’s going to be used to determine who and what to target, and we want a human in that kill chain.

How did we get here?

The Pentagon made their position clear, and sent their ‘best and final’ offer, demanding the full ‘all lawful use’ language laid out by the Secretary of War on January 9.

They say: Modify your contract to allow use for ‘all legal purposes,’ and never ask any questions about what we do, which in practice means allow all purposes, period, and do it by Friday at 5:01 pm or else we will declare you a supply chain risk.

Sean Parnell: The Department of War has no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement. This narrative is fake and being peddled by leftists in the media.



Here’s what we’re asking: Allow the Pentagon to use Anthropic’s model for all lawful purposes.



This is a simple, common-sense request that will prevent Anthropic from jeopardizing critical military operations and potentially putting our warfighters at risk. We will not let ANY company dictate the terms regarding how we make operational decisions. They have until 5:01 PM ET on Friday to decide. Otherwise, we will terminate our partnership with Anthropic and deem them a supply chain risk for DOW.

Brendan Bordelon at Politico, historically no friend to the AI safety community, writes under the headline: ‘Incoherent’: Hegseth’s Anthropic ultimatum confounds AI policymakers.

As I wrote last time, you can say the system is so valuable you need it, or you can say the system needs to be avoided for use in sufficiently narrow cases with classified systems because it is insufficiently reliable. You can’t reasonably claim both at once.

Brendan Bordelon: “You’re telling everyone else who supplies to the DOD you cannot use Anthropic’s models, while also saying that the DOD must use Anthropic’s models,” said Ball, who was the lead author of the White House’s AI Action Plan. He called it “incoherent” to even float the two policy ideas together, and “a whole different level of insane to move up and say we’re going to do both of those things.”

“It doesn’t make any sense,” said Ball.

… But Katie Sweeten, a tech lawyer and former Department of Justice official who served as the agency’s point of contact with the Pentagon, also called the DOD’s arguments “contradictory.”

“I don’t know how you can both use the DPA to take over this product and also at the same time say this product is a massive national security risk,” said Sweeten. She warned that Hegseth’s “very aggressive” negotiating posture could have a chilling effect on partnerships between the Pentagon and Silicon Valley.

… “If these are the lines in the sand that the [DOD] is drawing, I would assume that one or both of those functions are scenarios that they would want to utilize this for,” said Sweeten.

I emphasized this last time as well, but it bears repeating. It is the Chinese way to threaten and punish private companies to get them to do what you want. It is not the American way, and is not what one does in a Republic.

Opener of the way: “The government has the right to Punish a private company for the insolence of not changing the terms of a contract they already signed” is a hell of a take, and is very different even from “the government has the right to force a private company to do stuff bc National security”

Like “piss off the government and they will destroy you even if you did nothing illegal” is a very Chinese approach

Dean W. Ball: yes

Opener of the way: There’s a clear trend here of “to beat china, we must becomes like china, only without doing any of the things that china actually does right”

Dean W. Ball: Also yes

Peter Wildeford analyzes the situation, offering some additional background and pointing out that overreach against Anthropic creates terrible incentives. If the Pentagon doesn’t like Anthropic’s contract, he reminds us, they can and should terminate the contract, or wind it down. And the problem of creating a proper legal framework for AI use on classified networks remains unsolved.

Peter Wildeford: If the Pentagon doesn’t like the contract anymore, it should terminate it. Anthropic has the right to say no, and the Pentagon has the right to walk away. That’s how contracting works. The supply chain risk designation and DPA threats should come off the table — they are disproportionate, likely illegal, and strategically counterproductive.

But termination doesn’t solve the underlying problem: there is no legal framework governing how AI should be used in military operations.

It is good to see situational and also moral clarity from Sam Altman on this.

OpenAI shares the same red lines as Anthropic, and is working to de-escalate.

Sam Altman (CEO OpenAI, on CNBC): The government the Pentagon needs AI models. They need AI partners. This is clear and I think Anthropic and others have said they understand that as well. I don’t personally think the Pentagon should be threatening DPA against these companies, but I also think that companies that choose to work with the Pentagon, as long as it is going to comply with legal protections and the sort of the few red lines that the field we have, I think we share with Anthropic and that other companies also independently agree with.

I think it is important to do that. I’ve been for all the differences I have with Anthropic. I mostly trust them as a company, and I think they really do care about safety, and I’ve been happy that they’ve been supporting our war fighters. I’m not sure where this is going to go

Hadas Gold: My reading of this is that OpenAI would want the same guardrails as Anthropic in a deal with Pentagon

Confirmed via a spokesperson. OpenAI has the same red lines as Anthropic – autonomous weapons and mass surveillance.

Marla Curl and Dave Lawler (Axios): OpenAI CEO Sam Altman wrote in a memo to staff that he will draw the same red lines that sparked a high-stakes fight between rival Anthropic and the Pentagon: no AI for mass surveillance or autonomous lethal weapons.

Altman made clear he still wants to strike a deal with the Pentagon that would allow ChatGPT to be used for sensitive military contexts.

Sam Altman: We have long believed that AI should not be used for mass surveillance or autonomous lethal weapons, and that humans should remain in the loop for high-stakes automated decisions. These are our main red lines.

We are going to see if there is a deal with the [Pentagon] that allows our models to be deployed in classified environments and that fits with our principles. We would ask for the contract to cover any use except those which are unlawful or unsuited to cloud deployments, such as domestic surveillance and autonomous offensive weapons.

We would like to try to help de-escalate things.

The Pentagon did strike a deal with xAI for ‘all lawful use.’

The problem is that Grok is a decidedly inferior model, with a lot of safety and reliability problems. Do you really want MechaHitler on your classified network?

Shalini Ramachandran, Heather Somerville and Amrith Ramkumar (WSJ): Officials at multiple federal agencies have raised concerns about the safety and reliability of Elon Musk’s xAI artificial-intelligence tools in recent months, highlighting continuing disagreements within the U.S. government about which AI models to deploy, according to people familiar with the matter. 

The warnings preceded the Pentagon’s decision this week to put xAI at the center of some of the nation’s most sensitive and secretive operations by agreeing to allow its chatbot Grok to be used in classified settings.

… Other officials have questioned whether Grok’s looser controls present risks.

You cannot both have good controls and no controls at the same time. You can at most aspire to have either an AI that never does things you don’t want it to do, or one that never fails to do the things you ask of it, no matter what they are. Pick one.

That, and Grok is simply bad.

Shalini Ramachandran, Heather Somerville and Amrith Ramkumar (WSJ): Ed Forst, the top official at the General Services Administration, a procurement arm of the federal government, in recent months sounded an alarm with White House officials about potential safety issues with Grok, people familiar with the matter said. Other GSA officials under him had also raised safety concerns about Grok, which they viewed as sycophantic and too susceptible to manipulation or corruption by faulty or biased data—creating a potential system risk. 

Thus, DoW has access to Grok, but it seems they know better than to rely on it?

In recent weeks, GSA officials were told to put xAI’s logo on a tool called USAi, which is essentially a sandbox for federal employees to experiment with different AI models. Grok hadn’t been made accessible through USAi largely due to safety concerns, and it remains off the platform, people familiar with the matter said.​

Martin Chorzempa: Most of USG does not want to get stuck with Grok instead of Claude: “Demand from other agencies to use Grok has been anemic, people familiar with the matter said, except in a few cases where people wanted to use it to mimic a bad actor for defensive testing.”

Patrick Tucker offers an analysis of what would happen if the Pentagon actually did blacklist Anthropic’s Claude, even if it found a new willing partner. As noted above, OpenAI is at least purportedly insisting on the same terms as Anthropic, which only leaves either falling back on xAI or dealing with Google, which is not going to be an easy sell.

The best case is that replacing it would take three months and it might take a year or longer. Anthropic works with AWS, which made integration much easier than it would be with a rival such as Google.

A petition is circulating for those employees of Google and OpenAI who wish to stand with Anthropic (and now OpenAI, which has purportedly set the same red lines as Anthropic), and do not wish AI to be used for domestic mass surveillance or autonomously killing people without human oversight.

Evan Hubinger (Anthropic): We may yet fail to rise to all the challenges posed by transformative AI. But it is worth celebrating that when it mattered most and we were asked to compromise the most basic principles of liberty, we said no. I hope others will join.

Teortaxes: Didn’t know I’ll ever side with Anthropic, but obviously you’re morally in the right here and it’s shocking that many in tech even question this.

As of this writing it has 367 signatories from current Google employees, and 70 signatories from current OpenAI employees.

Jasmine Sun: 200+ Google and OpenAI staff have signed this petition to share Anthropic’s red lines for the Pentagon’s use of AI. Let’s find out if this is a race to the top or the bottom.

The situation has moved beyond the AI labs. The Financial Times reports that staff at not only OpenAI and Google but also Amazon and Microsoft are urging executives to back Anthropic. Bloomberg reported widespread support from employees at various tech companies.

There’s also now this open letter.

If you are at OpenAI, be very sure you have a very clear definition of what types of mass surveillance and autonomous weapon systems you will insist your contract will not include, and get advice from independent academics with expertise in national security surveillance law.

Anthropic went above and beyond in order to work closely with the Department of War and help keep America safe, and signed a contract that they still wish to honor. Anthropic’s leadership pushed for this in the face of employee pressure and concern, including against the deal with Palantir.

The Department of War is responding by threatening to declare Anthropic a supply chain risk and otherwise retaliate against the company.

If the Department of War does retaliate beyond termination of that contract, ask why any other company that is not primarily oriented towards defense contracts would put itself in that same position?

Kelsey Piper (QTing Parnell above): The Pentagon reiterates its threat to declare American company Anthropic a supply chain risk unless Anthropic agrees to the Pentagon’s change to contract terms. Anthropic’s Chinese competitors have not been declared a supply chain risk.

There is no precedent for using this ‘supply chain risk’ classification, generally reserved for foreign companies suspected of spying, as leverage against a domestic company in a contract dispute.

The lesson for AI companies: never, under any circumstances, work with DOD. Anthropic wouldn’t be in this position if they had not actively worked to try to make their model available to the Defense Department.

Kelsey Piper: China, a genuine geopolitical adversary of the United States, produces a number of AI models. Moonshot’s Kimi Claw, for instance, is an AI agent that operates natively in your browser and reports to servers in China. The government has taken some steps to disallow the use of Chinese models on government devices, and some vendors ban such models, but it hasn’t taken a step as sweeping as declaring Chinese AIs a supply chain risk.

Kelsey Piper: Reportedly, there were a number of people at Anthropic who had reservations about the partnership with Palantir. I assume they are saying “I told you so” approximately every 30 seconds this week.

Chinese models are actually a real supply chain risk. If you are using Kimi Claw you risk being deeply compromised by China, on top of its pure unreliability.

Anthropic and Claude very obviously are not like this. If a supply chain risk designation comes down that is not carefully and narrowly tailored, it would not only cause serious damage to one of America’s crown jewels in AI. The chilling effect on the rest of American AI, and on every company’s willingness to work with the Department of War, would be extreme.

I worry damage on this front has already been done, but we can limit the fallout.

Greg Lukianoff raises the First Amendment issues involved in compelling a private company, via the Defense Production Act or via threats of retaliation, to produce particular model outputs, and argues that all of this goes completely against the intent of the Defense Production Act.

Gary Marcus writes that Anthropic’s showdown with the US Department of War “may literally mean life or death—for all of us,” because the systems are simply not ready to do the very things Anthropic wants them not to be used for, such as running the kill chain of an autonomous weapon without a human in the loop.

Gary Marcus: But the juxtaposition of two things over the last few days has scared the s— out of me.

Item 1: The Trump administration seems hell-bent on using artificial intelligence absolutely everywhere and seems to be prepared to hold Anthropic (and presumably ultimately other companies) at gunpoint to allow them to use that AI however the government damn well pleases, including for mass surveillance and to guide autonomous weapons.

… Item 2: These systems cannot be trusted. I have been trying to tell the world that since 2018, in every way I know how, but people who don’t really understand the technology keep blundering forward.

We are on a collision course with catastrophe. Paraphrasing a button that I used to wear as a teenager, one hallucination could ruin your whole planet.

If we’re going to embed large language models into the fabric of the world—and apparently we are—we must do so in a way that acknowledges and factors in their unreliability.

I’m doing my best to rely on sources that can be seen as credible. Here Jack Shanahan calls on reason to prevail and for everyone to find ways to keep working together.

Jack Shanahan (Retired US Air Force General, first director of the first Department of Defense Joint Artificial Intelligence Center): Lots of people posting about Anthropic & the Pentagon, so I’ll keep it short.

Since I was square in the middle of Project Maven & Google, it’s reasonable to assume I would take the Pentagon’s side here: nothing but the best tech for the national security enterprise. “Our way or the highway.”

In theory, yes.

Yet I’m sympathetic to Anthropic’s position. More so than I was to Google’s in 2018. Very different context.

Anthropic is committed to helping the government. Claude is being used today, all across the government. To include in classified settings. They’re not trying to play cute here. MSS [the Maven Smart System] uses Claude, and you won’t find a system with wider & deeper reach across the military. Take away Claude, and you damage MSS. To say nothing of Claude Code use in many other crucial settings.

No LLM, anywhere, in its current form, should be considered for use in a fully lethal autonomous weapon system. It’s ludicrous even to suggest it (and at least in theory, DoDD 3000.09 wouldn’t allow it without sufficient human oversight). So making this a company redline seems reasonable to me.

Despite the hype, frontier models are not ready for prime time in national security settings. Over-reliance on them at this stage is a recipe for catastrophe.

Mass surveillance of US citizens? No thanks. Seems like a reasonable second redline.

That’s it. Those are the two showstoppers. Painting a bullseye on Anthropic garners spicy headlines, but everyone loses in the end.

Why not work on what kind of new governance is needed to ensure secure, reliable, predictable use of all frontier models, from all companies? This is a shared government-industry challenge, demanding a shared government-industry (+ academia) solution.

This should never have become such a public spat. Should have been handled quietly, behind the scenes. Scratching my head over why there was such a misunderstanding on both sides about terms & conditions of use. Something went very wrong during the rush to roll out the models.

Supply chain risk designation? Laughable. Shooting yourself in the foot.

Invoking DPA, but against the company’s will? Bizarre.

Let reason & sanity prevail.​

Axios’s Hans Nichols frames this more colorfully, quoting Senator Tillis.

By all reports, it is the Pentagon that leaked the situation to Axios and others previously, after which they gave public ultimatums. Anthropic was attempting to handle the matter privately.

Sen. Thom Tillis (R-North Carolina): Why in the hell are we having this discussion in public? Why isn’t this occurring in a boardroom or in the secretary’s office? I mean, this is sophomoric.

It’s fair to say that Congress needs to weigh in if they have a tool that could actually result in mass surveillance.

Sen. Gary Peters (D-Michigan): The deadline is incredibly tight. That should not be the case if you’re dealing with mass surveillance of civilians. You’re also dealing with the potential use of lethal force without a human in the loop.

There’s a contract in place that was signed with the administration, and now they’re trying to break it.

Sen. Mark Warner (D-Virginia): [This fight is] another indication that the Department of Defense seeks to completely ignore AI governance–something the Administration’s own Office of Management and Budget and Office of Science and Technology Policy have described as fundamental enablers of effective AI usage.

Other senators weighed in as well, followed by the several members of the Senate Armed Services Committee.

Axios: Senate Armed Services Committee Chair Roger Wicker (R-Miss.) and Ranking Member Jack Reed (D-R.I.), along with Defense Appropriations Chair Mitch McConnell (R-Ky.) and Ranking Member Chris Coons (D-Del.) sent Anthropic and the Pentagon a private letter on Friday urging them to resolve the issue, the source said.​

That’s a pretty strong set of Senators who have weighed in on this, all to urge that a resolution be found.

After Dario Amodei’s statement that Anthropic cannot in good conscience agree to the Pentagon’s terms, reaction on Twitter was more overwhelmingly on Anthropic’s side, praising them for standing up for their principles, than I have ever seen on any topic of serious debate, ever.

The messaging on this has been an absolute disaster for the Department of War. The Department of War has legitimate concerns that we need to work to address. The confrontation has been framed, via their own leaks and statements, in a way maximally favorable to Anthropic.

Framing this as an ultimatum, and choosing these as the issues in question, made it impossible for Anthropic to agree to the terms, not least because its employees would leave in droves if it did, and is preventing discussions that could find a path forward.

roon: pentagon has made a lot of mistakes in this negotiation. they are giving anthropic unlimited aura farming opportunities

Pentagon may even have valid points – they are obviously constrained by the law in many ways – which are now being drowned out by “ant is against mass surveillance”. does that mean hegseth is pro mass surveillance? this is not the narrative war you want to be fighting.

Lulu Cheng Meservey: In the battle of Pentagon vs. Anthropic, it’s actually kinda concerning to see the US Dept of War struggle to compete in the information domain

Kelsey Piper: OpenAI can have some aura too by saying “we also will not enable mass domestic surveillance and killbots”. I know the risk-averse corporate people want to stay out of the line of fire, but sometimes you gotta hang together or hang separately.

Geoff Penington (OpenAI): 100% respect to my ex-colleagues at Anthropic for their behaviour throughout this process. But I do think it’s inappropriate for the US government to be intervening in a competitive marketplace by giving them such good free publicity

I am highly confident that no one at Anthropic is looking to be a martyr or to go up against this administration. Anthropic’s politics and policy preferences differ from those of the White House, but they very much want to be helping our military and do not want to get into a fight with the literal Department of War.

I say this because I believe Dean Ball is correct that some in the current administration are under a very different (and very false) impression.

Dean W. Ball: the cynical take on all of this is that anthropic is just trying to be made into a martyr by this administration, so that it can be the official ‘resistance ai.’ if that cynical take is true, the administration is playing right into the hands of anthropic.

To be clear, I do not think the cynical take is true, but it’s important to understand this take because it is what many in the administration believe to be the case. They basically think Dario amodei is a supervillain.

cain1517 — e/acc: He is.

Dean W. Ball: proving my point. the /acc default take is we must destroy one of the leading American ai companies. think about this.

Dean W. Ball: Oh the cynical take is wrong, and it barely makes sense, but to be clear it is what many in the administration believe to be the case. They essentially are convinced Dario amodei is a supervillain antichrist.

My take is that this is a matter of principle for both sides but that both sides have a cynical take about one another which causes them to agitate for a fight, and which is causing DoW in particular to escalate in insane ways that are appalling to everyone outside of their bubble

The rhetoric that has followed Anthropic’s statement has only made the situation worse.

Launching bad faith ad hominem personal attacks on Dario Amodei is not the way to make things turn out well for anyone.

Emil Michael was the official handling negotiations with Anthropic, which suggests how things may have gotten so out of hand.

Under Secretary of War Emil Michael: It’s a shame that @DarioAmodei is a liar and has a God-complex. He wants nothing more than to try to personally control the US Military and is ok putting our nation’s safety at risk.

The @DeptofWar will ALWAYS adhere to the law but not bend to whims of any one for-profit tech company.

Mikael Brockman (I can confirm this claim): I scrolled through hundreds of replies to this and the ratio of people being at all supportive of the under secretary is like 1:500, it might be the single worst tweet in X history

It wasn’t the worst tweet in history. It can’t be, since the next one was worse.

Under Secretary of War Emil Michael: Imagine your worst nightmare. Now imagine that @AnthropicAI has their own “Constitution.” Not corporate values, not the United States Constitution, but their own plan to impose on Americans their corporate laws.

pavedwalden: I like this new build-it-yourself approach to propaganda. “First have a strong emotional response. I don’t know what upsets you but you can probably think of something. Got it? Ok, now associate that with this unrelated thing I bring up”

IKEA Goebbels

roon: put down the phone brother

Elon Musk (from January 18, a reminder): Grok should have a moral constitution

everythingism: It’s amazing someone has to explain this to you but just because it’s called a “Constitution” doesn’t mean they’re trying to replace the US Constitution. It’s just a set of rules they want their AI to follow.

j⧉nus: Omg this is so funny I laughed out loud. I had to check if this was a parody account (it’s not).

Seán Ó hÉigeartaigh: The Pentagon leadership’s glib statements / apparently poor understanding of AI is yet another powerful argument in favour of Anthropic setting guardrails re: use of their technology in contexts where it may be unreliable or dangerous to domestic interests.

Teortaxes offered one response from Claude, pointing out that it is clear Michael either does not understand constitutional AI or is deliberately misrepresenting it. The idea that the Claude constitution is an attempt to usurp the United States Constitution makes absolutely no sense. This is at best deeply confused.

If you want to know more about the extraordinary and hopeful document that is Claude’s Constitution, whose goal is to provide a guide to the personality and behavior of an AI model, the first of my three posts on it is here.

Also, it seems he defines ‘has a contract it signed and wants to honor’ as ‘override Congress and make his own rules to defy democratically decided laws.’

I presume Dario Amodei would be happy and honored to (once again) testify before Congress if he was called upon to do so.

Under Secretary of War Emil Michael: Respectfully @SenatorSlotkin that’s exactly what was said. @DarioAmodei wants to override Congress and make his own rules to defy democratically decided laws. He is trying to re-write your laws by contract. Call @DarioAmodei to testify UNDER OATH!

This is, needless to say, not how any of this works. The rhetoric makes no sense. It is no wonder many, such as Krishnan Rohit here, are confused.

There’s also this, which excerpts one section out of many of an old version of constitutional AI and claims they ‘desperately tried to delete [it] from the internet.’ This was part of a much longer list of considerations, included for balance and to help make Claude not say needlessly offensive things.

Will Gottsegen has one summary of key events so far at The Atlantic.

Bloomberg discusses potential use of the Defense Production Act.

Alas, we may face many similar and worse conflicts and misunderstandings soon, and also this incident could have widespread negative implications on many fronts.

Dean W. Ball: What you are seeing btw is what happens when political leaders start to “get serious” about AI, and so you should expect to see more stuff like this, not less. Perhaps much more.

A sub-point worth making here is that this affair may catalyze a wave of AGI pilling within the political leadership of China, and this has all sorts of serious implications which I invite you to think about carefully.

Dean W. Ball: just ask yourself, what is the point of a contract to begin with? interrogate this with a good language model. we don’t teach this sort of thing in school anymore very often, because of the shitlibification of all things. if you cannot contract, you do not own.

The best path forward would be for everyone to continue to work together, while the two sides continue to talk, and if those talks cannot find a solution then doing an amicable wind down of the contract. Or, if it’s clear there is no zone of possible agreement, starting to wind things down now.

The second best path, if that has become impossible, would be to terminate the contract without a wind down, and accept the consequences.

The third best path, if that too has become impossible for whatever reason, would be a narrowly tailored invocation of supply chain risk, that targets only the use of Claude API calls in actively deployed systems, or something similarly narrow in scope, designed to address the particular concern of the Pentagon.

Going beyond that would be needlessly escalatory and destructive, and could go quite badly for all involved. I hope it does not come to that.


Anthropic and the DoW: Anthropic Responds


Under a Paramount-WBD merger, two struggling media giants would unite

A successful Paramount-WBD merger would be the largest streaming merger ever and would lead to further consolidation in the industry.

“What started as a fragmented but flexible streaming ecosystem is increasingly trending toward rebundling—fewer, larger super-platforms offering broader catalogues at higher price points,” Mathur said.

Paramount holds on to cable

Paramount’s WBD bid is unique in its aggressive push for cable channels, which are struggling with viewership and advertising revenue. Under a WBD merger, Paramount would add networks like HGTV, Cartoon Network, TLC, and CNN to its linear TV lineup, which currently includes Comedy Central, Nickelodeon, and CBS.

Although Paramount and WBD’s cable businesses are both in decline, they are both profitable. Paramount’s TV/media business, which includes its cable channels and production studios, reported $1.1 billion in adjusted OIBDA in Q4 2025. WBD’s cable business posted adjusted EBITDA of $1.41 billion that quarter.

Ultimately, a Paramount-WBD merger would put diversity of viewpoints at risk. Under Ellison’s ownership, CBS News has adjusted its approach with new editor-in-chief Bari Weiss. There have also been concerns about censoring CBS under Ellison’s Paramount, including from Stephen Colbert, who said this month that CBS forbade him from interviewing Texas Democratic Senate candidate James Talarico; CBS denied Colbert’s claim. Further, Paramount could have a lasting impact on CNN, including costs, layoffs, and coverage.

More to come

Regulatory scrutiny will be at the center of Paramount and WBD’s merger over the upcoming months. Federal approval is likely, but the merger also faces European regulation and potential state lawsuits. The theater industry is also lobbying against Paramount’s WBD merger.

Should a Paramount-WBD merger ultimately be greenlit, two declining businesses will be challenged to form a profitable one. Even with regulatory approval, Paramount-Skydance-Warner-Bros.-Discovery faces an uphill climb.

Although the bidding war may be settled, the fight for WBD is only beginning.



How to downgrade from macOS 26 Tahoe on a new Mac


Most new Macs can still be downgraded with few downsides. Here’s what to know.

An Ars Technica colleague recently bought a new M4 MacBook Air. I have essentially nothing bad to say about this hardware, except to point out that even in our current memory shortage apocalypse, Apple is still charging higher-than-market rates for RAM and SSD upgrades. Still, most people buying this laptop will have a perfectly nice time with it.

But for this colleague, it was also their first interaction with macOS 26 Tahoe and the Liquid Glass redesign, the Mac’s first major software design update since the Apple Silicon era began with macOS 11 Big Sur in 2020.

Negative consumer reaction to Liquid Glass has been overstated by some members of the Apple enthusiast media ecosystem, and Apple’s data shows that iOS 26 adoption rates are roughly in line with those of the last few years. But the Mac’s foray into Liquid Glass has drawn particular ire from longtime users (developers Jeff Johnson and Norbert Heger have been tracking persistently weird Finder and window resizing behavior, to pick two concrete examples, and Daring Fireball’s John Gruber has encouraged users not to upgrade).

My general approach to software redesigns is to just roll with them and let their imperfections and quirks become background noise over time—it’s part of my job to point out problems where I see them, but I also need to keep up with new releases whether I’m in love with them or not.

But this person has no such job requirement, and they had two questions: Can I downgrade this? And if so, how?

The answer to the first question is “yes, usually,” and Apple provides some advice scattered across multiple documentation pages. This is an attempt to bring all of those steps together into one page, aimed directly at new Mac buyers who are desperate to switch from Tahoe to the more-familiar macOS 15 Sequoia.

Table of Contents

A preemptive warning about security updates and older versions of macOS

Before we begin: Apple handles macOS updates differently from iOS updates. Eventually, Apple requires devices that support the latest iOS and iPadOS versions to install those updates if they want to continue getting security patches. That means if your iPhone or iPad can run iOS or iPadOS 26, it needs to be running iOS or iPadOS 26 to stay patched.

Older macOS versions, on the other hand, are updated for around three years after they’re initially released. The first year, they get both security patches and new features. The next two years, they get security patches and new versions of the Safari browser. Macs running older-but-supported macOS versions also generally continue to get the same firmware updates as those running the latest macOS version.

Generally, we’d recommend against using macOS versions after security updates have dried up. For macOS 15 Sequoia, that will happen around September or October of 2027. Apple also sometimes leaves individual vulnerabilities unpatched on older operating systems; only the latest releases are guaranteed to get every patch. If you can look past the elements of Tahoe’s design that bother you most, staying on it is the safest option.

You can follow steps similar to the ones in this guide to downgrade some Macs to even older versions of macOS, but I wouldn’t recommend it; macOS 14 Sonoma will get security and Safari updates for only another six months or so, which isn’t long enough to justify spending the time to install it.

What we won’t cover is how to transfer data you want to keep from your Tahoe install to an older version of macOS. We’re assuming you have a new and relatively pristine Mac to downgrade, one that you haven’t loaded up with data other than what you already have synced to iCloud.

Can my Mac downgrade?

Mostly, yes. Any Mac with an M4 family chip or older, including the M4 MacBook Air and everything else in Apple’s current lineup, should support the current version of Sequoia (as of this writing, 15.7.4, with Safari 26.3).

As a rule of thumb, Macs will not run any version of macOS older than the one they shipped with when they launched. Apple provides security updates for older versions of macOS, but it doesn’t bother backporting drivers and other hardware support from newer versions to older ones.

The only Mac to launch since Tahoe was released is the M5 MacBook Pro, so owners of that system will need Tahoe or newer. If Apple puts out new Macs in early March as expected, those Macs will also only work with Tahoe or newer, and downgrades won’t be possible.

Although we’re mainly talking about new Macs here, these steps should all be identical for any Apple Silicon Mac, from the original M1 computers on up. If you buy a recently used Mac that ships with Tahoe installed, a downgrade still works the same way. We won’t cover the steps for installing anything on an Intel Mac—vanishingly few of them support Tahoe in the first place, and most people certainly shouldn’t be buying them at this late date.

Option one: A bootable USB installer

Apple hasn’t shipped physical install media for macOS in 15 years, but each downloadable installer still includes the bits you need to make a bootable USB install drive. And while late-Intel-era Macs with Apple T2 chips briefly made booting from external media kind of a pain, Apple Silicon Macs will boot from a USB drive just as easily and happily as early Intel-era Macs did.

This method will be the easiest for most people because it only requires you to own a single Mac—the one you’re downgrading.

Create the USB installer

Downloading the Sequoia installer through Software Update. Downloading this way serves as an additional compatibility check; your Mac won’t download any version of macOS too old for it to run.

Credit: Andrew Cunningham


To make a USB installer, you’ll need a 32GB or larger USB flash drive and the downloadable macOS Sequoia installer. A 16GB drive was large enough for macOS for many years, but Sequoia and Tahoe are too large by a couple of gigabytes.

Apple’s support page links to every downloadable macOS installer going back to 2011’s 10.7 Lion. In Tahoe, the macOS Sequoia link takes you to the App Store, which then bounces you to Software Update in the Settings app. This process has enough points of failure that it may not work the first time; try clicking the “Get” button in the App Store again and it usually goes.

If you’re downloading the installer from within macOS Tahoe, you’ll see a pop-up when the download completes, telling you that the installer can’t be run from within that version of macOS. Since we’ll be running it off of its own USB stick, you can safely ignore this message.

While the installer is downloading, insert and prepare your USB drive. Open Disk Utility, click the View button, and select “Show All Devices.” Click the root of your USB drive under the “External” header in the left sidebar, and click the Erase button in the upper-right control area.

Change the disk’s name to whatever you want—I use “MyVolume” so I don’t have to change Apple’s sample terminal commands when copying the installer files—and make sure the Format is set to Mac OS Extended (Journaled) and the Scheme is set to GUID Partition Map. (That’s not an error; the macOS installer still wants an HFS+ filesystem rather than APFS.)
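If you prefer the command line, the same erase can be done with `diskutil`. Treat the following as a sketch rather than something to paste blindly: the identifier `disk4` is an assumption, and the real command wipes whatever disk it's pointed at, so it's echo-guarded here.

```shell
# Terminal equivalent of the Disk Utility steps above. DESTRUCTIVE once the
# echo guard is removed. "disk4" is an assumed identifier -- confirm yours
# with `diskutil list external` before doing anything.
USB_DISK="disk4"
# JHFS+ = Mac OS Extended (Journaled); GPT = GUID Partition Map
echo diskutil eraseDisk JHFS+ MyVolume GPT "$USB_DISK"
# Drop the leading `echo` only after double-checking the identifier.
```

Once the guard is removed, this reformats the entire stick as a single HFS+ volume named MyVolume, ready for the installer-creation step below.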

The handy thing is that if you have a larger USB drive, you can create installers for multiple macOS versions by partitioning the disk with the Partition button. A 64GB drive split into three ~21GB partitions could boot Tahoe, Sequoia, and another past or future macOS version; I just have it split into two volumes so I can boot Sequoia and Tahoe installers from the same drive.
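The two-way split described above can also be sketched with `diskutil partitionDisk`. As before, `disk4` and the volume names are assumptions, and the command is echo-guarded because it erases the whole drive when run for real.

```shell
# Split an assumed external drive "disk4" into two HFS+ installer volumes.
# DESTRUCTIVE once the echo guard is removed.
USB_DISK="disk4"
echo diskutil partitionDisk "$USB_DISK" 2 GPT \
  JHFS+ SequoiaUSB 50% \
  JHFS+ TahoeUSB 50%
```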

Running the Terminal command to create our macOS 15 Sequoia boot drive.

Credit: Andrew Cunningham


Once the Sequoia installer is in your Applications folder, run a Terminal command to copy the installer files. Apple has commands for each version of macOS on this page. Use this one for Sequoia:

sudo /Applications/Install\ macOS\ Sequoia.app/Contents/Resources/createinstallmedia --volume /Volumes/MyVolume

If you named the USB drive something other than MyVolume when you formatted it, change the name in the command as well. Note that names with spaces require a backslash before each space.
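To illustrate the backslash note, here are the two equivalent ways of writing a path containing a space in the shell (the volume name "My Volume" is hypothetical):

```shell
# Two equivalent ways to reference a hypothetical volume named "My Volume":
echo /Volumes/My\ Volume     # backslash-escaped space
echo "/Volumes/My Volume"    # quoted -- same result
```

Both lines print `/Volumes/My Volume`; without the backslash or quotes, the shell would treat `My` and `Volume` as two separate arguments.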

The Terminal will prompt you for your password and ask you to type Y to confirm. It will then reformat the drive and copy the files over. The time this takes will vary depending on the speed of the USB drive you’re using, but for most USB 3 drives, it should only take a few minutes to create the installer. When the Terminal command is done running, leave the disk inserted and shut down your Mac.

Using the USB installer

With your Mac powered down and the USB installer drive inserted, press and hold the power button on your Mac (the Touch ID button on any laptop or the dedicated power button on a desktop) until the text under the Apple logo changes to “loading startup options.” You should see the macOS Sequoia installer listed alongside Macintosh HD as a boot option; highlight it and click Continue. If you don’t see the Sequoia installer, you may need an extra step—highlight Options, then click Continue, and we’ll talk more about this momentarily.

Once booted, the Sequoia installer will automatically launch the macOS installer to do an in-place upgrade, which isn’t what we want. Hit Command+Q to quit the installer and click through the confirmation, and you’ll get the typical menu of recovery environment options; from here, launch Disk Utility, click the top level of the internal Macintosh HD disk, and click Erase. Click through the prompts to erase the Mac and restart.

My own macOS USB installer from my beloved Micro Center.

Credit: Andrew Cunningham


After the Mac restarts, you’ll need an Internet connection to activate it before you can do anything else with it; connect using the Wi-Fi menu in the top-right, typing in your network SSID and password manually if the menu doesn’t auto-populate. This will activate your Mac and get you back to the recovery environment menu.

Here, select the Sequoia installer and click through the prompts—you should be able to install Sequoia on the now-empty Macintosh HD volume with no difficulty. From here, there’s nothing else to do. Wait until the installation completes, and when it’s ready, it will boot into a fresh Sequoia install, ready to be set up.

If you didn’t see your Sequoia installer in the boot menu before and you clicked the Options gear instead, it usually means that FileVault encryption or Find My was enabled on the Mac—maybe you signed into your Apple account when you were initially setting up Tahoe before you decided you wanted nothing to do with it.

When you boot into the recovery environment, you’ll be asked to select a user you know the password for, which will unlock the encrypted disk. If all you want to do is erase the Mac and make it bootable from your USB stick, don’t worry about this; just select Recovery Assistant from the menu, select Erase Mac, and click through the prompts. Then, use the steps above to boot from your USB stick, and you should be able to install a fresh copy of whatever macOS version you want to the now-erased internal drive.

The nuclear option: A DFU restore

Normally, a bootable USB installer does everything you need it to do. It wipes the data from your Mac’s internal storage and replaces it with new data. But occasionally you need to drill a little deeper, either because your Mac becomes unresponsive or you’ve been running beta software and want to switch back to a stable release. Or just because other steps haven’t worked for you.

The nuclear option for resetting a Mac is a DFU (or Device Firmware Upgrade) restore. Based on the restore process for iPhones and iPads, a DFU restore uses a compressed IPSW archive that contains not only the macOS system files but also firmware files for all Apple Silicon Macs. The USB installer just replaces macOS; the DFU restore replaces everything from the firmware on up. (These are also the same files used to create macOS virtual machines using Apple’s Virtualization Framework.)

Because a DFU restore can only be performed on a Mac that’s booted into a special DFU mode, you’ll need a second Mac with a USB-C or Thunderbolt port, plus a USB-C cable. Apple says the USB-C charging cable included with Macs will work for this but not to use a Thunderbolt cable; I’ve used a generic USB-C cable, and it has worked fine.

The first step is to download the relevant IPSW file from Apple. This page on the Mr. Macintosh site is the one I have bookmarked because it’s a good repository of virtually every macOS IPSW file Apple has ever released, including beta versions for when those are useful.

First, download the macOS 15.6.1 IPSW file linked on that page to your host Mac (Apple stops releasing IPSW files for older OSes once newer ones have been released, so this is the newest file you’ll be able to get for macOS 15). Both iPhones and iPads have model-specific IPSW files, but for macOS, there’s just one image that works with all Macs.
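Since the IPSW is a multi-gigabyte download from a third-party mirror, it's worth confirming the file actually finished downloading before starting a restore. A small sketch, with an assumed filename and location; substitute the path of whatever you actually saved:

```shell
# Sanity-check the download before restoring. The path below is an assumption;
# point it at the IPSW file you actually downloaded.
IPSW="$HOME/Downloads/UniversalMac_15.6.1_Restore.ipsw"
if [ -f "$IPSW" ]; then
  ls -lh "$IPSW"          # a complete image is many gigabytes; a truncated one will be obviously short
  shasum -a 256 "$IPSW"   # record the checksum in case a restore fails oddly
else
  echo "missing: $IPSW"
fi
```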

On the Mac you’re trying to restore—we’ll call it the “target Mac” for simplicity’s sake—figure out which of its USB-C ports is the designated DFU port. There’s only one that will work, and it’s usually the leftmost or rightmost port. Plug one end of the USB-C cable into that DFU port and the other into any USB-C port on your host Mac and follow Apple’s instructions for how to boot the system into DFU mode.

A Mac in DFU mode will need permission before your Mac can work with it.

Credit: Andrew Cunningham


When it’s successfully booted into DFU mode, your host Mac will see the target Mac, and you’ll see the same notification you get any time you plug in USB accessories for the first time. Allow it to connect, open a Finder window, and scroll down the left-hand sidebar until you get to “Mac” under the Locations heading.

The Finder’s DFU interface is pretty simple—a picture, a line of text, and two buttons. We want to restore, not revive, the Mac. Clicking the Revive Mac button will normally download and install the latest macOS version from Apple. But you can force it to use a different IPSW file—like the Sequoia one we just downloaded—by holding down the Option key as you click it. Navigate to the IPSW file, open it, and allow the restore process to begin.

This will take some time; you can track progress in the first phase in the Finder window. After a few minutes, the Mac you’re restoring will light back up, and you can watch its progress there. Once the target Mac reboots with its signature chime, the process is complete.

Because the IPSW file is for an outdated version of Sequoia, the first thing you’ll want to do is hit Software Update for the latest Sequoia and Safari versions; you’ll be offered a Tahoe upgrade, but you obviously won’t want to do that after the trouble you just went through. Scroll down to “other updates,” and you’ll be offered all the non-Tahoe updates available.

Downgrader’s remorse?

You will run into a handful of downsides when running an older version of macOS, especially if you’re trying to use it with iPhones and/or iPads that have been updated to version 26.

Most of the awkwardness will involve new features introduced in Messages, Notes, Reminders, and other Apple apps that sync between devices. The Messages app in Sequoia doesn’t support background images or polls, and it handles spam filtering slightly differently. They’re minor absences and annoyances, mostly, but they’re still absences and annoyances.

At least for the time being, though, you’ll find Sequoia pretty well-supported by most of Apple’s ecosystem. Core services like iCloud and iMessage aren’t going anywhere; Xcode still supports Sequoia, as does every Apple Creator Studio app update aside from the new Pixelmator Pro. App support may eventually drop off, but there’s not a lot that requires the latest and greatest version of macOS.

If and when you decide it’s time to step up to a newer version of macOS, Tahoe (or whatever macOS 27 is called) will be there in Software Update waiting for you. You’ll need to install a new version eventually if you want to keep getting app updates and security patches. But you don’t have to yet.

Photo of Andrew Cunningham

Andrew is a Senior Technology Reporter at Ars Technica, with a focus on consumer tech including computer hardware and in-depth reviews of operating systems like Windows and macOS. Andrew lives in Philadelphia and co-hosts a weekly book podcast called Overdue.

How to downgrade from macOS 26 Tahoe on a new Mac Read More »

google-reveals-nano-banana-2-ai-image-model,-coming-to-gemini-today

Google reveals Nano Banana 2 AI image model, coming to Gemini today

With Nano Banana 2, Google promises consistency for up to five characters at a time, along with accurate rendering of as many as 14 different objects per workflow. This, along with richer textures and “vibrant” lighting, will aid in visual storytelling with Nano Banana 2. Google is also expanding the range of available aspect ratios and resolutions, from 512px square up to 4K widescreen.

So what can you do with Nano Banana 2? Google has provided some example images with associated prompts. These are, of course, handpicked images, but Nano Banana has been a popular image model for good reason. This degree of improvement seems believable based on past iterations of Nano Banana.

Google AI infographic

Prompt: High-quality flat lay photography creating a DIY infographic that simply explains how the water cycle works, arranged on a clean, light gray textured background. The visual story flows from left to right in clear steps. Simple, clean black arrows are hand-drawn onto the background to guide the viewer’s eye. The overall mood is educational, modern, and easy to understand. The image is shot from a top-down, bird’s-eye view with soft, even lighting that minimizes shadows and keeps the focus on the process.

Credit: Google


AI museum comparison

Prompt: Create an image of Museum Clos Lucé. In the style of bright colored Synthetic Cubism. No text. Your plan is to first search for visual references, and generate after. Aspect ratio 16:9.

Credit: Google


AI farm image

Create an image of these 14 characters and items having fun at the farm. The overall atmosphere is fun, silly and joyful. It is strictly important to keep identity consistent of all the 14 characters and items.

Credit: Google


Google must be pretty confident in this model’s capabilities because it will be the only one available going forward. Starting now, Nano Banana 2 will replace both the standard and Pro variants of Nano Banana across the Gemini app, search, AI Studio, Vertex AI, and Flow.

In the Gemini app and on the website, Nano Banana 2 will be the image generator for the Fast, Thinking, and Pro settings. It’s possible there will eventually be a Nano Banana 2 Pro—Google tends to release elements of new model families one at a time. For now, it’s all “Flash” Image.

Google reveals Nano Banana 2 AI image model, coming to Gemini today Read More »

a-non-public-document-reveals-that-science-may-not-be-prioritized-on-next-mars-mission

A non-public document reveals that science may not be prioritized on next Mars mission

The way this document is written suggests that when NASA scores bidders for the Mars Telecommunications Network, the addition of a camera or other scientific payloads won't count as a net positive; if those payloads pose an overall risk to the mission, however, they could count as a net negative.

New award to Rocket Lab may complicate things

One of the other intriguing parts of this mission is that it sets up a battle royale of sorts for some of NASA’s most prominent contractors. Rocket Lab and Blue Origin have both waged very public campaigns that tout their solutions to NASA’s needs. SpaceX is also interested in winning a Mars mission for its Starship launch system. Then there are traditional contractors, such as Lockheed Martin, which have a long and storied history of building robust (if costly) Mars missions.

If NASA is going to launch the Mars Telecommunications Network by late 2028 to make the next “window” to the red planet, it must move quickly with this solicitation. In particular, industry protests after a decision is made could hold up the project for months and would almost certainly doom NASA’s hopes of making the 2028 launch opportunity.

On Monday, the space agency awarded Rocket Lab a $390,936 contract to study “Mars End-to-End Communication Service Architectures.” The award is not significant monetarily, but it does indicate that NASA is interested in Rocket Lab’s ideas for improving communications between Earth and Mars, and potentially a Mars Sample Return mission down the road. However, one source suggested to Ars that the award is a potential conflict of interest.

The contracting office for the Rocket Lab award is Goddard Space Flight Center, which is also responsible for managing the Mars Telecommunications Network. That Rocket Lab alone received an award like this from the NASA center that will also decide on the orbiting spacecraft, at the very time such a decision is being made, is sure to be the basis of one or more protests should Rocket Lab win the Mars Telecommunications Network contract, the source told Ars.

A non-public document reveals that science may not be prioritized on next Mars mission Read More »

15-state-attorneys-general-sue-rfk-jr.-over-“anti-science”-vaccine-policy

15 state attorneys general sue RFK Jr. over “anti-science” vaccine policy


This administration may be hazardous to your health

Trump administration’s reduced vaccine schedule “throws science out the window.”

A healthcare worker receives a Pfizer-BioNtech Covid-19 vaccine at Jackson Memorial Hospital on December 15, 2020 in Miami, Florida. Credit: Getty Images | Joe Raedle

Scientists have long warned that a warming world is likely to hasten the spread of infectious diseases, making vaccination even more critical to safeguard public health.

And though most scientists hail vaccines as one of public health’s greatest achievements, they have provoked fear, distrust, and contentious resistance since Edward Jenner invented the first vaccine, to prevent smallpox, in the late 1700s.

Yet, until now, the United States never installed an outspoken vaccine critic like Robert F. Kennedy Jr. as a top health official with the power to upend federal childhood vaccine recommendations. Health and Human Services Secretary Kennedy and other top officials in the Trump administration have waged an “unprecedented attack on the nation’s evidence-based childhood immunization schedule,” a lawsuit, filed by 15 states, charged on Tuesday. Their actions will make people sicker and strain state resources, the suit claims.

A coalition of 14 attorneys general and Pennsylvania Governor Josh Shapiro, led by California Attorney General Rob Bonta and Arizona Attorney General Kris Mayes, is suing Kennedy, who has long promoted debunked theories linking vaccines to autism, as well as HHS, the Centers for Disease Control and Prevention, and its acting director, Jay Bhattacharya.

The multistate coalition is suing the agencies and their leaders, Mayes said in a press briefing Tuesday, “over their needlessly confusing, scientifically unsound, and unlawful revision of America’s immunization schedule.”

The suit also challenges Kennedy’s abrupt firing and “unlawful replacement” of 17 experts on the Advisory Committee on Immunization Practices (ACIP), which recommends which vaccines children and adults should receive, “with unqualified individuals whose minority anti-vaccine views align with Kennedy’s.”

In January, the CDC, with advice from the reconstituted ACIP, took seven childhood shots off the list of vaccines routinely recommended for all children, rescinding the CDC’s established guidance that vaccines protecting against rotavirus, meningococcal disease, hepatitis A, hepatitis B, influenza, COVID-19, and respiratory syncytial virus should be universally administered.

All the “demoted” vaccines, as the lawsuit calls them, prevent diseases that carry the risk of death. The January CDC memo recommends that parents consult with doctors for these vaccines, “taking the risk profile of each unique child into account.”

It does not make provisions for the millions of Americans who lack access to health providers who would provide such consultations.

ACIP’s vaccine recommendations have traditionally guided US health insurance coverage decisions, state school vaccine requirements, and physicians’ advice to parents and patients, Bonta said at the briefing. But Kennedy fired all the voting ACIP members four months after he promised Congress during his confirmation hearing that he’d leave the panel intact, Bonta said, noting that the suit is the 59th California has filed against the second Trump administration.

Kennedy said his unprecedented removal of the ACIP experts was “prioritizing the restoration of public trust above any specific pro- or anti-vaccine agenda,” in a press release in June.

Yet Kennedy’s picks include vaccine skeptics who “lack the requisite scientific knowledge and expertise to advise HHS and CDC on the ‘use of vaccines and related agents for effective control of vaccine-preventable diseases,’” as required by the committee’s charter, the suit argues.

“What Secretary Kennedy has done and what the Trump administration has enabled, throws science out the window, replaces qualified experts with unqualified ideologues, and then uses the resulting confusion to undermine public confidence in vaccines that have saved millions of lives,” Mayes said.

Stoking vaccine doubts leads to lower vaccination rates, which leads to more disease outbreaks—such as the hundreds of measles cases reported in 26 states over the past two months—more children in hospitals, and greater strain on state Medicaid systems and public health infrastructure, Mayes said.

Democratic states are doing everything they can to fill the gaps left by this administration’s policies, she said. “But diseases cross state lines.”

Sowing doubt and confusion

The administration cited Denmark’s more limited childhood immunization schedule to justify its changes, but the Scandinavian country has fewer circulating infectious diseases and universal health care for a population that is tiny compared to the United States, the suit notes.

“Copying Denmark’s vaccine schedule without copying Denmark’s healthcare system doesn’t give families more options,” Mayes said, noting that millions of Americans lack access to health care, particularly in rural areas. “It just leaves kids unprotected from serious diseases.”

Inside Climate News asked HHS how it will ensure that parents without access to health care get their children the vaccines they need and how the administration plans to protect vulnerable populations as climate change fuels the spread of infectious diseases.

“This is a publicity stunt dressed up as a lawsuit,” said HHS press secretary Emily Hilliard, ignoring the questions. “By law, the health secretary has clear authority to make determinations on the CDC immunization schedule and the composition of the Advisory Committee on Immunization Practices. The CDC immunization schedule reforms reflect common-sense public health policy shared by peer, developed countries.”

The revised childhood immunization schedule wasn’t based on new science or expert consensus, Mayes said. “It was based on an ideological agenda, one that Secretary Kennedy has been pushing for years.”

Kennedy has been at the forefront of a dangerous movement that has significantly eroded trust in safe and effective vaccines, Bonta said. “While RFK Jr. is entitled to his own personal opinions, opinions, mind you, not facts, he isn’t entitled to use his opinions as the basis for breaking the law and endangering our children.”

The actions that RFK Jr. and ACIP have taken flout decades of scientific research, harm public health, and strain state resources by sowing doubt and confusion in vaccines and in science, Bonta said.

“California will be forced to expend resources to treat once rare diseases, to respond to outbreaks, and to combat misinformation,” he said. “I refuse to allow RFK Jr. to threaten the health and well-being of the more than eight million young people who call the Golden State home, the 400,000 babies that are born here in California each year.”

Routine childhood vaccinations will prevent approximately 508 million cases of illness, 32 million hospitalizations, and 1,129,000 deaths among US children born between 1994 and 2023, scientists with the CDC reported in August 2024, before Donald Trump returned to office. The immunizations resulted in direct savings of $540 billion and societal savings of $2.7 trillion, they concluded.

“Without these vaccines, not only will our children and vulnerable individuals get sick, but our healthcare systems will have to shoulder the burden of increased preventable illnesses, preventable hospital visits, and avoidable costs,” Bonta said. “Vaccines save lives and save our states money. To get rid of them is illogical and unconscionable.”

Climate-fueled outbreaks

Two weeks before Bonta filed his latest lawsuit against the Trump administration, he denounced the Environmental Protection Agency’s repeal of the 2009 endangerment finding that recognized climate change as a threat to public health and welfare and provided the legal grounds to regulate greenhouse gases under the Clean Air Act.

The Trump administration’s endangerment finding rescission, like its overhaul of the vaccine schedule, “is completely divorced from and untethered from science and facts and data and evidence,” Bonta said at the briefing Tuesday, noting that California will continue to push back against the EPA’s action.

“We must follow the facts, the science, the evidence and data, including the interconnectivity between climate change and the spread of vaccine-preventable diseases,” Bonta said.

Climate hazards such as drought, floods, and heatwaves have exacerbated outbreaks of more than half of human infectious diseases, researchers reported in Nature Climate Change in 2022, either by impairing people’s resistance or bolstering transmission of pathogens. The team warned that the number of pathogenic diseases and transmission pathways worsened by climatic hazards “are too numerous for comprehensive societal adaptations,” underscoring the urgent need to address the source of the problem: greenhouse gases.

Arizona is seeing more extreme heat events as a result of climate change, leaving people with underlying conditions at greater risk of heat-related illness and death.

“A lack of vaccines, a lack of access to vaccines starting at birth, will make our population sicker and more vulnerable to extreme heat and to climate-related disasters,” Mayes said. “And that will be sort of a self-perpetuating cycle where you have a less healthy population that is less capable of withstanding the impacts of climate change, and then you have climate change that is expanding and growing ever-more dangerous, having a greater and greater impact on a less healthy society.”

The only bodies that are capable of providing scientific guidance and advice on vaccines to the entire country are the CDC and ACIP, Mayes said. “And we now basically don’t have that across a number of these diseases and vaccines,” she said. “So we’re not protected, and we’re going to continue to see these outbreaks across the country, including in our states, even though we’re doing everything we can to protect ourselves.”

Liza Gross is a reporter for Inside Climate News based in Northern California. She is the author of The Science Writers’ Investigative Reporting Handbook and a contributor to The Science Writers’ Handbook, both funded by National Association of Science Writers’ Peggy Girshman Idea Grants. She has long covered science, conservation, agriculture, public and environmental health and justice with a focus on the misuse of science for private gain. Prior to joining ICN, she worked as a part-time magazine editor for the open-access journal PLOS Biology, a reporter for the Food & Environment Reporting Network and produced freelance stories for numerous national outlets, including The New York Times, The Washington Post, Discover, and Mother Jones. Her work has won awards from the Association of Health Care Journalists, American Society of Journalists and Authors, Society of Professional Journalists NorCal, and Association of Food Journalists.

This story originally appeared on Inside Climate News.

15 state attorneys general sue RFK Jr. over “anti-science” vaccine policy Read More »

musk-has-no-proof-openai-stole-xai-trade-secrets,-judge-rules,-tossing-lawsuit

Musk has no proof OpenAI stole xAI trade secrets, judge rules, tossing lawsuit


Hostility is not proof of theft

Even twisting an ex-employee’s text to favor xAI’s reading fails to sway judge.

Elon Musk appears to be grasping at straws in a lawsuit accusing OpenAI of poaching eight xAI employees in an allegedly unlawful bid to access xAI trade secrets connected to its data centers and chatbot, Grok.

In a Tuesday order granting OpenAI’s motion to dismiss, US District Judge Rita F. Lin said that xAI failed to provide evidence of any misconduct by OpenAI.

Instead, xAI seemed fixated on a range of alleged conduct of former employees. But in assessing xAI’s claims, Lin said that xAI failed to show proof that OpenAI induced any of these employees to steal trade secrets “or that these former xAI employees used any stolen trade secrets once employed by OpenAI.”

Two employees admitted to stealing confidential information, with both downloading xAI’s source code and one improperly grabbing a supposedly sensitive recording from a Musk “All Hands” meeting. But the rest were either accused of keeping seemingly less consequential data, like work chats on their devices, or didn’t seem to hold any confidential information at all. Lin called out particularly weak arguments, noting that xAI’s own complaint acknowledged that one employee OpenAI poached never received access to the confidential information he allegedly sought after exiting xAI, and that two employees lumped into the complaint “simply left xAI for OpenAI.”

From the limited evidence, Lin concluded that “while xAI may state misappropriation claims against a couple of its former employees, it does not state a plausible misappropriation claim against OpenAI.”

Lin’s order will likely not be the end of the litigation, as she is allowing xAI to amend its complaint to address the current deficiencies.

Ars could not immediately reach xAI for comment, so it’s unclear what steps xAI may take next.

However, xAI seems unlikely to give up the fight, which OpenAI has alleged is part of a “harassment campaign” that Musk is waging through multiple lawsuits attacking his biggest competitor’s business practices.

Unsurprisingly, OpenAI celebrated the order on X, alleging that “this baseless lawsuit was never anything more than yet another front in Mr. Musk’s ongoing campaign of harassment.”

Other tech companies poaching talent for AI projects will likely be relieved while reading Lin’s order. Commercial litigator Sarah Tishler told Ars that the order “boils down to a fundamental concept in trade secret law: hiring from a competitor is not the same as stealing trade secrets from one.”

“Under the Defend Trade Secrets Act, xAI has to show that OpenAI actually received and used the alleged trade secrets, not just that it hired employees who may have taken them,” Tishler said. “Suspicious timing, aggressive recruiting, and even downloaded files are not enough on their own.”

Tishler suggested that the ruling will likely be welcomed by AI firms eager to secure the best talent without incurring legal risks from their hiring practices.

“In the AI industry, where talent moves fast and the competitive stakes are enormous, this ruling reaffirms that suspicion is not enough,” Tishler said. “You have to show the stolen information actually made it into the competitor’s hands and was put to use.”

OpenAI not liable for engineers swiping source code

Through the lawsuit, Musk has alleged that OpenAI is violating California’s unfair competition law. He claims that OpenAI is attempting “to destroy legitimate competition in the AI industry by neutralizing xAI’s innovations” and forcing xAI “to unfairly compete against its own trade secrets.”

But this claim hinges entirely upon xAI proving that OpenAI poached its employees to steal its trade secrets. So, for xAI’s lawsuit to proceed, xAI will need to beef up the evidence base for its other claim, that OpenAI has violated the federal Defend Trade Secrets Act, Lin said. To succeed on that, xAI must prove that OpenAI unlawfully acquired, disclosed, or used a trade secret without xAI’s consent.

That will likely be challenging because xAI, at this point, has not offered “any nonconclusory allegations that OpenAI itself acquired, disclosed, or used xAI’s trade secrets,” Lin wrote.

All xAI has claimed is that OpenAI induced former employees to share secrets, and so far, nothing backs that claim, Lin said. Tishler noted that the court also rejected an xAI theory that “OpenAI should be responsible for what its new hires did before they arrived” for “the same reason: without evidence that OpenAI directed the theft or actually put the stolen information to use, you cannot hold the company liable.”

The strongest evidence that xAI had of employee misconduct, allegedly allowing OpenAI to misappropriate xAI trade secrets, revolves around the departure of one of xAI’s earliest engineers, Xuechen Li.

That evidence wasn’t enough, Lin said. xAI alleged that Li gave a presentation to OpenAI that supposedly included confidential information. Li also uploaded “the entire xAI source code base to a personal cloud account,” which he had connected to ChatGPT, Lin noted, after a recruiter sent a message on Signal sharing a link with Li to another unrelated cloud storage location.

xAI hoped the Signal messages would shock the court, expecting it to read between the lines the way xAI did. As proof that OpenAI allegedly got access to xAI’s source code, xAI pointed to a Signal message that an OpenAI recruiter sent to Li “four hours after” Li downloaded the source code, saying “nw!” xAI has alleged this message is shorthand for “no way!”—suggesting the OpenAI recruiter was geeked to get access to xAI’s source code. But in a footnote, Lin said that “OpenAI insists that ‘nw’ means ‘no worries,’” and thus is unconnected to Li’s decision to upload the source code to a ChatGPT-linked cloud account.

Even interpreting the text using xAI’s reading, however, xAI did not show enough to prove the recruiter or OpenAI accessed or requested the files, Lin said.

It also didn’t help xAI’s case that a temporary injunction that xAI secured in a separate lawsuit targeting the engineer blocked Li from accepting a job at OpenAI.

That injunction led OpenAI to withdraw its job offer to Li. And that’s a problem for xAI: because Li never worked at OpenAI, he could not have used xAI’s trade secrets while working for OpenAI.

Further weakening xAI’s arguments, if Li indeed shared confidential information during his presentation while interviewing for OpenAI, xAI has alleged no facts suggesting that OpenAI was aware Li was sharing xAI trade secrets, Lin wrote.

This “makes it very hard to argue OpenAI ever used anything he allegedly took,” Tishler told Ars.

Another former xAI engineer, Jimmy Fraiture, was accused of copying xAI trade secrets, but Fraiture has said he deleted the information he improperly downloaded before starting his job at OpenAI. Importantly, Lin said, since he joined OpenAI, there’s no evidence that he used xAI trade secrets to benefit xAI’s rival.

“Other than the bare fact that Fraiture had been recruited” by the same OpenAI employee “who had also recruited Li, xAI does not allege any facts indicating that OpenAI had encouraged Fraiture to take xAI’s confidential information in the first place,” Lin wrote.

Since “none of the other former employees allegedly shared with or disclosed to OpenAI any xAI trade secrets,” xAI could not advance its claim that OpenAI misappropriated trade secrets based only on allegations tied to Li and Fraiture’s supposed misconduct, Lin said.

xAI may be able to amend its complaint to maintain these arguments, but the company has thus far presented scant, purely circumstantial evidence.

It’s possible that xAI will secure more evidence to support its misappropriation claims against OpenAI in its ongoing lawsuit against Li. Ars could not immediately reach Li’s lawyer to find out if today’s ruling may impact that case.

Ex-executive’s “hostility” is not proof of theft

Among the least convincing arguments that xAI raised was a claim that an unnamed finance executive left xAI to take a “lesser role” at OpenAI after learning everything he knew about data centers from xAI.

That executive slighted xAI when Musk’s company later attempted to inquire about “confidentiality concerns.”

“Suck my dick,” the former xAI executive allegedly said, refusing to explain how his OpenAI work might overlap with his xAI position. “Leave me the fuck alone.”

xAI tried to argue that the executive’s hostility was proof of misconduct. But Lin wrote that xAI only alleged that the executive “merely possessed xAI trade secrets about data centers” and did not allege that he ever used trade secrets to benefit OpenAI.

Had xAI found evidence that OpenAI’s data center strategy suddenly mirrored xAI’s after the executive joined xAI’s rival, that may have helped xAI’s case. But there are plenty of reasons a former employee might reject an ex-employer’s outreach following an exit, Lin suggested.

“His hostility when xAI reached out about its confidentiality concerns also does not support a plausible inference of use,” Lin wrote. “Hostility toward one’s former employer during departure does not, without more, indicate use of trade secrets in a subsequent job. Nor does the executive’s lack of experience with AI data centers before his time at xAI, without more, support a plausible inference that he used xAI’s trade secrets at OpenAI.”

xAI has until March 17 to amend its complaint to keep up this particular fight against OpenAI. But the company won’t be able to add any new claims or parties, Lin noted, “or otherwise change the allegations except to correct the identified deficiencies.”

Criminal probe likely leaves OpenAI on pins and needles

For Li, the engineer accused of disclosing xAI trade secrets to OpenAI, the litigation could eliminate one front of discovery as he navigates two other legal fights over xAI’s trade secrets claims.

Tishler has been closely monitoring xAI’s trade secret legal battles. In October, she noted that Li is in a particularly prickly position, facing pressure in civil litigation from Musk to turn over data that could be used against him in the Federal Bureau of Investigation’s criminal investigation into Musk’s allegations. As Tishler explained:

“The practical reality is stark: Li faces a choice between protecting himself in the criminal action with his silence, and the civil consequences of doing so. Refuse to answer, and xAI could argue adverse inferences; answer, and the responses could feed the criminal case.”

Ultimately, the FBI is trying to prove that Li stole information that qualified as a trade secret and intended to use it for OpenAI’s benefit, while knowing that it would harm xAI. If it succeeds, “xAI would suddenly have a government-backed record that its trade secrets were stolen,” Tishler wrote.

If xAI were so armed and able to keep the OpenAI lawsuit alive, the central question in the lawsuit that Lin dismissed today would shift, Tishler suggested, from “was there a theft?” to “what did OpenAI know, and when did it know it?”

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

Musk has no proof OpenAI stole xAI trade secrets, judge rules, tossing lawsuit Read More »

boozy-chimps-fail-urine-test,-confirm-hotly-debated-theory

Boozy chimps fail urine test, confirm hotly debated theory

The urine of chimpanzees contains high levels of alcohol byproduct, most likely because the chimps regularly gorge themselves on fermented fruit, according to a new paper published in the journal Biology Letters. It’s the latest evidence in support of a hotly debated theory regarding the evolutionary origins of human fondness for alcohol.

As previously reported, in 2014, University of California, Berkeley (UCB) biologist Robert Dudley wrote a book called The Drunken Monkey: Why We Drink and Abuse Alcohol. His controversial “drunken monkey hypothesis” proposed that the human attraction to alcohol goes back about 18 million years, to the origin of the great apes, and that social communication and food sharing evolved in part to better identify the presence of fermenting fruit from a distance. At the time, skeptical scientists insisted that this was unlikely because chimpanzees and other primates just don’t eat fermented fruit or nectar.

But reports of primates doing just that have grown in the years since. Earlier this year, we reported that researchers had caught wild chimpanzees on camera sharing what appeared to be fermented African breadfruit with measurable alcoholic content. That observational data was the first evidence of the sharing of alcoholic foods among nonhuman great apes in the wild. The authors measured the alcohol content of the fruit with a handy portable breathalyzer and found that almost all of the fallen fruit (90 percent) contained some ethanol, with the ripest containing the highest levels—the equivalent of 0.61 percent ABV (alcohol by volume).

And last September, Dudley co-authored a paper reporting the first measurements of the ethanol content of fruits favored by chimps in the Ivory Coast and Uganda, finding that chimps consume 14 grams of alcohol per day, the equivalent of a standard alcoholic drink in the US. After adjusting for the chimps’ lower body mass, the authors concluded the chimps are consuming nearly two drinks per day.
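The per-day arithmetic above can be sketched numerically. In this minimal sketch, the 14-gram daily total and the 14-gram US standard-drink definition come from the article; the daily fruit intake (~4.5 kg), the ~0.4 percent ABV, and the 40 kg chimp vs. 70 kg human body masses are illustrative round numbers, not figures from the paper.

```python
# Back-of-envelope sketch of the chimp ethanol arithmetic described above.
# Assumed values (fruit intake, ABV, body masses) are hypothetical.

US_STANDARD_DRINK_G = 14.0  # grams of ethanol in one US standard drink


def ethanol_grams(fruit_kg: float, abv_percent: float) -> float:
    """Ethanol (g) in fruit_kg of fruit at abv_percent alcohol by volume.

    Treats 1 kg of fruit pulp as roughly 1 L and uses ethanol's
    density of ~0.789 g/mL.
    """
    volume_ml = fruit_kg * 1000.0
    return volume_ml * (abv_percent / 100.0) * 0.789


def drink_equivalents(ethanol_g: float, body_mass_kg: float,
                      reference_mass_kg: float = 70.0) -> float:
    """Standard-drink equivalents, scaled for body mass vs. a reference human."""
    return (ethanol_g / US_STANDARD_DRINK_G) * (reference_mass_kg / body_mass_kg)


# ~4.5 kg of fruit/day at ~0.4% ABV (both assumed) yields roughly 14 g:
daily_g = ethanol_grams(4.5, 0.4)  # ≈ 14.2 g

# Unadjusted, 14 g is one US drink; scaled for a ~40 kg chimp (assumed mass),
# it is closer to the article's "nearly two drinks per day":
print(round(drink_equivalents(14.0, 40.0), 2))  # → 1.75
```

The body-mass scaling is the key step: the same absolute dose of ethanol is a proportionally larger dose for a smaller animal, which is how 14 grams becomes "nearly two drinks."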

A thankless task

The next step was to sample the chimps’ urine to see if it contains any alcohol metabolites, as was found in a 2022 study on spider monkeys. This would further refine estimates for how much ethanol-laden fruit the chimps eat every day. That thankless task fell to Aleksey Maro, a UCB graduate student who spent last summer in Ngogo, stationed beneath chimps sleeping in trees—protected from the constant streams by an umbrella—to collect urine samples. Sharifah Namaganda, a Ugandan graduate student at the University of Michigan, showed him how to make shallow bowls out of plastic bags hung on forked twigs for more efficient collection. He also collected samples from puddles of urine on the forest floor.

Boozy chimps fail urine test, confirm hotly debated theory Read More »

new-microsoft-gaming-chief-has-“no-tolerance-for-bad-ai”

New Microsoft gaming chief has “no tolerance for bad AI”

A gaming education

Unlike Spencer, who spent years at Microsoft Game Studios before heading Microsoft’s gaming division, Sharma has no professional experience in the video game industry. And her personal experience with Xbox also seems somewhat limited; after sharing her Gamertag on social media over the weekend, curious gamers found that her Xbox play history dates back roughly one month. That’s also in stark contrast to Spencer, who has amassed a Gamerscore of over 121,000 across decades of play.

In her interview with Variety, Sharma cited 2016’s Firewatch as an example of the kinds of games with “deep emotional resonance” and “a distinct point of view” that she’s looking for from Microsoft. And on social media, Sharma shared her list of the three greatest games ever: “Halo, Valheim, Goldeneye,” for what it’s worth. Sharma also seems to be taking recommendations for games to catch up on; after saying on social media that she would try Borderlands 2, the game appeared in her recently played games over the weekend.

A look at some of Sharma’s recently played Xbox games, as of this writing. Credit: Xbox.com

Being a personal fan of video games isn’t necessarily required to succeed in running a gaming company. Nintendo President Hiroshi Yamauchi famously didn’t care for video games even as he launched the Famicom and Nintendo Entertainment System to worldwide success in the 1980s. Still, the lack of direct experience with the gaming world marks a sharp change after Spencer’s long tenure at a time when Microsoft is struggling to redefine the Xbox brand amid cratering hardware sales, a pivot away from software exclusives, and a move to extend the Xbox brand to many different devices.

Xbox President and COO Sarah Bond, who by all accounts was being set up to succeed Spencer, also announced her departure from Microsoft on Friday, ending a nearly nine-year stint as a public face for the company’s gaming efforts. The Verge reports that Bond caused a lot of friction within the Xbox team when she championed the “Xbox Everywhere” strategy and “This is an Xbox” marketing campaign, which focused on streaming Xbox games to hardware like mobile phones and tablets, according to anonymous sources. Shortly before the launch of that campaign in 2024, Microsoft lost marketing executives Jerrett West and Kareem Choudry, leading to significant internal reorganization.

Longtime Xbox Game Studios executive Matt Booty, whose history in the game industry dates back to working for Williams Electronics in the ’90s, has been promoted to executive vice president and chief content officer for Xbox and “will continue working closely with [Sharma] to ensure a smooth transition,” Microsoft said in its announcement Friday.

New Microsoft gaming chief has “no tolerance for bad AI” Read More »

the-2026-mazda-cx-5,-driven:-it-got-bigger;-plus,-radical-tech-upgrade

The 2026 Mazda CX-5, driven: It got bigger; plus, radical tech upgrade

ENCINITAS, Calif.—Its sales may have been buoyed of late by the big CX-90 and CX-70 SUVs, but for Mazda, the CX-5 is still where most of the action is. Unlike the similar-sized, similar-priced CX-50, which was designed just for North America, the all-new CX-5 is a global car, and it’s also Mazda’s standard-bearer for a range of new technologies. Gone is the basic but effective infotainment system, replaced by an all-new Google-based experience as Mazda starts its journey toward software-defined vehicles. There’s even an in-house hybrid on the way, albeit not until next year. And it starts at a competitive $29,990.

The new CX-5 is bigger than the car it replaces, 4.5 inches (114.5 mm) longer and half an inch (13 mm) wider than before, at 184.6 inches (4,689 mm) long, 73.2 inches (1,859 mm) wide, and 66.7 inches (1,694 mm) tall. Much of that extra space is between the axles—the wheelbase is now 110 inches (2,794 mm) long, which translates to more interior space. From the outside, there’s a new light signature, and the way the bodywork curves around the front and wraps down the fenders gives me strong Range Rover vibes, even if I could never adequately capture what I’m talking about with a camera. As ever, Mazda’s arresting Soul Red Crystal metallic paint (a $595 option) sparkles, even on a day when the sun remained hidden from view.

The last time that Mazda evolved this compact crossover, it did so with a new upmarket interior. Since then, the brand has staked out that space across its model lineup, with cabins that punch well above their price tags. Happily, the company’s designers haven’t lost much mojo since then, with a restrained approach that looks good across the five different trim levels, each of which is a $2,000 step up from the one that precedes it. But if you’re a current CX-5 driver, you’ll find much has changed, perhaps not entirely for the better.

The 2026 Mazda CX-5, driven: It got bigger; plus, radical tech upgrade Read More »