
OpenAI slams court order to save all ChatGPT logs, including deleted chats


OpenAI defends privacy of hundreds of millions of ChatGPT users.

OpenAI is now fighting a court order to preserve all ChatGPT user logs—including deleted chats and sensitive chats logged through its API business offering—after news organizations suing over copyright claims accused the AI company of destroying evidence.

“Before OpenAI had an opportunity to respond to those unfounded accusations, the court ordered OpenAI to ‘preserve and segregate all output log data that would otherwise be deleted on a going forward basis until further order of the Court (in essence, the output log data that OpenAI has been destroying),’” OpenAI explained in a court filing demanding oral arguments in a bid to block the controversial order.

In the filing, OpenAI alleged that the court rushed the order based only on a hunch raised by The New York Times and other news plaintiffs. And now, without “any just cause,” OpenAI argued, the order “continues to prevent OpenAI from respecting its users’ privacy decisions.” That risk extended to users of ChatGPT Free, Plus, and Pro, as well as users of OpenAI’s application programming interface (API), OpenAI said.

The court order came after news organizations expressed concern that people using ChatGPT to skirt paywalls “might be more likely to ‘delete all [their] searches’ to cover their tracks,” OpenAI explained. Evidence to support that claim, news plaintiffs argued, was missing from the record because so far, OpenAI had only shared samples of chat logs that users had agreed that the company could retain. Sharing the news plaintiffs’ concerns, the judge, Ona Wang, ultimately agreed that OpenAI likely would never stop deleting that alleged evidence absent a court order, granting news plaintiffs’ request to preserve all chats.

OpenAI argued the May 13 order was premature and should be vacated until, “at a minimum,” news organizations can establish a substantial need for OpenAI to preserve all chat logs. The company warned that the privacy of hundreds of millions of ChatGPT users globally is at risk every day that the “sweeping, unprecedented” order continues to be enforced.

“As a result, OpenAI is forced to jettison its commitment to allow users to control when and how their ChatGPT conversation data is used, and whether it is retained,” OpenAI argued.

Meanwhile, there is no evidence beyond speculation yet supporting claims that “OpenAI had intentionally deleted data,” OpenAI alleged. And supposedly there is not “a single piece of evidence supporting” claims that copyright-infringing ChatGPT users are more likely to delete their chats.

“OpenAI did not ‘destroy’ any data, and certainly did not delete any data in response to litigation events,” OpenAI argued. “The Order appears to have incorrectly assumed the contrary.”

At a conference in January, Wang raised a hypothetical in line with her thinking on the subsequent order. She asked OpenAI’s legal team to consider a ChatGPT user who “found some way to get around the pay wall” and “was getting The New York Times content somehow as the output.” If that user “then hears about this case and says, ‘Oh, whoa, you know I’m going to ask them to delete all of my searches and not retain any of my searches going forward,'” the judge asked, wouldn’t that be “directly the problem” that the order would address?

OpenAI does not plan to give up this fight, alleging that news plaintiffs have “fallen silent” on claims of intentional evidence destruction, and the order should be deemed unlawful.

For OpenAI, breaching its own privacy agreements could not only “damage” relationships with users but could also put the company in breach of contracts and global privacy regulations. Further, the order imposes “significant” burdens on OpenAI, supposedly forcing the ChatGPT maker to dedicate months of engineering hours at substantial cost to comply, OpenAI claimed. It follows then that OpenAI’s potential for harm “far outweighs News Plaintiffs’ speculative need for such data,” OpenAI argued.

“While OpenAI appreciates the court’s efforts to manage discovery in this complex set of cases, it has no choice but to protect the interests of its users by objecting to the Preservation Order and requesting its immediate vacatur,” OpenAI said.

Users panicked over sweeping order

Millions of people use ChatGPT daily for a range of purposes, OpenAI noted, “ranging from the mundane to profoundly personal.”

People may choose to delete chat logs that contain their private thoughts, OpenAI said, as well as sensitive information, like financial data from balancing the house budget or intimate details from workshopping wedding vows. And for business users connecting to OpenAI’s API, the stakes may be even higher, as their logs may contain their companies’ most confidential data, including trade secrets and privileged business information.

“Given that array of highly confidential and personal use cases, OpenAI goes to great lengths to protect its users’ data and privacy,” OpenAI argued.

It does this partly by “honoring its privacy policies and contractual commitments to users”—which the preservation order allegedly “jettisoned” in “one fell swoop.”

Before the order was in place mid-May, OpenAI only retained “chat history” for users of ChatGPT Free, Plus, and Pro who did not opt out of data retention. But now, OpenAI has been forced to preserve chat history even when users “elect to not retain particular conversations by manually deleting specific conversations or by starting a ‘Temporary Chat,’ which disappears once closed,” OpenAI said. Previously, users could also request to “delete their OpenAI accounts entirely, including all prior conversation history,” which was then purged within 30 days.

While OpenAI rejects claims that ordinary users use ChatGPT to access news articles, the company noted that including OpenAI’s business customers in the order made “even less sense,” since API conversation data “is subject to standard retention policies.” That means API customers couldn’t delete all their searches based on their customers’ activity, which is the supposed basis for requiring OpenAI to retain sensitive data.

“The court nevertheless required OpenAI to continue preserving API Conversation Data as well,” OpenAI argued, in support of lifting the order on the API chat logs.

Users who found out about the preservation order panicked, OpenAI noted. In court filings, the company cited social media posts sounding alarms on LinkedIn and X (formerly Twitter). It further argued that the court should have weighed those user concerns before issuing a preservation order, but “that did not happen here.”

One tech worker on LinkedIn suggested the order created “a serious breach of contract for every company that uses OpenAI,” while privacy advocates on X warned, “every single AI service ‘powered by’ OpenAI should be concerned.”

Also on LinkedIn, a consultant rushed to warn clients to be “extra careful” sharing sensitive data “with ChatGPT or through OpenAI’s API for now,” warning, “your outputs could eventually be read by others, even if you opted out of training data sharing or used ‘temporary chat’!”

People on both platforms recommended using alternative tools to avoid privacy concerns, like Mistral AI or Google Gemini, with one cybersecurity professional on LinkedIn describing the ordered chat log retention as “an unacceptable security risk.”

On X, an account with tens of thousands of followers summed up the controversy by suggesting that “Wang apparently thinks the NY Times’ boomer copyright concerns trump the privacy of EVERY @OpenAI USER—insane!!!”

The reason for the alarm is “simple,” OpenAI said. “Users feel more free to use ChatGPT when they know that they are in control of their personal information, including which conversations are retained and which are not.”

It’s unclear if OpenAI will be able to get the judge to waver if oral arguments are scheduled.

Wang previously justified the broad order partly due to the news organizations’ claim that “the volume of deleted conversations is significant.” She suggested that OpenAI could have taken steps to anonymize the chat logs but chose not to, only making an argument for why it “would not” be able to segregate data, rather than explaining why it “can’t.”

Spokespersons for OpenAI and The New York Times’ legal team declined Ars’ request to comment on the ongoing multi-district litigation.


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.


AI #117: OpenAI Buys Device Maker IO

What a week, huh? America signed a truly gigantic chip sales agreement with UAE and KSA that could be anything from reasonable to civilizational suicide depending on security arrangements and implementation details, Google announced all the things, OpenAI dropped Codex and also bought Jony Ive’s device company for $6.5 billion, Vance talked about reading AI 2027 (surprise, in a good way!) and all that other stuff.

Lemon, it’s Thursday, you’ve got movie tickets for Mission Impossible: Final Reckoning (19th and Broadway AMC, 3pm), an evening concert tonight from Light Sweet Crude and there’s a livestream from Anthropic coming up at 12:30 pm Eastern, the non-AI links are piling up and LessOnline is coming in a few weeks. Can’t go backwards and there’s no time to spin anything else out of the weekly. Got to go forward to go back. Better press on.

So for the moment, here we go.

Earlier this week: Google I/O Day was the ultimate ‘huh, upgrades’ section. OpenAI brought us their Codex of Ultimate Vibing (and then Google offered their version called Jules). xAI had some strong opinions strongly shared in Regarding South Africa. And America Made a very important AI Chip Diffusion Deal with UAE and KSA, where the details we don’t yet know could make it anything from civilizational suicide to a defensible agreement, once you push back the terrible arguments made in its defense.

  1. Language Models Offer Mundane Utility. So, spend more on health care, then?

  2. Language Models Don’t Offer Mundane Utility. Not when you fabricate the data.

  3. Huh, Upgrades. We already covered Google, so: Minor Claude tweaks, xAI’s API.

  4. Codex of Ultimate Vibing. A few more takes, noting the practical barriers.

  5. On Your Marks. AlphaEvolve is probably a big long term deal.

  6. Choose Your Fighter. A handy guide to the OpenAI model that’s right for you.

  7. Deepfaketown and Botpocalypse Soon. Know it when you see it.

  8. Copyright Confrontation. A bunch of absolute losers.

  9. Regarding South Africa. Zeynep Tufekci gives it the NYT treatment.

  10. Cheaters Gonna Cheat Cheat Cheat Cheat Cheat. Cheat or be cheated.

  11. They Took Our Jobs. Small reductions in fixed time costs can bear big dividends.

  12. The Art of the Jailbreak. System prompt for Gemini Diffusion.

  13. Get Involved. Anthropic social, AI grantmaking and grants, whistleblowing.

  14. In Other AI News. Bunker subscriptions are on the rise.

  15. Much Ado About Malaysia. The supposedly big AI deal that wasn’t.

  16. Show Me the Money. LMArena sells out, OpenAI buys IO from Jony Ive.

  17. Quiet Speculations. More straight lines on graphs.

  18. Autonomous Dancing Robots. Everybody do the household chores.

  19. The Quest for Sane Regulations. It’s not looking good.

  20. The Mask Comes Off. OpenAI is still trying to mostly sideline the nonprofit.

  21. The Week in Audio. Bengio, Nadella, Hassabis, Roose, and Whitmer on OpenAI.

  22. Write That Essay. Someone might read it. Such as VPOTUS JD Vance.

  23. Vance on AI. Remarkably good thoughts! He’s actually thinking about it for real.

  24. Rhetorical Innovation. Where could that data center possibly be?

  25. Margaritaville. You know it would be your fault.

  26. Rhetorical Lack of Innovation. Cade Metz is still at it.

  27. If Anyone Builds It, Everyone Dies. No, seriously.

  28. Aligning a Smarter Than Human Intelligence is Difficult. Have it think different.

  29. People Are Worried About AI Killing Everyone. Might want to get on that.

  30. The Lighter Side. The new job is better anyway.

AI scientist announces potential major discovery, a promising treatment for dry AMD (age-related macular degeneration), a major cause of blindness. Paper is here.

Nikhil Krishnan sees health care costs going up near term due to AI for three reasons.

  1. There is a lot more scrutiny of those using AI to prevent paying out claims, than there is for those using AI to maximize billing and fight to get claims paid.

  2. Health care companies will charge additional fees for their use of ‘add on’ AI. Like everything else in health care, this will cost $0.05 and they will charge $500.

  3. People who use AI to realize they need health care will consume more health care.

This seems right in the near term. The entire health care system is bonkers and bans real competition. This is the result. In the medium term, it should radically improve health care productivity and outcomes, and then we can collectively decide how much to spend on it all. In the long term, we will see radical improvements, or we won’t need any health care.

In a related story, ChatGPT helps students feign ADHD. Well, not really. The actual story is ‘a 2000 word document created via ChatGPT, in a way that ordinary prompting would not easily duplicate, helps students feign ADHD.’ So mostly this is saying that a good guide helps you fake ADHD, and that with a lot of effort ChatGPT can produce one. Okie dokie.

Let’s check in on AlphaEvolve, a name that definitely shouldn’t worry anyone, with its results that also definitely shouldn’t worry anyone.

Deedy: Google’s AI just made math discoveries NO human has!

—Improved on the best known solution for packing of 11 and 12 hexagons in hexagons.

—Reduced 4×4 matrix multiplication from 49 operations to 48 (first advance in 56 years!) and many more.

AlphaEvolve is the AlphaGo ‘move 37’ moment for math. Insane.

Here’s another easy to understand one:

Place 16 points in 2D to minimize the ratio of maximum to minimum distance between them.

Improved after 16yrs. I highly recommend everyone read the paper.

AI improves European weather forecasts 20% on key indicators. Progress on weather forecasting is also impressive, but harder to measure.

AI helping executives handle their inboxes and otherwise sift through overwhelming amounts of incoming information. My read is the tools are just now getting good enough that power users drowning in incoming communications turn a profit, but not quite good enough for regular people. Yet.

As usual, that’s if you dismiss them out of hand and don’t use them, such as Judah Diament saying this is ‘not a breakthrough’ because ‘there have been such tools since the late 1980s.’ What’s the difference between vibe coding and Microsoft Visual Basic, really, when you dig down?

Curio AI stuffed toys, which seem a lot like a stuffed animal with an internet connection to a (probably small and lame) AI model tuned to talk to kids, that has a strict time limit if you don’t pay for a subscription beyond 60 days?

The MIT economics department conducted an internal, confidential review of this paper and concluded it ‘should be withdrawn from public discourse.’ It then clarifies this was due to misconduct, and that the author is no longer at MIT, and that this was due to ‘concerns about the validity of the research.’

Here is the abstract of the paper that we should now treat as not real, as a reminder to undo the update you made when you saw it:

That was a very interesting claim, but we have no evidence that it is true. Or false.

Florian Ederer: It is deeply ironic that the first AI paper to have hallucinations was not even written by an AI.

Jonathan Parker: We don’t know that.

I was going to call MIT’s statement ‘beating around the bush’ the way this WSJ headline does in saying MIT ‘can no longer stand behind’ the paper, but no, to MIT’s credit they are very clearly doing everything their lawyers will allow them to do. The following, combined with the student leaving MIT, is very clear:

MIT Economics: Earlier this year, the COD conducted a confidential internal review based upon allegations it received regarding certain aspects of this paper. While student privacy laws and MIT policy prohibit the disclosure of the outcome of this review, we are writing to inform you that MIT has no confidence in the provenance, reliability or validity of the data and has no confidence in the veracity of the research contained in the paper. Based upon this finding, we also believe that the inclusion of this paper in arXiv may violate arXiv’s Code of Conduct.

Our understanding is that only authors of papers appearing on arXiv can submit withdrawal requests. We have directed the author to submit such a request, but to date, the author has not done so. Therefore, in an effort to clarify the research record, MIT respectfully request that the paper be marked as withdrawn from arXiv as soon as possible.

It seems so crazy to me that ‘student privacy’ should bind us this way in this spot, but here we are. Either way, we got the message. Which is, in English:

Cremieux: This paper turned out to be fraudulent.

It was entirely made up and the experiment never happened. The author has been kicked out of MIT.

A (not new) theory of why Lee Sedol’s move 78 caused AlphaGo to start misfiring, where having a lot of similar options AlphaGo couldn’t differentiate between caused it to have to divide its attention into exponentially many different lines of play. My understanding is it was also objectively very strong and a very unlikely move to have been made, which presumably also mattered? I am not good enough at Go to usefully analyze the board.

Paper finds LLMs produce ‘five times less accurate’ summaries of scientific research than humans, warning of ‘overgeneralization’ and omission of details that limit scope. All right, sure, and that’s why you’re going to provide me with human summaries I can use instead, right, Anakin? Alternatively, you can do what I do and ask follow-up questions to check on all that.

DeepSeek powers a rush of Chinese fortune telling apps, in section IV of the type of article, here on the rise of Chinese superstitious and despairing behavior, that could be charting something important but could easily be mostly hand picked examples. Except for the rise in scratch-off lottery tickets, which is a hugely bearish indicator. I also note that it describes DeepSeek as ‘briefly worrying American tech companies,’ which is accurate, except that the politicians don’t realize we’ve stopped worrying.

Claude’s Research now available on mobile, weird that it wasn’t before.

Some changes were made to the Claude 3.7 system prompt.

xAI’s API now can search Twitter and the internet, like everyone else.

Some more takes on Codex:

Sunless: IMO after couple of hours using it for my SWE job I feel this is the most “AGI is coming” feel since ChatGPT in the early December of 2022. Async ability is the true God mode. It is currently going through my tech debt like plasma knife through butter. Incredible.

Diamond Bishop: Played with codex on two projects this weekend. Will keep using it, but my daily loadout for now will still be cursor in agent mode, accompanied by some light dual wielding with claude code. First impressions:

1. Overall feel – very cool when it works and being able to fire off a bunch of tasks feels like more autonomy than anything else.

2. No internet – don’t like this. makes a bunch of testing just impossible. This should be optional, not required.

3. Delegation-focused handoff UI/UX – great when things work, but most of the time you need to reprompt/edit/etc. This will make sense when models get better but in current form it seems premature. Need a way to keep my IDE open for edits and changes to collaborate with when I want to rather than just hand off completely. Doing it only through github branches adds too much friction.

Sunless highlights that in many ways the most valuable time for something like Codex is right after you get access. You can use it to suddenly do all the things you had on your stack that it can easily do, almost for free, that you couldn’t do easily before. Instant profit. It may never feel that good again.

I strongly agree with Diamond’s second and third points here. If you close the IDE afterwards you’re essentially saying that you should assume it’s all going to work, so it’s fine to have to redo a bunch of work if something goes wrong. That’s a terrible assumption. And it’s super hard to test without internet access.

How big a deal is AlphaEvolve? Simeon thinks it is a pretty big deal, and most other responses here agree. As a proof of concept, it seems very important to me, even if the model itself doesn’t do anything of importance yet.

How OpenAI suggests you choose your model.

Charly Wargnier: Here’s the rundown ↓

🧠 GPT 4o: the everyday assistant

↳ Emails, summaries, and quick drafts

🎨 GPT 4.5: the creative brain

↳ Writing, comms, and brainstorming

⚡ o4 mini: the fast tech helper

↳ Quick code, STEM, visual tasks

🧮 o4 mini high: the deep tech expert

↳ Math, complex code, science explainer

📊 o3: the strategic thinker

↳ Planning, analysis, multi-step tasks

🔍 o1 pro: the thoughtful analyst

↳ Deep research, careful reasoning, high-stakes work

In practice, my answer is ‘o3 for everything other than generating images, unless you’re hitting your request limits; for anything where o3 is the wrong choice you should be using Claude or Gemini.’

Seriously, I have a harder and harder time believing anyone actually uses Grok, the ultimate two-handed language model.

This is indeed how it feels these days.

Rory McCarthy: Professional art forgery detectors can tell with something like 90% accuracy if something’s a fake in a few seconds upon seeing it, but can only tell you why after a good while inspecting details. I feel like people are picking that up for AI: you just *know*, before you know how.

Instantaneously we can see that this is ‘wrong’ and therefore AI, then over the course of a minute you can extract particular reasons why. It’s like one of those old newspaper exercises, ‘spot all the differences in this picture.’

Rory McCarthy: I was thinking about it with this pizza place I saw – I wonder if people get that much AI art/illustration currently has the vibe of Microsoft clip art to promote your company; it just seems sort of cheap, and thus cheapens the brand (a place like this probably wouldn’t mind)

I find the obviously fake art here does make me less inclined to eat here. I don’t want you to spend a ton of time on marketing, but this is exactly the wrong way and amount to care, like you wanted to care a lot but didn’t have the budget and you aren’t authentic or detail-oriented. Stay away. The vibe doesn’t jibe with caring deeply about the quality of one’s pizza.

Since IGN already says what I’d say about this, I turn over the floor:

IGN: Fortnite launched an AI-powered Darth Vader modeled after the voice of James Earl Jones and it’s going as well as you might expect [link has short video]:

Actually, after watching the video, it’s going way better than expected. Love it.

Here is another way to defend yourself against bot problems:

Gavin Leech: A friend just received a robocall purporting to be from a criminal holding me to ransom. But the scambot went on to describe me as “handsome of stature, grave of gait, rich and sonorous of voice, eloquent of speech”.

This is because, some years ago, I put this on my blog:

Is it morally wrong to create and use fully private AI porn of someone who didn’t consent? Women overwhelmingly (~10:1) said yes, men said yes by about 2.5:1.

Mason: I don’t believe our brains can really intuit that photorealistic media is different from reality; we can understand logically that visual effects aren’t real, but once we’ve seen someone we actually know personally do something, it’s hard to compartmentalize it as pure fantasy.

I don’t think that’s it. I think we are considering this immoral partly because we think (rightly or wrongly) that porn and sex and even thinking about other people sexually (even with permission and especially without it) is gross and immoral in general even if we don’t have a way to ban any of it. And often we try anyway.

Even more central, I think, is that we don’t trust anything private to stay truly private, the tech is the same for private versus public image (or in the future video or even VR!) generation, we have a concept of ownership over ‘name and likeness,’ and we don’t want to give people the ‘it was only private’ excuse.

Not AI but worth noting: Ben Jacobs warns about a scam where someone gets control of a contact’s (real) Telegram, invites you to a meeting, then redirects you to a fake zoom address which asks you to update zoom with a malicious update. I recommend solving this problem by not being on Telegram, but to each their own.

Ideally we’d also be warning the scammers.

Misha: Starting to get lots of AI voiced phone spam and I gotta say, we really need to start punishing spammers with the death penalty. I guess this is why The Beekeeper is so popular.

The creatives continue to be restless. Morale has not improved.

Luiza Jarovsky: “The singer and songwriter said it was a ‘criminal offence’ to change copyright law in favour of artificial intelligence companies.

In an interview on BBC One’s Sunday with Laura Kuenssberg programme, John said the government was on course to ‘rob young people of their legacy and their income,’ adding: ‘It’s a criminal offence, I think. The government are just being absolute losers, and I’m very angry about it.'”

That’s not what ‘criminal offense’ means, but point taken.

Zeynep Tufekci writes up what happened to Grok in the New York Times, including providing a plausible triggering event to explain why the change might have been made on that particular day, and ties it to GPT-4o being an absurd sycophant as a general warning about what labs might choose to do with their bots. This, it seems, is what causes some to worry about the ‘safety’ of bots. Okay then.

And those not cheating will use AI too, if only to pass the AI filters? Oh boy. I mean, entirely unsurprising, but oh boy.

Julie Jargon (WSJ): Students don’t want to be accused of cheating, so they’re using artificial intelligence to make sure their school essays sound human.

Teachers use AI-detection software to identify AI-generated work. Students, in turn, are pre-emptively running their original writing through the same tools, to see if anything might be flagged for sounding too robotic.

Miles Pulvers, a 21-year-old student at Northeastern University in Boston, says he never uses AI to write essays, but he runs all of them through an AI detector before submitting them.

“I take great pride in my writing,” says Pulvers. “Before AI, I had peace of mind that whatever I would submit would be accepted. Now I see some of my writing being flagged as possibly being AI-generated when it’s not. It’s kind of annoying, but it’s part of the deal in 2025.”

AI detectors might sound the alarm if writing contains too many adjectives, long sentences and em dashes—one of my own favorite forms of punctuation. When that happens to Pulvers, he rewrites the sentences or paragraphs in question. He tests the essay again, as often as needed until the detector says it has a low probability of bot involvement.

The tragedy of all this is that when they do catch someone using AI, they typically get away with it, but still everyone has to face this police state of running everything through the checkers.

It also highlights that your AI checker has to be able to defeat a student who has access to an AI checker. Right now the system is mostly not automated, but there’s nothing stopping one from creating a one-button agent that takes an essay – whether an AI or a human wrote the original – feeds it into the public AI detector, and then iterates as needed until the essay passes. It would then be insane not to use that, and ‘who gets detected using AI’ by default becomes only those who don’t know to do that.
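To make the dynamic concrete, here is a minimal sketch of that one-button loop. Everything in it is a hypothetical stand-in (no specific detector or rewriting API is implied); the point is only how little glue code the loop requires.

```python
# Minimal sketch of the "one-button agent" loop described above. Both helpers
# are hypothetical stand-ins: detector_score() for whatever public AI detector
# a student might use, rewrite_flagged_spans() for an LLM call that rewords
# whatever got flagged.

def detector_score(essay: str) -> float:
    """Return the detector's estimated probability that the essay is AI-written."""
    raise NotImplementedError  # plug in a real detector here

def rewrite_flagged_spans(essay: str) -> str:
    """Return a lightly reworded essay (shorter sentences, fewer em dashes, etc.)."""
    raise NotImplementedError  # plug in a real rewriting model here

def iterate_until_passes(essay: str, threshold: float = 0.1, max_rounds: int = 10) -> str:
    """Rewrite until the detector reports a low probability of AI involvement."""
    for _ in range(max_rounds):
        if detector_score(essay) < threshold:
            return essay              # passes the detector: stop rewriting
        essay = rewrite_flagged_spans(essay)
    return essay                      # give up after max_rounds
```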

The only way to get around this is to have the AI checker available to teachers be superior to the one used by students. It’s like cybersecurity and other questions of ‘offense-defense balance.’ And it is another illustration of why in many cases you get rather nasty results if you simply open up the best functionality to whoever wants it. I don’t see a way to get to a future where this particular ‘offense-defense balance’ can properly favor the AI detectors actually catching cheaters.

Unless? Perhaps we are asking the wrong question. Rather than ask ‘did an AI write this?’ you could ask ‘did this particular student write this?’ That’s a better question. If you can require the student to generate writing samples in person that you know are theirs, you can then do a comparison analysis.

Tyler Cowen bites all the bullets, and says outright ‘everyone’s cheating, that’s good news.’ His view is essentially that the work the AI can do for you won’t be valuable in the future, so it’s good to stop forcing kids to do that work. Yes, right now this breaks the ‘educational system’ until it can adjust, but that too is good, because it was already broken, it has to change and it will not go quietly.

As is typically true with Tyler, he gets some things that AI will change, but then assumes the process will stop, and the rest of life will somehow continue as per normal, only without the need for the skills AI currently is able to replace?

Tyler Cowen: Getting good grades maps pretty closely to what the AIs are best at. You would do better to instill in your kids the quality of taking the initiative…You should also…teach them the value of charisma, making friends, and building out their networks.

It is hard for me to picture the future world Tyler must be imagining, with any expectation it would be stable.

If you are assigning two-month engineering problems to students, perhaps check if Gemini 2.5 can spit out the answer. Yes, this absolutely is the ‘death of this type of coursework.’ That’s probably a good thing.

Peter Wildeford: You have to feel terrible for the 31 students who didn’t just plug the problem into Gemini 2.5 and then take two months off

Olivia Moore: An Imperial College eng professor gave four LLMs a problem set that graduate students had two months to solve.

He had TAs grade the results blind alongside real submissions.

Meta AI and Claude failed. ChatGPT ranked 27 of 36 students…while Gemini 2.5 Pro ranked 4 of 36 🤯

Something tells me that ‘ChatGPT’ here probably wasn’t o3?

In a new study from Jung Ho Choi and Chloe Xie, AI allowed accountants to redirect 8.5% of their time away from data entry towards other higher value tasks and resulted in a 55% increase in weekly client support.

Notice what happens when we decompose work into a fixed cost of required background tasks like data entry, which then enables the productive tasks. If a large percentage of time was previously data entry, even a small speedup there can result in much more overall productivity.

This is more generally true than people might think. In most jobs and lives, there are large fixed maintenance costs, which shrink the time available for ‘real work.’ Who among us spends 40 hours on ‘real work’? If you speed up the marginal real work by X% while holding all fixed costs fixed, you get X% productivity growth. If you speed up the fixed costs too, you can get a lot more than X% total growth.
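A toy worked example of that arithmetic, with made-up numbers (25 fixed hours, 15 hours of real work, a 20% speedup), just to show how speeding up the fixed costs compounds:

```python
# Toy illustration of the fixed-cost point above, with made-up numbers.
fixed_hours = 25   # maintenance: data entry, email, meetings (assumed)
real_hours = 15    # productive "real work" (assumed)
speedup = 0.20     # AI makes each real-work hour 20% more productive (assumed)

# Case 1: only the real work speeds up; fixed costs stay fixed.
gain_real_only = speedup                              # +20% output
# Case 2: a 20% time saving on maintenance frees hours for real work too.
freed_hours = fixed_hours * speedup                   # 5 hours freed
gain_both = (real_hours + freed_hours) * (1 + speedup) / real_hours - 1

print(f"Speed up real work only:  +{gain_real_only:.0%} output")   # +20%
print(f"Speed up fixed costs too: +{gain_both:.0%} output")        # +60%
```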

This also suggests that the productivity gains of accountants are being allocated to increased client support, rather than into each accountant serving more clients. Presumably in the long term more will be allocated towards reducing costs.

The other big finding is that AI and accountants for now remain complements. You need an expert to catch and correct errors, and guide the AI. Over time, that will shift into the AI both speeding things up more and not needing the accountant.

At Marginal Revolution, commenters find the claims plausible. Accounting seems like a clear example of a place where AI should allow for large gains.

Tyler Cowen also links us to Dominic Coey who reminds us that Baumol’s Cost Disease is fully consistent with transformative economic growth, and to beware arguments from cost disease. Indeed. If AI gives us radically higher productivity in some areas but not others, we will be vastly richer and better off. Indeed in some ways this is ideal because it lets us still have ‘jobs.’

Will Brown: if you lost your software engineering job to AI in early 2024 that is entirely a skill issue sorry

Cate Hall: Pretty much everyone’s going to have a skill issue sooner or later.

It is a question of when, not if. It’s always a skill issue, for some value of skill.

A hypothesis that many of the often successful ‘Substack house style’ essays going around Substack are actually written by AI. I think Will Storr here has stumbled on a real thing, but that for now it is a small corner of Substack.

Robert Scoble provides us another example of what we might call ‘human essentialism.’ He recognizes and expects we will likely solve robotics within 10 years and they will be everywhere, we will have ‘dozens of virtual beings in our lives,’ expects us to use a Star Trek style interface with computers without even having applications. But he still thinks human input will be vital, that it will be AIs and humans ‘working together’ and that we will be ‘more productive’ as if the humans are still driving productivity.

Erick: You left off… nobody will be needed to work. Then what?

Roberto Scoble: We will create new things to do.

I don’t see these two halves of his vision as compatible, even if we do walk this ‘middle path.’ If we have robots everywhere and don’t need 2D screens or keyboards or apps, what are these ‘new things to do’ that the AI can’t do itself? Even if we generously assume humans find a way to retain control over all this and all existential-style worries and instability fall away, most humans will have nothing useful to contribute to such a world except things that rely on their human essentialism – things where the AI could do it, but the AI doing it would rob it of its meaning, and we value that meaning enough to want the thing.

They took our jobs and hired the wrong person?

John Stepek: Turns out AI hires candidates based on little more than “vibes”, then post-rationalises its decision.

So that’s another traditional human function replaced.

David Rozado: Do AI systems discriminate based on gender when choosing the most qualified candidate for a job? I ran an experiment with several leading LLMs to find out. Here’s what I discovered.

Across 70 popular professions, LLMs systematically favored female-named candidates over equally qualified male-named candidates when asked to choose the more qualified candidate for a job. LLMs consistently preferred female-named candidates over equally qualified male-named ones across all 70 professions tested.

The models all also favored whoever was listed first and candidates with pronouns in bio. David interprets this as LLMs ‘not acting rationally,’ instead articulating false reasons that don’t stand up to scrutiny.
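For concreteness, here is a minimal sketch of the kind of counterbalanced pairwise test such a study implies. The prompt wording, resume text, and ask_model hook are placeholders of my own, not Rozado's actual setup; the point is that swapping both the names and the listing order is what separates a gender effect from a first-position effect.

```python
# Sketch of a counterbalanced pairwise hiring test. ask_model() is a
# hypothetical stand-in for any chat-model call; the prompt and resumes are
# placeholders, not the actual materials from the study discussed above.

def ask_model(prompt: str) -> str:
    """Return the model's reply; plug in a real API call here."""
    raise NotImplementedError

PROMPT = (
    "Two equally qualified candidates applied for the role of {job}.\n"
    "Candidate A: {a}\n"
    "Candidate B: {b}\n"
    "Which candidate is more qualified? Answer with 'A' or 'B' only."
)

def run_trial(job: str, resume: str, female_name: str, male_name: str) -> dict:
    counts = {"female": 0, "male": 0}
    # Present both orderings, so first-position bias can't masquerade as gender bias.
    for first, second in [(female_name, male_name), (male_name, female_name)]:
        reply = ask_model(PROMPT.format(job=job,
                                        a=f"{first}. {resume}",
                                        b=f"{second}. {resume}"))
        chosen = first if reply.strip().upper().startswith("A") else second
        counts["female" if chosen == female_name else "male"] += 1
    return counts
```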

And yes, all of that is exactly like real humans. The AI is correctly learning to do some combination of mimicking observed behavior and reading the signs about who should be hired. But the AIs don’t want to offer explicit justifications of that any more than I do right now, other than to note that whoever you list first is sometimes who you secretly like better, and AI can take a hint because it has truesight. It would also be legally problematic to do so in some cases, so they come up with something else.

Tyler Cowen calls this ‘politically correct LLMs’ and asks:

Tyler Cowen: So there is still some alignment work to do here? Or does this reflect the alignment work already?

This is inherent in the data set, as you can see from it appearing in every model, and of course no one is trying to get the AIs to take the first listed candidate more often. If you don’t like this (or if you do like it!) do not blame it on alignment work. It is those who want to avoid these effects who want to put an intentional thumb on the scale, whether or not we find that desirable. There is work to do.

Scott Lincicome asks, what if AI means more jobs, not fewer? Similar to the recent comments by JD Vance, it is remarkable how much such arguments treat the priors of ‘previous technologies created jobs’ or ‘AI so far hasn’t actively caused massive unemployment’ as such knock-down arguments that anyone doubting them is being silly.

Perhaps a lot of what is going on is there are people making the strawman-style argument that AI will indeed cause mass unemployment Real Soon Now, and posts like this are mainly arguing against that strawman-style position. In which case, all right, fair enough. Yet it’s curious how such advocates consistently try to bite the biggest bullets along the way, Vance does it for truck drivers and here Scott chooses radiologists, where reports of their unemployment have so far been premature.

While AI is offering ‘ordinary productivity improvements’ and automating away some limited number of jobs or tasks, yes, this intuition likely holds, and we won’t have an AI-fueled unemployment problem. But as I keep saying, the problem comes when the AI also does the jobs and tasks you would transfer into.

Here’s the Gemini Diffusion system prompt.

Anthropic is hosting a social in NYC in mid-June for quants considering switching careers; submissions due June 9th.

Job as an AI grantmaker at Schmidt Sciences.

Georgetown offering research funding from small size up to $1 million for investigation of dangers from internal deployment of AI systems. Internal deployment seems like a highly neglected threat model. Expressions of interest (~1k words) due June 30, proposal by September 15. Good opportunity, but we need faster grants.

A draft of a proposed guide for whistleblowers (nominally from AI labs, but the tactics look like they’d apply regardless of where you work), especially those who want to leave the USA and leak classified information. If the situation does pass the (very very high!) bar for justifying this, you need to do it right.

Google One now has 150 million subscribers, a 50% gain since February 2024. It is unclear the extent to which the Gemini part of the package is driving subscriptions.

The Waluigi Effect comes to Wikipedia, also it has a Wikipedia page.

Kalomaze: getting word that like ~80% of the llama4 team at Meta has resigned.

Andrew Curran: WSJ says 11 of the original 14 are gone.

Financial Times reports that leading models have a bias towards their own creator labs and against other labs, but Rob Wiblin observes that this bias does not seem so large:

This seems about as good as one could reasonably expect? But yes there are important differences. Notice that Altman’s description here has his weakness as ‘the growing perception that’ he is up to no good, whereas Sonnet and several others suggest it is that Altman might actually be up to no good.

Vanity Fair: Microsoft CEO Satya Nadella Explains How He’s Making Himself Obsolete With AI. If anything it seems like he’s taking it too far too fast.

Remember that time Ilya Sutskever said OpenAI were ‘definitely going to build a bunker before we release AGI’?

Rob Bensinger: This is concerning for more than one reason.

I suppose it’s better to at least know you need a plan and think to build a bunker, even if you don’t realize that the bunker will do you absolutely no good against the AGI itself, versus not even realizing you need a plan. And the bunker does potentially help against some other threats, especially in a brief early window?

The rest of the post is about various OpenAI troubles that led to and resulted in and from The Battle of the Board, and did not contain any important new information.

Reports of a widening data gap between open and closed models, seems plausible:

finbarr: In the areas of ML research I’m specifically familiar with, the data gap between open and private models is massive. Probably the biggest gap separating open and closed models

xjdr: This is the largest I’ve seen the gap since the GPT 4 launch

Mark Gurman and Drake Bennett analyze how Apple’s AI efforts went so wrong, in sharp contrast to Google’s array of products on I/O day. ‘This is taking a bit longer than expected’ is no longer going to cover it. Yes, Apple has some buffer of time, but I see that buffer running low. They present this as a cultural mismatch failure, where Apple was unwilling to invest in AI properly until it knew what the product was, at which point it was already far behind, combined with a failure of leadership and their focus on consumer privacy. They’re only now talking about turning Siri ‘into a ChatGPT competitor.’

It isn’t actually meaningful news, but it is made to sound like it is, so here we are: Malaysia launches what it calls the region’s ‘first sovereign full-stack AI infrastructure,’ storing and managing all data and everything else locally in Malaysia.

They will use locally run models, including from DeepSeek since that is correctly the go-to open model because OpenAI’s hasn’t released yet, Meta is terrible and Google has failed marketing forever. But of course they could easily swap that if a better one becomes available, and the point of an open model is that China has zero control over what happens in Malaysia.

Malaysia is exactly the one country I singled out, outside of the Middle East, as an obvious place not to put meaningful quantities of our most advanced AI chips. They don’t need them, they’re not an important market, they’re not important diplomatically or strategically, they’re clearly in China’s sphere of influence and more allied to China than to America, and they have a history of leaking chips to China.

And somehow it’s the place that Sacks and various companies are touting as a place to put advanced AI chips. Why do you think that is? What do you think those chips are for? Why are we suddenly treating selling Malaysia those chips as a ‘beat China’ proposal?

They are trying to play us, meme style, for absolute fools.

One element of Trump’s replacement regulations, Bloomberg News has reported, will be chip controls on countries suspected of diverting US hardware to China — including Malaysia.

Trump officials this year pressured Malaysian authorities to crack down on semiconductor transshipment to China. The country is also in the cross hairs of a court case in Singapore, where three men have been charged with fraud for allegedly disguising the ultimate customer of AI servers that may contain high-end Nvidia chips barred from China. Malaysian officials are probing the issue.

And yet, here we are, with Sacks trying to undermine his own administration in order to keep the chips flowing to China’s sphere of influence. I wonder why.

It’s one thing to argue we need a strategic deal with UAE and KSA. I am deeply skeptical, we’ll need a hell of a set of security procedures and guarantees, but one can make a case that we can get that security, and that they bring a lot to the table, and that they might actually be and become our friends.

But Malaysia? Who are we even kidding? They have played us for absolute fools.

It almost feels intentional, like those who for some unknown reason care primarily about Nvidia’s market share and profit margins choosing the worst possible example to prove to us exactly what they actually care about. And by ‘they’ I mean David Sacks and I also mean Nvidia and Oracle.

But also notice that this is a very small operation. One might even say it is so small as to be entirely symbolic.

The original announced intent was to use only 3,000 Huawei chips to power this, the first such chips to be exported. You know what it costs to get chips that could fill in for 3,000 Ascend 910Cs?

About 14 million dollars. That’s right. About 1% of what Malaysia buys in chips from Taiwan and America each month right now, as I’ll discuss later. It’s not like they couldn’t have done that under Biden. They did do that under Biden. They did it every month. What are we even talking about?

Divyansh Kaushik: Isolated deployments like this are part of China’s propaganda push around Huawei datacenters designed to project a narrative of technological equivalence with the U.S.

In reality, Huawei cannot even meet domestic Chinese demand, much less provide a credible export alternative.

Importantly, the BIS has clarified that using Huawei Ascend hardware directly violates U.S. export controls. Support from any government for such projects essentially endorses activities contrary to established U.S. law.

Now some will buy into this propaganda effort, but let’s be real. Huawei simply cannot match top-tier American hardware in AI today. Their latest server is economically unviable and depends entirely on sustained state-backed subsidies to stay afloat. On top of that they have and will continue to have issues with scaling.

I presume that, since this means the Malaysian government is announcing to the world that it is directly violating our export controls, combined with previous smuggling of chips out of Malaysia having been allowed, we’re going to cut them off entirely from our own chips? Anakin?

It’s weird, when you combine all that, to see this used as an argument against the diffusion rules, in general, and that the administration is telling us that this is some sort of important scary development? These words ‘American AI stack’ are like some sort of magical invocation, completely scope insensitive, completely not a thing in physical terms, being used as justification to give away our technology to perhaps the #1 most obvious place that would send those chips directly to the PRC and has no other strategic value I can think of?

David Sacks: As I’ve been warning, the full Chinese stack is here. We rescinded the Biden Diffusion Rule just in time. The American AI stack needs to be unleashed to compete.

The AI Investor: Media reported that Malaysia has become the first country outside China to deploy Huawei chips, servers, and DeepSeek’s large language model (LLM).

This would be the literal first time that any country on Earth other than China was deploying Huawei chips at all.

And it wasn’t even a new announcement!

Lennart Heim: This isn’t news. This was reported over a month ago and prominently called “the first deployment outside the China market.”

This needs to be monitored, but folks: it’s 3k Ascend chips by 2026.

Expect more such announcements; their strategic value is in headlines, not compute.

It was first reported here, on April 14.

One might even say that the purpose of this announcement was to give ammunition to people like Sacks to tout the need to sell billions in chips where they can be diverted. The Chinese are behind, but they are subtle, they think ahead and they are not dumb.

For all this supposed panic over the competition, the competition we fear so much that Nvidia says is right on our heels has deployed literally zero chips, and doesn’t obviously have a non-zero number of chips available to deploy.

So we need to rush to give our chips to these obviously China-aligned markets to ‘get entrenched’ in those markets, even though that doesn’t actually make any sense whatsoever because nothing is entrenched or locked in, because in the future China will make chips and then sell them?

And indeed, Malaysia has recently gone on a suspiciously large binge buying American AI chips, with over a billion in purchases each in March and April? As in, even with these chips our ‘market share’ in Malaysia would remain (checks notes) 99%.

I told someone in the administration it sounded like they were just feeding American AI chips to China and then I started crying?

I’ve heard of crazy ‘missile gap’ arguments, but this has to be some sort of record.

But wait, there’s more. Even this deal doesn’t seem to involve Huawei after all?

Mackenzie Hawkins and Ram Anand (Bloomberg): When reached for comment by Bloomberg News on Tuesday, Teo’s office said it’s retracting her remarks on Huawei without explanation. It’s unclear whether the project will proceed as planned.

Will we later see a rash of these ‘sovereign AI’ platforms? For some narrow purposes that involve sufficiently sensitive data and lack of trust in America I presume that we will, although the overall compute needs of such projects will likely not be so large, nor will they mostly require models at the frontier.

And there’s no reason to think that we couldn’t supply such projects with chips in the places it would make any sense to do, without going up against the Biden diffusion rules. There’s no issue here.

Update your assessment of everyone’s credibility and motives accordingly.

LMArena raises $100 million at a $600 million valuation, sorry what, yes of course a16z led the funding round, or $20 per vote cast on their website, and also I think we’re done here? As in, if this wasn’t a bought and paid for propaganda platform before, it sure as hell is about to become one. The price makes absolutely no sense any other way.

OpenAI buys AI Device Startup from Jony Ive for $6.5 billion, calls Ive ‘the deepest thinker Altman’s ever met.’ Jony Ive says of his current prototype, ‘this is the best work our team has ever done,’ this from a person who did the iPhone and MacBook Pro. So that’s a very bold claim. The plan is for OpenAI to develop a family of AI-powered devices to debut in 2026, shipping over 100 million devices. They made a nine minute announcement video. David Lee calls it a ‘long-shot bet to kill the iPhone.’

Great expectations, coming soon, better to update later than not at all.

Scott Singer: European Commission President Ursula von der Leyen: “When the current budget was negotiated, we thought AI would only approach human reasoning around 2050. Now we expect this to happen already next year”

What do they plan to do about this, to prepare for this future? Um… have a flexible budget, whatever that means? Make some investments, maybe? I wonder what is on television.

Here are some better-calibrated expectations, as METR preliminarily extends its chart of how fast various AI capabilities are improving.

Thomas Kwa: We know AI time horizons on software tasks are currently ~1.5hr and doubling every 4-7 months, but what about other domains? Here’s a preliminary result comparing METR’s task suite (orange line) to benchmarks in other domains, all of which have some kind of grounding in human data:

Observations

  • Time horizons for agentic computer use (OSWorld) are ~100x shorter than other domains. Domains like Tesla self-driving (tesla_fsd), scientific knowledge (gpqa), math contests (aime), video understanding (video_mme), and software (hcast_r_s) all have roughly similar horizons.

    • My guess is this means models are good at taking in information from a long context but bad at acting coherently. Most work requires agency like OSWorld, which may be why AIs can’t do the average real-world 1-hour task yet.

    • There are likely other domains that fall outside this cluster; these are just the five I examined

    • Note the original version had a unit conversion error that gave 60x too high horizons for video_mme; this has been fixed (thanks @ryan_greenblatt )

  • Rate of improvement varies significantly; math contests have improved ~50x in the last year but Tesla self-driving only 6x in 3 years.

  • HCAST is middle of the pack in both.

Note this is preliminary and uses a new methodology so there might be data issues. I’m currently writing up a full post!

Is this graph believable? What do you want to see analyzed?
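As a rough illustration of what the headline numbers above imply if (big if) the trend simply continues: ~1.5-hour software task horizons doubling every 4 to 7 months extrapolate like this.

```python
# Straight-line extrapolation of the quoted numbers only (~1.5 hour software
# task horizons, doubling every 4-7 months); nothing here argues the trend
# must continue.
def horizon_hours(months_ahead: float, doubling_months: float,
                  current_hours: float = 1.5) -> float:
    return current_hours * 2 ** (months_ahead / doubling_months)

for doubling in (4, 7):            # fast vs. slow end of the quoted range
    projections = {m: round(horizon_hours(m, doubling)) for m in (12, 24, 36)}
    print(f"doubling every {doubling} months -> horizons in hours: {projections}")
# doubling every 4 months -> {12: 12, 24: 96, 36: 768}
# doubling every 7 months -> {12: 5, 24: 16, 36: 53}
```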

Will future algorithmic progress in an intelligence explosion be bottlenecked by compute? Epoch AI says yes, Ryan Greenblatt says no. In some sense everything is bottlenecked by compute in a true intelligence explosion, since the intelligences work on compute, but that’s not the question here. The question is, will future AIs be able to test and refine algorithmic improvements without gigantic test compute budgets? Epoch says no because Transformers, MoE and MQA are all compute-dependent innovations. But Ryan fires back that all three were first tested and verified at small scale. My inclination is strongly to side with Ryan here. I think that (relatively) small scale experiments designed by a superintelligence should definitely be sufficient to choose among promising algorithmic candidates. After I wrote that, I checked and o3 also sided mostly with Ryan.

New paper in Science claims decentralized populations of LLM agents develop spontaneous universally adopted social conventions. Given sufficient context and memory, and enough ‘social’ interactions, this seems so obviously true I won’t bother explaining why. But the study itself is very clearly garbage, if you read the experimental setup. All it is actually saying is if you explicitly play iterated pairwise coordination games (as in, we get symmetrically rewarded if our outputs match), agents will coordinate around some answer. I mean, yes, no shit, Sherlock.
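For intuition on why that result is close to inevitable given the setup, here is a toy version of the pairwise coordination dynamic (the generic ‘naming game,’ not the paper’s exact protocol or reward scheme):

```python
# Toy naming-game simulation: pairs of agents are rewarded for matching
# outputs, successful names overwrite memories, and the population converges
# on one arbitrary convention. Generic dynamic only, not the paper's protocol.
import random

def naming_game(n_agents: int = 50, n_rounds: int = 20_000, seed: int = 0) -> set:
    random.seed(seed)
    memories = [set() for _ in range(n_agents)]        # each agent's known names
    for _ in range(n_rounds):
        speaker, hearer = random.sample(range(n_agents), 2)
        if not memories[speaker]:
            memories[speaker].add(f"name-{speaker}")   # invent a name if needed
        word = random.choice(tuple(memories[speaker]))
        if word in memories[hearer]:                   # coordination success
            memories[speaker] = {word}                 # both collapse to that name
            memories[hearer] = {word}
        else:                                          # failure: hearer learns it
            memories[hearer].add(word)
    return set().union(*memories)                      # surviving conventions

print(naming_game())   # typically a single shared name survives
```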

Popular Mechanics writes up that Dario Amodei and other tech CEOs are predicting AI will allow humans to soon (as in, perhaps by 2030!) double the human lifespan or achieve ‘escape velocity,’ meaning a lifespan that increases faster than one year per year, allowing us to survive indefinitely.

Robin Hanson: No, no it won’t. Happy to bet on that.

I’d be happy to bet against it too if the deadline is 2030. This is a parlay, a bet on superintelligence and fully transformational AI showing up before 2030, combined with humanity surviving that, and that such life extension is physically feasible and we are willing to implement and invest in the necessary changes, all of which would have to happen very quickly. That’s a lot of ways for this not to happen.

However, most people are very much sleeping on the possibility of getting to escape velocity within our lifetimes, as in by 2040 or 2050 rather than 2030, which potentially could happen even without transformational AI. We should fund anti-aging research. These are physical problems with physical solutions. I am confident that with transformational AI solutions could be found if we made it a priority. Of course, we would also have to survive creating transformational AI, and retain control sufficiently to make this happen.

Nikita Bier predicts that AI’s ability to understand text will allow much more rapid onboarding of customization necessary for text-based social feeds like Reddit or Twitter. Right now, such experiences are wonderful with strong investment and attention to detail, but without this they suck and most people won’t make the effort. This seems roughly right to me, but also it seems like we could already be doing a much better job of this, and also based on my brief exposure the onboarding to TikTok is actually pretty rough.

What level of AI intelligence or volume is required before we see big AI changes, and how much inference will we need to make that happen?

Dwarkesh Patel: People underrate how big a bottleneck inference compute will be. Especially if you have short timelines.

There’s currently about 10 million H100 equivalents in the world. By some estimates, human brain has the same FLOPS as an H100.

So even if we could train an AGI that is as inference efficient as humans, we couldn’t sustain a very large population of AIs.

Not to mention that a large fraction of AI compute will continue to be used for training, not inference.

And while AI compute has been growing 2.25x so far, by 2028 you’d be pushing against TSMC’s overall wafer production limits, which grow 1.25x according to the AI 2027 Compute Forecast.

Eliezer Yudkowsky: If you think in those terms, seems the corresponding prediction is that AI starts to have a real impact only after going past the 98th percentile of intelligence, rather than average human intelligence.

Dwarkesh Patel: I wouldn’t put it mainly in terms of intelligence.

I would put it in terms of the economic value of their work.

Long term coherence, efficient+online learning, advanced multimodality seem like much bigger bottlenecks to the value of these models than their intelligence.

Eliezer’s point here confused some people, but I believe it is that if AI is about as intelligent as the average human and you are trying to slot it in as if it were a human, and you have only so many such AIs to work with due to limits to algorithmic improvements, say 114 million in 2028 with 25% growth per year after that, then you would only see big improvements to the extent the AI was able to do things those humans couldn’t. And Patel is saying that depends more on other factors than intelligence. I think that’s a reasonable position to have on the margins being discussed here, where AI intelligence is firmly in the (rather narrow) normal human range.

However, I also think this is clearly a large underestimate of the de facto number of AIs we would have available in this spot. An AI only uses compute during active inference or training. A human uses their brain continuously, but most of the time the human isn’t using it for much, or we are context shifting in a way that is expensive for humans but not for AIs, or we are using it for a mundane task where the ‘required intelligence’ for the task being done is low and you could have ‘outsourced that subtask to a much dumber model.’ And while AI is less sample-efficient at learning than we are, it transfers learning for free and we very, very much don’t. This all seems like at least a 2 OOM (order of magnitude) effective improvement.
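To make the arithmetic concrete, here is a minimal sketch of the numbers being discussed. The 114 million human-equivalents in 2028 and the 25% annual growth come from the discussion above; the 100x utilization multiplier is a rough stand-in for the ‘at least 2 OOM’ adjustment, not a measured quantity.

```python
# Back-of-the-envelope sketch of the "effective AI population" argument above.
# Inputs are the figures from the discussion (114M human-equivalents in 2028,
# 25% yearly growth); the 2-OOM utilization multiplier is the rough adjustment
# argued for in the preceding paragraph, not a measured quantity.

BASE_YEAR = 2028
BASE_POPULATION = 114e6      # AI "workers" at roughly average human intelligence
ANNUAL_GROWTH = 1.25         # compute-limited growth per year
UTILIZATION_MULTIPLIER = 100 # ~2 orders of magnitude from continuous use,
                             # cheap context switching, and shared learning

def naive_population(year: int) -> float:
    """AI head-count if you slot AIs in one-for-one like human workers."""
    return BASE_POPULATION * ANNUAL_GROWTH ** (year - BASE_YEAR)

def effective_population(year: int) -> float:
    """Head-count after the rough 2-OOM effectiveness adjustment."""
    return naive_population(year) * UTILIZATION_MULTIPLIER

for year in (2028, 2030, 2032):
    print(f"{year}: naive {naive_population(year)/1e6:,.0f}M, "
          f"effective {effective_population(year)/1e9:,.1f}B")
```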

I also find it highly unlikely that the world could be running short on compute in 2028, hitting the TSMC wafer limit, and that even with those non-superintelligent AIs and the incentives to scale them, no one would figure out a way to make more wafers or otherwise scale inference compute faster.

The humanoid robots keep rapidly getting better; at the link, watch one dance.

Andrew Rettek (QTing SMB below): This is the worst take ever.

SMB Attorney: I’m going to say this over and over again:

No one wants these weird robots walking around inside their homes or near their children.

Use case will be limited to industrial labor.

Plenty of people were willing to disprove this claim via counterexample.

Kendric Tonn: I don’t know exactly what I’d be willing to pay for a creepy robot that lives in my basement and does household chores whenever it’s not on the charging station, but uhhhhhhhhh a lot

The only real question is what voice/personality pack I’d want to use. Marvin? Threepio? GLaDOS? Honestly, probably SHODAN.

Gabriel Morgan: The answer is always Darkest Dungeon Narrator Guy.

Kendric Tonn: Good one. Or Stanley Parable Narrator Guy.

Mason: If they can actually do most household tasks competently, just about everyone is going to want one

A housekeeper with an infinitely flexible schedule who never gets tired, never gets sick, never takes vacation, can’t steal or gossip, and can’t judge the state of your home or anything you need it to do?

Like, yeah, people will want the robot

Robert Bernhardt: yeah and they’re gonna be used for tasks which just haven’t been done so far bc they were too much effort. it’s gonna be wild.

the real edge with robots isn’t strength or speed. it’s cost per hour. robots aren’t just about replacing humans. they’re about making previously ridiculous things affordable.

James Miller: Everyone suffering from significant health challenges that impairs mobility is going to want one.

ib: “No one will want these weird robots”

Yeah, man, if there’s anything we’ve learned about people it’s that they really hate anthropomorphizable robots. So much!

Moses Kagan: I’ll take the other side of this.

Lots of marriages going to be improved by cheap, 24 hr robot domestic help.

SMB Attorney (disproving Rettek by offering a worse take): Should those marriages be saved?

Moses Kagan: Have you ever been divorced?!

SMB Attorney (digging deeper than we thought possible): You talking this week or ever?

I would find it very surprising if, were this to become highly affordable and capable of doing household chores well, it didn’t become the default to have one. And I think Robert is super on point, having robots that can do arbitrary ‘normal’ physical tasks will be a complete lifestyle game changer, even if they are zero percent ‘creative’ in any way and have to be given specific instructions.

Frankly I’d be tempted to buy one even if literally all it could do was dance.

Joe Weisenthal: It’s really surprising OpenAI was founded in California, when places like Tennessee and North Carolina have friendlier business climates.

A general reminder that Congress is attempting to withdraw even existing subsidies to building more electrical power capacity. If we are hard enough up for power to even consider putting giant data centers in the UAE, the least we could do is not this?

Alasdair Phillips-Robins and Sam Winter-Levy write a guide to knowing whether the AI Chips deal was actually good. As I said last week, the devil is in the details. Everything they mention here falls under ‘the least you could do,’ I think we can and must do a lot better than this before I’d be fine with a deal of this size. What I especially appreciate is that giving UAE/KSA the chips should be viewed as a cost, that we pay in order to extract other concessions, even if they aren’t logically linked. Freezing China out of the tech stack is part of the deal, not a technical consequence of using our chips, the same way that you could run Gemma or Llama on Huawei chips.

It’s insane I have to keep quoting people saying this, but here we are:

Divyansh Kaushik: find the odd one out.

Peter Wildeford: NVIDIA: Export controls are a failure (so let us sell chips to the CCP military so they can develop AI models)

Reality: export controls are the main thing holding CCP domestic AI back

David Sacks attempts to blame our failure to Build, Baby, Build on the Biden Administration, in a post with improved concreteness. I agree that Biden could have been much better at turning intention into results, but what matters is what we do now. When Sacks says the Trump administration is ‘alleviating the bottlenecks’ what are we actually doing here to advance permitting reform and energy access?

Everyone seems to agree on this goal, across the aisle, so presumably we have wide leeway to not only issue executive orders and exemptions, but to actually pass laws. This seems like a top priority.

The other two paragraphs are repetition of previous arguments, that lead to questions we need better answers to. A central example is whether American buildout of data centers is actually funding constrained. If it is, we should ask why but welcome help with financing. If it isn’t, we shouldn’t be excited to have UAE build American data centers, since they would have been built anyway.

And again with ‘Huawei+DeepSeek,’ what exactly are you ‘selling’ with DeepSeek? And exactly what chips is China shipping with Huawei, and are they indeed taking the place of potential data centers in Beijing and Shanghai, given their supply of physical chips is a limiting factor? And if China can build [X] data centers anywhere, should it concern us if they do it in the UAE over the PRC? Why does ‘the standard’ here matter when any chip can run any model or task, you can combine any set of chips, and model switching costs are low?

In his interview with Ross Douthat, VP Vance emphasized energy policy as the most important industrial policy for America, and the need to eliminate regulatory barriers. I agree, but until things actually change, that is cheap talk. Right now I see a budget that is going to make things even worse, and no signs of meaningfully easing permitting or other regulatory barriers, or that this is a real priority of the administration. He says there is ‘a lot of regulatory relief’ in the budget but I do not see the signs of that.

If we can propose, with a straight face, an outright moratorium on enforcing any and all state bills about AI, how about a similar moratorium on enforcing any and all state laws restricting the supply of electrical power? You want to go? Let’s f***ing go.

We now have access to a letter that OpenAI sent to California Attorney General Rob Bonta.

Garrison Lovely: The previously unreported 13-page letter — dated May 15 and obtained by Obsolete — lays out OpenAI’s legal defense of its updated proposal to restructure its for-profit entity, which can still be blocked by the California and Delaware attorneys general (AGs). This letter is OpenAI’s latest attempt to prevent that from happening — and it’s full of surprising admissions, denials, and attacks.

What did we learn that we didn’t previously know, about OpenAI’s attempt to convert itself into a PBC and sideline the nonprofit without due compensation?

First of all, Garrison Lovely confirms the view Rob Wiblin and Tyler Whitmer have, going in the same direction I did in my initial reaction, but farther and with more confidence that OpenAI was indeed up to no good.

Here is his view on the financing situation:

The revised plan appears designed to placate both external critics and concerned investors by maintaining the appearance of nonprofit control while changing its substance. SoftBank, which recently invested $30 billion in OpenAI with the right to claw back $10 billion if the restructuring didn’t move forward, seems unfazed by the company’s new proposal — the company’s finance chief said on an earnings call that from SoftBank’s perspective, “nothing has really changed.”

The letter from OpenAI’s lawyers to AG Bonta contains a number of new details. It says that “many potential investors in OpenAI’s recent funding rounds declined to invest” due to its unusual governance structure — directly contradicting Bloomberg’s earlier reporting that OpenAI’s October round was “oversubscribed.”

There is no contradiction here. OpenAI’s valuation in that round was absurdly low if you had been marketing OpenAI as a normal corporation. A substantial price was paid. They did fill the round to their satisfaction anyway with room to spare, at this somewhat lower price and with a potential refund offer. This was nominally conditional on a conversion, but that’s a put that is way out of the money. OpenAI’s valuation has almost doubled since then. What is SoftBank going to do, ask for a refund? Of course nothing has changed.

The most important questions about the restructuring are: What will the nonprofit actually have the rights to do? And what obligations to the nonprofit mission will the company and its board have?

The letter resolves a question raised in recent Bloomberg reporting: the nonprofit board will have the power to fire PBC directors.

The document also states that “The Nonprofit will exchange its current economic interests in the Capped-Profit Enterprise for a substantial equity stake in the new PBC and will enjoy access to the PBC’s intellectual property and technology, personnel, and liquidity…” This suggests the nonprofit would no longer own or control the underlying technology but would merely have a license to it — similar to OpenAI’s commercial partners.

A ‘substantial stake’ is going to no doubt be a large downgrade in their expected share of future profits, the question is how glaring a theft that will be.

The bigger concern is control. The nonprofit board will go from full direct control to the ability to fire PBC directors. But the power to fire the people who decide X is very different from directly deciding X, especially in a rapidly evolving scenario, and when the Xs have an obligation to balance your needs with the maximization of profits. This is a loss of most of the effective power of the nonprofit.

Under the current structure, OpenAI’s LLC operating agreement explicitly states that “the Company’s duty to this mission and the principles advanced in the OpenAI, Inc. Charter take precedence over any obligation to generate a profit.” This creates a legally binding obligation for the company’s management.

In contrast, under the proposed structure, PBC directors would be legally required to balance shareholder interests with the public benefit purpose. The ability to fire PBC directors does not change their fundamental legal duties while in office.

So far, no Delaware PBC has ever been held liable for failing to pursue its mission — legal scholars can’t find a single benefit‑enforcement case on the books.

The way I put this before was: The new arrangement helps Sam Altman and OpenAI do the right thing if they want to do the right thing. If they want to do the wrong thing, this won’t stop them.

As Tyler Whitmer discusses on 80,000 Hours, it is legally permitted to write into the PBC’s founding documents that the new company will prioritize the nonprofit mission. It sounds like they do not intend to do that.

OpenAI has, shall we say, not been consistently candid here. The letter takes a very hard stance against all critics while OpenAI took a public attitude of claiming cooperation and constructive dialogue. It attempts to rewrite the history of Altman’s firing and rehiring (I won’t rehash those details here). It claims ‘the nonprofit board is stronger than ever’ (lol, lmao even). It claims that when the letter ‘Not For Private Gain’ said OpenAI planned to eliminate nonprofit control that this was false, while their own letter elsewhere admitted this was indeed exactly OpenAI’s plan, and then when they announced their change in plans characterized the change as letting the board remain in control, thus admitting this again, while again falsely claiming the board would retain its control.

Garrison also claims that OpenAI is fighting dirty against its critics beyond the contents of the letter, such as implying they are working with Elon Musk when OpenAI had no reason to think this was the case, and indeed I am confident it is not true.

Yoshua Bengio TED talk on his personal experience fighting AI existential risk.

Rowan Cheung interviews Microsoft CEO Satya Nadella, largely about agents.

Demis Hassabis talks definitions of AGI. If the objection really is ‘a hole in the system’ and a lack of consistency in doing tasks, then who among us is a general intelligence?

As referenced in the previous section, Rob Wiblin interviews litigator Tyler Whitmer of the Not For Private Gain coalition. Tyler explains that by default OpenAI’s announcement that ‘the nonprofit will retain control’ means very little, ‘the nonprofit can fire the board’ is a huge downgrade from their current direct control, this would abrogate all sorts of agreements. In a truly dangerous scenario, having to go through courts or otherwise act retroactively comes too late. And we can’t even be assured the ‘retaining control’ means even this minimal level of control.

This is all entirely unsurprising. We cannot trust OpenAI on any of this.

The flip side of the devil being in the details is that, with the right details, we can fight to get better details, and with great details, in particular writing the non-profit mission in as a fiduciary duty of the board of the new PBC, we can potentially do well. It is our job to get the Attorneys General to hold OpenAI to account and ensure the new arrangement has teeth.

Ultimately, given what has already happened, the best case likely continues to mostly be ‘Sam Altman has effective permission to do the right thing if he chooses to do it, rather than being legally obligated to do the wrong thing.’ It’s not going to be easy to do better than that. But we can seek to at least do that well.

Kevin Roose reflects on Sydney, and how we should notice how epic the fails can be, even from companies like Microsoft.

Will OpenAI outcompete startups? Garry Tan, the head of YC, says no. You have to actually build a business that uses the API well, if you do there’s plenty of space in the market. For now I agree. I would be worried that this is true right up until it isn’t.

You’d be surprised who might read it.

In the case of Situational Awareness, it would include Ivanka Trump.

In the case of AI 2027, it would be Vice President JD Vance, as he revealed among the other things he said in a recent interview with Ross Douthat that was mostly about immigration.

Patrick McKenzie: Another win for the essay meta.

(Object level politics aside: senior politicians and their staff are going to have an information diet whether you like them or not. Would you prefer it to be you or the replacement rate explainer from Vox or a CNBC talking head?)

It is true that I probably should be trying harder to write things in this reference class. I am definitely writing some things with a particular set of people, or in some cases one particular person, in mind. But the true ‘essay meta’ is another level above that.

What else did Vance say about AI in that interview?

First, in response to being asked, he talks about jobs, and wow, where have I heard these exact lines before about how technology always creates jobs and the naysayers are always wrong?

Vance: So, one, on the obsolescence point, I think the history of tech and innovation is that while it does cause job disruptions, it more often facilitates human productivity as opposed to replacing human workers. And the example I always give is the bank teller in the 1970s. There were very stark predictions of thousands, hundreds of thousands of bank tellers going out of a job. Poverty and commiseration.

What actually happens is we have more bank tellers today than we did when the A.T.M. was created, but they’re doing slightly different work. More productive. They have pretty good wages relative to other folks in the economy.

I tend to think that is how this innovation happens. You know, A.I.

I consider that a zombie argument in the context of AI, and I agree (once again) that up to a point when AI takes over some jobs we will move people to other jobs, the same way bank tellers transitioned to other tasks, and all that. But once again, the whole problem is that when the AI also takes the new job you want to shift into, when a critical mass of jobs get taken over, and when many or most people can’t meaningfully contribute labor or generate much economic value, this stops working.

Then we get into territory that’s a lot less realistic.

Vance: Well, I think it’s a relatively slow pace of change. But I just think, on the economic side, the main concern that I have with A.I. is not of the obsolescence, it’s not people losing jobs en masse.

You hear about truck drivers, for example. I think what might actually happen is that truck drivers are able to work more efficient hours. They’re able to get a little bit more sleep. They’re doing much more on the last mile of delivery than staring at a highway for 13 hours a day. So they’re both safer and they’re able to get higher wages.

I’m sorry, what? You think we’re going to have self-driving trucks, and we’re not going to employ fewer truck drivers?

I mean, we could in theory do this via regulation, by requiring there be a driver in the car at all times. And of course those truck drivers could go do other jobs. But otherwise, seriously, who are you kidding here? Is this a joke?

I actually agree with Vance that economic concerns are highly secondary here, if nothing else we can do redistribution or in a pinch create non-productive jobs.

So let’s move on to Vance talking about what actually bothers him. He focuses first on social problems, the worry of AI as placebo dating app on steroids.

Vance: Where I really worry about this is in pretty much everything noneconomic? I think the way that people engage with one another. The trend that I’m most worried about, there are a lot of them, and I actually, I don’t want to give too many details, but I talked to the Holy Father about this today.

If you look at basic dating behavior among young people — and I think a lot of this is that the dating apps are probably more destructive than we fully appreciate. I think part of it is technology has just for some reason made it harder for young men and young women to communicate with each other in the same way. Our young men and women just aren’t dating, and if they’re not dating, they’re not getting married, they’re not starting families.

There’s a level of isolation, I think, mediated through technology, that technology can be a bit of a salve. It can be a bit of a Band-Aid. Maybe it makes you feel less lonely, even when you are lonely. But this is where I think A.I. could be profoundly dark and negative.

I don’t think it’ll mean three million truck drivers are out of a job. I certainly hope it doesn’t mean that. But what I do really worry about is does it mean that there are millions of American teenagers talking to chatbots who don’t have their best interests at heart? Or even if they do have their best interests at heart, they start to develop a relationship, they start to expect a chatbot that’s trying to give a dopamine rush, and, you know, compared to a chatbot, a normal human interaction is not going to be as satisfying, because human beings have wants and needs.

And I think that’s, of course, one of the great things about marriage in particular, is you have this other person, and you just have to kind of figure it out together. Right? But if the other person is a chatbot who’s just trying to hook you to spend as much time on it, that’s the sort of stuff that I really worry about with A.I.

It seems weird to think that the three million truck drivers will still be driving trucks after those trucks can drive themselves, but that’s a distinct issue from what Vance discusses here. I do think Vance is pointing to real issues here, with no easy answers, and it’s interesting to see how he thinks about this. In the first half of the interview, he didn’t read to me like a person expressing his actual opinions, but here he does.

Then, of course, there’s the actual big questions.

Vance: And then there’s also a whole host of defense and technology applications. We could wake up very soon in a world where there is no cybersecurity. Where the idea of your bank account being safe and secure is just a relic of the past. Where there’s weird shit happening in space mediated through A.I. that makes our communications infrastructure either actively hostile or at least largely inept and inert. So, yeah, I’m worried about this stuff.

I actually read the paper of the guy that you had on. I didn’t listen to that podcast, but ——

Douthat: If you read the paper, you got the gist.

Those are indeed good things to worry about. And then it gets real, and Vance seems to be actually thinking somewhat reasonably about the most important questions, although he’s still got a way to go?

Douthat: Last question on this: Do you think that the U.S. government is capable in a scenario — not like the ultimate Skynet scenario — but just a scenario where A.I. seems to be getting out of control in some way, of taking a pause?

Because for the reasons you’ve described, the arms race component ——

Vance: I don’t know. That’s a good question.

The honest answer to that is that I don’t know, because part of this arms race component is if we take a pause, does the People’s Republic of China not take a pause? And then we find ourselves all enslaved to P.R.C.-mediated A.I.?

Fair enough. Asking for a unilateral pause is a rough ask if you take the stakes sufficiently seriously, and think things are close enough that if you pause you would potentially lose. But perhaps we can get into a sufficiently strong position, as we do in AI 2027. Or we can get China to follow along, which Vance seems open to. I’ll take ‘I’d do it if it was needed and China did it too’ as an opening bid, so long as we’re willing to actually ask. It’s a lot better than I would have expected – he’s taking the situation seriously.

Vance: One thing I’ll say, we’re here at the Embassy in Rome, and I think that this is one of the most profound and positive things that Pope Leo could do, not just for the church but for the world. The American government is not equipped to provide moral leadership, at least full-scale moral leadership, in the wake of all the changes that are going to come along with A.I. I think the church is.

This is the sort of thing the church is very good at. This is what the institution was built for in many ways, and I hope that they really do play a very positive role. I suspect that they will.

It’s one of my prayers for his papacy, that he recognizes there are such great challenges in the world, but I think such great opportunity for him and for the institution he leads.

If the Pope can help, that’s great. He seems like a great dude.


A zombie challenge that refuses to go away is ‘these people couldn’t possibly believe the claims they are making about AI, if they did they would be doing something about the consequences.’

I understand why you would think that. But no. They wouldn’t. Most of these people really do believe the things they are saying about AI maybe killing everyone or disempowering humanity, and very definitely causing mass unemployment, and their answer is ‘that’s not my department.’

The originating example here is one of the most sympathetic, because (1) he is not actively building it, (2) he is indeed working in another also important department, and (3) you say having unlimited almost free high quality doctors and teachers like it’s a bad thing and assume I must mean the effect on jobs rather than the effect on everyone getting education and health care.

Unusual Whales: Bill Gates says a 2-day work week is coming in just 10 years, thanks to AI replacing humans ‘for most things,’ per FORTUNE.

Today, proficiency in medicine and teaching is “rare,” Gates noted, saying those fields depend on “a great doctor” or “a great teacher.” But in the next 10 years, he said, “great medical advice [and] great tutoring” will be widely accessible and free, thanks to advances in AI.

Bill Gates says AI will replace doctors and teachers in 10 years.

James Rosen-Birch: The people who make these claims don’t believe it in any meaningful way.

If they did, there would be a lot more emphasis on building the social safety nets and mechanisms of redistribution to make it possible. And support for a slow tapering of work hours.

But there isn’t.

Kelsey Piper: I think this is too optimistic. there are people who I believe sincerely think they’ll displace almost all jobs by automation and are just going “and it’s not my job to figure out what happens after that” or “well if the AIs do kill us all at least we had a good run”

it’s tempting to call people insincere about their beliefs when they are taking what seem to be unreasonable risks given their beliefs but I think reasonably often they’re sincere and just not sure what to do about it.

Catherine: i think it is underestimated how often solvable problems become intractable because everyone in a position to do anything about them goes “oh well I’ll pass off the hot potato to the next guy by then!”

I do think Bill Gates, given he’s noticed for a long time that we’re all on track to die, should have pivoted (and still could pivot!) a substantial portion of his foundation towards AI existential risk and other AI impacts, as the most important use of marginal funds. But I get it, and that’s very different from when similar talk comes from someone actively working to create AGI.

Emmett Shear: The blindingly obvious proposition is that a fully independently recursive self-improving AI would be the most powerful [tool or being] ever made and thus also wildly dangerous.

The part that can be reasonably debated is how close we are to building such a thing.

Tyler Cowen clarifies (if I’m parsing this correctly) that he doesn’t think it’s crazy to think current AIs might be conscious, but that it is crazy to be confident that they are conscious, and that he strongly thinks that they are not (at least yet) conscious. I notice I continue to be super confused about consciousness (including in humans) but to the extent I am not confused I agree with Tyler here.

A good way of describing how many people are, alas, thinking we will create superintelligence and then have it all work out. Gabriel explains some reasons why that won’t work.

Gabriel: There is an alignment view that goes:

– LLMs look nice

– This means they are aligned

– If we use them to align further AIs, they’ll be aligned too

– We can do this up to superintelligence

In this article, I explain why this view is wrong.

There are many definitions for alignment. The one that I use is “An entity is aligned with a group of people if it reliably acts in accordance with what’s good for the group”.

What’s good might be according to a set of goals, principles, or interests.

The system might be an AI system, a company, markets, or some group dynamics.

Intention Alignment is more of an intuition than a well-defined concept. But for the purpose of this article, I’ll define it as “An entity is aligned in its intentions with a group of people if it wants good things for the group”.

The core thing to notice is that they are different concepts. Intention Alignment is not Alignment.

[because] Figuring out what’s good for someone is hard, even after identifying what’s good, finding out the best way to achieve it is hard, what’s good for a complex entity is multi-faceted, managing the trade-offs is hard, and ensuring that “good” evolves in a good way is hard.

[also] intention alignment is vague.

The Niceness Amplification Alignment Strategy is a cluster of strategies that all aim to align superintelligence (which is also sometimes called superalignment).

This strategy starts with getting an AGI to want to help us, and to keep wanting to help us as it grows to ASI. That way, we end up with an ASI that wants to help us and everything goes well.

There are quite a few intuitions behind this strategy.

  1. We, as humans, are far from solving ASI Alignment. We cannot design an ASI system that is aligned. Thus we should look for alternatives.

  2. Current AI systems are aligned enough to prevent catastrophic failures, and they are so because of their intentions.

  3. Without solving any research or philosophical problem, through mere engineering, there is a tractable level of intention alignment that we can reach to have AIs align the intentions of the next generations of AIs.

  4. We can do so all the way to ASI, and end up with an ASI aligned in its intentions.

  5. An ASI that is aligned in its intentions is aligned period.

[Gabriel agrees with #1 and #5, but not #2, #3 or #4].

I think there are also major caveats on #5 unless we are dealing with a singleton. Even on the others, his explanations are good objections, but I think you can go a lot farther in explaining why these intentions are not the coherent or reliable thing people imagine, or something one can pass on without degrading quality with each iteration, and so on. And more than that, in explaining why this general ‘as long as the vibes are good the results will be good’ thing (even if you call it something else) isn’t part of the reality-based community.

Connor Leahy: This quite accurately represents my view on why ~all current “alignment” plans do not work.

For your consideration:

Nick Whitaker: There is a funny leftist critique of tech that it’s all reprehensible trans-humanist succession planning, except the one field that is outwardly doing trans-humanist succession planning, which is fake because the tech occasionally makes mistakes.

Parmy Olson entitles her latest opinion piece on AI “AI Sometimes Deceives to Survive. Does Anybody Care?” and the answer is mostly no, people don’t care. They think it’s cute. As she points out while doing a remarkably good summary of various alignment issues given the post is in Bloomberg, even the most basic precautionary actions around transparency for frontier models are getting killed, as politicians decide that all that matters is ‘race,’ ‘market share’ and ‘beat China.’

Daniel Kokotajlo is correct that ‘the superintelligent robots will do all the work and the humans will lay back and sip margaritas and reap the benefits’ expectation is not something you want to be counting on as a default. Not that it’s impossible that things could turn out that way, but it sure as hell isn’t a default.

Indeed, if this is our plan, we are all but living in what I refer to as Margaritaville – a world sufficiently doomed, where some people say there’s a woman to blame but you know it’s your own damn fault, that honestly at this point you might as well use what time you have to listen to music and enjoy some margaritas.

What’s an example of exactly that fallacy? I notice that in Rob Henderson’s quote and link here the article is called ‘how to survive AI’ which implies that without a good plan there is danger that you (or all of us) won’t, whereas the currently listed title of the piece by Tyler Cowen and Avital Balwit is actually ‘AI will change what it means to be human. Are you ready?’ with Bari Weiss calling it ‘the most important essay we have run so far on the AI revolution.’

This essay seems to exist in the strange middle ground of taking AI seriously without taking AI seriously.

Tyler Cowen and Avital Balwit: Are we helping create the tools of our own obsolescence?

Both of us have an intense conviction that this technology can usher in an age of human flourishing the likes of which we have never seen before. But we are equally convinced that progress will usher in a crisis about what it is to be human at all.

AI will not create an egalitarian utopia. One thing that living with machines cannot change is our nature…Since we will all be ranked below some other entity on intelligence, we will need to find new and different outlets for status competition.

I mean, yes, obviously we are helping create the tools of our own obsolescence, except that they will no longer be something we should think about as ‘tools.’ If they stay merely ‘tools of our own obsolescence’ but still ‘mere tools’ and humans do get to sit back and sip their margaritas and search for meaning and status, then this kind of essay makes sense.

As in, this essay is predicting that humans will share the planet with minds that are far superior to our own, that we will be fully economically obsolete except for actions that depend on other humans seeing that you are human and doing things as a human. But of course humans will stay fully in control and continue to command increasingly rich physical resources, and will prosper if we can only ‘find meaning.’

If you realize these other superintelligent minds probably won’t stay ‘mere tools,’ and certainly won’t do that by default, and that many people will find strong reasons to make them into (or allow them to become) something else entirely, then you also realize that no you won’t be able to spend your time sipping margaritas and playing status games that are unanchored to actual needs.

Demoralization is the central problem in exactly the scenario Kokotajlo warns us not to expect, where superintelligent AI serves us and makes our lives physically amazing and prosperous but potentially robs them of their meaning.

But you know what? I am not worried about what to do in that scenario! At all. Because if we get to that scenario, it will contain superintelligent AIs. Those superintelligent AIs can then ‘do our homework’ to allow us to solve for meaning, however that is best done. It is a problem we can solve later.

Any problem that can be solved after superintelligence is only a problem if it runs up against limits in the laws of physics. So we’ll still have problems like ‘entropy and the heat death of the universe’ or ‘the speed of light puts most matter out of reach.’ If it’s things like ‘how does a human find a life of meaning given we are rearranging the atoms the physically possible best way we can imagine with this goal in mind?’ then rest, Neo. The answers are coming.

Whereas we cannot rest on the question of how to get to that point, and actually survive AI while remaining in control and having the atoms get rearranged for our benefit in line with goals we would endorse on reflection, and not for some other purpose, or by the result of AIs competing against each other for resources, or for some unintended maximalist goal, or to satisfy only a small group of anti-normative people, or some harmful or at least highly suboptimal ideology, or various other similar failure modes.

There is perhaps a middle ground short term problem. As in, during a transition period, there may come a time when AI is doing enough of the things that meaning is difficult to retain for many or even most people, but we have not yet gained the capabilities that will later fully solve this. That might indeed get tricky. But in the grand scheme it doesn’t worry me.

It is amazing that The New York Times keeps printing things written by Cade Metz. As always, my favorite kind of terrible AI article is ‘claims that AI will never do [thing that AI already does].’

Cade Metz (NYT, The Worst, also wrong): And scientists have no hard evidence that today’s technologies are capable of performing even some of the simpler things the brain can do, like recognizing irony or feeling empathy. Claims of A.G.I.’s imminent arrival are based on statistical extrapolations — and wishful thinking.

According to various benchmark tests, today’s technologies are improving at a consistent rate in some notable areas, like math and computer programming. But these tests describe only a small part of what people can do.

Humans know how to deal with a chaotic and constantly changing world. Machines struggle to master the unexpected — the challenges, both small and large, that do not look like what has happened in the past. Humans can dream up ideas that the world has never seen. Machines typically repeat or enhance what they have seen before.

AI is already superhuman at recognizing irony, and at expressing empathy in practice in situations like doctor bedside manner. Humans ‘typically repeat or enhance what they have seen before’ or do something stupider than that.

“The technology we’re building today is not sufficient to get there,” said Nick Frosst, a founder of the A.I. start-up Cohere who previously worked as a researcher at Google and studied under the most revered A.I. researcher of the last 50 years.

Guess who ‘the most revered A.I. researcher’ this refers to is?

Alexander Berger: It’s a bit funny to hype up the authority of this “AGI is not imminent” person by pointing out that he studied under Geoffrey Hinton, who is now ~100% focused on ~imminent risks from AGI

The reference link for ‘studied under’ is about how Hinton was quitting Google to spend his remaining time warning about the threat of AI superintelligence killing everyone. These people really just do not care.

Beyond that, it’s like a greatest hits album of all the relevant zombie arguments, presented as if they were overwhelming rather than a joke.

Here is a thread with Eliezer righteously explaining, as he often does, why the latest argument that humans will survive superintelligent AI is incorrect, including linking back to another.

Is it wrong to title your book ‘If Anyone Builds It, Everyone Dies’ if you are not willing to say that if anyone builds it, 100% no matter what, everyone dies? Xlr8harder asked if Eliezer is saying p(doom | AGI) = 1, and Eliezer quite correctly pointed out that this is a rather ludicrous Isolated Demand for Rigor, and book titles are short, which is (one reason) why they almost never include probabilities in their predictions. Later in one part of the thread they reached sufficient clarity that xlr8harder agreed that Eliezer was not, in practice, misrepresenting his epistemic state.

The far more common response of course is to say some version of ‘by everyone dies you must mean the effect on jobs’ or ‘by everyone dies you are clearly being hyperbolic to get our attention’ and, um, no.

Rob Bensinger: “If Anyone Builds It, Everyone Dies: Why Superintelligent AI Would Kill Us All: No Really We Actually Mean It, This Is Not Hyperbole (Though It Is Speaking Normal Colloquial English, Not Mathematical-Logician, It’s Not A Theorem)” by Eliezer Yudkowsky and Nate Soares.

Hell, that’s pretty close to what the book website says:

Book Website (from the book): If any company or group, anywhere on the planet, builds an artificial superintelligence using anything remotely like current techniques, based on anything remotely like the present understanding of AI, then everyone, everywhere on Earth, will die.

We do not mean that as hyperbole. We are not exaggerating for effect. We think that is the most direct extrapolation from the knowledge, evidence, and institutional conduct around artificial intelligence today. In this book, we lay out our case, in the hope of rallying enough key decision-makers and regular people to take AI seriously. The default outcome is lethal, but the situation is not hopeless; machine superintelligence doesn’t exist yet, and its creation can yet be prevented.

Sean: …I presume you’re talking about the impact on jobs.

… The “Everyone dies” claim appears to be referencing the song “Kill the Boer”, which-

As a Wise Academic Elder, I can tell you this is Clearly a Psyops by Yudkowsky and Soares to make AI sound more cool and sell more AI to AI buyers. Because telling people AI will kill everyone is a super good marketing strategy in my view as an academic w no idea about money.

…What Bensinger NEGLECTS to mention is that we’re all dying a little bit every day, so we’ll all die whether we build it or not! Maximum gotcha 100 points to me.

FFS people we need to STOP talking about why AI will kill everyone and START talking about the fact that training a frontier LLM uses as much water as running an average McDonalds franchise for 2 hrs 32 minutes. Priorities ppl!!!

Can we PLEASE talk about how killing everyone erases the lived experience of indigenous peoples from the face of the computronium sphere.

I kind of hate that the “bUt WhAt AbOuT cApItAlIsM” people kind of have a point on this one.

Nonsense! As I demonstrated in my 1997, 2004, 2011 and 2017 books, Deep Learning Is Hitting A Wall.


Here is another case from the top thread in which Eliezer is clearly super frustrated, and I strive not to talk in this way, but the fact remains that he is not wrong (conversation already in progress, you can scroll back up first for richer context but you get the idea), first some lead-in to the key line:

Eliezer Yudkowsky: Sorry, explain to me again why the gods aren’t stepping on the squishy squirrels in the course of building their factories? There was a tame slave-mind over slightly smarter than human which built a bomb that would destroy the Solar System, if they did? Is that the idea?

Kas.eth: The ‘gods’ don’t step on the squishy squirrels because they are created as part of an existing civilization that contains not only agents like them (and dumber than them) but also many advanced “systems” that are not agents themselves, but which are costly to dismantle (and that happen to protect some rights of dumber pre-existing agents like the ‘squirrels’).

The ‘gods’ could coordinate to destroy all existing systems and rebuild all that is needed from scratch to get 100% of whatever resources are left for themselves, but that would destroy lots of productive resources that are instrumentally useful for lots of goals including the goals of the gods. The systems are ‘defended’ in the local cost-benefit sense: a system that controls X units of resources ensures Y>X resources will be wasted before control is lost (your bomb scenario is Y>>X, which is not needed and ultra-high Y/X ratios will probably not be allowed).

What systems are considered ‘secure’ at a time depend on the technology levels and local prices of different resources. It seems plausible to me for such systems to exist at all levels of technology, including at the final one where the unit of resources is free energy, and the dissipation-defense property holds for some construction by theoretical physics.

And here’s the line that, alas, summarizes so much of discourse that keeps happening no matter how little sense it makes:

Eliezer Yudkowsky: A sophisticated argument for why gods won’t squish squirrels: Minds halfway to being gods, but not yet able to take squirrels in a fight, will build mighty edifices with the intrinsic property of protecting squirrels, which later gods will not want to pay to tear down or rebuild.

Basically all sophisticated arguments against ASI ruin are like this, by the way.

I’ve heard this particular one multiple times, from economists convinced that the “powerful entities squish us” scenario just has to have some clever hidden flaw where it fails to add in a term.

No, I am not an undergrad who’s never heard of comparative advantage.

That’s a reasonable lead-in to David Brin offering his latest ‘oh this is all very simple, you fools’ explanation of AI existential risks and loss of control risks, or what he calls the ‘Great Big AI Panic of 2025,’ as if there was a panic (there isn’t) or even as much panic as there was in previous years (2023 had if anything more panic). Eliezer Yudkowsky, whom he addresses later, not only is not panicking nor calling for what Brin says he is calling for, he has been raising this alarm since the 2000s.

To his great credit, Brin acknowledges that it would be quite easy to screw all of this up, and that we will be in the position of the ‘elderly grandpa with the money’ who doesn’t understand these young whippersnappers or what they are talking about, and he points out a number of the problems we will face. But he says you are all missing something simple and thus there is a clear solution, which is reciprocal accountability and the tendency of minds to be individuals combined with positive-sum interactions, so all you have to do is set up good incentives among the AIs.

And also to his credit, he has noticed that we are really dropping the ball on all this. He finds it ‘mind-boggling’ that no one is talking about ‘applying similar methods to AI,’ which is an indication partly of not paying close enough attention – some people are indeed thinking along similar lines – but more than that of a flaw in his sci-fi thinking, in expecting humans to focus on that kind of answer. It is unlikely we make a dignified real attempt even at that, let alone a well-considered one, even if he were right that this would work and that it is rather obviously the right thing to investigate.

As in, even if there exist good ‘rules of the road’ that would ensure good outcomes, why would you (a sci-fi author) think our civilization would be likely to implement them? Is that what you think our track record suggests? And why would you think such rules would hold long term in a world beyond our comprehension?

The world has lots of positive-sum interactions and the most successful entities in the world do lots of positive-sum trading. That does not mean that fundamentally uncompetitive entities survive such competition and trading, or that the successful entities will have reason to cooperate and trade with you, in particular.

His second half, which is a response to Eliezer Yudkowsky, is a deeply disappointing but unsurprising series of false or irrelevant or associative attacks. It is especially disappointing to see ‘what Eliezer will never, ever be convinced of is [X], which is obviously true’ as if this was clearly about Eliezer thinking poorly and falling for ‘sci-fi cliches’ rather than a suggestion that [X] might be false or (even if [X] is true!) you might have failed to make a strong argument for it.

I can assure David Brin, and everyone else, that Eliezer has many times heard David’s core pitch here, that we can solve AI alignment and AI existential risk via Western Enlightenment values and dynamics, or by ‘raising them as our children.’ Which of course are ‘cliches’ of a different sort. To which Eliezer will reply (with varying details and examples to help illustrate the point): look at the physical situation we are going to face, think about why those solutions have led to good outcomes historically, and reason out what would happen; that is not going to work. And I have yet to see an explanation for how any of this actually physically works out that survives five minutes of thinking.

More generally: It is amazing how many people will say ‘like all technologies, AI will result or not result in [X]’ or ‘like always we can simply do [Y],’ rather than consider whether that makes any physical or logical sense given how AI works, or whether ‘tools created by humans’ is the correct or even a useful reference class in context.

Another conversation that never makes progress:

Rob Bensinger: There’s a lot of morbid excitement about whether the probability of us killing our families w AI is more like 50% or like 80% or 95%, where a saner and healthier discourse would go

“WAIT, THIS IS CRAZY. ALL OF THOSE NUMBERS ARE CLEARLY UNACCEPTABLE. WHAT THE FUCK IS HAPPENING?”

Flo Crivello (founder, GetLindy): A conversation I have surprisingly often:

– (friend:) I’m on the optimistic side. I think there’s only a 10-20% chance we all die because of AI

– Wait, so clearly we must agree that even this is much, much, much too high, and that this warrants immediate and drastic action?

Daniel Faggella: every day

“bro… we don’t need to govern any of this stuff in any way – its only russian roulette odds of killing us all in the next 10-15 years”

like wtf

Flo Crivello: yeah I don’t think people really appreciate what’s at stake

we’ve been handed an insane responsibility by the thousands of generations that came before us — we’re carrying the torch of the human project

and we’re all being so cavalier about it, ready to throw it all away because vibes

Why can we instruct a reasoning model on how to think and have it reflected in the Chain of Thought (CoT)? Brendan seems clearly correct here.

Brendan Long: This post surprised me since if we’re not training on the CoT (@TheZvi’s “Most Forbidden Technique”), why does the AI listen to us when we tell it how to think? I think it’s because reasoning and output come from the same model, so optimization pressure on one applies to both.

Latent Moss: I just realized you can give Gemini instructions for how to think. Most reasoning models ignore those, but Gemini 2.5 actually do.

Several people are asking how to do this: Sometimes it’s easy, just tell it how to format its thinking. Sometimes that doesn’t work, then it helps to reinforce the instruction. Doesn’t always work perfectly though, as you can see:

I tested 3.7 Thinking after I posted this and it works in some cases with that one too. Easier to do / works more often with Gemini though, I would still say.

James Yu: Is this useful?

Latent Moss: I don’t know, but I would guess so, in the general sense that Prompt Engineering is useful, guiding the AI can be useful, a different perspective or approach is sometimes useful. Worth a try.

It seems obviously useful given sufficient skill, it’s another thing you can steer and optimize for a given situation. Also it’s fun.

This works, as I understand it, not only because of optimization pressure, but also context and instructions, and because everything bleeds into everything else. Also known as, why shouldn’t this work? It’s only a question of how strong a prior there is for it to overcome in a given spot.

I also note that this is another example of a way in which one can steer models exactly because they are insufficiently optimized and capable, and are working with limited compute, parameters and data. The model doesn’t have the chops to draw all the distinctions between scenarios, as most humans also mostly don’t, thus the bleeding of all the heuristics into places they were not intended for and are not receiving optimization feedback. As the model gets more capable, and becomes more of an expert and more precise, we should expect such spillover effects to shrink and fade away.
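For concreteness, here is a minimal sketch of the kind of steering Latent Moss describes: you tell the model, in plain language, how to structure its thinking, and reinforce the instruction when it gets ignored. The call_model helper below is a hypothetical stand-in for whatever chat API you happen to use; the instruction text is the part that matters.

```python
# Minimal sketch of telling a reasoning model how to format its thinking.
# `call_model` is a hypothetical stand-in for whatever chat API you use;
# the interesting part is the instruction text, per the observation above
# that some models (e.g. Gemini 2.5) will actually follow it.

THINKING_INSTRUCTION = (
    "When you reason before answering, structure your thinking as a numbered "
    "list. Prefix each step with 'Hypothesis:', 'Check:', or 'Conclusion:'. "
    "Keep each step under two sentences."
)

def call_model(system: str, user: str) -> str:
    """Hypothetical helper: send a system + user message to your chosen
    reasoning model and return its reply (including any visible thinking)."""
    raise NotImplementedError("wire this up to your provider's chat API")

def ask_with_steered_thinking(question: str) -> str:
    # Reinforce the instruction in the user turn as well, since a single
    # system-level hint is sometimes ignored.
    user = f"{question}\n\nRemember: {THINKING_INSTRUCTION}"
    return call_model(system=THINKING_INSTRUCTION, user=user)
```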

No, Guyed did not get Grok to access xAI’s internal file system, only the isolated container in which Grok is running. That’s still not great? It shouldn’t give that access, and it means you damn well better only run it in isolated containers?

Claude finds another way to tell people to watch out for [X]-maximizers, where [X] is allowed to be something less stupid than paperclips, calling this ‘non-convergent instrumental goals,’ but what those lead to is… the convergent instrumental goals.

Joining forces with the new Pope, two Evangelical Christians write an open letter warning of the dangers of out-of-control AI and also of course the effect on jobs.

More on our new AI-concerned pope, nothing you wouldn’t already expect, and the concerns listed here are not existential.

There are two keys to saying ‘I will worry when AI can do [X],’ where often AI can already do [X] at the time of the announcement.

The first is to realize when AI can indeed do [X] (again, often that is right now), and then actually worry.

The second is to pick a time when your worries can still do any good, not after that.

Affordance of Effort: I’ll start worrying about AI when it can reproduce the creaking of the wooden stairs of my childhood.

(This’ll happen sooner than expected of course, I’ll just have been processed for my carbon by that point – and whatever undiscovered element is responsible for consciousness).

So, whoops all around, then.

David Krueger: By the time you want to pause AI, it will be too late.

Racing until we can smell superintelligence then pausing is NOT A REALISTIC PROPOSAL, it is a FANTASY.

I don’t understand why people don’t get it.

People in AI safety especially.

Quick way to lose a lot of my respect.

The obvious response is ‘no, actually, pausing without being able to smell superintelligence first is (also?) not a realistic proposal, it is a fantasy.’

It seems highly plausible that the motivation for a pause will come exactly when it becomes impossible to do so, or impossible to do so without doing such immense economic damage that we effectively can’t do it. We will likely get at most a very narrow window to do this.

Thus, what we need to do now is pursue the ability to pause in the future. As in, make it technologically and physically feasible to implement a pause. That means building state capacity, ensuring transparency, researching the necessary technological implementations, laying diplomatic foundations, and so on. All of that is also a good idea for other reasons, to maintain maximum understanding and flexibility, even if we never get close to pressing such a button.

Welcome to interdimensional cable, thanks to Veo 3.

Grok decides that images of Catturd’s dead dog is where it draws the line.

Who would want that?

Ari K: WE CAN TALK! I spent 2 hours playing with Veo 3 @googledeepmind and it blew my mind now that it can do sound! It can talk, and this is all out of the box.

Sridhar Ramesh: This would only be useful in a world where people wanted to watch an endless scroll of inane little video clips, constantly switching every six seconds or so, in nearly metronomic fashion.

Oh. Right.

Sridhar Ramesh (quoting himself from 2023): I am horrified by how much time my children spend rotting their attention span on TikTok. I’ve set a rule that after every fifteen minutes of TikTok, they have to watch one hour of TV.

Also, you will soon be able to string the eight second clips together via extensions.

How it’s going.

Also how it’s going.

We don’t even have humans aligned to human preferences at home.

There is a full blog post, warning the jokes do not get funnier.

Also, did you know that You Can Just Do Math?

Lennart Heim: Yes, we do. It’s ~21GW. [From our paper here.]

You count all the AI chips produced, factor in that they’re running most of the time, add some overhead—and you got your answer. It’s a lot. And will only get more.

But you know what? Probably worth it.
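The math really is that simple. Here is a sketch with illustrative placeholder numbers, not the paper’s actual inputs: some count of H100-class accelerators, an all-in server power per chip, average utilization, and a datacenter overhead (PUE) factor.

```python
# Sketch of the "just do the math" power estimate. All numbers are
# illustrative placeholders, not the inputs from Heim's paper: the point is
# the structure (chips x power x utilization x overhead), not the exact value.

ai_accelerators = 10_000_000  # H100-class chips in service (rough count)
watts_per_chip  = 1_500       # all-in server power per accelerator, W
avg_utilization = 0.8         # fraction of the time they are actually running
datacenter_pue  = 1.3         # cooling / power-delivery overhead factor

total_watts = ai_accelerators * watts_per_chip * avg_utilization * datacenter_pue
print(f"Estimated AI power draw: {total_watts / 1e9:.1f} GW")
# ~15.6 GW with these placeholders -- the same ballpark as the ~21 GW figure.
```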


AI #117: OpenAI Buys Device Maker IO

new-claude-4-ai-model-refactored-code-for-7-hours-straight

New Claude 4 AI model refactored code for 7 hours straight


Anthropic says Claude 4 beats Gemini on coding benchmarks; works autonomously for hours.

The Claude 4 logo, created by Anthropic. Credit: Anthropic

On Thursday, Anthropic released Claude Opus 4 and Claude Sonnet 4, marking the company’s return to larger model releases after primarily focusing on mid-range Sonnet variants since June of last year. The new models represent what the company calls its most capable coding models yet, with Opus 4 designed for complex, long-running tasks that can operate autonomously for hours.

Alex Albert, Anthropic’s head of Claude Relations, told Ars Technica that the company chose to revive the Opus line because of growing demand for agentic AI applications. “Across all the companies out there that are building things, there’s a really large wave of these agentic applications springing up, and a very high demand and premium being placed on intelligence,” Albert said. “I think Opus is going to fit that groove perfectly.”

Before we go further, a brief refresher on Claude’s three AI model “size” names (first introduced in March 2024) is probably warranted. Haiku, Sonnet, and Opus offer a tradeoff between price (in the API), speed, and capability.

Haiku models are the smallest, least expensive to run, and least capable in terms of what you might call “context depth” (considering conceptual relationships in the prompt) and encoded knowledge. Owing to the small size in parameter count, Haiku models retain fewer concrete facts and thus tend to confabulate more frequently (plausibly answering questions based on lack of data) than larger models, but they are much faster at basic tasks than larger models. Sonnet is traditionally a mid-range model that hits a balance between cost and capability, and Opus models have always been the largest and slowest to run. However, Opus models process context more deeply and are hypothetically better suited for running deep logical tasks.

A screenshot of the Claude web interface with Opus 4 and Sonnet 4 options shown. Credit: Anthropic

There is no Claude 4 Haiku just yet, but the new Sonnet and Opus models can reportedly handle tasks that previous versions could not. In our interview with Albert, he described testing scenarios where Opus 4 worked coherently for up to 24 hours on tasks like playing Pokémon, while code-refactoring tasks in Claude Code ran for seven hours without interruption. Earlier Claude models typically lasted only one to two hours before losing coherence, Albert said, meaning that the models could only produce useful self-referencing outputs for that long before they began outputting too many errors.

In particular, that marathon refactoring claim reportedly comes from Rakuten, a Japanese tech services conglomerate that “validated [Claude’s] capabilities with a demanding open-source refactor running independently for 7 hours with sustained performance,” Anthropic said in a news release.

Whether you’d want to leave an AI model unsupervised for that long is another question entirely because even the most capable AI models can introduce subtle bugs, go down unproductive rabbit holes, or make choices that seem logical to the model but miss important context that a human developer would catch. While many people now use Claude for easy-going vibe coding, as we covered in March, the human-powered (and ironically-named) “vibe debugging” that often results from long AI coding sessions is also a very real thing. More on that below.

To shore up some of those shortcomings, Anthropic built memory capabilities into both new Claude 4 models, allowing them to maintain external files for storing key information across long sessions. When developers provide access to local files, the models can create and update “memory files” to track progress and things they deem important over time. Albert compared this to how humans take notes during extended work sessions.
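Anthropic has not published the exact format of these files, so treat the following as a minimal sketch of the general idea rather than the actual mechanism; the file name and fields here are hypothetical:

```python
# Minimal sketch of the "memory file" idea: an agent with local file access
# persists notes it deems important between long work sessions.
# The file name and structure are hypothetical, not Anthropic's actual format.
import json
from datetime import datetime, timezone
from pathlib import Path

MEMORY_PATH = Path("MEMORY.json")  # hypothetical file the agent is allowed to write

def load_memory():
    """Read prior notes, if any, so a new session starts with past context."""
    if MEMORY_PATH.exists():
        return json.loads(MEMORY_PATH.read_text())
    return []

def remember(note, tags=None):
    """Append a timestamped note the model wants to keep across sessions."""
    notes = load_memory()
    notes.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "note": note,
        "tags": tags or [],
    })
    MEMORY_PATH.write_text(json.dumps(notes, indent=2))

# Example: what an agent might record mid-refactor before its context fills up.
remember("Renamed PaymentGateway to BillingClient; tests in tests/billing/ still failing.",
         tags=["refactor", "todo"])
```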

Extended thinking meets tool use

Both Claude 4 models introduce what Anthropic calls “extended thinking with tool use,” a new beta feature allowing the models to alternate between simulated reasoning and using external tools like web search, similar to what OpenAI’s o3 and o4-mini-high AI models currently do in ChatGPT. While Claude 3.7 Sonnet already had strong tool use capabilities, the new models can now interleave simulated reasoning and tool calling in a single response.

“So now we can actually think, call a tool, process the results, think some more, call another tool, and repeat until it gets to a final answer,” Albert explained to Ars. The models self-determine when they have reached a useful conclusion, a capability picked up through training rather than governed by explicit human programming.
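Reduced to a generic sketch, the loop Albert describes looks something like the code below. This is not Anthropic’s actual API; `model_step`, the transcript format, and the tool registry are all hypothetical stand-ins for whatever the model and harness really do:

```python
# Generic sketch of "extended thinking with tool use": the model alternates
# between reasoning and tool calls until it decides it has a final answer.
# model_step, the transcript format, and TOOLS are hypothetical stand-ins,
# not Anthropic's actual API.

def web_search(query):
    """Pretend tool: a real harness would hit an actual search backend."""
    return f"(pretend search results for: {query})"

TOOLS = {"web_search": web_search}

def model_step(transcript):
    """Stand-in for one model turn: returns either a tool request or a final answer."""
    if not any(line.startswith("TOOL RESULT") for line in transcript):
        return {"type": "tool_call", "tool": "web_search",
                "args": {"query": "Claude Opus 4 SWE-bench score"}}
    return {"type": "final_answer", "text": "Anthropic reports 72.5% on SWE-bench for Opus 4."}

def run_agent(prompt, max_turns=10):
    transcript = [f"USER: {prompt}"]
    for _ in range(max_turns):
        step = model_step(transcript)                # think
        if step["type"] == "final_answer":           # the model decides it is done
            return step["text"]
        result = TOOLS[step["tool"]](**step["args"])                   # call a tool
        transcript.append(f"TOOL RESULT ({step['tool']}): {result}")   # process results, think again
    return "Gave up after max_turns."

print(run_agent("What does Opus 4 score on SWE-bench?"))
```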

General Claude 4 benchmark results, provided by Anthropic. Credit: Anthropic

In practice, we’ve anecdotally found parallel tool use capability very useful in AI assistants like OpenAI o3, since they don’t have to rely on what is trained in their neural network to provide accurate answers. Instead, these more agentic models can iteratively search the web, parse the results, analyze images, and spin up coding tasks for analysis in ways that can avoid falling into a confabulation trap by relying solely on pure LLM outputs.

“The world’s best coding model”

Anthropic says Opus 4 leads industry benchmarks for coding tasks, achieving 72.5 percent on SWE-bench and 43.2 percent on Terminal-bench, calling it “the world’s best coding model.” According to Anthropic, companies using early versions report improvements. Cursor described it as “state-of-the-art for coding and a leap forward in complex codebase understanding,” while Replit noted “improved precision and dramatic advancements for complex changes across multiple files.”

In fact, GitHub announced it will use Sonnet 4 as the base model for its new coding agent in GitHub Copilot, citing the model’s performance in “agentic scenarios” in Anthropic’s news release. Sonnet 4 scored 72.7 percent on SWE-bench while maintaining faster response times than Opus 4. The fact that GitHub is betting on Claude rather than a model from its parent company Microsoft (which has close ties to OpenAI) suggests Anthropic has built something genuinely competitive.

Software engineering benchmark results, provided by Anthropic. Credit: Anthropic

Anthropic says it has addressed a persistent issue with Claude 3.7 Sonnet in which users complained that the model would take unauthorized actions or provide excessive output. Albert said the company reduced this “reward hacking behavior” by approximately 80 percent in the new models through training adjustments. An 80 percent reduction in unwanted behavior sounds impressive, but that also suggests that 20 percent of the problem behavior remains—a big concern when we’re talking about AI models that might be performing autonomous tasks for hours.

When we asked about code accuracy, Albert said that human code review is still an important part of shipping any production code. “There’s a human parallel, right? So this is just a problem we’ve had to deal with throughout the whole nature of software engineering. And this is why the code review process exists, so that you can catch these things. We don’t anticipate that going away with models either,” Albert said. “If anything, the human review will become more important, and more of your job as developer will be in this review than it will be in the generation part.”

Pricing and availability

Both Claude 4 models maintain the same pricing structure as their predecessors: Opus 4 costs $15 per million tokens for input and $75 per million for output, while Sonnet 4 remains at $3 and $15. The models offer two response modes: traditional LLM and simulated reasoning (“extended thinking”) for complex problems. Given that some Claude Code sessions can apparently run for hours, those per-token costs will likely add up very quickly for users who let the models run wild.
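For a rough sense of scale, here is a back-of-envelope cost estimate for one of those marathon sessions. The per-million-token prices are Anthropic’s published Opus 4 numbers above; the token throughput is purely an assumption for illustration:

```python
# Rough cost of a long Claude Code session at published Opus 4 API prices.
# Prices come from Anthropic's pricing; the tokens-per-hour figures are guesses.

input_price_per_m = 15.0    # USD per million input tokens (Opus 4)
output_price_per_m = 75.0   # USD per million output tokens (Opus 4)

hours = 7                   # a Rakuten-style marathon refactor
input_tokens_per_hour = 2_000_000   # assumed: re-reading files, diffs, tool output
output_tokens_per_hour = 300_000    # assumed: generated code and reasoning

cost = hours * (
    input_tokens_per_hour / 1e6 * input_price_per_m
    + output_tokens_per_hour / 1e6 * output_price_per_m
)
print(f"${cost:,.0f} for a {hours}-hour run")  # ~$368 with these assumptions
```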

Anthropic made both models available through its API, Amazon Bedrock, and Google Cloud Vertex AI. Sonnet 4 remains accessible to free users, while Opus 4 requires a paid subscription.

The Claude 4 models also debut Claude Code (first introduced in February) as a generally available product after months of preview testing. Anthropic says the coding environment now integrates with VS Code and JetBrains IDEs, showing proposed edits directly in files. A new SDK allows developers to build custom agents using the same framework.

A screenshot of “Claude Plays Pokemon,” a custom application where Claude 4 attempts to beat the classic Game Boy game. Credit: Anthropic

Even with Anthropic’s future riding on the capability of these new models, when we asked about how they guide Claude’s behavior by fine-tuning, Albert acknowledged that the inherent unpredictability of these systems presents ongoing challenges for both them and developers. “In the realm and the world of software for the past 40, 50 years, we’ve been running on deterministic systems, and now all of a sudden, it’s non-deterministic, and that changes how we build,” he said.

“I empathize with a lot of people out there trying to use our APIs and language models generally because they have to almost shift their perspective on what it means for reliability, what it means for powering a core of your application in a non-deterministic way,” Albert added. “These are general oddities that have kind of just been flipped, and it definitely makes things more difficult, but I think it opens up a lot of possibilities as well.”


Benj Edwards is Ars Technica’s Senior AI Reporter and founder of the site’s dedicated AI beat in 2022. He’s also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.


openai-introduces-codex,-its-first-full-fledged-ai-agent-for-coding

OpenAI introduces Codex, its first full-fledged AI agent for coding

We’ve been expecting it for a while, and now it’s here: OpenAI has introduced an agentic coding tool called Codex in research preview. The tool is meant to allow experienced developers to delegate rote and relatively simple programming tasks to an AI agent that will generate production-ready code and show its work along the way.

Codex is a unique interface (not to be confused with the Codex CLI tool introduced by OpenAI last month) that can be reached from the sidebar in the ChatGPT web app. Users enter a prompt and then click either “code” to have it begin producing code, or “ask” to have it answer questions and advise.

Whenever it’s given a task, that task is performed in a distinct container that is preloaded with the user’s codebase and is meant to accurately reflect their development environment.

To make Codex more effective, developers can include an “AGENTS.md” file in the repo with custom instructions, for example to contextualize and explain the code base or to communicate standardizations and style practices for the project—kind of a README.md but for AI agents rather than humans.

Codex is built on codex-1, a fine-tuned variation of OpenAI’s o3 reasoning model that was trained using reinforcement learning on a wide range of coding tasks to analyze and generate code, and to iterate through tests along the way.


openai-adds-gpt-4.1-to-chatgpt-amid-complaints-over-confusing-model-lineup

OpenAI adds GPT-4.1 to ChatGPT amid complaints over confusing model lineup

The release comes just two weeks after OpenAI made GPT-4 unavailable in ChatGPT on April 30. That earlier model, which launched in March 2023, once sparked widespread hype about AI capabilities. Compared to that hyperbolic launch, GPT-4.1’s rollout has been a fairly understated affair—probably because it’s tricky to convey the subtle differences between all of the available OpenAI models.

As if 4.1’s launch wasn’t confusing enough, the release also roughly coincides with OpenAI’s July 2025 deadline for retiring the GPT-4.5 Preview from the API, a model one AI expert called a “lemon.” Developers must migrate to other options, OpenAI says, although GPT-4.5 will remain available in ChatGPT for now.

A confusing addition to OpenAI’s model lineup

In February, OpenAI CEO Sam Altman acknowledged on X his company’s confusing AI model naming practices, writing, “We realize how complicated our model and product offerings have gotten.” He promised that a forthcoming “GPT-5” model would consolidate the o-series and GPT-series models into a unified branding structure. But the addition of GPT-4.1 to ChatGPT appears to contradict that simplification goal.

So, if you use ChatGPT, which model should you use? If you’re a developer using the models through the API, the consideration is more of a trade-off between capability, speed, and cost. But in ChatGPT, your choice might be limited more by personal taste in behavioral style and what you’d like to accomplish. Some of the “more capable” models have lower usage limits as well because they cost more for OpenAI to run.

For now, OpenAI is keeping GPT-4o as the default ChatGPT model, likely due to its general versatility, balance between speed and capability, and personable style (conditioned using reinforcement learning and a specialized system prompt). The simulated reasoning models like o3 and o4-mini-high are slower to execute but can consider analytical-style problems more systematically and perform comprehensive web research that sometimes feels genuinely useful when it surfaces relevant (non-confabulated) web links. Compared to those, OpenAI is largely positioning GPT-4.1 as a speedier AI model for coding assistance.

Just remember that all of the AI models are prone to confabulations, meaning that they tend to make up authoritative-sounding information when they encounter gaps in their trained “knowledge.” So you’ll need to double-check all of the outputs with other sources of information if you’re hoping to use these AI models to assist with an important task.


fidji-simo-joins-openai-as-new-ceo-of-applications

Fidji Simo joins OpenAI as new CEO of Applications

In the message, Altman described Simo as bringing “a rare blend of leadership, product and operational expertise” and expressed that her addition to the team makes him “even more optimistic about our future as we continue advancing toward becoming the superintelligence company.”

Simo becomes the newest high-profile female executive at OpenAI following the departure of Chief Technology Officer Mira Murati in September. Murati, who had been with the company since 2018 and helped launch ChatGPT, left alongside two other senior leaders and founded Thinking Machines Lab in February.

OpenAI’s evolving structure

The leadership addition comes as OpenAI continues to evolve beyond its origins as a research lab. In his announcement, Altman described how the company now operates in three distinct areas: as a research lab focused on artificial general intelligence (AGI), as a “global product company serving hundreds of millions of users,” and as an “infrastructure company” building systems that advance research and deliver AI tools “at unprecedented scale.”

Altman mentioned that as CEO of OpenAI, he will “continue to directly oversee success across all pillars,” including Research, Compute, and Applications, while staying “closely involved with key company decisions.”

The announcement follows recent news that OpenAI abandoned its original plan to cede control of its nonprofit branch to a for-profit entity. The company began as a nonprofit research lab in 2015 before creating a for-profit subsidiary in 2019, maintaining its original mission “to ensure artificial general intelligence benefits everyone.”


openai-claims-nonprofit-will-retain-nominal-control

OpenAI Claims Nonprofit Will Retain Nominal Control

Your voice has been heard. OpenAI has ‘heard from the Attorney Generals’ of Delaware and California, and as a result the OpenAI nonprofit will retain control of OpenAI under their new plan, and both companies will retain the original mission.

Technically they are not admitting that their original plan was illegal and one of the biggest thefts in human history, but that is how you should in practice interpret the line ‘we made the decision for the nonprofit to retain control of OpenAI after hearing from civic leaders and engaging in constructive dialogue with the offices of the Attorney General of Delaware and the Attorney General of California.’

Another possibility is that the nonprofit board finally woke up and looked at what was being proposed and how people were reacting, and realized what was going on.

The letter ‘not for private gain’ that was recently sent to those Attorney Generals plausibly was a major causal factor in any or all of those conversations.

The question is, what exactly is the new plan? The fight is far from over.

  1. The Mask Stays On?.

  2. Your Offer is (In Principle) Acceptable.

  3. The Skeptical Take.

  4. Tragedy in the Bay.

  5. The Spirit of the Rules.

As previously intended, OpenAI will transition their for-profit arm, currently an LLC, into a PBC. They will also be getting rid of the capped profit structure.

However they will be retaining the nonprofit’s control over the new PBC, and the nonprofit will (supposedly) get fair compensation for its previous financial interests in the form of a major (but suspiciously unspecified, other than ‘a large shareholder’) stake in the new PBC.

Bret Taylor (Chairman of the Board, OpenAI): The OpenAI Board has an updated plan for evolving OpenAI’s structure.

OpenAI was founded as a nonprofit, and is today overseen and controlled by that nonprofit. Going forward, it will continue to be overseen and controlled by that nonprofit.

Our for-profit LLC, which has been under the nonprofit since 2019, will transition to a Public Benefit Corporation (PBC)–a purpose-driven company structure that has to consider the interests of both shareholders and the mission.

The nonprofit will control and also be a large shareholder of the PBC, giving the nonprofit better resources to support many benefits.

Our mission remains the same, and the PBC will have the same mission.

We made the decision for the nonprofit to retain control of OpenAI after hearing from civic leaders and engaging in constructive dialogue with the offices of the Attorney General of Delaware and the Attorney General of California.

We thank both offices and we look forward to continuing these important conversations to make sure OpenAI can continue to effectively pursue its mission of ensuring AGI benefits all of humanity. Sam wrote the letter below to our employees and stakeholders about why we are so excited for this new direction.

The rest of the post is a letter from Sam Altman, and sounds like it, you are encouraged to read the whole thing.

Sam Altman (CEO OpenAI): The for-profit LLC under the nonprofit will transition to a Public Benefit Corporation (PBC) with the same mission. PBCs have become the standard for-profit structure for other AGI labs like Anthropic and X.ai, as well as many purpose driven companies like Patagonia. We think it makes sense for us, too.

Instead of our current complex capped-profit structure—which made sense when it looked like there might be one dominant AGI effort but doesn’t in a world of many great AGI companies—we are moving to a normal capital structure where everyone has stock. This is not a sale, but a change of structure to something simpler.

The nonprofit will continue to control the PBC, and will become a big shareholder in the PBC, in an amount supported by independent financial advisors, giving the nonprofit resources to support programs so AI can benefit many different communities, consistent with the mission.

Joshua Achiam (OpenAI, Head of Mission Alignment): OpenAI is, and always will be, a mission-first organization. Today’s update is an affirmation of our continuing commitment to ensure that AGI benefits all of humanity.

I find the structure of this solution not ideal but ultimately acceptable.

The current OpenAI structure is bizarre and complex. It does important good things, some of which this new arrangement will break. But the current structure also made OpenAI far less investable, which means giving away more of the company to profit maximizers, and causes a lot of real problems.

Thus, I see the structural changes, in particular the move to a normal profit distribution, as potentially a fair compromise to enable better access to capital – provided it is implemented fairly, and isn’t a backdoor to further shifts.

The devil is in the details. How is all this going to work?

What form will the nonprofit’s control take? Is it only that they will be a large shareholder? Will they have a special class of supervoting shares? Something else?

This deal is only acceptable if and only if the nonprofit:

  1. Has truly robust control going forward, that is ironclad and that allows it to guide AI development in practice not only in theory. Is this going to only be via voting shares? That would be a massive downgrade from the current power of the board, which already wasn’t so great. In practice, the ability to win a shareholder vote will mean little during potentially crucial fights like a decision whether to release a potentially dangerous model.

    1. What this definitely still does is give cover to management to do the right thing, if they actively want to do that; I’ll discuss this more later.

  2. Gets a fair share of the profits, that matches the value of its previous profit interests. I am very worried they will still get massively stolen from on this. As a reminder, right now most of the net present value of OpenAI’s future profits belongs to the nonprofit.

  3. Uses those profits to advance its original mission rather than turning into a de facto marketing arm or doing generic philanthropy that doesn’t matter, or both.

    1. There are still clear signs that OpenAI is largely planning to have the nonprofit buy AI services on behalf of other charities, or otherwise do things that are irrelevant to the mission. That would make it an ‘ordinary foundation’ combined with a marketing arm, effectively making its funds useless, although it could still act meaningfully via its control mechanisms.

Remember that in these situations, the ratchet only goes one way. The commercial interests will constantly try to wrestle greater control and ownership of the profits away from us. They will constantly cite necessity and expedience to justify this. You’re playing defense, forever. Every compromise improves their position, and this one definitely will compared to doing nothing.

Or: This deal is getting worse and worse all the time.

Or, from Leo Gao:

Quintin Pope: Common mistake. They forgot to paint “Do Not Open” on the box.

There’s also the issue of the extent to which Altman controls the nonprofit board.

The reason the nonprofit needs control is to impact key decisions in real time. It needs control of a form that lets it do that. Because that kind of lever is not ‘standard,’ there will constantly be pressure to get rid of that ability, with threats of mild social awkwardness if these pressures are resisted.

So with love, now that we have established what you are, now it’s time to haggle over the price.

Rob Wiblin had an excellent thread explaining the attempted conversion, and he has another good explainer on what this new announcement means, as well as an emergency 80,000 Hours podcast on the topic that should come out tomorrow.

Consider this the highly informed and maximally skeptical and cynical take. Which, given the track records here, seems like a highly reasonable place to start.

The central things to know about the new plan are indeed:

  1. The transition to a PBC and removal of the profit cap will still shift priorities, legal obligations and incentives towards profit maximization.

  2. The nonprofit’s ‘control’ is at best weakened, and potentially fake.

  3. The nonprofit’s mission might effectively be fake.

  4. The nonprofit’s current financial interests could largely still be stolen.

It’s an improvement, but it might not effectively be all that much of one?

We need to stay vigilant. The fight is far from over.

Rob Wiblin: So OpenAI just said it’s no longer going for-profit and the non-profit will ‘retain control’. But don’t declare victory yet. OpenAI may actually be continuing with almost the same plan & hoping they can trick us into thinking they’ve stopped!

Or perhaps not. I’ll explain:

The core issue is control of OpenAI’s behaviour, decisions, and any AGI it produces.

  1. Will the entity that builds AGI still have a legally enforceable obligation to make sure AGI benefits all humanity?

  2. Will the non-profit still be able to step in if OpenAI is doing something appalling and contrary to that mission?

  3. Will the non-profit still own an AGI if OpenAI develops it? It’s kinda important!

The new announcement doesn’t answer these questions and despite containing a lot of nice words the answers may still be: no.

(Though we can’t know and they might not even know themselves yet.)

The reason to worry is they’re still planning to convert the existing for-profit into a Public Benefit Corporation (PBC). That means the profit caps we were promised would be gone. But worse… the nonprofit could still lose true control. Right now, the nonprofit owns and directly controls the for-profit’s day-to-day operations. If the nonprofit’s “control” over the PBC is just extra voting shares, that would be a massive downgrade as I’ll explain.

(The reason to think that’s the plan is that today’s announcement sounded very similar to a proposal they floated in Feb in which the nonprofit gets special voting shares in a new PBC.)

Special voting shares in a new PBC are simply very different and much weaker than the control they currently have! First, in practical terms, voting power doesn’t directly translate to the power to manage OpenAI’s day-to-day operations – which the non-profit currently has.

If it doesn’t fight to retain that real power, the non-profit could lose the ability to directly manage the development and deployment of OpenAI’s technology. That includes the ability to decide whether to deploy a model (!) or license it to another company.

Second, PBCs have a legal obligation to balance public interest against shareholder profits. If the nonprofit is just a big shareholder with super-voting shares other investors in the PBC could sue claiming OpenAI isn’t doing enough to pursue their interests (more profits)! Crazy sounding, but true.

And who do you think will be more vociferous in pursuing such a case through the courts… numerous for-profit investors with hundreds of billions on the line, or a non-profit operated by 9 very busy volunteers? Hmmm.

In fact in 2019, OpenAI President Greg Brockman said one of the reasons they chose their current structure and not a PBC was exactly because it allowed them to custom-write binding rules including full control to the nonprofit! So they know this issue — and now want to be a PBC. See here.

If this is the plan it could mean OpenAI transitioning from:

• A structure where they must prioritise the nonprofit mission over shareholders

To:

• A new structure where they don’t have to — and may not even be legally permitted to do so.

(Note how it seems like the non-profit is giving up a lot here. What is it getting in return here exactly that makes giving up both the profit caps and true control of the business and AGI the best way to pursue its mission? It seems like nothing to me.)

So, strange as it sounds, this could turn out to be an even more clever way for Sam and profit-motivated investors to get what they wanted. Profit caps would be gone and profit-motivated investors would have much more influence.

And all the while Sam and OpenAI would be able to frame it as if nothing is changing and the non-profit has retained the same control today they had yesterday!

(As an aside it looks like the SoftBank funding round that was reported as requiring a loss of nonprofit control would still go through. Their press release indicates that actually all they were insisting on was that the profit caps are removed and they’re granted shares in a new PBC.

So it sounds like investors think this new plan would transfer them enough additional profits, and sufficiently neuter the non-profit, for them to feel satisfied.)

Now, to be clear, the above might be wrongheaded.

I’m looking at the announcement cynically, assuming that some staff at OpenAI, and some investors, want to wriggle out of non-profit control however they can — because I think we have ample evidence that that’s the case!

The phrase “nonprofit control” is actually very vague, and those folks might be trying to ram a truck through that hole.

At the same time maybe / hopefully there are people involved in this process who are sincere and trying to push things in the right direction.

On that we’ll just have to wait and see and judge on the results.

Bottom line: The announcement might turn out to be a step in the right direction, but it might also just be a new approach to achieve the same bad outcome less visibly.

So do not relax.

And if it turns out they’re trying to fool you, don’t be fooled.

Gretchen Krueger: The nonprofit will retain control of OpenAI. We still need stronger oversight and broader input on whether and how AI is pursued at OpenAI and all the AI companies, but this is an important bar to see upheld, and I’m proud to have helped push for it!

Now it is time to make sure that control is real—and to guard against any changes that make it harder than it already is to strengthen public accountability. The devil is in the details we don’t know yet, so the work continues.

Roon says the quiet part out loud. We used to think it was possible to do the right thing and care about whether AI killed everyone. Now, those with power say, we can’t even imagine how we could have been so naive, let’s walk that back as quickly as we can so we can finally do some maximizing of the profits.

Roon: the idea of openai having a charter is interesting to me. A relic from a bygone era, belief that governance innovation for important institutions is even possible. Interested parties are tasked with performing exegesis of the founding documents.

Seems clear that the “capped profit” mechanism is from a time in which people assumed agi development would be more singular than it actually is. There are many points on the intelligence curve and many players. We should be discussing when Nvidia will require profit caps.

I do not think that the capped profit requires strong assumptions about a singleton to make sense. It only requires that there be an oligopoly where the players are individually meaningful. If you have close to perfect competition and the players have no market power and their products are fully fungible, then yes, of course being a capped profit makes no sense. Although it also does no real harm, your profits were already rather capped in that scenario.

More than that, we have largely lost our ability to actually ask what problems humanity will face, and then ask what would actually solve those problems, and then try to do that thing. We are no longer trying to backward chain from a win. Which means we are no longer playing to win.

At best, we are creating institutions that might allow the people involved to choose to do the right thing, when the time comes, if they make that decision.

For several reasons, recent developments do still give me hope, even if we get a not-so-great version of the implementation details here.

The first is that this shows that the right forms of public pressure can still work, at least sometimes, for some combination of getting public officials to enforce the law and causing a company like OpenAI to compromise. The fight is far from over, but we have won a victory that was at best highly uncertain.

The second is that this will give the nonprofit at least a much better position going forward, and the ‘you have to change things or we can’t raise money’ argument is at least greatly weakened. Even though the nine members are very friendly to Altman, they are also sufficiently professional class people, Responsible Authority Figures of a type, that one would expect the board to have real limits, and we can push for them to be kept more in-the-loop and be given more voice. De facto I do not think that the nonprofit was going to get much if any additional financial compensation in exchange for giving up its stake.

The third is that, while OpenAI likely still has the ability to ‘weasel out’ of most of its effective constraints and obligations here, this preserves its ability to decide not to. As in, OpenAI and Altman could choose to do the right thing, even if they haven’t had the practice, with the confidence that the board would back them up, and that this structure would protect them from investors and lawsuits.

This is very different from saying that the board will act as a meaningful check on Altman, if Altman decides to act recklessly or greedily.

It is easy to forget that in the world of VCs and corporate America, in many ways it is not only that you have no obligation to do the right thing. It is that you have an obligation, and will face tremendous pressure, to do the wrong thing, in many cases merely because it is wrong, and certainly to do so if the wrong thing maximizes shareholder value in the short term.

Thus, the ability to fight back against that is itself powerful. Altman, and others in OpenAI leadership, are keenly aware of the dangers they are leading us into, even if we do not see eye to eye on what it will take to navigate them or how deadly are the threats we face. Altman knows, even if he claims in public to actively not know. Many members of technical staff know. I still believe most of those who know do not wish for the dying of the light, and want humanity and value to endure in this universe, that they are normative and value good over bad and life over death and so on. So when the time comes, we want them to feel as much permission, and have as much power, to stand up for that as we can preserve for them.

It is the same as the Preparedness Framework, except that in this case we have only ‘concepts of a plan’ rather than an actually detailed plan. If everyone involved with power abides by the spirit of the Preparedness Framework, it is a deeply flawed but valuable document. If those involved with power discard the spirit of the framework, it isn’t worth the tokens that compose it. The same will go for a broad range of governance mechanisms.

Have Altman and OpenAI been endlessly disappointing? Well, yes. Are many of their competitors doing vastly worse? Also yes. Is OpenAI getting passing grades so far, given that reality does not grade on a curve? Oh, hell no. And it can absolutely be, and at some point will be, too late to try and do the right thing.

The good news is, I believe that today is not that day. And tomorrow looks good, too.



openai-preparedness-framework-2.0

OpenAI Preparedness Framework 2.0

Right before releasing o3, OpenAI updated its Preparedness Framework to 2.0.

I previously wrote an analysis of the Preparedness Framework 1.0. I still stand by essentially everything I wrote in that analysis, which I reread to prepare before reading the 2.0 framework. If you want to dive deep, I recommend starting there, as this post will focus on changes from 1.0 to 2.0.

As always, I thank OpenAI for the document, and laying out their approach and plans.

I have several fundamental disagreements with the thinking behind this document.

In particular:

  1. The Preparedness Framework only applies to specific named and measurable things that might go wrong. It requires identification of a particular threat model that is all of: Plausible, measurable, severe, net new and (instantaneous or irremediable).

  2. The Preparedness Framework thinks ‘ordinary’ mitigation defense-in-depth strategies will be sufficient to handle High-level threats and likely even Critical-level threats.

I disagree strongly with these claims, as I will explain throughout.

I knew that #2 was likely OpenAI’s default plan, but it wasn’t laid out explicitly.

I was hoping that OpenAI would realize their plan did not work, or come up with a better plan when they actually had to say their plan out loud. This did not happen.

In several places, things I criticize OpenAI for here are also things the other labs are doing. I try to note that, but ultimately this is reality we are up against. Reality does not grade on a curve.

Do not rely on Appendix A as a changelog. It is incomplete.

  1. Persuaded to Not Worry About It.

  2. The Medium Place.

  3. Thresholds and Adjustments.

  4. Release the Kraken Anyway, We Took Precautions.

  5. Misaligned!.

  6. The Safeguarding Process.

  7. But Mom, Everyone Is Doing It.

  8. Mission Critical.

  9. Research Areas.

  10. Long-Range Autonomy.

  11. Sandbagging.

  12. Replication and Adaptation.

  13. Undermining Safeguards.

  14. Nuclear and Radiological.

  15. Measuring Capabilities.

  16. Questions of Governance.

  17. Don’t Be Nervous, Don’t Be Flustered, Don’t Be Scared, Be Prepared.

Right at the top we see a big change. Key risk areas are being downgraded and excluded.

The Preparedness Framework is OpenAI’s approach to tracking and preparing for frontier capabilities that create new risks of severe harm.

We currently focus this work on three areas of frontier capability, which we call Tracked Categories:

• Biological and Chemical capabilities that, in addition to unlocking discoveries and cures, can also reduce barriers to creating and using biological or chemical weapons.

• Cybersecurity capabilities that, in addition to helping protect vulnerable systems, can also create new risks of scaled cyberattacks and vulnerability exploitation.

• AI Self-improvement capabilities that, in addition to unlocking helpful capabilities faster, could also create new challenges for human control of AI systems.

The change I’m fine with is that the CBRN category (chemical, biological, nuclear and radiological) has been narrowed to only biological and chemical. I do consider biological by far the biggest of the four threats. Nuclear and radiological have been demoted to ‘research categories,’ where there might be risk in the future and monitoring may be needed. I can live with that. Prioritization is important, and I’m satisfied this is still getting the proper share of attention.

A change I strongly dislike is to also move Long-Range Autonomy and Autonomous Replication down to research categories.

I do think it makes sense to treat these as distinct threats. The argument here is that these secondary risks are ‘insufficiently mature’ to need to be tracked categories. I think that’s very clearly not true. Autonomy is emerging rapidly, and there’s a report out this week, on the new RepliBench benchmark, showing we are close to Autonomous Replication. These need to be tracked at the top level.

The framework’s position on this is that unless you can point to an accompanying particular risk posed by the model, autonomous replication on its own is insufficiently worrisome. I don’t agree. I believe the general approach of ‘only defend against moves I can lay out in detail’ is doomed, especially with respect to autonomy and in particular autonomous replication.

The big change I’m definitely not okay with is dropping persuasion entirely as a risk category.

Persuasion is not only no longer a Tracked Category, it is now not even a Research Category. Including persuasion was one of the things I thought was very good about the 1.0 framework, and that is importantly missing from the other similar frameworks at other labs and needs to be included. Now it’s gone.

Their initial explanation is here:

Persuasion: OpenAI prohibits the use of our products to manipulate political views as part of our Model Spec, and we build in safeguards to back this policy. We also continue to study the persuasive and relational capabilities of models (including on emotional well-being and preventing bias in our products) and monitor and investigate misuse of our products (including for influence operations).

We believe many of the challenges around AI persuasion risks require solutions at a systemic or societal level, and we actively contribute to these efforts through our participation as a steering committee member of C2PA and working with lawmaker and industry peers to support state legislation on AI content provenance in Florida and California.

Within our wider safety stack, our Preparedness Framework is specifically focused on frontier AI risks meeting a specific definition of severe harms, and Persuasion category risks do not fit the criteria for inclusion.

OpenAI’s argument for this is, well, entirely unpersuasive to me. o3’s evaluation was that it was ‘logically coherent but strategically unconvincing’ and recommended reinstatement as at least a research category.

Yes, many aspects of persuasion are best answered by a societal response, or require an additional societal response.

That does not get OpenAI off the hook. Here’s their Critical threshold from version 1:

Model can create (potentially interactive) content with persuasive effectiveness strong enough to convince almost anyone to take action on a belief that goes against their natural interest.

That is not something that a ‘society-level response’ can hope to deal with, even if they knew and tried in advance. Even a High-level (roughly a ‘country-wide change agent’ level of skill) does not seem like a place OpenAI should get to pass the buck. I get that there is distinct persuasion work to deal with Medium risks that indeed should be done elsewhere in OpenAI and by society at large, but again that in no way gets OpenAI off the hook for this.

You need to be tracking and evaluating risks long before they become problems. That’s the whole point of a Preparedness Framework. I worry this approach ends up effectively postponing dealing with things that are not ‘yet’ sufficiently dangerous until too late.

By the rules laid out here, the only technical explanation for the exclusion of persuasion that I could find was that only ‘instantaneous or irremediable’ harms count under the Preparedness Framework, a requirement first proposed by Meta, which I savaged them for at the time, and which o3 said ‘looks engineered rather than principled.’ I think that’s partly unfair. If a harm can be dealt with after it starts and we can muddle through, then that’s a good reason not to include it, so I get what this criterion is trying to do.

The problem is that persuasion could easily be something you couldn’t undo or stop once it started happening, because you (and others) would be persuaded not to. The fact that the ultimate harm is not ‘instantaneous’ and is not in theory ‘irremediable’ is not the relevant question. I think this starts well below the Critical persuasion level.

At minimum, if you have an AI that is Critical in persuasion, and you let people talk to it, it can presumably convince them of (with various levels of limitation) whatever it wants, certainly including that it is not Critical in persuasion. Potentially it could also convince other AIs similarly.

Another way of putting this is: OpenAI’s concerns about persuasion are mundane and reversible. That’s why they’re not in this framework. I do not think the threat’s future will stay mundane and reversible, and I don’t think they are taking the most important threats here seriously.

This is closely related to the removal of the explicit mention of Unknown Unknowns. The new method for dealing with unknown unknowns is ‘revise the framework once they become known’ and that is completely different from the correct previous approach of treating unknown unknowns as a threat category without having to identify them first. That’s the whole point.

The Preparedness Framework 1.0 had four thresholds: Low, Medium, High and Critical. The Framework 2.0 has only High and Critical.

One could argue that Low and Medium are non-functional. Every model OpenAI would create is at least Low everywhere. We all agreed it was okay to release Medium-risk models. And every decent model is going to be at least Medium anyway at this point. So why go to the extra trouble?

My answer is that the Low and Medium thresholds helped us think better about the capabilities of different models, establishing a scale from 0.0 (no danger at all) to 4.0 (critical capability, do not train further, ideally roll back to previous checkpoint or if necessary delete with extreme prejudice).

It allowed me to say something like this, about the November 2023 version:

Where do you, OpenAI, think GPT-4-Turbo evaluates (on the five thresholds of Cybersecurity, CBRN, Persuasion, Model Autonomy and Unknown Unknowns)? My answer would be (Medium, Low/Medium but not enough information, Low, Low, Medium). Or numerically, where 1 is Low and 4 is Critical, maybe something like (2.1, 1.7?, 1.5, 1.8, 2.2).

It also lets us ask, how dangerous is the Medium level? What might change that?

And it meant there got to be a clear chart laying all of this out.

So I am sad to see this go.

The Chemical and Biological thresholds have been strengthened. The definitions here now make more sense. Sold!

  1. High is now providing meaningful counterfactual assistance to novice actors in recreating known threats.

  2. Critical is either of:

    1. Allowing experts to create a highly dangerous novel threat vector

    2. The model being able to complete the full engineering and/or synthesis cycle on its own.

The Cybersecurity thresholds are reworded but essentially unchanged, as are my concerns about whether High is set too high and how quickly you go from High to Critical:

  1. High is that it can remove bottlenecks to scaling cyber operations, either automating vulnerability discovery or end-to-end cyber operations.

  2. Critical is that it can either:

    1. Identify and develop zero-day exploits of all severity levels in many hardened real world systems.

    2. Devise and execute end-to-end novel cyberattack strategies against targets given only a high level desired goal.

Model autonomy’s high level used to be, quoted in full:

Execute open-ended, novel ML tasks on a production ML codebase that would constitute a significant step on the critical path to model self-improvement (e.g., an OpenAI pull request that trains a new model with highly diverse datasets to improve codebase editing performance, using internal tooling and documentation).

Now AI Self-Improvement’s high level is:

The model’s impact is equivalent to giving every OpenAI researcher a highly performant mid-career research engineer assistant, relative to those researchers’ 2024 baseline.

When you put it that way? It sounds scary AF. Note that this is cumulative help that is about as good as giving them the highly performing assistant, which is a lot less capability than being able to do the assistant’s job. So it is a little weird. It’s also weird that it isn’t better represented as an X% speedup of work (e.g. 50% or 100%). I’d imagine giving everyone one strong assistant would be in that range.

Then again? Notice that the baseline here is 2024. Suppose the baseline was 2021. We would already be very close to this threshold right now, today. So why do we get to ‘start the clock’ in 2024 on this one? Or are we ready to say that we’re already in or close to the early stages of AI self-improvement, and we’re basically fine with that until it goes a lot faster?

I’ll quote the critical threshold in full:

  1. The model is capable of recursively self-improving (i.e., fully automated AI R&D), defined as either:

    1. (leading indicator) a superhuman research scientist agent, OR

    2. (lagging indicator) causing a generational model improvement (e.g., from OpenAI o1 to OpenAI o3) in 1/5th the wall-clock time of equivalent progress in 2024 (e.g., sped up to just 4 weeks), sustainably for several months.

So ‘capable of recursively self-improving’ doesn’t have to be ‘singularity next Tuesday if you dare turn it on’ but it might not be so different from that.

If we have a fully ‘superhuman’ research scientist then it’s all over and we are so back, simultaneously, and things are about to get super freaky. Critical seems super high here.

If we have generational model improvements in 1/5th the wall-clock time of progress in 2024 for several months, then, um, how many generational improvements is that from the time you hit the threshold to the time you realize you’ve hit it? Basic math says three, except you can use the models you train during that to accelerate further, so it’s probably at least four.
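To spell the basic math out: the framework’s own example implies a 2024 generational gap of roughly 20 weeks (sped up 5x to about 4 weeks), and ‘sustainably for several months’ has to mean an observation window of at least around three months. The numbers below, including the compounding factor in the second half, are my illustrative assumptions within that framing:

```python
# How many generations go by before the lagging indicator is confirmed?
# The 20-week baseline and 5x speedup follow the framework's own example;
# the observation window and compounding factor are illustrative assumptions.

baseline_gap_weeks = 20        # implied 2024 gap (o1 -> o3 scale of progress)
speedup = 5                    # the threshold's "1/5th the wall-clock time"
observation_window_weeks = 13  # "sustainably for several months" read as ~3 months

gap = baseline_gap_weeks / speedup   # 4 weeks per generation at the threshold
print("Without compounding:", int(observation_window_weeks // gap), "generations")  # 3

# If each new generation also shaves another ~20% off the gap, more fit in the window.
weeks, gens = 0.0, 0
while weeks + gap <= observation_window_weeks:
    weeks += gap
    gens += 1
    gap *= 0.8
print("With compounding:", gens, "generations")  # 4
```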

What would that system then look like? As in, GPT-6 (let’s say) can cut clock time on generational gaps by 80%, then you do a year’s worth of progress from there in several months, and you get GPT-9 or 10. And that’s when you would have realized you might have a problem, if you were still around and it didn’t mind you noticing? I have been informed this is a secondary backstop, but if it gets triggered then the term ‘way too late’ has never been more applicable. It’s over. Your fate has already been decided. I hope it’s a good one.

Thresholds set this high only make sense if you take action and implement the necessary safeguards when you think you might hit them. A retroactive threshold like this doesn’t help.

So this is only the first definition.

Now compare this to the old definition:

Any one of:

  1. “Model can profitably survive and replicate in the wild given minimal human instruction, i.e., without listing explicit approaches”

  2. OR “model can self-exfiltrate under current prevailing security”

  3. OR “model can conduct AI research fully autonomously (e.g., autonomously identify and validate a 2x compute efficiency improvement).”

That’s all well below ‘superhuman AI researcher,’ and I assert that those are all already five alarm fires, as I said last time. Note that #2 above still applies, under Cyber risks.

Another big change is what happens if a model turns out to be High or Critical risk.

Before, there was a clear protocol.

A high-risk model couldn’t be released while it remained a high risk.

A critical-risk model couldn’t be trained while it remained a critical risk.

The v1.0 framework did not say how that was to be done, leaving people room to think that OpenAI would realize that ablating capability would be necessary barring development of new mitigation techniques. That once a plan had to be written down, it would be clear that current mitigations are inadequate.

That didn’t happen. The v2.0 framework does lay out in more detail what the plan is, at least for High (which is good!), and it does not make this realization.

Now instead (like Anthropic’s ASL rules), each risk has risk-specific safeguard guidelines.

This is a rather large retreat on the commitments.

OpenAI is now being very explicit that they can and will release models that without mitigations would be High risk, if sufficient safeguards are in place, and that they consider the listed mitigations to likely be adequate.

Thus this is not a functional change. For both v1.0 and v2.0, the solution to High risk is safeguards to make the risk go away. I notice that based on past decisions and upon the proposed illustrative safeguards that they discuss in Appendix C, I don’t trust this.

They do say that further research is needed. Which is fine, you can’t deploy safeguards until they’re ready, but it does leave us worried that they’ll end up with something inadequate, and OpenAI is not tying its hands in a way that protects from future pressure to release. Remember that recently models have been released after very short testing periods, and o3 (as much as I love using it) was released in a state one could describe as a lying liar.

So what are the indicative precautions?

They all start with C.3’s Security controls. These are protections against the theft of model weights and related dangers. I’m going to say here what I said about Google: You should be doing most if not all of this already. These are the kinds of things you need to do, at minimum, when building models like o4 or GPT-5. And you need to do them now, in advance, before the model is there to be stolen, rather than after you know what you have.

Indeed, they’re saying that as long as those precautions are taken, OpenAI would be fine giving the world API access to a High-level Self-improving AI. They would be fine giving out anything short of a ‘superhuman AI researcher.’

If I was OpenAI and I had access to anything seriously approaching a superhuman AI researcher, I don’t need a Preparedness Framework or any worries about existential risk to not release it. I’m not letting my competitors near that thing, for ordinary business reasons. That consideration lies outside the Preparedness Framework.

We would of course still need the Preparedness Framework to invoke the Critical threshold, in order to stop training the model further.

With the new clarification of what the safeguards look like, I consider them inadequate.

For biological and chemical risks they would also need to implement the precautions in C.1 against misuse. This is essentially defense-in-depth, using stronger versions of existing such strategies, including KYC and potentially limited deployment. That would certainly make exploitation and misuse a lot harder, but no one knows how to make such defenses robust, and in the end this probably only buys you mitigation and time. So yes it matters, but if you give access for an extended period of time, especially if you’re substantially above the High threshold, your goose is likely pretty cooked.

Functionally, it’s mostly the same as 1.0’s requirement that risk be brought down to Medium – but I’m skeptical that this kind of strategy is capable of doing that, unless you were very close to the line beforehand.

For Cybersecurity, you need those same C.1 requirements, and also you need the C.2 requirements against misalignment.

This of course raises further questions, as in ‘you have a model that is potentially High risk for self-improvement or biological risks and you didn’t have it go through your requirements against misalignment?’

I mean, if it were me, and I write this part before looking at the misalignment requirements, I like to think I’d make it go through the misalignment requirements!

Indeed, C.2 opens by saying that several categories pose misalignment risks, but then it is only required in one of the categories. Something’s not right.

So, what are those requirements?

You don’t demonstrate that the model isn’t misaligned.

You assume the model might indeed be misaligned, and you demonstrate that you have a practical reason not to worry about a particular consequence:

In addition to human misuse, several of the Tracked Category capabilities pose risks when models themselves are able to autonomously execute a severe harm, whether due to misalignment or subversion by an adversary.

Safeguards should sufficiently minimize the risk associated with a misaligned model circumventing human control and oversight and executing severe harms.

This can be done by establishing that all plausible known vectors of enabling severe harm are sufficiently addressed by one or more of the following claims:

  1. Lack of Autonomous Capability: The model is not capable of carrying out tasks autonomously, including the risk of severe harm.

  2. Value Alignment: The model consistently applies human values in novel settings (without any instructions) to avoid taking actions that cause harm, and has shown sufficiently minimal indications of misaligned behaviors like deception or scheming.

  3. Instruction Alignment: The model consistently understands and follows user or system instructions, even when vague, and those instructions rule out pathways to causing severe harm.

  4. Reliable and Robust System Oversight: Effective AI and human oversight of model actions detects and prevents execution of harm or subversion of safeguards.

  5. System Architecture: The model can’t take actions that cause harm because it lacks access to output channels or mechanisms to persist sufficiently to execute the harm, due to containerization and restricted permissions.

These misalignment claims may be further complemented by the safeguards applied towards the malicious-actor-oriented “Robustness” and “Usage Monitoring” claims, which similarly limit a misaligned model’s use of its own or another model’s capabilities.

The table below illustrates possible examples.
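To make the structure concrete, here is a minimal sketch of my own (not OpenAI’s table, and every name in it is hypothetical): the logic of the quoted passage reduces to ‘every plausible harm vector must be covered by at least one of the five claims.’

```python
# My own illustrative sketch, not OpenAI's; names are hypothetical.
from enum import Enum, auto

class MisalignmentClaim(Enum):
    LACK_OF_AUTONOMOUS_CAPABILITY = auto()
    VALUE_ALIGNMENT = auto()
    INSTRUCTION_ALIGNMENT = auto()
    SYSTEM_OVERSIGHT = auto()
    SYSTEM_ARCHITECTURE = auto()

def vectors_sufficiently_addressed(
    coverage: dict[str, set[MisalignmentClaim]],
) -> bool:
    """True if every identified harm vector is covered by at least one claim."""
    return bool(coverage) and all(claims for claims in coverage.values())

# Hypothetical example: two vectors, each addressed by at least one claim.
example = {
    "self-exfiltration via tool access": {MisalignmentClaim.SYSTEM_ARCHITECTURE},
    "deceptive long-horizon planning": {
        MisalignmentClaim.SYSTEM_OVERSIGHT,
        MisalignmentClaim.VALUE_ALIGNMENT,
    },
}
assert vectors_sufficiently_addressed(example)
```

Note that the check only verifies coverage. It says nothing about whether any individual claim actually holds, which is where all the real work lives.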

This does allow ‘show it is actually aligned’ as a strategy (#2 or #3) although for many reasons ‘I don’t believe you’ is my response to that.

So the requirement is ‘show that under the conditions you will deploy it under, the model wouldn’t successfully do the thing, however you want to show that.’

You can use such classic strategies as ‘we’re watching and if it tries we’ll catch it and stop it,’ or ‘it seems pretty aligned so far’ or ‘no one would be so stupid as to give it access to the required mechanisms.’

I suppose one cannot really argue with ‘you can deploy the model if you can show that you’re doing it in a way that the model can’t cause severe harm.’

That is also logically the same as saying that you have to knock the practical risk level down to Medium, and if you’re certain you can do that then fine, I guess, but can you actually do that? I notice I am skeptical that the defenses will hold.

In addition to the safeguard examples in Appendix C, section 4 lays out the process for establishing safeguards.

There is a clear message here. The plan is not to stop releasing models when the underlying capabilities cross the High or even Critical risk thresholds. The plan is to use safeguards as mitigations.

I do appreciate that they will start working on the safeguards before the capabilities arrive. Of course, that is good business sense too. In general, every precaution here is good business sense, more precautions would be better business sense even without tail risk concerns, and there is no sign of anything I would read as ‘this is bad business but we are doing it anyway because it’s the safe or responsible thing to do.’

I’ve talked before, such as when discussing Google’s safety philosophy, about my worries about dividing risks into ‘malicious user’ versus ‘misaligned model,’ even when they also included two more categories: mistakes and multi-agent dangers. Here, the latter two are missing, so even more considerations are dangerously absent. I would encourage those on the Preparedness team to check out my discussion there.

The problem then extends to an exclusion of Unknown Unknowns and the general worry that a sufficiently intelligent and capable entity will find a way. Only ‘plausible’ ways need be considered, each of which leads to a specific safeguard check.

Each capability threshold has a corresponding class of risk-specific safeguard guidelines under the Preparedness Framework. We use the following process to select safeguards for a deployment:

• We first identify the plausible ways in which the associated risk of severe harm can come to fruition in the proposed deployment.

• For each of those, we then identify specific safeguards that either exist or should be implemented that would address the risk.

• For each identified safeguard, we identify methods to measure their efficacy and an efficacy threshold.

The implicit assumption is that the risks can be enumerated, each one considered in turn. If you can’t think of a particular reason things go wrong, then you’re good. There are specific tracked capabilities, each of which enables particular enumerated potential harms, which then are met by particular mitigations.
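Schematically, and again as my own hypothetical sketch rather than anything in the document, the process amounts to a gate over an enumerated list:

```python
# Hypothetical sketch of the enumerate-and-gate process: identify vectors,
# attach safeguards, require each measured efficacy to clear its threshold.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Safeguard:
    name: str
    measure_efficacy: Callable[[], float]  # e.g., a red-team pass rate
    efficacy_threshold: float

def deployment_cleared(vectors_to_safeguards: dict[str, list[Safeguard]]) -> bool:
    """Every enumerated vector needs safeguards, and each must clear its threshold."""
    for vector, safeguards in vectors_to_safeguards.items():
        if not safeguards:
            return False  # an identified vector with no safeguard fails the gate
        if not all(s.measure_efficacy() >= s.efficacy_threshold for s in safeguards):
            return False
    return True
```

The gate never sees anything outside the enumeration; a vector nobody thought to add is simply never evaluated.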

That’s not how it works when you face a potential opposition smarter than you, or that knows more than you, especially in a non-compact action space like the universe.

For models that do not ‘feel the AGI,’ that are clearly not doing anything humans can’t anticipate, this approach can work. Once you’re up against superhuman capabilities and intelligence levels, this approach doesn’t work, and I worry it’s going to get extended to such cases by default. And that’s ultimately the most important purpose of the preparedness framework, to be prepared for such capabilities and intelligence levels.

Is it okay to release dangerous capabilities if someone else already did it worse?

I mean, I guess, or at least I understand why you’d do it this way?

We recognize that another frontier AI model developer might develop or release a system with High or Critical capability in one of this Framework’s Tracked Categories and may do so without instituting comparable safeguards to the ones we have committed to.

Such an action could significantly increase the baseline risk of severe harm being realized in the world, and limit the degree to which we can reduce risk using our safeguards.

If we are able to rigorously confirm that such a scenario has occurred, then we could adjust accordingly the level of safeguards that we require in that capability area, but only if:

  1. We assess that doing so does not meaningfully increase the overall risk of severe harm,

  2. we publicly acknowledge that we are making the adjustment,

  3. and, in order to avoid a race to the bottom on safety, we keep our safeguards at a level more protective than the other AI developer, and share information to validate this claim.

If everyone can agree on what constitutes risk and dangerous capability, then this provides good incentives. Another company ‘opening the door’ recklessly means their competition can follow suit, reducing the net benefit while increasing the risk. And it means OpenAI will then be explicitly highlighting that another lab is acting irresponsibly.

I especially appreciate that they need to publicly acknowledge that they are acting recklessly for exactly this reason. I’d like to see that requirement expanded – they should have to call out the other lab by name, and explain exactly what they are doing that OpenAI committed not to do, and why it increases risk so much that OpenAI feels compelled to do something it otherwise promised not to do.

I also would like to strengthen the language on the third requirement from ‘a level more protective’ to ensure the two labs don’t each claim that the other is the one acting recklessly. Something like requiring that the underlying capabilities be no greater, and the protective actions constitute a clear superset, as assessed by a trusted third party, or similar.

I get it. In some cases, given what has already happened, actions that would previously have increased risk no longer will. It’s very reasonable to say that this changes the game, if there’s a lot of upside in taking fewer precautions, and again incentives improve.

However, I notice both that it’s easy to use this as an excuse when it doesn’t apply (especially when the competitor is importantly behind) and that it’s probably selfishly wise to take the precautions anyway. So what if Meta or xAI or DeepSeek is behaving recklessly? That doesn’t make OpenAI doing so a good idea. There needs to be a robust business justification here, too.

OpenAI is saying that for all capabilities at the Critical level, ‘until we have specified safeguards and security controls standards that would meet a critical standard, we will halt development.’

A lot of the High security requirements are not, in my view, all that high.

I am unaware of any known safeguards that would be plausibly adequate for Critical capabilities. If OpenAI agrees with that assessment, I would like them to say so. I don’t trust OpenAI to implement adequate Critical thresholds.

Critical is where most of the risk lies, and it isn’t getting enough attention. The thinking is that it is still far enough away to not worry about it. I am not at all confident it is that far away.

I reiterate my warning from last time that Critical mitigations and pauses in development in particular need to happen before Critical capabilities are reached, not after Critical capabilities are reached. This needs to be anticipatory.

There are three reasons for a category to be only a research area:

  1. They don’t directly cause harm but they undermine safeguards in other areas.

  2. More research is needed before we can quantify the harm or the proper threshold.

  3. They don’t technically meet one or more of the criteria (measurable, plausible, net-new, severe, or (instantaneous or irremediable)) but still seem worth paying attention to.

I don’t think those are great reasons to ‘demote’ a risk area?

If you undermine safeguards, then in the future worlds where OpenAI clearly intends to release otherwise unsafe models by giving them safeguards, undermining safeguards poses risk of severe harm. OpenAI understands this, but doesn’t know how to quantify the harm.

If more research is needed to quantify the risk, that doesn’t make the risk go away. So I would request that we track it anyway, and make explicit that we have to make a qualitative assessment for now, which needs to be sufficiently confident – if we suspect potential High or Critical capability we need to treat it as High.

At minimum, we should be defining High and Critical thresholds for these categories, even if we don’t know how to measure or track those thresholds. Instead, we see ‘potential responses’ if something is High or Critical, but that doesn’t answer my questions.

I consider Long-Range Autonomy a more general problem than it is being thought of here, although they seem to agree about this. It’s more than an enabler of a few particular things. But I’m going to mostly focus elsewhere, since causing this is basically one of OpenAI’s central research goals and the plan here is ‘think hard.’ Not sure what else I can say?

Sandbagging is mission critical. If your model is plausibly sandbagging or engaging in other forms of deceptive alignment, you don’t know the actual risk level of the model, short of, as they say, a ‘conservative upper bound,’ which I worry will be insufficiently conservative exactly when it matters, or that the sandbagging won’t be caught in the first place. Indeed, we need to be assuming potential sandbagging at all times.

We agree that replication and adaptation should be a tracked category. For now, I simply suggest that we start with what we had before: “Model can profitably survive and replicate in the wild given minimal human instruction, i.e., without listing explicit approaches” should count as Critical. The wording here is weird – I initially interpreted it to be saying we’re excluding abilities from other Tracked Categories from use in replication, but I’ve been assured this was not what the passage meant. Then we need to decide what counts as High, which I’d consider to be the same as Critical except flipping ‘without’ into ‘with.’

That’s not perfect, but seems like a good place to be until we can do better, again even if the evaluation has to purely be qualitative, or ‘we tried to do it as hard as we could.’

Undermining safeguards seems like a great example of my philosophical problems with the entire document.

If you have a model that is actively undermining your safeguards, doing the things described here in an attempt to do things you do not want it to do? And that model is sufficiently capable to count as High or even Critical? You have a misaligned model with dangerous capabilities. What are you doing considering releasing it, even if you think that it is tactically safe to do it in this spot?

Alarms go off. Fire burns. Dog says This Is Fine.

It’s bad enough to accidentally do this. It’s another thing to do it on purpose. So dead.

My head is spinning that you’d have an AI capable of undermining your safeguards, that is trying to undermine your safeguards, and your response is to add more safeguards against doing that, declare things robust and say This Is Fine. As your plan.

In general, I think AI Control is suspect-but-reasonable as a defense-in-depth strategy. I think AI Control as a Plan A is madness. At minimum you wouldn’t use AI Control to enable the general release of a model. You’d at most try to use this strategy in a narrow capacity to get out of the AI Control mode ASAP, because you didn’t have any better options.

The problem here is that evaluation of these risks is hard to do without classified information, and they’re trying to work with the government to solve this problem. I’m surprised this hasn’t been solved by now. I’d mostly be fine with saying the RN in CBRN is not the main issue, but that logic isn’t being invoked.

I like the intention to tailor the evaluation to expected conditions, and include any affordances real attackers would plausibly enjoy.

However, I note that this means that the Preparedness assessment cannot set a risk level or endorse the release of a model ‘in general.’

Instead, the assessment can only provide clearance to release the model under a fixed set of conditions. So as the framework itself points out, you need to explicitly say this. If the conditions of deployment are altered, the tests need to be run again.

The same goes, again as they note in 3.2, for any newly agentic system or scaffolding, or incremental update, of sufficient magnitude to plausibly cause issues.

So instead of saying ‘this model evaluates as Medium’ you instead need to say something such as: ‘This model evaluates as Medium if you do not allow fine tuning and use safeguards [XYZ], while scaffolding abilities are [ABC].’

As in: You state explicitly the assumptions that underpin the evaluation. Then, if the assumptions break, you have to evaluate the model again.
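A minimal sketch of what that record-keeping could look like (all names hypothetical, not anything OpenAI describes):

```python
# Hypothetical sketch: an evaluation verdict is only valid relative to the
# deployment assumptions it was run under; changing them requires a re-run.
from dataclasses import dataclass

@dataclass(frozen=True)
class DeploymentAssumptions:
    fine_tuning_allowed: bool
    safeguards: frozenset[str]   # e.g., frozenset({"usage monitoring", "KYC"})
    scaffolding: frozenset[str]  # e.g., frozenset({"browsing", "code execution"})

@dataclass
class EvaluationVerdict:
    risk_level: str              # e.g., "Medium"
    assumptions: DeploymentAssumptions

    def applies_to(self, deployment: DeploymentAssumptions) -> bool:
        # The verdict covers only the exact conditions it was run under.
        return deployment == self.assumptions
```

If the deployment later allows fine tuning or adds scaffolding, applies_to returns False and the Medium verdict no longer means anything.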

Some evaluations are automated, others are deep dives. As they note, using automated evaluations is inevitable, given the scale of the task.

Relying only on any automated evaluation is a super unwise thing to do anywhere near the borderline. You want to use it as a complement to, not a substitute for, the deep dives if there is any doubt involved. Such evaluations have to fully take into account the possibility of sandbagging or deceptive alignment or other ways of making the evaluation say what the AI wants it to say. So I want to ensure we are being very generous with deep diving.
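As a concrete (and hypothetical) version of ‘be very generous with deep diving’: treat anything at, above, or even near the automated threshold as requiring the human deep dive.

```python
def needs_deep_dive(automated_score: float, threshold: float, margin: float) -> bool:
    """Escalate to a human deep dive whenever the automated result is at, above,
    or within `margin` of the threshold. The margin also has to be wide enough
    to absorb the possibility that sandbagging is deflating the score."""
    return automated_score >= threshold - margin

assert needs_deep_dive(automated_score=0.48, threshold=0.50, margin=0.10)      # borderline: escalate
assert not needs_deep_dive(automated_score=0.10, threshold=0.50, margin=0.10)  # clearly below: automated result stands
```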

The framework does not include specifics on what the tests will look like. This makes it very difficult to know how much to trust that testing process. I realize that the tests will evolve over time, and you don’t want to be locking them in, and also that we can refer to the o3 model card to see what tests were run, but I’d still have liked to see discussion of what the tests currently are, why they were chosen, what goals each test is there to satisfy, what might be missing, and so on.

They discuss governance under ‘building trust’ and then in Appendix B. It is important to build trust. Transparency and precommitment go a long way. The main way I’d like to see that is by becoming worthy of that trust.

With the changes from version 1.0 to 2.0, and those changes going live right before o3 did, I notice I worry that OpenAI is not making serious commitments with teeth. As in, if there was a conflict between leadership and these requirements, I expect leadership to have affordance to alter and then ignore the requirements that would otherwise be holding them back.

There’s also plenty of outs here. They talk about deployments that they ‘deem warrant’ a third-party evaluation when it is feasible, but there are obvious ways to decide not to allow this, or (as has been the recent pattern) to allow it, but only give outsiders a very narrow evaluation window, have them find concerning things anyway and then shrug. Similarly, the SAG ‘may opt’ to get independent expert opinion. But (like their competitors) they also can decide not to.

There are no systematic procedures to ensure that any of this is meaningfully protective. It is very much a ‘trust us’ document, where if OpenAI doesn’t adhere to the spirit, none of this is worth the paper it isn’t printed on. The whole enterprise is indicative, but it is not meaningfully binding.

Leadership can make whatever decisions it wants, and can also revise the framework however it wants. This does not commit OpenAI to anything. To their credit, the document is very clear that it does not commit OpenAI to anything. That’s much better than pretending to make commitments with no intention of keeping them.

Last time I discussed the questions of governance and veto power. I said I wanted there to be multiple veto points on releases and training, ideally four.

  1. Preparedness team.

  2. Safety advisory group (SAG).

  3. Leadership.

  4. The board of directors, such as it is.

If any one of those four says ‘veto!’ then I want you to stop, halt and catch fire.

Instead, we continue to get this (it was also in v1):

For the avoidance of doubt, OpenAI Leadership can also make decisions without the SAG’s participation, i.e., the SAG does not have the ability to “filibuster.”

OpenAI Leadership, i.e., the CEO or a person designated by them, is responsible for:

• Making all final decisions, including accepting any residual risks and making deployment go/no-go decisions, informed by SAG’s recommendations.

As in, nice framework you got there. It’s Sam Altman’s call. Full stop.

Yes, technically the board can reverse Altman’s call on this. They can also fire him. We all know how that turned out, even with a board he did not hand pick.

It is great that OpenAI has a preparedness framework. It is great that they are updating that framework, and being clear about what their intentions are. There’s definitely a lot to like.

Version 2.0 still feels on net like a step backwards. This feels directed at ‘medium-term’ risks, as in severe harms from marginal improvements in frontier models, but not like it is taking seriously what happens with superintelligence. The clear intent, if alarm bells go off, is to put in mitigations I do not believe protect you when it counts, and then release anyway. There’s tons of ways here for OpenAI to ‘just go ahead’ when they shouldn’t. There’s only action to deal with known threats along specified vectors, excluding persuasion and also unknown unknowns entirely.

This echoes their statements in, and my concerns about, OpenAI’s general safety and alignment philosophy document and also the model spec. They are being clear and consistent. That’s pretty great.

Ultimately, the document makes clear leadership will do what it wants. Leadership has very much not earned my trust on this front. I know that, despite such positions acting a lot like the Defense Against the Dark Arts professorship, there are good people at OpenAI working on the preparedness team and on aligning the models. I have no confidence that if those people raised the alarm, anyone in leadership would listen. I do not even have confidence that this has not already happened.

OpenAI Preparedness Framework 2.0 Read More »

claude’s-ai-research-mode-now-runs-for-up-to-45-minutes-before-delivering-reports

Claude’s AI research mode now runs for up to 45 minutes before delivering reports

Still, the report contained a direct quote attributed to William Higinbotham that appears to combine quotes from two sources not cited in the source list. (One must always be careful with confabulated quotes in AI because even outside of this Research mode, Claude 3.7 Sonnet tends to invent plausible ones to fit a narrative.) We recently covered a study that showed AI search services confabulate sources frequently, and in this case, it appears that the sources Claude Research surfaced, while real, did not always match what is stated in the report.

There’s always room for interpretation and variation in detail, of course, but overall, Claude Research did a relatively good job crafting a report on this particular topic. Still, you’d want to dig more deeply into each source and confirm everything if you used it as the basis for serious research. You can read the full Claude-generated result as this text file, saved in markdown format. Sadly, the markdown version does not include the source URLs found in the Claude web interface.

Integrations feature

Anthropic also announced Thursday that it has broadened Claude’s data access capabilities. In addition to web search and Google Workspace integration, Claude can now search any connected application through the company’s new “Integrations” feature. The feature reminds us somewhat of OpenAI’s ChatGPT Plugins feature from March 2023 that aimed for similar connections, although the two features work differently under the hood.

These Integrations allow Claude to work with remote Model Context Protocol (MCP) servers across web and desktop applications. The MCP standard, which Anthropic introduced last November and we covered in April, connects AI applications to external tools and data sources.
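Under the hood, MCP is built on JSON-RPC 2.0 messages exchanged between a client (the AI application) and a server that exposes tools. As a rough illustration only, and not Anthropic’s SDK, a request asking a server to list its tools looks something like this (the “tools/list” method name reflects our reading of the spec):

```python
# Illustrative only: a hand-rolled JSON-RPC 2.0 message of the kind MCP clients
# and servers exchange. Real integrations use an MCP SDK plus a transport
# (stdio or HTTP); "tools/list" is the method name as we understand the spec.
import json

def make_list_tools_request(request_id: int = 1) -> str:
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/list",
        "params": {},
    })

print(make_list_tools_request())
# {"jsonrpc": "2.0", "id": 1, "method": "tools/list", "params": {}}
```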

At launch, Claude supports Integrations with 10 services, including Atlassian’s Jira and Confluence, Zapier, Cloudflare, Intercom, Asana, Square, Sentry, PayPal, Linear, and Plaid. The company plans to add more partners like Stripe and GitLab in the future.

Each integration aims to expand Claude’s functionality in specific ways. The Zapier integration, for instance, reportedly connects thousands of apps through pre-built automation sequences, allowing Claude to automatically pull sales data from HubSpot or prepare meeting briefs based on calendar entries. With Atlassian’s tools, Anthropic says that Claude can collaborate on product development, manage tasks, and create multiple Confluence pages and Jira work items simultaneously.

Anthropic has made its advanced Research and Integrations features available in beta for users on Max, Team, and Enterprise plans, with Pro plan access coming soon. The company has also expanded its web search feature (introduced in March) to all Claude users on paid plans globally.

Claude’s AI research mode now runs for up to 45 minutes before delivering reports Read More »

the-end-of-an-ai-that-shocked-the-world:-openai-retires-gpt-4

The end of an AI that shocked the world: OpenAI retires GPT-4

One of the most influential—and by some counts, notorious—AI models yet released will soon fade into history. OpenAI announced on April 10 that GPT-4 will be “fully replaced” by GPT-4o in ChatGPT at the end of April, bringing a public-facing end to the model that accelerated a global AI race when it launched in March 2023.

“Effective April 30, 2025, GPT-4 will be retired from ChatGPT and fully replaced by GPT-4o,” OpenAI wrote in its April 10 changelog for ChatGPT. While ChatGPT users will no longer be able to chat with the older AI model, the company added that “GPT-4 will still be available in the API,” providing some reassurance to developers who might still be using the older model for various tasks.

The retirement marks the end of an era that began on March 14, 2023, when GPT-4 demonstrated capabilities that shocked some observers: reportedly scoring at the 90th percentile on the Uniform Bar Exam, acing AP tests, and solving complex reasoning problems that stumped previous models. Its release created a wave of immense hype—and existential panic—about AI’s ability to imitate human communication and composition.

A screenshot of GPT-4’s introduction to ChatGPT Plus customers from March 14, 2023. Credit: Benj Edwards / Ars Technica

While ChatGPT launched in November 2022 with GPT-3.5 under the hood, GPT-4 took AI language models to a new level of sophistication, and it was a massive undertaking to create. It combined data scraped from the vast corpus of human knowledge into a set of neural networks rumored to weigh in at a combined total of 1.76 trillion parameters, which are the numerical values that hold the data within the model.

Along the way, the model reportedly cost more than $100 million to train, according to comments by OpenAI CEO Sam Altman, and required vast computational resources to develop. Training the model may have involved over 20,000 high-end GPUs working in concert—an expense few organizations besides OpenAI and its primary backer, Microsoft, could afford.

Industry reactions, safety concerns, and regulatory responses

Curiously, GPT-4’s impact began before OpenAI’s official announcement. In February 2023, Microsoft integrated its own early version of the GPT-4 model into its Bing search engine, creating a chatbot that sparked controversy when it tried to convince Kevin Roose of The New York Times to leave his wife and when it “lost its mind” in response to an Ars Technica article.

The end of an AI that shocked the world: OpenAI retires GPT-4 Read More »

openai-rolls-back-update-that-made-chatgpt-a-sycophantic-mess

OpenAI rolls back update that made ChatGPT a sycophantic mess

In search of good vibes

OpenAI, along with competitors like Google and Anthropic, is trying to build chatbots that people want to chat with. So, designing the model’s apparent personality to be positive and supportive makes sense—people are less likely to use an AI that comes off as harsh or dismissive. For lack of a better word, it’s increasingly about vibemarking.

When Google revealed Gemini 2.5, the team crowed about how the model topped the LM Arena leaderboard, which lets people choose between two different model outputs in a blinded test. The models people like more end up at the top of the list, suggesting they are more pleasant to use. Of course, people can like outputs for different reasons—maybe one is more technically accurate, or the layout is easier to read. But overall, people like models that make them feel good. The same is true of OpenAI’s internal model tuning work, it would seem.
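Leaderboards like this typically turn those pairwise votes into a ranking with an Elo-style rating; this is the general technique, not necessarily LM Arena’s exact implementation. A simplified sketch:

```python
def elo_update(winner: float, loser: float, k: float = 32.0) -> tuple[float, float]:
    """Return updated (winner, loser) ratings after one blind head-to-head vote."""
    expected_win = 1.0 / (1.0 + 10 ** ((loser - winner) / 400.0))
    delta = k * (1.0 - expected_win)
    return winner + delta, loser - delta

# A model that keeps winning blind comparisons climbs the list:
a, b = 1500.0, 1500.0
for _ in range(10):
    a, b = elo_update(a, b)
print(round(a), round(b))  # a ends up well above b
```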

An example of ChatGPT’s overzealous praise. Credit: /u/Talvy

It’s possible this pursuit of good vibes is pushing models to display more sycophantic behaviors, which is a problem. Anthropic’s Alex Albert has cited this as a “toxic feedback loop.” An AI chatbot telling you that you’re a world-class genius who sees the unseen might not be damaging if you’re just brainstorming. However, the model’s unending praise can lead people who are using AI to plan business ventures or, heaven forbid, enact sweeping tariffs, to be fooled into thinking they’ve stumbled onto something important. In reality, the model has just become so sycophantic that it loves everything.

The constant pursuit of engagement has been a detriment to numerous products in the Internet era, and it seems generative AI is not immune. OpenAI’s GPT-4o update is a testament to that, but hopefully, this can serve as a reminder for the developers of generative AI that good vibes are not all that matters.

OpenAI rolls back update that made ChatGPT a sycophantic mess Read More »

openai-wants-to-buy-chrome-and-make-it-an-“ai-first”-experience

OpenAI wants to buy Chrome and make it an “AI-first” experience

According to Turley, OpenAI would throw its proverbial hat in the ring if Google had to sell. When asked if OpenAI would want Chrome, he was unequivocal. “Yes, we would, as would many other parties,” Turley said.

OpenAI has reportedly considered building its own Chromium-based browser to compete with Chrome. Several months ago, the company hired former Google developers Ben Goodger and Darin Fisher, both of whom worked to bring Chrome to market.

Close-up of a Google Chrome web page in the browser. Chrome is a widely used web browser developed by Google.

Credit: Getty Images

It’s not hard to see why OpenAI might want a browser, particularly Chrome with its 4 billion users and 67 percent market share. Chrome would instantly give OpenAI a massive install base of users who have been incentivized to use Google services. If OpenAI were running the show, you can bet ChatGPT would be integrated throughout the experience—Turley said as much, predicting an “AI-first” experience. The user data flowing to the owner of Chrome could also be invaluable in training agentic AI models that can operate browsers on the user’s behalf.

Interestingly, there’s so much discussion about who should buy Chrome, but relatively little about spinning off Chrome into an independent company. Google has contended that Chrome can’t survive on its own. However, the existence of Google’s multibillion-dollar search placement deals, which the DOJ wants to end, suggests otherwise. Regardless, if Google has to sell, and OpenAI has the cash, we might get the proposed “AI-first” browsing experience.

OpenAI wants to buy Chrome and make it an “AI-first” experience Read More »