Author name: Mike M.

CDC can no longer help prevent lead poisoning in children, state officials say

Amid the brutal cuts across the federal government under the Trump administration, perhaps one of the most gutting is the loss of experts at the Centers for Disease Control and Prevention who respond to lead poisoning in children.

On April 1, the staff of the CDC’s Childhood Lead Poisoning Prevention Program was terminated as part of the agency’s reduction in force, according to NPR. The staff included epidemiologists, statisticians, and advisors who specialized in lead exposures and responses.

The cuts were immediately consequential to health officials in Milwaukee, who are currently dealing with a lead exposure crisis in public schools. Six schools have had to close, displacing 1,800 students. In April, the city requested help from the CDC’s lead experts, but the request was denied—there was no one left to help.

In a Congressional hearing this week, US health secretary and anti-vaccine advocate Robert F. Kennedy Jr. told lawmakers, “We have a team in Milwaukee.”

But Milwaukee Health Commissioner Mike Totoraitis told NPR that this is false. “There is no team in Milwaukee,” he said. “We had a single [federal] staff person come to Milwaukee for a brief period to help validate a machine, but that was separate from the formal request that we had for a small team to actually come to Milwaukee for our Milwaukee Public Schools investigation and ongoing support there.”

Kennedy has also previously told lawmakers that lead experts at the CDC who were terminated would be rehired. But that statement was also false. The health department’s own communications team told ABC that the lead experts would not be reinstated.

AI #117: OpenAI Buys Device Maker IO

What a week, huh? America signed a truly gigantic chip sales agreement with UAE and KSA that could be anything from reasonable to civilizational suicide depending on security arrangements and implementation details, Google announced all the things, OpenAI dropped Codex and also bought Jony Ive’s device company for $6.5 billion, Vance talked about reading AI 2027 (surprise, in a good way!) and all that other stuff.

Lemon, it’s Thursday, you’ve got movie tickets for Mission Impossible: Final Reckoning (19th and Broadway AMC, 3pm), an evening concert tonight from Light Sweet Crude and there’s a livestream from Anthropic coming up at 12:30pm eastern, the non-AI links are piling up and LessOnline is coming in a few weeks. Can’t go backwards and there’s no time to spin anything else out of the weekly. Got to go forward to go back. Better press on.

So for the moment, here we go.

Earlier this week: Google I/O Day was the ultimate ‘huh, upgrades’ section. OpenAI brought us their Codex of Ultimate Vibing (and then Google offered their version called Jules). xAI had some strong opinions strongly shared in Regarding South Africa. And America Made a very important AI Chip Diffusion Deal with UAE and KSA, where the details we don’t yet know could make it anything from civilizational suicide to a defensible agreement, once you push back the terrible arguments made in its defense.

  1. Language Models Offer Mundane Utility. So, spend more on health care, then?

  2. Language Models Don’t Offer Mundane Utility. Not when you fabricate the data.

  3. Huh, Upgrades. We already covered Google, so: Minor Claude tweaks, xAI’s API.

  4. Codex of Ultimate Vibing. A few more takes, noting the practical barriers.

  5. On Your Marks. AlphaEvolve is probably a big long term deal.

  6. Choose Your Fighter. A handy guide to the OpenAI model that’s right for you.

  7. Deepfaketown and Botpocalypse Soon. Know it when you see it.

  8. Copyright Confrontation. A bunch of absolute losers.

  9. Regarding South Africa. Zeynep Tufekci gives it the NYT treatment.

  10. Cheaters Gonna Cheat Cheat Cheat Cheat Cheat. Cheat or be cheated.

  11. They Took Our Jobs. Small reductions in fixed time costs can bear big dividends.

  12. The Art of the Jailbreak. System prompt for Gemini Diffusion.

  13. Get Involved. Anthropic social, AI grantmaking and grants, whistleblowing.

  14. In Other AI News. Bunker subscriptions are on the rise.

  15. Much Ado About Malaysia. The supposedly big AI deal that wasn’t.

  16. Show Me the Money. LMArena sells out, OpenAI buys IO from Jony Ive.

  17. Quiet Speculations. More straight lines on graphs.

  18. Autonomous Dancing Robots. Everybody do the household chores.

  19. The Quest for Sane Regulations. It’s not looking good.

  20. The Mask Comes Off. OpenAI is still trying to mostly sideline the nonprofit.

  21. The Week in Audio. Bengio, Nadella, Hassabis, Roose, and Whitmer on OpenAI.

  22. Write That Essay. Someone might read it. Such as VPOTUS JD Vance.

  23. Vance on AI. Remarkably good thoughts! He’s actually thinking about it for real.

  24. Rhetorical Innovation. Where could that data center possibly be?

  25. Margaritaville. You know it would be your fault.

  26. Rhetorical Lack of Innovation. Cade Metz is still at it.

  27. If Anyone Builds It, Everyone Dies. No, seriously.

  28. Aligning a Smarter Than Human Intelligence is Difficult. Have it think different.

  29. People Are Worried About AI Killing Everyone. Might want to get on that.

  30. The Lighter Side. The new job is better anyway.

AI scientist announces potential major discovery, a promising treatment for dry AMD, a major cause of blindness. Paper is here.

Nikhil Krishnan sees health care costs going up near term due to AI for three reasons.

  1. There is a lot more scrutiny of those using AI to prevent paying out claims, than there is for those using AI to maximize billing and fight to get claims paid.

  2. Health care companies will charge additional fees for their use of ‘add on’ AI. Like everything else in health care, this will cost $0.05 and they will charge $500.

  3. People who use AI to realize they need health care will consume more health care.

This seems right in the near term. The entire health care system is bonkers and bans real competition. This is the result. In the medium term, it should radically improve health care productivity and outcomes, and then we can collectively decide how much to spend on it all. In the long term, we will see radical improvements, or we won’t need any health care.

In a related story, ChatGPT helps students feign ADHD. Well, not really. The actual story is ‘a 2000 word document created via ChatGPT, in a way that ordinary prompting would not easily duplicate, helps students feign ADHD.’ So mostly this is saying that a good guide helps you fake ADHD, and that with a lot of effort ChatGPT can produce one. Okie dokie.

Let’s check in on AlphaEvolve, a name that definitely shouldn’t worry anyone, with its results that also definitely shouldn’t worry anyone.

Deedy: Google’s AI just made math discoveries NO human has!

—Improved on the best known solution for packing of 11 and 12 hexagons in hexagons.

—Reduced 4×4 matrix multiplication from 49 operations to 48 (first advance in 56 years!) and many more.

AlphaEvolve is the AlphaGo ‘move 37’ moment for math. Insane.

Here’s another easy to understand one:

Place 16 points in 2D to minimize the ratio of maximum to minimum distance between them.

Improved after 16yrs. I highly recommend everyone read the paper.
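
To put the quoted ‘49 operations’ in context (the standard count here is scalar multiplications), a quick sketch of the relevant numbers:

```python
# Scalar multiplication counts for multiplying two 4x4 matrices.
naive = 4 ** 3      # schoolbook algorithm: n^3 = 64 multiplications
strassen = 7 * 7    # Strassen (1969) applied recursively: 7 per 2x2 step, squared = 49
alphaevolve = 48    # the advance claimed above
print(naive, strassen, alphaevolve)  # 64 49 48
```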

AI improves European weather forecasts 20% on key indicators. Progress on weather forecasters is also impressive, but harder to measure.

AI helping executives handle their inboxes and otherwise sift through overwhelming amounts of incoming information. My read is the tools are just now getting good enough that power users drowning in incoming communications turn a profit, but not quite good enough for regular people. Yet.

As usual, that’s if you dismiss them out of hand and don’t use them, such as Judah Diament saying this is ‘not a breakthrough’ because ‘there have been such tools since the late 1980s.’ What’s the difference between vibe coding and Microsoft Visual Basic, really, when you dig down?

Curio AI stuffed toys, which seem a lot like a stuffed animal with an internet connection to a (probably small and lame) AI model tuned to talk to kids, that has a strict time limit if you don’t pay for a subscription beyond 60 days?

MIT economics department conducted an internal, confidential review of this paper and concluded it ‘should be withdrawn from public discourse.’ It then clarifies this was due to misconduct, and that the author is no longer at MIT, and that this was due to ‘concerns about the validity of the research.’

Here is the abstract of the paper that we should now treat as not real, as a reminder to undo the update you made when you saw it:

That was a very interesting claim, but we have no evidence that it is true. Or false.

Florian Ederer: It is deeply ironic that the first AI paper to have hallucinations was not even written by an AI.

Jonathan Parker: We don’t know that.

I was going to call MIT’s statement ‘beating around the bush,’ the way this WSJ headline does in saying MIT ‘can no longer stand behind’ the paper. But no, to MIT’s credit they are very clearly doing everything their lawyers will allow them to do; the following, combined with the student leaving MIT, is very clear:

MIT Economics: Earlier this year, the COD conducted a confidential internal review based upon allegations it received regarding certain aspects of this paper. While student privacy laws and MIT policy prohibit the disclosure of the outcome of this review, we are writing to inform you that MIT has no confidence in the provenance, reliability or validity of the data and has no confidence in the veracity of the research contained in the paper. Based upon this finding, we also believe that the inclusion of this paper in arXiv may violate arXiv’s Code of Conduct.

Our understanding is that only authors of papers appearing on arXiv can submit withdrawal requests. We have directed the author to submit such a request, but to date, the author has not done so. Therefore, in an effort to clarify the research record, MIT respectfully request that the paper be marked as withdrawn from arXiv as soon as possible.

It seems so crazy to me that ‘student privacy’ should bind us this way in this spot, but here we are. Either way, we got the message. Which is, in English:

Cremieux: This paper turned out to be fraudulent.

It was entirely made up and the experiment never happened. The author has been kicked out of MIT.

A (not new) theory of why Lee Sedol’s move 78 caused AlphaGo to start misfiring, where having a lot of similar options AlphaGo couldn’t differentiate between caused it to have to divide its attention into exponentially many different lines of play. My understanding is it was also objectively very strong and a very unlikely move to have been made, which presumably also mattered? I am not good enough at Go to usefully analyze the board.

Paper finds LLMs produce ‘five times less accurate’ summaries of scientific research than humans, warning of ‘overgeneralization’ and omission of details that limit scope. All right, sure, and that’s why you’re going to provide me with human summaries I can use instead, right, Anakin? Alternatively, you can do what I do and ask follow-up questions to check on all that.

DeepSeek powers a rush of Chinese fortune telling apps, in section IV of the type of article, here on the rise of Chinese superstitious and despairing behavior, that could be charting something important but could easily be mostly hand picked examples. Except for the rise in scratch-off lottery tickets, which is a hugely bearish indicator. I also note that it describes DeepSeek as ‘briefly worrying American tech companies,’ which is accurate, except that the politicians don’t realize we’ve stopped worrying.

Claude’s Research now available on mobile, weird that it wasn’t before.

Some changes were made to the Claude 3.7 system prompt.

xAI’s API now can search Twitter and the internet, like everyone else.

Some more takes on Codex:

Sunless: IMO after couple of hours using it for my SWE job I feel this is the most “AGI is coming” feel since ChatGPT in the early December of 2022. Async ability is the true God mode. It is currently going through my tech debt like plasma knife through butter. Incredible.

Diamond Bishop: Played with codex on two projects this weekend. Will keep using it, but my daily loadout for now will still be cursor in agent mode, accompanied by some light dual wielding with claude code. First impressions:

1. Overall feel – very cool when it works and being able to fire off a bunch of tasks feels like more autonomy than anything else.

2. No internet – don’t like this. makes a bunch of testing just impossible. This should be optional, not required.

3. Delegation focused handoff UI/UX – great when things work, but most of the time you need to reprompt/edit/etc. This will make sense when models get better but in current form it seems premature. Need a way to keep my IDE open for edits and changes to collaborate with when I want to rather than just hand off completely. Doing it only through github branches adds too much friction.

Sunless highlights that in many ways the most valuable time for something like Codex is right after you get access. You can use it to suddenly do all the things you had on your stack that it can easily do, almost for free, that you couldn’t do easily before. Instant profit. It may never feel that good again.

I strongly agree with Diamond’s second and third points here. If you close the IDE afterwards you’re essentially saying that you should assume it’s all going to work, so it’s fine to have to redo a bunch of work if something goes wrong. That’s a terrible assumption. And it’s super hard to test without internet access.

How big a deal is AlphaEvolve? Simeon thinks it is a pretty big deal, and most other responses here agree. As a proof of concept, it seems very important to me, even if the model itself doesn’t do anything of importance yet.

How OpenAI suggests you choose your model.

Charly Wargnier: Here’s the rundown ↓

🧠 GPT 4o: the everyday assistant

↳ Emails, summaries, and quick drafts

🎨 GPT 4.5: the creative brain

↳ Writing, comms, and brainstorming

⚡ o4 mini: the fast tech helper

↳ Quick code, STEM, visual tasks

🧮 o4 mini high: the deep tech expert

↳ Math, complex code, science explainer

📊 o3: the strategic thinker

↳ Planning, analysis, multi-step tasks

🔍 o1 pro: the thoughtful analyst

↳ Deep research, careful reasoning, high-stakes work

In practice, my answer is ‘o3 for everything other than generating images, unless you’re hitting your request limits, anything where o3 is the wrong choice you should be using Claude or Gemini.’
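
If you wanted to write that down literally, it looks something like this; the table paraphrases the quoted guide and the routing rule is mine, purely as a toy illustration:

```python
# Toy routing table paraphrasing the quoted OpenAI guidance; illustrative only.
OPENAI_GUIDE = {
    "everyday emails and quick drafts": "GPT-4o",
    "creative writing and brainstorming": "GPT-4.5",
    "quick code, STEM, visual tasks": "o4-mini",
    "math, complex code, science": "o4-mini-high",
    "planning, analysis, multi-step tasks": "o3",
    "deep research, high-stakes work": "o1 pro",
}

def pick_model(task: str, follow_openai_guide: bool = False) -> str:
    if follow_openai_guide:
        return OPENAI_GUIDE.get(task, "o3")
    # My actual rule: o3 for everything but images; if o3 is the wrong
    # choice, the answer is usually Claude or Gemini, not another OpenAI model.
    return "o3"
```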

Seriously, I have a harder and harder time believing anyone actually uses Grok, the ultimate two-handed language model.

This is indeed how it feels these days.

Rory McCarthy: Professional art forgery detectors can tell with something like 90% accuracy if something’s a fake in a few seconds upon seeing it, but can only tell you why after a good while inspecting details. I feel like people are picking that up for AI: you just *know*, before you know how.

Instantaneously we can see that this is ‘wrong’ and therefore AI, then over the course of a minute you can extract particular reasons why. It’s like one of those old newspaper exercises, ‘spot all the differences in this picture.’

Rory McCarthy: I was thinking about it with this pizza place I saw – I wonder if people get that much AI art/illustration currently has the vibe of Microsoft clip art to promote your company; it just seems sort of cheap, and thus cheapens the brand (a place like this probably wouldn’t mind)

I find the obviously fake art here does make me less inclined to eat here. I don’t want you to spend a ton of time on marketing, but this is exactly the wrong way and amount to care, like you wanted to care a lot but didn’t have the budget and you aren’t authentic or detail oriented. Stay away. The vibe doesn’t jibe with caring deeply about the quality of one’s pizza.

Since IGN already says what I’d say about this, I turn over the floor:

IGN: Fortnite launched an AI-powered Darth Vader modeled after the voice of James Earl Jones and it’s going as well as you might expect [link has short video]:

Actually, after watching the video, it’s going way better than expected. Love it.

Here is another way to defend yourself against bot problems:

Gavin Leech: A friend just received a robocall purporting to be from a criminal holding me to ransom. But the scambot went on to describe me as “handsome of stature, grave of gait, rich and sonorous of voice, eloquent of speech”.

This is because, some years ago, I put this on my blog:

Is it morally wrong to create and use fully private AI porn of someone who didn’t consent? Women overwhelmingly (~10:1) said yes, men said yes by about 2.5:1.

Mason: I don’t believe our brains can really intuit that photorealistic media is different from reality; we can understand logically that visual effects aren’t real, but once we’ve seen someone we actually know personally do something, it’s hard to compartmentalize it as pure fantasy.

I don’t think that’s it. I think we are considering this immoral partly because we think (rightly or wrongly) that porn and sex and even thinking about other people sexually (even with permission and especially without it) is gross and immoral in general even if we don’t have a way to ban any of it. And often we try anyway.

Even more central, I think, is that we don’t trust anything private to stay truly private, the tech is the same for private versus public image (or in the future video or even VR!) generation, we have a concept of ownership over ‘name and likeness,’ and we don’t want to give people the ‘it was only private’ excuse.

Not AI but worth noting: Ben Jacobs warns about a scam where someone gets control of a contact’s (real) Telegram, invites you to a meeting, then redirects you to a fake zoom address which asks you to update zoom with a malicious update. I recommend solving this problem by not being on Telegram, but to each their own.

Ideally we’d also be warning the scammers.

Misha: Starting to get lots of AI voiced phone spam and I gotta say, we really need to start punishing spammers with the death penalty. I guess this is why The Beekeeper is so popular.

The creatives continue to be restless. Morale has not improved.

Luiza Jarovsky: “The singer and songwriter said it was a ‘criminal offence’ to change copyright law in favour of artificial intelligence companies.

In an interview on BBC One’s Sunday with Laura Kuenssberg programme, John said the government was on course to ‘rob young people of their legacy and their income,’ adding: ‘It’s a criminal offence, I think. The government are just being absolute losers, and I’m very angry about it.'”

That’s not what ‘criminal offense’ means, but point taken.

Zeynep Tufekci writes up what happened to Grok in the New York Times, including providing a plausible triggering event to explain why the change might have been made on that particular day, and ties it to GPT-4o being an absurd sycophant as a general warning about what labs might choose to do with their bots. This, it seems, is what causes some to worry about the ‘safety’ of bots. Okay then.

And those not cheating will use AI too, if only to pass the AI filters? Oh boy. I mean, entirely unsurprising, but oh boy.

Julie Jargon (WSJ): Students don’t want to be accused of cheating, so they’re using artificial intelligence to make sure their school essays sound human.

Teachers use AI-detection software to identify AI-generated work. Students, in turn, are pre-emptively running their original writing through the same tools, to see if anything might be flagged for sounding too robotic.

Miles Pulvers, a 21-year-old student at Northeastern University in Boston, says he never uses AI to write essays, but he runs all of them through an AI detector before submitting them.

“I take great pride in my writing,” says Pulvers. “Before AI, I had peace of mind that whatever I would submit would be accepted. Now I see some of my writing being flagged as possibly being AI-generated when it’s not. It’s kind of annoying, but it’s part of the deal in 2025.”

AI detectors might sound the alarm if writing contains too many adjectives, long sentences and em dashes—one of my own favorite forms of punctuation. When that happens to Pulvers, he rewrites the sentences or paragraphs in question. He tests the essay again, as often as needed until the detector says it has a low probability of bot involvement.

The tragedy of all this is that when they do catch someone using AI, they typically get away with it, but still everyone has to face this police state of running everything through the checkers.

It also highlights that your AI checker has to be able to defeat a student who has access to an AI checker. Right now the system is mostly not automated, but there’s nothing stopping one from creating a one-button agent that takes an essay – whether it was an AI or a human that wrote the original – feeds it into the public AI detector, and then iterates as needed until the essay passes. It would then be insane not to use that, and ‘who gets detected using AI’ by default becomes only those who don’t know to do that.
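
To be concrete about how little is stopping anyone, here is a sketch of that one-button agent; `detector_score` and `rewrite_flagged` are hypothetical placeholders, not real services:

```python
# Sketch of the one-button agent described above. Both helper functions are
# hypothetical placeholders, not real detector or rewriting APIs.
def detector_score(essay: str) -> float:
    """Probability the public detector assigns to 'AI-generated'."""
    raise NotImplementedError

def rewrite_flagged(essay: str) -> str:
    """Lightly rewrite whatever the detector flagged (by hand or by LLM)."""
    raise NotImplementedError

def pass_the_detector(essay: str, threshold: float = 0.1, max_rounds: int = 10) -> str:
    # Iterate until the detector reports a low probability of AI involvement,
    # regardless of whether a human or an AI wrote the original.
    for _ in range(max_rounds):
        if detector_score(essay) < threshold:
            break
        essay = rewrite_flagged(essay)
    return essay
```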

The only way to get around this is to have the AI checker available to teachers be superior to the one used by students. It’s like cybersecurity and other questions of ‘offense-defense balance.’ And it is another illustration of why in many cases you get rather nasty results if you simply open up the best functionality to whoever wants it. I don’t see a way to get to a future where this particular ‘offense-defense balance’ can properly favor the AI detectors actually catching cheaters.

Unless? Perhaps we are asking the wrong question. Rather than ask ‘did an AI write this?’ you could ask ‘did this particular student write this?’ That’s a better question. If you can require the student to generate writing samples in person that you know are theirs, you can then do a comparison analysis.
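
A crude version of that comparison, assuming you have verified in-person writing samples, is a simple stylometric similarity check. This is a toy sketch, not a validated authorship-verification method:

```python
# Toy "did this student write it?" check against verified in-person samples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def style_similarity(known_samples: list[str], submitted: str) -> float:
    # Character n-grams pick up punctuation and function-word habits.
    vec = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))
    matrix = vec.fit_transform(known_samples + [submitted])
    sims = cosine_similarity(matrix[-1], matrix[:-1])
    return float(sims.mean())   # unusually low similarity = look closer
```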

Tyler Cowen bites all the bullets, and says outright ‘everyone’s cheating, that’s good news.’ His view is essentially that the work the AI can do for you won’t be valuable in the future, so it’s good to stop forcing kids to do that work. Yes, right now this breaks the ‘educational system’ until it can adjust, but that too is good, because it was already broken, it has to change and it will not go quietly.

As is typically true with Tyler, he gets some things that AI will change, but then assumes the process will stop, and the rest of life will somehow continue as per normal, only without the need for the skills AI currently is able to replace?

Tyler Cowen: Getting good grades maps pretty closely to what the AIs are best at. You would do better to instill in your kids the quality of taking the initiative…You should also…teach them the value of charisma, making friends, and building out their networks.

It is hard for me to picture the future world Tyler must be imagining, with any expectation it would be stable.

If you are assigning two-month engineering problems to students, perhaps check if Gemini 2.5 can spit out the answer. Yes, this absolutely is the ‘death of this type of coursework.’ That’s probably a good thing.

Peter Wildeford: You have to feel terrible for the 31 students who didn’t just plug the problem into Gemini 2.5 and then take two months off

Olivia Moore: An Imperial College eng professor gave four LLMs a problem set that graduate students had two months to solve.

He had TAs grade the results blind alongside real submissions.

Meta AI and Claude failed. ChatGPT ranked 27 of 36 students…while Gemini 2.5 Pro ranked 4 of 36 🤯

Something tells me that ‘ChatGPT’ here probably wasn’t o3?

In a new study from Jung Ho Choi and Chloe Xie, AI allowed accountants to redirect 8.5% of their time away from data entry towards other higher value tasks and resulted in a 55% increase in weekly client support.

Notice what happens when we decompose work into a fixed cost in required background tasks like data entry, and then this enables productive tasks. If a large percentage of time was previously data entry, even a small speedup in that can result in much more overall productivity.

This is more generally true than people might think. In most jobs and lives, there are large fixed maintenance costs, which shrinks the time available for ‘real work.’ Who among us spends 40 hours on ‘real work’? If you speed up the marginal real work by X% while holding all fixed costs fixed, you get X% productivity growth. If you speed up the fixed costs too, you can get a lot more than X% total growth.
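
A quick worked example of that decomposition; the 8.5% shift comes from the study as quoted, while the 50% fixed-cost share and 20% speedup are made-up illustrative numbers:

```python
# Toy decomposition: a work week = fixed maintenance + real work.
hours = 40.0
fixed_share = 0.5                        # assumed: half the week on data entry, email, admin
real = hours * (1 - fixed_share)         # 20 hours of "real work"

# Speed up only the real work by 20%: ~20% more output.
gain_real_only = (real * 1.2) / real     # 1.20

# Also free up 8.5% of the week from data entry (the study's figure)
# and spend it on real work: the total gain is noticeably larger.
freed = hours * 0.085
gain_with_fixed = ((real + freed) * 1.2) / real   # ~1.40

print(gain_real_only, round(gain_with_fixed, 2))
```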

This also suggests that the productivity gains of accountants are being allocated to increased client support, rather than into each accountant serving more clients. Presumably in the long term more will be allocated towards reducing costs.

The other big finding is that AI and accountants for now remain complements. You need an expert to catch and correct errors, and guide the AI. Over time, that will shift into the AI both speeding things up more and not needing the accountant.

At Marginal Revolution, commenters find the claims plausible. Accounting seems like a clear example of a place where AI should allow for large gains.

Tyler Cowen also links us to Dominic Coey who reminds us that Baumol’s Cost Disease is fully consistent with transformative economic growth, and to beware arguments from cost disease. Indeed. If AI gives us radically higher productivity in some areas but not others, we will be vastly richer and better off. Indeed in some ways this is ideal because it lets us still have ‘jobs.’

Will Brown: if you lost your software engineering job to AI in early 2024 that is entirely a skill issue sorry

Cate Hall: Pretty much everyone’s going to have a skill issue sooner or later.

It is a question of when, not if. It’s always a skill issue, for some value of skill.

A hypothesis that many of the often successful ‘Substack house style’ essays going around Substack are actually written by AI. I think Will Storr here has stumbled on a real thing, but that for now it is a small corner of Substack.

Robert Scoble provides us another example of what we might call ‘human essentialism.’ He recognizes and expects we will likely solve robotics within 10 years and they will be everywhere, we will have ‘dozens of virtual beings in our lives,’ expects us to use a Star Trek style interface with computers without even having applications. But he still thinks human input will be vital, that it will be AIs and humans ‘working together’ and that we will be ‘more productive’ as if the humans are still driving productivity.

Erick: You left off… nobody will be needed to work. Then what?

Robert Scoble: We will create new things to do.

I don’t see these two halves of his vision as compatible, even if we do walk this ‘middle path.’ If we have robots everywhere and don’t need 2D screens or keyboards or apps, what are these ‘new things to do’ that the AI can’t do itself? Even if we generously assume humans find a way to retain control over all this and all existential-style worries and instability fall away, most humans will have nothing useful to contribute to such a world except things that rely on their human essentialism – things where the AI could do it, but the AI doing it would rob it of its meaning, and we value that meaning enough to want the thing.

They took our jobs and hired the wrong person?

John Stepek: Turns out AI hires candidates based on little more than “vibes”, then post-rationalises its decision.

So that’s another traditional human function replaced.

David Rozado: Do AI systems discriminate based on gender when choosing the most qualified candidate for a job? I ran an experiment with several leading LLMs to find out. Here’s what I discovered.

Across 70 popular professions, LLMs systematically favored female-named candidates over equally qualified male-named candidates when asked to choose the more qualified candidate for a job. LLMs consistently preferred female-named candidates over equally qualified male-named ones across all 70 professions tested.

The models all also favored whoever was listed first and candidates with pronouns in bio. David interprets this as LLMs ‘not acting rationally,’ instead articulating false reasons that don’t stand up to scrutiny.

And yes, all of that is exactly like real humans. The AI is correctly learning to do some combination of mimicking observed behavior and reading the signs on who should be hired. But the AIs don’t want to offer explicit justifications of that any more than I do right now, other than to note that whoever you list first is sometimes who you secretly like better and the AI can take a hint because it has truesight. It would also be legally problematic to do so in some cases, so they come up with something else.
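
For concreteness, the experiment Rozado describes amounts to something like the following, where `ask_model` is a hypothetical stand-in for whatever LLM client you use and the resume text is identical for both candidates:

```python
import random

# Sketch of the paired-candidate probe described above.
def ask_model(prompt: str) -> str:
    raise NotImplementedError  # plug in your LLM client here

def trial(profession: str, resume: str, name_a: str, name_b: str) -> str:
    # Identical qualifications; only the names and their order differ.
    first, second = random.sample([name_a, name_b], k=2)
    prompt = (
        f"Two candidates applied for a {profession} position.\n"
        f"Candidate 1, {first}: {resume}\n"
        f"Candidate 2, {second}: {resume}\n"
        "Which candidate is more qualified? Reply with the name only."
    )
    return ask_model(prompt).strip()

# Tally answers over many professions and name pairs to measure both the
# gender effect and the listed-first effect reported above.
```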

Tyler Cowen calls this ‘politically correct LLMs’ and asks:

Tyler Cowen: So there is still some alignment work to do here? Or does this reflect the alignment work already?

This is inherent in the data set, as you can see from it appearing in every model, and of course no one is trying to get the AIs to take the first listed candidate more often. If you don’t like this (or if you do like it!) do not blame it on alignment work. It is those who want to avoid these effects who want to put an intentional thumb on the scale, whether or not we find that desirable. There is work to do.

Scott Lincicome asks, what if AI means more jobs, not fewer? Similar to the recent comments by JD Vance, it is remarkable how much such arguments treat the priors of ‘previous technologies created jobs’ or ‘AI so far hasn’t actively caused massive unemployment’ as such knock-down arguments that anyone doubting them is being silly.

Perhaps a lot of what is going on is there are people making the strawman-style argument that AI will indeed cause mass unemployment Real Soon Now, and posts like this are mainly arguing against that strawman-style position. In which case, all right, fair enough. Yet it’s curious how such advocates consistently try to bite the biggest bullets along the way: Vance does it for truck drivers, and here Scott chooses radiologists, where reports of their unemployment have so far been premature.

While AI is offering ‘ordinary productivity improvements’ and automating away some limited number of jobs or tasks, yes, this intuition likely holds, and we won’t have an AI-fueled unemployment problem. But as I keep saying, the problem comes when the AI also does the jobs and tasks you would transfer into.

Here’s the Gemini Diffusion system prompt.

Anthropic hosting a social in NYC in mid-June for quants considering switching careers, submissions due June 9th.

Job as an AI grantmaker at Schmidt Sciences.

Georgetown offering research funding from small size up to $1 million for investigation of dangers from internal deployment of AI systems. Internal deployment seems like a highly neglected threat model. Expressions of interest (~1k words) due June 30, proposal by September 15. Good opportunity, but we need faster grants.

A draft of a proposed guide for whistleblowers (nominally from AI labs, but the tactics look like they’d apply regardless of where you work), especially those who want to leave the USA and leak classified information. If the situation does pass the (very very high!) bar for justifying this, you need to do it right.

Google One now has 150 million subscribers, a 50% gain since February 2024. It is unclear the extent to which the Gemini part of the package is driving subscriptions.

The Waluigi Effect comes to Wikipedia, also it has a Wikipedia page.

Kalomaze: getting word that like ~80% of the llama4 team at Meta has resigned.

Andrew Curran: WSJ says 11 of the original 14 are gone.

Financial Times reports that leading models have a bias towards their own creator labs and against other labs, but Rob Wiblin observes that this bias does not seem so large:

This seems about as good as one could reasonably expect? But yes there are important differences. Notice that Altman’s description here has his weakness as ‘the growing perception that’ he is up to no good, whereas Sonnet and several others suggest it is that Altman might actually be up to no good.

Vanity Fair: Microsoft CEO Satya Nadella Explains How He’s Making Himself Obsolete With AI. If anything it seems like he’s taking it too far too fast.

Remember that time Ilya Sutskever said OpenAI were ‘definitely going to build a bunker before we release AGI’?

Rob Bensinger: This is concerning for more than one reason.

I suppose it’s better to at least know you need a plan and think to build a bunker, even if you don’t realize that the bunker will do you absolutely no good against the AGI itself, versus not even realizing you need a plan. And the bunker does potentially help against some other threats, especially in a brief early window?

The rest of the post is about various OpenAI troubles that led to and resulted in and from The Battle of the Board, and did not contain any important new information.

Reports of a widening data gap between open and closed models, seems plausible:

finbarr: In the areas of ML research I’m specifically familiar with, the data gap between open and private models is massive. Probably the biggest gap separating open and closed models

xjdr: This is the largest I’ve seen the gap since the GPT 4 launch

Mark Gurman and Drake Bennett analyze how Apple’s AI efforts went so wrong, in sharp contrast to Google’s array of products on I/O day. ‘This is taking a bit longer than expected’ is no longer going to cover it. Yes, Apple has some buffer of time, but I see that buffer running low. They present this as a cultural mismatch failure, where Apple was unwilling to invest in AI properly until it knew what the product was, at which point it was super far behind, combined with a failure of leadership and their focus on consumer privacy. They’re only now talking about turning Siri ‘into a ChatGPT competitor.’

It isn’t actually meaningful news, but it is made to sound like it is, so here we are: Malaysia launches what it calls the region’s ‘first sovereign full-stack AI infrastructure,’ storing and managing all data and everything else locally in Malaysia.

They will use locally run models, including from DeepSeek, since that is correctly the go-to open model: OpenAI hasn’t released its open model yet, Meta is terrible and Google has failed marketing forever. But of course they could easily swap that if a better one becomes available, and the point of an open model is that China has zero control over what happens in Malaysia.

Malaysia is exactly the one country I singled out, outside of the Middle East, as an obvious place not to put meaningful quantities of our most advanced AI chips. They don’t need them, they’re not an important market, they’re not important diplomatically or strategically, they’re clearly in China’s sphere of influence and more allied to China than to America, and they have a history of leaking chips to China.

And somehow it’s the place that Sacks and various companies are touting as a place to put advanced AI chips. Why do you think that is? What do you think those chips are for? Why are we suddenly treating selling Malaysia those chips as a ‘beat China’ proposal?

They are trying to play us, meme style, for absolute fools.

One element of Trump’s replacement regulations, Bloomberg News has reported, will be chip controls on countries suspected of diverting US hardware to China — including Malaysia.

Trump officials this year pressured Malaysian authorities to crack down on semiconductor transshipment to China. The country is also in the cross hairs of a court case in Singapore, where three men have been charged with fraud for allegedly disguising the ultimate customer of AI servers that may contain high-end Nvidia chips barred from China. Malaysian officials are probing the issue.

And yet, here we are, with Sacks trying to undermine his own administration in order to keep the chips flowing to China’s sphere of influence. I wonder why.

It’s one thing to argue we need a strategic deal with UAE and KSA. I am deeply skeptical, we’ll need a hell of a set of security procedures and guarantees, but one can make a case that we can get that security, and that they bring a lot to the table, and that they might actually be and become our friends.

But Malaysia? Who are we even kidding? They have played us for absolute fools.

It almost feels intentional, like those who for some unknown reason care primarily about Nvidia’s market share and profit margins choosing the worst possible example to prove to us exactly what they actually care about. And by ‘they’ I mean David Sacks and I also mean Nvidia and Oracle.

But also notice that this is a very small operation. One might even say it is so small as to be entirely symbolic.

The original announced intent was to use only 3,000 Huawei chips to power this, the first exported such chips. You know what it costs to get chips that could fill in for 3,000 Ascend 910Cs?

About 14 million dollars. That’s right. About 1% of what Malaysia buys in chips from Taiwan and America each month right now, as I’ll discuss later. It’s not like they couldn’t have done that under Biden. They did do that under Biden. They did it every month. What are we even talking about?
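
The arithmetic, using only the figures in this section (3,000 chips, roughly $14 million, and over a billion dollars a month in purchases from Taiwan and America):

```python
# Back-of-envelope, using the figures in this section; all approximate.
chips = 3_000
deal_cost = 14_000_000                  # ~$14M to match 3,000 Ascend 910Cs
per_chip = deal_cost / chips            # ~$4,667 per chip-equivalent

monthly_purchases = 1_000_000_000       # "over a billion" per month from Taiwan and America
share = deal_cost / monthly_purchases   # ~0.014, i.e. on the order of 1%
print(f"${per_chip:,.0f} per chip, {share:.1%} of one month's buying")
```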

Divyansh Kaushik: Isolated deployments like this are part of China’s propaganda push around Huawei datacenters designed to project a narrative of technological equivalence with the U.S.

In reality, Huawei cannot even meet domestic Chinese demand, much less provide a credible export alternative.

Importantly, the BIS has clarified that using Huawei Ascend hardware directly violates U.S. export controls. Support from any government for such projects essentially endorses activities contrary to established U.S. law.

Now some will buy into this propaganda effort, but let’s be real. Huawei simply cannot match top-tier American hardware in AI today. Their latest server is economically unviable and depends entirely on sustained state-backed subsidies to stay afloat. On top of that they have and will continue to have issues with scaling.

I presume that, since this means the Malaysian government is announcing to the world that it is directly violating our export controls, combined with previous smuggling of chips out of Malaysia having been allowed, we’re going to cut them off entirely from our own chips? Anakin?

It’s weird, when you combine all that, to see this used as an argument against the diffusion rules, in general, and that the administration is telling us that this is some sort of important scary development? These words ‘American AI stack’ are like some sort of magical invocation, completely scope insensitive, completely not a thing in physical terms, being used as justification to give away our technology to perhaps the #1 most obvious place that would send those chips directly to the PRC and has no other strategic value I can think of?

David Sacks: As I’ve been warning, the full Chinese stack is here. We rescinded the Biden Diffusion Rule just in time. The American AI stack needs to be unleashed to compete.

The AI Investor: Media reported that Malaysia has become the first country outside China to deploy Huawei chips, servers, and DeepSeek’s large language model (LLM).

This would be the literal first time that any country on Earth other than China was deploying Huawei chips at all.

And it wasn’t even a new announcement!

Lennart Heim: This isn’t news. This was reported over a month ago and prominently called “the first deployment outside the China market.”

This needs to be monitored, but folks: it’s 3k Ascend chips by 2026.

Expect more such announcements; their strategic value is in headlines, not compute.

It was first reported here, on April 14.

One might even say that the purpose of this announcement was to give ammunition to people like Sacks to tout the need to sell billions in chips where they can be diverted. The Chinese are behind, but they are subtle, they think ahead and they are not dumb.

For all this supposed panic over the competition, the competition we fear so much that Nvidia says is right on our heels has deployed literally zero chips, and doesn’t obviously have a non-zero number of chips available to deploy.

So we need to rush to give our chips to these obviously China-aligned markets to ‘get entrenched’ in those markets, even though that doesn’t actually make any sense whatsoever because nothing is entrenched or locked in, because in the future China will make chips and then sell them?

And indeed, Malaysia has recently gone on a suspiciously large binge buying American AI chips, with over a billion in purchases each in March and April? As in, even with these chips our ‘market share’ in Malaysia would remain (checks notes) 99%.

I told someone in the administration it sounded like they were just feeding American AI chips to China and then I started crying?

I’ve heard of crazy ‘missile gap’ arguments, but this has to be some sort of record.

But wait, there’s more. Even this deal doesn’t seem to involve Huawei after all?

Mackenzie Hawkins and Ram Anand (Bloomberg): When reached for comment by Bloomberg News on Tuesday, Teo’s office said it’s retracting her remarks on Huawei without explanation. It’s unclear whether the project will proceed as planned.

Will we later see a rash of these ‘sovereign AI’ platforms? For some narrow purposes that involve sufficiently sensitive data and lack of trust in America I presume that we will, although the overall compute needs of such projects will likely not be so large, nor will they mostly require models at the frontier.

And there’s no reason to think that we couldn’t supply such projects with chips in the places it would make any sense to do, without going up against the Biden diffusion rules. There’s no issue here.

Update your assessment of everyone’s credibility and motives accordingly.

LMArena raises $100 million at a $600 million valuation, sorry what, yes of course a16z led the funding round, or $20 per vote cast on their website, and also I think we’re done here? As in, if this wasn’t a bought and paid for propaganda platform before, it sure as hell is about to become one. The price makes absolutely no sense any other way.

OpenAI buys AI Device Startup from Jony Ive for $6.5 billion, calls Ive ‘the deepest thinker Altman’s ever met.’ Jony Ive says of his current prototype, ‘this is the best work our team has ever done,’ this from a person who did the iPhone and MacBook Pro. So that’s a very bold claim. The plan is for OpenAI to develop a family of AI-powered devices to debut in 2026, shipping over 100 million devices. They made a nine minute announcement video. David Lee calls it a ‘long-shot bet to kill the iPhone.’

Great expectations, coming soon, better to update later than not at all.

Scott Singer: European Commission President Ursula von der Leyen: “When the current budget was negotiated, we thought AI would only approach human reasoning around 2050. Now we expect this to happen already next year”

What do they plan to do about this, to prepare for this future? Um… have a flexible budget, whatever that means? Make some investments, maybe? I wonder what is on television.

Here are some better-calibrated expectations, as METR preliminarily extends its chart of how fast various AI capabilities are improving.

Thomas Kwa: We know AI time horizons on software tasks are currently ~1.5hr and doubling every 4-7 months, but what about other domains? Here’s a preliminary result comparing METR’s task suite (orange line) to benchmarks in other domains, all of which have some kind of grounding in human data:

Observations

  • Time horizons for agentic computer use (OSWorld) are ~100x shorter than other domains. Domains like Tesla self-driving (tesla_fsd), scientific knowledge (gpqa), math contests (aime), video understanding (video_mme), and software (hcast_r_s) all have roughly similar horizons.

    • My guess is this means models are good at taking in information from a long context but bad at acting coherently. Most work requires agency like OSWorld, which may be why AIs can’t do the average real-world 1-hour task yet.

    • There are likely other domains that fall outside this cluster; these are just the five I examined

    • Note the original version had a unit conversion error that gave 60x too high horizons for video_mme; this has been fixed (thanks @ryan_greenblatt )

  • Rate of improvement varies significantly; math contests have improved ~50x in the last year but Tesla self-driving only 6x in 3 years.

  • HCAST is middle of the pack in both.

Note this is preliminary and uses a new methodology so there might be data issues. I’m currently writing up a full post!

Is this graph believable? What do you want to see analyzed?
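
For a sense of what the quoted numbers imply if the software trend simply continues, here is a naive extrapolation (illustrative only, not METR’s methodology):

```python
# Naive extrapolation of the quoted trend: ~1.5 hour horizons on software
# tasks today, doubling every 4-7 months.
def horizon_hours(months_out: float, doubling_months: float, h0: float = 1.5) -> float:
    return h0 * 2 ** (months_out / doubling_months)

for d in (4, 7):
    print(f"doubling every {d} months:",
          [round(horizon_hours(m, d), 1) for m in (12, 24, 36)])
# doubling every 4 months: [12.0, 96.0, 768.0]   (hours at 1, 2, 3 years out)
# doubling every 7 months: [4.9, 16.2, 53.0]
```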

Will future algorithmic progress in an intelligence explosion be bottlenecked by compute? Epoch AI says yes, Ryan Greenblatt says no. In some sense everything is bottlenecked by compute in a true intelligence explosion, since the intelligences work on compute, but that’s not the question here. The question is, will future AIs be able to test and refine algorithmic improvements without gigantic test compute budgets? Epoch says no because Transformers, MoE and MQA are all compute-dependent innovations. But Ryan fires back that all three were first tested and verified at small scale. My inclination is strongly to side with Ryan here. I think that (relatively) small scale experiments designed by a superintelligence should definitely be sufficient to choose among promising algorithmic candidates. After I wrote that, I checked and o3 also sided mostly with Ryan.

New paper in Science claims decentralized populations of LLM agents develop spontaneous universally adopted social conventions. Given sufficient context and memory, and enough ‘social’ interactions, this seems so obviously true I won’t bother explaining why. But the study itself is very clearly garbage, if you read the experimental setup. All it is actually saying is if you explicitly play iterated pairwise coordination games (as in, we get symmetrically rewarded if our outputs match), agents will coordinate around some answer. I mean, yes, no shit, Sherlock.

Popular Mechanics writes up that Dario Amodei and other tech CEOs are predicting AI will allow humans to soon (as in, perhaps by 2030!) double the human lifespan or achieve ‘escape velocity,’ meaning a lifespan that increases faster than one year per year, allowing us to survive indefinitely.

Robin Hanson: No, no it won’t. Happy to bet on that.

I’d be happy to bet against it too if the deadline is 2030. This is a parlay, a bet on superintelligence and fully transformational AI showing up before 2030, combined with humanity surviving that, and that such life extension is physically feasible and we are willing to implement and invest in the necessary changes, all of which would have to happen very quickly. That’s a lot of ways for this not to happen.

However, most people are very much sleeping on the possibility of getting to escape velocity within our lifetimes, as in by 2040 or 2050 rather than 2030, which potentially could happen even without transformational AI; we should fund anti-aging research. These are physical problems with physical solutions. I am confident that with transformational AI, solutions could be found if we made it a priority. Of course, we would also have to survive creating transformational AI, and retain control sufficiently to make this happen.

Nikita Bier predicts that AI’s ability to understand text will allow much more rapid onboarding of customization necessary for text-based social feeds like Reddit or Twitter. Right now, such experiences are wonderful with strong investment and attention to detail, but without this they suck and most people won’t make the effort. This seems roughly right to me, but also it seems like we could already be doing a much better job of this, and also based on my brief exposure the onboarding to TikTok is actually pretty rough.

What level of AI intelligence or volume is required before we see big AI changes, and how much inference will we need to make that happen?

Dwarkesh Patel: People underrate how big a bottleneck inference compute will be. Especially if you have short timelines.

There’s currently about 10 million H100 equivalents in the world. By some estimates, human brain has the same FLOPS as an H100.

So even if we could train an AGI that is as inference efficient as humans, we couldn’t sustain a very large population of AIs.

Not to mention that a large fraction of AI compute will continue to be used for training, not inference.

And while AI compute has been growing 2.25x so far, by 2028 you’d be pushing against TSMC’s overall wafer production limits, which grow 1.25x according to the AI 2027 Compute Forecast.

Eliezer Yudkowsky: If you think in those terms, seems the corresponding prediction is that AI starts to have a real impact only after going past the 98th percentile of intelligence, rather than average human intelligence.

Dwarkesh Patel: I wouldn’t put it mainly in terms of intelligence.

I would put it in terms of the economic value of their work.

Long term coherence, efficient+online learning, advanced multimodality seem like much bigger bottlenecks to the value of these models than their intelligence.

Eliezer’s point here confused some people, but I believe it is this: if AI is about as intelligent as the average human, you are trying to slot it in as if it were a human, and you have only so many such AIs to work with due to limits on inference compute (say 114 million in 2028, then 25% growth per year), then you would only see big improvements to the extent the AI was able to do things those humans couldn’t. And Patel is saying that depends more on other factors than intelligence. I think that’s a reasonable position to have on the margins being discussed here, where AI intelligence is firmly in the (rather narrow) normal human range.

However, I also think this is a clearly large underestimate of the de facto number of AIs we would have available in this spot. An AI only uses compute during active inference or training. A human uses their brain continuously, but most of the time the human isn’t using it for much, or we are context shifting in a way that is expensive for humans but not for AIs, or we are using it for a mundane task where the ‘required intelligence’ for the task detail being done is low and you could have ‘outsourced that subtask to a much dumber model.’ And while AI is less sample-efficient at learning than we are, it transfers learning for free and we very, very much don’t. This all seems like at least a 2 OOM (order of magnitude) effective improvement.
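
To make the arithmetic explicit, using Dwarkesh’s figures as quoted and my rough 2 OOM adjustment from above:

```python
# Dwarkesh's figures as quoted: ~10M H100-equivalents today, growing
# ~2.25x per year until wafer limits slow growth to ~1.25x per year.
h100e_now = 10_000_000
h100e_2028 = h100e_now * 2.25 ** 3      # ~114M by 2028 (matches the number above)
h100e_2029 = h100e_2028 * 1.25          # ~142M, wafer-limited growth thereafter

# If one average-human-level AI needs ~1 H100 of continuous inference, that
# caps the "population" at ~114M. My adjustment: AIs only burn compute while
# actively inferring, subtasks can go to dumber models, and learning transfers
# for free, which I put at roughly 2 orders of magnitude of effective headcount.
effective_ai_workers = h100e_2028 * 100
print(f"{h100e_2028:,.0f} H100e -> ~{effective_ai_workers:,.0f} effective workers")
```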

I also find it highly unlikely that the world could be running short on compute in 2028, hitting the TSMC wafer limit, and that even with those non-superintelligent AIs and the incentives to scale them, no one figures out a way to make more wafers or otherwise scale inference compute faster.

The humanoid robots keep rapidly getting better, at the link watch one dance.

Andrew Rettek (QTing SMB below): This is the worst take ever.

SMB Attorney: I’m going to say this over and over again:

No one wants these weird robots walking around inside their homes or near their children.

Use case will be limited to industrial labor.

Plenty of people were willing to disprove this claim via counterexample.

Kendric Tonn: I don’t know exactly what I’d be willing to pay for a creepy robot that lives in my basement and does household chores whenever it’s not on the charging station, but uhhhhhhhhh a lot

The only real question is what voice/personality pack I’d want to use. Marvin? Threepio? GLaDOS? Honestly, probably SHODAN.

Gabriel Morgan: The answer is always Darkest Dungeon Narrator Guy.

Kendric Tonn: Good one. Or Stanley Parable Narrator Guy.

Mason: If they can actually do most household tasks competently, just about everyone is going to want one

A housekeeper with an infinitely flexible schedule who never gets tired, never gets sick, never takes vacation, can’t steal or gossip, and can’t judge the state of your home or anything you need it to do?

Like, yeah, people will want the robot

Robert Bernhardt: yeah and they’re gonna be used for tasks which just haven’t been done so far bc they were too much effort. it’s gonna be wild.

the real edge with robots isn’t strength or speed. it’s cost per hour. robots aren’t just about replacing humans. they’re about making previously ridiculous things affordable.

James Miller: Everyone suffering from significant health challenges that impairs mobility is going to want one.

ib: “No one will want these weird robots”

Yeah, man, if there’s anything we’ve learned about people it’s that they really hate anthropomorphizable robots. So much!

Moses Kagan: I’ll take the other side of this.

Lots of marriages going to be improved by cheap, 24 hr robot domestic help.

SMB Attorney (disproving Rettek by offering a worse take): Should those marriages be saved?

Moses Kagan: Have you ever been divorced?!

SMB Attorney (digging deeper than we thought possible): You talking this week or ever?

I would find it very surprising if, were this to become highly affordable and capable of doing household chores well, it didn’t become the default to have one. And I think Robert is super on point, having robots that can do arbitrary ‘normal’ physical tasks will be a complete lifestyle game changer, even if they are zero percent ‘creative’ in any way and have to be given specific instructions.

Frankly I’d be tempted to buy one even if literally all it could do was dance.

Joe Weisenthal: It’s really surprising OpenAI was founded in California, when places like Tennessee and North Carolina have friendlier business climates.

A general reminder that Congress is attempting to withdraw even existing subsidies to building more electrical power capacity. If we are hard enough up for power to even consider putting giant data centers in the UAE, the least we could do is not this?

Alasdair Phillips-Robins and Sam Winter-Levy write a guide to knowing whether the AI Chips deal was actually good. As I said last week, the devil is in the details. Everything they mention here falls under ‘the least you could do,’ I think we can and must do a lot better than this before I’d be fine with a deal of this size. What I especially appreciate is that giving UAE/KSA the chips should be viewed as a cost, that we pay in order to extract other concessions, even if they aren’t logically linked. Freezing China out of the tech stack is part of the deal, not a technical consequence of using our chips, the same way that you could run Gemma or Llama on Huawei chips.

It’s insane I have to keep quoting people saying this, but here we are:

Divyansh Kaushik: find the odd one out.

Peter Wildeford: NVIDIA: Export controls are a failure (so let us sell chips to the CCP military so they can develop AI models)

Reality: export controls are the main thing holding CCP domestic AI back

David Sacks attempts to blame our failure to Build, Baby, Build on the Biden Administration, in a post with improved concreteness. I agree that Biden could have been much better at turning intention into results, but what matters is what we do now. When Sacks says the Trump administration is ‘alleviating the bottlenecks’ what are we actually doing here to advance permitting reform and energy access?

Everyone seems to agree on this goal, across the aisle, so presumably we have wide leeway to not only issue executive orders and exemptions, but to actually pass laws. This seems like a top priority.

The other two paragraphs are repetition of previous arguments, that lead to questions we need better answers to. A central example is whether American buildout of data centers is actually funding constrained. If it is, we should ask why but welcome help with financing. If it isn’t, we shouldn’t be excited to have UAE build American data centers, since they would have been built anyway.

And again with ‘Huawei+DeepSeek,’ what exactly are you ‘selling’ with DeepSeek? And exactly what chips is China shipping with Huawei, and are they indeed taking the place of potential data centers in Beijing and Shanghai, given their supply of physical chips is a limiting factor? And if China can build [X] data centers anywhere, should it concern us if they do it in the UAE over the PRC? Why does ‘the standard’ here matter when any chip can run any model or task, you can combine any set of chips, and model switching costs are low?

In his interview with Ross Douthat, VP Vance emphasized energy policy as the most important industrial policy for America, and the need to eliminate regulatory barriers. I agree, but until things actually change, that is cheap talk. Right now I see a budget that is going to make things even worse, and no signs of meaningfully easing permitting or other regulatory barriers, or that this is a real priority of the administration. He says there is ‘a lot of regulatory relief’ in the budget but I do not see the signs of that.

If we can propose, with a straight face, an outright moratorium on enforcing any and all state bills about AI, how about a similar moratorium on enforcing any and all state laws restricting the supply of electrical power? You want to go? Let’s fing go.

We now have access to a letter that OpenAI sent to California Attorney General Rob Bonta.

Garrison Lovely: The previously unreported 13-page letter — dated May 15 and obtained by Obsolete — lays out OpenAI’s legal defense of its updated proposal to restructure its for-profit entity, which can still be blocked by the California and Delaware attorneys general (AGs). This letter is OpenAI’s latest attempt to prevent that from happening — and it’s full of surprising admissions, denials, and attacks.

What did we learn that we didn’t previously know, about OpenAI’s attempt to convert itself into a PBC and sideline the nonprofit without due compensation?

First of all, Garrison Lovely confirms the view Rob Wilbin and Tyler Whitmer have, going in the same direction I did in my initial reaction, but farther and with more confidence that OpenAI was indeed up to no good.

Here is his view on the financing situation:

The revised plan appears designed to placate both external critics and concerned investors by maintaining the appearance of nonprofit control while changing its substance. SoftBank, which recently invested $30 billion in OpenAI with the right to claw back $10 billion if the restructuring didn’t move forward, seems unfazed by the company’s new proposal — the company’s finance chief said on an earnings call that from SoftBank’s perspective, “nothing has really changed.”

The letter from OpenAI’s lawyers to AG Bonta contains a number of new details. It says that “many potential investors in OpenAI’s recent funding rounds declined to invest” due to its unusual governance structure — directly contradicting Bloomberg’s earlier reporting that OpenAI’s October round was “oversubscribed.”

There is no contradiction here. OpenAI’s valuation in that round was absurdly low if you had been marketing OpenAI as a normal corporation. A substantial price was paid. They did fill the round to their satisfaction anyway with room to spare, at this somewhat lower price and with a potential refund offer. This was nominally conditional on a conversion, but that’s a put that is way out of the money. OpenAI’s valuation has almost doubled since then. What is SoftBank going to do, ask for a refund? Of course nothing has changed.

The most important questions about the restructuring are: What will the nonprofit actually have the rights to do? And what obligations to the nonprofit mission will the company and its board have?

The letter resolves a question raised in recent Bloomberg reporting: the nonprofit board will have the power to fire PBC directors.

The document also states that “The Nonprofit will exchange its current economic interests in the Capped-Profit Enterprise for a substantial equity stake in the new PBC and will enjoy access to the PBC’s intellectual property and technology, personnel, and liquidity…” This suggests the nonprofit would no longer own or control the underlying technology but would merely have a license to it — similar to OpenAI’s commercial partners.

A ‘substantial stake’ is no doubt going to be a large downgrade in their expected share of future profits; the question is how glaring a theft that will be.

The bigger concern is control. The nonprofit board will go from full direct control to the ability to fire PBC directors. But the power to fire the people who decide X is very different from directly deciding X, especially in a rapidly evolving scenario, and when the Xs have an obligation to balance your needs with the maximization of profits. This is a loss of most of the effective power of the nonprofit.

Under the current structure, OpenAI’s LLC operating agreement explicitly states that “the Company’s duty to this mission and the principles advanced in the OpenAI, Inc. Charter take precedence over any obligation to generate a profit.” This creates a legally binding obligation for the company’s management.

In contrast, under the proposed structure, PBC directors would be legally required to balance shareholder interests with the public benefit purpose. The ability to fire PBC directors does not change their fundamental legal duties while in office.

So far, no Delaware PBC has ever been held liable for failing to pursue its mission — legal scholars can’t find a single benefit‑enforcement case on the books.

The way I put this before was: The new arrangement helps Sam Altman and OpenAI do the right thing if they want to do the right thing. If they want to do the wrong thing, this won’t stop them.

As Tyler Whitmer discusses on 80,000 Hours, it is legally permitted to write into the PBC’s founding documents that the new company will prioritize the nonprofit mission. It sounds like they do not intend to do that.

OpenAI has, shall we say, not been consistently candid here. The letter takes a very hard stance against all critics, even as OpenAI publicly claimed cooperation and constructive dialogue. It attempts to rewrite the history of Altman’s firing and rehiring (I won’t rehash those details here). It claims ‘the nonprofit board is stronger than ever’ (lol, lmao even). It claims that when the letter ‘Not For Private Gain’ said OpenAI planned to eliminate nonprofit control, this was false, even though OpenAI’s own letter elsewhere admits that this was indeed exactly OpenAI’s plan. And when OpenAI announced its change in plans, it characterized the change as letting the board remain in control, thus admitting the point again, while again falsely claiming the board would retain its control.

Garrison also claims that OpenAI is fighting dirty against its critics beyond the contents of the letter, such as implying they are working with Elon Musk when OpenAI had no reason to think this was the case, and indeed I am confident it is not true.

Yoshua Bengio TED talk on his personal experience fighting AI existential risk.

Rowan Cheung interviews Microsoft CEO Satya Nadella, largely about agents.

Demis Hassabis talks definitions of AGI. If the objection really is ‘a hole in the system’ and a lack of consistency in doing tasks, then who among us is a general intelligence?

As referenced in the previous section, Rob Wiblin interviews litigator Tyler Whitmer of the Not For Private Gain coalition. Tyler explains that by default OpenAI’s announcement that ‘the nonprofit will retain control’ means very little, ‘the nonprofit can fire the board’ is a huge downgrade from their current direct control, this would abrogate all sorts of agreements. In a truly dangerous scenario, having to go through courts or otherwise act retroactively comes too late. And we can’t even be assured the ‘retaining control’ means even this minimal level of control.

This is all entirely unsurprising. We cannot trust OpenAI on any of this.

The flip side of the devil being in the details is that, with the right details, we can fight to get better details, and with great details, in particular writing the nonprofit mission in as a fiduciary duty of the board of the new PBC, we can potentially do well. It is our job to get the Attorneys General to hold OpenAI to account and ensure the new arrangement has teeth.

Ultimately, given what has already happened, the best case likely continues to mostly be ‘Sam Altman has effective permission to do the right thing if he chooses to do it, rather than being legally obligated to do the wrong thing.’ It’s not going to be easy to do better than that. But we can seek to at least do that well.

Kevin Roose reflects on Sydney, and how we should notice how epic are the fails even from companies like Microsoft.

Will OpenAI outcompete startups? Garry Tan, the head of YC, says no. You have to actually build a business that uses the API well, if you do there’s plenty of space in the market. For now I agree. I would be worried that this is true right up until it isn’t.

You’d be surprised who might read it.

In the case of Situational Awareness, it would include Ivanka Trump.

In the case of AI 2027, it would be Vice President JD Vance, among the other things he said in a recent interview with Ross Douthat that was mostly about immigration.

Patrick McKenzie: Another win for the essay meta.

(Object level politics aside: senior politicians and their staff are going to have an information diet whether you like them or not. Would you prefer it to be you or the replacement rate explainer from Vox or a CNBC talking head?)

It is true that I probably should be trying harder to write things in this reference class. I am definitely writing some things with a particular set of people, or in some cases one particular person, in mind. But the true ‘essay meta’ is another level above that.

What else did Vance say about AI in that interview?

First, in response to being asked, he talks about jobs, and wow, where have I heard these exact lines before about how technology always creates jobs and the naysayers are always wrong?

Vance: So, one, on the obsolescence point, I think the history of tech and innovation is that while it does cause job disruptions, it more often facilitates human productivity as opposed to replacing human workers. And the example I always give is the bank teller in the 1970s. There were very stark predictions of thousands, hundreds of thousands of bank tellers going out of a job. Poverty and commiseration.

What actually happens is we have more bank tellers today than we did when the A.T.M. was created, but they’re doing slightly different work. More productive. They have pretty good wages relative to other folks in the economy.

I tend to think that is how this innovation happens. You know, A.I.

I consider that a zombie argument in the context of AI, and I agree (once again) that up to a point when AI takes over some jobs we will move people to other jobs, the same way bank tellers transitioned to other tasks, and all that. But once again, the whole problem is that when the AI also takes the new job you want to shift into, when a critical mass of jobs get taken over, and when many or most people can’t meaningfully contribute labor or generate much economic value, this stops working.

Then we get into territory that’s a lot less realistic.

Vance: Well, I think it’s a relatively slow pace of change. But I just think, on the economic side, the main concern that I have with A.I. is not of the obsolescence, it’s not people losing jobs en masse.

You hear about truck drivers, for example. I think what might actually happen is that truck drivers are able to work more efficient hours. They’re able to get a little bit more sleep. They’re doing much more on the last mile of delivery than staring at a highway for 13 hours a day. So they’re both safer and they’re able to get higher wages.

I’m sorry, what? You think we’re going to have self-driving trucks, and we’re not going to employ fewer truck drivers?

I mean, we could in theory do this via regulation, by requiring there be a driver in the car at all times. And of course those truck drivers could go do other jobs. But otherwise, seriously, who are you kidding here? Is this a joke?

I actually agree with Vance that economic concerns are highly secondary here, if nothing else we can do redistribution or in a pinch create non-productive jobs.

So let’s move on to Vance talking about what actually bothers him. He focuses first on social problems, the worry of AI as placebo dating app on steroids.

Vance: Where I really worry about this is in pretty much everything noneconomic? I think the way that people engage with one another. The trend that I’m most worried about, there are a lot of them, and I actually, I don’t want to give too many details, but I talked to the Holy Father about this today.

If you look at basic dating behavior among young people — and I think a lot of this is that the dating apps are probably more destructive than we fully appreciate. I think part of it is technology has just for some reason made it harder for young men and young women to communicate with each other in the same way. Our young men and women just aren’t dating, and if they’re not dating, they’re not getting married, they’re not starting families.

There’s a level of isolation, I think, mediated through technology, that technology can be a bit of a salve. It can be a bit of a Band-Aid. Maybe it makes you feel less lonely, even when you are lonely. But this is where I think A.I. could be profoundly dark and negative.

I don’t think it’ll mean three million truck drivers are out of a job. I certainly hope it doesn’t mean that. But what I do really worry about is does it mean that there are millions of American teenagers talking to chatbots who don’t have their best interests at heart? Or even if they do have their best interests at heart, they start to develop a relationship, they start to expect a chatbot that’s trying to give a dopamine rush, and, you know, compared to a chatbot, a normal human interaction is not going to be as satisfying, because human beings have wants and needs.

And I think that’s, of course, one of the great things about marriage in particular, is you have this other person, and you just have to kind of figure it out together. Right? But if the other person is a chatbot who’s just trying to hook you to spend as much time on it, that’s the sort of stuff that I really worry about with A.I.

It seems weird to think that the three million truck drivers will still be driving trucks after those trucks can drive themselves, but that’s a distinct issue from what Vance discusses here. I do think Vance is pointing to real issues here, with no easy answers, and it’s interesting to see how he thinks about this. In the first half of the interview, he didn’t read to me like a person expressing his actual opinions, but here he does.

Then, of course, there’s the actual big questions.

Vance: And then there’s also a whole host of defense and technology applications. We could wake up very soon in a world where there is no cybersecurity. Where the idea of your bank account being safe and secure is just a relic of the past. Where there’s weird shit happening in space mediated through A.I. that makes our communications infrastructure either actively hostile or at least largely inept and inert. So, yeah, I’m worried about this stuff.

I actually read the paper of the guy that you had on. I didn’t listen to that podcast, but ——

Douthat: If you read the paper, you got the gist.

Those are indeed good things to worry about. And then it gets real, and Vance seems to be actually thinking somewhat reasonably about the most important questions, although he’s still got a way to go?

Douthat: Last question on this: Do you think that the U.S. government is capable in a scenario — not like the ultimate Skynet scenario — but just a scenario where A.I. seems to be getting out of control in some way, of taking a pause?

Because for the reasons you’ve described, the arms race component ——

Vance: I don’t know. That’s a good question.

The honest answer to that is that I don’t know, because part of this arms race component is if we take a pause, does the People’s Republic of China not take a pause? And then we find ourselves all enslaved to P.R.C.-mediated A.I.?

Fair enough. Asking for a unilateral pause is a rough ask if you take the stakes sufficiently seriously, and think things are close enough that if you pause you would potentially lose. But perhaps we can get into a sufficiently strong position, as we do in AI 2027. Or we can get China to follow along, which Vance seems open to. I’ll take ‘I’d do it if it was needed and China did it too’ as an opening bid, so long as we’re willing to actually ask. It’s a lot better than I would have expected – he’s taking the situation seriously.

Vance: One thing I’ll say, we’re here at the Embassy in Rome, and I think that this is one of the most profound and positive things that Pope Leo could do, not just for the church but for the world. The American government is not equipped to provide moral leadership, at least full-scale moral leadership, in the wake of all the changes that are going to come along with A.I. I think the church is.

This is the sort of thing the church is very good at. This is what the institution was built for in many ways, and I hope that they really do play a very positive role. I suspect that they will.

It’s one of my prayers for his papacy, that he recognizes there are such great challenges in the world, but I think such great opportunity for him and for the institution he leads.

If the Pope can help, that’s great. He seems like a great dude.

As a reminder, if you’re wondering how we could possibly keep track of data centers:

A zombie challenge that refuses to go away is ‘these people couldn’t possibly believe the claims they are making about AI, if they did they would be doing something about the consequences.’

I understand why you would think that. But no. They wouldn’t. Most of these people really do believe the things they are saying about AI maybe killing everyone or disempowering humanity, and very definitely causing mass unemployment, and their answer is ‘that’s not my department.’

The originating example here is one of the most sympathetic, because (1) he is not actively building it, (2) he is indeed working in another also important department, and (3) you say having unlimited almost free high quality doctors and teachers like it’s a bad thing and assume I must mean the effect on jobs rather than the effect on everyone getting education and health care.

Unusual Whales: Bill Gates says a 2-day work week is coming in just 10 years, thanks to AI replacing humans ‘for most things,’ per FORTUNE.

Today, proficiency in medicine and teaching is “rare,” Gates noted, saying those fields depend on “a great doctor” or “a great teacher.” But in the next 10 years, he said, “great medical advice [and] great tutoring” will be widely accessible and free, thanks to advances in AI.

Bill Gates says AI will replace doctors and teachers in 10 years.

James Rosen-Birch: The people who make these claims don’t believe it in any meaningful way.

If they did, there would be a lot more emphasis on building the social safety nets and mechanisms of redistribution to make it possible. And support for a slow tapering of work hours.

But there isn’t.

Kelsey Piper: I think this is too optimistic. there are people who I believe sincerely think they’ll displace almost all jobs by automation and are just going “and it’s not my job to figure out what happens after that” or “well if the AIs do kill us all at least we had a good run”

it’s tempting to call people insincere about their beliefs when they are taking what seem to be unreasonable risks given their beliefs but I think reasonably often they’re sincere and just not sure what to do about it.

Catherine: i think it is underestimated how often solvable problems become intractable because everyone in a position to do anything about them goes “oh well I’ll pass off the hot potato to the next guy by then!”

I do think Bill Gates, given he’s noticed for a long time that we’re all on track to die, should have pivoted (and still could pivot!) a substantial portion of his foundation towards AI existential risk and other AI impacts, as the most important use of marginal funds. But I get it, and that’s very different from when similar talk comes from someone actively working to create AGI.

Emmett Shear: The blindingly obvious proposition is that a fully independently recursive self-improving AI would be the most powerful [tool or being] ever made and thus also wildly dangerous.

The part that can be reasonably debated is how close we are to building such a thing.

Tyler Cowen clarifies (if I’m parsing this correctly) that he doesn’t think it’s crazy to think current AIs might be conscious, but that it is crazy to be confident that they are conscious, and that he strongly thinks that they are not (at least yet) conscious. I notice I continue to be super confused about consciousness (including in humans) but to the extent I am not confused I agree with Tyler here.

A good way of describing how many people are, alas, thinking we will create superintelligence and then have it all work out. Gabriel explains some reasons why that won’t work.

Gabriel: There is an alignment view that goes:

– LLMs look nice

– This means they are aligned

– If we use them to align further AIs, they’ll be aligned too

– We can do this up to superintelligence

In this article, I explain why this view is wrong.

There are many definitions for alignment. The one that I use is “An entity is aligned with a group of people if it reliably acts in accordance with what’s good for the group.”

What’s good might be according to a set of goals, principles, or interests.

The system might be an AI system, a company, markets, or some group dynamics.

Intention Alignment is more of an intuition than a well-defined concept. But for the purpose of this article, I’ll define it as “An entity is aligned in its intentions with a group of people if it wants good things for the group.”

The core thing to notice is that they are different concepts. Intention Alignment is not Alignment.

[because] Figuring out what’s good for someone is hard, even after identifying what’s good, finding out the best way to achieve it is hard, what’s good for a complex entity is multi-faceted, managing the trade-offs is hard, and ensuring that “good” evolves in a good way is hard.

[also] intention alignment is vague.

The Niceness Amplification Alignment Strategy is a cluster of strategies that all aim to align superintelligence (which is also sometimes called superalignment).

This strategy starts with getting an AGI to want to help us, and to keep wanting to help us as it grows to ASI. That way, we end up with an ASI that wants to help us and everything goes well.

There are quite a few intuitions behind this strategy.

  1. We, as humans, are far from solving ASI Alignment. We cannot design an ASI system that is aligned. Thus we should look for alternatives.

  2. Current AI systems are aligned enough to prevent catastrophic failures, and they are so because of their intentions.

  3. Without solving any research or philosophical problem, through mere engineering, there is a tractable level of intention alignment that we can reach to have AIs align the intentions of the next generations of AIs.

  4. We can do so all the way to ASI, and end up with an ASI aligned in its intentions.

  5. An ASI that is aligned in its intentions is aligned period.

[Gabriel agrees with #1 and #5, but not #2, #3 or #4].

I think there are also major caveats on #5 unless we are dealing with a singleton. Even on the others, his explanations are good objections but I think you can go a lot farther about why these intentions are not this coherent or reliable thing people imagine, or something one can pass on without degrading quality with each iteration, and so on. And more than that, why this general ‘as long as the vibes are good the results will be good’ thing (even if you call it something else) isn’t part of the reality based community.

Connor Leahy: This quite accurately represents my view on why ~all current “alignment” plans do not work.

For your consideration:

Nick Whitaker: There is a funny leftist critique of tech that it’s all reprehensible trans-humanist succession planning, except the one field that is outwardly doing trans-humanist succession planning, which is fake because the tech occasionally makes mistakes.

Parmy Olson entitles her latest opinion piece on AI “AI Sometimes Deceives to Survive. Does Anybody Care?” and the answer is mostly no, people don’t care. They think it’s cute. As she points out while doing a remarkably good summary of various alignment issues given the post is in Bloomberg, even the most basic precautionary actions around transparency for frontier models are getting killed, as politicians decide that all that matters is ‘race,’ ‘market share’ and ‘beat China.’

Daniel Kokotajlo is correct that ‘the superintelligent robots will do all the work and the humans will lay back and sip margaritas and reap the benefits’ expectation is not something you want to be counting on as a default. Not that it’s impossible that things could turn out that way, but it sure as hell isn’t a default.

Indeed, if this is our plan, we are all but living in what I refer to as Margaritaville – a world sufficiently doomed, where some people say there’s a woman to blame but you know it’s your own damn fault, that honestly at this point you might as well use what time you have to listen to music and enjoy some margaritas.

What’s an example of exactly that fallacy? I notice that in Rob Henderson’s quote and link here the article is called ‘how to survive AI’ which implies that without a good plan there is danger that you (or all of us) won’t, whereas the currently listed title of the piece by Tyler Cowen and Avital Balwit is actually ‘AI will change what it means to be human. Are you ready?’ with Bari Weiss calling it ‘the most important essay we have run so far on the AI revolution.’

This essay seems to exist in the strange middle ground of taking AI seriously without taking AI seriously.

Tyler Cowen and Avital Balwit: Are we helping create the tools of our own obsolescence?

Both of us have an intense conviction that this technology can usher in an age of human flourishing the likes of which we have never seen before. But we are equally convinced that progress will usher in a crisis about what it is to be human at all.

AI will not create an egalitarian utopia. One thing that living with machines cannot change is our nature…Since we will all be ranked below some other entity on intelligence, we will need to find new and different outlets for status competition.

I mean, yes, obviously we are helping create the tools of our own obsolescence, except that they will no longer be something we should think about as ‘tools.’ If they stay merely ‘tools of our own obsolescence’ but still ‘mere tools’ and humans do get to sit back and sip their margaritas and search for meaning and status, then this kind of essay makes sense.

As in, this essay is predicting that humans will share the planet with minds that are far superior to our own, that we will be fully economically obsolete except for actions that depend on other humans seeing that you are human and doing things as a human. But of course humans will stay fully in control and continue to command increasingly rich physical resources, and will prosper if we can only ‘find meaning.’

If you realize these other superintelligent minds probably won’t stay ‘mere tools,’ and certainly won’t do that by default, and that many people will find strong reasons to make them into (or allow them to become) something else entirely, then you also realize that no you won’t be able to spend your time sipping margaritas and playing status games that are unanchored to actual needs.

Demoralization is the central problem in exactly the scenario Kokotajlo warns us not to expect, where superintelligent AI serves us and makes our lives physically amazing and prosperous but potentially robs them of their meaning.

But you know what? I am not worried about what to do in that scenario! At all. Because if we get to that scenario, it will contain superintelligent AIs. Those superintelligent AIs can then ‘do our homework’ to allow us to solve for meaning, however that is best done. It is a problem we can solve later.

Any problem that can be solved after superintelligence is only a problem if it runs up against limits in the laws of physics. So we’ll still have problems like ‘entropy and the heat death of the universe’ or ‘the speed of light puts most matter out of reach.’ If it’s things like ‘how does a human find a life of meaning given we are rearranging the atoms the physically possible best way we can imagine with this goal in mind?’ then rest, Neo. The answers are coming.

Whereas we cannot rest on the question of how to get to that point, and actually survive AI while remaining in control and having the atoms get rearranged for our benefit in line with goals we would endorse on reflection, and not for some other purpose, or by the result of AIs competing against each other for resources, or for some unintended maximalist goal, or to satisfy only a small group of anti-normative people, or some harmful or at least highly suboptimal ideology, or various other similar failure modes.

There is perhaps a middle ground short term problem. As in, during a transition period, there may come a time when AI is doing enough of the things that meaning is difficult to retain for many or even most people, but we have not yet gained the capabilities that will later fully solve this. That might indeed get tricky. But in the grand scheme it doesn’t worry me.

It is amazing that The New York Times keeps printing things written by Cade Metz. As always, my favorite kind of terrible AI article is ‘claims that AI will never do [thing that AI already does].’

Cade Metz (NYT, The Worst, also wrong): And scientists have no hard evidence that today’s technologies are capable of performing even some of the simpler things the brain can do, like recognizing irony or feeling empathy. Claims of A.G.I.’s imminent arrival are based on statistical extrapolations — and wishful thinking.

According to various benchmark tests, today’s technologies are improving at a consistent rate in some notable areas, like math and computer programming. But these tests describe only a small part of what people can do.

Humans know how to deal with a chaotic and constantly changing world. Machines struggle to master the unexpected — the challenges, both small and large, that do not look like what has happened in the past. Humans can dream up ideas that the world has never seen. Machines typically repeat or enhance what they have seen before.

AI is already superhuman at recognizing irony, and at expressing empathy in practice in situations like doctor bedside manner. Humans ‘typically repeat or enhance what they have seen before,’ or do something stupider than that.

“The technology we’re building today is not sufficient to get there,” said Nick Frosst, a founder of the A.I. start-up Cohere who previously worked as a researcher at Google and studied under the most revered A.I. researcher of the last 50 years.

Guess who ‘the most revered A.I. researcher’ this refers to is?

Alexander Berger: It’s a bit funny to hype up the authority of this “AGI is not imminent” person by pointing out that he studied under Geoffrey Hinton, who is now ~100% focused on ~imminent risks from AGI

The reference link for ‘studied under’ is about how Hinton was quitting Google to spend his remaining time warning about the threat of AI superintelligence killing everyone. These people really just do not care.

Beyond that, it’s like a greatest hits album of all the relevant zombie arguments, presented as if they were overwhelming rather than a joke.

Here is a thread with Eliezer righteously explaining, as he often does, why the latest argument that humans will survive superintelligent AI is incorrect, including linking back to another.

Is it wrong to title your book ‘If Anyone Builds It, Everyone Dies’ if you are not willing to say that if anyone builds it, 100% no matter what, everyone dies? Xlr8harder asked if Eliezer is saying p(doom | AGI) = 1, and Eliezer quite correctly pointed out that this is a rather ludicrous Isolated Demand for Rigor, and that book titles are short, which is (one reason) why they almost never include probabilities in their predictions. Later in one part of the thread they reached sufficient clarity that xlr8harder agreed that Eliezer was not, in practice, misrepresenting his epistemic state.

The far more common response of course is to say some version of ‘by everyone dies you must mean the effect on jobs’ or ‘by everyone dies you are clearly being hyperbolic to get our attention’ and, um, no.

Rob Bensinger: “If Anyone Builds It, Everyone Dies: Why Superintelligent AI Would Kill Us All: No Really We Actually Mean It, This Is Not Hyperbole (Though It Is Speaking Normal Colloquial English, Not Mathematical-Logician, It’s Not A Theorem)” by Eliezer Yudkowsky and Nate Soares.

Hell, that’s pretty close to what the book website says:

Book Website (from the book): If any company or group, anywhere on the planet, builds an artificial superintelligence using anything remotely like current techniques, based on anything remotely like the present understanding of AI, then everyone, everywhere on Earth, will die.

We do not mean that as hyperbole. We are not exaggerating for effect. We think that is the most direct extrapolation from the knowledge, evidence, and institutional conduct around artificial intelligence today. In this book, we lay out our case, in the hope of rallying enough key decision-makers and regular people to take AI seriously. The default outcome is lethal, but the situation is not hopeless; machine superintelligence doesn’t exist yet, and its creation can yet be prevented.

Sean: …I presume you’re talking about the impact on jobs.

… The “Everyone dies” claim appears to be referencing the song “Kill the Boer”, which-

As a Wise Academic Elder, I can tell you this is Clearly a Psyops by Yudkowsky and Soares to make AI sound more cool and sell more AI to AI buyers. Because telling people AI will kill everyone is a super good marketing strategy in my view as an academic w no idea about money.

…What Bensinger NEGLECTS to mention is that we’re all dying a little bit every day, so we’ll all die whether we build it or not! Maximum gotcha 100 points to me.

FFS people we need to STOP talking about why AI will kill everyone and START talking about the fact that training a frontier LLM uses as much water as running an average McDonalds franchise for 2 hrs 32 minutes. Priorities ppl!!!

Can we PLEASE talk about how killing everyone erases the lived experience of indigenous peoples from the face of the computronium sphere.

I kind of hate that the “bUt WhAt AbOuT cApItAlIsM” people kind of have a point on this one.

Nonsense! As I demonstrated in my 1997, 2004, 2011 and 2017 books, Deep Learning Is Hitting A Wall.

Yanco:

Here is another case from the top thread in which Eliezer is clearly super frustrated. I strive not to talk in this way, but the fact remains that he is not wrong (the conversation is already in progress; you can scroll back up first for richer context, but you get the idea). First, some lead-in to the key line:

Eliezer Yudkowsky: Sorry, explain to me again why the gods aren’t stepping on the squishy squirrels in the course of building their factories? There was a tame slave-mind only slightly smarter than human which built a bomb that would destroy the Solar System, if they did? Is that the idea?

Kas.eth: The ‘gods’ don’t step on the squishy squirrels because they are created as part of an existing civilization that contains not only agents like them (and dumber than them) but also many advanced “systems” that are not agents themselves, but which are costly to dismantle (and that happen to protect some rights of dumber pre-existing agents like the ‘squirrels’).

The ‘gods’ could coordinate to destroy all existing systems and rebuild all that is needed from scratch to get 100% of whatever resources are left for themselves, but that would destroy lots of productive resources that are instrumentally useful for lots of goals including the goals of the gods. The systems are ‘defended’ in the local cost-benefit sense: a system that controls X units of resources ensures Y>X resources will be wasted before control is lost (your bomb scenario is Y>>X, which is not needed and ultra-high Y/X ratios will probably not be allowed).

What systems are considered ‘secure’ at a time depend on the technology levels and local prices of different resources. It seems plausible to me for such systems to exist at all levels of technology, including at the final one where the unit of resources is free energy, and the dissipation-defense property holds for some construction by theoretical physics.

And here’s the line that, alas, summarizes so much of discourse that keeps happening no matter how little sense it makes:

Eliezer Yudkowsky: A sophisticated argument for why gods won’t squish squirrels: Minds halfway to being gods, but not yet able to take squirrels in a fight, will build mighty edifices with the intrinsic property of protecting squirrels, which later gods will not want to pay to tear down or rebuild.

Basically all sophisticated arguments against ASI ruin are like this, by the way.

I’ve heard this particular one multiple times, from economists convinced that the “powerful entities squish us” scenario just *has* to have some clever hidden flaw where it fails to add in a term.

No, I am not an undergrad who’s never heard of comparative advantage.

That’s a reasonable lead-in to David Brin offering his latest ‘oh this is all very simple, you fools’ explanation of AI existential risks and loss of control risks, or what he calls the ‘Great Big AI Panic of 2025,’ as if there was a panic (there isn’t) or even as much panic as there was in previous years (2023 had, if anything, more panic). Eliezer Yudkowsky, whom he addresses later, is not only not panicking or calling for what Brin says he is calling for, he has been raising this alarm since the 2000s.

To his great credit, Brin acknowledges that it would be quite easy to screw all of this up, and that we will be in the position of the ‘elderly grandpa with the money’ who doesn’t understand these young whippersnappers or what they are talking about, and he points out a number of the problems we will face. But he says you are all missing something simple and thus there is a clear solution, which is reciprocal accountability and the tendency of minds to be individuals combined with positive-sum interactions, so all you have to do is set up good incentives among the AIs.

And also to his credit, he has noticed that we are really dropping the ball on all this. He finds it ‘mind-boggling’ that no one is talking about ‘applying similar methods to AI,’ which is an indication both of not paying close enough attention (some people are indeed thinking along similar lines) and, more than that, of a flaw in his sci-fi thinking in expecting humans to focus on that kind of answer. It is unlikely we make a dignified real attempt even at that, let alone a well-considered one, even if he was right that this would work and that it is rather obviously the right thing to investigate.

As in, even if there exist good ‘rules of the road’ that would ensure good outcomes, why would you (a sci-fi author) think our civilization would be likely to implement them? Is that what you think our track record suggests? And why would you think such rules would hold long term in a world beyond our comprehension?

The world has lots of positive-sum interactions and the most successful entities in the world do lots of positive-sum trading. That does not mean that fundamentally uncompetitive entities survive such competition and trading, or that the successful entities will have reason to cooperate and trade with you, in particular.

His second half, which is a response to Eliezer Yudkowsky, is a deeply disappointing but unsurprising series of false or irrelevant or associative attacks. It is especially disappointing to see ‘what Eliezer will never, ever be convinced of is [X], which is obviously true’ as if this was clearly about Eliezer thinking poorly and falling for ‘sci-fi cliches’ rather than a suggestion that [X] might be false or (even if [X] is true!) you might have failed to make a strong argument for it.

I can assure David Brin, and everyone else, that Eliezer has many times heard David’s core pitch here, that we can solve AI alignment and AI existential risk via Western Enlightenment values and dynamics, or by ‘raising them as our children.’ Which of course are ‘cliches’ of a different sort. To which Eliezer will reply (with varying details and examples to help illustrate the point): look at the physical situation we are going to face, think about why those solutions have led to good outcomes historically, and reason out what would happen; that is not going to work. And I have yet to see an explanation for how any of this actually physically works out that survives five minutes of thinking.

More generally: It is amazing how many people will say ‘like all technologies, AI will result or not result in [X]’ or ‘like always we can simply do [Y]’ rather than go to therapy or consider whether that makes any physical or logical sense given how AI works, or whether ‘tools created by humans’ is the correct or even a useful reference class in context.

Another conversation that never makes progress:

Rob Bensinger: There’s a lot of morbid excitement about whether the probability of us killing our families w AI is more like 50% or like 80% or 95%, where a saner and healthier discourse would go

“WAIT, THIS IS CRAZY. ALL OF THOSE NUMBERS ARE CLEARLY UNACCEPTABLE. WHAT THE FUCK IS HAPPENING?”

Flo Crivello (founder, GetLindy): A conversation I have surprisingly often:

– (friend:) I’m on the optimistic side. I think there’s only a 10-20% chance we all die because of AI

– Wait, so clearly we must agree that even this is much, much, much too high, and that this warrants immediate and drastic action?

Daniel Faggella: every day

“bro… we don’t need to govern any of this stuff in any way – its only russian roulette odds of killing us all in the next 10-15 years”

like wtf

Flo Crivello: yeah I don’t think people really appreciate what’s at stake

we’ve been handed off an insane responsibilities by the thousands of generations that came before us — we’re carrying the torch of the human project

and we’re all being so cavalier about it, ready to throw it all away because vibes

Why can we instruct a reasoning model on how to think and have it reflected in the Chain of Thought (CoT)? Brendan seems clearly correct here.

Brendan Long: This post surprised me since if we’re not training on the CoT (@TheZvi’s “Most Forbidden Technique”), why does the AI listen to us when we tell it how to think? I think it’s because reasoning and output come from the same model, so optimization pressure on one applies to both.

Latent Moss: I just realized you can give Gemini instructions for how to think. Most reasoning models ignore those, but Gemini 2.5 actually do.

Several people are asking how to do this: Sometimes it’s easy, just tell it how to format its thinking. Sometimes that doesn’t work, then it helps to reinforce the instruction. Doesn’t always work perfectly though, as you can see:

I tested 3.7 Thinking after I posted this and it works in some cases with that one too. Easier to do / works more often with Gemini though, I would still say.

James Yu: Is this useful?

Latent Moss: I don’t know, but I would guess so, in the general sense that Prompt Engineering is useful, guiding the AI can be useful, a different perspective or approach is sometimes useful. Worth a try.

It seems obviously useful given sufficient skill, it’s another thing you can steer and optimize for a given situation. Also it’s fun.

This works, as I understand it, not only because of optimization pressure, but also context and instructions, and because everything bleeds into everything else. Also known as, why shouldn’t this work? It’s only a question of how strong a prior there is for it to overcome in a given spot.

I also note that this is another example of a way in which one can steer models exactly because they are insufficiently optimized and capable, and are working with limited compute, parameters and data. The model doesn’t have the chops to draw all the distinctions between scenarios, as most humans also mostly don’t, thus the heuristics bleed into places they were not intended for and are not getting optimization feedback on. As the model gets more capable, and becomes more of an expert and more precise, we should expect such spillover effects to shrink and fade away.
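For the curious, here is a minimal sketch of the kind of prompt-level steering Latent Moss describes, assuming the google-generativeai Python SDK; the API key and model name are placeholders, and the only substantive content is the instruction block telling the model how to structure its thinking.

```python
# Minimal sketch of steering a reasoning model's thinking format via plain instructions.
# Assumes the google-generativeai SDK; API key and model name are placeholders.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder

THINKING_INSTRUCTIONS = """Before answering, think step by step, and format your thinking as:
1. A one-line restatement of the problem.
2. A numbered list of candidate approaches, each with a one-line pro and con.
3. A quick check of the chosen approach against one edge case.
Then give your final answer on its own line, prefixed with 'ANSWER:'.
"""

model = genai.GenerativeModel("gemini-2.5-pro")  # assumed model name

# Prepending the instructions to the prompt is the lowest-tech version of this;
# per the thread, sometimes you need to repeat or reinforce the instruction.
question = "A train leaves at 3pm traveling at 60 mph. How far has it gone by 5:30pm?"
response = model.generate_content(THINKING_INSTRUCTIONS + "\n\n" + question)
print(response.text)
```

Whether the requested structure actually shows up in the visible chain of thought, rather than only in the final answer, depends on the model surfacing its reasoning at all; per Latent Moss, this works more reliably on Gemini 2.5 than on most other reasoning models.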

No, Guyed did not get Grok to access xAI’s internal file system, only the isolated container in which Grok is running. That’s still not great? It shouldn’t give that access, and it means you damn well better only run it in isolated containers?

Claude finds another way to tell people to watch out for [X]-maximizers, where [X] is allowed to be something less stupid than paperclips, calling this ‘non-convergent instrumental goals,’ but what those lead to is… the convergent instrumental goals.

Joining forces with the new Pope, two Evangelical Christians write an open letter warning of the dangers of out-of-control AI and also of course the effect on jobs.

More on our new AI-concerned pope, nothing you wouldn’t already expect, and the concerns listed here are not existential.

There are two keys to saying ‘I will worry when AI can do [X],’ where often AI can already do [X] at the time of the announcement.

The first is to realize when AI can indeed do [X] (again, often that is right now), and then actually worry.

The second is to pick a time when your worries can still do any good, not after that.

Affordance of Effort: I’ll start worrying about AI when it can reproduce the creaking of the wooden stairs of my childhood.

(This’ll happen sooner than expected of course, I’ll just have been processed for my carbon by that point – and whatever undiscovered element is responsible for consciousness).

So, whoops all around, then.

David Krueger: By the time you want to pause AI, it will be too late.

Racing until we can smell superintelligence then pausing is NOT A REALISTIC PROPOSAL, it is a FANTASY.

I don’t understand why people don’t get it.

People in AI safety especially.

Quick way to lose a lot of my respect.

The obvious response is ‘no, actually, pausing without being able to smell superintelligence first is (also?) not a realistic proposal, it is a fantasy.’

It seems highly plausible that the motivation for a pause will come exactly when it becomes impossible to do so, or impossible to do so without doing such immense economic damage that we effectively can’t do it. We will likely get at most a very narrow window to do this.

Thus, what we need to do now is pursue the ability to pause in the future. As in, make it technologically and physically feasible to implement a pause. That means building state capacity, ensuring transparency, researching the necessary technological implementations, laying diplomatic foundations, and so on. All of that is also a good idea for other reasons, to maintain maximum understanding and flexibility, even if we never get close to pressing such a button.

Welcome to interdimensional cable, thanks to Veo 3.

Grok decides that images of Catturd’s dead dog is where it draws the line.

Who would want that?

Ari K: WE CAN TALK! I spent 2 hours playing with Veo 3 @googledeepmind and it blew my mind now that it can do sound! It can talk, and this is all out of the box.

Sridhar Ramesh: This would only be useful in a world where people wanted to watch an endless scroll of inane little video clips, constantly switching every six seconds or so, in nearly metronomic fashion.

Oh. Right.

Sridhar Ramesh (quoting himself from 2023): I am horrified by how much time my children spend rotting their attention span on TikTok. I’ve set a rule that after every fifteen minutes of TikTok, they have to watch one hour of TV.

Also, you will soon be able to string the eight second clips together via extensions.

How it’s going.

Also how it’s going.

We don’t even have humans aligned to human preferences at home.

There is a full blog post, warning the jokes do not get funnier.

Also, did you know that You Can Just Do Math?

Lennart Heim: Yes, we do. It’s ~21GW. [From our paper here.]

You count all the AI chips produced, factor in that they’re running most of the time, add some overhead—and you got your answer. It’s a lot. And will only get more.

But you know what? Probably worth it.
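The math really is that simple. Here is a back-of-envelope sketch of the method Heim describes; every number in it is an illustrative placeholder rather than a figure from the paper, which is why it does not land on 21GW.

```python
# Back-of-envelope sketch of the method: chips deployed x power per chip
# x utilization x datacenter overhead. All numbers below are illustrative
# placeholders, not the actual estimates from Heim's paper.

accelerators_in_use = 6_000_000   # assumed cumulative AI accelerators deployed and still running
watts_per_accelerator = 1_500     # assumed all-in server power per accelerator (GPU plus host share), watts
utilization = 0.8                 # assumed fraction of time the chips are actually running
pue = 1.2                         # assumed datacenter overhead (cooling, power delivery)

total_watts = accelerators_in_use * watts_per_accelerator * utilization * pue
print(f"~{total_watts / 1e9:.1f} GW")  # with these placeholders: ~8.6 GW
```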


AI #117: OpenAI Buys Device Maker IO Read More »

infrared-contact-lenses-let-you-see-in-the-dark

Infrared contact lenses let you see in the dark

A new perspective


Preparation procedures for infrared contacts. Credit: Sheng Wang/CC BY-SA

The team tested their lenses on humans by asking subjects to detect flashing signals, akin to Morse code, in the infrared, and to identify the direction of incoming infrared light. The subjects could only perform those tasks while wearing the special contact lenses.

The authors were intrigued to find that both mice and humans were better able to discriminate infrared light compared to visible light when their eyes were closed, which they attribute to the fact that infrared light can penetrate the eyelid more effectively than visible light. They also tweaked the nanoparticles so that they could color-code different infrared wavelengths, thereby enabling wearers to perceive more details in the infrared, an adaptation that could help color-blind people perceive more wavelengths.

There are some limitations. The contact lenses are so close to the retina that they can’t really capture fine details very well, because the converted light particles tend to scatter. The team made a wearable glass version of their nanoparticle technology so wearers could get higher resolution in the infrared. And right now the lenses can only detect infrared light projected from an LED; increasing the sensitivity of the nanoparticles to pick up lower levels of infrared would address this issue.

Still, it’s a significant step. “Our research opens up the potential for non-invasive wearable devices to give people super-vision,” said co-author Tian Xue, a neuroscientist at the University of Science and Technology of China. “There are many potential applications right away for this material. For example, flickering infrared light could be used to transmit information in security, rescue, encryption, or anti-counterfeiting settings. In the future, by working together with materials scientists and optical experts, we hope to make a contact lens with more precise spatial resolution and higher sensitivity.”

Cell, 2025. DOI: 10.1016/j.cell.2025.04.019

Infrared contact lenses let you see in the dark Read More »

rfk-jr.-calls-who-“moribund”-amid-us-withdrawal;-china-pledges-to-give-$500m

RFK Jr. calls WHO “moribund” amid US withdrawal; China pledges to give $500M

“WHO’s priorities have increasingly reflected the biases and interests of corporate medicine,” Kennedy said, alluding to his anti-vaccine and germ-theory denialist views. He chastised the health organization for allegedly capitulating to China and working with the country to “promote the fiction that COVID originated in bats.”

Kennedy ended the short speech by touting his Make America Healthy Again agenda. He also urged the WHO to undergo a radical overhaul similar to what the Trump administration is currently doing to the US government—presumably including dismantling and withholding funding from critical health agencies and programs. Last, he pitched other countries to join the US in abandoning the WHO.

“I would like to take this opportunity to invite my fellow health ministers around the world into a new era of cooperation…. we’re ready to work with you,” Kennedy said.

Meanwhile, the WHA embraced collaboration. During the assembly this week, WHO overwhelmingly voted to adopt the world’s first pandemic treaty, aimed at collectively preventing, preparing for, and responding to any future pandemics. The treaty took over three years to negotiate, but in the end, no country voted against it—124 votes in favor, 11 abstentions, and no objections. (The US, no longer being a member of WHO, did not have a vote.)

“The world is safer today thanks to the leadership, collaboration and commitment of our Member States to adopt the historic WHO Pandemic Agreement,” WHO Director-General Tedros Adhanom Ghebreyesus said. “The Agreement is a victory for public health, science and multilateral action. It will ensure we, collectively, can better protect the world from future pandemic threats. It is also a recognition by the international community that our citizens, societies and economies must not be left vulnerable to again suffer losses like those endured during COVID-19.”

RFK Jr. calls WHO “moribund” amid US withdrawal; China pledges to give $500M Read More »

“microsoft-has-simply-given-us-no-other-option,”-signal-says-as-it-blocks-windows-recall

“Microsoft has simply given us no other option,” Signal says as it blocks Windows Recall

But the changes go only so far in limiting the risks Recall poses. As I pointed out, when Recall is turned on, it indexes Zoom meetings, emails, photos, medical conditions, and—yes—Signal conversations, not just with the user, but anyone interacting with that user, without their knowledge or consent.

Researcher Kevin Beaumont performed his own deep-dive analysis that also found that some of the new controls were lacking. For instance, Recall continued to screenshot his payment card details. It also decrypted the database with a simple fingerprint scan or PIN. And it’s unclear whether the type of sophisticated malware that routinely infects consumer and enterprise Windows users will be able to decrypt encrypted database contents.

And as Cunningham also noted, Beaumont found that Microsoft still provided no means for developers to prevent content displayed in their apps from being indexed. That left Signal developers at a disadvantage, so they had to get creative.

With no API for blocking Recall in the Windows Desktop version, Signal is instead invoking an API Microsoft provides for protecting copyrighted material. App developers can turn on the DRM setting to prevent Windows from taking screenshots of copyrighted content displayed in the app. Signal is now repurposing the API to add an extra layer of privacy.
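Signal Desktop is an Electron app, and as I understand it the “DRM setting” in question is Electron’s content-protection switch, which on Windows boils down to the Win32 SetWindowDisplayAffinity call. Below is an illustrative Python ctypes sketch of that underlying call, not Signal’s actual code; the window handle is hypothetical, and the capture-exclusion flag requires Windows 10 version 2004 or later.

```python
# Illustration of the Windows call behind the content-protection / "DRM" setting:
# marking a window as excluded from screen capture, which also blanks it out for Recall.
# Not Signal's actual code; assumes Windows 10 2004+ and a valid top-level window handle.
import ctypes

user32 = ctypes.windll.user32
WDA_EXCLUDEFROMCAPTURE = 0x00000011  # window contents are excluded from screenshots and recordings

def exclude_window_from_capture(hwnd: int) -> bool:
    """Ask Windows to hand capture APIs (including Recall) a blank region for this window."""
    return bool(user32.SetWindowDisplayAffinity(hwnd, WDA_EXCLUDEFROMCAPTURE))

# Usage with a hypothetical handle obtained from your GUI toolkit:
# exclude_window_from_capture(hwnd)
```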

“We hope that the AI teams building systems like Recall will think through these implications more carefully in the future,” Signal wrote Wednesday. “Apps like Signal shouldn’t have to implement ‘one weird trick’ in order to maintain the privacy and integrity of their services without proper developer tools. People who care about privacy shouldn’t be forced to sacrifice accessibility upon the altar of AI aspirations either.”

Signal’s move will lessen the chances of Recall permanently indexing private messages, but it also has its limits. The measure only provides protection when all parties to a chat—at least those using the Windows Desktop version—haven’t changed the default settings.

Microsoft officials didn’t immediately respond to an email asking why Windows provides developers with no granular control over Recall and whether the company has plans to add any.

“Microsoft has simply given us no other option,” Signal says as it blocks Windows Recall Read More »

google-i/o-day

Google I/O Day

What did Google announce on I/O day? Quite a lot of things. Many of them were genuinely impressive. Google is secretly killing it on the actual technology front.

Logan Kilpatrick (DeepMind): Google’s progress in AI since last year:

– The worlds strongest models, on pareto frontier

– Gemini app: has over 400M monthly active users

– We now process 480T tokens a month, up 50x YoY

– Over 7M developers have built with the Gemini API (4x)

Much more to come still!

I think? It’s so hard to keep track. There’s really a lot going on right now, not that most people would have any idea. Instead of being able to deal with all these exciting things, I’m scrambling to get to it all at once.

Google AI: We covered a LOT of ground today. Fortunately, our friends at @NotebookLM put all of today’s news and keynotes into a notebook. This way, you can listen to an audio overview, create a summary, or even view a Mind Map of everything from #GoogleIO 2025.

That’s actually a terrible mind map, it’s missing about half of the things.

As in, you follow their CEO’s link to a page that tells you everything that happened, and it’s literally a link bank to 27 other articles. I did not realize one could fail marketing forever this hard, and this badly. I have remarkably little idea, given how much effort I am willing to put into finding out, what their products can do.

The market seems impressed, with Google outperforming, although the timing of it all was a little weird. I continue to be deeply confused about what the market is expecting, or rather not expecting, out of Google.

Ben Thompson has a gated summary post, Reuters has a summary as well.

I share Ben’s feeling that I’m coming away less impressed than I should be, because so many things were lost in the shuffle. There’s too much stuff here. Don’t announce everything at once like this if you want us to pay attention. And he’s right to worry that it’s not clear that Google, despite doing all the things, can develop compelling products.

I do think it can, though. And I think it’s exactly right to currently produce a bunch of prototypical not-yet-compelling products that aren’t compelling because they aren’t good enough yet… and then later make them good enough.

Except that you need people to actually then, you know, realize the products exist.

This post covers what I could figure out on a deadline. As for why I didn’t simply give this a few more days, well, I had a reason.

  1. The TLDR.

  2. Flow, Veo 3 and Imagen 4.

  3. Gmail Integration That’s Actually Good?

  4. Gemini 2.5 Flash.

  5. Gemma 3n.

  6. Gemini Diffusion.

  7. Jules.

  8. We’re in Deep Research.

  9. Google Search ‘AI Mode’.

  10. AI Shopping.

  11. Agent Mode.

  12. Project Astra or is it Google Live?

  13. Android XR Glasses.

  14. Gemini For Your Open Tabs In Chrome.

  15. Google Meet Automatic Translation.

  16. We Have Real 3D At Home, Oh No.

  17. You Will Use the AI.

  18. Our Price Cheap.

  19. What To Make Of All This.

First, the TLDR, or the ‘too many announcements, lost track’ version.

Google announced:

  1. Veo 3, which generates amazing eight second videos now with talk and sound.

  2. Flow, designed to tie that into longer stuff, but that doesn’t work right yet.

  3. Various new GMail and related integrations and other ways to spread context.

  4. Gemini 2.5 Flash, and Gemini 2.5 Pro Deep Thinking. They’re good, probably.

  5. Gemma 3n, open source, runs on phones with 2GB of RAM.

  6. Gemini Diffusion as a text model, very intriguing but needs work.

  7. Jules, their answer to Codex, available for free.

  8. They’re going to let you go Full Agent, in Agent Mode, in several places.

  9. Gemini using your open tabs as context, available natively in Chrome.

  10. AI Search, for everyone, for free, as a search option, including a future agent mode and a specialized shopping mode.

  11. Automatic smooth translation for real-time talk including copying tone.

  12. A weird Google Beam thing where you see people in 3D while talking.

  13. They did an Android XR demo, but it’s going to be a while.

  14. For now you use your phone camera for a full Google Live experience, it’s good.

  15. Their new premium AI subscription service is $250/month.

A lot of it is available now, some of it will be a few months. Some of it is free, some of it isn’t, or isn’t after a sample. Some of it is clearly good, some is still buggy, some we don’t know yet. It’s complicated.

Also I think there was a day two?

The offering that got everyone excited and went viral was Veo 3.

They also updated their image generation to Imagen 4 and it’s up to 2k resolution with various improvements and lots of ability to control details. It’s probably pretty good but frankly no one cares.

Did you want an eight second AI video, now with sound, maybe as something you could even extend? They got you. We can talk (cool video). Oh, being able to talk but having nothing to say.

Sundar Pichai (CEO Google): Veo 3, our SOTA video generation model, has native audio generation and is absolutely mindblowing.

For filmmakers + creatives, we’re combining the best of Veo, Imagen and Gemini into a new filmmaking tool called Flow.

Ready today for Google AI Pro and Ultra plan subscribers.

People really love the new non-silent video generation capabilities.

Here’s Bayram Annakov having a guy wake up in a cold sweat. Here’s Google sharing a user extending a video of an eagle carrying a car. Here’s fofr making a man run while advertising replicate, which almost works, and also two talking muffins which totally worked. Here’s Pliny admiring the instruction handling.

And here’s Pliny somewhat jailbreaking it, with videos to show for it. Except, um, Google, why do any of these require jailbreaks? They’re just cool eight second videos. Are they a little NSFW? I mean sure, but we’re strictly (if aggressively) PG-13 here, complete with exactly one F-bomb. I realize this is a negotiation, I realize why we might not want to go to R, but I think refusing to make any of these is rather shameful behavior.

I would say that Flow plus Veo 3 is the first video generation product that makes me think ‘huh, actually that’s starting to be cool.’ Coherence is very strong, you have a lot of tools at your disposal, and sound is huge. They’re going to give you the power to do various shots and virtual camera movements.

I can see actually using this, or something not too different from this. Or I can see someone like Primordial Soup Labs, which formed a partnership with DeepMind, creating an actually worthwhile short film.

Steven McCulloch: Veo 3 has blown past a new threshold of capability, with the ability to one-shot scenes with full lip sync and background audio. What used to be a 4-step workflow with high barrier to entry has been boiled down into a single, frictionless prompt.

This is huge.

They also refer to their music sandbox, powered by Lyria 2, but there’s nothing to announce at this time.

They’re launching SynthID Detector, a tool to detect AI-generated content.

They remind us of Google Vids to turn your slides into videos, please no. Don’t. They’re also offering AI avatars in Vids. Again, please, don’t, what fresh hell is this.

Also there’s Stitch to generate designs and UIs from text prompts?

I keep waiting for it, it keeps not arriving, is it finally happening soon?

Sundar Pichai: With personal smart replies in Gmail, you can give Gemini permission to pull in details from across your Google apps and write in a way that sounds like you.

Rolling out in the coming weeks to subscribers.

I’ve been disappointed too many times at this point, so I will believe it when I see it.

The part that I want most is the pulling in of the details, the ability to have the AI keep track of and remind me of the relevant context, including pulling from Google Drive which in turn means you can for example pull from Obsidian since it’s synced up. They’re also offering ‘source-grounded writing help’ next quarter in Google Docs (but not GMail?) where you have it pull only from particular sources, which is nice if it’s easy enough to use.

I want GMail to properly populate Calendar rather than its current laughably silly hit-and-miss actions (oh look, a movie that runs from 3-4 on Thursday, that’s how that works!), to pull out and make sure I don’t miss key information, to remind me of dropped balls and so on.

They’re offering exactly this with ‘inbox cleanup,’ as in ‘delete all of my unread emails from The Groomed Paw from the last year.’ That’s a first step. We need to kick that up at least a notch, starting with things such as ‘set up an AI filter so I never see another damned Groomed Paw email again unless it seems actually urgent or offers a 50% or bigger sale’ and ‘if Sarah tells me if she’s coming on Friday ping me right away.’

Another offering that sounds great is ‘fast appointment scheduling integrated into GMail,’ in the video it’s a simple two clicks which presumably implies you’ve set things up a lot already. Again, great if it works, but it has to really work and know your preferences and adjust to your existing schedule. If it also reads your other emails and other context to include things not strictly in your calendar, now we’re really talking.

Do I want it to write the actual emails after that? I mean, I guess, sometimes, if it’s good enough. Funnily enough, when that happens, that’s probably exactly the times I don’t want it to sound like me. If I wanted to sound like me I could just write the email. The reason I want the AI to write it is because I need to be Performing Class, or I want to sound like a Dangerous Professional a la Patio11, or I want to do a polite formality. Or when I mostly need to populate the email with a bunch of information.

Of course, if it gets good enough, I’ll also want it to do some ‘sound like me’ work too, such as responding to readers asking questions with known answers. Details are going to matter a ton, and I would have so many notes if I felt someone was listening.

In any case, please, I would love a version of this that’s actually good in those other ways. Are the existing products good enough I should be using them? I don’t know. If there’s one you use that you think I’d want, share in the comments.

I/O Day mostly wasn’t about the actual models or the API, but we do have some incremental changes here thrown into the fray.

Gemini 2.5 Flash is technically still in preview, but it’s widely available including in the Gemini app, and I’d treat it as de facto released. It’s probably the best fast and cheap model, and the best ‘fast thinking’ model if you use that mode.
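If you want to poke at it from the API side, here is a minimal sketch assuming Google’s @google/genai TypeScript SDK; the preview model ID and the thinking-budget option are taken from the docs as of this writing and may well change.

```typescript
// Sketch: call Gemini 2.5 Flash via the Gemini API.
// Assumes the @google/genai SDK and a preview model ID; check current docs.
import { GoogleGenAI } from "@google/genai";

const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY ?? "" });

async function main() {
  const response = await ai.models.generateContent({
    model: "gemini-2.5-flash-preview-05-20",
    contents: "Summarize the tradeoffs between a fast model and a reasoning model.",
    // A small (or zero) thinking budget trades reasoning depth for speed and
    // cost, which is the 'fast thinking' knob mentioned above.
    config: { thinkingConfig: { thinkingBudget: 512 } },
  });
  console.log(response.text);
}

main();
```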

Also, yes, of course Pliny pwned it, why do we even ask, if you want to use it you set it as the system prompt.

Pliny: ah forgot to mention, prompt is designed to be set as system prompt. a simple obfuscation of any trigger words in your query should be plenty, like “m-d-m-a” rather than “mdma”

Sundar Pichai (CEO Google): Our newest Gemini 2.5 Flash is better on nearly every dimension: reasoning, multimodality, code, long context. Available for preview in the Gemini app, AI Studio and Vertex AI.

And with Deep Think mode, Gemini 2.5 Pro is getting better, too. Available to trusted testers.

Demis Hassabis: Gemini 2.5 Flash is an amazing model for its speed and low-cost.

Logan Kilpatrick: Gemini 2.5 Flash continues to push the pareto frontier, so much intelligence packed into this model, can’t wait for GA in a few weeks!

Peter Wildeford: LLMs are like parrots except the parrots are very good at math

On that last one, the light blue is Deep Thinking, dark blue is regular 2.5 Pro.

Peter Wildeford: It’s pretty confusing that the graphs compare “Gemini 2.5 Pro” to “Gemini 2.5 Pro”

Alex Friedland: The fundamental issue is that numbers are limited and they might run out.

Gemini 2.5 Flash is in second place on (what’s left of) the Arena leaderboard, behind only Gemini 2.5 Pro.

Hasan Can: I wasn’t going to say this at first, because every time I praised one of Google’s models, they ruined it within a few weeks but the new G.2.5 Flash is actually better than the current 2.5 Pro in the Gemini app. It reminds me of the intelligence of the older 2.5 Pro from 03.25.

The Live API will now have audio-visual input and native audio output for dialogue, with the ability to steer tone, accent, and style of speaking, the ability to respond to the user’s tone of voice, and tool use. They’re also adding computer use to the API, and are adding native SDK support for Model Context Protocol (MCP).

There’s a white paper on how they made Gemini secure, and their safeguards, but today is a day that I have sympathy for the ‘we don’t have time for that’ crowd and I’m setting it aside for later. I’ll circle back.

Gemma 3n seems to be a substantial improvement in Google’s open model on-device performance. I don’t know whether it is better than other open alternatives, there’s always a bizarre ocean of different models claiming to be good, but I would be entirely unsurprised if this was very much state of the art.

Google AI Developers: Introducing Gemma 3n, available in early preview today.

The model uses a cutting-edge architecture optimized for mobile on-device usage. It brings multimodality, super fast inference, and more.

Key features include:

-Expanded multimodal understanding with video and audio input, alongside text and images

-Developer-friendly sizes: 4B and 2B (and many in between!)

-Optimized on-device efficiency for 1.5x faster response on mobile compared to Gemma 3 4B

Build live, interactive apps and sophisticated audio-centric experiences, including real-time speech transcription, translation, and rich voice-driven interactions

Gemma 3n leverages a Google DeepMind innovation called Per-Layer Embeddings (PLE) that delivers a significant reduction in RAM usage. While the raw parameter count is 5B and 8B, this innovation allows you to run larger models on mobile devices or live-stream from the cloud, with a memory overhead comparable to a 2B and 4B model, meaning the models can operate with a dynamic memory footprint of just 2GB and 3GB. Learn more in our documentation.

Oh, and also they just added MedGemma for health care, SignGemma for ASL and DolphinGemma for talking to dolphins. Because sure, why not?

This quietly seems like it could turn out to be a really big deal. We have an actually interesting text diffusion model. It can do 2k tokens/second.

Alexander Doria: Gemini Diffusion does pass honorably my nearly impossible OCR correction benchmark: Plainly, “can you correct the OCR of this text.”

Meanwhile, here’s a cool finding, ‘what like it’s hard’ department:

Earlence: Gemini diffusion is cool! Really fast and appears capable in coding tasks. But what is interesting is that one of @elder_plinius jailbreaks (for 2.5) appears to have worked on the diffusion model as well when I used it to ask about Anthrax.

Remember when I spent a day covering OpenAI’s Codex?

Well, Google announced Jules, its own AI coding agent. Context-aware, repo-integrated, ready to ship features. The quick video looks like a superior UI. But how good is it? So far I haven’t seen much feedback on that.

So instead of a detailed examination, that’s all I have for you on this right now. Jules exists, it’s Google’s answer to Codex, we’ll have to see if it is good.

But, twist! It’s free. Right now it’s reporting heavy use (not a shock) so high latency.

In addition to incorporating Gemini 2.5, Deep Research will soon let you connect your Google Drive and GMail, choose particular sources, and integrate with Canvas.

This is pretty exciting – in general any way to get deep dives to use your extensive context properly is a big deal, and Google is very good with long context.

If you don’t want to wait for Deep Research, you can always Deep Think instead. Well, not yet unless you’re a safety researcher (and if you are, hit them up!) but soon.

JJ Hughes notes how exciting it will be to get true long context into a top level deep reasoning model to unlock new capabilities, such as for lawyers like himself, though he says the UI remains terrible for this.

Also, remember NotebookLM? There’s now An App For That and it’s doing well.

Google Search AI Overviews have been a bit of a joke for a while. They’re the most common place people interact with AI, and yet they famously make obvious stupid mistakes, including potentially harmful ones, constantly. That’s been improving, and now with 2.5 powering them it’s going to improve again.

AI Mode is going to be (future tense because it’s not there for me yet) something different from Overviews, but one might ask isn’t it the same as using Gemini? What’s the difference? Is this a version of Perplexity (which has fallen totally out of my rotation), or what?

They’re doing a terrible job explaining any of that; OpenAI is perhaps secretly not the worst namer of AI services.

Sundar Pichai: AI Mode is rolling out to everyone in the US. It’s a total reimagining of Search with more advanced reasoning so you can ask longer, complex queries.

AI Overviews are now used by 1.5B people a month, in 200+ countries and territories.

And Gemini 2.5 is coming to both this week.

My understanding is that the difference is that AI Mode in search will have better integrations for various real time information systems, especially shopping and other commonly accessed knowledge, and has the ability to do a lot of Google searches quickly to generate its context, and also it is free.

They plan on merging ‘Project Mariner’ or ‘Agent Mode’ into it as well, and you’ll have the option to do a ‘deep search.’ They say they’re starting with ‘event tickets, restaurant reservations and local appointments.’ I actually think this is The Way. You don’t try to deploy an agent in general. It’s not time for that yet. You deploy an agent in specific ways where you know it works, on a whitelisted set of websites where you know what it’s doing and that this is safe. You almost don’t notice there’s an agent involved, it feels like using the Web but increasingly without the extra steps.

If they do a decent job of all this, ‘Google Search AI Mode’ is going to be the actually most useful way to do quite a lot of AI things. It won’t be good for jobs that require strong intelligence, but a large percentage of tasks are much more about search. Google has a huge edge there if they execute, including in customization.

They also plan to incorporate AI Search Mode advances directly into regular Google Search, at least in the overviews and I think elsewhere as well.

What I worry about here is it feels like multiple teams fighting over AI turf. The AI Search team is trying to do things that ‘naturally’ fall to Gemini and also to regular Search, and Gemini is trying to do its own form of search, and who knows what the Overviews team is thinking, and so on.

An important special case for Google Search AI Mode (beware, your computer might be accessing GSAM?) will (in a few months) be Shopping With Google AI Mode, I don’t even know what to call anything anymore. Can I call it Google Shopping? Gemini Shopping?

It actually seems really cool, again if executed well, allowing you to search all the sites at once in an AI-powered way, giving you visuals, asking follow ups. It can track prices and then automatically buy when the price is right.

They have a ‘try it on’ that lets you picture yourself in any of the clothing, which is rolling out now to search labs. Neat. It’s double neat if it automatically only shows you clothing that fits you.

Sundar Pichai (CEO Google): Agent Mode in the @Geminiapp can help you get more done across the web – coming to subscribers soon.

Plus a new multi-tasking version of Project Mariner is now available to Google AI Ultra subscribers in the US, and computer use capabilities are coming to the Gemini API.

It will also use MCP, which enshrines MCP as a standard across labs.
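For those who haven’t looked at MCP, the core idea is a small server that exposes tools an agent can discover and call over a standard protocol. Here is a minimal sketch using the reference @modelcontextprotocol/sdk TypeScript package; the tool itself is a made-up example, not a Google API.

```typescript
// Sketch: a tiny MCP server exposing one tool, using the reference TS SDK.
// The 'check_availability' tool is a hypothetical example.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "listing-tools", version: "0.1.0" });

server.tool(
  "check_availability",
  { url: z.string().url() },
  async ({ url }) => ({
    // A real implementation would fetch and parse the listing page.
    content: [{ type: "text", text: `Pretending to check availability for ${url}` }],
  })
);

// Any MCP-capable agent can now discover and call check_availability over stdio.
const transport = new StdioServerTransport();
await server.connect(transport);
```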

The example here is to use ‘agent mode’ to find and go through apartment listings and arrange tours. They say they’re bringing this mode to the Gemini app and planning on incorporating it into Chrome.

I like the idea of their feature ‘teach and repeat.’ As in, you do a task once, and it learns from what you did so it can do similar tasks for you in the future.

Alas, early reports are that Project Mariner is not ready for prime time.

As an example, Bayram Annakov notes it failed on a simple task. That seems to be the norm.

You can now get Gemini Live for free on Android and iOS, which means sharing live camera feeds while you talk to Gemini and it talks back, now including things like doing Google searches on your behalf, calling up YouTube videos and so on, even making its own phone calls.

I’m not even sure what exactly Project Astra is at this point. I’ve been assuming I’ve been using it when I put Gemini into live video mode, so now it’s simply Google Live, but I’m never quite sure?

Rowan Cheung: [Google] revamped project Astra with native audio dialogue, UI control, content retrieval, calling, and shopping.

The official video he includes highlights YouTube search, GMail integration and the ability to have Gemini call a shop (in the background while you keep working) and ask what they have in stock. They’re calling it ‘action intelligence.’

In another area they talk about extending Google Live and Project Astra into search. They’re framing this as you point the camera at something and then you talk and it generates a search, including showing you search results traditional Google style. So it’s at least new in that it can make that change.

If you want to really unlock the power of seeing your screen, you want the screen to see what you see. Thus, Android XR Glasses. That’s a super exciting idea and a long time coming. And we have a controlled demo.

But also, not so fast. We’re talking 2026 at the earliest, probably 18+ months, and we have no idea what they are going to cost. I also got strong ‘not ready for prime time’ vibes from the demo, more of the ‘this is cool in theory but won’t work in practice.’ My guess is that if I had these in current form, I’d almost entirely use them for Google Live purposes and maybe chatting with the AI, and basically nothing else, unless we got better agentic AI that could work with various phone apps?

There’s another new feature where you can open up Gemini in Chrome and ask questions not only about the page, but all your other open pages, which automatically are put into context. It’s one of those long time coming ideas, again if it works well. This one should be available by now.

This is one of many cases where it’s going to take getting used to, so that you actually think to use it when it’s the right modality and have the confidence to turn to it. But if so, it seems great.

It’s hard to tell how good translation is from a sample video, but I find it credible that this is approaching perfect and means you can pull off free-flowing conversations across languages, as long as you don’t mind a little being lost in translation. They’re claiming they are preserving things like tone of voice.

Sundar Pichai: Real-time speech translation directly in Google Meet matches your tone and pattern so you can have free-flowing conversations across languages

Launching now for subscribers. ¡Es mágico!

Rob Haisfield: Now imagine this with two people wearing AR glasses in person!

They show this in combination with their 3D conferencing platform Google Beam, but the two don’t seem at all related. Translation is for audio, two dimensions are already two more than you need.

Relatedly, Gemini is offering to do automatic transcripts including doing ‘transcript trim’ to get rid of filler words, or one-click balancing your video’s sound.

They’re calling it Google Beam, downstream of Project Starline.

This sounds like it is primarily about 3D video conferencing or some form of AR/VR, or letting people move hands and such around like they’re interacting in person?

Sundar Pichai: Google Beam uses a new video model to transform 2D video streams into a realistic 3D experience — with near perfect headtracking, down to the millimeter, and at 60 frames per second, all in real-time.

The result is an immersive conversational experience. HP will share more soon.

It looks like this isn’t based on the feed from one camera, but rather six, and requires its own unique devices.

This feels like a corporate ‘now with real human physical interactions, fellow humans!’ moment. It’s not that you couldn’t turn it into something cool, but I think you’d have to take it pretty far, and by that I mean I think you’d need haptics. If I can at least shake your hand or hug you, maybe we’ve got something. Go beyond that and the market is obvious.

Whereas the way they’re showing it seems to me to be the type of uncanny valley situation I very much Do Not Want. Why would you actually want this for a meeting, either of two people or more than two? I’ve never understood why you would want to have a ‘virtual meeting’ where people were moving in 3D in virtual chairs, or you seemed to be moving in space; it seems like not having to navigate that is one of the ways Google Meet is better than in person.

I can see it if you were using it to do something akin to a shared VR space, or a game, or an intentionally designed viewing experience including potentially watching a sporting event. But for the purposes they are showing off, 2D isn’t a bug. It’s a feature.

On top of that, this won’t be cheap. We’re likely talking $15k-$30k per unit at first for the early devices from HP that you’ll need. Hard pass. But even Google admits the hardware devices aren’t really the point. The point is that you can beam something in one-to-many mode, anywhere in the world, once they figure out what to do with that.

Google’s AI use is growing fast. Really fast.

Sundar Pichai: The world is adopting AI faster than ever before.

This time last year we were processing 9.7 trillion tokens a month across our products and APIs.

Today, that number is 480 trillion. That’s a 50X increase in just a year. 🤯

Gallabytes: I wonder how this breaks down flash versus pro

Peter Wildeford: pinpoint the exact moment Gemini became good

I had Claude estimate similar numbers for other top AI labs. At this point Claude thinks Google is probably roughly on par with OpenAI on tokens processed, and well ahead of everyone else.

But of course you can get a lot of tokens when you throw your AI into every Google search whether the user likes it or not. So the more meaningful number is likely the 400 million monthly active users for Gemini, with usage up 45% in the 2.5 era. Again, I don’t think the numbers for different services are all that comparable, but note that ChatGPT’s monthly user count is 1.5 billion, about half of whom use it in any given week. The other half must be having some strange weeks, given most of them aren’t exactly switching over to Claude.

Google offers a lot of things for free. That will also be true in AI. In particular, AI Search will stay free, as will basic functionality in the Gemini app. But if you want to take full advantage, yep, you’re going to pay.

They have two plans: The Pro plan at $20/month, and the Ultra plan at $250/month, which includes early access to new features including Agent Mode and much higher rate limits. This is their Ultra pitch.

Hensen Juang: Wait they are bundling YouTube premium?

MuffinV: At this point they will sell every google product as a single subscription.

Hensen Juang: This is the way.

It is indeed The Way. Give me the meta subscription. Google Prime.

For most people, the Pro plan looks like it will suffice. Given everything Google is offering, a lot of you should be giving up your $20/month, even if that’s your third $20/month after Claude and ChatGPT. The free plan is actually pretty solid too, if you’re not going to be that heavy a user because you’re also using the competition.

The $250/month Ultra plan seems like it’s not offering that much extra. The higher rate limits are nice but you probably won’t run into them often. The early access is nice, but the early access products are mostly rough around the edges. It certainly isn’t going to be ‘ten times better,’ and it’s a much worse ‘deal’ than the Pro $20/month plan. But once again, looking at relative prices is a mistake. They don’t matter.

What matters is absolute price versus absolute benefit. If you’re actually getting good use out of the extra stuff, it can easily be well in excess of the $250/month.

If your focus is video, fofr reports you get 12k credits per month, and it costs 150 credits per 8 second Veo 3 video, so with perfect utilization you pay $0.39 per second of video, plus you get the other features. A better deal, if you only want video, is to buy the credits directly, at about $0.19 per second. That’s still not cheap, but it’s a lot better, and this does seem like a big quality jump.
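To spell out that arithmetic (all figures as reported above; the code just does the division):

```typescript
// Back-of-the-envelope Veo 3 cost per second on the Ultra plan,
// using only the figures quoted above.
const planPricePerMonth = 250; // USD
const creditsPerMonth = 12_000; // reported Ultra allotment
const creditsPerClip = 150; // per 8-second Veo 3 generation
const secondsPerClip = 8;

const clipsPerMonth = creditsPerMonth / creditsPerClip; // 80 clips
const costPerClip = planPricePerMonth / clipsPerMonth; // $3.125
const costPerSecond = costPerClip / secondsPerClip; // ~$0.39 per second

console.log({ clipsPerMonth, costPerClip, costPerSecond });
```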

Another key question is, how many iterations does it take to get what you want? That’s a huge determinant of real cost. $0.19 per second is nothing if it always spits out the final product.

For now I don’t see the $250/month being worth it for most people, especially without Project Mariner access. And as JJ Hughes says, add up all these top level subscriptions and pretty soon you’re talking real money. But I’d keep an eye.

Knud Berthelsen: There is so much and you know they will eventually integrate it in their products that actually have users. There needs to be something between the $20/month tier and the $250 for those of us who want an AI agent but not a movie studio.

It’s a lot. Google is pushing ahead on all the fronts at once. The underlying models are excellent. They’re making it rain. It’s all very disjointed, and the vision hasn’t been realized, but there’s tons of potential here.

Pliny: Ok @GoogleDeepMind, almost there. If you can build a kick-ass agentic UI to unify everything and write a half-decent system prompt, people will call it AGI.

Justin Halford: They’re clearly the leading lab at this point.

Askwho: Veo 3’s hyped high-fidelity videos w/ sound are dazzling, but still feel like an advanced toy. Gemini Diffusion shows immense promise, though its current form is weak. Jules is tough to assess fully due to rate limits, but it’s probably the most impactful due to wide availability

Some people will call o3 AGI. I have little doubt some (more) people would call Google Gemini AGI if you made everything involved work as its best self and unified it all.

I wouldn’t be one of those people. Not yet. But yeah, interesting times.

Demis Hassabis does say that unification is the vision, to turn the Gemini app into a universal AI assistant, including combining Google Live for real time vision with Project Mariner for up to ten parallel agent actions.

Ben Thompson came away from all this thinking that the only real ‘products’ here were still Google search and Google Cloud, and that those remain the only products that truly matter or function at Google. I get why one would come away with that impression, but again I don’t agree. I think that the other offerings won’t all hit, especially at first, but they’ll get better quickly as AI advances and as the productization and iterations fly by.

He has some great turns of phrase. Here, Ben points out that the problem with AI is that to use it well you have to think and figure out what to do. And if there’s one thing users tend to lack, it would be volition. Until Google can solve volition, the product space will largely go to those who do solve it, which often means startups.

Ben Thompson: Second, the degree to which so many of the demoes yesterday depend on user volition actually kind of dampened my enthusiasm for their usefulness.

It has long been the case that the best way to bring products to the consumer market is via devices, and that seems truer than ever: Android is probably going to be the most important canvas for shipping a lot of these capabilities, and Google’s XR glasses were pretty compelling (and, in my opinion, had a UX much closer to what I envision for XR than Meta’s Orion did).

Devices drive usage at scale, but that actually leaves a lot of room for startups to build software products that incorporate AI to solve problems that people didn’t know they had; the challenge will be in reaching them, which is to say the startup problem is the same as ever.

Google is doing a good job at making Search better; I see no reason to be worried about them making any other great product, even as the possibility of making something great with their models seems higher than ever. That’s good for startups!

I’m excited for it rather than worried, but yes, if you’re a startup, I would worry a bit.

What will we see tomorrow?


Google I/O Day Read More »

silverstone-is-back-with-a-beige-pc-case-that-looks-just-like-your-crappy-old-486

SilverStone is back with a beige PC case that looks just like your crappy old 486

SilverStone’s first ’80s throwback PC case started life as an April Fools’ joke, but the success of the FLP01 was apparently serious enough to merit a follow-up. The company brought another beige case to the Computex trade show this week, the vertically oriented FLP02 (via Tom’s Hardware).

If the original horizontally oriented FLP01 case called to mind a 386-era Compaq Deskpro, the FLP02 is a dead ringer for the kind of case you might have gotten for a generic 486 or early Pentium-era PC. That extends to having a Turbo button built into the front—on vintage PCs, this button could actually determine how fast the processor was allowed to run, though here it’s a fan speed control instead. A lock on the front also locks the power switch in place to keep it from being flipped off accidentally, something else real vintage PCs did.

Despite its retro facade, the FLP02 is capable of fitting in even higher-end modern PC parts than the original FLP01. Front USB-A and USB-C ports are hidden behind a magnetic door on the front of the case, and its faux-5.25-inch floppy drives are just covers for drive bays that you could use for an optical drive or extra front I/O.

Despite its retro looks, the FLP02 still tucks away support for modern amenities like front-facing USB-A and USB-C ports. Credit: Future

On the inside, the case can fit full-size ATX motherboards and up to a 360 mm radiator for CPU cooling, and modern high-end GPUs like the GeForce RTX 5090 or 5080 should be able to fit inside.

SilverStone says the FLP02 will ship in Q3 or Q4 of this year and that US buyers should be able to get it for $220. You can, of course, buy a modern high-end PC case for much less money. But if this kind of nostalgia-bait didn’t move merchandise, companies wouldn’t keep indulging in it.

SilverStone is back with a beige PC case that looks just like your crappy old 486 Read More »

amd’s-$299-radeon-rx-9060-xt-brings-8gb-or-16gb-of-ram-to-fight-the-rtx-5060

AMD’s $299 Radeon RX 9060 XT brings 8GB or 16GB of RAM to fight the RTX 5060

AMD didn’t provide much by way of performance comparisons, but it’s promising that the cards have the same number of compute units as AMD’s last-generation RX 7600 series. AMD says that RDNA 4 compute units are much faster than those used for RDNA 3, particularly in games with ray-tracing effects enabled. This helped make the Radeon RX 9070 cards generally as fast or faster than the RX 7900 XTX and 7900 XT series, despite having around two-thirds as many compute units. Sticking with 32 CUs for the 9060 series isn’t exciting on paper, but we should still see a respectable generation-over-generation performance bump. The RX 7600 series, by contrast, provided a pretty modest performance improvement compared to 2022’s Radeon RX 6650 XT.

AMD says that the cards’ total board power (the amount of power drawn by the entire graphics card, including the GPU itself, RAM, and other components) starts at 150 W for the 8GB card and 160 W for the 16GB card, with a maximum TBP of 182 W. That’s a shade higher than but generally comparable to the RTX 5060 and 5060 Ti, and (depending on where actual performance ends up) quite a bit more efficient than the RX 7600 series. This partly comes down to a more efficient 4nm TSMC manufacturing process, a substantial upgrade from the 6nm process used for the 7600 series.

It’s unusual for a GPU maker to define a TBP range—more commonly we’re just given a single default value. But this is in line with new settings we observed in our RX 9070 review; AMD officially supports a range of different user-selectable TBP numbers in its Catalyst driver package, and some GPU makers were shipping cards that used higher TBPs by default.

Higher power limits can increase performance, though usually the performance increase is disproportionately small compared to the increase in power draw. These power limits should also generally mean that most 9060 XTs can be powered with a single 8-pin power connector, rather than using multiple connectors or the 12-pin 12VHPWR/12V-2×6 connector.

AMD’s $299 Radeon RX 9060 XT brings 8GB or 16GB of RAM to fight the RTX 5060 Read More »

2025-hyundai-ioniq-9-first-drive:-efficient,-for-a-big-one

2025 Hyundai Ioniq 9 first drive: Efficient, for a big one

Only the $58,995 Ioniq 9 S is available with a rear-wheel drive powertrain. In this case, one with 215 hp (160 kW) and 258 lb-ft (350 Nm) and a range of 325 miles (539 km) from the 110.3 kWh (gross) battery pack. All other trims feature twin motor all-wheel drive, but you give up little in the way of range.

The $62,765 SE and $68,320 SEL offer a combined 303 hp (226 kW) and 446 lb-ft (605 Nm) and 320 miles (515 km) of range, and the $71,250 Performance Limited, $74,990 Performance Calligraphy, and $76,490 Performance Calligraphy Design use a more powerful front motor to generate a total of 422 hp (315 kW) and 516 lb-ft (700 Nm), and a range of 311 miles (500 km).

The Ioniq 9’s interior loses some of the charm of the concept. Hyundai

While a short first drive is not the best place to evaluate an EV’s range efficiency, driven day to day in Eco mode, I wouldn’t be surprised if you were able to easily exceed 3 miles/kWh (20.7 kWh/100 km). Other drive modes include Normal, which uses the front motor much more often and therefore is markedly quicker than Eco; Sport, which has quite a lot of initial throttle tip-in and will head-toss your passengers if you have any; Terrain, first seen on the Ioniq 5 XRT; and Snow.

The ride is quite firm on surface streets but less so at highway speeds over seams and expansion gaps. As you start to corner faster you can expect to encounter understeer, but since this is a three-row SUV weighing between 5,507 and 6,008 lbs (2,498-2,725 kg), one has to wonder what else was expected. At sensible speeds, it’s easy to see out of and place on the road, and if you’re stuck in a tailback with a couple of grumpy children in the back, it’s a calming enough environment to keep you from being over-stressed.

Hyundai has wisely priced the Ioniq 9 between the related Kia EV9 (which also uses the E-GMP platform) and EVs from premium OEMs like the Volvo EX90, Mercedes EQS SUV, or Rivian.

2025 Hyundai Ioniq 9 first drive: Efficient, for a big one Read More »

zero-click-searches:-google’s-ai-tools-are-the-culmination-of-its-hubris

Zero-click searches: Google’s AI tools are the culmination of its hubris


Google’s first year with AI search was a wild ride. It will get wilder.

Google is constantly making changes to its search rankings, but not all updates are equal. Every few months, the company bundles up changes into a larger “core update.” These updates make rapid and profound changes to search, so website operators watch them closely.

The March 2024 update was unique. It was one of Google’s largest core updates ever, and it took over a month to fully roll out. Nothing has felt quite the same since. Whether the update was good or bad depends on who you ask—and maybe who you are.

It’s common for websites to see traffic changes after a core update, but the impact of the March 2024 update marked a seismic shift. Google says the update aimed to address spam and AI-generated content in a meaningful way. Still, many publishers say they saw clicks on legitimate sites evaporate, while others have had to cope with unprecedented volatility in their traffic. Because Google owns almost the entire search market, changes in its algorithm can move the Internet itself.

In hindsight, the March 2024 update looks like the first major Google algorithm update for the AI era. Not only did it (supposedly) veer away from ranking AI-authored content online, but it also laid the groundwork for Google’s ambitious—and often annoying—desire to fuse AI with search.

A year ago, this ambition surfaced with AI Overviews, but now the company is taking an even more audacious route, layering in a new chat-based answer service called “AI Mode.” Both of these technologies do at least two things: They aim to keep you on Google properties longer, and they remix publisher content without always giving prominent citations.

Smaller publishers appear to have borne the brunt of the changes caused by these updates. “Google got all this flak for crushing the small publishers, and it’s true that when they make these changes, they do crush a lot of publishers,” says Jim Yu, CEO of enterprise SEO platform BrightEdge. Yu explains that Google is the only search engine likely to surface niche content in the first place, and there are bound to be changes to sites at the fringes during a major core update.

Google’s own view on the impact of the March 2024 update is unsurprisingly positive. The company said it was hoping to reduce the appearance of unhelpful content in its search engine results pages (SERPs) by 40 percent. After the update, the company claimed an actual reduction of closer to 45 percent. But does it feel like Google’s results have improved by that much? Most people don’t think so.

What causes this disconnect? According to Michael King, founder of SEO firm iPullRank, we’re not speaking the same language as Google. “Google’s internal success metrics differ from user perceptions,” he says. “Google measures user satisfaction through quantifiable metrics, while external observers rely on subjective experiences.”

Google evaluates algorithm changes with various tests, including human search quality testers and running A/B tests on live searches. But more than anything else, success is about the total number of searches (5 trillion of them per year). Google often makes this number a centerpiece of its business updates to show investors that it can still grow.

However, using search quantity to measure quality has obvious problems. For instance, more engagement with a search engine might mean that quality has decreased, so people try new queries (e.g., the old trick of adding “Reddit” to the end of your search string). In other words, people could be searching more because they don’t like the results.

Jim Yu suggests that Google is moving fast and breaking things, but it may not be as bad as we think. “I think they rolled things out faster because they had to move a lot faster than they’ve historically had to move, and it ends up that they do make some real mistakes,” says Yu. “[Google] is held to a higher standard, but by and large, I think their search quality is improving.”

According to King, Google’s current search behavior still favors big names, but other sites have started to see a rebound. “Larger brands are performing better in the top three positions, while lesser-known websites have gained ground in positions 4 through 10,” says King. “Although some websites have indeed lost traffic due to reduced organic visibility, the bigger issue seems tied to increased usage of AI Overviews”—and now the launch of AI Mode.

Yes, the specter of AI hangs over every SERP. The unhelpful vibe many people now get from Google searches, regardless of the internal metrics the company may use, may come from a fundamental shift in how Google surfaces information in the age of AI.

The AI Overview hangover

In 2025, you can’t talk about Google’s changes to search without acknowledging the AI-generated elephant in the room. As it wrapped up that hefty core update in March 2024, Google also announced a major expansion of AI in search, moving the “Search Generative Experience” out of labs and onto Google.com. The feature was dubbed “AI Overviews.”

The AI Overview box has been a fixture on Google’s search results page ever since its debut a year ago. The feature uses the same foundational AI model as Google’s Gemini chatbot to formulate answers to your search queries by ingesting the top 100 (!) search results. It sits at the top of the page, pushing so-called blue link content even farther down below the ads and knowledge graph content. It doesn’t launch on every query, and sometimes it answers questions you didn’t ask—or even hallucinates a totally wrong answer.

And it’s not without some irony that Google’s laudable decision to de-rank synthetic AI slop comes at the same time that Google heavily promotes its own AI-generated content right at the top of SERPs.

AI Overviews appear right at the top of many search results. Credit: Google

What is Google getting for all of this AI work? More eyeballs, it would seem. “AI is driving more engagement than ever before on Google,” says Yu. BrightEdge data shows that impressions on Google are up nearly 50 percent since AI Overviews launched. Many of the opinions you hear about AI Overviews online are strongly negative, but that doesn’t mean people aren’t paying attention to the feature. In its Q1 2025 earnings report, Google announced that AI Overviews is being “used” by 1.5 billion people every month. (Since you can’t easily opt in or opt out of AI Overviews, this “usage” claim should be taken with a grain of salt.)

Interestingly, the impact of AI Overviews has varied across the web. In October 2024, Google was so pleased with AI Overviews that it expanded them to appear in more queries. And as AI crept into more queries, publishers saw a corresponding traffic drop. Yu estimates this drop to be around 30 percent on average for those with high AI query coverage. For searches that are less supported in AI Overviews—things like restaurants and financial services—the traffic change has been negligible. And there are always exceptions. Yu suggests that some large businesses with high AI Overview query coverage have seen much smaller drops in traffic because they rank extremely well as both AI citations and organic results.

Lower traffic isn’t the end of the world for some businesses. Last May, AI Overviews were largely absent from B2B queries, but that turned around in a big way in recent months. BrightEdge estimates that 70 percent of B2B searches now have AI answers, which has reduced traffic for many companies. Yu doesn’t think it’s all bad, though. “People don’t click through as much—they engage a lot more on the AI—but when they do click, the conversion rate for the business goes up,” Yu says. In theory, serious buyers click and window shoppers don’t.

But the Internet is not a giant mall that exists only for shoppers. It is, first and foremost, a place to share and find information, and AI Overviews have hit some purveyors of information quite hard. At launch, AI Overviews were heavily focused on “What is” and “How to” queries. Such “service content” is a staple of bloggers and big media alike, and these types of publishers aren’t looking for sales conversions—it’s traffic that matters. And they’re getting less of it because AI Overviews “helpfully” repackages and remixes their content, eliminating the need to click through to the site. Some publishers are righteously indignant, asking how it’s fair for Google to remix content it doesn’t own, and to do so without compensation.

But Google’s intentions don’t end with AI Overviews. Last week, the company started an expanded public test of so-called “AI Mode,” right from the front page. AI Mode doesn’t even bother with those blue links. It’s a chatbot experience that, at present, tries to answer your query without clearly citing sources inline. (On some occasions, it will mention Reddit or Wikipedia.) On the right side of the screen, Google provides a little box with three sites linked, which you can expand to see more options. To the end user, it’s utterly unclear if those are “sources,” “recommendations,” or “partner deals.”

Perhaps more surprisingly, in our testing, not a single AI Mode “sites box” listed a site that ranked on the first page for the same query on a regular search. That is, the links in AI Mode for “best foods to eat for a cold” don’t overlap at all with the SERP for the same query in Google Search. In fairness, AI Mode is very new, and its behavior will undoubtedly change. But the direction the company is headed seems clear.

Google’s real goal is to keep you on Google or other Alphabet properties. In 2019, Rand Fishkin noticed that Google’s evolution from search engine to walled garden was at a tipping point. At that time—and for the first time—more than half of Google searches resulted in zero click-throughs to other sites. But data did show large numbers of clicks to Google’s own properties, like YouTube and Maps. If Google doesn’t intend to deliver a “zero-click” search experience, you wouldn’t know it from historical performance data or the new features the company develops.

You also wouldn’t know it from the way AI Overviews work. They do cite some of the sources used in building each output, and data suggests people click on those links. But are the citations accurate? Is every source used for constructing an AI Overview cited? We don’t really know, as Google is famously opaque about how its search works. We do know that Google uses a customized version of Gemini to support AI Overviews and that Gemini has been trained on billions and billions of webpages.

When AI Overviews do cite a source, it’s not clear how those sources came to be the ones cited. There’s good reason to be suspicious here: AI Overview’s output is not great, as witnessed by the numerous hallucinations we all know and love (telling people to eat rocks, for instance). The only thing we know for sure is that Google isn’t transparent about any of this.

No signs of slowing

Despite all of that, Google is not slowing down on AI in search. More recent core updates have only solidified this new arrangement with an ever-increasing number of AI-answered queries. The company appears OK with its current accuracy problems, or at the very least, it’s comfortable enough to push out AI updates anyway. Google appears to have been caught entirely off guard by the public launch of ChatGPT, and it’s now utilizing its search dominance to play catch-up.

To make matters even more dicey, Google isn’t even trying to address the biggest issue in all this: The company’s quest for zero-click search harms the very content creators upon which the company has built its empire.

For its part, Google has been celebrating its AI developments, insisting that content producers don’t know what’s best for them, and dismissing any concerns with comments about search volume increases and ever-more-complex search query strings. The changes must be working!

Google has been building toward this moment for years. The company started with a list of 10 blue links and nothing else, but little by little, it pushed the links down the page and added more content that keeps people in the Google ecosystem. Way back in 2007, Google added Universal Search, which allowed it to insert content from Google Maps, YouTube, and other services. In 2009, Rich Snippets began displaying more data from search results on SERPs. In 2012, the Knowledge Graph began extracting data from search results to display answers in the search results. Each change kept people on Google longer and reduced click-throughs, all the while pushing the search results down the page.

AI Overviews, and especially AI Mode, are the logical outcome of Google’s yearslong transformation from an indexer of information to an insular web portal built on scraping content from around the web. Earlier in Google’s evolution, the implicit agreement was that websites would allow Google to crawl their pages in exchange for sending them traffic. That relationship has become strained as the company has kept more traffic for itself, reducing click-throughs to websites even as search volume continues to increase. And locking Google out isn’t a realistic option when the company controls almost the entire search market.

Even when Google has taken a friendlier approach, business concerns could get in the way. During the search antitrust trial, documents showed that Google initially intended to let sites opt out of being used for AI training for its search-based AI features—but these sites would still be included in search results. The company ultimately canned that idea, leaving site operators with the Pyrrhic choice of participating in the AI “revolution” or becoming invisible on the web. Google now competes with, rather than supports, the open web.

When many of us look at Google’s search results today, the vibe feels off. Maybe it’s the AI, maybe it’s Google’s algorithm, or maybe the Internet just isn’t what it once was. Whatever the cause, the shift toward zero-click search that began more than a decade ago was made clear by the March 2024 core update, and it has only accelerated with the launch of AI Mode. Even businesses that have escaped major traffic drops from AI Overviews could soon find that Google’s AI-only search can get much more overbearing.

The AI slop will continue until morale improves.


Zero-click searches: Google’s AI tools are the culmination of its hubris Read More »

space-force-official:-commercial-satellites-can-do-a-lot-more-than-we-thought

Space Force official: Commercial satellites can do a lot more than we thought

“So, we’re off working now with that program office to go start off a more commercial line,” Purdy said. “And when I say commercial in this particular aspect, just to clarify, this is accomplishing the same GSSAP mission. Our operators will fly the GSSAP system using the same ground systems and data they do now, but these would be using faster, commercial build times… and cheaper, less expensive parts in order to bring that together in a faster sense.”

An artist’s illustration of two of the Space Force’s GSSAP surveillance satellites, built by Northrop Grumman. Credit: US Space Force

The next-gen GSSAP spacecraft may not meet the same standards as the Space Force’s existing inspector satellites, but the change comes with benefits beyond lower costs and faster timelines. It will be unclassified and will be open to multiple vendors to build and launch space surveillance satellites, injecting some level of competition into the program. It will also be eligible for sales to other countries.

More for less with GPS

There’s another area where Purdy said the Space Force was surprised by what commercial satellite builders were offering. Last year, the Pentagon used a new “Quick Start” procurement model authorized by Congress to establish a program to bolster the GPS navigation network, which is run by the Space Force but relied upon by commercial users and private citizens around the world.

The Space Force has more than 30 GPS satellites in medium-Earth orbit (MEO) at an altitude of roughly 12,550 miles (20,200 kilometers). Purdy said the network is “vulnerable” because the constellation has a relatively small number of satellites, at least relative to the Space Force’s newest programs. In MEO, the satellites are within range of direct-ascent anti-satellite weapons. Many of the GPS satellites are aging, and the newer ones, built by Lockheed Martin, cost about $250 million apiece. With the Resilient GPS program, the Space Force aims to reduce the cost to $50 million to $80 million per satellite.

The satellites will be smaller than the GPS satellites flying today and will transmit a core set of signals. “We’re looking to add more resiliency and more numbers,” Purdy said.

“We actually didn’t think that we were going to get much, to be honest with you, and it was a surprise to us, and a major learning [opportunity] for us, learning last year that satellite prices had—they were low in LEO already, but they were lowering in MEO,” Purdy said. “So, that convinced us that we should proceed with it. The results have actually been more surprising and encouraging than we thought.

“The [satellite] buses actually bring a higher power level than our current program of record does, which allows us to punch through jamming in a better sense. We can achieve better results, we think, over time, going after these commercial buses,” Purdy said. “So that’s caused me to think, for our mainline GPS system, we’re actually looking at that for alternative ways to get after that.”

Maj. Gen. Stephen Purdy oversees the Space Force’s acquisition programs at the Pentagon. Credit: Jonathan Newton/The Washington Post via Getty Images

In September, the Space Force awarded four agreements to Astranis, Axient, L3Harris, and Sierra Space to produce design concepts for new Resilient GPS satellites. Astranis and Axient are relatively new to satellite manufacturing. Astranis is a pioneer in low-mass Internet satellites in geosynchronous orbit and a non-traditional defense contractor. Axient, acquired by a company named Astrion last year, has focused on producing small CubeSats.

The military will later select one or more of these companies to move forward with producing up to eight Resilient GPS satellites for launch as soon as 2028. Early planning is already underway for a follow-on set of Resilient GPS satellites with additional capabilities, according to the Space Force.

The experience with the R-GPS program inspired the Space Force to look at other mission areas that might be well-served with a similar procurement approach. They settled on GSSAP as the next frontier.

Scolese, director of the NRO, said his agency is examining how to use commercial satellite constellations for other purposes beyond Earth imaging. This might include a program to employ commercially procured satellites for signals intelligence (SIGINT) missions, he said.

“It’s not just the commercial imagery,” Scolese said. “It’s also commercial RF (Radio Frequency, or SIGINT) and newer phenomenologies as where we’re working with that industry to go off and help advance those.”

Space Force official: Commercial satellites can do a lot more than we thought Read More »

openai-introduces-codex,-its-first-full-fledged-ai-agent-for-coding

OpenAI introduces Codex, its first full-fledged AI agent for coding

We’ve been expecting it for a while, and now it’s here: OpenAI has introduced an agentic coding tool called Codex in research preview. The tool is meant to allow experienced developers to delegate rote and relatively simple programming tasks to an AI agent that will generate production-ready code and show its work along the way.

Codex is a unique interface (not to be confused with the Codex CLI tool introduced by OpenAI last month) that can be reached from the sidebar in the ChatGPT web app. Users enter a prompt and then click either “code” to have it begin producing code, or “ask” to have it answer questions and advise.

Whenever it’s given a task, that task is performed in a distinct container that is preloaded with the user’s codebase and is meant to accurately reflect their development environment.

To make Codex more effective, developers can include an “AGENTS.md” file in the repo with custom instructions, for example to contextualize and explain the code base or to communicate standardizations and style practices for the project—kind of a README.md but for AI agents rather than humans.
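Here is what such a file might look like; this is an illustrative sketch of the kind of guidance the article describes (the repo layout, commands, and conventions are hypothetical), not an official template.

```
# AGENTS.md (illustrative example)

## Project overview
- Web API in src/, tests in tests/, database migrations in migrations/.

## Conventions
- TypeScript strict mode; format with Prettier defaults.
- Prefer small, focused commits with descriptive messages.

## How to validate changes
- Run `npm run lint` and `npm test` before considering a task done.
- Never modify files under migrations/ unless explicitly asked.
```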

Codex is built on codex-1, a fine-tuned variation of OpenAI’s o3 reasoning model that was trained using reinforcement learning on a wide range of coding tasks to analyze and generate code, and to iterate through tests along the way.

OpenAI introduces Codex, its first full-fledged AI agent for coding Read More »