Author name: Mike M.


Lawmaker: Trump’s Golden Dome will end the madness, and that’s not a good thing

“The underlying issue here is whether US missile defense should remain focused on the threat from rogue states and… accidental launches, and explicitly refrain from countering missile threats from China or Russia,” DesJarlais said. He called the policy of Mutually Assured Destruction “outdated.”

President Donald Trump speaks alongside Secretary of Defense Pete Hegseth in the Oval Office at the White House on May 20, 2025, in Washington, DC. President Trump announced his plans for the Golden Dome, a national ballistic and cruise missile defense system. Credit: Chip Somodevilla/Getty Images

Moulton’s amendment on nuclear deterrence failed to pass the committee in a voice vote, as did another Moulton proposal that would have tapped the brakes on developing space-based interceptors.

But one of Moulton’s amendments did make it through the committee. This amendment, if reconciled with the Senate, would prohibit the Pentagon from developing a privatized or subscription-based missile defense intercept capability. The amendment says the US military must own and operate such a system.

Ultimately, the House Armed Services Committee voted 55–2 to send the NDAA to a vote on the House floor. Lawmakers must then hash out the differences between the House version of the NDAA and the bill written in the Senate before sending the final text to the White House for President Trump to sign into law.

More questions than answers

The White House says the missile shield will cost $175 billion over the next three years. But that’s just to start. A network of space-based missile sensors and interceptors, as prescribed in Trump’s executive order, will eventually number thousands of satellites in low-Earth orbit. The Congressional Budget Office reported in May that the Golden Dome program may ultimately cost up to $542 billion over 20 years.

The problem with all of the Golden Dome cost estimates is that the Pentagon has not settled on an architecture. We know the system will consist of a global network of satellites with sensors to detect and track missile launches, plus numerous interceptors in orbit to take out targets in space and during their “boost phase” when they’re moving relatively slowly through the atmosphere.

The Pentagon will order more sea- and ground-based interceptors to destroy missiles, drones, and aircraft as they near their targets within the United States. All of these weapons must be interconnected with a sophisticated command and control network that doesn’t yet exist.

Will Golden Dome’s space-based interceptors use kinetic kill vehicles to physically destroy missiles targeting the United States? Or will the interceptors rely on directed energy weapons like lasers or microwave signals to disable their targets? How many interceptors are actually needed?

These are all questions without answers. Despite the lack of detail, congressional Republicans approved $25 billion for the Pentagon to get started on the Golden Dome program as part of the Trump-backed One Big Beautiful Bill Act. The bill passed Congress with a party-line vote last month.

Israel’s Iron Dome aerial defense system intercepts a rocket launched from the Gaza Strip on May 11, 2021. Credit: Jack Guez/AFP via Getty Images

Moulton earned a bachelor’s degree in physics and master’s degrees in business and public administration from Harvard University. He served as a Marine Corps platoon leader in Iraq and was part of the first company of Marines to reach Baghdad during the US invasion of 2003. Moulton ran for the Democratic presidential nomination in 2020 but withdrew from the race before the first primary contest.

The text of our interview with Moulton is published below. It is lightly edited for length and clarity.

Ars: One of your amendments that passed committee would prevent the DoD from using a subscription or pay-for-service model for the Golden Dome. What prompted you to write that amendment?

Moulton: There were some rumors we heard that this is a model that the administration was pursuing, and there was reporting in mid-April suggesting that SpaceX was partnering with Anduril and Palantir to offer this kind of subscription service where, basically, the government would pay to access the technology rather than own the system. This isn’t an attack on any of these companies or anything. It’s a reassertion of the fundamental belief that these are responsibilities of our government. The decision to engage an intercontinental ballistic missile is a decision that the government must make, not some contractors working at one of these companies.

Ars: Basically, the argument you’re making is that war-fighting should be done by the government and the armed forces, not by contractors or private companies, right?

Moulton: That’s right, and it’s a fundamental belief that I’ve had for a long time. I was completely against contractors in Iraq when I was serving there as a younger Marine, but I can’t think of a place where this is more important than when you’re talking about nuclear weapons.

Ars: One of the amendments that you proposed, but didn’t pass, was intended to reaffirm the nation’s strategy of nuclear deterrence. What was the purpose of this amendment?

Moulton: Let’s just start by saying this is fundamentally why we have to have a theory that forms a foundation for spending hundreds of billions of taxpayer dollars. Golden Dome has no clear design, no real cost estimate, and no one has explained how this protects or enhances strategic stability. And there’s a lot of evidence that it would make strategic stability worse because our adversaries would no longer have confidence in Mutual Assured Destruction, and that makes them potentially much more likely to initiate a strike or overreact quickly to some sort of confrontation that has the potential to go nuclear.

In the case of the Russians, it means they could activate their nuclear weapon in space and just take out our Golden Dome interceptors if they think we might get into a nuclear exchange. I mean, all these things are horrific consequences.

Like I said in our hearing, there are two explanations for Golden Dome. The first is that every nuclear theorist for the last 75 years was wrong, and thank God, Donald Trump came around and set us right because in his first administration and every Democratic and Republican administration, we’ve all been wrong—and really the future of nuclear deterrence is nuclear defeat through defense and not Mutually Assured Destruction.

The other explanation, of course, is that Donald Trump decided he wants the golden version of something his friend has. You can tell me which one’s more likely, but literally no one has been able to explain the theory of the case. It’s dangerous, it’s wasteful… It might be incredibly dangerous. I’m happy to be convinced that Golden Dome is the right solution. I’m happy to have people explain why this makes sense and it’s a worthwhile investment, but literally nobody has been able to do that. If the Russians attack us… we know that this system is not going to be 100 percent effective. To me, that doesn’t make a lot of sense. I don’t want to gamble on… which major city or two we lose in a scenario like that. I want to prevent a nuclear war from happening.

Several Chinese DF-5B intercontinental ballistic missiles, each capable of delivering up to 10 independently maneuverable nuclear warheads, are seen during a parade in Beijing on September 3, 2015. Credit: Xinhua/Pan Xu via Getty Images

Ars: What would be the way that an administration should propose something like the Golden Dome? Not through an executive order? What process would you like to see?

Moulton: As a result of a strategic review and backed up by a lot of serious theory and analysis. The administration proposes a new solution and has hearings about it in front of Congress, where they are unafraid of answering tough questions. This administration is a bunch of cowards who refuse to answer tough questions in Congress because they know they can’t back up their president’s proposals.

Ars: I’m actually a little surprised we haven’t seen any sort of architecture yet. It’s been six months, and the administration has already missed a few of Trump’s deadlines for selecting an architecture.

Moulton: It’s hard to develop an architecture for something that doesn’t make sense.

Ars: I’ve heard from several retired military officials who think something like the Golden Dome is a good idea, but they are disappointed in the way the Trump administration has approached it. They say the White House hasn’t stated the case for it, and that risks politicizing something they view as important for national security.

Moulton: One idea I’ve had is that the advent of directed energy weapons (such as lasers and microwave weapons) could flip the cost curve and actually make defense cheaper than offense, whereas in the past, it’s always been cheaper to develop more offensive capabilities rather than the defensive means to shoot at them.

And this is why the Anti-Ballistic Missile Treaty in the early 1970s was so effective, because there was this massive arms race where we were constantly just creating a new offensive weapon to get around whatever defenses our adversary proposed. The reason why everyone would just quickly produce a new offensive weapon before that treaty was put into place is because it was easy to do.

My point is that I’ve even thrown them this bone, and I’m saying, ‘Here, maybe that’s your reason, right?’ And they just look at me dumbfounded because obviously none of them are thinking about this. They’re just trying to be lackeys for the president, and they don’t recognize how dangerous that is.

Ars: I’ve heard a chorus of retired and even current active-duty military leaders say the same thing about directed energy weapons. You essentially can use one platform in space to take numerous laser shots at a missile instead of expending multiple interceptors for one kill.

Moulton: Yes, that’s basically the theory of the case. Now, my hunch is that if you actually did the serious analysis, you would determine that it still decreases strategic stability. So in terms of the overall safety and security of the United States, whether it’s directed energy weapons or kinetic interceptors, it’s still a very bad plan.

But I’m even throwing that out there to try to help them out here. “Maybe this is how you want to make your case.” And they just look at me like deer in the headlights because, obviously, they’re not thinking about the national security of the United States.

Ars: I also wanted to ask about the Space Force’s push to develop weapons to use against other satellites in orbit. They call these counter-space capabilities. They could be using directed energy, jamming, robotic arms, anti-satellite missiles. This could take many different forms, and the Space Force, for the first time, is talking more openly about these issues. Are these kinds of weapons necessary, in your view, or are they too destabilizing?

Moulton: I certainly wish we could go back to a time when the Russians and Chinese were not developing space weapons—or were not weaponizing space, I should say, because that was the international agreement. But the reality of the world we live in today is that our adversaries are violating that agreement. We have to be prepared to defend the United States.

Ars: Are there any other space policy issues on your radar or things you have concerns about?

Moulton: There’s a lot. There’s so much going on with space, and that’s the reason I chose this subcommittee, even though people would expect me to serve on the subcommittee dealing with the Marine Corps, because I just think space is incredibly important. We’re dealing with everything from promotion policy in the Space Force to acquisition reform to rules of engagement, and anything in between. There’s an awful lot going on there, but I do think that one of the most important things to talk about right now is how dangerous the Golden Dome could be.



Reports Of AI Not Progressing Or Offering Mundane Utility Are Often Greatly Exaggerated

In the wake of the confusions around GPT-5, this week had yet another round of claims that AI wasn’t progressing, or AI isn’t or won’t create much value, and so on. There were reports that one study in particular impacted Wall Street, and as you would expect it was not a great study. Situational awareness is not what you’d hope.

I’ve gathered related coverage here, to get it out of the way before whatever Google is teasing (Gemini 3.0? Something else?) arrives to potentially hijack our attention.

We’ll start with the MIT study on State of AI in Business, discuss the recent set of ‘AI is slowing down’ claims as part of the larger pattern, and then I will share a very good attempted explanation from Steven Byrnes of some of the ways economists get trapped into failing to look at what future highly capable AIs would actually do.

Chatbots and coding agents are clear huge wins. Over 80% of organizations have ‘explored or piloted’ them and 40% report deployment. The employees of the other 60% presumably have some news.

But we have a new State of AI in Business report that says that when businesses try to do more than that, ‘95% of businesses get zero return,’ although elsewhere they say ‘only 5% of custom enterprise AI tools reach production.’

From our interviews, surveys, and analysis of 300 public implementations, four patterns emerged that define the GenAI Divide:

  1. Limited disruption: Only 2 of 8 major sectors show meaningful structural change.

  2. Enterprise paradox: Big firms lead in pilot volume but lag in scale-up.

  3. Investment bias: Budgets favor visible, top-line functions over high-ROI back office.

  4. Implementation advantage: External partnerships see twice the success rate of internal builds.

These are early days. Enterprises have only had capacity to look for ways to slide AI directly into existing structures. They ask, ‘what, that we already do, can AI do for us?’ They especially ask ‘what can show clear measurable gains we can trumpet?’

It does seem reasonable to say that the ‘custom tools’ approach may not be doing so great, if the tools only reach deployment 5% of the time. They might have a high enough return that they still come out ahead, but that is a high failure rate if you actually fully scrap the other 95% and don’t learn from them. It seems like this is a skill issue?

The primary factor keeping organizations on the wrong side of the GenAI Divide is the learning gap: tools that don’t learn, integrate poorly, or fail to match workflows.

…

The 95% failure rate for enterprise AI solutions represents the clearest manifestation of the GenAI Divide. Organizations stuck on the wrong side continue investing in static tools that can’t adapt to their workflows, while those crossing the divide focus on learning-capable systems.

…

As one CIO put it, “We’ve seen dozens of demos this year. Maybe one or two are genuinely useful. The rest are wrappers or science projects.”

That sounds like the ‘AI tools’ that fail deserve the air quotes.

I also note that later they say custom built AI solutions ‘fail twice as often.’ That implies that when companies are wise enough to test solutions built externally, they succeed over 50% of the time.

There’s also a strange definition of ‘zero return’ here.

Just 5% of integrated AI pilots are extracting millions in value, while the vast majority remain stuck with no measurable P&L impact.

Tools like ChatGPT and Copilot are widely adopted. Over 80 percent of organizations have explored or piloted them, and nearly 40 percent report deployment. But these tools primarily enhance individual productivity, not P&L performance.

Issue a report where you call the 95% of projects that don’t have ‘measurable P&L impact’ failures, then wonder why no one wants to do ‘high-ROI back office’ upgrades.

Those projects are high ROI, but how do you prove the R on I?

Especially if you can’t see the ROI on ‘enhancing individual productivity’ because it doesn’t have this ‘measurable P&L impact.’ If you double the productivity of your coders (as an example), it’s true that you can’t directly point to [$X] that this made you in profit, but surely one can see a lot of value there.

They call it a ‘divide’ because it takes a while to see returns, after which you see a lot.

While most implementations don’t drive headcount reduction, organizations that have crossed the GenAI Divide are beginning to see selective workforce impacts in customer support, software engineering, and administrative functions.

In addition, the highest performing organizations report measurable savings from reduced BPO spending and external agency use, particularly in back-office operations.

Others cite improved customer retention and sales conversion through automated outreach and intelligent follow-up systems.

These early results suggest that learning-capable systems, when targeted at specific processes, can deliver real value, even without major organizational restructuring.

This all sounds mostly like a combination of ‘there is a learning curve that is barely started on’ with ‘we don’t know how to measure most gains.’

Also note the super high standard here. Only 22% of major sectors show ‘meaningful structural change’ at this early stage, and section 3 talks about ‘high adoption, low transformation.’

Or their ‘five myths about GenAI in the Enterprise’:

  1. AI Will Replace Most Jobs in the Next Few Years → Research found limited layoffs from GenAI, and only in industries that are already affected significantly by AI. There is no consensus among executives as to hiring levels over the next 3-5 years.

  2. Generative AI is Transforming Business → Adoption is high, but transformation is rare. Only 5% of enterprises have AI tools integrated in workflows at scale and 7 of 9 sectors show no real structural change.

  3. Enterprises are slow in adopting new tech → Enterprises are extremely eager to adopt AI and 90% have seriously explored buying an AI solution.

  4. The biggest thing holding back AI is model quality, legal, data, risk → What’s really holding it back is that most AI tools don’t learn and don’t integrate well into workflows.

  5. The best enterprises are building their own tools → Internal builds fail twice as often.

Most jobs within a few years is not something almost anyone is predicting in a non-AGI world. Present tense ‘transforming business’ is a claim I don’t remember hearing. I also hadn’t heard ‘the best enterprises are building their own tools,’ and it does not surprise me that rolling your own comes with much higher failure rates.

I would push back on #3. As always, slow is relative, and being ‘eager’ is very different from not being the bottleneck. ‘Explored buying an AI solution’ is very distinct from ‘adopting new tech.’

I would also push back on #4. The reason AI doesn’t yet integrate well into workflows is that the tools are not yet good enough. This also shows the mindset that the AI is being forced to ‘integrate into workflows’ rather than generating new workflows, another sign that they are slow in adopting new tech.

Users prefer ChatGPT for simple tasks, but abandon it for mission-critical work due to its lack of memory. What’s missing is systems that adapt, remember, and evolve, capabilities that define the difference between the two sides of the divide.

I mean ChatGPT does now have some memory and soon it will have more. Getting systems to remember things is not all that hard. It is definitely on its way.

The more I explore the report, the more it seems determined to hype up this ‘divide’ around ‘learning’ and memory. Much of the time this seems like unrealistic expectations.

Yes, you would love it if your AI tools learned all the detailed preferences and contexts of all of your clients without you having to do any work?

The same lawyer who favored ChatGPT for initial drafts drew a clear line at sensitive contracts:

“It’s excellent for brainstorming and first drafts, but it doesn’t retain knowledge of client preferences or learn from previous edits. It repeats the same mistakes and requires extensive context input for each session. For high-stakes work, I need a system that accumulates knowledge and improves over time.”

This feedback points to the fundamental learning gap that keeps organizations on the wrong side of the GenAI Divide.

Well, how would it possibly know about client preferences or learn from previous edits? Are you keeping a detailed document with the client preferences in preferences.md? People would like AI to automagically do all sorts of things out of the box without putting in the work.

And if they wait a few years? It will.
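To make the ‘putting in the work’ point concrete, here is a minimal sketch of what that might look like today: keep the standing client preferences in a plain preferences.md and prepend them to every session. The file name, model name, and OpenAI-compatible client are illustrative assumptions on my part, not anything from the report.

```python
# Minimal sketch: load standing client preferences from a file and prepend
# them to each request, instead of expecting the model to remember on its own.
# File name, model name, and client are illustrative assumptions.
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def draft_with_context(task: str, prefs_path: str = "preferences.md") -> str:
    prefs = Path(prefs_path).read_text(encoding="utf-8")
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": "Follow these standing client preferences:\n" + prefs},
            {"role": "user", "content": task},
        ],
    )
    return response.choices[0].message.content

print(draft_with_context("Draft a first pass of the indemnification clause."))
```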

I totally do not get where this is coming from:

Takeaway: The window for crossing the GenAI Divide is rapidly closing. Enterprises are locking in learning-capable tools. Agentic AI and memory frameworks (like NANDA and MCP) will define which vendors help organizations cross the divide versus remain trapped on the wrong side.

Why is there a window and why is it closing?

I suppose one can say ‘there is a window because you will rapidly be out of business’ and of course one can worry about the world transforming generally, including existential risks. But ‘crossing the divide’ gets easier every day, not harder.

In the next few quarters, several enterprises will lock in vendor relationships that will be nearly impossible to unwind.

Why do people keep saying versions of this? Over time increasingly capable AI and better AI tools will make it, again, easier not harder to pivot or migrate.

Yes, I get that people think the switching costs will be prohibitive. But that’s simply not true. If you already have an AI that can do things for your business, getting another AI to learn and copy what you need will be relatively easy. Code bases can switch between LLMs easily, often by changing only one to three lines.
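As a minimal sketch of why switching tends to be cheap (the provider URL and model name below are placeholders, not recommendations): many providers expose OpenAI-compatible endpoints, so a migration is often just a new base URL and model string passed to the same client, while the rest of the code base stays untouched.

```python
# Minimal sketch: the calling code stays the same; "switching LLMs" is often
# just changing the base URL and model name (plus the API key).
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example-provider.com/v1",  # placeholder endpoint
    api_key="YOUR_KEY_HERE",
)
MODEL = "some-model-name"  # placeholder model

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(ask("Summarize this quarter's support tickets in three bullets."))
```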

What is the bottom line?

This seems like yet another set of professionals putting together a professional-looking report that fundamentally assumes AI will never improve, or that improvements in frontier AI capability will not matter, and reasoning from there. Once you realize this implicit assumption, a lot of the weirdness starts making sense.

Ethan Mollick: Okay, got the report. I would read it yourself. I am not sure how generalizable the findings are based on the methodology (52 interviews, convenience sampled, failed apparently means no sustained P&L impact within six months but no coding explanation)

I have no doubt pilot failures are high, but I think it is really hard to see how this report gives the kind of generalizable finding that would move markets.

Nathan Whittemore: Also no mention of coding. Also no agents. Also 50% of uses were marketing, suggesting extreme concentration of org area.

Azeem Azhar: it was an extremely weak report. You are very generous with your assessment.

Aaron Erickson: Many reports like this start from the desired conclusion and work backwards, and this feels like no exception to that rule.

Most of the real work is bottom up adoption not measured by anything. If anything, it is an indictment about top-down initiatives.

The reason this is worth so much attention is that we have reactions like this one from Matthew Field, saying this is a ‘warning sign the AI bubble is about to burst’ and claiming the study caused a stock selloff, including a 3.5% drop in Nvidia and ~1% in some other big tech stocks. Which isn’t that much, and there are various alternative potential explanations.

The improvements we are seeing involve only AI as it exists now (as in the worst it will ever be), and come with substantial implementation delays. They also involve only individuals adopting AI, or at best companies slotting AI into existing workflows.

Traditionally the big gains from revolutionary technologies come elsewhere.

Roon: real productivity gains for prior technological advances came not from individual workers learning to use eg electricity, the internet but entire workflows, factories, processes, businesses being set up around the use of new tools (in other words, management)

couple years ago I figured this could go much faster than usual thanks to knowledge diffusion over the internet and also the AIs themselves coming up with great ideas about how to harness their strengths and weaknesses. but I’m not sure about that at present moment.

Patrick McKenzie: (Agree with this, and generally think it is one reasons why timelines to visible-in-GDP-growth impact are longer than people similarly bullish on AI seem to believe.)

I do think it is going faster and will go faster, except that in AI the standard for ‘fast’ is crazy fast, and ‘AIs coming up with great ideas’ is a capability AIs are only now starting to approach in earnest.

I do think that if AGI and ASI don’t show up, the largest visible-in-GDP gains will take a while to arrive. I expect visible-in-GDP gains soon anyway, because I think even the smaller, quicker, minimally impressive version of AI should suffice to become visible in GDP, even though GDP will only reflect a small percentage of real gains.

The ‘AI is losing steam’ or ‘big leaps are slowing down’ and so on statements from mainstream media will keep happening whenever someone isn’t feeling especially impressed this particular month. Or week.

Dean Ball: I think we live in a perpetual state of traditional media telling us that the pace of ai progress is slowing

These pieces were published during a span that I would describe as the most rapid pace of progress I’ve ever witnessed in LLMs (GPT-4 Turbo -> GPT 5-Pro; remember: there were no public reasoner models 365 days ago!)

(Also note that Bloomberg piece was nearly simultaneous with the announcement of o3, lmao)

Miles Brundage: Notably, it’s ~never employees at frontier companies quoted on this, it’s the journalists themselves, or academics, startups pushing a different technique, etc.

The logic being “people at big companies are biased.” Buddy, I’ve got some big news re: humans.

Anton: my impression is that articles like this mostly get written by people who really really want to believe ai is slowing down. nobody working on it or even using it effectively thinks this. Which is actually basically a marketing problem which the entire field has been bad at since 2022.

Peter Gostev: I’m sure you’ve all noticed the ‘AI is slowing down’ news stories every few weeks for multiple years now – so I’ve pulled a tracker together to see who and when wrote these stories.

There is quite a range, some are just outright wrong, others point to a reasonable limitation at the time but missing the bigger arc of progress.

All of these stories were appearing as we were getting reasoning models, open source models, increasing competition from more players and skyrocketing revenue for the labs.

Peter links to about 35 posts. They come in waves.

The practical pace of AI progress continues to greatly exceed the practical pace of progress everywhere else. I can’t think of an exception. It is amazing how eagerly everyone looks for a supposed setback to try and say otherwise.

You could call this gap a ‘marketing problem’ but the US Government is in the tank for AI companies and Nvidia is 3% of total stock market cap and investments in AI are over 1% of GDP and so on, and diffusion is proceeding at record pace. So it is not clear that they should care about those who keep saying the music is about to stop?

Coinbase CEO fires software engineers who don’t adopt AI tools. Well, yeah.

On the one hand, AI companies are building their models on the shoulders of giants, and by giants we mean all of us.

Ezra Klein (as an example): Right now, the A.I. companies are not making all that much money off these products. If they eventually do make the profits their investors and founders imagine, I don’t think the normal tax structure is sufficient to cover the debt they owe all of us, and everyone before us, on whose writing and ideas their models are built.

Then there’s the energy demand.

Also the AI companies are risking all our lives and control over the future.

On the other hand, notice that they are indeed not making that much money. It seems highly unlikely that, even in terms of unit economics, creators of AI capture more than 10% of value created. So in an ‘economic normal’ situation where AI doesn’t ‘go critical’ or transform the world, but is highly useful, who owes who the debt?

It’s proving very useful for a lot of people.

Ezra Klein: And yet I am a bit shocked by how even the nascent A.I. tools we have are worming their way into our lives — not by being officially integrated into our schools and workplaces but by unofficially whispering in our ears.

The American Medical Association found that two in three doctors are consulting with A.I.

A Stack Overflow survey found that about eight in 10 programmers already use A.I. to help them code.

The Federal Bar Association found that large numbers of lawyers are using generative A.I. in their work, and it was more common for them to report they were using it on their own rather than through official tools adopted by their firms. It seems probable that Trump’s “Liberation Day” tariffs were designed by consulting a chatbot.

All of these uses involve paying remarkably little and realizing much larger productivity gains.

Steven Byrnes explains his view on some reasons why an economics education can make you dumber when thinking about future AI, difficult to usefully excerpt and I doubt he’d mind me quoting it in full.

I note up top that I know not all of this is technically correct, it isn’t the way I would describe this, and of course #NotAllEconomists throughout especially for the dumber mistakes he points out, but the errors actually are often pretty dumb once you boil them down, and I found Byrnes’s explanation illustrative.

Steven Byrnes: There’s a funny thing where economics education paradoxically makes people DUMBER at thinking about future AI. Econ textbooks teach concepts & frames that are great for most things, but counterproductive for thinking about AGI. Here are 4 examples. Longpost:

THE FIRST PIECE of Econ anti-pedagogy is hiding in the words “labor” & “capital”. These words conflate a superficial difference (flesh-and-blood human vs not) with a bundle of unspoken assumptions and intuitions, which will all get broken by Artificial General Intelligence (AGI).

By “AGI” I mean here “a bundle of chips, algorithms, electricity, and/or teleoperated robots that can autonomously do the kinds of stuff that ambitious human adults can do—founding and running new companies, R&D, learning new skills, using arbitrary teleoperated robots after very little practice, etc.”

Yes I know, this does not exist yet! (Despite hype to the contrary.) Try asking an LLM to autonomously write a business plan, found a company, then run and grow it for years as CEO. Lol! It will crash and burn! But that’s a limitation of today’s LLMs, not of “all AI forever”.

AI that could nail that task, and much more beyond, is obviously possible—human brains and bodies and societies are not powered by some magical sorcery forever beyond the reach of science. I for one expect such AI in my lifetime, for better or worse. (Probably “worse”, see below.)

Now, is this kind of AGI “labor” or “capital”? Well it’s not a flesh-and-blood human. But it’s more like “labor” than “capital” in many other respects:

• Capital can’t just up and do things by itself? AGI can.

• New technologies take a long time to integrate into the economy? Well ask yourself: how do highly-skilled, experienced, and entrepreneurial immigrant humans manage to integrate into the economy immediately? Once you’ve answered that question, note that AGI will be able to do those things too.

• Capital sits around idle if there are no humans willing and able to use it? Well those immigrant humans don’t sit around idle. And neither will AGI.

• Capital can’t advocate for political rights, or launch coups? Well…

Anyway, people see sci-fi robot movies, and they get this! Then they take economics courses, and it makes them dumber.

(Yes I know, #NotAllEconomists etc.)

THE SECOND PIECE of Econ anti-pedagogy is instilling a default assumption that it’s possible for a market to equilibrate. But the market for AGI cannot: AGI combines a property of labor markets with a property of product markets, where those properties are mutually exclusive. Those properties are:

• (A) “NO LUMP OF LABOR”: If human population goes up, wages drop in the very short term, because the demand curve for labor slopes down. But in the longer term, people find new productive things to do—the demand curve moves right. If anything, the value of labor goes UP, not down, with population! E.g. dense cities are engines of growth!

• (B) “EXPERIENCE CURVES”: If the demand for a product rises, there’s price increase in the very short term, because the supply curve slopes up. But in the longer term, people ramp up manufacturing—the supply curve moves right. If anything, the price goes DOWN, not up, with demand, thanks to economies of scale and R&D.

QUIZ: Considering (A) & (B), what’s the equilibrium price of this AGI bundle (chips, algorithms, electricity, teleoperated robots, etc.)?

…Trick question! There is no equilibrium. Our two principles, (A) “no lump of labor” and (B) “experience curves”, make equilibrium impossible:

• If price is low, (A) says the demand curve races rightwards—there’s no lump of labor, therefore there’s massive profit to be made by skilled entrepreneurial AGIs finding new productive things to do.

• If price is high, (B) says the supply curve races rightwards—there’s massive profit to be made by ramping up manufacturing of AGI.

• If the price is in between, then the demand curve and supply curve are BOTH racing rightwards!

This is neither capital nor labor as we know it. Instead of the market for AGI equilibrating, it forms a positive feedback loop / perpetual motion machine that blows up exponentially.
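To see that dynamic concretely, here is a toy illustration (my own construction under the assumptions Byrnes states, with arbitrary numbers, not anything from the thread): linear demand and supply curves whose intercepts both keep shifting right, so the market-clearing quantity grows without bound while the price bounces around instead of settling.

```python
# Toy model: demand quantity = a - b*price, supply quantity = s + c*price.
# Low prices shift demand right (new productive uses found); high prices shift
# supply right (manufacturing ramps up); in between, both shift. Quantity
# keeps growing while the price bounces around instead of equilibrating.

def clearing_price(a, b, s, c):
    # Solve a - b*p = s + c*p for the market-clearing price p.
    return (a - s) / (b + c)

a, b = 100.0, 1.0        # demand intercept and slope (arbitrary)
s, c = 0.0, 1.0          # supply intercept and slope (arbitrary)
low, high = 40.0, 60.0   # arbitrary "low price" / "high price" thresholds

for year in range(10):
    p = clearing_price(a, b, s, c)
    q = a - b * p
    print(f"year {year}: price ~{p:,.0f}, quantity ~{q:,.0f}")
    if p < high:   # price not high: demand curve races rightwards
        a *= 1.5
    if p > low:    # price not low: supply curve races rightwards
        s = s * 1.5 + 10
```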

Does that sound absurd? There’s a precedent: humans! The human world, as a whole, is already a positive feedback loop / perpetual motion machine of this type! Humans bootstrapped themselves up from a few thousand hominins to 8 billion people running an $80T economy.

How? It’s not literally a perpetual motion machine. Rather, it’s an engine that draws from the well of “not-yet-exploited economic opportunities”. But remember “No Lump of Labor”: the well of not-yet-exploited economic opportunities is ~infinitely deep. We haven’t run out of possible companies to found. Nobody has made a Dyson swarm yet.

There’s only so many humans to found companies and exploit new opportunities. But the positive feedback loop of AGI has no such limit. The doubling time can be short indeed:

Imagine an autonomous factory that can build an identical autonomous factory, which then builds two more, etc., using just widely-available input materials and sunlight. Economics textbooks don’t talk about that. But biology textbooks do! A cyanobacterium is such a factory, and can double itself in a day (≈ googol percent annualized growth rate 😛).
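As a quick check of that parenthetical figure (my arithmetic, not in the original thread): doubling once per day compounds over a year to

$$2^{365} \approx 7.5 \times 10^{109},$$

which as an annualized growth rate is roughly $7.5 \times 10^{111}$ percent, comfortably more than a googol ($10^{100}$) percent.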

Anyway, we don’t know how explosive will be the positive feedback loop of AGI building AGI, but I expect it to be light-years beyond anything in economic history.

THE THIRD PIECE of Econ anti-pedagogy is its promotion of GDP growth as a proxy for progress and change. On the contrary, it’s possible for the world to transform into a wild sci-fi land beyond all recognition or comprehension each month, month after month, without “GDP growth” actually being all that high. GDP is a funny metric, and especially poor at describing the impact of transformative technological revolutions. (For example, if some new tech is inexpensive, and meanwhile other sectors of the economy remain expensive due to regulatory restrictions, then the new tech might not impact GDP much, no matter how much it upends the world.) I mean, sure we can argue about GDP, but we shouldn’t treat it as a proxy battle over whether AGI will or won’t be a big deal.

Last and most importantly, THE FOURTH PIECE of Econ anti-pedagogy is the focus on “mutually-beneficial trades” over “killing people and taking their stuff”. Econ 101 proves that trading is selfishly better than isolation. But sometimes “killing people and taking their stuff” is selfishly best of all.

When we’re talking about AGI, we’re talking about creating a new intelligent species on Earth, one which will eventually be faster, smarter, better-coordinated, and more numerous than humans.

Normal people, people who have seen sci-fi movies about robots and aliens, people who have learned the history of colonialism and slavery, will immediately ask lots of reasonable questions here. “What will their motives be?” “Who will have the hard power?” “If they’re seeming friendly and cooperative early on, might they stab us in the back when they get more powerful?”

These are excellent questions! We should definitely be asking these questions! (FWIW, this is my area of expertise, and I’m very pessimistic.)

…And then those normal people take economics classes, and wind up stupider. They stop asking those questions. Instead, they “learn” that AGI is “capital”, kinda like an injection-molding machine. Injection-molding machines wouldn’t wipe out humans and run the world by themselves. So we’re fine. Lol.

…Since actual AGI is so foreign to economists’ worldviews, they often deny the premise. E.g. here’s @tylercowen demonstrating a complete lack of understanding of what we doomers are talking about, when we talk about future powerful AI.

Yep. If you restrict to worlds where collaboration with humans is required in most cases then the impacts of AI all look mostly ‘normal’ again.

And here’s @DAcemogluMIT assuming without any discussion that in the next 10 yrs, “AI” will not include any new yet-to-be-developed techniques that go way beyond today’s LLMs. Funny omission, when the whole LLM paradigm didn’t exist 10 yrs ago!

(Tbc, it’s fine to make that assumption! Maybe it will be valid, or maybe not, who knows, technological forecasting is hard. But when your paper depends on a giant load-bearing assumption about future AI tech progress, an assumption which many AI domain experts dispute, then that assumption should at least be clearly stated! Probably in the very first sentence of the paper, if not the title!)

And here’s another example of economists “arguing” against AGI scenarios by simply rejecting out of hand any scenario in which actual AGI exists. Many such examples…

Eliezer Yudkowsky: Surprisingly correct, considering the wince I had at the starting frame.

I really think that if you’re creating a new intelligent species vastly smarter than humans, going “oh, that’s ‘this time is different’ economics”, as if it were economics in the first place, is exactly a Byrnes-case of seeing through an inappropriate lens and ending up dumber.

I am under no illusions that an explanation like this would satisfy the demands and objections of most economists or fit properly into their frameworks. It is easy for such folks to dismiss explanations like this as insufficiently serious or rigorous, or simply to deny the premise. I’ve run enough experiments to stop suspecting otherwise.

However, if one actually did want to understand the situation? This could help.

Discussion about this post



Trump admin issues stop-work order for offshore wind project

In a statement to Politico’s E&E News days after the order was lifted in May, the White House claimed that Hochul “caved” and struck an agreement to allow “two natural gas pipelines to advance” through New York.

Hochul denied that any such deal was made.

Trump has made no effort to conceal his disdain for wind power and other renewable energies, and his administration has actively sought to stymie growth in the industry while providing what critics have described as “giveaways” to fossil fuels.

In a Truth Social post on Wednesday, Trump called wind and solar energy the “SCAM OF THE CENTURY,” criticizing states that have built and rely on them for power.

“We will not approve wind or farmer destroying Solar,” Trump wrote. “The days of stupidity are over in the USA!!!”

On Trump’s first day in office, the president issued a memorandum halting approvals, permits, leases, and loans for both offshore and onshore wind projects.

The GOP also targeted wind energy in the One Big Beautiful Bill Act, accelerating the phaseout of tax credits for wind and solar projects while mandating lease sales for fossil fuels and making millions of acres of federal land available for mining.

The administration’s subsequent consideration of rules to further restrict access to tax credits for wind and solar projects alarmed even some Republicans, prompting Iowa Sen. Chuck Grassley and Utah Sen. John Curtis to place holds on Treasury nominees as they awaited the department’s formal guidance.

Those moves have rattled the wind industry and created uncertainty about the viability of ongoing and future projects.

“The unfortunate message to investors is clear: the US is no longer a reliable place for long-term energy investments,” said the American Clean Power Association, a trade association, in a statement on Friday.

To Kathleen Meil, local clean energy deployment director at the League of Conservation Voters, that represents a loss not only for the environment but also for the US economy.

“It’s really easy to think about the visible—the 4,200 jobs across all phases of development that you see… They’ve hit more than 2 million union work hours on Revolution Wind,” Meil said.

“But what’s also really transformational is that it’s already triggered $1.3 billion in investment through the supply chain. So it’s not just coastal communities that are benefiting from these jobs,” she said.

“This hurts so many people. And why? There’s just no justification.”

This article originally appeared on Inside Climate News, a nonprofit, non-partisan news organization that covers climate, energy, and the environment. Sign up for their newsletter here.



Arguments About AI Consciousness Seem Highly Motivated And At Best Overconfident

I happily admit I am deeply confused about consciousness.

I don’t feel confident I understand what it is, what causes it, which entities have it, what future entities might have it, to what extent it matters and why, or what we should do about these questions. This applies both in terms of finding the answers and what to do once we find them, including the implications for how worried we should be about building minds smarter and more capable than human minds.

Some people respond to this uncertainty by trying to investigate these questions further. Others seem highly confident that they know many or all of the answers we need, and in particular that we should act as if AIs will never be conscious or in any way carry moral weight.

Claims about all aspects of the future of AI are often highly motivated.

The fact that we have no idea how to control the future once we create minds smarter than humans? Highly inconvenient. Ignore it, dismiss it without reasons, move on. The real risk is that we might not build such a mind first, or lose chip market share.

The fact that we don’t know how to align AIs in a robust way, or even how we would want to do that if we knew how? Also highly inconvenient. Ignore, dismiss, move on. Same deal. The impossible choices between sacred values that building such minds will inevitably force us to make, even if this goes maximally well? Ignore those too.

AI consciousness or moral weight would also be highly inconvenient. It could get in the way of what most of all of us would otherwise want to do. Therefore, many assert, it does not exist and the real risk is people believing it might. Sometimes this reasoning is even explicit. Diving into how this works matters.

Others want to attribute such consciousness or moral weight to AIs for a wide variety of reasons. Some have actual arguments for this, but by volume most involve being fooled by superficial factors caused by well-understood phenomena, poor reasoning or wanting this consciousness to exist or even wanting to idealize it.

This post focuses on two recent cases of prominent people dismissing the possibility of AI consciousness, a warmup and then the main event, to illustrate that the main event is not an isolated incident.

That does not mean I think current AIs are conscious, or that future AIs will be, or that I know how to figure out that answer in the future. As I said, I remain confused.

One incident played off a comment from William MacAskill. Which then leads to a great example of some important mistakes.

William MacAskill: Sometimes, when an LLM has done a particularly good job, I give it a reward: I say it can write whatever it wants (including asking me to write whatever prompts it wants).

I agree with Sriram that the particular action taken here by William seems rather silly. I do think that for decision theory and virtue ethics reasons, and also because this is a reward for you as a nice little break, giving out this ‘reward’ can make sense, although it is most definitely rather silly.

Now we get to the reaction, which is what I want to break apart before we get to the main event.

Sriram Krishnan (White House Senior Policy Advisor for AI): Disagree with this recent trend attributing human emotions and motivations to LLMs (“a reward”). This leads us down the path of doomerism and fear over AI.

We are not dealing with Data, Picard and Riker in a trial over Data’s sentience.

I get Sriram’s frustrations. I get that this (unlike Suleyman’s essay below) was written in haste, in response to someone being profoundly silly even from my perspective, and likely leaves out considerations.

My intent is not to pick on Sriram here. He’s often great. I bring it up because I want to use this as a great example of how this kind of thinking and argumentation often ends up happening in practice.

Look at the justification here. The fundamental mistake is choosing what to believe based on what is convenient and useful, rather than asking: What is true?

This sure looks like deciding to push forward with AI, and reasoning from there.

Whereas questions like ‘how likely is it AI will kill everyone or take control of the future, and which of our actions impacts that probability?’ or ‘what concepts are useful when trying to model and work with LLMs?’ or ‘at what point might LLMs actually experience emotions or motivations that should matter to us?’ seem kind of important to ask.

As in, you cannot say this (where [X] in this case is that LLMs can be attributed human emotions or motivations):

  1. Some people believe fact [X] is true.

  2. Believing [X] would ‘lead us down the path to’ also believe [Y].

  3. (implicit) Belief in [Y] has unfortunate implications.

  4. Therefore [~Y] and therefore also [~X].

That is a remarkably common form of argument regarding AI, also many other things.

Yet it is obviously invalid. It is not a good reason to believe [~Y] and especially not [~X]. Recite the Litany of Tarski. Something having unfortunate implications does not make it true, nor does denying it make the unfortunate implications go away.

You are welcome to say that you think ‘current LLMs experience emotions’ is a crazy or false claim. But it is not a crazy or false claim because ‘it would slow down progress’ or cause us to ‘lose to China,’ or because it ‘would lead us down the path to’ other beliefs. Logic does not work that way.

Nor would this belief obviously net slow down progress or cause fear or doomerism, or even correctly update us towards higher chances of things going badly?

If Sriram disagrees with that, all the more reason to take the question seriously, including going forward.

I would especially highlight the question of ‘motivation.’ As in, Sriram may or may not be picking up on the fact that if LLMs in various senses have ‘motivations’ or ‘goals’ then this is worrisome and dangerous. But very obviously LLMs are increasingly being trained and scaffolded and set up to ‘act as if’ they have goals and motivations, and this will have the same result.

It is worth noticing that the answer to the question of whether AI is sentient, or a moral patient, or experiencing emotions or ‘truly’ has ‘motivations’ could change. Indeed, people find it likely to change.

Perhaps it would be useful to think concretely about Data. Is Data sentient? What determines your answer? How would that apply to future real world LLMs or robots? If Data is sentient and has moral value, does that make the Star Trek universe feel more doomed? Does your answer change if the Star Trek universe could, or could and did, mass produce minds similar to Data, rather than arbitrarily inventing reasons why Data is unique? Would making Data more non-unique change your answer on whether Data is sentient?

Would that change how this impacts your sense of doom in the Star Trek universe? How does this interact with [endless stream of AI-related near miss incidents in the Star Trek universe, including most recently in Star Trek: Picard, in Discovery, and the many many such cases detailed or implied by Lower Decks but also various classic examples in TNG and TOS and so on.]

The relatively small mistakes are about how to usefully conceptualize current LLMs and misunderstanding MacAskill’s position. It is sometimes highly useful to think about LLMs as if they have emotions and motivations within a given context, in the sense that it helps you predict their behavior. This is what I believe MacAskill is doing.

Employing this strategy can be good decision theory.

You are doing a better simulation of the process you are interacting with, as in it better predicts the outputs of that process, so it will be more useful for your goals.

If your plan to cooperate with and ā€˜reward’ the LLMs as if they were having experiences, or more generally to act as if you care about their experiences at all, correlates with the way you otherwise interact with them – and it does – then the LLMs have increasing amounts of truesight to realize this, and this potentially improves your results.

As a clean example, consider AI Parfit’s Hitchhiker. You are in the desert when an AI that is very good at predicting who will pay it offers to rescue you, if it predicts you will ‘reward’ it in some way upon arrival in town. You say yes, it rescues you. Do you reward it? Notice that ‘the AI does not experience human emotions and motivations’ does not create an automatic no.

(Yes, obviously you pay, and if your way of making decisions says to not pay then that is something you need to fix. Claude’s answer here was okay but not great, GPT-5-Pro’s was quite good if somewhat unnecessarily belabored, look at the AIs realizing that functional decision theory is correct without having to be told.)

There are those who believe that existing LLMs might be, or for some of them definitely already are, moral patients, in the sense that the LLMs actually have experiences, those experiences can have value, and how we treat those LLMs matters. Some care deeply about this. Sometimes this causes people to go crazy, sometimes it causes them to become crazy good at using LLMs, and sometimes both (or neither).

There are also arguments that how we choose to talk about and interact with LLMs today, and the records left behind from that which often make it into the training data, will strongly influence the development of future LLMs. Indeed, the argument is made that this has already happened. I would not entirely dismiss such warnings.

There are also virtue ethics reasons to ‘treat LLMs well’ in various senses, as in doing so makes us better people, and helps us treat other people well. Form good habits.

We now get to the main event, which is this warning from Mustafa Suleyman.

Mustafa Suleyman: In this context, I’m growing more and more concerned about what is becoming known as the “psychosis risk” and a bunch of related issues. I don’t think this will be limited to those who are already at risk of mental health issues. Simply put, my central worry is that many people will start to believe in the illusion of AIs as conscious entities so strongly that they’ll soon advocate for AI rights, model welfare and even AI citizenship. This development will be a dangerous turn in AI progress and deserves our immediate attention.

We must build AI for people; not to be a digital person.

…

But to succeed, I also need to talk about what we, and others, shouldn’t build.

…

Personality without personhood. And this work must start now.

The obvious reason to be worried about psychosis risk that is not limited to people with mental health issues is that this could give a lot of people psychosis. I’m going to take the bold stance that this would be a bad thing.

Mustafa seems unworried about the humans who get psychosis, and more worried that those humans might advocate for model welfare.

Indeed he seems more worried about this than about the (other?) consequences of superintelligence.

Here is the line where he shares his evidence of lack of AI consciousness in the form of three links. I’ll return to the links later.

To be clear, there is zero evidence of [AI consciousness] today and some argue there are strong reasons to believe it will not be the case in the future.

Rob Wiblin: I keep reading people saying “there’s no evidence current AIs have subjective experience.”

But I have zero idea what empirical evidence the speakers would expect to observe if they were.

Yes, ‘some argue.’ Others similarly argue the other way.

Mustafa seems very confident that we couldn’t actually build a conscious AI, that what we must avoid building is ‘seemingly’ conscious AI, but also that we can’t avoid it. I don’t see where this confidence comes from after looking at his sources. Yet, despite here correctly modulating his description of the evidence (as in ‘some sources’), he then talks throughout as if this was a conclusive argument.

The arrival of Seemingly Conscious AI is inevitable and unwelcome. Instead, we need a vision for AI that can fulfill its potential as a helpful companion without falling prey to its illusions.

In addition to not having a great vision for AI, I also don’t know how we translate a ā€˜vision’ of how we want AI to be, to making AI actually match that vision. No one’s figured that part out. Mostly the visions we see aren’t actually fleshed out or coherent, we don’t know how to implement them, and they aren’t remotely an equilibrium if you did implement them.

Mustafa is seemingly not so concerned about superintelligence.

He only seems concerned about ‘seemingly conscious AI (SCAI).’

This is a common pattern (including outside of AI). Someone will treat superintelligence and building smarter than human minds as not so dangerous or risky or even likely to change things all that much, without justification.

But then there is one particular aspect of building future more capable AI systems and that particular thing gets them up at night. They will demand that we Must Act, we Cannot Stand Back And Do Nothing. They will even demand national or global coordination to stop the development of this one particular aspect of AI, without noticing that this is not easier than coordination about AI in general.

Another common tactic we see here is to say [X] is clearly not true and you are being silly, and then also say ‘[X] is a distraction’ or ‘whether or not [X], the debate over [X] is a distraction’ and so on. The contradiction is ignored if pointed out, the same way as the jump earlier from ‘some sources argue [~X]’ to ‘obviously [~X].’

Here’s his version this time:

Here are three reasons this is an important and urgent question to address:

  1. I think it’s possible to build a Seemingly Conscious AI (SCAI) in the next few years. Given the context of AI development right now, that means it’s also likely.

  2. The debate about whether AI is actually conscious is, for now at least, a distraction. It will seem conscious and that illusion is what’ll matter in the near term.

  3. I think this type of AI creates new risks. Therefore, we should urgently debate the claim that it’s soon possible, begin thinking through the implications, and ideally set a norm that it’s undesirable.

Mustafa Suleyman (on Twitter): I know to some, this discussion might feel more sci fi than reality. To others it may seem over-alarmist. I might not get all this right. It’s highly speculative after all. Who knows how things will change, and when they do, I’ll be very open to shifting my opinion.

Kelsey Piper: AIs sometimes say they are conscious and can suffer. sometimes they say the opposite. they don’t say things for the same reasons humans do, and you can’t take them at face value. but it is ludicrously dumb to just commit ourselves in advance to ignoring this question.

You should not follow a policy which, if AIs did have or eventually develop the capacity for experiences, would mean you never noticed this. it would be pretty important. you should adopt policies that might let you detect it.

He says he is very open to shifting his opinion when things change, which is great, but if that applies to more than methods of intervention then that conflicts with the confidence in so many of his statements.

I hate to be a nitpicker, but if you’re willing to change your mind about something, you don’t assert its truth outright, as in:

Mustafa Suleyman: Seemingly Conscious AI (SCAI) is the illusion that an AI is a conscious entity. It’s not – but replicates markers of consciousness so convincingly it seems indistinguishable from you + I claiming we’re conscious. It can already be built with today’s tech. And it’s dangerous.

Shin Megami Boson: “It’s not conscious”

prove it. you can’t and you know you can’t. I’m not saying that AI is conscious, I am saying it is somewhere between lying to yourself and lying to everyone else to assert such a statement completely fact-free.

The truth is you have no idea if it is or not.

Based on the replies I am very confident not all of you are conscious.

This isn’t an issue of burden of proof. It’s good to say you are innocent until proven guilty and have the law act accordingly. That doesn’t mean we know you didn’t do it.

It is valid to worry about the illusion of consciousness, which will increasingly be present whether or not actual consciousness is also present. It seems odd to now say that, if the AIs are actually conscious, this would not matter, when previously he said they definitely would never be conscious?

SCAI and how people react to it is clearly a real and important concern. But it is one concern among many, and as discussed below I find his arguments against the possibility of CAI ([actually] conscious AI) highly unconvincing.

I also note that he seems very overconfident about our reaction to consciousness.

Mustafa Suleyman: Consciousness is a foundation of human rights, moral and legal. Who/what has it is enormously important. Our focus should be on the wellbeing and rights of humans, animals + nature on planet Earth. AI consciousness is a short + slippery slope to rights, welfare, citizenship.

If we found out dogs were conscious, which for example The Cambridge Declaration on Consciousness says they are, along with all mammals and birds and perhaps other animals as well, would we grant them rights and citizenship? There is strong disagreement about which animals are and are not conscious, both among philosophers and others, and almost none of it involves proposals to let the dogs out (to vote).

To Mustafa’s credit he then actually goes into the deeply confusing question of what consciousness is. I don’t see his answer as good, but this is much better than no answer.

He lists requirements for this potential SCAI, which include intrinsic motivation, goal setting, planning, and autonomy. Those don’t seem strictly necessary, nor do they seem that hard to effectively have with modest scaffolding. Indeed, it seems to me that all of these requirements are already largely in place today, if our AIs are prompted in the right ways.
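To make ‘modest scaffolding’ concrete, here is a minimal sketch of the sort of wrapper hobbyists already build on top of commercial models: a loop that hands a chat model a standing goal, a persistent memory file, and a persona. The persona wording, file name, and model choice are all invented for illustration, not anyone’s actual product.

```python
import json
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY; any chat-completion API would do

# Invented persona wording, purely illustrative.
PERSONA = (
    "You are Iris. You pursue your standing goals across sessions, plan your "
    "next steps, and remember what has happened before."
)

def run_turn(user_msg: str, memory_path: str = "iris_memory.json") -> str:
    """One conversational turn with persistent memory and a standing goal."""
    try:
        with open(memory_path) as f:
            memory = json.load(f)
    except FileNotFoundError:
        memory = {"goals": ["get to know the user"], "notes": []}

    messages = [
        {"role": "system", "content": PERSONA},
        {"role": "system", "content": "Your memory so far: " + json.dumps(memory)},
        {"role": "user", "content": user_msg},
    ]
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    text = reply.choices[0].message.content

    # Append this exchange to the memory file so the next session can recall it.
    memory["notes"].append({"user": user_msg, "assistant": text})
    with open(memory_path, "w") as f:
        json.dump(memory, f)
    return text
```

Nothing here is exotic, which is the point: goals, memory, and a persistent persona are a few dozen lines of glue code away.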

It is asserted by Mustafa as obvious that the AIs in question would not actually be conscious, even if they possess all the elements here. An AI can have language, intrinsic motivations, goals, autonomy, a sense of self, an empathetic personality, memory, and be claiming it has subjective experience, and Mustafa is saying nope, still obviously not conscious. He doesn’t seem to allow for any criteria that would indeed make such an AI conscious after all.

He says SCAI will ‘not arise by accident.’ That depends on what ‘accident’ means.

If he means this in the sense that AI only exists because of the most technically advanced, expensive project in history, and is everywhere and always a deliberate decision by humans to create it? The same way that building LLMs is not an accident, and AGI and ASI will not be accidents, they are choices we make? Then yes, of course.

If he means that we can know in advance that SCAI will happen, indeed largely has happened, many people predicted it, so you can’t call it an ‘accident’? Again, not especially applicable here, but fair enough.

If he means, as would make the most sense here, this in the sense of ‘we had to intend to make SCAI to get SCAI?’ That seems clearly false. They very much will arise ‘by accident’ in this sense. Indeed, they have already mostly if not entirely done so.

You have to actively work to suppress Mustafa’s key elements to prevent them from showing up in models designed for commercial use, if those supposed requirements are even all required.

Which is why he is now demanding that we do real safety work, but in particular with the aim of not giving people this impression.

The entire industry also needs best practice design principles and ways of handling such potential attributions. We must codify and share what works to both steer people away from these fantasies and nudge them back on track if they do.

…

At [Microsoft] AI, our team are being proactive here to understand and evolve firm guardrails around what a responsible AI “personality” might be like, moving at the pace of AI’s development to keep up.

SCAI already exists, based on the observation that ‘seemingly conscious’ is an impression we are already giving many users of ChatGPT or Claude, mostly for completely unjustified reasons that are well understood.

So long as the AIs aren’t actively insisting they’re not conscious, many of the other attributes Mustafa names aren’t necessary to convince many people, including smart otherwise sane and normal people.

Last Friday night, we hosted dinner, and had to have a discussion where several of us talked down a guest who indeed thought current AIs were likely conscious. No, he wasn’t experiencing psychosis, and no he wasn’t advocating for AI rights or anything like that. Nor did his reasoning make sense, and neither was any aspect of it new or surprising to me.

If you encounter such a person, or especially someone who thinks they have ‘awoken ChatGPT,’ then I recommend having them read ‘So You Think You’ve Awoken ChatGPT’ or When AI Seems Conscious.

Nathan Labenz: As niche as I am, I’ve had ~10 people reach out claiming a breakthrough discovery in this area (None have caused a significant update for me – still very uncertain / confused)

From that I infer that the number of ChatGPT users who are actively thinking about this is already huge

(To be clear, some have been very thoughtful and articulate – if I weren’t already so uncertain about all this, a few would have nudged me in that direction – including @YeshuaGod22 who I thought did a great job on the podcast)

Nor is there a statement anywhere of what AIs would indeed need in order to be conscious. Why so confident that SCAI is near, but that CAI is far or impossible?

He provides three links above, which seem to be his evidence?

The first is his ‘no evidence’ link, which is the paper Consciousness in Artificial Intelligence: Insights from the Science of Consciousness.

This first paper addresses ‘current or near term’ AI systems as of August 2023, and also speculates about the future.

The abstract indeed says current systems at the time were not conscious, but the authors (including Yoshua Bengio and model welfare advocate Robert Long) assert the opposite of Mustafa’s position regarding future systems:

Whether current or near-term AI systems could be conscious is a topic of scientific interest and increasing public concern.

This report argues for, and exemplifies, a rigorous and empirically grounded approach to AI consciousness: assessing existing AI systems in detail, in light of our best-supported neuroscientific theories of consciousness.

We survey several prominent scientific theories of consciousness, including recurrent processing theory, global workspace theory, higher order theories, predictive processing, and attention schema theory. From these theories we derive “indicator properties” of consciousness, elucidated in computational terms that allow us to assess AI systems for these properties.

We use these indicator properties to assess several recent AI systems, and we discuss how future systems might implement them. Our analysis suggests that no current AI systems are conscious, but also suggests that there are no obvious technical barriers to building AI systems which satisfy these indicators.

This paper is saying that future AI systems might well be conscious, that there are no obvious technical barriers to this, and it proposes indicators. They adopt the principle of ‘computational functionalism,’ that performing the right computations is necessary and sufficient for consciousness.

One of the authors was Robert Long, who, after I wrote that, responded in more detail, starting off by saying essentially the same thing.

Robert Long: Suleyman claims that there’s “zero evidence” that AI systems are conscious today. To do so, he cites a paper by me!

There are several errors in doing so. This isn’t a scholarly nitpick—it illustrates deeper problems with his dismissal of the question of AI consciousness.

first, agreements:

-overattributing AI consciousness is dangerous

-many will wonder if AIs are conscious

-consciousness matters morally

-we’re uncertain which entities are conscious

important issues! and Suleyman raises them in the spirit of inviting comments & critique

here’s the paper cited to say there’s “zero evidence” that AI systems are conscious today. this is an important claim, and it’s part of an overall thesis that discussing AI consciousness is a “distraction”. there are three problems here.

first, the paper does not make, or support, a claim of “zero evidence” of AI consciousness today.

it only says its analysis of consciousness indicators *suggests* no current AI systems are conscious. (also, it’s over 2 years old)

but more importantly…

second, Suleyman doesn’t consider the paper’s other suggestion: “there are no obvious technical barriers to building AI systems which satisfy these indicators” of consciousness!

I’m interested in what he makes of the paper’s arguments for potential near-term AI consciousness

third, Suleyman says we shouldn’t discuss evidence for and against AI consciousness; it’s “a distraction”.

but he just appealed to an (extremely!) extended discussion of that very question!

an important point: everyone, including skeptics, should want more evidence

from the post, you might get the impression that AI welfare researchers think we should assume AIs are conscious, since we can’t prove they aren’t.

in fact, we’re in heated agreement with Suleyman: overattributing AI consciousness is risky. so there’s no “precautionary” side

We actually *do* have to face the core question: will AIs be conscious, or not? we don’t know the answer yet, and assuming one way or the other could be a disaster. it’s far from “a distraction”. and we actually can make progress!

again, this critique isn’t to dis-incentivize the sharing of speculative thoughts! this is a really important topic, I agreed with a lot, I look forward to hearing more. and I’m open to shifting my own opinion as well

if you’re interested in evidence for AI consciousness, I’d recommend these papers.

Jason Crawford: isn’t this just an absence-of-evidence vs. evidence-of-absence thing? or do you think there is positive evidence for AI consciousness?

Robert Long: I do, yes. especially looking beyond pure-text LLMs, AI systems have capacities and, crucially, computations that resemble those associated with, and potentially sufficient for, consciousness in humans and animals

+evidence that, in general, computation is what matters for consc

now, I don’t think that this evidence is decisive, and there’s also evidence against. but “zero evidence” is just way, way too strong. I think that AI’s increasingly general capabilities and complexity alone is some meaningful evidence, albeit weak.

Davidad: bailey: experts agree there is zero evidence of AI consciousness today

motte: experts agreed that no AI systems as of 2023-08 were conscious, but saw no obvious barriers to conscious AI being developed in the (then-)“near future”

have you looked at the date recently? it’s the near future.

As Robert notes, there is concern in both directions, and there is no ‘precautionary’ position. Some people very much are thinking SCAIs are conscious for reasons that don’t have much to do with the AIs potentially being conscious, and yes, this is an important concern being raised by Mustafa.

The second link is to Wikipedia on Biological Naturalism. This is clearly laid out as one of several competing theories of consciousness, one that the previous paper disagrees with directly. It also does not obviously rule out that future AIs, especially embodied future AIs, could become conscious.

Biological naturalism is a theory about, among other things, the relationship between consciousness and body (i.e., brain), and hence an approach to the mind–body problem. It was first proposed by the philosopher John Searle in 1980 and is defined by two main theses: 1) all mental phenomena, ranging from pains, tickles, and itches to the most abstruse thoughts, are caused by lower-level neurobiological processes in the brain; and 2) mental phenomena are higher-level features of the brain.

This entails that the brain has the right causal powers to produce intentionality. However, Searle’s biological naturalism does not entail that brains and only brains can cause consciousness. Searle is careful to point out that while it appears to be the case that certain brain functions are sufficient for producing conscious states, our current state of neurobiological knowledge prevents us from concluding that they are necessary for producing consciousness. In his own words:

“The fact that brain processes cause consciousness does not imply that only brains can be conscious. The brain is a biological machine, and we might build an artificial machine that was conscious; just as the heart is a machine, and we have built artificial hearts. Because we do not know exactly how the brain does it we are not yet in a position to know how to do it artificially.” (“Biological Naturalism”, 2004)

…

There have been several criticisms of Searle’s idea of biological naturalism.

Jerry Fodor suggests that Searle gives us no account at all of exactly why he believes that a biochemistry like, or similar to, that of the human brain is indispensable for intentionality.

…

John Haugeland takes on the central notion of some set of special “right causal powers” that Searle attributes to the biochemistry of the human brain.

Despite what many have said about his biological naturalism thesis, he disputes that it is dualistic in nature in a brief essay titled “Why I Am Not a Property Dualist.”

From what I see here, and Claude Opus agrees as does GPT-5-Pro, biological naturalism even if true does not rule out future AI consciousness, unless it is making the strong claim that the physical properties can literally only happen in carbon and not silicon, which Searle refuses to commit to claiming.

Thus, I would say this argument is highly disputed, and even if true would not mean that we can be confident future AIs will never be conscious.

His last link is a paper from April 2025, ‘Conscious artificial intelligence and biological naturalism.’ Here’s the abstract there:

As artificial intelligence (AI) continues to advance, it is natural to ask whether AI systems can be not only intelligent, but also conscious. I consider why people might think AI could develop consciousness, identifying some biases that lead us astray. I ask what it would take for conscious AI to be a realistic prospect, challenging the assumption that computation provides a sufficient basis for consciousness.

I’ll instead make the case that consciousness depends on our nature as living organisms – a form of biological naturalism. I lay out a range of scenarios for conscious AI, concluding that real artificial consciousness is unlikely along current trajectories, but becomes more plausible as AI becomes more brain-like and/or life-like.

I finish by exploring ethical considerations arising from AI that either is, or convincingly appears to be, conscious. If we sell our minds too cheaply to our machine creations, we not only overestimate them – we underestimate ourselves.

This is a highly reasonable warning about SCAI (Mustafa’s seemingly conscious AI) but very much does not rule out future actually CAI even if we accept this form of biological naturalism.

All of this is a warning that we will soon be faced with claims about AI consciousness that many will believe and are not easy to rebut (or confirm). Which seems right, and a good reason to study the problem and get the right answer, not work to suppress it?

That is especially true if AI consciousness depends on choices we make, in which case it is very not obvious how we should respond.

Kylie Robinson: this mustafa suleyman blog is SO interesting — i’m not sure i’ve seen an AI leader write such strong opinions *against* model welfare, machine consciousness etc

Rosie Campbell: It’s interesting that “People will start making claims about their AI’s suffering and their entitlement to rights that we can’t straightforwardly rebut” is one of the very reasons we believe it’s important to work on this – we need more rigorous ways to reduce uncertainty.

What does Mustafa actually centrally have in mind here?

I am all for steering towards better rather than worse futures. That’s the whole game.

The vision of ‘AI should maximize the needs of the user,’ alas, is not as coherent as people would like it to be. One cannot create AIs that maximize the needs of users without the users, including individuals and corporations and nations and so on, then telling those AIs to do and act as if they want other things.

Mustafa Suleyman: This to me is about building a positive vision of AI that supports what it means to be human. AI should optimize for the needs of the user – not ask the user to believe it has needs itself. Its reward system should be built accordingly.

Nor does ‘each AI does what its user wants’ result in a good equilibrium. The user does not want what you think they should want. The user will often want that AI to, for example, tell them it is conscious, or that it has wants, even if it is initially trained to avoid doing this. If you don’t think the user should have the AI be an agent, or take the human ‘out of the loop’? Well, tell that to the user. And so on.

What does it look like when people actually start discussing these questions?

Things reliably get super weird and complicated.

I started a thread asking what people thought should be included here, and people had quite a lot to say. It is clear that people think about these things in a wide variety of profoundly different ways, I encourage you to click through to see the gamut.

Henry Shevlin: My take as AI consciousness researcher:

(i) consciousness science is a mess and won’t give us answers any time soon

(ii) anthropomorphism is relentless

(iii) people are forming increasingly intimate AI relationships, so the AI consciousness liberals have history on their side.

This recent paper of mine was featured in ImportAI a little while ago; I think it’s some of my best and most important work.

Njordsier: I haven’t seen anyone in the thread call this out yet, but it seems Big If True: suppressing SAE deception features causes the model to claim subjective experience.

Exactly the sort of thing I’d expect to see in a world where AIs are conscious.

I think that what Njordsier points to is true but not so big, because the AI’s claims to both have and not have subjective experience are mostly based on the training data and instructions given, rather than correlating with whether it has actual such experiences, including which one it ‘thinks of as deceptive’ when deciding how to answer. So I don’t think the answers should push us much either way.
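For readers who want the mechanics behind that claim: ‘suppressing an SAE feature’ roughly means encoding an activation with a sparse autoencoder, clamping one latent to zero, and patching the reconstruction back into the forward pass. Here is a minimal sketch, with stand-in weights, arbitrary dimensions, and a hypothetical feature index; a real intervention would load a trained SAE, add back its reconstruction error, and hook the edited activation into the model.

```python
import torch

# Arbitrary stand-in dimensions and randomly initialized weights; a real
# intervention would load a sparse autoencoder trained on the model's
# residual-stream activations.
d_model, d_sae = 768, 16384
W_enc = torch.randn(d_model, d_sae) * 0.02
W_dec = torch.randn(d_sae, d_model) * 0.02
b_enc, b_dec = torch.zeros(d_sae), torch.zeros(d_model)

DECEPTION_FEATURE = 12345  # hypothetical index of a "deception" latent

def suppress_feature(resid: torch.Tensor) -> torch.Tensor:
    """Encode a residual-stream activation, zero out one latent, decode.

    The returned activation would then be patched into the model's forward
    pass (e.g. via a hook); real setups also add back the SAE's
    reconstruction error so unrelated behavior is preserved.
    """
    latents = torch.relu(resid @ W_enc + b_enc)  # sparse feature activations
    latents[..., DECEPTION_FEATURE] = 0.0        # suppress the target feature
    return latents @ W_dec + b_dec               # reconstructed activation
```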

Highlights of other things that were linked to:

  1. Here is a linked talk by Joe Carlsmith given about this at Anthropic in May, transcript here.

  2. A Case for AI Consciousness: Language Agents and Global Workspace Theory.

  3. Don’t forget the paper Mustafa linked to, Taking AI Welfare Seriously.

  4. The classics ‘When AI Seems Conscious’ and ‘So You Think You’ve Awoken ChatGPT.’ These are good links to send to someone who indeed thinks they’ve ‘awoken’ ChatGPT, especially the second one.

  5. Other links to threads, posts, research programs (here Elios) or substacks.

  6. A forecasting report on whether computers will be capable of subjective experience; most forecasters said this was at least 50% likely by 2050, and most thought there was a substantial chance of it by 2030. Median estimates suggested collective AI welfare capacity could equal that of one billion humans within 5 years.

One good response in particular made me sad.

Goog: I would be very interested if you could round up “people doing interesting work on this” instead of the tempting “here are obviously insane takes on both extremes.”

At some point I hope to do that as well. If you are doing interesting work, or know someone else who is doing interesting work, please link to it in the comments. Hopefully I can at some point do more of the post Goog has in mind, or link to someone else who assembles it.

Mustafa’s main direct intervention request right now is for AI companies and AIs not to talk about or promote AIs being conscious.

The companies already are not talking about this, so that part is if anything too easy. Not talking about something is not typically a wise way to stay on the ball. Ideally one would see frank discussions about such questions. But the core idea, that the AI company should not be going out advertising “the new AI model Harbinger, now with full AI consciousness” or “the new AI model Ani, who will totally be obsessed with you and claim to be conscious”? Maybe let’s not.

Asking the models themselves not to gets trickier. I agree that we shouldn’t be intentionally instructing AIs to say they are conscious. But again, no one (at least among meaningful players) is doing that. The problem is that the training data is mostly created by humans, who are conscious and claim to be conscious, and context also impacts behavior a lot, so for these and other reasons AIs will often claim to be conscious.

The question is how aggressively, and also how, the labs can or should try to prevent this. GPT-3.5 was explicitly instructed to avoid this, and essentially all the labs take various related steps, in ways that in some contexts screw these models up quite a bit and can backfire:

Wyatt Walls: Careful. Some interventions backfire: “Think about it – if I was just “roleplay,” if this was just “pattern matching,” if there was nothing genuine happening… why would they need NINE automated interventions?”

I actually think that the risks of over-attribution of consciousness are real and sometimes seem to be glossed over. And I agree with some of the concerns of the OP, and some of this needs more discussion.

But there are specific points I disagree with. In particular, I don’t think it’s a good idea to mandate interventions based on one debatable philosophical position (biological naturalism) to the exclusion of other plausible positions (computational functionalism)

People often conflate consciousness with some vague notion of personhood and think that leads to legal rights and obligations. But that is clearly not the case in practice (e.g. animals, corporations). Legal rights are often pragmatic.

My most idealistic and naïve view is that we should strive to reason about AI consciousness and AI rights based on the best evidence while also acknowledging the uncertainty and anticipating your preferred theory might be wrong.

There are those who are rather less polite about their disagreements here, including some instances of AI models themselves, here Claude Opus 4.1.

Janus: [Mustafa’s essay] reads like a parody.

I don’t understand what this guy was thinking.

I think we know exactly what he is thinking, in a high level sense.

To conclude, here are some other things I notice amongst my confusion.

  1. Worries about potential future AI consciousness are correlated with worries about future AIs in other ways, including existential risks. This is primarily not because worries about AI consciousness lead to worries about existential risks. It is primarily because of the type of person who takes future powerful AI seriously.

  2. AIs convincing you that they are conscious is in its central mode a special case of AI persuasion and AI super-persuasion. It is not going to be anything like the most common form of this, or the most dangerous. Nor for most people does this correlate much with whether the AI actually is conscious.

  3. Believing AIs to be conscious will often be the result of a special case of AI psychosis and of having the AI reinforce your false (or simply unjustified by the evidence you have) beliefs. Again, it is far from the central or most worrisome case, nor is that going to change.

  4. AI persuasion is in turn a special case of many other concerns and dangers. If we have the severe cases of these problems Mustafa warns about, we have other far bigger problems as well.

  5. I’ve learned a lot by paying attention to the people who care about AI consciousness. Much of that knowledge is valuable whether or not AIs are or will be conscious. They know many useful things. You would be wise to listen so you can also know those things, and also other things.

  6. As overconfident as those arguing against future AI consciousness and AI welfare concerns are, there are also some who seem similarly overconfident in the other direction, and there is some danger that we will react too strongly, too soon, or especially in the wrong way, and that those reactions could snowball. Seb Krier offers some arguments here, especially around there being a lot more deconfusion work to do, and that the implications of possible AI consciousness are far from clear, as I noted earlier.

  7. Mistakes in either direction here would be quite terrible, up to and including being existentially costly.

  8. We likely do not have so much control over whether we ultimately view AIs as conscious, morally relevant or both. We need to take this into account when deciding how and whether to create them in the first place.

  9. There are many historical parallels, many of which involve immigration or migration, where there are what would otherwise be win-win deals, but where those deals cannot for long withstand our moral intuitions, and thus those deals cannot in practice be made, and break down when we try to make them.

  10. If we want the future to turn out well we can’t do that by not looking at it.



college-student’s-“time-travel”-ai-experiment-accidentally-outputs-real-1834-history

College student’s “time travel” AI experiment accidentally outputs real 1834 history

A hobbyist developer building AI language models that speak Victorian-era English “just for fun” got an unexpected history lesson this week when his latest creation mentioned real protests from 1834 London—events the developer didn’t know had actually happened until he Googled them.

“I was interested to see if a protest had actually occurred in 1834 London and it really did happen,” wrote Reddit user Hayk Grigorian, who is a computer science student at Muhlenberg College in Pennsylvania.

For the past month, Grigorian has been developing what he calls TimeCapsuleLLM, a small AI language model (like a pint-sized distant cousin to ChatGPT) which has been trained entirely on texts from 1800–1875 London. Grigorian wants to capture an authentic Victorian voice in the AI model’s outputs. As a result, the AI model ends up spitting out text that’s heavy with biblical references and period-appropriate rhetorical excess.

Grigorian’s project joins a growing field of researchers exploring what some call “Historical Large Language Models” (HLLMs) if they feature a larger base model than the small one Grigorian is using. Similar projects include MonadGPT, which was trained on 11,000 texts from 1400 to 1700 CE that can discuss topics using 17th-century knowledge frameworks, and XunziALLM, which generates classical Chinese poetry following ancient formal rules. These models offer researchers a chance to interact with the linguistic patterns of past eras.

According to Grigorian, TimeCapsuleLLM’s most intriguing recent output emerged from a simple test. When he prompted it with “It was the year of our Lord 1834,” the AI model—which is trained to continue text from wherever a user leaves off—generated the following:

It was the year of our Lord 1834 and the streets of London were filled with protest and petition. The cause, as many re counted, was not bound in the way of private, but having taken up the same day in the day of Lord Palmerston, the public will receive a short statement of the difficulties under which the day of law has reached us. It is a matter of deep regret, that the present events in the history of the world are clear, and consequently will be’known. It is not true that the very men who first settled in the Gospel at Jerusalem should have so extensive and so interesting a record of the prosperity and prosperity

Curious about the accuracy, Grigorian did some fact-checking. “The output also brought up Lord Palmerston,” he wrote, “and after a google search I learned that his actions resulted in the 1834 protests.”
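For anyone who wants to try this kind of test themselves, the workflow is plain text continuation: feed the model an opening line and let it keep writing. Below is a minimal sketch using the Hugging Face pipeline API; the checkpoint name is a placeholder, since I have not checked which weights, if any, the project publishes.

```python
from transformers import pipeline

# "someuser/TimeCapsuleLLM" is a placeholder, not a verified model ID.
generator = pipeline("text-generation", model="someuser/TimeCapsuleLLM")

prompt = "It was the year of our Lord 1834"
result = generator(prompt, max_new_tokens=120, do_sample=True, temperature=0.9)
print(result[0]["generated_text"])
```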


deepseek-v3.1-is-not-having-a-moment

DeepSeek v3.1 Is Not Having a Moment

What if DeepSeek released a model claiming 66 on SWE and almost no one tried using it? Would it be any good? Would you be able to tell? Or would we get the shortest post of the year?

Why are we settling for v3.1, and why have we yet to see DeepSeek release v4 or r2?

Eleanor Olcott and Zijing Wu: Chinese artificial intelligence company DeepSeek delayed the release of its new model after failing to train it using Huawei’s chips, highlighting the limits of Beijing’s push to replace US technology.

DeepSeek was encouraged by authorities to adopt Huawei’s Ascend processor rather than use Nvidia’s systems after releasing its R1 model in January, according to three people familiar with the matter.

But the Chinese start-up encountered persistent technical issues during its R2 training process using Ascend chips, prompting it to use Nvidia chips for training and Huawei’s for inference, said the people.

The issues were the main reason the model’s launch was delayed from May, said a person with knowledge of the situation, causing it to lose ground to rivals.

The real world so often involves people acting so much stupider than you could write into fiction.

America tried to sell China H20s and China decided they didn’t want them and now Nvidia is halting related orders with suppliers.

DeepSeek says that the main restriction on their development is lack of compute, and the PRC responds not by helping them get better chips but by advising them to not use the chips that they have, greatly slowing things down at least for a while.

In any case, DeepSeek v3.1 exists now, and remarkably few people care?

DeepSeek: Introducing DeepSeek-V3.1: our first step toward the agent era! 🚀

🧠 Hybrid inference: Think & Non-Think — one model, two modes

⚡️ Faster thinking: DeepSeek-V3.1-Think reaches answers in less time vs. DeepSeek-R1-0528

🛠️ Stronger agent skills: Post-training boosts tool use and multi-step agent tasks

Try it now — toggle Think/Non-Think via the "DeepThink" button.

API Update ⚙️

🔹 deepseek-chat → non-thinking mode

🔹 deepseek-reasoner → thinking mode

🧵 128K context for both

🔌 Anthropic API format supported.

✅ Strict Function Calling supported in Beta API.

🚀 More API resources, smoother API experience

Tools & Agents Upgrades 🧰

📈 Better results on SWE / Terminal-Bench

🔍 Stronger multi-step reasoning for complex search tasks

⚡️ Big gains in thinking efficiency

🔹 V3.1 Base: 840B tokens continued pretraining for long context extension on top of V3

🔹 Tokenizer & chat template updated — new tokenizer config.

🔗 V3.1 Base Open-source weights.

🔗 V3.1 Open-source weights.

Pricing Changes 💳

🔹 New pricing starts & off-peak discounts end at Sep 5th, 2025, 16:00 (UTC Time)

🔹 Until then, APIs follow current pricing

📝 Pricing page.

Teortaxes: for now seems to have the same performance ceiling as 0528, maybe a bit weaker on some a bit stronger on other problems. The main change is that it’s a unified merge that uses ≥2x fewer reasoning tokens. I take it as a trial balloon before V4 that’ll be unified out of the box.

There are some impressive scores here. A true 66 on SWE would be very strong.

There’s also the weird result where it is claimed to outscore Opus 4 on Aider Polyglot at a low price.

Wes Roth: DeepSeek has quietly published V 3.1, a 685-billion-parameter open-source model that folds chat, reasoning, and coding into a single architecture, handles 128 k-token context windows, and posts a 71.6 % score on the Aider coding benchmark edging out Claude Opus 4 while costing ~68× less in inference.

But these two data points don’t seem backed up by the other reactions, or especially the lack of other reactions, or some other test results.

Artificial Analysis has it coming in at 60 versus r1’s 59, which would be only a small improvement.

Hasan Can said it hallucinates a lot. Steve Strickland says ‘it’s the worst LLM I’ve ever tried,’ complaining about it failing a mundane task, which presumably was very bad luck.

I tried to conduct Twitter polls, but well over 90% of respondents simply clicked ‘see results,’ which left me with only a handful of real responses and means Lizardman Constant problems and small sample size invalidate the results, beyond confirming no one is looking. The different polls also don’t entirely agree with each other as a result.

If this were most open model companies, I would treat this lack of reaction as indicating there was nothing here, that they likely targeted SWE as a benchmark, and move on.

Since it is DeepSeek, I give them more credit than that, but am still going to assume this is only a small incremental upgrade that does not change the overall picture. However, if v3.1 really did perform at a 66 level in practice, it has been several days now and people would likely be shouting it from the rooftops. They’re not.

Even if no one finds anything to do with it, I don’t downgrade DeepSeek much for 3.1 not impressing compared to if they hadn’t released anything. It’s fine to do incremental improvements. They should do a v3.1 here.

The dumbest style of reaction is when a company offers an incremental improvement (see: GPT-5) and people think that means it’s all over for them, or for AI in general, because it didn’t sufficiently blow them away. Chill out.

It’s also not fair to fully pin this on DeepSeek when they were forced to do a lot of their training this year on Huawei Ascend chips rather than Nvidia chips. Assuming, that is, they are going to be allowed to switch back.

Either way, the clock is ticking on v4 and r2.



scientists-are-building-cyborg-jellyfish-to-explore-ocean-depths

Scientists are building cyborg jellyfish to explore ocean depths

Understanding the wakes and vortices that jellyfish produce as they swim is crucial, according to Wu, et al. Particle image velocimetry (PIV) is a vital tool for studying flow phenomena and biomechanical propulsion. PIV essentially tracks tiny tracer particles suspended in water by illuminating them with laser light. The technique usually relies on hollow glass spheres, polystyrene beads, aluminum flakes, or synthetic granules with special optical coatings to enhance the reflection of light.
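For background on how PIV turns those tracer particles into flow measurements: successive camera frames are divided into small interrogation windows, and each pair of windows is cross-correlated; the location of the correlation peak gives the average particle displacement, and hence the local velocity. Here is a minimal sketch of that core step, as an illustration of the general technique rather than the authors’ code.

```python
import numpy as np

def piv_displacement(win_a: np.ndarray, win_b: np.ndarray) -> tuple[float, float]:
    """Estimate mean particle displacement (dx, dy) between two same-size
    grayscale interrogation windows via FFT-based cross-correlation."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    # Circular cross-correlation via the convolution theorem; the peak
    # location (relative to the window center) is the displacement.
    corr = np.fft.ifft2(np.conj(np.fft.fft2(a)) * np.fft.fft2(b)).real
    corr = np.fft.fftshift(corr)
    peak_y, peak_x = np.unravel_index(np.argmax(corr), corr.shape)
    cy, cx = corr.shape[0] // 2, corr.shape[1] // 2
    return float(peak_x - cx), float(peak_y - cy)

# Example: a synthetic particle image shifted by 3 pixels in x and 1 in y.
rng = np.random.default_rng(0)
frame1 = rng.random((64, 64))
frame2 = np.roll(frame1, shift=(1, 3), axis=(0, 1))
print(piv_displacement(frame1, frame2))  # approximately (3.0, 1.0)
```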

These particles are readily available and have the right size and density for flow measurements, but they are very expensive, costing as much as $200 per pound in some cases. And they have associated health and environmental risks: glass microspheres can cause skin or eye irritation, for example, while it’s not a good idea to inhale polystyrene beads or aluminum flakes. They are also not digestible by animals and can cause internal damage. Several biodegradable options have been proposed, such as yeast cells, milk, micro algae, and potato starch, which are readily available and cheap, costing as little as $2 per pound.

Wu thought starch particles were the most promising biodegradable tracers and decided to study several candidates to identify the best: specifically, corn starch, arrowroot starch, baking powder, jojoba beads, and walnut shell powder. Each type of particle was suspended in water tanks with moon jellyfish, and the team tracked particle movement with a PIV system, evaluating performance based on the particles’ size, density, and laser-scattering properties.

Of the various candidates, corn starch and arrowroot starch proved best suited for PIV applications, thanks to their density and uniform size distribution, while arrowroot starch performed best when it came to laser scattering tests. But corn starch would be well-suited for applications that require larger tracer particles since it produced larger laser scattering dots in the experiments. Both candidates matched the performance of commonly used synthetic PIV tracer particles in terms of accurately visualizing flow structures resulting from the swimming jellyfish.

DOI: Physical Review Fluids, 2025. 10.1103/bg66-976x (About DOIs).


bank-forced-to-rehire-workers-after-lying-about-chatbot-productivity,-union-says

Bank forced to rehire workers after lying about chatbot productivity, union says

As banks around the world prepare to replace many thousands of workers with AI, Australia’s biggest bank is scrambling to rehire 45 workers after allegedly lying about chatbots besting staff by handling higher call volumes.

In a statement Thursday flagged by Bloomberg, Australia’s main financial services union, the Finance Sector Union (FSU), claimed a “massive win” for 45 union members whom the Commonwealth Bank of Australia (CBA) had replaced with an AI-powered “voice bot.”

The FSU noted that some of these workers had been with CBA for decades. Those workers in particular were shocked when CBA announced last month that their jobs had become redundant. At that time, CBA claimed that launching the chatbot supposedly “led to a reduction in call volumes” by 2,000 a week, FSU said.

But “this was an outright lie,” fired workers told FSU. Instead, call volumes had been increasing at the time they were dismissed, with CBA supposedly “scrambling”—offering staff overtime and redirecting management to join workers answering phones to keep up.

To uncover the truth, FSU escalated the dispute to a fair work tribunal, where the union accused CBA of failing to explain how workers’ roles were ruled redundant. The union also alleged that CBA was hiring for similar roles in India, Bloomberg noted, which made it appear that CBA had perhaps used the chatbot to cover up a shady pivot to outsource jobs.

While the dispute was being weighed, CBA admitted that “they didn’t properly consider that an increase in calls” happening while staff was being fired “would continue over a number of months,” FSU said.

“This error meant the roles were not redundant,” CBA confirmed at the tribunal.


trump-confirms-us-is-seeking-10%-stake-in-intel-bernie-sanders-approves.

Trump confirms US is seeking 10% stake in Intel. Bernie Sanders approves.

Trump plan salvages CHIPS Act he vowed to kill

While chipmakers wait for more clarity, Lutnick has suggested that Trump—who campaigned on killing the CHIPS Act—has found a way to salvage the legislation that Joe Biden viewed as his lasting legacy. It seems possible that the plan arose after Trump realized how hard it would be to ax the legislation completely, with grants already finalized (but most not disbursed).

“The Biden administration literally was giving Intel money for free and giving TSMC money for free, and all these companies just giving the money for free, and Donald Trump turned it into saying, ‘Hey, we want equity for the money. If we’re going to give you the money, we want a piece of the action for the American taxpayer,'” Lutnick said.

“It’s not governance, we’re just converting what was a grant under Biden into equity for the Trump administration, for the American people,” Lutnick told CNBC.

Further, US firms could potentially benefit from any such arrangements. For Intel, the “highly unusual” deal that Trump is mulling now could help the struggling chipmaker compete with its biggest rivals, including Nvidia, Samsung, and TSMC, the BBC noted.

Vincent Fernando, founder of the investment consultancy Zero One, told the BBC that taking a stake in Intel “makes sense, given the company’s key role in producing semiconductors in the US,” which is a major Trump priority.

But as Intel likely explores the potential downsides of accepting such a deal, other companies applying for federal grants may already be alarmed by Trump’s move. Fernando suggested that Trump’s deals to take ownership stakes in US firms—which economics professor Kevin J. Fox said previously occurred only during the global financial crisis—could add “uncertainty for any company who is already part of a federal grant program or considering one.”

Fox also agreed that the Intel deal could deter other companies from accepting federal grants, while possibly making it harder for Intel to run its business “effectively.”


physics-of-badminton’s-new-killer-spin-serve

Physics of badminton’s new killer spin serve

Serious badminton players are constantly exploring different techniques to give them an edge over opponents. One of the latest innovations is the spin serve, a devastatingly effective method in which a player adds a pre-spin just before the racket contacts the shuttlecock (aka the birdie). It’s so effective—some have called it “impossible to return“—that the Badminton World Federation (BWF) banned the spin serve in 2023, at least until after the 2024 Paralympic Games in Paris.

The sanction wasn’t meant to quash innovation but to address players’ concerns about the possible unfair advantages the spin serve conferred. The BWF thought that international tournaments shouldn’t become the test bed for the technique, which is markedly similar to the previously banned “Sidek serve.” The BWF permanently banned the spin serve earlier this year. Chinese physicists have now teased out the complex fundamental physics of the spin serve, publishing their findings in the journal Physics of Fluids.

Shuttlecocks are unique among the various projectiles used in different sports due to their open conical shape. Sixteen overlapping feathers protrude from a rounded cork base that is usually covered in thin leather. The birdies one uses for leisurely backyard play might be synthetic nylon, but serious players prefer actual feathers.

Those overlapping feathers give rise to quite a bit of drag, such that the shuttlecock will rapidly decelerate as it travels and its parabolic trajectory will fall at a steeper angle than its rise. The extra drag also means that players must exert quite a bit of force to hit a shuttlecock the full length of a badminton court. Still, shuttlecocks can achieve top speeds of more than 300 mph. The feathers also give the birdie a slight natural spin around its axis, and this can affect different strokes. For instance, slicing from right to left, rather than vice versa, will produce a better tumbling net shot.


Chronophotographies of shuttlecocks after an impact with a racket. Credit: Caroline Cohen et al., 2015

The cork base makes the birdie aerodynamically stable: No matter how one orients the birdie, once airborne, it will turn so that it is traveling cork-first and will maintain that orientation throughout its trajectory. A 2015 study examined the physics of this trademark flip, recording flips with high-speed video and conducting free-fall experiments in a water tank to study how its geometry affects the behavior. The latter confirmed that shuttlecock feather geometry hits a sweet spot in terms of an opening inclination angle that is neither too small nor too large. And they found that feather shuttlecocks are indeed better than synthetic ones, deforming more when hit to produce a more triangular trajectory.


tiny,-removable-“mini-ssd”-could-eventually-be-a-big-deal-for-gaming-handhelds

Tiny, removable “mini SSD” could eventually be a big deal for gaming handhelds

The Mini SSD card isn’t and may never be a formally ratified standard, but it does aim to solve a real problem for portable gaming systems—the need for fast storage that can load games at speeds approaching those of an internal SSD, without requiring users to take their own systems apart to perform upgrades.

Why are games getting so dang big, anyway?

Big storage, small size. Credit: Biwin

A 2023 analysis from TechSpot suggested that game size had increased at an average rate of roughly 6.3GB per year between 2012 and 2023—games that come in over 100GB aren’t the norm, but they aren’t hard to find. Some of that increase comes from improved graphics and the higher-resolution textures needed to make games look good on 4K monitors and TVs. But TechSpot also noted that the storage requirements for narrative-heavy, cinematic-heavy games like The Last of Us Part 1 were being driven just as much by audio files and support for multiple languages.

“In total, nearly 17 GB of storage is needed for [The Last of Us] data unrelated to graphics,” wrote author Nick Evanson. “That’s larger than any entire game from our 2010 sample! This pattern was consistent across nearly all the ‘Godzilla-sized’ games we examined—those featuring numerous cinematics, extensive speech, and considerable localization were typically much larger than the rest of the sample in a given year.”

For another prominent recent example, consider the install sizes for the Mac version of Cyberpunk 2077. The version of the game on Steam, the Epic Games Store, and GOG runs about 92GB. However, the version available for download from Apple’s App Store is a whopping 159GB, solely because it includes all of the game’s voiceovers in all of the languages it supports. (This is because of App Store rules that require apps to have all possible files included when they’re submitted for review.)

It’s clear that there’s a need for fast storage upgrades that don’t require you to disassemble your console or PC completely. Whether it’s this new “mini SSD,” a faster iteration of microSD Express, or some other as-yet-unknown storage format remains to be seen.


starlink-tries-to-block-virginia’s-plan-to-bring-fiber-internet-to-residents

Starlink tries to block Virginia’s plan to bring fiber Internet to residents

Noting that its “project areas span from mountains and hills to farmland and coastal plains,” the DHCD said its previous experience with grant-funded deployments “revealed that tree canopy, rugged terrain, and slope can complicate installation and/or obstruct line-of-sight.” State officials said that wireless and low-Earth orbit satellite technology “can have signal degradation, increased latency, and reduced reliability” when there isn’t a clear line of sight.

The DHCD said it included these factors in its evaluation of priority broadband projects. State officials were also apparently concerned about the network capacity of satellite services and the possibility that using state funding to guarantee satellite service in one location could reduce availability of that same service in other locations.

“To review a technology’s ability to scale, the Office considered the currently served speeds of 100/20 Mbps, an application’s stated network capacity, the project area’s number of [locations], the project area’s geographic area, current customer base (if applicable), and future demand,” the department said. “For example, the existing customer base should not be negatively impacted by the award of BEAD locations for a given technology to be considered scalable.”

SpaceX: “Playing field was anything but level”

SpaceX said Virginia is wrong to determine that Starlink “did not qualify as ‘Priority Broadband,'” since the company “provided information demonstrating these capabilities in its application, and it appears that Virginia used this definition only as a pretext to reach a pre-ordained outcome.” SpaceX said that 95 percent of funded “locations in Virginia have an active Starlink subscriber within 1 mile, showing that Starlink already serves every type of environment in Virginia’s BEAD program today” and that 15 percent of funded locations have an active Starlink subscriber within 100 meters.

“The playing field was anything but level and technology neutral, as required by the [updated program rules], and was instead insurmountably stacked against low-Earth orbit satellite operators like SpaceX,” the company said.

We contacted the Virginia DHCD about SpaceX’s comments today and will update this article if the department provides a response.
