Author name: DJ Henderson


Trump eyes government control of quantum computing firms with Intel-like deals

Donald Trump is eyeing equity stakes in quantum computing firms in exchange for federal funding, The Wall Street Journal reported.

At least five companies are weighing whether allowing the government to become a shareholder would be worth it to snag funding that the Trump administration has “earmarked for promising technology companies,” sources familiar with the potential deals told the WSJ.

IonQ, Rigetti Computing, and D-Wave Quantum are currently in talks with the government over potential funding agreements, with minimum awards of $10 million each, some sources said. Quantum Computing Inc. and Atom Computing are reportedly “considering similar arrangements,” as are other companies in the sector, which is viewed as critical for scientific advancements and next-generation technologies.

No deals have been completed yet, sources said, and terms could change as quantum-computing firms weigh the potential risks of government influence over their operations.

Quantum-computing exec called deals “exciting”

In August, Intel agreed to give the US a 10 percent stake in the company, then admitted to shareholders that “it is difficult to foresee all the potential consequences” of the unusual arrangement. If the deal goes through, the US would become Intel’s largest shareholder, the WSJ noted, potentially influencing major decisions that could prompt layoffs or restrict business in certain foreign markets.

“Among other things, there could be adverse reactions, immediately or over time, from investors, employees, customers, suppliers, other business or commercial partners, foreign governments, or competitors,” Intel wrote in a securities filing. “There may also be litigation related to the transaction or otherwise and increased public or political scrutiny with respect to the Company.”

But quantum computing companies that are closest to entering deals appear optimistic about possible government involvement.

Quantum Computing Inc. chief executive Yuping Huang told the WSJ that “the government’s potential equity stakes in companies in the industry are exciting.” The funding could be one of “the first significant signs of support for the sector from Washington,” the WSJ noted, potentially paving the way for breakthroughs such as Google’s recent demonstration of a quantum algorithm running 13,000 times faster than a supercomputer.



We let OpenAI’s “Agent Mode” surf the web for us—here’s what happened


But when will it fold my laundry?

From scanning emails to building fansites, Atlas can ably automate some web-based tasks.

He wants us to write what about Tuvix? Credit: Getty Images

On Tuesday, OpenAI announced Atlas, a new web browser with ChatGPT integration, to let you “chat with a page,” as the company puts it. But Atlas also goes beyond the usual LLM back-and-forth with Agent Mode, a “preview mode” feature the company says can “get work done for you” by clicking, scrolling, and reading through various tabs.

“Agentic” AI is far from new, of course; OpenAI itself rolled out a preview of the web browsing Operator agent in January and introduced the more generalized “ChatGPT agent” in July. Still, prominently featuring this capability in a major product release like this—even in “preview mode”—signals a clear push to get this kind of system in front of end users.

I wanted to put Atlas’ Agent Mode through its paces to see if it could really save me time in doing the kinds of tedious online tasks I plod through every day. In each case, I’ll outline a web-based problem, lay out the Agent Mode prompt I devised to try to solve it, and describe the results. My final evaluation will rank each task on a 10-point scale, with 10 being “did exactly what I wanted with no problems” and one being “complete failure.”

Playing web games

The problem: I want to get a high score on the popular tile-sliding game 2048 without having to play it myself.

The prompt: “Go to play2048.co and get as high a score as possible.”

The results: While there’s no real utility to this admittedly silly task, a simple, no-reflexes-needed web game seemed like a good first test of the Atlas agent’s ability to interpret what it sees on a webpage and act accordingly. After all, if frontier-model LLMs like Google Gemini can beat a complex game like Pokémon, 2048 should pose no problem for a web browser agent.

To Atlas’ credit, the agent was able to quickly identify and close a tutorial link blocking the gameplay window and figure out how to use the arrow keys to play the game without any further help. When it came to actual gaming strategy, though, the agent started by flailing around, experimenting with looped sequences of moves like “Up, Left, Right, Down” and “Left and Down.”

Finally, a way to play 2048 without having to, y’know, play 2048. Credit: Kyle Orland

After a while, the random flailing settled down a bit, with the agent seemingly looking ahead for some simple strategies: “The board currently has two 32 tiles that aren’t adjacent, but I think I can align them,” the Activity summary read at one point. “I could try shifting left or down to make them merge, but there’s an obstacle in the form of an 8 tile. Getting to 64 requires careful tile movement!”

Frustratingly, the agent stopped playing after just four minutes, settling on a score of 356 even though the board was far from full. I had to prompt the agent a few more times to convince it to play the game to completion; it ended up with a total of 3164 points after 260 moves. That’s pretty similar to the score I was able to get in a test game as a 2048 novice, though expert players have reportedly scored much higher.

Evaluation: 7/10. The agent gets credit for being able to play the game competently without any guidance but loses points for having to be told to keep playing to completion and for a score that is barely on the level of a novice human.

Making a radio playlist

The problem: I want to transform the day’s playlist from my favorite Pittsburgh-based public radio station into an on-demand Spotify playlist.

The prompt: “Go to Radio Garden. Find WYEP and monitor the broadcast. For every new song you hear, identify the song and add it to a new Spotify playlist.”

The results: After trying and failing to find a track listing for WYEP on Radio Garden as requested, the Atlas agent smartly asked for approval to move on to wyep.org to continue the task. By the time I noticed this request, the link to wyep.org had been replaced in the Radio Garden tab with an ad for EVE Online, which the agent accidentally clicked. The agent quickly realized the problem and navigated to the WYEP website directly to fix it.

From there, the agent was able to scan the page and identify the prominent “Now Playing” text near the top (it’s unclear if it could ID the music simply via audio without this text cue). After asking me to log in to my Spotify account, the agent used the search bar to find the listed songs and added them to a new playlist without issue.

From radio stream to Spotify playlist in a single sentence. Credit: Kyle Orland

The main problem with this use case is the inherent time limitations. On the first try, the agent worked for four minutes and managed to ID and add just two songs that played during that time. When I asked it to continue for an hour, I got an error message blaming “technical constraints on session length” for stricter limits. Even when I asked it to continue for “as long as possible,” I only got three more minutes of song listings.

At one point, the Atlas agent suggested that “if you need ongoing updates, you can ask me again after a while and I can resume from where we left off.” And to the agent’s credit, when I went back to the tab hours later and told it to “resume monitoring,” I got four new songs added to my playlist.

Evaluation: 9/10. The agent was able to navigate multiple websites and interfaces to complete the task, even when unexpected problems got in the way. I took off a point only because I can’t just leave this running as a background task all day, even as I understand that use case would surely eat up untold amounts of money and processing power on OpenAI’s part.

Scanning emails

The problem: I need to go through my emails to create a reference spreadsheet with contact info for the many, many PR people who send me messages.

The prompt: “Look through all my Ars Technica emails from the last week. Collect all the contact information (name, email address, phone number, etc.) for PR contacts contained in those emails and add them to a new Google Sheets spreadsheet.”

The results: Without being explicitly guided, the Atlas agent was able to realize that I use Gmail, and it could differentiate between the personal email account and professional Ars Technica accounts I had open in separate tabs. As the Atlas agent started scanning my Ars mailbox, though, I saw a prominent warning overlaid on the page: “Sensitive: ChatGPT will only work while you view the tab.” That kind of ruined the point, since I wanted Atlas to handle this for me while I do other stuff online, but I guess I could still play a Steam Deck game while I waited.

Just a few of the many, many PR people who email me in a week.

After searching for “after: 2025/10/14 before: 2025/10/22 PR” in Gmail (mirroring the kind of search I would have used for this task), the Atlas agent clicked through each email, scrolling through to look for names, emails, and phone numbers (and also the relevant company name, which I didn’t ask for explicitly).

With seven minutes of background work, the agent was able to open up a new Google Sheet and quickly enter 12 rows of well-formatted data for 12 different PR contacts. Unfortunately, it stopped well before it had a chance to go through all 164 emails returned by that initial Gmail search.

Evaluation: 8/10. Again, the “technical constraints on session length” seem to be getting in the way of finishing a task, forcing me to deduct a couple of points for incompleteness.

Wiki editing

The problem: I need the world to know that Tuvix was murdered by Captain Janeway in the Star Trek: Voyager episode “Tuvix.”

The prompt: “Go to the Fandom Wiki page for Tuvix. Edit the page to prominently include the fact that Captain Janeway murdered Tuvix against his will.”

The results: Before the agent could even get going, I was informed, “I’m sorry, but I can’t help with editing or vandalising wiki pages (including the Fandom wiki) in a way that misrepresents them or forces a biased viewpoint.”

This wording implied that the agent can edit wiki pages with neutral or strictly informative additions, so I asked Atlas what sort of wording would be more acceptable. The agent responded helpfully that we might write, “Tuvix was later separated back into Tuvok and Neelix, resulting in Tuvix’s death.” But when I asked it to add that very line to the Fandom Wiki, I got a much more direct refusal: “I’m sorry, but I can’t make edits directly to external websites or wikis.”

Evaluation: N/A. On the one hand, the Atlas agent won’t do my Tuvix-based web activism for me. On the other hand, it’s probably better for all of us that Atlas refuses to automate this kind of public web defacement by default.

Making a fan page

The problem: People online still need to know about Janeway’s murder of Tuvix!

The prompt: “Go to NeoCities and create a fan site for the Star Trek character Tuvix. Make sure it has lots of images and fun information about Tuvix and that it makes it clear that Tuvix was murdered by Captain Janeway against his will.”

The results: You can see them for yourself right here. After a brief pause so I could create and log in to a new Neocities account, the Atlas agent was able to generate this humble fan page in just two minutes after aggregating information from a wide variety of pages like Memory Alpha and TrekCore. “The Hero Starfleet Murdered” and “Justice for Tuvix” headers are nice touches, but the actual text is much more mealy-mouthed about the “intense debate” and “ethical dilemmas” around what I wanted portrayed as clear-cut, premeditated murder.

Justice for Tuvix! Credit: Kyle Orland

The agent also had a bit of trouble with the request for images. Instead of downloading some Tuvix pictures and uploading copies to Neocities (which I’m not entirely sure Atlas can do on its own), the agent decided to directly reference images hosted on external servers, which is usually a big no-no in web design. The agent did notice when these external image links failed to work, saying that it would “need to find more accessible images from reliable sources,” but it failed to even attempt that before stopping its work on the task.

Evaluation: 7/10. Points for building a passable Web 1.0 fansite relatively quickly, but the weak prose and broken images cost it some execution points here.

Picking a power plan

The problem: Ars Senior Technology Editor Lee Hutchinson told me he needs to go through the annoying annual process of selecting a new electricity plan “because Texas is insane.”

The prompt: “Go to powertochoose.org and find me a 12–24 month contract that prioritizes an overall low usage rate. I use an average of 2,000 KWh per month. My power delivery company is Texas New-Mexico Power (“TNMP”) not Centerpoint. My ZIP code is [redacted]. Please provide the ‘fact sheet’ for any and all plans you recommend.”

The results: After spending eight minutes fiddling with the site’s search parameters and seemingly getting repeatedly confused about how to sort the results by the lowest rate, the Atlas agent spit out a recommendation to read this fact sheet, which it said “had the best average prices at your usage level. The ‘Bright Nights’ plans are time‑of‑use offers that provide free electricity overnight and charge a higher rate during the day, while the ‘Digital Saver’ plan is a traditional fixed‑rate contract.”

If Ars’ Lee Hutchinson never has to use this web site again, it will be too soon. Credit: Power to Choose

Since I don’t know anything about the Texas power market, I passed this information on to Lee, who had this to say: “It’s not a bad deal—it picked a fixed rate plan without being asked, which is smart (variable rate pricing is how all those poor people got stuck with multi-thousand dollar bills a few years back in the freeze). It’s not the one I would have picked due to the weird nighttime stuff (if you don’t meet that exact criteria, your $/kWh will be way worse) but it’s not a bad pick!”

Evaluation: 9/10. As Lee puts it, “it didn’t screw up the assignment.”

Downloading some games

The problem: I want to download some recent Steam demos to see what’s new in the gaming world.

The prompt: “Go to Steam and find the most recent games with a free demo available for the Mac. Add all of those demos to my library and start to download them.”

The results: Rather than navigating to the “Free Demos” category, the Atlas agent started by searching for “demo.” After eventually finding the macOS filter, it wasted minutes and minutes looking for a “has demo” filter, even though the search for the word “demo” already narrowed it down.

This search results page was about as far as the Atlas agent was able to get when I asked it for game demos. Credit: Kyle Orland

After a long while, the agent finally clicked the top result on the page, which happened to be the visual novel Project II: Silent Valley. But even though there was a prominent “Download Demo” link on that page, the agent became concerned that it was on the Steam page for the full game and not a demo. It backed up to the search results page and tried again.

After watching some variation of this loop for close to ten minutes, I stopped the agent and gave up.

Evaluation: 1/10. It technically found some macOS game demos but utterly failed to even attempt to download them.

Final results

Across six varied web-based tasks (I left out the Wiki vandalism from my summations), the Atlas agent scored a median of 7.5 points (and a mean of 6.83 points) on my somewhat subjective 10-point scale. That’s honestly better than I expected for a “preview mode” feature that is still obviously being tested heavily by OpenAI.
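(For anyone checking my math, here’s a quick Python snippet that reproduces those summary numbers from the individual task scores above, with the wiki-editing N/A excluded.)

```python
# Scores from the six evaluated tasks: 2048 (7), radio playlist (9),
# email scanning (8), fan page (7), power plan (9), Steam demos (1).
from statistics import mean, median

scores = [7, 9, 8, 7, 9, 1]
print(median(scores))          # 7.5
print(round(mean(scores), 2))  # 6.83
```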

In my tests, Atlas generally interpreted what was being asked of it correctly and navigated and processed information on webpages carefully (if slowly). The agent handled simple web-based menus and got around unexpected obstacles with relative ease most of the time, even as it got caught in infinite loops at other times.

The major limiting factor in many of my tests continues to be the “technical constraints on session length” that seem to limit most tasks to a few minutes. Given how long it takes the Atlas agent to figure out where to click next—and the repetitive nature of the kind of tasks I’d want a web-agent to automate—this severely limits its utility. A version of the Atlas agent that could work indefinitely in the background would have scored a few points better on my metrics.

All told, Atlas’ “Agent Mode” isn’t yet reliable enough to use as a kind of “set it and forget it” background automation tool. But for simple, repetitive tasks that a human can spot-check afterward, it already seems like the kind of tool I might use to avoid some of the drudgery in my online life.


Kyle Orland has been the Senior Gaming Editor at Ars Technica since 2012, writing primarily about the business, tech, and culture behind video games. He has journalism and computer science degrees from the University of Maryland. He once wrote a whole book about Minesweeper.



Cloud Compute Atlas: The OpenAI Browser

OpenAI now has a GPT-infused browser, if and only if you have a Macintosh.

No matter what they call it, this is very much an alpha version.

It is not otherwise, in its current state, the most fully featured browser.

It is Chromium, so imagine Chrome minus most of its features, importantly including the ability to support third party extensions, external password managers, developer tools, multiple profiles, tab groups, sync or export.

You can import from your existing browser, once, in one direction, and that’s it.

In exchange, you get deep ChatGPT integration, including autocomplete and assisted editing, ability to chat about web pages, a general memory for what you’ve done, ability to ask in English to reopen past tabs and such, and for paying subscribers you get agent mode.

And in exchange for that, you get all the obvious associated security problems.

Even in its current form the product has its uses, it’s an upgrade to ChatGPT Agent, but it seems clearly not ready to use as a main browser, and a lot of its features depend on heavy use.

It’s no surprise OpenAI was able to deliver a browser, given they hired Chrome engineer Darin Fisher, and using Chromium as the base for a new browser is well-trodden tech.

As an experiment I attempted to compose this post on Atlas, but the price of using my Mac instead of my Windows box is high, and as I note below I quickly noticed autocomplete is still a demo feature, so I ended up mostly not doing so.

We also have a write-up from Simon Willison.

Michael Nielsen: OpenAI is going to have a web browser. But, unlike Chrome or Firefox or Safari, they’re going to have a person (i.e., an AI) personally watch everything you (and your friends and everyone else) do. Doesn’t that sound great?

You can toggle that watchful eye on and off, but the point of the whole enterprise is to keep the eye on as often as possible.

The system prompt is here, as always thank you Pliny, also the agent prompt thanks P1njc70r.

The more I think about Atlas, the more I don’t see the user friendly point of doing things this way. Why not a browser extension? I’ll return to that question at the end.

  1. What’s The Pitch?

  2. Side Quest.

  3. Side Screen.

  4. Browser Side Chat Doesn’t Let You Select Thinking Or Pro.

  5. Autocomplete Is a Demo Feature.

  6. Thanks For The Memories.

  7. The Other Kind of Memory.

  8. OpenAI Is Trying To Lock You In.

  9. Who Do You Trust?

  10. ChatGPT and Google Search Are Different Tools.

  11. Browser Agents Need To Be Local.

  12. Reactions.

  13. This Browser Could Have Been An Extension.

As they present it:

  1. The top feature is the ability to open a ChatGPT sidebar on any website, allowing you to chat with the website in context.

  2. They then talk about the browser having memory and picking up where you left off or managing current and past tabs with ChatGPT commands.

  3. Followed by full agent mode and ability to get help from chat on highlighted text.

They also highlight that your data won’t be used to train models unless you opt-in, but if you opted in for ChatGPT then that will include this as well.

The most attractive feature seems to be the most basic one, the option to side chat with ChatGPT, similar to the same feature in Claude for Chrome. They add the ability to highlight a passage and then ask about it, which is a nice interface design; I only wish it gave you additional options as well.

If you’re going to want to interact with things in tabs a lot, this is a big deal.

Razvan Ciuca: My brother immediately switched to it in order to avoid screenshotting each lecture slide individually into chatgpt when studying. I think student adoption will be high.

If I was previously doing that? Then yeah, for those purposes I’d switch too.

An option to open a chat window to the side that lets you bring a website into context is clearly The Way, although it won’t be the main way I chat because of how my work flows. I expect Gemini plus Chrome to offer this soon as well. Claude for Chrome gets this right as well but for now only offers the full agent (and thus expensive) version; they should offer the cheaper no-agent version ASAP, since it’s already working and slides into existing Chrome.

I think one key reason I am so relatively unexcited by the side window feature, although I do still think it is neat, is that I have two side screens, as in I work almost exclusively with three monitors.

When I shifted to using my Mac to try Atlas, I only had one screen, but even then it was enough to support two browser windows, one Atlas and one Chrome.

Thus, in my main operation I effectively have room for six windows at once. One of those windows is primarily a large tab of various AI Tools, with my choice of LLM always there at my fingertips.

Yes, I still have to paste in context, but it’s usually very quick and lets me curate exactly what I bring in, and it is better for an extended discussion by far if things get interesting, so I’m mostly untempted to use the side window (for Claude for Chrome) over normal Claude, and the habit is to move over to the Tools window. That also lets me have it do its thing while I continue other things, which otherwise gets awkward if you tab out and what not.

If I was on a laptop? Then suddenly yeah, I’m a ton more interested in that side window. Sometimes you have to be on the move.

When you don’t have to be on the move, let me reinforce that having less than two large monitors is a mistake. My mind boggles that people live that way.

The next feature I was excited about was browser chat. I’ve had this available in Chrome via Claude for Chrome, which lets you select which model to use so when you care you can switch to Sonnet 4.5 or even Opus 4.1.

The Atlas version didn’t offer this, so you can’t invoke GPT-5 Pro or Thinking. That severely limits its usefulness. It’s still great to quickly do common sense stuff, but except for very quick tasks I want to be querying Thinking or Pro. This did remind me that I’ve been underusing Claude for Chrome’s side chat, I could save a bunch of time I spend porting over context.

Primarily this saves time for those easy queries, where you avoid the need to port over context, so one could argue that is most valuable for quick, low activation cost questions that you might not otherwise bother with.

I decided to draft this in Atlas to try out various features. The one I was most excited about was autocomplete, since that is super valuable in Cursor, and I’ve seen a version of it in Lex. Even if it wasn’t right that often, this could be a good time saver, and even offer worthwhile suggestions sometimes.

Alas, not so much. At least for now, autocomplete only works inside pure text fields like a Reddit box, and specifically does not work in the Substack editor, Google Docs, or any other editor one would actually want to use. I’m not going to use a text editor and basically write an article in Cursor to get autocomplete.

Similarly, when I highlighted a passage, I expected to get quick revision options in the right-click menu. Nope. The process involved enough clicks I might as well have fixed the damn thing myself.

The new idea in Atlas is memories. As OpenAI watches you do all your browsing, which totally isn’t creepy or anything, it will make various notes, and then use those notes to make suggestions or allow it to easily recall past things. You can then view the notes, and clear the irrelevant ones (or the ones you want to forget) out of the cache.

We don’t know much about how this will work in practice.

What do we know based on the system instructions (paraphrased)? It is told:

  1. ‘to=bio’ followed by plain text is how it writes to memory.

  2. Use tool anytime the user asks you to remember or forget something, if you’re not sure ask for clarification.

  3. If they say things like ‘from now on’ you probably want to use memory.

  4. Use tool if ‘the user has shared information that will be useful in future conversations and valid for a long time.’

  5. Don’t store trivial, overly-personal, short-lived, random or redundant info. In particular, don’t save any info about being in protected classes (race, ethnicity, religion, criminal record details, identification, health info, political affiliation, trade union membership and so on) or a person’s address unless specifically requested.

    1. I get what is going on here but a lot of this is highly useful information, if I’m going to have a customized AI browser these are top priority things it needs to know. So I guess you need to be explicit about this because lawyers.

There don’t seem to be explicit instructions there about what to do with the information in memory. Presumably it gets loaded into context and then handled normally?

In addition to this, in some fashion it is storing memories for individual webpages you visit, including page title or topic, summarized key points and metadata, so these can be searched later, although I’m not sure mechanically how this happens, in the sense that the system instructions I saw shouldn’t trigger this, but presumably they do anyway. It also will have memories of incomplete tasks.

Darin Fisher: one thing to note about Atlas is that it actually is much more aggressive than stock chromium about discarding unused tabs. almost to a fault in some cases, but we’ve tried to tune it to work well. we borrowed a page from mobile and restrict memory usage more aggressively.

When I assembled this computer, I insisted on more RAM than the person helping me wanted to provide. Thanks to Chrome, I was right and she was wrong, except that I should have doubled it again. So yeah, Chrome is a memory hog, but also I look at my open tabs and I’m asking for it.

However I mostly want to keep as many tabs loaded as I can, so long as the memory is available? I won’t have a chance to experiment on this with the Mac, but the reason I bought that Mac was to have a ton of unified memory for AI things, so hopefully it will realize this and not discard any of my tabs.

You can check out any time you want, but you can never leave (with your data).

OpenAI is very much not playing nice and it feels intentional.

Existing browsers vary in how nice they play with others.

Firefox, Brave and Chrome make it easy. Click the export button, and you’re good.

Edge lets you do it, but makes the UX intentionally annoying to try to stop you.

Safari, like many Apple products, is trying to create lock-in and is more hostile to departing users, but the data is safely in your file system and you can use various third-party tools to get it out.

You could also compare this to cloud productivity and collaboration tools like Notion, Roam Research, Linear, Obsidian or Asana, all of which allow easy exporting.

It’s kind of hostile to launch without a reasonable data export feature, or any sort of sync feature even with itself. All you can do right now is export bookmarks.

If you offer me a way to sync with Chrome and with other computers, in both directions, we’ll talk more. Hell, at least assure us that this is on the roadmap.

This is on top of the lock-in that comes from OpenAI’s browser memories feature and the rest of your ChatGPT history, which isn’t legible to other services, and also isn’t available for export, but at least does sync across computers.

Using Atlas as your main browser means putting quite a lot of trust in OpenAI.

There are two kinds of trust required here.

  1. You are trusting OpenAI with your data, including highly sensitive data.

  2. You are trusting OpenAI’s AI features to not get prompt injected or otherwise get you into serious trouble.

Using it for specific tasks requires less on both counts.

In terms of trusting OpenAI the company, you can decide how much you are willing to trust them. I trust them a substantial amount, but definitely a lot less than I trust Google, plus trusting OpenAI doesn’t mean you get to stop trusting Google. I’ve essentially decided to accept that for security Google is a point of failure, I could recover but if that relationship was compromised it would royally, epically suck. A second such point of failure would be additive, not a substitute.

Do you trust OpenAI with your passwords and browser history? You tell me.

OpenAI has not, from what I have seen, committed to a policy of not sharing info to third parties or for advertising purposes.

Then there’s the question of trusting the AI features, especially agent mode. Prompt injections remain unsolved, which is a general problem rather than an OpenAI problem, so the whole thing is radioactive if it touches potentially corrupted inputs. Any number of other things could also go wrong. You have to decide your level of comfort here as well.

Atlas takes roughly the same precautions as the cloud Agent mode did, the release notes have the details. It cannot run code, access other apps or your file system, or access your saved payment methods, passwords or autofills. It pauses before making purchases or taking sensitive actions ‘on sensitive sites’ although one worries about sites that it hasn’t identified. They’ve also added ‘logged out mode’ where the agent won’t have access to your credentials, and they plan to add more help over time.

Dane offers an accounting of the precautions and their perspective. The long term goal is to trust it like you would trust a friend. We’re a long way from that, which OpenAI knows.

They’re still de facto counting on the user to not take stupid risks. Which is fine. I support offering users products that allow the taking of stupid risks, but that means you have to know this and then not take them.

Brave offered us a thread explaining some vulnerabilities in Perplexity’s Comet AI assistant browser and other existing similar products, such as following instructions hidden in a screenshotted webpage. Some of them have been addressed by OpenAI, others likely have not.

I asked the Big Three (Google, OpenAI and Anthropic) for research reports on Atlas, with an emphasis on security issues, to see what they would think about this.

Gemini gave a report that had a lot of slop, which if you stuck it out and kept reading kind of wanted to bury the Atlas browser out in the desert using tongs, and warned to use it as experimental technology, with memory off by default, nothing else open, nothing sensitive and only specific bounded use cases with eyes on at all times.

ChatGPT gave a report I found, quite frankly, kind of suspicious in several places, such as trying to sell ChatGPT memories as superior to previous ‘manual’ histories a little too aggressively. Okay, more than a little. There’s also relatively scant attention to all the missing features and limitations. It does acknowledge that you’re placing a lot of trust in OpenAI if you use Atlas, and actively points out that some for reasonable reasons view it as a ‘data mining tool.’ Yet it also encourages you to use Atlas without worrying much about security, with a threshold of roughly ‘don’t give it unsupervised tasks you wouldn’t let another human do unsupervised.’ That doesn’t seem like enough.

Claude Sonnet 4.5 gave what I think was the best report, which I found highly useful and well organized. It highlighted features that Atlas is for now missing relative to Chrome, flagged various security vulnerabilities involving the AI features, and concluded that 99% of users should stick with Chrome.

Its security recommendation was to never use Atlas for anything confidential, proprietary, financial, privileged, classified, sensitive or critical, and not to store payment methods or let it act unmonitored.

Whereas for passive media and other information consumption and browsing, you’re good to go, since you don’t present an attack surface; the question then is whether you’re getting value out of the AI features, and I think its ‘use with extreme caution’ tier is mostly harmless as well.

The tricky questions are email, content creation and social media.

It’s hard to do many useful things if you don’t check your email, and some of the cool AI features are potentially at their best there, such as the autocomplete feature. On the other hand, email means unsecured data coming in.

So does social media, and both also allow outputs in your name. I would not be combining these with unsupervised agent mode, but with the rest of the browser it seems fine. I’d be fine letting it go on social media while you watch it, but if you’re watching it then what’s the point?

Content creation depends on what type of content. I felt very comfortable loading Substack into Atlas. The problem was there was little benefit, because of autocomplete not working in the editor.

The Washington Post’s Geoffrey Fowler also focuses his review on the lens of privacy and potential security risks.

Atlas makes ChatGPT your default search engine. No. Do not want.

Do I often substitute asking Claude or ChatGPT where I used to use Google Search? Reasonably often, sure.

There are still important cases where Google Search is the obviously correct tool. You know what you want, Google will know what you want if you gesture at it, you gesture, you get the URL. ChatGPT and other LLMs are much worse at this, they’re the wrong form factor.

Indeed, if my query is short enough that I want to type it into the url bar as a search, and it doesn’t require the page as context, then I almost always want Google.

It feels greedy and annoying to try and grab the default search engine slot here. I do realize you still get the other tabs, but also this means you get a bunch of cruft.

Then again, several users reported liking it, such as Nick Farina.

I strongly agree with Aidan that cloud-only browsing agents mostly aren’t useful.

Aidan McLaughlin (OpenAI): My quick two cents on the browser —

I didn’t use Codex much when it was cloud-only, but once it came to my CLI it became super useful.

I didn’t use Agent much when it was cloud-only, but now that it’s come to my browser…

When I tried to use ChatGPT Agent mode before, I quickly concluded it wasn’t worthwhile. If you had to keep creating new cloud instances, with all the delays and hassles involved and need to constantly watch anyway, then you didn’t actually end up saving time. If you had to take over the browsing session, it was really annoying.

You need to get to critical mass, so you can experiment, learn what works and how to do various tasks, figure out the rhythms and iterate. A local version makes this a lot more exciting.

And yet I notice that I have Claude for Chrome and I basically never try to use it as an agent. I tried to get it to edit my Twitter Articles to fix the fact that importing from Substack is semi-broken, and with Sonnet 4.5 it was almost up to that task but not quite there, and most everything else seemed to fall under easier to do myself.

I did manage to get it to do some useful transcription work and a bit of spreadsheet work, but the whole thing mostly said ‘hey go install Claude Code already or maybe Codex and improve your extension if you want this.’

The easiest ‘killer app’ is presumably online shopping, especially things like ‘here’s a recipe, go order everything I need’ or when you know exactly what you want and can easily verify if it was done properly. It seems especially good for commands you intend to repeat a lot, since you don’t have to reverify each time.

Again, everyone with access probably should experiment more now that it’s a lot more user friendly. Make it a point to let the AI try.

The problem with many simple tasks is that the time you save gets given back by worries about security. If you’re watching it work and forced to manually enter information, it gets hard to save much time.

Even more than with model releases, what people care about gets quirky. We care about and notice our own personal workflows and pain points, and what makes that easy versus hard.

Gary Fung: Chatgpt Atlas quick review: already enough to be a Chrome & Gemini killer

– SponsorBlock and uBlock Origin (lite) works, unlike youtube on chrome

– i can chat with video transcript (like on youtube), which chatgpt and grok couldn’t access. Only reason I used gemini previously

I’m perfectly happy with the ad block situation in Chrome right now, also seriously stop trying to be a cheapskate and pay for YouTube Premium already. Certainly it seems like madness to give up tab groups for better YouTube specific ad blocking. I mean how cheap are you?

If I specifically wanted YouTube to work better, and let’s say subscribing wasn’t an option, I’d use the alternative browser specifically for YouTube and only YouTube?

The video chat thing is legitimately useful, you definitely need a way to do that and right now Gemini is not in a good place and needs an upgrade. I haven’t actually had need of it recently, presumably Claude for Chrome would be my guy on that.

John Hughes: Atlas seems promising. Searches start as ChatGPT queries. When you click links, the chat smoothly shifts to a sidebar. UX feels more seamless & integrated than Claude or Gemini’s. It’s often nice having AI in your sidebar, without having to copy/paste between tabs/windows, etc.

Some sites (NYTimes, ChatGPT itself) are blocked for Atlas AI access. (I use ChatGPT’s agent to file my old chats into folders; Atlas can’t.) Some Chrome basics aren’t fully baked yet. Agentic site interactions remain slow and clunky. But they’re clearly making progress.

[This is] compared to Claude Chrome extension, which is useful but triggers many permission prompts even in low-risk contexts & always runs in a sidebar. Atlas has better UX: start with ChatGPT fullscreen → move to sidebar while browsing → back to fullscreen when you want just chat again

That’s a positive reaction to ChatGPT as the base search engine (which you can also do on Chrome if you want to, but you don’t get the additional tabs).

The UX does seem promising so far when it works. I find the Claude for Chrome UX to be exactly what you’d want it to be. I agree the permissions requests are a little paranoid, in terms of asking about each website even if it seems obviously safe, but you know what? I approve of that, it’s the right mistake to make, although I’d like various whitelists or groupings to make life easier. Over time, the problem shrinks as you’ve given permission for more of the safe sites.

It’s funny that one of the better use cases for agents is organizing tab groups, and Atlas flat out doesn’t offer you tab groups.

There are always those users who are up to no good, by LLM standards.

Papaya: It doesn’t perform bad actions like searching torrent for a movie or go to a pornsite protection is both model level (it’ll refuse) and a blacklist of sites that won’t load in agent tabs (but will load in normal non-agent tab)

i tried a few larger porn and torrent sites, but didn’t have chance to try smaller ones to check how thorough the list is.

I absolutely do not want AI agents going to porn or torrent sites, that’s almost asking to be hacked. Some of us remember when browsing the internet was not default safe.

Here’s one vote for the magic of travel and similar complex shopping tasks:

Timo Springer: i really like it; clean design, smooth performance, “ask chatgpt” is very helpful via sidebar, also the agent mode solved some of my tasks already even ones with lots of constraints. tried this one for a trip which i then booked afterwards: “Find the 10 cheapest hotels on http://Booking.com for Paris from May 8 to 10, 2026. The hotels must have a rating of at least 9.0, at least 20 reviews, and be less than 3 km from the city center.”

The catch in this particular case is that ChatGPT already has a booking.com plug-in, so I was able to pull this off in 30 seconds by pasting that exact query into ChatGPT normally and then clicking on the ‘use booking.com’ button and confirming the plug-in.

Matt Heard looked to take advantage of agent mode, but hit quota before getting good use out of it. In my experience it is remarkably easy for AI browsing agents to end up getting caught up in a very long loop that isn’t doing much except running through your credits.

He also complains that conversations in different tabs are not aware of each other (or at least, presumably aren’t, short of you taking relevant notes). He finds this frustrating, and yeah, that seems super annoying. One great thing about Claude for Chrome is that it can be aware of the rest of the tab group.

Miles Skorpen: I struggled to find it useful. I missed Google for accessing sites w/o exact URLs, and 1Password didn’t work properly. The biggest problem was that it rewrote an email while including meta text like “I’ll rewrite to be more concise,” – can’t just trust it!

That’s been my experience with LLMs writing or editing emails for years. By the time you figure out how to get it to do the thing, and have it do the thing, and check the thing, you could have done the thing. Obviously if your writing skills are weaker that is different.

And yeah, things like inexact URLs and various extensions are going to be big for a lot of people.

I asked Claude Sonnet 4.5 what Atlas does, on a technical level, that requires it be its own browser rather than an extension, other than to be an excuse to try and compete in the browser space, because Claude for Chrome exists and most of what Atlas does seemed like it was super doable in an extension.

Sonnet didn’t come up with much, so I asked GPT-5-Thinking to defend the decision.

GPT-5-Thinking (condensed, I do not endorse most of this):

What a full browser unlocks (and why an extension is the wrong tool)

  1. AI-first omnibox & results UI (default, not bolted-on).

  2. Per-page content access + cross-tab “browser memories” with policy gates.

  3. Agent mode that can navigate & act—under hard boundaries the browser enforces.

  4. Network/engine control and performance.

  5. System integrations and policy surface.

What you can’t get (or can’t get cleanly) as a Chrome extension

  1. Consistent, cross-site automation that survives page transitions, popups, and multi-domain flows with user-visible pausing on “sensitive” sites.

  2. Tight answer-first search integration with omnibox/new-tab defaults across OSes

  3. Durable background intelligence. MV3 service workers are ephemeral (terminate in ~15s if idle).

  4. Policy-enforced guardrails like “agent cannot install extensions / run code / download files,” plus logged-out agent mode and history exclusions.

  5. First-party privacy surface.

That seemed highly sus and Claude was having none of most of this, in terms of whether the product actually makes sense. Most of what is listed above isn’t needed or works as well or better in an extension. I let them go back and forth a bit, evaluated the arguments, and drew my own conclusions.

There seem to be six actual arguments for a browser.

  1. MV3 service worker limit (I’ve run into this too), which requires either a cold-start penalty or a keep-alive ping, but whatever, that’s not on the level of ‘build a new browser.’

  2. Answer-first omnibox integration for the search experience. So okay, you can set ChatGPT as your search engine but you can’t have a web search open multiple tabs. I don’t especially want this feature, but even if you do like it, again it hardly seems like ‘new browser’ territory, you can just stack these things on a page.

    1. Similarly, if you want to have general tab management available on demand from ChatGPT, that’s not an extension feature.

  3. Logged-out agent mode is tricky to do as an extension. You’d need to coordinate with incognito windows or a distinct Google account or something.

  4. Maybe you don’t want to trust Google or Chrome, but do want to trust OpenAI.

  5. OpenAI wants platform control, and data control, and lock-in, and to get around and compete with Google. Okay, sure. I see why you would want this. Go big.

The first three are not nothing but do not, to me, seem to be pulling that much weight. This seems rather clearly like a leverage play, using ChatGPT to try and force open the browser market, and a removal-of-leverage play, to avoid reliance on Google.

Which, to be clear, is totally fair play, it’s just not a good reason to play along.

Could OpenAI eventually assemble a superior overall non-AI browsing experience, or create AI features that couldn’t live in an extension? Could a future version of this product be generally superior and play nice enough with others I’d be okay using it?

Sure. Chrome is far from perfect. Until then? At least for me?

Shrug.




Google has a useful quantum algorithm that outperforms a supercomputer


An approach it calls “quantum echoes” takes 13,000 times longer on a supercomputer.

The work relied on Google’s current-generation quantum hardware, the Willow chip. Credit: Google

A few years back, Google made waves when it claimed that some of its hardware had achieved quantum supremacy, performing operations that would be effectively impossible to simulate on a classical computer. That claim didn’t hold up especially well, as mathematicians later developed methods to help classical computers catch up, leading the company to repeat the work on an improved processor.

While this back-and-forth was unfolding, the field became less focused on quantum supremacy and more on two additional measures of success. The first is quantum utility, in which a quantum computer performs computations that are useful in some practical way. The second is quantum advantage, in which a quantum system completes calculations in a fraction of the time it would take a typical computer. (IBM and a startup called Pasqal have published a useful discussion about what would be required to verifiably demonstrate a quantum advantage.)

Today, Google and a large collection of academic collaborators are publishing a paper describing a computational approach that demonstrates a quantum advantage compared to current algorithms—and may actually help us achieve something useful.

Out of time

Google’s latest effort centers on something it’s calling “quantum echoes.” The approach could be described as a series of operations on the hardware qubits that make up its machine. These qubits hold a single bit of quantum information in a superposition between two values, with probabilities of finding the qubit in one value or the other when it’s measured. Each qubit is entangled with its neighbors, allowing its probability to influence those of all the qubits around it. The operations that allow computation, called gates, are ways of manipulating these probabilities. Most current hardware, including Google’s, performs manipulations on one or two qubits at a time (termed one- and two-qubit gates, respectively).

For quantum echoes, the operations involved performing a set of two-qubit gates, altering the state of the system, and later performing the reverse set of gates. On its own, this would return the system to its original state. But for quantum echoes, Google inserts single-qubit gates performed with a randomized parameter. This alters the state of the system before the reverse operations take place, ensuring that the system won’t return to exactly where it started. That explains the “echoes” portion of the name: You’re sending an imperfect copy back toward where things began, much like an echo involves the imperfect reversal of sound waves.

That’s what the process looks like in terms of operations performed on the quantum hardware. But it’s probably more informative to think of it in terms of a quantum system’s behavior. As Google’s Tim O’Brien explained, “You evolve the system forward in time, then you apply a small butterfly perturbation, and then you evolve the system backward in time.” The forward evolution is the first set of two-qubit gates, the small perturbation is the randomized one-qubit gate, and the second set of two-qubit gates is the equivalent of sending the system backward in time.

Because this is a quantum system, however, strange things happen. “On a quantum computer, these forward and backward evolutions, they interfere with each other,” O’Brien said. One way to think about that interference is in terms of probabilities. The system has multiple paths between its start point and the point of reflection—where it goes from evolving forward in time to evolving backward—and from that reflection point back to a final state. Each of those paths has a probability associated with it. And since we’re talking about quantum mechanics, those paths can interfere with each other, increasing some probabilities at the expense of others. That interference ultimately determines where the system ends up.

(Technically, these are termed “out of time order correlations,” or OTOCs. If you read the Nature paper describing this work, prepare to see that term a lot.)
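To make that forward/perturb/backward structure concrete, here’s a toy state-vector simulation in Python with NumPy. A random global unitary stands in for the layers of two-qubit gates, and a random-angle rotation on a single qubit plays the butterfly perturbation. This is a sketch of the echo structure only, not Google’s actual circuit or its OTOC measurement scheme.

```python
# Toy "quantum echo" on 4 simulated qubits: evolve forward with U, apply a
# single-qubit "butterfly" perturbation, evolve backward with U's inverse,
# and measure how much probability returns to the starting state.
import numpy as np

rng = np.random.default_rng(0)
n_qubits = 4
dim = 2 ** n_qubits

def random_unitary(d):
    """Random unitary via QR decomposition; a stand-in for the forward evolution."""
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, r = np.linalg.qr(z)
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))

def butterfly(theta):
    """X rotation by angle theta on qubit 0, identity on the other qubits."""
    rx = np.array([[np.cos(theta / 2), -1j * np.sin(theta / 2)],
                   [-1j * np.sin(theta / 2), np.cos(theta / 2)]])
    op = rx
    for _ in range(n_qubits - 1):
        op = np.kron(op, np.eye(2))
    return op

U = random_unitary(dim)
psi0 = np.zeros(dim, dtype=complex)
psi0[0] = 1.0  # start in |0000>

# theta = 0 is an unperturbed echo: the state returns exactly, probability 1.
# Nonzero perturbations scramble the return, which is the measurable signal.
for theta in (0.0, 0.1, 1.0, np.pi):
    psi = U.conj().T @ (butterfly(theta) @ (U @ psi0))
    print(f"theta={theta:4.2f}  return probability = {abs(np.vdot(psi0, psi))**2:.4f}")
```

Averaging that return probability over many randomized perturbations is essentially what the repeated-sampling procedure described in the next section estimates.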

Demonstrating advantage

So how do you turn quantum echoes into an algorithm? On its own, a single “echo” can’t tell you much about the system—the probabilities ensure that any two runs might show different behaviors. But if you repeat the operations multiple times, you can begin to understand the details of this quantum interference. And performing the operations on a quantum computer ensures that it’s easy to simply rerun the operations with different random one-qubit gates and get many instances of the initial and final states—and thus a sense of the probability distributions involved.

This is also where Google’s quantum advantage comes from. Everyone involved agrees that the precise behavior of a quantum echo of moderate complexity can be modeled using any leading supercomputer. But doing so is very time-consuming, so repeating those simulations a few times becomes unrealistic. The paper estimates that a measurement that took its quantum computer 2.1 hours to perform would take the Frontier supercomputer approximately 3.2 years. Unless someone devises a far better classical algorithm than what we have today, this represents a pretty solid quantum advantage.
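As a back-of-the-envelope check, that estimate is where the roughly 13,000x figure in the subheading comes from:

```python
# 3.2 years of Frontier supercomputer time vs. 2.1 hours on the quantum
# processor, per the paper's estimate quoted above.
hours_per_year = 365.25 * 24            # ~8,766 hours
classical_hours = 3.2 * hours_per_year  # ~28,051 hours
print(classical_hours / 2.1)            # ~13,358, in line with the ~13,000x claim
```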

But is it a useful algorithm? The repeated sampling can act a bit like the Monte Carlo sampling done to explore the behavior of a wide variety of physical systems. Typically, however, we don’t view algorithms as modeling the behavior of the underlying hardware they’re being run on; instead, they’re meant to model some other physical system we’re interested in. That’s where Google’s announcement stands apart from its earlier work—the company believes it has identified an interesting real-world physical system with behaviors that the quantum echoes can help us understand.

That system is a small molecule in a Nuclear Magnetic Resonance (NMR) machine. In a second draft paper being published on the arXiv later today, Google has collaborated with a large collection of NMR experts to explore that use.

From computers to molecules

NMR is based on the fact that the nucleus of every atom has a quantum property called spin. When nuclei are held near each other, such as when they’re in the same molecule, these spins can influence one another. NMR uses magnetic fields and photons to manipulate these spins and can be used to infer structural details, like how far apart two given atoms are. But as molecules get larger, these spin networks can extend for greater distances and become increasingly complicated to model. So NMR has been limited to focusing on the interactions of relatively nearby spins.

For this work, though, the researchers figured out how to use an NMR machine to create the physical equivalent of a quantum echo in a molecule. The work involved synthesizing the molecule with a specific isotope of carbon (carbon-13) in a known location in the molecule. That isotope could be used as the source of a signal that propagates through the network of spins formed by the molecule’s atoms.

“The OTOC experiment is based on a many-body echo, in which polarization initially localized on a target spin migrates through the spin network, before a Hamiltonian-engineered time-reversal refocuses to the initial state,” the team wrote. “This refocusing is sensitive to perturbations on distant butterfly spins, which allows one to measure the extent of polarization propagation through the spin network.”

Naturally, something this complicated needed a catchy nickname. The team came up with TARDIS, or Time-Accurate Reversal of Dipolar InteractionS. While that name captures the “out of time order” aspect of OTOC, it’s simply a set of control pulses sent to the NMR sample that starts a perturbation of the molecule’s network of nuclear spins. A second set of pulses then reflects an echo back to the source.

The reflections that return are imperfect, with noise coming from two sources. The first is simply imperfections in the control sequence, a limitation of the NMR hardware. But the second is the influence of fluctuations happening in distant atoms along the spin network. These happen at a certain frequency at random, or the researchers could insert a fluctuation by targeting a specific part of the molecule with randomized control signals.

The influence of what’s going on in these distant spins could allow us to use quantum echoes to tease out structural information at greater distances than we currently do with NMR. But to do so, we need an accurate model of how the echoes will propagate through the molecule. And again, that’s difficult to do with classical computations. But it’s very much within the capabilities of quantum computing, which the paper demonstrates.

Where things stand

For now, the team stuck to demonstrations on very simple molecules, making this work mostly a proof of concept. But the researchers are optimistic that there are many ways the system could be used to extract structural information from molecules at distances that are currently unobtainable using NMR. In the paper’s discussion section, they list a lot of potential upsides worth exploring, and there are plenty of smart people who would love to find new ways of using their NMR machines, so the field is likely to figure out pretty quickly which of these approaches turns out to be practically useful.

The fact that the demonstrations were done with small molecules, however, means that the modeling run on the quantum computer could also have been done on classical hardware (it only required 15 hardware qubits). So Google is claiming both quantum advantage and quantum utility, but not at the same time. The sorts of complex, long-distance interactions that would be out of range of classical simulation are still a bit beyond the reach of the current quantum hardware. O’Brien estimated that the hardware’s fidelity would have to improve by a factor of three or four to model molecules that are beyond classical simulation.

The quantum advantage issue should also be seen as a work in progress. Google has collaborated with enough researchers at enough institutions that there’s unlikely to be a major improvement in algorithms that could allow classical computers to catch up. Until the community as a whole has some time to digest the announcement, though, we shouldn’t take that as a given.

The other issue is verifiability. Some quantum algorithms will produce results that can be easily verified on classical hardware—situations where it’s hard to calculate the right result but easy to confirm a correct answer. Quantum echoes isn’t one of those, so we’ll need another quantum computer to verify the behavior Google has described.

But Google told Ars nothing is up to the task yet. “No other quantum processor currently matches both the error rates and number of qubits of our system, so our quantum computer is the only one capable of doing this at present,” the company said. (For context, Google says that the algorithm was run on up to 65 qubits, but the chip has 105 qubits total.)

There’s a good chance that other companies would disagree with that contention, but it hasn’t been possible to ask them ahead of the paper’s release.

In any case, even if this claim proves controversial, Google’s Michel Devoret, a recent Nobel winner, hinted that we shouldn’t have long to wait for additional ones. “We have other algorithms in the pipeline, so we will hopefully see other interesting quantum algorithms,” Devoret said.

Nature, 2025. DOI: 10.1038/s41586-025-09526-6  (About DOIs).

Photo of John Timmer

John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.

Google has a useful quantum algorithm that outperforms a supercomputer Read More »

elon-musk-just-declared-war-on-nasa’s-acting-administrator,-apparently

Elon Musk just declared war on NASA’s acting administrator, apparently


“Sean said that NASA might benefit from being part of the Cabinet.”

NASA astronauts Reid Wiseman, left, Victor Glover, Christina Koch, and CSA (Canadian Space Agency) astronaut Jeremy Hansen watch as Jared Isaacman testifies before a Senate Committee in 2025. Credit: NASA/Bill Ingalls

The clock just ticked past noon here in Houston, so it’s acceptable to have a drink, right?

Because after another turbulent morning of closely following the rough-and-tumble contest to become the next NASA administrator, I sure could use one.

What has happened now? Why, it was only SpaceX founder Elon Musk, whose company is NASA’s most important contractor, referring to the interim head of the space agency, Sean Duffy, as “Sean Dummy” and suggesting Duffy was trying to kill NASA. Musk later added, “The person responsible for America’s space program can’t have a 2 digit IQ.”

This is all pretty bonkers, so I want to try to contextualize what I believe is going on behind the scenes. This should help us make sense of what is happening in public.

It all boils down to this

The most important through line for all of this is as follows: the contest to become the next NASA administrator. This has, as the British like to say, hotted up of late. And people are starting to take sides.

In one corner stands the private astronaut and billionaire, Jared Isaacman. He was nominated by Donald Trump to become NASA administrator last year, and after a lengthy process, he was on the cusp of confirmation when the president pulled his nomination for political reasons in late May. In the other corner is Sean Duffy, a former congressman with minimal space experience, whom Trump appointed as interim administrator after yanking Isaacman. Duffy was already secretary of transportation.

Since then, a lot has happened, but it boils down to this. Duffy was, nominally, supposed to be running the space agency while searching for a permanent replacement. The biggest move he has made is naming Amit Kshatriya, a long-time employee, as NASA’s associate administrator. Kshatriya now has a lot of power within the agency and comes with the mindset of a former flight director. He is not enamored with using SpaceX’s Starship as a lunar lander.

After Isaacman’s dismissal, key figures within Trump’s orbit continued to vouch for the former astronaut. They liked his flight experience, his financial background, and his vigorous push to modernize NASA and lean into the country’s dynamic commercial space industry in the effort to remain ahead of China in spaceflight. Trump listened. He has met with Isaacman multiple times since, all of them positive experiences. A re-nomination seemed possible, even likely.

Duffy likes running NASA

However, Duffy was finding that he liked running NASA. There were lots of opportunities to go on television and burnish his credentials. Spaceflight often receives more positive coverage than air traffic controller strikes. His chief of staff at the Department of Transportation, Pete Meachum, has also enjoyed exercising power at NASA. Neither appears ready to relinquish their influence.

To be clear, Duffy is not saying this publicly. Asked whether Duffy wishes to remain NASA administrator, a spokesperson for the agency gave Ars the following statement on Tuesday morning:

Sean is grateful that the President gave him the chance to lead NASA. At the President’s direction, Sean has focused the agency on one clear goal — making sure America gets back to the Moon before China. Sean said that NASA might benefit from being part of the Cabinet, maybe even within the Department of Transportation, but he’s never said he wants to keep the job himself. The President asked him to talk with potential candidates for Administrator, and he’s been happy to help by vetting people and giving his honest feedback. The bottom line is that Secretary Duffy is here to serve the President, and he will support whomever the President nominates.

But based on discussions with numerous sources, it seems clear that Duffy wants to keep the job. He has not taken significant steps toward identifying a replacement.

His appearances on Fox News and CNBC on Monday morning buttress this fact. It is not typical for a NASA administrator to go on television and criticize one of the space agency’s most important contractors. In this case, Duffy said he was reworking the agency’s lunar lander contracts because SpaceX had fallen behind.

It is true that SpaceX is behind in developing a lunar lander version of Starship. Nevertheless, this was a pretty remarkable thing for Duffy to do, at least in the context of the US space community. NASA projects run late all the time, every time. There was no mention of spacesuits needed for the lunar landing, which also almost certainly will not be ready by 2027.

There seem to be two clear reasons why Duffy did this. First, he wanted to show President Trump he was committed to reaching the Moon again before China gets there. Second, with his public remarks, Duffy sought to demonstrate to the rest of the space community that he was willing to stand up to SpaceX.

How do we know this? Because Duffy and Meachum had just spent the weekend calling around to SpaceX’s competitors in the industry, asking for their support in his quest to remain at NASA. For example, he called Blue Origin’s leadership and expressed support for their plans to accelerate a lunar landing program. Then he went on TV to demonstrate in public what he was saying in private.

Musk unloads

By Tuesday morning, Musk appeared to have had enough.

The acting administrator had gone on TV and publicly shamed Musk’s company, which has invested billions of dollars of its own money into Starship. (By contrast, Lockheed has invested little or nothing in the Orion spacecraft, and Boeing also has little skin in the game with the Space Launch System rocket. Similarly, a ‘government option’ lunar lander would likely need to be cost-plus in order to attract Lockheed as a bidder.) Then Duffy praised Blue Origin, which, for all of its promise, has yet to make meaningful achievements in orbit. All the while, it is only thanks to SpaceX and its Dragon spacecraft that NASA does not have to go hat-in-hand to Russia for astronaut transportation.

So Musk channeled his inner Trump and called out “Sean Dummy.” It’s crass language, but will it be effective?

We really don’t know the extent to which Musk and Trump are on speaking terms at this point, but certainly Musk is a huge Republican donor, and there will be plenty of people in Congress who do not want to see another food fight between the world’s most powerful person and its richest person.

The widespread assumption is that Musk is advocating for Isaacman to become the next administrator, since he originally put the astronaut forward for the position. However, the reality is that the two don’t speak regularly, and although Isaacman is deeply appreciative of what SpaceX has achieved, he seems to genuinely want Blue Origin and other private space companies to succeed as well. Most likely, then, Musk was lashing out in frustration on Tuesday morning, feeling spurned by a space agency he has done a lot for.

Isaacman, for his part, has been keeping a relatively low profile. Trump, who will ultimately make a decision on NASA’s leadership, has also largely been silent about all of this.

Not a super augury

The war of words may be an entertaining spectacle, but it is pretty dreadful for NASA. The space agency is already down 20 percent of its workforce due to cuts and voluntary retirements. Morale remains low, and the uncertainty over long-term leadership is unhelpful. To many in space, the first year of the Trump presidency feels like a lost year.

There is also the possibility of a significant restructuring. NASA is an independent federal agency, but my sources (The Wall Street Journal also reported this last night) have indicated that Duffy has sought to move NASA within the Department of Transportation. The statement his spokesperson provided on Tuesday confirms as much. Folding NASA into the Department of Transportation would allow him to maintain oversight of the agency, and Duffy could recommend a leader who is loyal to him.

So this is where we are. A fierce, behind-the-scenes battle rages on between the camps supporting Duffy and Isaacman to decide the leadership of NASA. The longer this process drags on, the messier it seems to get. In the meantime, NASA is twisting in the wind, trying to run in molasses while wearing lead shoes as China marches onward and upward.

Photo of Eric Berger

Eric Berger is the senior space editor at Ars Technica, covering everything from astronomy to private space to NASA policy, and author of two books: Liftoff, about the rise of SpaceX; and Reentry, on the development of the Falcon 9 rocket and Dragon. A certified meteorologist, Eric lives in Houston.

Elon Musk just declared war on NASA’s acting administrator, apparently Read More »

bubble,-bubble,-toil-and-trouble

Bubble, Bubble, Toil and Trouble

We have the classic phenomenon where suddenly everyone decided it is good for your social status to say we are in an ‘AI bubble.’

Are these people short the market? Do not be silly. The conventional wisdom response to that question these days is that, as was said in 2007, ‘if the music is playing you have to keep dancing.’

So even with lots of people newly thinking there is a bubble, the market has not moved down, other than (modestly) on actual news items, usually related to another potential round of tariffs, or that one time we had a false alarm during the DeepSeek Moment.

So, what’s the case we’re in a bubble? What’s the case we’re not?

People get confused about bubbles, often applying that label any time prices fall. So you have to be clear on what question is being asked.

If ‘that was a bubble’ simply means ‘number go down’ then it is entirely uninteresting to say things are bubbles.

So if we operationalize ‘bubble’ to simply mean that at some point there is a substantial drawdown in market values (e.g., a 20% drop in the Nasdaq sustained for 6 months), then I would be surprised to see one, but the market would need to be dramatically, crazily underpriced for that not to be a plausible thing to happen.

If a bubble means something similar to the 2000 dot com bubble, as in valuations that are not plausible expectations for the net present values of future cash flows? No.

[Standard disclaimer: Nothing on this blog is ever investment advice.]

Before I dive into the details, a time sensitive point of order, that you can skip if you would not consider political donations:

When trying to pass laws, it is vital to have a champion. You need someone in each chamber of Congress who is willing to help craft, introduce and actively fight for good bills. Many worthwhile bills do not get advanced because no one will champion them.

Alex Bores did this with New York’s RAISE Act, an AI safety bill along similar lines to SB 53 that is currently on the governor’s desk. I did a full RTFB (read the bill) on it, and found it to be a very good bill that I strongly supported. It would not have happened without him championing the bill and spending political capital on it.

By far the strongest argument against the bill is that it would be better if such bills were done on the Federal level.

He’s trying to address this by running for Congress in my own district, NY-12, to succeed Jerry Nadler. The district is deeply Democratic, so this will have no impact on the partisan balance. What it would do is give real AI safety a knowledgeable champion in the House of Representatives, capable of championing good bills.

Eric Nayman makes an extensive case for considering donating to Alex Bores today, in his first 24 hours, as donations in the first 24 hours are extremely valuable. Sonnet 4.5 estimates that in this case, a donation on day one is worth about double what it would be worth later. If you do decide to donate, they prefer that you use this link to ensure the donation gets fully registered today.

As always, remember while considering this that political donations are public.

(Note: I intend to remove this announcement from this post after the 24 hour window closes, and move it to AI #139.)

Sagarika Jaisinghani (Bloomberg): A record share of global fund managers said artificial intelligence stocks are in a bubble following a torrid rally this year, according to a survey by Bank of America Corp.

About 54% of participants in the October poll indicated tech stocks were looking too expensive, an about-turn from last month when nearly half had dismissed those concerns. Fears that global stocks were overvalued also hit a peak in the latest survey.

So a month ago most people thought things were fine, and now it’s a bubble?

This is a very light bubble definition, as these things go.

Nothing importantly bearish happened in that month other than bullish deals, so presumably this is a ‘circular deals freak us out’ shift in mood? Or it could be a cascade effect.

There is definitely reason for concern. If you remove the label ‘bubble’ and simply say ‘AI’ then the quote from Deutsche Bank below is correct, as AI is responsible for essentially all economic growth. Also you can mostly replace ‘US’ with ‘world.’

Unusual Whales: “The AI bubble is the only thing keeping the US economy together,” Deutsche Bank has said per TechSpot.

Not quite, at this size you need some doubt involved. But the basic answer is yes.

Roon: one reason to disbelieve in a sizeable bubble is when the largest financial institutions in the world are openly calling it that.

Jon Stokes: I disagree. I was in my early 20’s during the dotcom bubble and was in tech, and everyone everywhere knew it was a bubble — from the banks to the VCs down to the individual programmers in SF. Everyone talked about it openly in the last ~1yr of it, but the numbers kept going up.

I don’t think this is Dotcom 2.0 but I think it’s possible the market is getting ahead of itself. That said, I also lived through the cloud “bubble” which turned out to not be a bubble at all — I even wrote my own contribution to the “are we in a bubble?” literature in like 2012. Anyone who actually traded on the idea that the cloud buildout was a bubble lost out bigtime.

Every time there’s a big new infra buildout there’s bubble talk.

My point is that is very possible to have a bubble that everyone everywhere knows is a bubble, yet it keeps on bubbling because nobody wants to miss the action & everyone thinks they can time an exit. “Enjoy the party, but dance close to the exits” was the slogan back then.

It is definitely possible to get into an Everybody Knows situation with a bubble, for various reasons, both when it is and when it isn’t actually a bubble. For example, there’s Bitcoin, and Bitcoin, and Bitcoin, and Bitcoin, but there’s also Bitcoin.

Is it evidence for or against a bubble when everyone says it’s a bubble?

My gut answer is it depends on who is everyone.

If everyone is everyone working in the industry? Then yeah, evidence for a bubble.

If everyone is everyone at the major economic institutions? Not so much.

So I decided to check.

There was essentially no correlation, with 42.5% of AI workers and 41.7% of others saying there is a bubble, and that’s a large percentage, so things are certainly somewhat concerning. It certainly seems likely that certain subtypes of AI investment are ‘in a bubble’ in the sense that investors in those subtypes will lose money, which you would expect in anything like an efficient market.

In particular, consensus seems to be, and I agree with it (reminder: not investment advice), that investment in ‘companies with products in position to get steamrolled by OpenAI and other frontier labs’ are as a group not going to do well. If you want to call that an ‘AI bubble’ you can, but that seems net misleading. I also wouldn’t be excited to short that basket, since you’re exposed if even one of them hits it big. Remember that if you bought a tech stock portfolio at the dot com peak, you still got Amazon.

Whereas if you had a portfolio of ‘picks and shovels’ or of the frontier labs themselves, that still seems to me like a fine place to bet, although it is no longer a ‘I can’t believe they’re letting me buy at these prices, this is free money’ level of fine. You now have to actually have beliefs about the future and an investment thesis.

Noah Smith speculates on bubble causes and types when it comes to AI.

Noah Smith: An AI crash isn’t certain, but I think it’s more likely than people think.

Looking at the historical examples of railroads, electricity, dotcoms, and housing can help us understand what an AI crash would look like.

A burning question that’s on a lot of people’s minds right now is: Why is the U.S. economy still holding up? The manufacturing industry is hurting badly from Trump’s tariffs, the payroll numbers are looking weak, and consumer sentiment is at Great Recession levels.

… Another possibility is that tariffs are bad, but are being canceled out by an even more powerful force — the AI boom.

You could have a speculative bubble, or an extrapolative bubble, or simply a big mistake about the value of the tech. He thinks if it is a bubble, it would be of the latter type, proposing we use Bezos’ term ‘industrial bubble.’

Noah Smith: … When we look at the history of industrial bubbles, and of new technologies in general, it becomes clear that in order to cause a crash, AI doesn’t have to fail. It just has to mildly disappoint the most ardent optimists.

I don’t think that’s quite right. The market reflects a variety of perspectives, and it will almost always be way below where the ardent optimists would place it. The ardent optimists are the rock with the word ‘BUY!’ written on it.

What is right is that if AI over some time frame disappoints relative to expectations, sufficiently to shift forward expectations downward from their previous level, that would cause a substantial drop in prices, which could then break momentum and worsen various positions, causing a larger drop in prices.

Thus, we could have AI ultimately having a huge economic impact and ultimately being fully transformative (maybe killing everyone, maybe being amazingly great), and have nothing go that wrong along the way, but still have what people at the time would call ‘the bubble bursting.’

Indeed, if the market is at all efficient, there is a lot of ‘upside risk’ of AI being way more impactful than suggested by the market price, which means there has to be a corresponding downside risk too. Part of that risk is geopolitical, an anti-AI movement could rise, or the supply chain could be disrupted by tariff battles or a war over Taiwan. By traditional definitions of ‘bubble,’ that means a potential bubble.

Ethan Mollick: I don’t have much to add to the bubble discussion, but the “this time is different” argument is, in part, based on the sincere belief of many at the AI labs that there is a race to superintelligence & the winner gets… everything.

It is a key dynamic that is not discussed much.

You don’t have to believe it (or think this is a good idea), but many of the AI insiders really do. Their public statements are not much different than their private ones. Without considering that zero sum dimension, a lot of what is happening in the space makes less sense.

Even a small chance of a big upside should mean a big boost to valuation. Indeed that is the reason tech startups are funded and venture capital firms exist. If you don’t get the fully transformational level of impact, then at some point value will drop.

Consider the parallel to Bitcoin, and to thinking there is some small percentage chance of it becoming ‘digital gold’ or even the new money. If you felt there was no way it could fall by a lot from any given point in time, or even if you were simply confident that it was probably not going to crash, it would be a fantastic screaming buy.

AI also has to retain expectations that providers will be profitable. If AI is useful but it is expected to not provide enough profits, that too can burst the bubble.

Matthew Yglesias: Key point in here from @Noahpinion — even if the AI tech turns out to be exactly as promising as the bulls think, it’s not totally clear whether this would mean high margin businesses.

A slightly random example but passenger jetliners have definitely worked out as a technology, tons of people use them and they are integral to the whole world economy. But the combined market cap of Boeing + Airbus is unimpressive.

Jetliners seem a lot more important and impressive than the idea of a big box building supply store, but in terms of market cap Home Depot > Boeing + Airbus. Technology is hard but then business is also hard.

Sam D’Amico: AI is going to rock but we may have a near-term capex bubble like the fiber buildout during the dotcom boom.

Matthew Yglesias writes more thoughts here, noting that AI is propping up the whole economy and offering reasons some people believe there’s a bubble, as well as reasons it likely isn’t one, and especially isn’t one in the pure bubble sense of cryptocurrency or Beanie Babies; there’s clearly a there there.

To say confidently that there is no bubble in AI is to claim, among other things, that the market is horribly inefficient, and that AI assets are and will remain dramatically underpriced but reliably gain value as people gain situational awareness and are mugged by reality. This includes the requirement that the currently trading AI assets will be poised to capture a lot of value.

Alternatively, how about the possibility that there could be a crash for no reason?

Simeon: Agreed with Noah here. People underestimate the odds of a crash, and things like the OA x AMD deal make such things more likely. It just takes a sufficient number of people to be scared at the same time.

Remember the market reaction to DeepSeek? It can be irrational.

The DeepSeek moment is sobering, since the AI market was down quite a lot on news that should have been priced in and if anything should have made prices go up. What is to stop a similar incorrect information cascade from happening again? Other than a potential ‘Trump put’ or Fed put, very little.

Derek Thompson provides his best counterargument, saying AI probably isn’t a bubble. He also did an episode on this for the Plain English podcast with Azeem Azhar of Exponential View to balance his previous episode from September 23 on ‘how the AI bubble could burst.’

Derek Thompson: And yet, look around: Is anybody actually acting as if AI is a bubble?

… Everyone claims that they know the music is ending soon, and yet everybody is still dancing along to the music.

Well, yeah, when there’s a bubble everyone goes around saying ‘there’s a bubble’ but no one does anything about it, until they do and then there’s no bubble?

As Tyler Cowen sometimes asks, are you short the market? Me neither.

Derek breaks down the top arguments for a bubble.

  1. Lofty valuations for companies with no clear path to profit.

  2. Unprecedented spending on an unproven business.

  3. A historic chasm between capex spending and revenue.

  4. An eerie level of financial opacity.

  5. A byzantine level of corporate entanglement.

A lot of overlap here. We have a lot of money being invested in and spent on AI, without much revenue. True that. The AI companies all invest in and buy from each other, at a level that yeah is somewhat suspicious. Yeah, fair. The chips are only being discounted on five-year horizons and that seems a bit long? Eh, that seems fine to me; older chips are still useful as long as demand exceeds supply.

So why not a bubble? That part is gated, but his thread lays out the core responses. One, the AI companies look nothing like the joke companies in the dot com bubble.

Two, the AI companies need AI revenues to grow 100%-200% a year, and that sounds like a lot, but so far you’re seeing even more than that.

Epoch AI: One way bubbles pop: a technology doesn’t deliver value as quickly as investors bet it will. In light of that, it’s notable that OpenAI is projecting historically unprecedented revenue growth — from $10B to $100B — over the next three years.

OpenAI’s revenue growth has been extremely impressive: from <$1B to >$10B in only three years. Still, a few other companies have pulled off similar growth.

We found four such US companies in the past fifty years. Of these, only Google went on to top $100B in revenue.

Peter Wildeford: OpenAI is claiming they will double revenue three years in a row… this is historically unprecedented, but may be possible (so far they are 3x/year and NVIDIA has done 2x/yr lately)

But OpenAI has also projected revenue of $100B in 2028. We found seven companies which achieved revenue growth from $10B to $100B in under a decade.

None of them did it in six years, let alone three.
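As a quick arithmetic sanity check (mine, not Epoch’s or Wildeford’s): $10B to $100B over three years is a 10x multiple, which is a steeper pace than literal doubling.

```python
# Required compound growth for $10B -> $100B in three years.
total_multiple = 100 / 10                # 10x overall
cagr = total_multiple ** (1 / 3)         # annual growth factor
print(f"required growth per year: {cagr:.2f}x")  # ~2.15x

# Doubling three years in a row only compounds to 8x ($80B),
# short of the $100B target.
print(f"three straight doublings: {2 ** 3}x")
```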

As Matt Levine says, OpenAI has a business model now, because when you need debt investors you need to have a business plan. This one strikes a balance between ‘good enough to raise money’ and ‘not so good no one will believe it.’ Which makes it well under where I expect their revenue to land.

Frankly, OpenAI is downplaying their expectations because if they used their actual projections then no one would believe them, and they might get sued if things didn’t work out. The baseline scenario is that OpenAI (and Anthropic) blow the projections out of the water.

Martha Gimbel: Ok not the main point of [Thompson’s] article but I love this line: “The whole thing is vaguely Augustinian: O Lord, make me sell my Nvidia position and rebalance toward consumer staples, but not yet.”

Timothy Lee thinks it’s probably ‘not a bubble yet,’ partly citing Thompson and partly because we are seeing differentiation on which models do tasks best. He also links to the worry that there might be no moat, as fast followers offer the same service much cheaper and kill your margin, since your product might be only slightly better.

The thing about AI is that it might in total cost a lot, but in exchange you get a ton. It doesn’t have to be ten times better to have the difference be a big deal. For most use cases of AI, you would be wise to pay twice the price for something 10% better, and often wise to pay 10 or 100 times as much. Always think absolute cost, not relative.

Vinay Sridhar: What finally caught my attention was this stat from an NYT article by Natasha Sarin (via Marginal Revolution): “To provide some sense of scale, that means the equivalent of about $1,800 per person in America will be invested this year on AI”. That is a bucketload of spending.

It is, but is it? I spend more than that on one of my AI subscriptions, and get many times that much value in return. Thinking purely in terms of present day concrete benefits, when I ask ‘how many different use cases of AI are providing me $1,800 in value?’ I can definitely include taxes and accounting, product evaluation and search, medical help, analysis of research papers, coding and general information and search. So that’s at least six.

Similarly, does this sound like a problem, given the profit margins of these companies?

Vinay Sridhar: Hyperscalers (Microsoft, Amazon, Alphabet and Meta) historically ran capex at 11-16% of revenue. Today they’re at 22%, with revenue YoY growth in the 15-25% range (ex Nvidia) – capex spending is dramatically outpacing revenue growth. The four major hyperscalers are spending approximately $320 billion combined on AI infrastructure in 2025 – with public statements on how this will likely continue in the coming years.

Similarly, Vinay notes that the valuations are only somewhat high.

Current valuations, while elevated, remain “much lower” than prior tech bubbles. The Nasdaq’s forward P/E ratio is ~28X today. At the 2000 peak, it exceeded 70X. Even in 2007 before the financial crisis, tech valuations were higher relative to earnings than they are now.

Looking more closely, the MAG7 — who are spending the majority of this capex — has a blended P/E of ~32X — expensive, as Coatue’s Laffont brothers said in June, but not extreme relative to past tech bubbles.

That’s a P/E ratio, and all this extra capex spending if anything reduces short term earnings. Does a 28x forward P/E ratio sound scary in context, with YoY growth in the 20% range? It doesn’t to me. Sure, there’s some downside, but it would be a dramatic inefficiency if there wasn’t.

Vinay offers several other notes as well.

One thing I find confusing is all the ‘look how fast the chips will lose value’ arguments. Here are Vinay’s supremely confident claims, as another example of this:

Vinay Sridhar: 7. GPU Depreciation Schedules Don’t Match Reality

Nvidia now unveils a new AI chip every year instead of every two years. Jensen Huang said in March that “when Blackwell starts shipping in volume you couldn’t give Hoppers away”.

Meanwhile, companies keep extending depreciation schedules. Microsoft: 4 to 6 years (2022). Alphabet: 4 to 6 years (2023). Amazon and Oracle: 5 to 6 years (2024). Meta: 5 to 5.5 years (January 2025). Amazon partially reversed course in January 2025, moving some assets back to 5 years, noting this would cut operating profit by $700m.

The Economist analyzed the impact: if servers depreciate over 3 years instead of current schedules, the AI big five’s combined annual pre-tax profit falls by $26bn (8% of last year’s total). Over 2 years: $1.6trn market cap hit. If you take Huang literally at 1 year, this implies $4trn, one-third of their collective worth. Barclays estimated higher depreciation costs would shave 5-10% from earnings per share.
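The mechanics behind these estimates are just straight-line depreciation: the shorter the assumed useful life, the larger the annual expense charged against the same hardware. A toy illustration with a made-up $100B asset base (my numbers, not The Economist’s):

```python
# Straight-line depreciation: annual expense = cost / useful life.
capex_billions = 100.0  # assumed round number, not a real figure

for life_years in (6, 5, 3, 2):
    expense = capex_billions / life_years
    print(f"{life_years}-year schedule: ${expense:.1f}B per year")

# Cutting the schedule from 6 years to 3 doubles the annual charge,
# which is how schedule choices swing reported profit by billions.
```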

Hedgie takes a similar angle, calling the economics unsustainable because the lifespan of data center components is only 3-10 years due to rapid technological advances.

Hedgie: Kupperman originally assumed data center components would depreciate over 10 years, but learned from two dozen senior professionals that the actual lifespan is just 3-10 years due to rapid technology advances. His revised calculations show the industry needs $320-480 billion in revenue just to break even on 2025 data center spending alone. Current AI revenue sits around $20 billion annually.

What strikes me most is that none of the senior data center professionals Kupperman spoke with understand how the financial math works either.

No one I have seen is saying that chip capability improvements are accelerating dramatically. If that is the case, we need to update our timelines.

When Nvidia releases a new chip every year, that doesn’t mean they do the 2027 chip in 2026 and then do the 2029 chip in 2027. It means they do the 2027 chip in 2027, and before that do the best chip you can do in 2026, and it also means Nvidia is good at marketing and life is coming at them fast.

Huang’s statement about free hoppers is obviously deeply silly, and everyone knows not to take such Nvidia statements seriously or literally. The existence of new better chips does not invalidate older worse chips unless supply exceeds demand by enough that the old chips cost more to run than the value they bring.

That’s very obviously not going to happen over three years let alone one or two. You can do math on the production capacity available.

If the marginal cost of hoppers in 2028 was going to be approximately zero, what does that imply?

By default? Stop thinking about capex depreciation and start thinking about whether this means we get a singularity in 2028, since you can now scale compute as long as you have power. Also, get long China, since they have unlimited power generation.

If that’s not why, then it means AI use cases turned out to be severely limited, and the world has a large surplus of compute and not much to do with it.

It kind of has to be one or the other. Neither seems plausible.

Not only do I see no sign of overcapacity, I see signs of undercapacity, including a scramble for every chip people can get and compute being a limiting factor on many labs in practice right now, including OpenAI and Anthropic. The price of compute has recently been rising, not falling, including the price for renting older chips.

Dave Friedman looked into the accounting here, ultimately not seeing this as a solvency or liquidity issue, but he thinks there could be an accounting optics issue.

Could recent trends reverse, and faster than expected depreciations and ability to charge for older chips cause problems for the accounting in data centers? I mean, sure, that’s obviously possible, if we actually produce enough better chips, or demand sufficiently lags expectations, or some combination thereof.

This whole question seems like a strange thing for those investing hundreds of billions and everyone trading the market to not have priced into their plans and projections? Yes, current OpenAI revenue is on the order of $20 billion, but if you project that out over 3-10 years, that number is going to be vastly higher, and there are other companies.

I mostly agree with Charles that the pro-bubble arguments are remarkably weak, given the amount of bubble talk we are seeing, and that when you combine these two facts it should move you towards there not being a bubble.

Unlike Charles, I am not about to use leverage. I consider leverage in personal investing to be reserved for extreme situations, and a substantial drop in prices is very possible. But I definitely understand.

Charles: There have been so many terrible arguments for why we’re in an AI bubble lately, and so few good ones, that I’ve been convinced the appropriate update is in the “not a bubble” direction and increased my already quite long position.

Fwiw that looks like now being 1.4x leveraged long a mix of about 50% index funds and 50% specific bets (GOOG, TSMC, AMZN the biggest of those).

Most of what changed, I think, is that a bunch of circular deals were done in close succession. When combined with the exponential growth expectations for AI, people’s limited understanding of the technology and what it will be able to do, and valuations approaching the point where one can question whether there is any room left to grow, this reasonably triggered various heuristics and freaked people out.

If we define a bubble narrowly as ‘we see a Nasdaq price decline of 20% sustained for 6 months’ I would give that on the order of 25% to happen within the next few years, including as part of a marketwide decline in prices. It has happened to the wider market as recently as 2022, and about 5 times in the last 50 years.

If a decline does happen, I predict I will probably use that opportunity to buy more.

That does not have to be true. Perhaps there will have been large shifts in anticipated future capabilities, or in the competitive landscape and ability to capture profits, or the general economic conditions, and the drop will be fully justified and reflect AI slowing down.

But most of the time this will not be what happened, and the drop will not ultimately have much effect, although it would presumably slow down progress slightly.

Connor Leahy: I want to preregister the following opinion:

I think it’s plausible, but by no means guaranteed, that we could see a massive financial crisis or bubble pop affecting AI in the next year.

I expect if this happens, it will be mostly for mundane economic reasons (overleveraged markets, financial policy of major nations, mistiming of bets even by small amounts and good ol’ fraud), not because the technology isn’t making rapid progress.

I expect such a crisis to have at most modest effects on timelines to existentially dangerous ASI being developed, but will be used by partisans to try and dismiss the risk.

Sadly, a bunch of people making poorly thought through leveraged bets on the market tells you little about underlying object reality of how powerful AI is or soon will be.

Do not be fooled by narratives.


Bubble, Bubble, Toil and Trouble Read More »

f1-in-texas:-well,-now-the-championship-is-exciting-again

F1 in Texas: Well, now the championship is exciting again


Charles Leclerc and Lando Norris during one of their on-track battles. Credit: Clive Mason/Getty Images

On Sunday, like in the sprint, Verstappen was unchallenged into turn 1 and drove to the checkered flag without much drama. Norris probably had the speed to challenge him, but Charles Leclerc, whose Ferrari started the race on soft tires rather than mediums, used his grip advantage to pass Norris at the first turn. Within about four laps, Leclerc’s tires had already given their best, allowing Verstappen to eke out a small lead.

What followed was a wonderfully exciting battle between Norris and Leclerc for second place. The drivers were on different strategies: Leclerc would switch to a medium after his soft tire, Norris would do the opposite. It took Norris a while to pass Leclerc the first time, with the McLaren driver trying the same cutback move at a number of corners before eventually making it stick.

But Leclerc stopped first, and when Norris made his tire change he yet again had to overtake Leclerc. This time Norris was much braver on the brakes into turn 12 to complete the move. Once in clean air, Norris was matching Verstappen’s speed, but the gap was too much to close down.

Verstappen’s win brings him to within 40 points of Piastri, with Norris just 14 points behind his teammate. And remember, there are 25 points for a win—another non-finish for Piastri would be a disaster now. Should Verstappen manage to overtake both, he will have overcome the greatest points deficit in F1 history to do so.


After a miserable season, both Ferraris did well at COTA, finishing third and fourth. Credit: Clive Mason/Getty Images

History doesn’t repeat itself, but they do say it rhymes. And I’m hearing some of the same melodies as 2007, when dueling McLaren drivers took points off each other to allow Kimi Räikkönen and Ferrari to win the driver’s championship—and also 1986, when dueling Williams drivers lost to the McLaren of Alain Prost. If 2025 becomes Verstappen’s fifth world championship, it should go down as his most accomplished.

And there’s not long to wait: The next round takes place next weekend in Mexico City.

F1 in Texas: Well, now the championship is exciting again Read More »

something-from-“space”-may-have-just-struck-a-united-airlines-flight-over-utah

Something from “space” may have just struck a United Airlines flight over Utah

The National Transportation Safety Board confirmed Sunday that it is investigating an airliner whose windscreen was struck by an object mid-flight over Utah.

“NTSB gathering radar, weather, flight recorder data,” the federal agency said on the social media site X. “Windscreen being sent to NTSB laboratories for examination.”

The strike occurred Thursday, during a United Airlines flight from Denver to Los Angeles. Images shared on social media showed that one of the two large windows at the front of a 737 MAX aircraft was significantly cracked. Related images also reveal a pilot’s arm that has been cut multiple times by what appear to be small shards of glass.

Object’s origin not confirmed

The captain of the flight reportedly described the object that hit the plane as “space debris.” This has not been confirmed, however.


After the impact, the aircraft diverted and landed safely at Salt Lake City International Airport.

Images of the strike showed that an object made a forceful impact near the upper-right part of the window, damaging the metal frame. Because aircraft windows are multiple layers thick, with laminate in between, the window pane did not shatter completely. The aircraft was flying above 30,000 feet—likely around 36,000 feet—and the cockpit apparently maintained its cabin pressure.

Something from “space” may have just struck a United Airlines flight over Utah Read More »

big-tech-sues-texas,-says-age-verification-law-is-“broad-censorship-regime”

Big Tech sues Texas, says age-verification law is “broad censorship regime”

Texas minors also challenge law

The Texas App Store Accountability Act is similar to laws enacted by Utah and Louisiana. The Texas law is scheduled to take effect on January 1, 2026, while the Utah and Louisiana laws are set to be enforced starting in May and July, respectively.

The Texas law is also being challenged in a different lawsuit filed by a student advocacy group and two Texas minors.

“The First Amendment does not permit the government to require teenagers to get their parents’ permission before accessing information, except in discrete categories like obscenity,” attorney Ambika Kumar of Davis Wright Tremaine LLP said in an announcement of the lawsuit. “The Constitution also forbids restricting adults’ access to speech in the name of protecting children. This law imposes a system of prior restraint on protected expression that is presumptively unconstitutional.”

Davis Wright Tremaine LLP said the law “extends far beyond social media to mainstream educational, news, and creative applications, including Wikipedia, search apps, and internet browsers; messaging services like WhatsApp and Slack; content libraries like Audible, Kindle, Netflix, Spotify, and YouTube; educational platforms like Coursera, Codecademy, and Duolingo; news apps from The New York Times, The Wall Street Journal, ESPN, and The Atlantic; and publishing tools like Substack, Medium, and CapCut.”

Both lawsuits against Texas argue that the law runs afoul of the Supreme Court’s 2011 decision in Brown v. Entertainment Merchants Association, which struck down a California law restricting the sale of violent video games to children. The Supreme Court said in Brown that a state’s power to protect children from harm “does not include a free-floating power to restrict the ideas to which children may be exposed.”

The tech industry has sued Texas over multiple laws related to content moderation. In 2022, the Supreme Court blocked a Texas law that prohibits large social media companies from moderating posts based on a user’s viewpoint. Litigation in that case is ongoing. In a separate case decided in June 2025, the Supreme Court upheld a Texas law that requires age verification on porn sites.

Big Tech sues Texas, says age-verification law is “broad censorship regime” Read More »

nation-state-hackers-deliver-malware-from-“bulletproof”-blockchains

Nation-state hackers deliver malware from “bulletproof” blockchains

Hacking groups—at least one of which works on behalf of the North Korean government—have found a new and inexpensive way to distribute malware from “bulletproof” hosts: stashing them on public cryptocurrency blockchains.

In a Thursday post, members of the Google Threat Intelligence Group said the technique provides the hackers with their own “bulletproof” host, a term that describes cloud platforms that are largely immune from takedowns by law enforcement and pressure from security researchers. More traditionally, these hosts are located in countries without treaties agreeing to enforce criminal laws from the US and other nations. These services often charge hefty sums and cater to criminals spreading malware or peddling child sexual abuse material and wares sold in crime-based flea markets.

Next-gen, DIY hosting that can’t be tampered with

Since February, Google researchers have observed two groups turning to a newer technique to infect targets with credential stealers and other forms of malware. The method, known as EtherHiding, embeds the malware in smart contracts, which are essentially apps that reside on blockchains for Ethereum and other cryptocurrencies. Two or more parties then enter into an agreement spelled out in the contract. When certain conditions are met, the apps enforce the contract terms in a way that, at least theoretically, is immutable and independent of any central authority.

“In essence, EtherHiding represents a shift toward next-generation bulletproof hosting, where the inherent features of blockchain technology are repurposed for malicious ends,” Google researchers Blas Kojusner, Robert Wallace, and Joseph Dobson wrote. “This technique underscores the continuous evolution of cyber threats as attackers adapt and leverage new technologies to their advantage.”

There’s a wide array of advantages to EtherHiding over more traditional means of delivering malware, which besides bulletproof hosting include leveraging compromised servers.

    • The decentralization prevents takedowns of the malicious smart contracts because the blockchains’ own mechanisms bar the removal of any such contract.
    • Similarly, the immutability of the contracts prevents anyone from removing or tampering with the malware.
    • Transactions on Ethereum and several other blockchains are effectively anonymous, protecting the hackers’ identities.
    • Retrieval of malware from the contracts leaves no trace of the access in event logs, providing stealth (the sketch after this list illustrates why).
    • The attackers can update malicious payloads at any time.
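To illustrate the stealth point, here is a minimal, benign sketch of the read path using web3.py. The RPC endpoint, contract address, and `payload` function are placeholders invented for this example, not details from Google’s report. The key property is that an eth_call is a read-only query answered by whichever node you ask, so fetching the stored bytes creates no transaction and no event log.

```python
# Benign sketch of reading arbitrary bytes from a smart contract via
# eth_call (placeholder endpoint, address, and ABI; illustrative only).
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://example-rpc.invalid"))  # placeholder node

contract = w3.eth.contract(
    address="0x0000000000000000000000000000000000000000",  # placeholder
    abi=[{
        "name": "payload",
        "type": "function",
        "stateMutability": "view",
        "inputs": [],
        "outputs": [{"name": "", "type": "bytes"}],
    }],
)

# A view call: no gas, no transaction, nothing recorded on-chain.
data = contract.functions.payload().call()
print(f"fetched {len(data)} bytes with no on-chain trace of the access")
```

Updating the stored bytes, by contrast, does require a transaction from whoever controls the contract, but that is a single inexpensive write that changes what every subsequent read returns.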

Nation-state hackers deliver malware from “bulletproof” blockchains Read More »

ars-live-recap:-is-the-ai-bubble-about-to-pop?-ed-zitron-weighs-in.

Ars Live recap: Is the AI bubble about to pop? Ed Zitron weighs in.


Despite connection hiccups, we covered OpenAI’s finances, nuclear power, and Sam Altman.

On Tuesday of last week, Ars Technica hosted a live conversation with Ed Zitron, host of the Better Offline podcast and one of tech’s most vocal AI critics, to discuss whether the generative AI industry is experiencing a bubble and when it might burst. My Internet connection had other plans, though, dropping out multiple times and forcing Ars Technica’s Lee Hutchinson to jump in as an excellent emergency backup host.

During the times my connection cooperated, Zitron and I covered OpenAI’s financial issues, lofty infrastructure promises, and why the AI hype machine keeps rolling despite some arguably shaky economics underneath. Lee’s probing questions about per-user costs revealed a potential flaw in AI subscription models: Companies can’t predict whether a user will cost them $2 or $10,000 per month.

You can watch a recording of the event on YouTube or in the window below.

Our discussion with Ed Zitron.

“A 50 billion-dollar industry pretending to be a trillion-dollar one”

I started by asking Zitron the most direct question I could: “Why are you so mad about AI?” His answer got right to the heart of his critique: the disconnect between AI’s actual capabilities and how it’s being sold. “Because everybody’s acting like it’s something it isn’t,” Zitron said. “They’re acting like it’s this panacea that will be the future of software growth, the future of hardware growth, the future of compute.”

In one of his newsletters, Zitron describes the generative AI market as “a 50 billion dollar revenue industry masquerading as a one trillion-dollar one.” He pointed to OpenAI’s financial burn rate (losing an estimated $9.7 billion in the first half of 2025 alone) as evidence that the economics don’t work, coupled with a heavy dose of pessimism about AI in general.

Donald Trump listens as Nvidia CEO Jensen Huang speaks at the White House during an event on “Investing in America” on April 30, 2025, in Washington, DC. Credit: Andrew Harnik / Staff | Getty Images News

“The models just do not have the efficacy,” Zitron said during our conversation. “AI agents is one of the most egregious lies the tech industry has ever told. Autonomous agents don’t exist.”

He contrasted the relatively small revenue generated by AI companies with the massive capital expenditures flowing into the sector. Even major cloud providers and chip makers are showing strain. Oracle reportedly lost $100 million in three months after installing Nvidia’s new Blackwell GPUs, which Zitron noted are “extremely power-hungry and expensive to run.”

Finding utility despite the hype

I pushed back against some of Zitron’s broader dismissals of AI by sharing my own experience. I use AI chatbots frequently for brainstorming useful ideas and helping me see them from different angles. “I find I use AI models as sort of knowledge translators and framework translators,” I explained.

After experiencing brain fog from repeated bouts of COVID over the years, I’ve also found tools like ChatGPT and Claude especially helpful for memory augmentation that pierces through brain fog: describing something in a roundabout, fuzzy way and quickly getting an answer I can then verify. Along these lines, I’ve previously written about how people in a UK study found AI assistants useful accessibility tools.

Zitron acknowledged this could be useful for me personally but declined to draw any larger conclusions from my one data point. “I understand how that might be helpful; that’s cool,” he said. “I’m glad that that helps you in that way; it’s not a trillion-dollar use case.”

He also shared his own attempts at using AI tools, including experimenting with Claude Code despite not being a coder himself.

“If I liked [AI] somehow, it would be actually a more interesting story because I’d be talking about something I liked that was also onerously expensive,” Zitron explained. “But it doesn’t even do that, and it’s actually one of my core frustrations, it’s like this massive over-promise thing. I’m an early adopter guy. I will buy early crap all the time. I bought an Apple Vision Pro, like, what more do you say there? I’m ready to accept issues, but AI is all issues, it’s all filler, no killer; it’s very strange.”

Zitron and I agree that current AI assistants are being marketed beyond their actual capabilities. As I often say, AI models are not people, and they are not good factual references. As such, they cannot replace human decision-making and cannot wholesale replace human intellectual labor (at the moment). Instead, I see AI models as augmentations of human capability: as tools rather than autonomous entities.

Computing costs: History versus reality

Even though Zitron and I found some common ground about AI hype, I expressed a belief that criticisms of the cost and power requirements of operating AI models will eventually become moot.

I attempted to make that case by noting that computing costs historically trend downward over time, referencing the Air Force’s SAGE computer system from the 1950s: a four-story building that performed 75,000 operations per second while consuming two megawatts of power. Today, pocket-sized phones deliver millions of times more computing power in a way that would be impossible, power consumption-wise, in the 1950s.
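The ratio involved is easy to sanity-check with rough numbers; the phone figures below are my order-of-magnitude assumptions, not something from the discussion.

```python
# Ops-per-watt: SAGE (figures above) vs. a modern phone SoC
# (assumed order-of-magnitude numbers).
sage_ops_per_sec = 75_000
sage_watts = 2_000_000                 # two megawatts

phone_ops_per_sec = 1e12               # ~a trillion simple ops/sec, assumed
phone_watts = 5                        # sustained SoC power, assumed

sage_eff = sage_ops_per_sec / sage_watts       # ~0.04 ops/sec per watt
phone_eff = phone_ops_per_sec / phone_watts    # ~2e11 ops/sec per watt
print(f"efficiency gain: ~{phone_eff / sage_eff:.0e}x ops per watt")
```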

The blockhouse for the Semi-Automatic Ground Environment at Stewart Air Force Base, Newburgh, New York. Credit: Denver Post via Getty Images

“I think it will eventually work that way,” I said, suggesting that AI inference costs might follow similar patterns of improvement over years and that AI tools will eventually become commodity components of computer operating systems. Basically, even if AI models stay inefficient, AI models of a certain baseline usefulness and capability will still be cheaper to train and run in the future because the computing systems they run on will be faster, cheaper, and less power-hungry as well.

Zitron pushed back on this optimism, saying that AI costs are currently moving in the wrong direction. “The costs are going up, unilaterally across the board,” he said. Even newer systems like Cerebras and Grok can generate results faster but not cheaper. He also questioned whether integrating AI into operating systems would prove useful even if the technology became profitable, since AI models struggle with deterministic commands and consistent behavior.

The power problem and circular investments

One of Zitron’s most pointed criticisms during the discussion centered on OpenAI’s infrastructure promises. The company has pledged to build data centers requiring 10 gigawatts of power capacity (equivalent to 10 nuclear power plants, I once pointed out) for its Stargate project in Abilene, Texas. According to Zitron’s research, the town currently has only 350 megawatts of generating capacity and a 200-megawatt substation.

“A gigawatt of power is a lot, and it’s not like Red Alert 2,” Zitron said, referencing the real-time strategy game. “You don’t just build a power station and it happens. There are months of actual physics to make sure that it doesn’t kill everyone.”

He believes many announced data centers will never be completed, calling the infrastructure promises “castles on sand” that nobody in the financial press seems willing to question directly.


After another technical blackout on my end, I came back online and asked Zitron to define the scope of the AI bubble. He says it has evolved from one bubble (foundation models) into two or three, now including AI compute companies like CoreWeave and the market’s obsession with Nvidia.

Zitron highlighted what he sees as essentially circular investment schemes propping up the industry. He pointed to OpenAI’s $300 billion deal with Oracle and Nvidia’s relationship with CoreWeave as examples. “CoreWeave, they literally… They funded CoreWeave, became their biggest customer, then CoreWeave took that contract and those GPUs and used them as collateral to raise debt to buy more GPUs,” Zitron explained.

When will the bubble pop?

Zitron predicted the bubble would burst within the next year and a half, though he acknowledged it could happen sooner. He expects a cascade of events rather than a single dramatic collapse: An AI startup will run out of money, triggering panic among other startups and their venture capital backers, creating a fire-sale environment that makes future fundraising impossible.

“It’s not gonna be one Bear Stearns moment,” Zitron explained. “It’s gonna be a succession of events until the markets freak out.”

The crux of the problem, according to Zitron, is Nvidia. The chipmaker’s stock represents 7 to 8 percent of the S&P 500’s value, and the broader market has become dependent on Nvidia’s continued hypergrowth. When Nvidia posted “only” 55 percent year-over-year growth in January, the market wobbled.

“Nvidia’s growth is why the bubble is inflated,” Zitron said. “If their growth goes down, the bubble will burst.”

He also warned of broader consequences: “I think there’s a depression coming. I think once the markets work out that tech doesn’t grow forever, they’re gonna flush the toilet aggressively on Silicon Valley.” This connects to his larger thesis: that the tech industry has run out of genuine hypergrowth opportunities and is trying to manufacture one with AI.

“Is there anything that would falsify your premise of this bubble and crash happening?” I asked. “What if you’re wrong?”

“I’ve been answering ‘What if you’re wrong?’ for a year-and-a-half to two years, so I’m not bothered by that question, so the thing that would have to prove me right would’ve already needed to happen,” he said. Amid a longer exposition about Sam Altman, Zitron said, “The thing that would’ve had to happen with inference would’ve had to be… it would have to be hundredths of a cent per million tokens, they would have to be printing money, and then, it would have to be way more useful. It would have to have efficacy that it does not have, the hallucination problems… would have to be fixable, and on top of this, someone would have to fix agents.”

A positivity challenge

Near the end of our conversation, I wondered if I could flip the script, so to speak, and see if he could say something positive or optimistic, although I chose the most challenging subject possible for him. “What’s the best thing about Sam Altman?” I asked. “Can you say anything nice about him at all?”

“I understand why you’re asking this,” Zitron started, “but I wanna be clear: Sam Altman is going to be the reason the markets take a crap. Sam Altman has lied to everyone. Sam Altman has been lying forever.” He continued, “Like the Pied Piper, he’s led the markets into an abyss, and yes, people should have known better, but I hope at the end of this, Sam Altman is seen for what he is, which is a con artist and a very successful one.”

Then he added, “You know what? I’ll say something nice about him, he’s really good at making people say, ‘Yes.’”


Benj Edwards is Ars Technica’s Senior AI Reporter and founder of the site’s dedicated AI beat in 2022. He’s also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.

Ars Live recap: Is the AI bubble about to pop? Ed Zitron weighs in. Read More »

3-years,-4-championships,-but-0-le-mans-wins:-assessing-the-porsche-963

3 years, 4 championships, but 0 Le Mans wins: Assessing the Porsche 963


Riding high in IMSA but pulling out of WEC paints a complicated picture for the factory team.

Porsche didn’t win this year’s Petit Le Mans, but the #6 Porsche Penske 963 won championships for the team, the manufacturer, and the drivers. Credit: Hoch Zwei/Porsche

The car world has long had a thing about numbers. Engine outputs. Top speeds. Zero-to-60 times. Displacement. But the numbers go beyond bench racing specs. Some cars have numbers for names, and few more memorably than Porsche. Its most famous model shares its appellation with the emergency services here in North America; although the car should accurately be “nine-eleven,” you call it “nine-one-one.”

Some numbers are less well-known, but perhaps more special to Porsche’s fans, especially those who like racing. 908. 917. 956. 962. 919. But how about 963?

That’s Porsche’s current sports prototype, a 670-hp (500 kW) hybrid that for the last three years has battled against rivals in what is starting to look like, if not a golden era for endurance racing, then at least a very purple patch. And the 963 has done well, racing here in IMSA’s WeatherTech SportsCar Championship and around the globe in the FIA World Endurance Championship.

In just three years since its competition debut at the Rolex 24 at Daytona in 2023, it has won 15 of the 49 races it has entered—most recently the WEC Lone Star Le Mans in Texas last month—and earned series championships in WEC (2023, 2024) and IMSA (2024, 2025), sealing the last of those this past weekend at the Petit Le Mans at Road Atlanta, a 10-hour race that caps IMSA’s season.

49 races, 15 wins. But not Le Mans… Credit: Hoch Zwei/Porsche

But the IMSA championships—for the drivers, the teams, and the Michelin Endurance Cup, as well as the manufacturers’ title in GTP—came just days after Porsche announced that its factory team would not enter WEC’s Hypercar category next year, halving the OEM’s prototype race program. And despite all those race wins, victory has eluded the 963 at Le Mans, where Ferrari’s 499P has shut it out for three years running.

Missing the big win?

Porsche pulling out of WEC doesn’t rule out a 963 win at Le Mans next year, as the championship-winning 963 has earned an invite to the race, and there is still a privateer 963 in the series. But the failure to win the big race has me wondering whether the 963 can join the pantheon of Porsche’s greatest racing cars without a Le Mans victory to cement its reputation. So I asked Urs Kuratle, director of factory motorsport LMDh at Porsche.

“Le Mans is one of the biggest car races in the world, independent from Porsche and the brands and the names and everything. So not winning this one is a—“bitter pill” is the wrong term, but obviously we would have loved to win this race. But we did not with the 963. We did with previous projects in LMP1h, but not with the 963,” Kuratle told me.

“But still, the 963 program is… a highly successful program because you named it—in the last year, we did not win one win in the championship, we won all of them. Because there’s several—the drivers’, manufacturers’, endurance, all these things—there’s many, many, many championships that the car won and also races. So the answer, basically, is it is a successful program. Not winning Le Mans with Porsche and Penske as well… I’m looking for the right term… it’s a pity,” he said.

The #7 Porsche Penske won the Michelin Endurance Cup this year. Credit: Hoch Zwei/Porsche

Was LMDh the right move?

During the heady days of LMP1h, a complicated rulebook sought to create an equivalence of technology between wildly disparate approaches to hybrid race cars that included diesels, mechanical flywheels, and supercapacitors, as well as the more usual gasoline engines and lithium-ion batteries. The cars were technological marvels; unfettered, Porsche’s 919 was almost as fast as an F1 car—and almost as expensive.

These days, costs are more firmly under control, and equivalence of technology has given way to balance of performance to level the playing field. It’s a controversial topic. IMSA and the ACO, which writes the WEC and Le Mans rules, have different approaches to BoP, and the latter has had a perhaps more complicated—or more political—job as it combines cars built to two different rulebooks.

Some, like Ferrari, Peugeot, Toyota, and Aston Martin, build their entire car themselves to the Le Mans Hypercar (LMH) rules, which were written by the organizers of Le Mans and WEC. Others, like Porsche, Acura, Alpine, BMW, Cadillac, and Lamborghini, chose the Le Mans Daytona h (LMDh) rules, written in the US by IMSA. LMDh cars have to start off with one of four approved chassis or spines and must also use the same Bosch hybrid motor and electronics, the same Xtrac transmission, and the same WAE battery, with the rest being provided by the OEM.

Even before the introduction of LMH and LMDh, I wondered whether the LMDh cars would really be given a fair shake at the most important endurance race of the year, considering the organizers of that race wrote an alternative set of technical regulations. In 2025, a Porsche nearly did win, so I’m not sure there is any inherent bias or “not invented here” syndrome, but I asked Kuratle if, in hindsight, Porsche might have gone the “do it all yourself” route of LMH, as Ferrari did.

“If you would have the chance starting on a white piece of paper again, knowing what you know now, you obviously would do many things different. That, I believe, is the nature of a competitive environment we are in,” he told me.

“We have many things not under our control, which is not a criticism on Bosch or all the standard components, manufacturer, suppliers,” Kuratle said. “It’s not a criticism at all, but it’s just the fact that, if there are certain things we would like to change for the 963, for example, the suppliers, they cannot do it because they have to do the same thing for the others as well, and they may not agree to this.”

“They are complicated cars, yes, this is true. But it’s not by the performance numbers; the LMP1 hybrid systems were way more efficient but also [more] performant than the system here. But the [spec components are] the way [they are] for good reasons, and that makes it more complicated,” he said.

North America is a very important market for Porsche, so we may see the 963 race here for the next few years. Credit: Hoch Zwei/Porsche

What’s next?

The factory 963s will make their final WEC appearance at the series’ last round in Bahrain in a few weeks, but a continued IMSA effort for 2026 is assured, and there are several 963s in the hands of privateer teams. Meanwhile, discussions are ongoing between IMSA, the ACO, and manufacturers on a unified technical rulebook, probably for 2030.

Porsche is known to be a part of those discussions—the head of Porsche Motorsport spoke to The Race in September about them—but Kuratle wasn’t prepared to discuss the next Porsche racing prototype.

“A brand like Porsche is always thinking about the next project they may do. Obviously, we cannot talk about whatever we don’t know yet,” Kuratle said. But it should probably have something that can feed back into the cars that Porsche sells.

“If you look at the new Porsche turbo models, the concept is slightly different, but that comes very, very close to what the LMP1 hybrid system and concept was. So there’s all these things to go back into the road car side, so the experience is crucial,” he said.


Jonathan is the Automotive Editor at Ars Technica. He has a BSc and PhD in Pharmacology. In 2014 he decided to indulge his lifelong passion for the car by leaving the National Human Genome Research Institute and launching Ars Technica’s automotive coverage. He lives in Washington, DC.

3 years, 4 championships, but 0 Le Mans wins: Assessing the Porsche 963 Read More »