Features


Yes, everything online sucks now—but it doesn’t have to


from good to bad to nothing

Ars chats with Cory Doctorow about his new book Enshittification.

We all feel it: Our once-happy digital spaces have become increasingly less user-friendly and more toxic, cluttered with extras nobody asked for and hardly anybody wants. There’s even a word for it: “enshittification,” named 2023 Word of the Year by the American Dialect Society. The term was coined by tech journalist/science fiction author Cory Doctorow, a longtime advocate of digital rights. Doctorow has spun his analysis of what’s been ailing the tech industry into an eminently readable new book, Enshittification: Why Everything Suddenly Got Worse and What To Do About It.

As Doctorow tells it, he was on vacation in Puerto Rico, staying in a remote cabin nestled in a cloud forest with microwave Internet service—i.e., very bad Internet service, since microwave signals struggle to penetrate through clouds. It was a 90-minute drive to town, but when they tried to consult TripAdvisor for good local places to have dinner one night, they couldn’t get the site to load. “All you would get is the little TripAdvisor logo as an SVG filling your whole tab and nothing else,” Doctorow told Ars. “So I tweeted, ‘Has anyone at TripAdvisor ever been on a trip? This is the most enshittified website I’ve ever used.’”

Initially, he just got a few “haha, that’s a funny word” responses. “It was when I married that to this technical critique, at a moment when things were quite visibly bad to a much larger group of people, that made it take off,” Doctorow said. “I didn’t deliberately set out to do it. I bought a million lottery tickets and one of them won the lottery. It only took two decades.”

Yes, people sometimes express regret to him that the term includes a swear word. To which he responds, “You’re welcome to come up with another word. I’ve tried. ‘Platform decay’ just isn’t as good.” (“Encrapification” and “enpoopification” also lack a certain je ne sais quoi.)

In fact, it’s the sweariness that people love about the word. While that also means his book title inevitably gets bleeped on broadcast radio, “The hosts, in my experience, love getting their engineers to creatively bleep it,” said Doctorow. “They find it funny. It’s good radio, it stands out when every fifth word is ‘enbeepification.’”

People generally use “enshittification” colloquially to mean “the degradation in the quality and experience of online platforms over time.” Doctorow’s definition is more specific, encompassing “why an online service gets worse, how that worsening unfolds,” and how this process spreads to other online services, such that everything is getting worse all at once.

For Doctorow, enshittification is a disease with symptoms, a mechanism, and an epidemiology. It has infected everything from Facebook, Twitter, Amazon, and Google, to Airbnb, dating apps, iPhones, and everything in between. “For me, the fact that there were a lot of platforms that were going through this at the same time is one of the most interesting and important factors in the critique,” he said. “It makes this a structural issue and not a series of individual issues.”

It starts with the creation of a new two-sided online product of high quality, initially offered at a loss to attract users—say, Facebook, to pick an obvious example. Once the users are hooked on the product, the vendor moves to the second stage: degrading the product in some way for the benefit of their business customers. This might include selling advertisements, scraping and/or selling user data, or tweaking algorithms to prioritize content the vendor wishes users to see rather than what those users actually want.

This locks in the business customers, who, in turn, invest heavily in that product, such as media companies that started Facebook pages to promote their published content. Once business customers are locked in, the vendor can degrade those services too—e.g., by de-emphasizing news and links that lead away from Facebook—to maximize profits to shareholders. Voila! The product is now enshittified.

The four horsemen of the shitocalypse

Doctorow identifies four key factors that have played a role in ushering in an era that he has dubbed the “Enshittocene.” The first is competition (markets), in which companies are motivated to make good products at affordable prices, with good working conditions, because otherwise customers and workers will go to their competitors.  The second is government regulation, such as antitrust laws that serve to keep corporate consolidation in check, or levying fines for dishonest practices, which makes it unprofitable to cheat.

The third is interoperability: the inherent flexibility of digital tools, which can play a useful adversarial role. “The fact that enshittification can always be reversed with a dis-enshittifying counter-technology always acted as a brake on the worst impulses of tech companies,” Doctorow writes. Finally, there is labor power; in the case of the tech industry, highly skilled workers were scarce and thus had considerable leverage over employers.

All four factors, when functioning correctly, should serve as constraints to enshittification. However, “One by one each enshittification restraint was eroded until it dissolved, leaving the enshittification impulse unchecked,” Doctorow writes. Any “cure” will require reversing those well-established trends.

But isn’t all this just the nature of capitalism? Doctorow thinks it’s not, arguing that the aforementioned weakening of traditional constraints has resulted in the usual profit-seeking behavior producing very different, enshittified outcomes. “Adam Smith has this famous passage in Wealth of Nations about how it’s not due to the generosity of the baker that we get our bread but to his own self-regard,” said Doctorow. “It’s the fear that you’ll get your bread somewhere else that makes him keep prices low and keep quality high. It’s the fear of his employees leaving that makes him pay them a fair wage. It is the constraints that cause firms to behave better. You don’t have to believe that everything should be a capitalist or a for-profit enterprise to acknowledge that that’s true.”

Our wide-ranging conversation below has been edited for length to highlight the main points of discussion.

Ars Technica: I was intrigued by your choice of framing device, discussing enshittification as a form of contagion. 

Cory Doctorow: I’m on a constant search for different framing devices for these complex arguments. I have talked about enshittification in lots of different ways. That frame was one that resonated with people. I’ve been a blogger for a quarter of a century, and instead of keeping notes to myself, I make notes in public, and I write up what I think is important about something that has entered my mind, for better or for worse. The downside is that you’re constantly getting feedback that can be a little overwhelming. The upside is that you’re constantly getting feedback, and if you pay attention, it tells you where to go next, what to double down on.

Another way of organizing this is the Galaxy Brain meme, where the tiny brain is “Oh, this is because consumers shopped wrong.” The medium brain is “This is because VCs are greedy.” The larger brain is “This is because tech bosses are assholes.” But the biggest brain of all is “This is because policymakers created the policy environment where greed can ruin our lives.” There’s probably never going to be just one way to talk about this stuff that lands with everyone. So I like using a variety of approaches. I suck at being on message. I’m not going to do Enshittification for the Soul and Mornings with Enshittifying Maury. I am restless, and my Myers-Briggs type is ADHD, and I want to have a lot of different ways of talking about this stuff.

Ars Technica: One site that hasn’t (yet) succumbed is Wikipedia. What has protected Wikipedia thus far? 

Cory Doctorow: Wikipedia is an amazing example of what we at the Electronic Frontier Foundation (EFF) call the public interest Internet. Internet Archive is another one. Most of these public interest Internet services start off as one person’s labor of love, and that person ends up being what we affectionately call the benevolent dictator for life. Very few of these projects have seen the benevolent dictator for life say, “Actually, this is too important for one person to run. I cannot be the keeper of the soul of this project. I am prone to self-deception and folly just like every other person. This needs to belong to its community.” Wikipedia is one of them. The founder, my friend Jimmy Wales, woke up one day and said, “No individual should run Wikipedia. It should be a communal effort.”

There’s a much more durable and thick constraint on the decisions of anyone at Wikipedia to do something bad. For example, Jimmy had this idea that you could use AI in Wikipedia to help people make entries and navigate Wikipedia’s policies, which are daunting. The community evaluated his arguments and decided—not in a reactionary way, but in a really thoughtful way—that this was wrong. Jimmy didn’t get his way. It didn’t rule out something in the future, but that’s not happening now. That’s pretty cool.

Wikipedia is not just governed by a board; it’s also structured as a nonprofit. That doesn’t mean that there’s no way it could go bad. But it’s a source of friction against enshittification. Wikipedia has its entire corpus irrevocably licensed as the most open it can be without actually being in the public domain. Even if someone were to capture Wikipedia, there’s limits on what they could do to it.

There’s also a labor constraint in Wikipedia in that there’s very little that the leadership can do without bringing along a critical mass of a large and diffuse body of volunteers. That cuts against the volunteers working in unison—they’re not represented by a union; it’s hard for them to push back with one voice. But because they’re so diffuse and because there’s no paychecks involved, it’s really hard for management to do bad things. So if there are two people vying for the job of running the Wikimedia Foundation and one of them has got nefarious plans and the other doesn’t, the nefarious plan person, if they’re smart, is going to give it up—because if they try to squeeze Wikipedia, the harder they squeeze, the more it will slip through their grasp.

So these are structural defenses against enshittification of Wikipedia. I don’t know that it was in the mechanism design—I think they just got lucky—but it is a template for how to run such a project. It does raise this question: How do you build the community? But if you have a community of volunteers around a project, it’s a model of how to turn that project over to that community.

Ars Technica: Your case studies naturally include the decay of social media, notably Facebook and the social media site formerly known as Twitter. How might newer social media platforms resist the spiral into “platform decay”?

Cory Doctorow: What you want is a foundation in which people on social media face few switching costs. If the social media is interoperable, if it’s federatable, then it’s much harder for management to make decisions that are antithetical to the interests of users. If they do, users can escape. And it sets up an internal dynamic within the firm, where the people who have good ideas don’t get shouted down by the people who have bad but more profitable ideas, because it makes those bad ideas unprofitable. It creates both short and long-term risks to the bottom line.

There has to be a structure that stops their investors from pressurizing them into doing bad things, that stops them from rationalizing their way into complying. I think there’s this pathology where you start a company, you convince 150 of your friends to risk their kids’ college fund and their mortgage working for you. You make millions of users really happy, and your investors come along and say, “You have to destroy the life of 5 percent of your users with some change.” And you’re like, “Well, I guess the right thing to do here is to sacrifice those 5 percent, keep the other 95 percent happy, and live to fight another day, because I’m a good guy. If I quit over this, they’ll just put a bad guy in who’ll wreck things. I keep those 150 people working. Not only that, I’m kind of a martyr because everyone thinks I’m a dick for doing this. No one understands that I have taken the tough decision.”

I think that’s a common pattern among people who, in fact, are quite ethical but are also capable of rationalizing their way into bad things. I am very capable of rationalizing my way into bad things. This is not an indictment of someone’s character. But it’s why, before you go on a diet, you throw away the Oreos. It’s why you bind yourself to what behavioral economists call “Ulysses pacts”: You tie yourself to the mast before you go into the sea of sirens, not because you’re weak but because you’re strong enough now to know that you’ll be weak in the future.

I have what I would call the epistemic humility to say that I don’t know what makes a good social media network, but I do know what makes it so that when they go bad, you’re not stuck there. You and I might want totally different things out of our social media experience, but I think that you should 100 percent have the right to go somewhere else without losing anything. The easier it is for you to go without losing something, the better it is for all of us.

My dream is a social media universe where knowing what network someone is using is just a weird curiosity. It’d be like knowing which cell phone carrier your friend is using when you give them a call. It should just not matter. There might be regional or technical reasons to use one network or another, but it shouldn’t matter to anyone other than the user what network they’re using. A social media platform where it’s always easier for users to leave is much more future-proof and much more effective than trying to design characteristics of good social media.

Ars Technica: How might this work in practice?

Cory Doctorow: I think you just need a protocol. This is [Mike] Masnick’s point: protocols, not products. We don’t need a universal app to make email work. We don’t need a universal app to make the web work. I always think about this in the context of administrable regulation. Making a rule that says your social media network must be good for people to use and must not harm their mental health is impossible. The fact-intensive nature of determining whether a platform satisfies that rule makes it a non-starter.

Whereas if you were to say, “OK, you have to support an existing federation protocol, like AT Protocol and Mastodon ActivityPub,” both have ways to port identity from one place to another and have messages auto-forward. This is also in RSS. There’s a permanent redirect directive. You do that, you’re in compliance with the regulation.
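
Both of those mechanisms ultimately rest on a very old web primitive: a permanent redirect telling clients that a resource has moved for good. Below is a minimal sketch, using only Python’s standard library, of the RSS-style case Doctorow mentions; the paths and target URL are hypothetical, and a real feed migration would also update the feed’s own metadata.

```python
# A minimal sketch of a "permanent redirect" for a feed that has moved. An HTTP 301
# tells well-behaved feed readers to update their stored subscription URL, which is
# the kind of forwarding behavior described above. Paths and URLs are hypothetical.
from http.server import BaseHTTPRequestHandler, HTTPServer

OLD_FEED_PATH = "/feed.xml"                        # hypothetical old feed location
NEW_FEED_URL = "https://new.example.com/feed.xml"  # hypothetical new home


class FeedRedirect(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == OLD_FEED_PATH:
            # 301 Moved Permanently: clients that honor it re-point their subscription.
            self.send_response(301)
            self.send_header("Location", NEW_FEED_URL)
            self.end_headers()
        else:
            self.send_response(404)
            self.end_headers()


if __name__ == "__main__":
    HTTPServer(("", 8080), FeedRedirect).serve_forever()
```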

Or you have to do something that satisfies the functional requirements of the spec. So it’s not “did you make someone sad in a way that was reckless?” That is a very hard question to adjudicate. Did you satisfy these functional requirements? It’s not easy to answer that, but it’s not impossible. If you want to have our users be able to move to your platform, then you just have to support the spec that we’ve come up with, which satisfies these functional requirements.

We don’t have to have just one protocol. We can have multiple ones. Not everything has to connect to everything else, but everyone who wants to connect should be able to connect to everyone else who wants to connect. That’s end-to-end. End-to-end is not “you are required to listen to everything someone wants to tell you.” It’s that willing parties should be connected when they want to be.

Ars Technica: What about security and privacy protocols like GPG and PGP?

Cory Doctorow: There’s this argument that the reason GPG is so hard to use is that it’s intrinsic; you need a closed system to make it work. But also, until pretty recently, GPG was supported by one part-time guy in Germany who got 30,000 euros a year in donations to work on it, and he was supporting 20 million users. He was primarily interested in making sure the system was secure rather than making it usable. If you were to put Big Tech quantities of money behind improving ease of use for GPG, maybe you decide it’s a dead end because it is a 30-year-old attempt to stick a security layer on top of SMTP. Maybe there’s better ways of doing it. But I doubt that we have reached the apex of GPG usability with one part-time volunteer.

I just think there’s plenty of room there. If you have a pretty good project that is run by a large firm and has had billions of dollars put into it, the most advanced technologists and UI experts working on it, and you’ve got another project that has never been funded and has only had one volunteer on it—I would assume that dedicating resources to that second one would produce pretty substantial dividends, whereas the first one is only going to produce these minor tweaks. How much more usable does iOS get with every iteration?

I don’t know if PGP is the right place to start to make privacy, but I do think that if we can create independence of the security layer from the transport layer, which is what PGP is trying to do, then it wouldn’t matter so much that there is end-to-end encryption in Mastodon DMs or in Bluesky DMs. And again, it doesn’t matter whose SIM is in your phone, so it just shouldn’t matter which platform you’re using so long as it’s secure and reliably delivered end-to-end.

Ars Technica: These days, I’m almost contractually required to ask about AI. There’s no escaping it. But it’s certainly part of the ongoing enshittification.

Cory Doctorow: I agree. Again, the companies are too big to care. They know you’re locked in, and the things that make enshittification possible—like remote software updating and ongoing analytics on how devices are used—allow for the most annoying AI dysfunction. I call it the fat-finger economy, where you have someone who works in a company on a product team, and their KPI, and therefore their bonus and compensation, is tied to getting you to use AI a certain number of times. So they just look at the analytics for the app and they ask, “What button gets pushed the most often? Let’s move that button somewhere else and make an AI summoning button.”

They’re just gaming a metric. It’s causing significant across-the-board regressions in the quality of the product, and I don’t think it’s justified by people who then discover a new use for the AI. That’s a paternalistic justification. The user doesn’t know what they want until you show it to them: “Oh, if I trick you into using it and you keep using it, then I have actually done you a favor.” I don’t think that’s happening. I don’t think people are like, “Oh, rather than press reply to a message and then type a message, I can instead have this interaction with an AI about how to send someone a message about takeout for dinner tonight.” I think people are like, “That was terrible. I regret having tapped it.” 

The speech-to-text is unusable now. I flatter myself that my spoken and written communication is not statistically average. The things that make it me and that make it worth having, as opposed to just a series of multiple-choice answers, are all the ways in which it diverges from statistical averages. Back when the model was stupider, when it gave up sooner if it didn’t recognize what word it might be and just transcribed what it thought you’d said rather than trying to substitute a more probable word, it was more accurate. Now, what I’m getting are statistically average words that are meaningless.

That elision of nuance and detail is characteristic of what makes AI products bad. There is a bunch of stuff that AI is good at that I’m excited about, and I think a lot of it is going to survive the bubble popping. But I fear that we’re not planning for that. I fear what we’re doing is taking workers whose jobs are meaningful, replacing them with AIs that can’t do their jobs, and then those AIs are going to go away and we’ll have nothing. That’s my concern.

Ars Technica: You prescribe a “cure” for enshittification, but in such a polarized political environment, do we even have the collective will to implement the necessary policies?

Cory Doctorow: The good news is also the bad news, which is that this doesn’t just affect tech. Take labor power. There are a lot of tech workers who are looking at the way their bosses treat the workers they’re not afraid of—Amazon warehouse workers and drivers, Chinese assembly line manufacturers for iPhones—and realizing, “Oh, wait, when my boss stops being afraid of me, this is how he’s going to treat me.” Mark Zuckerberg stopped going to those all-hands town hall meetings with the engineering staff. He’s not pretending that you are his peers anymore. He doesn’t need to; he’s got a critical mass of unemployed workers he can tap into. I think a lot of Googlers figured this out after the 12,000-person layoffs. Tech workers are realizing they missed an opportunity, that they’re going to have to play catch-up, and that the only way to get there is by solidarity with other kinds of workers.

The same goes for competition. There’s a bunch of people who care about media, who are watching Warner about to swallow Paramount and who are saying, “Oh, this is bad. We need antitrust enforcement here.” When we had a functional antitrust system for the last four years, we saw a bunch of telecom mergers stopped because once you start enforcing antitrust, it’s like eating Pringles. You just can’t stop. You embolden a lot of people to start thinking about market structure as a source of either good or bad policy. The real thing that happened with [former FTC chair] Lina Khan doing all that merger scrutiny was that people just stopped planning mergers.

There are a lot of people who benefit from this. It’s not just tech workers or tech users; it’s not just media users. Hospital consolidation, pharmaceutical consolidation, has a lot of people who are very concerned about it. Mark Cuban is freaking out about pharmacy benefit manager consolidation and vertical integration with HMOs, as he should be. I don’t think that we’re just asking the anti-enshittification world to carry this weight.

Same with the other factors. The best progress we’ve seen on interoperability has been through right-to-repair. It hasn’t been through people who care about social media interoperability. One of the first really good state-level right-to-repair bills was the one that [Governor] Jared Polis signed in Colorado for powered wheelchairs. Those people have a story that is much more salient to normies: “What do you mean you spent six months in bed because there’s only two powered wheelchair manufacturers and your chair broke and you weren’t allowed to get it fixed by a third party?” And they’ve slashed their repair department, so it takes six months for someone to show up and fix your chair. So you had bed sores and pneumonia because you couldn’t get your chair fixed. This is bullshit.

So the coalitions are quite large. The thing that all of those forces share—interoperability, labor power, regulation, and competition—is that they’re all downstream of corporate consolidation and wealth inequality. Figuring out how to bring all of those different voices together, that’s how we resolve this. In many ways, the enshittification analysis and remedy are a human factors and security approach to designing an enshittification-resistant Internet. It’s about understanding this as a red team, blue team exercise. How do we challenge the status quo that we have now, and how do we defend the status quo that we want?

Anything that can’t go on forever eventually stops. That is the first law of finance, Stein’s law. We are reaching multiple breaking points, and the question is whether we reach things like breaking points for the climate and for our political system before we reach breaking points for the forces that would rescue those from permanent destruction.


Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.



Inside the web infrastructure revolt over Google’s AI Overviews


Cloudflare CEO Matthew Prince is making sweeping changes to force Google’s hand.

It could be a consequential act of quiet regulation. Cloudflare, a web infrastructure company, has updated millions of websites’ robots.txt files in an effort to force Google to change how it crawls them to fuel its AI products and initiatives.

We spoke with Cloudflare CEO Matthew Prince about what exactly is going on here, why it matters, and what the web might soon look like. But to get into that, we need to cover a little background first.

The new change, which Cloudflare calls its Content Signals Policy, came after publishers and other companies that depend on web traffic cried foul over Google’s AI Overviews and similar AI answer engines, saying those tools sharply cut their path to revenue because they don’t send traffic back to the source of the information.

There have been lawsuits, efforts to kick-start new marketplaces to ensure compensation, and more—but few companies have the kind of leverage Cloudflare does. Its products and services back something close to 20 percent of the web, and thus a significant slice of the websites that show up on search results pages or that fuel large language models.

“Almost every reasonable AI company that’s out there is saying, listen, if it’s a fair playing field, then we’re happy to pay for content,” Prince said. “The problem is that all of them are terrified of Google because if Google gets content for free but they all have to pay for it, they are always going to be at an inherent disadvantage.”

This is happening because Google is using its dominant position in search to ensure that web publishers allow their content to be used in ways they might not otherwise permit.

The changing norms of the web

Since 2023, Google has offered a way for website administrators to opt their content out of use for training Google’s large language models, such as Gemini.

However, allowing pages to be indexed by Google’s search crawlers and shown in results requires accepting that they’ll also be used to generate AI Overviews at the top of results pages through a process called retrieval-augmented generation (RAG).

That’s not so for many other crawlers, making Google an outlier among major players.

This is a sore point for a wide range of website administrators, from news websites that publish journalism to investment banks that produce research reports.

A July study from the Pew Research Center analyzed data from 900 adults in the US and found that AI Overviews cut referrals nearly in half. Specifically, users clicked a link on a page with AI Overviews at the top just 8 percent of the time, compared to 15 percent for search engine results pages without those summaries.

And a report in The Wall Street Journal cited a wide range of sources—including internal traffic metrics from numerous major publications like The New York Times and Business Insider—to describe industry-wide plummets in website traffic that those publishers said were tied to AI summaries, leading to layoffs and strategic shifts.

In August, Google’s head of search, Liz Reid, disputed the validity and applicability of studies and publisher reports of reduced link clicks in search. “Overall, total organic click volume from Google Search to websites has been relatively stable year-over-year,” she wrote, going on to say that reports of big declines were “often based on flawed methodologies, isolated examples, or traffic changes that occurred prior to the rollout of AI features in Search.”

Publishers aren’t convinced. Penske Media Corporation, which owns brands like The Hollywood Reporter and Rolling Stone, sued Google over AI Overviews in September. The suit claims that affiliate link revenue has dropped by more than a third in the past year, due in large part to Google’s overviews—a threatening shortfall in a business with already-thin margins.

Penske’s suit specifically noted that because Google bundles traditional search engine indexing and RAG use together, Penske has no choice but to allow Google to keep summarizing its articles, as cutting off Google search referrals entirely would be financially fatal.

Since the earliest days of digital publishing, referrals have in one way or another acted as the backbone of the web’s economy. Content could be made available freely to both human readers and crawlers, and norms were applied across the web to allow information to be tracked back to its source and give that source an opportunity to monetize its content to sustain itself.

Today, there’s a panic that the old system isn’t working anymore as content summaries via RAG have become more common, and along with other players, Cloudflare is trying to update those norms to reflect the current reality.

A mass-scale update to robots.txt

Announced on September 24, Cloudflare’s Content Signals Policy is an effort to use the company’s influential market position to change how content is used by web crawlers. It involves updating millions of websites’ robots.txt files.

Starting in 1994, websites began placing a file called “robots.txt” at the domain root to indicate to automated web crawlers which parts of the domain should be crawled and indexed and which should be ignored. The standard became near-universal over the years; honoring it has been a key part of how Google’s web crawlers operate.

Historically, robots.txt has simply contained a list of paths on the domain flagged as either “allow” or “disallow.” It was technically not enforceable, but it became an effective honor system because there are advantages to it for the owners of both the website and the crawler: Website owners could dictate access for various business reasons, and it helped crawlers avoid working through data that wouldn’t be relevant.

But robots.txt only tells crawlers whether they can access something at all; it doesn’t tell them what they can use it for. For example, Google supports disallowing the agent “Google-Extended” as a path to blocking crawlers that are looking for content with which to train future versions of its Gemini large language model—though introducing that rule doesn’t do anything about the training Google did before it rolled out Google-Extended in 2023, and it doesn’t stop crawling for RAG and AI Overviews.
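
To make the classic format concrete, here is an illustrative robots.txt, emitted by a short Python script, that allows general crawling but opts out of Gemini training via the documented Google-Extended token. The site paths are hypothetical, and—as noted above—this opt-out does nothing about crawling for search indexing or AI Overviews.

```python
# Illustrative only: write a robots.txt that uses the long-standing allow/disallow
# format plus Google's published "Google-Extended" token to opt out of model training.
# The site paths here are hypothetical.
ROBOTS_TXT = """\
# Classic access rules for all crawlers
User-agent: *
Disallow: /private/
Allow: /

# Opt out of use for training Google's generative models. Per the article above,
# this does not stop crawling for search indexing or for AI Overviews (RAG).
User-agent: Google-Extended
Disallow: /
"""

with open("robots.txt", "w", encoding="utf-8") as f:
    f.write(ROBOTS_TXT)
```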

The Content Signals Policy initiative is a newly proposed format for robots.txt that intends to do that. It allows website operators to opt in or out of consenting to the following use cases, as worded in the policy:

  • search: Building a search index and providing search results (e.g., returning hyperlinks and short excerpts from your website’s contents). Search does not include providing AI-generated search summaries.
  • ai-input: Inputting content into one or more AI models (e.g., retrieval augmented generation, grounding, or other real-time taking of content for generative AI search answers).
  • ai-train: Training or fine-tuning AI models.

Cloudflare has given all of its customers quick paths for setting those values on a case-by-case basis. Further, it has automatically updated robots.txt on the 3.8 million domains that already use Cloudflare’s managed robots.txt feature, with search defaulting to yes, ai-train to no, and ai-input blank, indicating a neutral position.
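
A sketch of what those defaults might look like in a site’s robots.txt is below. The “Content-Signal” directive name and comma-separated key=value syntax are an assumption based on the policy description above rather than a verbatim copy of Cloudflare’s managed file; the authoritative wording lives in the published Content Signals Policy.

```python
# A sketch (directive name and syntax are assumptions, not Cloudflare's verbatim file)
# of robots.txt content signals matching the defaults described above: search allowed,
# ai-train refused, and ai-input left unset to signal a neutral position.
CONTENT_SIGNALS_ROBOTS = """\
# The content signals below express how this site consents to its content being used.
User-agent: *
Content-Signal: search=yes, ai-train=no
Allow: /
"""

print(CONTENT_SIGNALS_ROBOTS)
```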

The threat of potential litigation

In making this look a bit like a terms of service agreement, Cloudflare’s goal is explicitly to put legal pressure on Google to change its policy of bundling traditional search crawlers and AI Overviews.

“Make no mistake, the legal team at Google is looking at this saying, ‘Huh, that’s now something that we have to actively choose to ignore across a significant portion of the web,'” Prince told me.

Cloudflare specifically made this look like a license agreement. Credit: Cloudflare

He further characterized this as an effort to get a company that he says has historically been “largely a good actor” and a “patron of the web” to go back to doing the right thing.

“Inside of Google, there is a fight where there are people who are saying we should change how we’re doing this,” he explained. “And there are other people saying, no, that gives up our inherent advantage, we have a God-given right to all the content on the Internet.”

Amid that debate, lawyers have sway at Google, so Cloudflare tried to design tools “that made it very clear that if they were going to follow any of these sites, there was a clear license which was in place for them. And that will create risk for them if they don’t follow it,” Prince said.

The next web paradigm

It takes a company with Cloudflare’s scale to do something like this with any hope that it will have an impact. If just a few websites made this change, Google would have an easier time ignoring it, or worse yet, it could simply stop crawling them to avoid the problem. Since Cloudflare is entangled with millions of websites, Google couldn’t do that without materially impacting the quality of the search experience.

Cloudflare has a vested interest in the general health of the web, but there are other strategic considerations at play, too. The company has been working on tools to assist with RAG on customers’ websites in partnership with Microsoft-owned Google competitor Bing and has experimented with a marketplace that provides a way for websites to charge crawlers for scraping the sites for AI, though what final form that might take is still unclear.

I asked Prince directly if this comes from a place of conviction. “There are very few times that opportunities come along where you get to help think through what a future better business model of an organization or institution as large as the Internet and as important as the Internet is,” he said. “As we do that, I think that we should all be thinking about what have we learned that was good about the Internet in the past and what have we learned that was bad about the Internet in the past.”

It’s important to acknowledge that we don’t yet know what the future business model of the web will look like. Cloudflare itself has ideas. Others have proposed new standards, marketplaces, and strategies, too. There will be winners and losers, and those won’t always be the same winners and losers we saw in the previous paradigm.

What most people seem to agree on, whatever their individual incentives, is that Google shouldn’t get to come out on top in a future answer-engine-driven web paradigm just because it previously established dominance in the search-engine-driven one.

For this new standard for robots.txt, success looks like Google allowing content to be available in search but not in AI Overviews. Whatever the long-term vision, and whether it happens because of Cloudflare’s pressure with the Content Signals Policy or some other driving force, most agree that it would be a good start.


Samuel Axon is the editorial lead for tech and gaming coverage at Ars Technica. He covers AI, software development, gaming, entertainment, and mixed reality. He has been writing about gaming and technology for nearly two decades at Engadget, PC World, Mashable, Vice, Polygon, Wired, and others. He previously ran a marketing and PR agency in the gaming industry, led editorial for the TV network CBS, and worked on social media marketing strategy for Samsung Mobile at the creative agency SPCSHP. He also is an independent software and game developer for iOS, Windows, and other platforms, and he is a graduate of DePaul University, where he studied interactive media and software development.



ROG Xbox Ally X: The Ars Technica review


You got Xbox in my portable gaming PC

The first portable “Xbox” fails to unify a messy world of competing PC gaming platforms.

The ROG Ally X sure looks great floating in a void… Credit: Asus

Here at Ars, we have been writing about rumors of a portable Xbox for literal decades now. With the ROG Xbox Ally, Microsoft has finally made those rumors a reality in the weirdest, most Microsoft way possible.

Yes, the $600 ROG Xbox Ally—and its souped-up cousin, the $1,000, ridiculous-mouthful-of-a-name ROG Xbox Ally X, which we tested—are the first official handheld hardware to sport the Xbox brand name. But Microsoft isn’t taking the exclusive-heavy, walled garden software approach that it has been committed to for nearly 25 years of Xbox home consoles. Instead, the ROG Xbox Ally is, at its base, simply a new version of Asus’ Windows-based ROG Ally line with an Xbox-flavored coat of paint.

That coat of paint—what Microsoft is calling the Xbox Full-screen Experience (FSE)—represents the company’s belated attempt to streamline the Windows gaming experience to be a bit more console-like in terms of user interface and overall simplicity. While that’s a worthy vision, the execution in these early days is so spotty and riddled with annoyances that it’s hard to recommend over the SteamOS-based competition.

Promises, promises

When Microsoft announced the ROG Xbox Ally this summer, the company promised that what it was calling a new “Xbox Experience for Handheld” would “minimize background activity and defer non-essential tasks” usually present in Windows, meaning “more [and] higher framerates” for gaming. While this is technically true, the performance improvement is so small as to be almost meaningless in practice.

In our testing, in-game benchmarks running under the Xbox Full Screen Experience were ever so slightly faster than those same benchmarks running under the full Windows 11 in Desktop Mode (which you can switch to with a few button presses on the ROG Xbox Ally). And when we say “ever so slightly,” we mean less than a single frame per second improvement in many cases and only one or two frames per second at most. Even on a percentage basis, the difference will be practically unnoticeable.

Comparing ROG Xbox Ally X benchmarks on the Xbox Full-screen Experience (FSE) and the standard Windows 11 desktop. Here’s Doom: The Dark Ages. Kyle Orland

The other major selling point of Microsoft’s Xbox FSE, as sold this summer, is an “aggregated gaming library” that includes “all of the games available on Windows” in one single mega-launcher interface. That means apps like Steam, Battle.net, GOG Galaxy, Ubisoft Connect, and EA Play can all be installed with just a click from the “My Apps” section of the FSE from the first launch.

The integration of these apps into the wider Xbox FSE is spotty at best, though. For one, the new “aggregate gaming library” can’t actually show you every game you own across all of these PC gaming platforms in one place. Choosing the “installable” games filter on the Xbox FSE only shows you the games you can access through Microsoft’s own Xbox platform (including any Xbox Game Pass subscription). For other platforms, you must still browse and install the games you own through their own apps, each with their own distinct interfaces that don’t always play well with the ROG Xbox Ally’s button-based controls.

The home screen shows your most recent titles while also offering some ads for other available titles. Kyle Orland / Asus

Even games that are supposed to be installable directly via the Xbox FSE caused me problems in testing. Trying to load the EA Play app to install any number of games included with Xbox Game Pass, for instance, triggered an “authentication error” page with no option to actually log in to EA’s servers. This problem persisted across multiple restarts and reinstallations of the extension that’s supposed to link EA Play to the Xbox FSE. And while I could load the EA Play app via Desktop Mode, I couldn’t get the app to recognize that I had an active Xbox Game Pass subscription to grant me access to the titles I wanted. So much for testing Battlefield, I guess.

Mo’ launchers, mo’ problems

Once you’ve gone to the trouble of installing your favorite games on the ROG Xbox Ally, the Xbox FSE does a good job of aggregating their listings in a single common interface. For players whose gaming libraries are spread across multiple platforms, it can be genuinely useful to see an FSE game list where Battle.net’s Hearthstone sits next to a GOG copy of Cyberpunk 2077, a Steam copy of Hades II, and an Epic Games Store copy of Fortnite, for instance. A quick tap of the Xbox button will even show you the last three games you played, regardless of where you launched them (alongside quick access to some useful general settings).

The “My Games” listing shows everything you’ve installed, regardless of the source. Credit: Kyle Orland / Asus

Unfortunately, actually playing those games via the new Xbox FSE is far from a seamless experience. You never quite know what you’re going to get when you hit the large, green “Play” button to launch a third-party platform via the Xbox FSE. All too frequently, in fact, you’ll get no immediate outward sign for multiple seconds that you did anything, leaving you to wonder if your button press even registered.

In the best case, this long wait will eventually culminate in a separate launcher popping up and eventually loading your game (or possibly popping up and down multiple times as it cycles through necessary launcher updates). In a slightly annoying case, the launcher might require you to close some pop-up and/or manually hit another on-screen button to launch the game (if you’re playing with the console docked to a TV, this may be downright impossible without a mouse plugged in). In the worst case, the wait might stretch to 30 seconds or more before you think to check the App Switcher and realize that Battle.net actually launched in the background and is waiting for you to input your username and password (to cite just one of many frustratingly counterintuitive examples I encountered).

Tapping the Xbox button brings up this helpful overlay no matter where you are in a game or app. Kyle Orland / Asus

Sometimes, switching from one active game to another via the handy Xbox button will pop up a warning that you should close the first game before opening a new one. Quite often, though, that pop-up warning simply fails to appear, forcing you to go to the trouble of manually closing the first game if you don’t want it eating up resources in the background. But in cases where you do want to multitask—downloading a game on Steam while playing something via the Epic Games Store, for instance—you can never be quite sure if the background app will actually keep doing what you want when it isn’t in focus via the FSE. Sometimes it does, sometimes it doesn’t.

There are plenty of other little annoyances that make using the Xbox FSE more painful than it should be. Sometimes the system will swap between third-party launchers or between a launcher and the Xbox FSE for seemingly no reason, interrupting your flow. When the Xbox app itself needs updating, it does so via the desktop version of the Windows Update settings menu, which isn’t really designed for controllers (but which will offer to let you install the latest version of Notepad). Sometimes, Steam’s Big Picture Mode would have the very top and bottom of the full-screen interface cut off for no apparent reason. The system will frequently freeze on a menu for multiple seconds and refuse to respond to any input, especially when loading or closing an outside launcher. I could go on.

If you want to update the Xbox app itself, you still have to go through this Windows Update screen outside of the FSE. Credit: Kyle Orland / Asus

I’m willing to cut Microsoft a bit of slack here. It’s hard to bring the fragmented landscape of competing PC gaming platforms and storefronts together into a single unified interface that provides a cohesive user experience. In a sense, Microsoft is trapped in that XKCD comic where people complain about 14 different competing standards and end up creating a 15th competing standard in the process of trying to unify them.

At the same time, Microsoft sold the Xbox FSE largely on the promise of an “aggregated gaming library” that makes this all simple. Instead, the Full-screen Experience we got papers over the fragmented nature of Windows gaming while causing new problems all its own.

A powerful machine

Xbox FSE aside, there’s a lot to like about the ROG Xbox Ally line from a hardware design perspective. I tested the ROG Xbox Ally X, which is a little thicker and heavier than the Steam Deck but ends up riding that fine line between “solidly built” and “dense brick” pretty well in the hands.

Out of the box. Kyle Orland

I was especially impressed with the design of the ROG Xbox Ally’s hand grips, textured ovoid bumps that slot perfectly into the crook of the palm for comfortable extended gaming sessions. The overall build quality shines through, too, from nicely springy analog sticks and shoulder triggers to extremely powerful, bass-heavy speakers and excellently clicky face buttons (which can be a bit loud when playing next to a sleeping partner). There are also some nice rear buttons that are incredibly easy to nudge with a small flick of your middle finger, if you’re one of those people who never wants to move your thumbs off the analog sticks.

The ROG Xbox Ally’s 7-inch 1080p screen is crisp enough and looks especially nice when delivering steady 120 fps performance on games like Hollow Knight: Silksong. But the maximum brightness of 500 nits is a bit dim if you’re going to be playing in direct sunlight. I also found myself missing the deep blacks and pop of HDR color I’ve gotten used to on my Steam Deck OLED.

The AMD Ryzen Z2 Extreme chip in the ROG Xbox Ally X delivers all the relative gaming horsepower you’d hope for from a $1,000 PC gaming handheld. I was able to hit 30 fps or more running a recent release like Doom: The Dark Ages at 1080p and High graphical settings. For a slightly older game like Cyberpunk 2077, the chip could even handle the “Ray-tracing Low” graphics preset at 1080p resolution at an acceptable frame rate when plugged into an outlet.

If you’re away from a power source, though, pushing the hardware for high-end graphical performance like that does take its toll on the battery. Titles that required the hardware’s preset “Turbo Mode”—which tends to run the noisy fan at full blast to keep internal temperature reasonable—could drain a fully charged battery in around two hours in the worst case. Switching over to the power-sipping but less graphically powerful Silent Mode (which can be accessed with a single button press using a handy “Armoury Crate Command Center” overlay) will extend that to about five or six hours of play time at the expense of the frame rate for high-end games.

The SteamOS elephant in the room

In a bubble, it would be easy to see the ROG Xbox Ally X as a promising, if flawed, early attempt to merge the simplicity of console gaming with the openness of PC gaming in a nice handheld form factor. Here in the real world, though, we’ve been enjoying a much more refined version of that same idea via Valve’s SteamOS and Steam Deck for years.

With SteamOS, I don’t need to worry about which launcher I’ll use to install or update a game. I don’t need to manually close background programs or wait multiple seconds to see an on-screen response when I hit the “Play” button. I don’t need to worry about whether a Settings menu will require me to use a mouse or touchscreen. Everything just works on SteamOS in a way I can’t rely on with the Xbox FSE.

Yes, Microsoft can brag that the Xbox FSE supports every Windows game, while SteamOS is limited to Valve’s walled garden. But this is barely an advantage for many if not most PC gamers, who have been launching their games via Steam more or less exclusively for decades now. Even companies with their own platforms and launchers often offer compatible versions of their biggest titles on Steam these days, a tacit acknowledgement of the social-network lock-in Valve has over the market for whole generations of PC gamers.

The Xbox “Full Screen Experience” can also be a windowed experience from the Windows desktop. Credit: Kyle Orland / Asus

Sure, the ROG Xbox Ally can play a handful of games that aren’t available via SteamOS for one reason or another. If you want to play Fortnite, Destiny, Battlefield, or Diablo III portably, the ROG Xbox Ally is a decent solution. Ditto for web-based games (playable here via Edge), games available via niche platforms like itch.io, or even titles you might install directly to the Windows desktop without an outside platform’s launcher (remember those?). And even for games available on SteamOS, the ROG Xbox Ally can give you access to the free or cheap versions you obtained from sales or offers on other platforms (don’t sleep on Amazon’s Prime Gaming offers if you like free GOG codes).

The killer app for the ROG Xbox Ally, though, is Xbox Game Pass. If you subscribe to Microsoft’s popular gaming service, logging in to your account on the ROG Xbox Ally means seeing your “installable” library instantly fill up with hundreds of games spanning a huge swath of the recent history of PC gaming. That’s an especially nice feeling for PC gaming newcomers who haven’t spent years digging through regular Steam sales to build up a sizable backlog.

If you have Xbox Game Pass, your ROG Xbox Ally will have a ton of available games from day one. Credit: Kyle Orland / Asus

The recent price hike to $30 a month for Xbox Game Pass Ultimate definitely makes this a less compelling proposition. But ROG Xbox Ally users can probably get by with the $16.49/month “Xbox Game Pass for PC” plan, which offers over 500 installable games and access to new first-party Microsoft releases. As a way to dip your toe into the wide world of PC gaming, it’s hard to beat.

Even for players who aren’t interested in Xbox Game Pass, the ROG Xbox Ally X is a well-built piece of hardware with the power to run today’s games pretty well. All things considered, though, the poor user experience of the Xbox FSE makes it hard to recommend either ROG Xbox Ally over somewhat less powerful SteamOS devices like the Steam Deck or Legion Go S. That said, we hope Microsoft will continue refining the Xbox Full-screen Experience to make for a Windows gaming experience that lives up to its promise.


Kyle Orland has been the Senior Gaming Editor at Ars Technica since 2012, writing primarily about the business, tech, and culture behind video games. He has journalism and computer science degrees from University of Maryland. He once wrote a whole book about Minesweeper.



Why Signal’s post-quantum makeover is an amazing engineering achievement


COMING TO A PHONE NEAR YOU

New design sets a high standard for post-quantum readiness.

Credit: Aurich Lawson | Getty Images

The encryption protecting communications against criminal and nation-state snooping is under threat. As private industry and governments get closer to building useful quantum computers, the algorithms protecting Bitcoin wallets, encrypted Web visits, and other sensitive secrets will be useless. No one doubts the day will come, but as the now-common joke in cryptography circles observes, experts have been forecasting this cryptocalypse will arrive in the next 15 to 30 years for the past 30 years.

The uncertainty has created something of an existential dilemma: Should network architects spend the billions of dollars required to wean themselves off quantum-vulnerable algorithms now, or should they prioritize their limited security budgets fighting more immediate threats such as ransomware and espionage attacks? Given the expense and no clear deadline, it’s little wonder that less than half of all TLS connections made inside the Cloudflare network are quantum-resistant and that only 18 percent of Fortune 500 networks support quantum-resistant TLS connections. It’s all but certain that far fewer organizations support quantum-ready encryption in less prominent protocols.

Triumph of the cypherpunks

One exception to the industry-wide lethargy is the engineering team that designs the Signal Protocol, the open-source engine that powers the world’s most robust and resilient form of end-to-end encryption for multiple private chat apps, most notably the Signal Messenger. Eleven days ago, the nonprofit entity that develops the protocol, Signal Messenger LLC, published a 5,900-word write-up describing its latest updates that make Signal fully quantum-resistant.

The complexity and problem-solving required for making the Signal Protocol quantum safe are as daunting as just about any in modern-day engineering. The original Signal Protocol already resembled the inside of a fine Swiss timepiece, with countless gears, wheels, springs, hands, and other parts all interoperating in an intricate way. In less adept hands, mucking about with an instrument as complex as the Signal protocol could have led to shortcuts or unintended consequences that hurt performance, undoing what would otherwise be a perfectly running watch. Yet this latest post-quantum upgrade (the first one came in 2023) is nothing short of a triumph.

“This appears to be a solid, thoughtful improvement to the existing Signal Protocol,” said Brian LaMacchia, a cryptography engineer who oversaw Microsoft’s post-quantum transition from 2015 to 2022 and now works at Farcaster Consulting Group. “As part of this work, Signal has done some interesting optimization under the hood so as to minimize the network performance impact of adding the post-quantum feature.”

Of the multiple hurdles to clear, the most challenging was accounting for the much larger key sizes that quantum-resistant algorithms require. The overhaul here adds protections based on ML-KEM-768, an implementation of the CRYSTALS-Kyber algorithm that was selected in 2022 and formalized last year by the National Institute of Standards and Technology. ML-KEM is short for Module-Lattice-Based Key-Encapsulation Mechanism, but most of the time, cryptographers refer to it simply as KEM.

Ratchets, ping-pong, and asynchrony

Like the Elliptic Curve Diffie-Hellman (ECDH) protocol that Signal has used since its start, KEM is a key encapsulation mechanism. Also known as a key agreement mechanism, it provides the means for two parties who have never met to securely agree on one or more shared secrets in the presence of an adversary who is monitoring the parties’ connection. RSA, ECDH, and other encapsulation algorithms have long been used to negotiate symmetric keys (almost always AES keys) in protocols including TLS, SSH, and IKE. Unlike ECDH and RSA, however, the much newer KEM is quantum-safe.
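
To make that flow concrete, here is a schematic sketch of the KEM round trip. The mlkem768_* names are hypothetical placeholders passed in as arguments rather than a real library API; only the shape of the exchange—keygen, encapsulate, decapsulate, shared secret—is the point.

```python
# Schematic sketch of a KEM round trip. The mlkem768_* callables are hypothetical
# stand-ins for an ML-KEM-768 implementation, not a real library API.

def kem_round_trip(mlkem768_keygen, mlkem768_encaps, mlkem768_decaps):
    # Receiver generates a keypair and publishes the public (encapsulation) key.
    public_key, secret_key = mlkem768_keygen()

    # Sender encapsulates against that public key, producing a shared secret plus a
    # ciphertext that can travel over an untrusted, monitored channel.
    ciphertext, sender_shared_secret = mlkem768_encaps(public_key)

    # Receiver decapsulates the ciphertext with the secret key and recovers the secret.
    receiver_shared_secret = mlkem768_decaps(secret_key, ciphertext)

    assert sender_shared_secret == receiver_shared_secret
    # Either side can now feed the secret into a KDF to derive symmetric (e.g., AES) keys.
    return sender_shared_secret
```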

Key agreement in a protocol like TLS is relatively straightforward. That’s because devices connecting over TLS negotiate a key over a single handshake that occurs at the beginning of a session. The agreed-upon AES key is then used throughout the session. The Signal Protocol is different. Unlike TLS sessions, Signal sessions are protected by forward secrecy, a cryptographic property ensuring that a compromised key protecting a recent set of messages can’t be used to decrypt an earlier set of messages. The protocol also offers Post-Compromise Security, which protects future messages from past key compromises. While a TLS session uses the same key throughout, keys within a Signal session constantly evolve.

To provide these confidentiality guarantees, the Signal Protocol updates secret key material each time a party hits the send button or receives a message, as well as at other points, such as when sending typing indicators and read receipts. The mechanism that has made this constant key evolution possible over the past decade is what protocol developers call a “double ratchet.” Just as a mechanical ratchet allows a gear to rotate in one direction but not the other, the Signal ratchets allow messaging parties to create new keys from a combination of preceding and newly agreed-upon secrets. The ratchets turn in a single direction: toward future messages. Even if an adversary compromises a newly created secret, messages encrypted with older secrets can’t be decrypted.

The starting point is a handshake that performs three or four ECDH agreements, mixing long- and short-term secrets to establish a shared secret. The creation of this “root key” allows the Double Ratchet to begin. Until 2023, that handshake used X3DH; it now uses PQXDH, which makes the initial key agreement quantum-resistant.

The first layer of the Double Ratchet, the Symmetric Ratchet, derives an AES key from the root key and advances it for every message sent. This allows every message to be encrypted with a new secret key. Consequently, if attackers compromise one party’s device, they won’t be able to learn anything about the keys that came earlier. Even then, though, the attackers would still be able to compute the keys used in future messages. That’s where the second, “Diffie-Hellman ratchet” comes in.
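
The one-way nature of the symmetric ratchet can be sketched in a few lines of Python. The labels and structure below are simplified stand-ins rather than Signal’s exact derivation; the point is that each step runs the chain key through a one-way function, so recovering a later key reveals nothing about earlier ones.

```python
import hashlib
import hmac


def advance_chain(chain_key: bytes) -> tuple[bytes, bytes]:
    # One ratchet step: derive a per-message key and the next chain key.
    # Distinct labels keep the two outputs independent, and because HMAC is
    # one-way, a leaked chain key says nothing about the keys that preceded it.
    message_key = hmac.new(chain_key, b"\x01", hashlib.sha256).digest()
    next_chain_key = hmac.new(chain_key, b"\x02", hashlib.sha256).digest()
    return message_key, next_chain_key


chain_key = b"\x00" * 32  # stand-in; the real chain key is derived from the root key
for i in range(3):
    message_key, chain_key = advance_chain(chain_key)
    print(f"message {i}: key {message_key.hex()[:16]}...")
```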

The Diffie-Hellman ratchet incorporates a new ECDH public key into each message sent. Take Alice and Bob, the fictional characters often used to explain asymmetric encryption: When Alice sends Bob a message, she creates a new ratchet keypair and computes the ECDH agreement between this key and the last ratchet public key Bob sent. This gives her a new secret, and she knows that once Bob gets her new public key, he will be able to compute the same secret (because he holds the private half of the ratchet key he previously sent). With that, Alice can mix the new secret with her old root key to get a new root key and start fresh. The result: Attackers who learn her old secrets won’t be able to tell the difference between her new ratchet keys and random noise.
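
Here is a hedged sketch of one such sending step using X25519 and HKDF from the pyca/cryptography package. The key names and the mixing function are simplified for illustration and do not mirror Signal’s implementation exactly.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF


def mix_root(root_key: bytes, dh_output: bytes) -> bytes:
    # Fold a fresh DH secret into the current root key to produce a new root key.
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=root_key,
                info=b"dh-ratchet-step").derive(dh_output)


root_key = b"\x11" * 32                      # stand-in for the current shared root key
bob_ratchet = X25519PrivateKey.generate()    # Bob's latest ratchet key; Alice knows the public half

# Alice's sending step: new keypair, DH against Bob's last public key, mix into the root.
alice_ratchet = X25519PrivateKey.generate()
new_root_alice = mix_root(root_key, alice_ratchet.exchange(bob_ratchet.public_key()))

# Once Bob receives Alice's new public key, he computes the same DH and lands on the same root.
new_root_bob = mix_root(root_key, bob_ratchet.exchange(alice_ratchet.public_key()))
assert new_root_alice == new_root_bob
```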

The result is what Signal developers describe as “ping-pong” behavior, as the parties to a discussion take turns replacing ratchet key pairs one at a time. The effect: An eavesdropper who compromises one of the parties might recover a current ratchet private key, but soon enough, that private key will be replaced with a new, uncompromised one, and in a way that keeps it free from the prying eyes of the attacker.

The objective of the constantly regenerated keys is to limit the number of messages that can be decrypted if an adversary recovers key material at some point in an ongoing chat. Messages sent before the compromise, and those sent after the ratchets turn over again, remain off limits.

A major challenge designers of the Signal Protocol face is the need to make the ratchets work in an asynchronous environment. Asynchronous messages occur when parties send or receive them at different times—such as while one is offline and the other is active, or vice versa—without either needing to be present or respond immediately. The entire Signal Protocol must work within this asynchronous environment. What’s more, it must work reliably over unstable networks and networks controlled by adversaries, such as a government that forces a telecom or cloud service to spy on the traffic.

Shor’s algorithm lurking

By all accounts, Signal’s double ratchet design is state-of-the-art. That said, it’s wide open to an inevitable if not immediate threat: quantum computing. An adversary capable of monitoring traffic passing between two or more messenger users can capture that data today and, once a quantum computer of sufficient power is viable, feed it into that machine to calculate the ephemeral keys generated in the second ratchet.

With classical computing, it’s infeasible for such an adversary to calculate the key. Like all asymmetric encryption algorithms, ECDH is built on a mathematical one-way function: a problem that is trivial to compute in one direction and vastly harder to compute in reverse. In elliptic-curve cryptography, that one-way function is the discrete logarithm problem, with key material derived from specific points on an elliptic curve defined over the integers modulo some large prime P.

On average, an adversary equipped with only a classical computer would spend billions of years guessing integers before arriving at the right ones. A sufficiently large quantum computer, by contrast, could calculate the correct integers in a matter of hours or days. A method known as Shor’s algorithm—which runs only on a quantum computer—effectively turns the one-way discrete logarithm problem into a two-way one. Shor’s algorithm can similarly make quick work of the one-way function that underpins RSA.
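
A toy Python example shows why the discrete logarithm behaves like a one-way street for classical machines: modular exponentiation in the forward direction is instant, while going backward means searching. The numbers below are deliberately tiny; real key sizes make the search loop hopeless, which is exactly the step Shor’s algorithm short-circuits.

```python
# Forward direction: fast even for enormous numbers (pow does modular exponentiation).
p = 2_147_483_647          # a small Mersenne prime; real groups are astronomically larger
g = 5                      # base for the demo
secret_x = 1_234_567
public_y = pow(g, secret_x, p)


def discrete_log_bruteforce(base: int, target: int, modulus: int) -> int:
    # Reverse direction: try exponents one by one until the target value is reached.
    value = 1
    for exponent in range(modulus):
        if value == target:
            return exponent
        value = (value * base) % modulus
    raise ValueError("no solution found")


found = discrete_log_bruteforce(g, public_y, p)
assert pow(g, found, p) == public_y
print(found)   # feasible only because p is tiny; scale p up and the loop never finishes
```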

As noted earlier, the Signal Protocol received its first post-quantum makeover in 2023. That update added PQXDH—a Signal-specific construction that combines the elliptic-curve key agreement used in X3DH (specifically X25519) with the quantum-safe KEM—to the initial protocol handshake. (X3DH was then put out to pasture as a standalone implementation.)

The move foreclosed the possibility of a quantum attack being able to recover the symmetric key used to start the ratchets, but the ephemeral keys established in the ping-ponging second ratchet remained vulnerable to a quantum attack. Signal’s latest update adds quantum resistance to these keys, ensuring that forward secrecy and post-compromise security are safe from Shor’s algorithm as well.

Even though the ping-ponging keys are vulnerable to future quantum attacks, they are broadly believed to be secure against today’s attacks from classical computers. The Signal Protocol developers didn’t want to remove them or the battle-tested code that produces them. That led to their decision to add quantum resistance by adding a third ratchet. This one uses a quantum-safe KEM to produce new secrets much like the Diffie-Hellman ratchet did before, ensuring quantum-safe, post-compromise security.

The technical challenges were anything but easy. Elliptic-curve public keys in the X25519 implementation are 32 bytes long, small enough to be added to each message without burdening already constrained bandwidth or computing resources. An ML-KEM-768 public key, by contrast, is 1,184 bytes. Additionally, Signal’s design requires sending both an encapsulation key and a ciphertext, bringing the total to 2,272 bytes.

And then there were three

To handle the 71x increase, Signal developers considered a variety of options. One was to send the 2,272-byte KEM payload less often—say every 50th message or once a week—rather than with every message. That idea was nixed because it doesn’t work well in asynchronous or adversarial messaging environments. Signal Protocol developers Graeme Connell and Rolfe Schmidt explained:

Consider the case of “send a key if you haven’t sent one in a week”. If Bob has been offline for 2 weeks, what does Alice do when she wants to send a message? What happens if we can lose messages, and we lose the one in fifty that contains a new key? Or, what happens if there’s an attacker in the middle that wants to stop us from generating new secrets, and can look for messages that are [many] bytes larger than the others and drop them, only allowing keyless messages through?

Another option Signal engineers considered was breaking the 2,272-byte payload into smaller chunks, say 71 of them at 32 bytes each, and putting one in each message. That sounds viable at first, but once again, the asynchronous environment of messaging made the naive version unworkable. What happens, for example, when data loss causes one of the chunks to be dropped? The protocol could cope by cycling through the chunks again after all 71 had been sent. But then an adversary monitoring the traffic could simply cause chunk 3 to be dropped each time, preventing Alice and Bob from ever completing the key exchange.

Signal developers ultimately went with a solution built on this multiple-chunks approach, with added machinery to make it robust to lost and delayed messages.

Sneaking an elephant through the cat door

To manage the asynchrony challenges, the developers turned to “erasure codes,” a method of breaking up larger data into smaller pieces such that the original can be reconstructed using any sufficiently sized subset of chunks.

Charlie Jacomme, a researcher at INRIA Nancy on the Pesto team who focuses on formal verification and secure messaging, said this design accounts for packet loss by building redundancy into the chunked material. Instead of requiring all x chunks to arrive before the key can be reconstructed, the scheme needs only x-y of them, where y is the number of lost packets it can tolerate. As long as that threshold is met, the new key can be established even when packet loss occurs.
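
The sketch below illustrates the general idea with the simplest possible redundancy, a single XOR parity chunk that lets any one lost chunk be rebuilt. It is only a toy; Signal’s scheme uses proper erasure codes, which generalize this so that any y of the chunks can go missing.

```python
from functools import reduce


def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))


def encode(data: bytes, n_chunks: int) -> list:
    # Split data into n_chunks equal pieces, then append one XOR parity chunk.
    size = -(-len(data) // n_chunks)                       # ceiling division
    padded = data.ljust(size * n_chunks, b"\x00")
    chunks = [padded[i * size:(i + 1) * size] for i in range(n_chunks)]
    return chunks + [reduce(xor_bytes, chunks)]


def decode(received: list, original_len: int) -> bytes:
    # Reconstruct the original even if any single chunk arrived as None (lost).
    missing = [i for i, chunk in enumerate(received) if chunk is None]
    assert len(missing) <= 1, "this toy code only survives one lost chunk"
    if missing:
        present = [chunk for chunk in received if chunk is not None]
        received[missing[0]] = reduce(xor_bytes, present)  # XOR of the rest restores it
    return b"".join(received[:-1])[:original_len]          # drop parity and padding


key_material = bytes(range(96))                            # stand-in for a large KEM payload
sent = encode(key_material, n_chunks=6)
sent[3] = None                                             # simulate one dropped message
assert decode(sent, len(key_material)) == key_material
```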

The other part of the design was to split the KEM computations into smaller steps. These KEM computations are distinct from the KEM key material.

As Jacomme explained it:

Essentially, a small part of the public key is enough to start computing and sending a bigger part of the ciphertext, so you can quickly send in parallel the rest of the public key and the beginning of the ciphertext. Essentially, the final computations are equal to the standard, but some stuff was parallelized.

All this in fact plays a role in the end security guarantees, because by optimizing the fact that KEM computations are done faster, you introduce in your key derivation fresh secrets more frequently.

Signal’s post from 10 days ago includes several images that illustrate this design.

While the design solved the asynchronous messaging problem, it created a new complication of its own: The new quantum-safe ratchet advanced so quickly that it couldn’t be kept synchronized with the Diffie-Hellman ratchet. Ultimately, the architects settled on a creative solution. Rather than bolt KEM onto the existing double ratchet, they left the double ratchet more or less as it was and used the new quantum-safe ratchet to run a parallel secure messaging system alongside it.

Now, when the protocol encrypts a message, it sources encryption keys from both the classic Double Ratchet and the new ratchet. It then mixes the two keys together (using a cryptographic key derivation function) to get a new encryption key that has all of the security of the classical Double Ratchet but now has quantum security, too.
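
A minimal sketch of that mixing step, assuming HKDF as the key derivation function (Signal’s actual construction differs in its details): an attacker would need the outputs of both ratchets to predict the resulting message key.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF


def combine(classical_key: bytes, post_quantum_key: bytes) -> bytes:
    # Derive the message encryption key from both ratchets' outputs; breaking the
    # elliptic-curve ratchet alone (say, with a quantum computer) is not enough.
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"hybrid-ratchet-demo").derive(classical_key + post_quantum_key)


double_ratchet_key = b"\xaa" * 32   # stand-in for the classic Double Ratchet output
spqr_key = b"\xbb" * 32             # stand-in for the quantum-safe ratchet output
message_key = combine(double_ratchet_key, spqr_key)
print(message_key.hex())
```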

The Signal engineers have given this third ratchet a formal name: the Sparse Post-Quantum Ratchet, or SPQR for short. It was designed in collaboration with PQShield, AIST, and New York University. The developers presented the erasure-code-based chunking and the high-level Triple Ratchet design at the Eurocrypt 2025 conference. At the USENIX 2025 conference, they discussed the six options they considered for adding quantum-safe forward secrecy and post-compromise security and explained why SPQR and one other stood out. Presentations at the NIST PQC Standardization Conference and the Cryptographic Applications Workshop explain the details of the chunking, the design challenges, and how the protocol had to be adapted to use the standardized ML-KEM.

Jacomme further observed:

The final thing interesting for the triple ratchet is that it nicely combines the best of both worlds. Between two users, you have a classical DH-based ratchet going on one side, and fully independently, a KEM-based ratchet is going on. Then, whenever you need to encrypt something, you get a key from both, and mix it up to get the actual encryption key. So, even if one ratchet is fully broken, be it because there is now a quantum computer, or because somebody manages to break either elliptic curves or ML-KEM, or because the implementation of one is flawed, or…, the Signal message will still be protected by the second ratchet. In a sense, this update can be seen, of course simplifying, as doubling the security of the ratchet part of Signal, and is a cool thing even for people that don’t care about quantum computers.

As both Signal and Jacomme noted, users of Signal and other messengers relying on the Signal Protocol need not concern themselves with any of these new designs. To paraphrase a certain device maker, it just works.

In the coming weeks or months, various messaging apps and app versions will be updated to add the triple ratchet. Until then, apps will simply rely on the double ratchet as they always have. Once apps receive the update, they’ll look and behave exactly as they did before; the change is entirely under the hood.

For those who care about the internal workings of their Signal-based apps, though, the architects have documented in great depth the design of this new ratchet and how it behaves. Among other things, the work includes a mathematical proof verifying that the updated Signal protocol provides the claimed security properties.

Outside researchers are applauding the work.

“If the normal encrypted messages we use are cats, then post-quantum ciphertexts are elephants,” Matt Green, a cryptography expert at Johns Hopkins University, wrote in an interview. “So the problem here is to sneak an elephant through a tunnel designed for cats. And that’s an amazing engineering achievement. But it also makes me wish we didn’t have to deal with elephants.”

Dan Goodin is Senior Security Editor at Ars Technica, where he oversees coverage of malware, computer espionage, botnets, hardware hacking, encryption, and passwords. In his spare time, he enjoys gardening, cooking, and following the independent music scene. Dan is based in San Francisco. Follow him here on Mastodon and here on Bluesky. Contact him on Signal at DanArs.82.

Why Signal’s post-quantum makeover is an amazing engineering achievement Read More »

“like-putting-on-glasses-for-the-first-time”—how-ai-improves-earthquake-detection

“Like putting on glasses for the first time”—how AI improves earthquake detection


AI is “comically good” at detecting small earthquakes—here’s why that matters.

Credit: Aurich Lawson | Getty Images

On January 1, 2008, at 1:59 am in Calipatria, California, an earthquake happened. You haven’t heard of this earthquake; even if you had been living in Calipatria, you wouldn’t have felt anything. It was magnitude -0.53, about the same amount of shaking as a truck passing by. Still, this earthquake is notable, not because it was large but because it was small—and yet we know about it.

Over the past seven years, AI tools adapted from computer vision have almost completely automated one of the fundamental tasks of seismology: detecting earthquakes. What used to be the task of human analysts—and later, simpler computer programs—can now be done automatically and quickly by machine-learning tools.

These machine-learning tools can detect smaller earthquakes than human analysts, especially in noisy environments like cities. Earthquakes give valuable information about the composition of the Earth and what hazards might occur in the future.

“In the best-case scenario, when you adopt these new techniques, even on the same old data, it’s kind of like putting on glasses for the first time, and you can see the leaves on the trees,” said Kyle Bradley, co-author of the Earthquake Insights newsletter.

I talked with several earthquake scientists, and they all agreed that machine-learning methods have replaced humans for the better in these specific tasks.

“It’s really remarkable,” Judith Hubbard, a Cornell University professor and Bradley’s co-author, told me.

Less certain is what comes next. Earthquake detection is a fundamental part of seismology, but there are many other data processing tasks that have yet to be disrupted. The biggest potential impacts, all the way to earthquake forecasting, haven’t materialized yet.

“It really was a revolution,” said Joe Byrnes, a professor at the University of Texas at Dallas. “But the revolution is ongoing.”

When an earthquake happens in one place, the shaking passes through the ground, similar to how sound waves pass through the air. In both cases, it’s possible to draw inferences about the materials the waves pass through.

Imagine tapping a wall to figure out if it’s hollow. Because a solid wall vibrates differently than a hollow wall, you can figure out the structure by sound.

With earthquakes, this same principle holds. Seismic waves pass through different materials (rock, oil, magma, etc.) differently, and scientists use these vibrations to image the Earth’s interior.

The main tools that scientists traditionally use are seismometers, which record the movement of the Earth in three directions: up–down, north–south, and east–west. If an earthquake happens, a seismometer measures the shaking at its particular location.

An old-fashioned physical seismometer. Today, seismometers record data digitally. Credit: Yamaguchi先生 on Wikimedia CC BY-SA 3.0

Scientists then process raw seismometer information to identify earthquakes.

Earthquakes produce multiple types of shaking, which travel at different speeds. Two types, primary (P) waves and secondary (S) waves, are particularly important, and scientists like to identify the start of each of these phases.

Before good algorithms, earthquake cataloging had to happen by hand. Byrnes said that “traditionally, something like the lab at the United States Geological Survey would have an army of mostly undergraduate students or interns looking at seismograms.”

However, there are only so many earthquakes you can find and classify manually. Creating algorithms to effectively find and process earthquakes has long been a priority in the field—especially since the arrival of computers in the early 1950s.

“The field of seismology historically has always advanced as computing has advanced,” Bradley told me.

There’s a big challenge with traditional algorithms, though: They can’t easily find smaller quakes, especially in noisy environments.

Composite seismogram of common events. Note how each event has a slightly different shape. Credit: EarthScope Consortium CC BY 4.0

As we see in the seismogram above, many different events can cause seismic signals. If a method is too sensitive, it risks falsely detecting events as earthquakes. The problem is especially bad in cities, where the constant hum of traffic and buildings can drown out small earthquakes.

However, earthquakes have a characteristic “shape.” The magnitude 7.7 earthquake above looks quite different from the helicopter landing, for instance.

So one idea scientists had was to make templates from human-labeled datasets. If a new waveform correlates closely with an existing template, it’s almost certainly an earthquake.
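
A stripped-down NumPy sketch of that idea: slide a labeled template across a continuous recording and flag the spots where the normalized correlation is high. Real matched-filter pipelines run many templates against three-component data at once; this single-channel toy just shows the mechanics.

```python
import numpy as np


def normalized_xcorr(signal: np.ndarray, template: np.ndarray) -> np.ndarray:
    # Correlation coefficient of the template against every same-length window of the signal.
    n = len(template)
    t = (template - template.mean()) / template.std()
    scores = np.empty(len(signal) - n + 1)
    for i in range(len(scores)):
        window = signal[i:i + n]
        window = (window - window.mean()) / (window.std() + 1e-12)
        scores[i] = float(window @ t) / n
    return scores


rng = np.random.default_rng(0)
template = np.sin(np.linspace(0, 12 * np.pi, 200)) * np.exp(-np.linspace(0, 6, 200))
recording = 0.3 * rng.standard_normal(5000)
recording[3000:3200] += template                 # hide a copy of the event in the noise

scores = normalized_xcorr(recording, template)
print(int(np.argmax(scores)), scores.max())      # best match should land near sample 3000
```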

Template matching works very well if you have enough human-labeled examples. In 2019, Zach Ross’ lab at Caltech used template matching to find 10 times as many earthquakes in Southern California as had previously been known, including the earthquake at the start of this story. Almost all of the new 1.6 million quakes they found were very small, magnitude 1 and below.

If you don’t have an extensive pre-existing dataset of templates, however, you can’t easily apply template matching. That isn’t a problem in Southern California—which already had a basically complete record of earthquakes down to magnitude 1.7—but it’s a challenge elsewhere.

Also, template matching is computationally expensive. Creating a Southern California quake dataset using template matching took 200 Nvidia P100 GPUs running for days on end.

There had to be a better way.

AI detection models solve all of these problems:

  • They are faster than template matching.

  • Because AI detection models are very small (around 350,000 parameters, compared to billions in LLMs like GPT-4), they can be run on consumer CPUs.

  • AI models generalize well to regions not represented in the original dataset.

As an added bonus, AI models can give better information about when the different types of earthquake shaking arrive. Timing the arrivals of the two most important waves—P and S waves—is called phase picking. It allows scientists to draw inferences about the structure of the quake. AI models can do this alongside earthquake detection.

The basic task of earthquake detection (and phase picking) looks like this:

Cropped figure from Earthquake Transformer—an attentive deep-learning model for simultaneous earthquake detection and phase picking. Credit: Nature Communications

The first three rows represent different directions of vibration (east–west, north–south, and up–down respectively). Given these three dimensions of vibration, can we determine if an earthquake occurred, and if so, when it started?

We want to detect the initial P wave, which arrives directly from the site of the earthquake. But this can be tricky because echoes of the P wave may get reflected off other rock layers and arrive later, making the waveform more complicated.

Ideally, then, our model outputs three things at every time step in the sample:

  1. The probability that an earthquake is occurring at that moment.

  2. The probability that the first P wave arrives at that moment.

  3. The probability that the first S wave arrives at that moment.

We see all three outputs in the fourth row: the detection in green, the P wave arrival in blue, and the S wave arrival in red. (There are two earthquakes in this sample.)
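
Converting those per-time-step probabilities into discrete picks is usually just thresholding plus finding local maxima. The short NumPy sketch below is a generic illustration, not any particular model’s post-processing:

```python
import numpy as np


def pick_phases(prob: np.ndarray, threshold: float = 0.5) -> list:
    # Return sample indices where a phase-probability trace peaks above the threshold.
    picks = []
    above = prob >= threshold
    i = 0
    while i < len(prob):
        if above[i]:
            j = i
            while j < len(prob) and above[j]:
                j += 1
            picks.append(i + int(np.argmax(prob[i:j])))   # highest point of this excursion
            i = j
        else:
            i += 1
    return picks


# Fake model output: P-wave probability bumps centered at samples 120 and 480.
t = np.arange(600)
p_prob = np.exp(-0.5 * ((t - 120) / 5.0) ** 2) + 0.9 * np.exp(-0.5 * ((t - 480) / 5.0) ** 2)
print(pick_phases(p_prob))   # -> [120, 480]
```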

To train an AI model, scientists take large amounts of labeled data, like what’s above, and do supervised training. I’ll describe one of the most used models: Earthquake Transformer, which was developed around 2020 by a Stanford University team led by S. Mostafa Mousavi, who later became a Harvard professor.

Like many earthquake detection models, Earthquake Transformer adapts ideas from image classification. Readers may be familiar with AlexNet, a famous image-recognition model that kicked off the deep-learning boom in 2012.

AlexNet used convolutions, a neural network architecture that’s based on the idea that pixels that are physically close together are more likely to be related. The first convolutional layer of AlexNet broke an image down into small chunks—11 pixels on a side—and classified each chunk based on the presence of simple features like edges or gradients.

The next layer took the first layer’s classifications as input and checked for higher-level concepts such as textures or simple shapes.

Each convolutional layer analyzed a larger portion of the image and operated at a higher level of abstraction. By the final layers, the network was looking at the entire image and identifying objects like “mushroom” and “container ship.”

Images are two-dimensional, so AlexNet is based on two-dimensional convolutions. By contrast, seismograph data is one-dimensional, so Earthquake Transformer uses one-dimensional convolutions over the time dimension. The first layer analyzes vibration data in 0.1-second chunks, while later layers identify patterns over progressively longer time periods.
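
Here is a hedged sketch of what a small 1D-convolutional picker might look like in PyTorch. The layer sizes are arbitrary, and the real Earthquake Transformer and PhaseNet architectures differ in depth, downsampling, and (in the former’s case) the attention block described below.

```python
import torch
from torch import nn


class TinyPicker(nn.Module):
    # Toy 1D CNN: 3 input channels (E, N, Z) -> 3 per-sample outputs (detection, P, S).

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(3, 16, kernel_size=11, padding=5),   # ~0.1 s of 100 Hz samples per kernel
            nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=11, padding=5),  # later layers see longer time spans
            nn.ReLU(),
            nn.Conv1d(32, 3, kernel_size=11, padding=5),   # per-time-step logits
        )

    def forward(self, waveform: torch.Tensor) -> torch.Tensor:
        # waveform: (batch, 3 channels, time) -> probabilities: (batch, 3, time)
        return torch.sigmoid(self.net(waveform))


model = TinyPicker()
fake_batch = torch.randn(8, 3, 6000)    # eight one-minute windows sampled at 100 Hz
print(model(fake_batch).shape)          # torch.Size([8, 3, 6000])
```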

It’s difficult to say what exact patterns the earthquake model is picking out, but we can analogize this to a hypothetical audio transcription model using one-dimensional convolutions. That model might first identify consonants, then syllables, then words, then sentences over increasing time scales.

Earthquake Transformer converts raw waveform data into a collection of high-level representations that indicate the likelihood of earthquakes and other seismologically significant events. This is followed by a series of deconvolution layers that pinpoint exactly when an earthquake—and its all-important P and S waves—occurred.

The model also uses an attention layer in the middle of the model to mix information between different parts of the time series. The attention mechanism is most famous in large language models, where it helps pass information between words. It plays a similar role in seismographic detection. Earthquake seismograms have a general structure: P waves followed by S waves followed by other types of shaking. So if a segment looks like the start of a P wave, the attention mechanism helps it check that it fits into a broader earthquake pattern.

All of the Earthquake Transformer’s components are standard designs from the neural network literature. Other successful detection models, like PhaseNet, are even simpler. PhaseNet uses only one-dimensional convolutions to pick the arrival times of earthquake waves. There are no attention layers.

Generally, there hasn’t been “much need to invent new architectures for seismology,” according to Byrnes. The techniques derived from image processing have been sufficient.

What made these generic architectures work so well then? Data. Lots of it.

Ars has previously reported on how the introduction of ImageNet, an image recognition benchmark, helped spark the deep learning boom. Large, publicly available earthquake datasets have played a similar role in seismology.

Earthquake Transformer was trained using the Stanford Earthquake Dataset (STEAD), which contains 1.2 million human-labeled segments of seismogram data from around the world. (The paper for STEAD explicitly mentions ImageNet as an inspiration). Other models, like PhaseNet, were also trained on hundreds of thousands or millions of labeled segments.

All recorded earthquakes in the Stanford Earthquake Dataset. Credit: IEEE (CC BY 4.0)

The combination of the data and the architecture just works. The current models are “comically good” at identifying and classifying earthquakes, according to Byrnes. Typically, machine-learning methods find 10 or more times the quakes that were previously identified in an area. You can see this directly in an Italian earthquake catalog:

From Machine learning and earthquake forecasting—next steps by Beroza et al. Credit: Nature Communications (CC-BY 4.0)

AI tools won’t necessarily detect more earthquakes than template matching. But AI-based techniques are much less compute- and labor-intensive, making them more accessible to the average research project and easier to apply in regions around the world.

All in all, these machine-learning models are so good that they’ve almost completely supplanted traditional methods for detecting and phase-picking earthquakes, especially for smaller magnitudes.

The holy grail of earthquake science is earthquake prediction. For instance, scientists know that a large quake will happen near Seattle but have little ability to know whether it will happen tomorrow or in a hundred years. It would be helpful if we could predict earthquakes precisely enough to allow people in affected areas to evacuate.

You might think AI tools would help predict earthquakes, but that doesn’t seem to have happened yet.

The applications are more technical and less flashy, said Cornell’s Judith Hubbard.

Better AI models have given seismologists much more comprehensive earthquake catalogs, which have unlocked “a lot of different techniques,” Bradley said.

One of the coolest applications is in understanding and imaging volcanoes. Volcanic activity produces a large number of small earthquakes, whose locations help scientists understand the structure of the magma system. In a 2022 paper, John Wilding and co-authors used a large AI-generated earthquake catalog to create this incredible image of the structure of the Hawaiian volcanic system.

Each dot represents an individual earthquake. Credit: Wilding et al., The magmatic web beneath Hawai‘i.

They provided direct evidence of a previously hypothesized magma connection between the deep Pāhala sill complex and Mauna Loa’s shallow volcanic structure. You can see this in the image with the arrow labeled as Pāhala-Mauna Loa seismicity band. The authors were also able to clarify the structure of the Pāhala sill complex into discrete sheets of magma. This level of detail could potentially facilitate better real-time monitoring of earthquakes and more accurate eruption forecasting.

Another promising area is lowering the cost of dealing with huge datasets. Distributed Acoustic Sensing (DAS) is a powerful technique that uses fiber-optic cables to measure seismic activity across the entire length of the cable. A single DAS array can produce “hundreds of gigabytes of data” a day, according to Jiaxuan Li, a professor at the University of Houston. That much data can produce extremely high-resolution datasets—enough to pick out individual footsteps.

AI tools make it possible to very accurately time earthquakes in DAS data. Before the introduction of AI techniques for phase picking in DAS data, Li and some of his collaborators attempted to use traditional techniques. While these “work roughly,” they weren’t accurate enough for their downstream analysis. Without AI, much of his work would have been “much harder,” he told me.

Li is also optimistic that AI tools will be able to help him isolate “new types of signals” in the rich DAS data in the future.

Not all AI techniques have paid off

As in many other scientific fields, seismologists face some pressure to adopt AI methods, whether or not they are relevant to their research.

“The schools want you to put the word AI in front of everything,” Byrnes said. “It’s a little out of control.”

This can lead to papers that are technically sound but practically useless. Hubbard and Bradley told me that they’ve seen a lot of papers based on AI techniques that “reveal a fundamental misunderstanding of how earthquakes work.”

They pointed out that graduate students can feel pressure to specialize in AI methods at the cost of learning less about the fundamentals of the scientific field. They fear that if this type of AI-driven research becomes entrenched, older methods will get “out-competed by a kind of meaninglessness.”

While these are real issues, and ones Understanding AI has reported on before, I don’t think they detract from the success of AI earthquake detection. In the last five years, an AI-based workflow has almost completely replaced one of the fundamental tasks in seismology for the better.

That’s pretty cool.

Kai Williams is a reporter for Understanding AI, a Substack newsletter founded by Ars Technica alum Timothy B. Lee. His work is supported by a Tarbell Fellowship. Subscribe to Understanding AI to get more from Tim and Kai.

“Like putting on glasses for the first time”—how AI improves earthquake detection Read More »

we’re-about-to-find-many-more-interstellar-interlopers—here’s-how-to-visit-one

We’re about to find many more interstellar interlopers—here’s how to visit one


“You don’t have to claim that they’re aliens to make these exciting.”

The Hubble Space Telescope captured this image of the interstellar comet 3I/ATLAS on July 21, when the comet was 277 million miles from Earth. Hubble shows that the comet has a teardrop-shaped cocoon of dust coming off its solid, icy nucleus. Credit: NASA, ESA, David Jewitt (UCLA); Image Processing: Joseph DePasquale (STScI)

A few days ago, an inscrutable interstellar interloper made its closest approach to Mars, where a fleet of international spacecraft seek to unravel the red planet’s ancient mysteries.

Several of the probes encircling Mars took a break from their usual activities and turned their cameras toward space to catch a glimpse of an object named 3I/ATLAS, a rogue comet that arrived in our Solar System from interstellar space and is now barreling toward perihelion—its closest approach to the Sun—at the end of this month.

This is the third interstellar object astronomers have detected within our Solar System, following 1I/ʻOumuamua and 2I/Borisov, discovered in 2017 and 2019, respectively. Scientists think interstellar objects routinely transit among the planets, but telescopes have only recently gained the ability to find them. For example, the telescope that discovered ʻOumuamua only came online in 2010.

Detectable but still unreachable

Astronomers first reported observations of 3I/ATLAS on July 1, just four months before the object reaches its deepest penetration into the Solar System. Unfortunately for astronomers, the particulars of its trajectory will bring it to perihelion when Earth is on the opposite side of the Sun. The nearest 3I/ATLAS will come to Earth is about 170 million miles (270 million kilometers) in December, eliminating any chance for high-resolution imaging. The viewing geometry also means the Sun’s glare will block all direct views of the comet from Earth until next month.

The James Webb Space Telescope observed interstellar comet 3I/ATLAS on August 6 with its Near-Infrared Spectrograph instrument. Credit: NASA/James Webb Space Telescope

Because of that, the closest any active spacecraft will get to 3I/ATLAS happened Friday, when it passed less than 20 million miles (30 million kilometers) from Mars. NASA’s Perseverance rover and Mars Reconnaissance Orbiter were expected to make observations of 3I/ATLAS, along with Europe’s Mars Express and ExoMars Trace Gas Orbiter missions.

The best views of the object so far have been captured by the James Webb Space Telescope and the Hubble Space Telescope, positioned much closer to Earth. Those observations helped astronomers narrow down the object’s size, but the estimates remain imprecise. Based on Hubble’s images, the icy core of 3I/ATLAS is somewhere between the size of the Empire State Building and something a little larger than Central Park.

That may be the most we’ll ever know about the dimensions of 3I/ATLAS. The spacecraft at Mars lack the exquisite imaging sensitivity of Webb and Hubble, so don’t expect spectacular views from Friday’s observations. But scientists hope to get a better handle on the cloud of gas and dust surrounding the object, giving it the appearance of a comet. Spectroscopic observations have shown the coma around 3I/ATLAS contains water vapor and an unusually strong signature of carbon dioxide extending out nearly a half-million miles.

On Tuesday, the European Space Agency released the first grainy images of 3I/ATLAS captured at Mars. The best views will come from a small telescope called HiRISE on NASA’s Mars Reconnaissance Orbiter. The images from NASA won’t be released until after the end of the ongoing federal government shutdown, according to a member of the HiRISE team.

Europe’s ExoMars Trace Gas Orbiter turned its eyes toward interstellar comet 3I/ATLAS as it passed close to Mars on Friday, October 3. The comet’s coma is visible as a fuzzy blob surrounding its nucleus, which was not resolved by the spacecraft’s camera. Credit: ESA/TGO/CaSSIS

Studies of 3I/ATLAS suggest it was probably kicked out of another star system, perhaps by an encounter with a giant planet; comets in our own Solar System sometimes get ejected into the Milky Way when they come too close to Jupiter. The object then roamed the galaxy for billions of years before arriving in the Sun’s galactic neighborhood.

The rogue comet is now gaining speed as gravity pulls it toward perihelion, when it will max out at a relative velocity of 152,000 mph (68 kilometers per second), much too fast to be bound into a closed orbit around the Sun. Instead, the comet will catapult back into the galaxy, never to be seen again.

We need to talk about aliens

Anyone who studies planetary formation would relish the opportunity to get a close-up look at an interstellar object. Sending a mission to one would undoubtedly yield a scientific payoff. There’s a good chance that many of these interlopers have been around longer than our own 4.5 billion-year-old Solar System.

One study from the University of Oxford suggests that 3I/ATLAS came from the “thick disk” of the Milky Way, which is home to a dense population of ancient stars. This origin story would mean the comet is probably more than 7 billion years old, holding clues about cosmic history that are simply inaccessible among the planets, comets, and asteroids that formed with the birth of the Sun.

This is enough reason to mount a mission to explore one of these objects, scientists said. It doesn’t need justification from unfounded theories that 3I/ATLAS might be an artifact of alien technology, as proposed by Harvard University astrophysicist Avi Loeb. The scientific consensus is that the object is of natural origin.

Loeb shared a similar theory about the first interstellar object found wandering through our Solar System. His statements have sparked questions in popular media about why the world’s space agencies don’t send a probe to actually visit one. Loeb himself proposed redirecting NASA’s Juno spacecraft in orbit around Jupiter on a mission to fly by 3I/ATLAS, and his writings prompted at least one member of Congress to write a letter to NASA to “rejuvenate” the Juno mission by breaking out of Jupiter’s orbit and taking aim at 3I/ATLAS for a close-up inspection.

The problem is that Juno simply doesn’t have enough fuel to reach the comet, and its main engine is broken. In fact, the total boost required to send Juno from Jupiter to 3I/ATLAS (roughly 5,800 mph or 2.6 kilometers per second) would surpass the fuel capacity of most interplanetary probes.

Ars asked Scott Bolton, lead scientist on the Juno mission, and he confirmed that the spacecraft lacks the oomph required for the kind of maneuvers proposed by Loeb. “We had no role in that paper,” Bolton told Ars. “He assumed propellant that we don’t really have.”

Avi Loeb, a Harvard University astrophysicist. Credit: Anibal Martel/Anadolu Agency via Getty Images

So Loeb’s exercise was moot, but his talk of aliens has garnered public attention. Loeb appeared on the conservative network Newsmax last week to discuss his theory of 3I/ATLAS alongside Rep. Tim Burchett (R-Tenn.). Predictably, conspiracy theories abounded. But as of Tuesday, the segment has 1.2 million views on YouTube. Maybe it’s a good thing that people who approve government budgets, especially those without a preexisting interest in NASA, are eager to learn more about the Universe. We will leave it to the reader to draw their own conclusions on that matter.

Loeb’s calculations also help illustrate the difficulty of pulling off a mission to an interstellar object. So far, we’ve only known about an incoming interstellar intruder a few months before it comes closest to Earth. That’s not to mention the enormous speeds at which these objects move through the Solar System. It’s just not feasible to build a spacecraft and launch it on such short notice.

Now, some scientists are working on ways to overcome these limitations.

So you’re saying there’s a chance?

One of these people is Colin Snodgrass, an astronomer and planetary scientist at the University of Edinburgh. A few years ago, he helped propose to the European Space Agency a mission concept that would have very likely been laughed out of the room a generation ago. Snodgrass and his team wanted a commitment from ESA of up to $175 million (150 million euros) to launch a mission with no idea of where it would go.

ESA officials called Snodgrass in 2019 to say the agency would fund his mission, named Comet Interceptor, for launch in the late 2020s. The goal of the mission is to perform the first detailed observations of a long-period comet. So far, spacecraft have only visited short-period comets that routinely dip into the inner part of the Solar System.

A long-period comet is an icy visitor from the farthest reaches of the Solar System that has spent little time getting blasted by the Sun’s heat and radiation, freezing its physical and chemical properties much as they were billions of years ago.

Long-period comets are typically discovered a year or two before coming near the Sun, still not enough time to develop a mission from scratch. With Comet Interceptor, ESA will launch a probe to loiter in space a million miles from Earth, wait for the right comet to come along, then fire its engines to pursue it.

Odds are good that the right comet will come from within the Solar System. “That is the point of the mission,” Snodgrass told Ars.

ESA’s Comet Interceptor will be the first mission to visit a comet coming directly from the outer reaches of the Sun’s realm, carrying material untouched since the dawn of the Solar System. Credit: European Space Agency

But if astronomers detect an interstellar object coming toward us on the right trajectory, there’s a chance Comet Interceptor could reach it.

“I think that the entire science team would agree, if we get really lucky and there’s an interstellar object that we could reach, then to hell with the normal plan, let’s go and do this,” Snodgrass said. “It’s an opportunity you couldn’t just leave sitting there.”

But, he added, it’s “very unlikely” that an interstellar object will be in the right place at the right time. “Although everyone’s always very excited about the possibility, and we’re excited about the possibility, we kind of try and keep the expectations to a realistic level.”

For example, if Comet Interceptor were in space today, there’s no way it could reach 3I/ATLAS. “It’s an unfortunate one,” Snodgrass said. “Its closest point to the Sun, it reaches that on the other side of the Sun from where the Earth is. Just bad timing.” If an interceptor were parked somewhere else in the Solar System, it might be able to get itself in position for an encounter with 3I/ATLAS. “There’s only so much fuel aboard,” Snodgrass said. “There’s only so fast we can go.”

It’s even harder to send a spacecraft to encounter an interstellar object than it is to visit one of the Solar System’s homegrown long-period comets. The calculation of whether Comet Interceptor could reach one of these galactic visitors boils down to where it’s heading and when astronomers discover it.

Snodgrass is part of a team using big telescopes to observe 3I/ATLAS from a distance. “As it’s getting closer to the Sun, it is getting brighter,” he said in an interview.

“You don’t have to claim that they’re aliens to make these exciting,” Snodgrass said. “They’re interesting because they are a bit of another solar system that you can actually feasibly get an up-close view of, even the sort of telescopic views we’re getting now.”

Colin Snodgrass, a professor at the University of Edinburgh, leads the Comet Interceptor science team. Credit: University of Edinburgh

Comets and asteroids are the linchpins for understanding the formation of the Solar System. These modest worlds are the leftover building blocks from the debris that coalesced into the planets. Today, direct observations have only allowed scientists to study the history of one planetary system. An interstellar comet would grow the sample size to two.

Still, Snodgrass said his team prefers to keep their energy focused on reaching a comet originating from the frontier of our own Solar System. “We’re not going to let a very lovely Solar System comet go by, waiting to see ‘what if there’s an interstellar thing?'” he said.

Snodgrass sees Comet Interceptor as a proof of concept for scientists to propose a future mission specially designed to travel to an interstellar object. “You need to figure out how do you build the souped-up version that could really get to an interstellar object? I think that’s five or 10 years away, but [it’s] entirely realistic.”

An American answer

Scientists in the United States are working on just such a proposal. A team from the Southwest Research Institute completed a concept study showing how a mission could fly by one of these interstellar visitors. What’s more, the US scientists say their proposed mission could have actually reached 3I/ATLAS had it already been in space.

The American concept is similar to Europe’s Comet Interceptor in that it will park a spacecraft somewhere in deep space and wait for the right target to come along. The study was led by Alan Stern, the chief scientist on NASA’s New Horizons mission that flew by Pluto a decade ago. “These new kinds of objects offer humankind the first feasible opportunity to closely explore bodies formed in other star systems,” he said.

An animation of the trajectory of 3I/ATLAS through the inner Solar System. Credit: NASA/JPL

It’s impossible with current technology to send a spacecraft to match orbits and rendezvous with a high-speed interstellar comet. “We don’t have to catch it,” Stern recently told Ars. “We just have to cross its orbit. So it does carry a fair amount of fuel in order to get out of Earth’s orbit and onto the comet’s path to cross that path.”

Stern said his team developed a cost estimate for such a mission, and while he didn’t disclose the exact number, he said it would fall under NASA’s cost cap for a Discovery-class mission. The Discovery program is a line of planetary science missions that NASA selects through periodic competitions within the science community. The cost cap for NASA’s next Discovery competition is expected to be $800 million, not including the launch vehicle.

A mission to encounter an interstellar comet requires no new technologies, Stern said. Hopes for such a mission are bolstered by the activation of the US-funded Vera Rubin Observatory, a state-of-the-art facility high in the mountains of Chile set to begin deep surveys of the entire southern sky later this year. Stern predicts Rubin will discover “one or two” interstellar objects per year. The new observatory should be able to detect the faint light from incoming interstellar bodies sooner, providing missions with more advance warning.

“If we put a spacecraft like this in space for a few years, while it’s waiting, there should be five or 10 to choose from,” he said.

Alan Stern speaks onstage during Day 1 of TechCrunch Disrupt SF 2018 in San Francisco. Credit: Photo by Kimberly White/Getty Images for TechCrunch

Winning NASA funding for a mission like Stern’s concept will not be easy. It must compete with dozens of other proposals, and NASA’s next Discovery competition is probably at least two or three years away. The timing of the competition is more uncertain than usual due to swirling questions about NASA’s budget after the Trump administration announced it wants to cut the agency’s science funding in half.

Comet Interceptor, on the other hand, is already funded in Europe. ESA has become a pioneer in comet exploration. The Giotto probe flew by Halley’s Comet in 1986, becoming the first spacecraft to make close-up observations of a comet. ESA’s Rosetta mission became the first spacecraft to orbit a comet in 2014, and later that year, it deployed a German-built lander to return the first data from the surface of a comet. Both of those missions explored short-period comets.

“Each time that ESA has done a comet mission, it’s done something very ambitious and very new,” Snodgrass said. “The Giotto mission was the first time ESA really tried to do anything interplanetary… And then, Rosetta, putting this thing in orbit and landing on a comet was a crazy difficult thing to attempt to do.”

“They really do push the envelope a bit, which is good because ESA can be quite risk averse, I think it’s fair to say, with what they do with missions,” he said. “But the comet missions, they are things where they’ve really gone for that next step, and Comet Interceptor is the same. The whole idea of trying to design a space mission before you know where you’re going is a slightly crazy way of doing things. But it’s the only way to do this mission. And it’s great that we’re trying it.”

Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.

We’re about to find many more interstellar interlopers—here’s how to visit one Read More »

elon-musk-tries-to-make-apple-and-mobile-carriers-regret-choosing-starlink-rivals

Elon Musk tries to make Apple and mobile carriers regret choosing Starlink rivals

SpaceX holds spectrum licenses for the Starlink fixed Internet service for homes and businesses. Adding the EchoStar spectrum will make its holdings suitable for mobile service.

“SpaceX currently holds no terrestrial spectrum authorizations and no license to use spectrum allocated on a primary basis to MSS,” the company’s FCC filing said. “Its only authorization to provide any form of mobile service is an authorization for secondary SCS [Supplemental Coverage from Space] operations in spectrum licensed to T-Mobile.”

Starlink unlikely to dethrone major carriers

SpaceX’s spectrum purchase doesn’t make it likely that Starlink will become a fourth major carrier. Grand claims of that sort are “complete nonsense,” wrote industry analyst Dean Bubley. “Apart from anything else, there’s one very obvious physical obstacle: walls and roofs,” he wrote. “Space-based wireless, even if it’s at frequencies supported in normal smartphones, won’t work properly indoors. And uplink from devices to satellites will be even worse.”

When you’re indoors, “there’s more attenuation of the signal,” resulting in lower data rates, Farrar said. “You might not even get megabits per second indoors, unless you are going to go onto a home Starlink broadband network,” he said. “You might only be able to get hundreds of kilobits per second in an obstructed area.”

The Mach33 analyst firm is more bullish than others regarding Starlink’s potential cellular capabilities. “With AWS-4/H-block and V3 [satellites], Starlink DTC is no longer niche, it’s a path to genuine MNO competition. Watch for retail mobile bundles, handset support, and urban hardware as the signals of that pivot,” the firm said.

Mach33’s optimism is based in part on the expectation that SpaceX will make more deals. “DTC isn’t just a coverage filler, it’s a springboard. It enables alternative growth routes; M&A, spectrum deals, subleasing capacity in denser markets, or technical solutions like mini-towers that extend Starlink into neighborhoods,” the group’s analysis said.

The amount of spectrum SpaceX is buying from EchoStar is just a fraction of what the national carriers control. There is “about 1.1 GHz of licensed spectrum currently allocated to mobile operators,” wireless lobby group CTIA said in a January 2025 report. The group also says the cellular industry has over 432,000 active cell sites around the US.

What Starlink can offer cellular users “is nothing compared to the capacity of today’s 5G networks,” but it would be useful “in less populated areas or where you cannot get coverage,” Rysavy said.

Starlink has about 8,500 satellites in orbit. Rysavy estimated in a July 2025 report that about 280 of them are over the United States at any given time. These satellites are mostly providing fixed Internet service in which an antenna is placed outside a building so that people can use Wi-Fi indoors.

SpaceX’s FCC filing said the EchoStar spectrum’s mix of terrestrial and satellite frequencies will be ideal for Starlink.

“By acquiring EchoStar’s market-access authorization for 2 GHz MSS as well as its terrestrial AWS-4 licenses, SpaceX will be able to deploy a hybrid satellite and terrestrial network, just as the Commission envisioned EchoStar would do,” SpaceX said. “Consistent with the Commission’s finding that potential interference between MSS and terrestrial mobile service can best be managed by enabling a single licensee to control both networks, assignment of the AWS-4 spectrum is critical to enable SpaceX to deploy robust MSS service in this band.”

Elon Musk tries to make Apple and mobile carriers regret choosing Starlink rivals Read More »

apple-iphone-17-pro-review:-come-for-the-camera,-stay-for-the-battery

Apple iPhone 17 Pro review: Come for the camera, stay for the battery


a weird-looking phone for taking pretty pictures

If your iPhone is your main or only camera, the iPhone 17 Pro is for you.

The iPhone 17 Pro’s excellent camera is the best reason to buy it instead of the regular iPhone 17. Credit: Andrew Cunningham

Apple’s “Pro” iPhones usually look and feel a lot like the regular ones, just with some added features stacked on top. They’ve historically had better screens and more flexible cameras, and there has always been a Max option for people who really wanted to blur the lines between a big phone and a small tablet (Apple’s commitment to the cheaper “iPhone Plus” idea has been less steadfast). But the qualitative experience of holding and using one wasn’t all that different compared to the basic aluminum iPhone.

This year’s iPhone 17 Pro looks and feels like more of a departure from the basic iPhone, thanks to a new design that prioritizes function over form. It’s as though Apple anticipated the main complaints about the iPhone Air—why would I want a phone with worse battery and fewer cameras, why don’t they just make the phone thicker so they can fit in more things—and made a version of the iPhone that they could point to and say, “We already make that phone—it’s that one over there.”

Because the regular iPhone 17 is so good, and because it uses the same 6.3-inch OLED ProMotion screen, I think the iPhone 17 Pro is playing to a narrower audience than usual this year. But Apple’s changes and additions are also tailor-made to serve that audience. In other words, fewer people even need to consider the iPhone Pro this time around, but there’s a lot to like here for actual “pros” and people who demand a lot from their phones.

Design

The iPhone 17 Pro drops the titanium frame of the iPhone 15 Pro and 16 Pro in favor of a return to aluminum. But it’s no longer the aluminum-framed glass-sandwich design that the regular iPhone 17 still uses; it’s a reformulated “aluminum unibody” design that also protects a substantial portion of the phone’s back. It’s the most metal we’ve seen on the back of an iPhone since 2016’s iPhone 7.

But remember that part of the reason the 2017 iPhone 8 and iPhone X switched to the glass sandwich design was wireless charging. The aluminum iPhones always featured some kind of cutouts or gaps in the aluminum to allow Wi-Fi, Bluetooth, and cellular signals through. But the addition of wireless charging to the iPhone meant that a substantial portion of the phone’s back now needed to be permeable by wireless signals, and the solution to that problem was simply to embrace it with a full sheet of glass.

The iPhone 17 Pro returns to the cutout approach, and while it might be functional, it leaves me pretty cold, aesthetically. Small stripes on the sides of the phone and running all the way around the “camera plateau” provide gaps between the metal parts so that you can’t mess with your cellular reception by holding the phone wrong; on US versions of the phone with support for mmWave 5G, there’s another long ovular cutout on the top of the phone to allow those signals to pass through.

But the largest and most obvious is the sheet of glass on the back that Apple needed to add to make wireless charging work. The aluminum, the cell signal cutouts, and this sheet of glass are all different shades of the phone’s base color (it’s least noticeable on the Deep Blue phone and most noticeable on the orange one).

The result is something that looks sort of unfinished and prototype-y. There are definitely people who will like or even prefer this aesthetic, which makes it clearer that this piece of technology is a piece of technology rather than trying to hide it—the enduring popularity of clear plastic electronics is a testament to this. But it does feel like a collection of design decisions that Apple was forced into by physics rather than choices it wanted to make.

That also extends to the camera plateau area, a reimagining of the old iPhone camera bump that extends all the way across the top of the phone. It’s a bit less slick-looking than the one on the iPhone Air because of the multiple lenses. And because the camera bumps are still additional protrusions on top of the plateau, the phone wobbles when it’s resting flat on a table instead of resting on the plateau in a way that stabilizes the phone.

Finally, there’s the weight of the phone, which isn’t breaking records but is a step back from a substantial weight reduction that Apple was using as a first-sentence-of-the-press-release selling point just two years ago. The iPhone 17 Pro weighs the same amount as the iPhone 14 Pro, and it has a noticeable heft to it that the iPhone Air (say) does not have. You’ll definitely notice if (like me) your current phone is an iPhone 15 Pro.

Apple sent me one of its $59 “TechWoven” cases with the iPhone 17 Pro, and it solved a lot of what I didn’t like about the design—the inconsistent materials and colors everywhere, and the bump-on-a-bump camera. There’s still a bump on the top, but at least the case’s camera opening evens it out so that your phone isn’t tilted by the plateau or wobbling because of the bump.

I liked Apple’s TechWoven case for the iPhone Pro, partly because it papered over some of the things I don’t love about the design. Credit: Andrew Cunningham

The original FineWoven cases were (rightly) panned for how quickly and easily they scratched, but the TechWoven case might be my favorite Apple-designed phone case of the ones I’ve used. It doesn’t have the weird soft lint-magnet feel of some of the silicone cases, FineWoven’s worst problems seem solved, and the texture on the sides of the case provides a reassuring grippiness. My main issue is that the opening for the USB-C port on the bottom is relatively narrow. Apple’s cables will fit fine, but I had a few older or thicker USB-C connectors that didn’t.

This isn’t a case review, but I bring it up mainly to say that I stand by my initial assessment of the Pro’s function-over-form design: I am happy I put it in a case, and I think you will be, too, whichever case you choose (when buying for myself or family members, I have defaulted to Smartish cases for years, but your mileage may vary).

On “Scratchgate”

Early reports from Apple’s retail stores indicated that the iPhone 17 Pro’s design was more susceptible to scratches than past iPhones and that some seemed to be showing marks from as simple and routine an activity as connecting and disconnecting a MagSafe charging pad.

Apple says the marks left by its in-store MagSafe chargers weren’t permanent scratches and could be cleaned off. But independent testing from the likes of iFixit has found that the anodization process Apple uses to add color to the iPhone’s aluminum frame is more susceptible to scratching and flaking on non-flat surfaces like the edges of the camera bump.

Like “antennagate” and “bendgate” before it, many factors will determine whether “scratchgate” is actually something you’ll notice. Independent testing shows there is something to the complaints, but it doesn’t show how often this kind of damage will appear in actual day-to-day use over the course of months or years. Do keep it in mind when deciding which iPhone and accessories you want—it’s just one more reason to keep the iPhone 17 Pro in a case, if you ask me—but I wouldn’t say it should keep you from buying this phone if you like everything else about it.

Camera

I have front-loaded my complaints about the iPhone 17 Pro to get them out of the way, but the fun thing about an iPhone where form follows function is that you get a lot of function.

When I made the jump from the regular iPhone to the Pro (I went from an 11 to a 13 Pro and then to a 15 Pro), I did it mainly for the telephoto lens in the camera. For both kid photos and casual product photography, it was game-changing to be able to access the functional equivalent of optical zoom on my phone.

The iPhone 17 Pro’s telephoto lens in 4x mode. Andrew Cunningham

The iPhone 16 Pro changed the telephoto lens’ zoom level from 3x to 5x, which was useful if you wanted maximum zoom but which left a gap between it and the Fusion Camera-enabled 2x mode. The 17 Pro switches to a 4x zoom by default, closing that gap, and it further extends its zoom range by switching to a 48 MP sensor.

Like the main and ultrawide cameras, which had already switched to 48 MP sensors in previous models, the telephoto camera saves 24 MP images when shooting in 4x mode. But it can also crop a 12 MP image out of the center of that sensor to provide a native-resolution 12 MP image at an 8x zoom level, albeit without the image quality improvements from the “pixel binning” process that 4x images get.
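If the jump from a 4x lens to an 8x zoom via a simple crop seems odd, the arithmetic is straightforward. Here’s a minimal sketch using the megapixel and zoom figures described above; the derivation itself is just geometry, not anything Apple publishes.

```python
# Why cropping the center of a 48 MP sensor doubles the effective zoom.
# The sensor and zoom figures are the ones quoted above; the rest is geometry.

full_sensor_mp = 48   # telephoto sensor resolution
cropped_mp = 12       # native-resolution center crop
base_zoom = 4         # optical zoom of the telephoto lens

# Keeping 1/4 of the pixels keeps 1/2 of the frame width and height,
# so the field of view narrows by a factor of two in each dimension.
linear_crop_factor = (full_sensor_mp / cropped_mp) ** 0.5
effective_zoom = base_zoom * linear_crop_factor

print(f"Crop factor: {linear_crop_factor:.0f}x, effective zoom: {effective_zoom:.0f}x")
# -> Crop factor: 2x, effective zoom: 8x
```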

You can debate how accurate it is to market this as “optical-quality zoom” as Apple does, but it’s hard to argue with the results. The level of detail you can capture from a distance in 8x mode is consistently impressive, and Apple’s hardware and software image stabilization help keep these details reasonably free of the shake and blur you might see if you were shooting at this zoom level with an actual hardware lens.

It’s my favorite feature of the iPhone 17 Pro, and it’s the thing about the phone that comes closest to being worth the $300 premium over the regular iPhone 17.

The iPhone 17 Pro, main lens, 1x mode. Andrew Cunningham

Apple continues to gate several other camera-related features to the Pro iPhones. All phones can shoot RAW photos in third-party camera apps that support it, but only the Pro iPhones can shoot Apple’s ProRAW format in the first-party camera app (ProRAW performs Apple’s typical image processing for RAW images but retains all the extra information needed for more flexible post-processing).

I don’t spend as much time shooting video on my phone as I do photos, but for the content creator and influencer set (and the “we used phones and also professional lighting and sound equipment to shoot this movie” set) Apple still reserves several video features for the Pro iPhones. That list includes 120 fps 4K Dolby Vision video recording and a four-mic array (both also supported by the iPhone 16 Pro), plus ProRes RAW recording and Genlock support for synchronizing video from multiple sources (both new to the 17 Pro).

The iPhone Pro also remains the only iPhone to support 10 Gbps USB transfer speeds over the USB-C port, making it faster to transfer large video files from the phone to an external drive or a PC or Mac for additional processing and editing. It’s likely that Apple built this capability into the A19 Pro’s USB controller, but both the iPhone Air and the regular iPhone 17 are restricted to the same old 25-year-old 480 Mbps USB 2.0 data transfer speeds.
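As a rough illustration of what that interface gap means in practice, here’s a back-of-the-envelope comparison. The 50GB clip is a hypothetical file size rather than an Apple figure, and real-world throughput will fall short of these theoretical line rates.

```python
# Best-case transfer times for a large video file over the two USB speeds.
# The file size is a hypothetical example; protocol overhead and drive
# speed limits are ignored.

file_size_gb = 50        # hypothetical ProRes video clip
usb2_gbps = 0.48         # USB 2.0 line rate (480 Mbps)
usb3_gbps = 10           # USB 3 line rate on the Pro models

def transfer_minutes(size_gb, link_gbps):
    """Time to move size_gb gigabytes over a link of link_gbps gigabits/second."""
    return (size_gb * 8) / link_gbps / 60

print(f"USB 2.0: ~{transfer_minutes(file_size_gb, usb2_gbps):.0f} minutes")
print(f"USB 3:   ~{transfer_minutes(file_size_gb, usb3_gbps) * 60:.0f} seconds")
# -> roughly 14 minutes over USB 2.0 vs. about 40 seconds over USB 3
```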

The iPhone 17 Pro gets the same front camera treatment as the iPhone 17 and the Air: a new square “Center Stage” sensor that crops a 24 MP square image into an 18 MP image, allowing users to capture approximately the same aspect ratios and fields-of-view with the front camera regardless of whether they’re holding the phone in portrait or landscape mode. It’s definitely an image-quality improvement, but it’s the same as what you get with the other new iPhones.

Specs, speeds, and battery

You still need to buy a Pro phone to get a USB-C port with 10 Gbps USB 3 transfer speeds instead of 480 Mbps USB 2.0 speeds. Credit: Andrew Cunningham

The iPhone 17 Pro uses, by a slim margin, the fastest and most capable version of the A19 Pro chip, partly because it has all of the A19 Pro’s features fully enabled and partly because its thermal management is better than the iPhone Air’s.

The A19 Pro in the iPhone 17 Pro uses two high-performance CPU cores and four smaller high-efficiency CPU cores, plus a fully enabled six-core GPU. Like the iPhone Air, the iPhone Pro also includes 12GB of RAM, up from 8GB in the iPhone 16 Pro and the regular iPhone 17. Apple has added a vapor chamber to the iPhone 17 Pro to help keep it cool rather than relying on metal alone to conduct heat away from the chips: a tiny amount of water inside a small, sealed, copper-lined pocket continually evaporates and condenses, spreading the heat evenly over a much larger area than simple conduction could. Heat spread over a larger area can then be dissipated more quickly.

All phones were tested with Adaptive Power turned off.

We saw in our iPhone 17 review how that phone’s superior thermals helped it outrun the iPhone Air’s version of the A19 Pro in many of our graphics tests; the iPhone Pro’s A19 Pro beats both by a decent margin, thanks to both thermals and the extra hardware.

The performance line graph that 3DMark generates when you run its benchmarks actually gives us a pretty clear look at the difference between how the iPhones act. The graphs for the iPhone 15 Pro, the iPhone 17, and the iPhone 17 Pro all look pretty similar, suggesting that they’re cooled well enough to let the benchmark run for a couple of minutes without significant throttling. The iPhone Air follows a similar performance curve for the first half of the test or so but then drops noticeably lower for the second half—the ups and downs of the line actually look pretty similar to the other phones, but the performance is just a bit lower because the A19 Pro in the iPhone Air is already slowing down to keep itself cool.

The CPU performance of the iPhone 17 Pro is also marginally better than this year’s other phones, but not by enough that it will be user-noticeable.

As for battery, Apple’s own product pages say the 17 Pro lasts about 10 percent longer than the regular iPhone 17 and between 22 and 36 percent longer than the iPhone Air, depending on what you’re doing.

I found the iPhone Air’s battery life to be tolerable with a little bit of babying and well-timed use of the Low Power Mode feature, and the iPhone 17’s battery was good enough that I didn’t worry about making it through an 18-hour day. But the iPhone 17 Pro’s battery really is a noticeable step up.

One day, I forgot to plug it in overnight and awoke to a phone that still had a 30 percent charge, enough that I could make it through the morning school drop-off routine and plug it in when I got back home. Not only did I not have to think about the iPhone 17 Pro’s battery, but it’s good enough that even a battery with 85-ish percent capacity (where most of my iPhone batteries end up after two years of regular use) should still feel pretty comfortable. After the telephoto camera lens, it’s definitely the second-best thing about the iPhone 17 Pro, and the Pro Max should last for even longer.

Pros only

Apple’s iPhone 17 Pro. Credit: Andrew Cunningham

I’m taken with a lot of things about the iPhone 17 Pro, but the conclusion of our iPhone 17 review still holds: If you’re not tempted by the lightness of the iPhone Air, then the iPhone 17 is the one most people should get.

Even more than most Pro iPhones, the iPhone 17 Pro and Pro Max will make the most sense for people who actually use their phones professionally, whether that’s for product or event photography, content creation, or some other camera-centric field where extra flexibility and added shooting modes can make a real difference. The same goes for people who want a bigger screen, since there’s no iPhone 17 Plus.

Sure, the 17 Pro also performs a little better than the regular 17, and the battery lasts longer. But the screen was always the most immediately noticeable upgrade for regular people, and the exact same display panel is now available in a phone that costs $300 less.

The benefit of the iPhone Pro becoming a bit more niche is that it’s easier to describe who each of these iPhones is for. The Air is the most pleasant to hold and use, and it’s the one you’ll probably buy if you want people to ask you, “Oh, is that one of the new iPhones?” The Pro is for people whose phones are their most important camera (or for people who want the biggest phone they can get). And the iPhone 17 is for people who just want a good phone but don’t want to think about it all that much.

The good

  • Excellent performance and great battery life
  • It has the most flexible camera in any iPhone, and the telephoto lens in particular is a noticeable step up from a 2-year-old iPhone 15 Pro
  • 12GB of RAM provides extra future-proofing compared to the standard iPhone
  • Not counting the old iPhone 16, it’s Apple’s only iPhone to be available in two screen sizes
  • Extra photography and video features for people who use those features in their everyday lives or even professionally

The bad

  • Clunky, unfinished-looking design
  • More limited color options compared to the regular iPhone
  • Expensive
  • Landscape layouts for apps only work on the Max model

The ugly

  • Increased weight compared to previous models, which actually used their lighter weight as a selling point

Photo of Andrew Cunningham

Andrew is a Senior Technology Reporter at Ars Technica, with a focus on consumer tech including computer hardware and in-depth reviews of operating systems like Windows and macOS. Andrew lives in Philadelphia and co-hosts a weekly book podcast called Overdue.


How America fell behind China in the lunar space race—and how it can catch back up


Thanks to some recent reporting, we’ve found a potential solution to the Artemis blues.

NASA Administrator Jim Bridenstine says that competition is good for the Artemis Moon program. Credit: NASA

For the last month, NASA’s interim administrator, Sean Duffy, has been giving interviews and speeches around the world, offering a singular message: “We are going to beat the Chinese to the Moon.”

This is certainly what the president who appointed Duffy to the NASA post wants to hear. Unfortunately, there is a very good chance that Duffy’s sentiment is false. Privately, many people within the space industry, and even at NASA, acknowledge that the US space agency appears to be holding a losing hand. Recently, some influential voices, such as former NASA Administrator Jim Bridenstine, have spoken out.

“Unless something changes, it is highly unlikely the United States will beat China’s projected timeline to the Moon’s surface,” Bridenstine said in early September.

As the debate about NASA potentially losing the “second” space race to China heats up in Washington, DC, everyone is pointing fingers. But no one is really offering answers for how to beat China’s ambitions to land taikonauts on the Moon as early as the year 2029. So I will. The purpose of this article is to articulate how NASA ended up falling behind China, and more importantly, how the Western world could realistically retake the lead.

But first, space policymakers must learn from their mistakes.

Begin at the beginning

Thousands of words could be written about the space policy created in the United States over the last two decades and all of the missteps. However, this article will only hit the highlights (lowlights). And the story begins in 2003, when two watershed events occurred.

The first of these was the loss of space shuttle Columbia in February, the second fatal shuttle accident. It signaled that the shuttle era was nearing its end, and it began a period of soul-searching at NASA and in Washington, DC, about what the space agency should do next.

“There’s a crucial year after the Columbia accident,” said eminent NASA historian John Logsdon. “President George W. Bush said we should go back to the Moon. And the result of the assessment after Columbia is NASA should get back to doing great things.” For NASA, this meant creating a new deep space exploration program for astronauts, be it the Moon, Mars, or both.

The other key milestone in 2003 came in October, when Yang Liwei flew into space and China became the third country capable of human spaceflight. After his 21-hour spaceflight, Chinese leaders began to more deeply appreciate the soft power that came with spaceflight and started to commit more resources to related programs. Long-term, the Asian nation sought to catch up to the United States in terms of spaceflight capabilities and eventually surpass the superpower.

It was not much of a competition then. China would not take its first tentative steps into deep space for another four years, with the Chang’e 1 lunar orbiter. NASA had already walked on the Moon and sent spacecraft across the Solar System and even beyond.

So how did the United States squander such a massive lead?

Mistakes were made

SpaceX and its complex Starship lander are getting the lion’s share of the blame today for delays to NASA’s Artemis Program. But the company and its lunar lander version of Starship are just the final steps on a long, winding path that got the United States where it is today.

After Columbia, the Bush White House, with its NASA Administrator Mike Griffin, looked at a variety of options (see, for example, the Exploration Systems Architecture Study in 2005). But Griffin had a clear plan in his mind that he dubbed “Apollo on Steroids,” and he sought to develop a large rocket (Ares V), spacecraft (later to be named Orion), and a lunar lander to accomplish a lunar landing by 2020. Collectively, this became known as the Constellation Program.

It was a mess. Congress did not provide NASA the funding it needed, and the rocket and spacecraft programs quickly ran behind schedule. At one point, to pay for surging Constellation costs, NASA absurdly mulled canceling the just-completed International Space Station. By the end of the first decade of the 2000s, two things were clear: NASA was going nowhere fast, and the program’s only achievement was to enrich the legacy space contractors.

By early 2010, after spending a year assessing the state of play, the Obama administration sought to cancel Constellation. It ran into serious congressional pushback, powered by lobbying from Boeing, Lockheed Martin, Northrop Grumman, and other key legacy contractors.

The Space Launch System was created as part of a political compromise between Sen. Bill Nelson (D-Fla.) and senators from Alabama and Texas. Credit: Chip Somodevilla/Getty Images

The Obama White House wanted to cancel both the rocket and the spacecraft and hold a competition for the private sector to develop a heavy lift vehicle. Their thinking: Only with lower-cost access to space could the nation afford to have a sustainable deep space exploration plan. In retrospect, it was the smart idea, but Congress was not having it. In 2011, Congress saved Orion and ordered a slightly modified rocket—it would still be based on space shuttle architecture to protect key contractors—that became the Space Launch System.

Then the Obama administration, with its NASA leader Charles Bolden, cast about for something to do with this hardware. They started talking about a “Journey to Mars.” But it was all nonsense. There was never any there there. Essentially, NASA lost a decade, spending billions of dollars a year developing “exploration” systems for humans and talking about fanciful missions to the red planet.

There were critics of this approach, myself included. In 2014, I authored a seven-part series at the Houston Chronicle called Adrift, the title referring to the direction of NASA’s deep space ambitions. The fundamental problem is that NASA, at the direction of Congress, was spending all of its exploration funds developing Orion, the SLS rocket, and ground systems for some future mission. This made the big contractors happy, but their cost-plus contracts gobbled up so much funding that NASA had no money to spend on payloads or things to actually fly on this hardware.

This is why doubters called the SLS the “rocket to nowhere.” They were, sadly, correct.

The Moon, finally

Fairly early on in the first Trump administration, the new leader of NASA, Jim Bridenstine, managed to ditch the Journey to Mars and establish a lunar program. However, any efforts to consider alternatives to the SLS rocket were quickly rebuffed by the US Senate.

During his tenure, Bridenstine established the Artemis Program to return humans to the Moon. But Congress was slow to open its purse for elements of the program that would not clearly benefit a traditional contractor or NASA field center. Consequently, the space agency did not select a lunar lander until April 2021, after Bridenstine had left office. And NASA did not begin funding work on this until late 2021 due to a protest by Blue Origin. The space agency did not support a lunar spacesuit program for another year.

Much has been made about the selection of SpaceX as the sole provider of a lunar lander. Was it shady? Was the decision rushed before Bill Nelson was confirmed as NASA administrator? In truth, SpaceX was the only company that bid a value that NASA could afford with its paltry budget for a lunar lander (again, Congress prioritized SLS funding), and which had the capability the agency required.

To be clear, for a decade, NASA spent in excess of $3 billion a year on the development of the SLS rocket and its ground systems. That’s every year for a rocket that used main engines from the space shuttle, a similar version of its solid rocket boosters, and a core stage the same diameter as the shuttle’s external tank. Thirty billion bucks for a rocket highly derivative of a vehicle NASA flew for three decades. SpaceX was awarded less than a single year of this funding, $2.9 billion, for the entire development of a Human Landing System version of Starship, plus two missions.

So yes, after 20 years, Orion appears to be ready to carry NASA astronauts out to the Moon. After 15 years, the shuttle-derived rocket appears to work. And after four years (and less than a tenth of the funding), Starship is not ready to land humans on the Moon.

When will Starship be ready?

Probably not any time soon.

For SpaceX and its founder, Elon Musk, the Artemis Program is a sidequest to the company’s real mission of sending humans to Mars. It simply is not a priority (and frankly, the limited funding from NASA does not compel prioritization). Due to its incredible ambition, the Starship program has also understandably hit some technical snags.

Unfortunately for NASA and the country, Starship still has a long way to go to land humans on the Moon. It must begin flying frequently (this could happen next year, finally). It must demonstrate the capability to transfer and store large amounts of cryogenic propellant in space. It must land on the Moon, a real challenge for such a tall vehicle, necessitating a flat surface that is difficult to find near the poles. And then it must demonstrate the ability to launch from the Moon, which would be unprecedented for cryogenic propellants.

Perhaps the biggest hurdle is the complexity of the mission. To fully fuel a Starship in low-Earth orbit to land on the Moon and take off would require multiple Starship “tanker” launches from Earth. No one can quite say how many because SpaceX is still working to increase the payload capacity of Starship, and no one has real-world data on transfer efficiency and propellant boiloff. But the number is probably at least a dozen missions. One senior source recently suggested to Ars that it may be as many as 20 to 40 launches.
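To see why the estimates vary so widely, here’s an illustrative back-of-the-envelope sketch. None of these numbers come from SpaceX or NASA; the propellant load, per-flight delivery, and boiloff figures are placeholder assumptions chosen only to show how sensitive the launch count is to them.

```python
import math

# Illustrative-only estimate of how many tanker flights a lunar Starship
# might need. All inputs are placeholder assumptions, not SpaceX or NASA figures.

def tanker_flights(propellant_needed_t, delivered_per_flight_t, boiloff_fraction):
    """Launches needed to accumulate a target amount of usable propellant in orbit."""
    usable_per_flight = delivered_per_flight_t * (1 - boiloff_fraction)
    return math.ceil(propellant_needed_t / usable_per_flight)

PROPELLANT_TARGET_T = 1_000  # assumed propellant load for the lander, in metric tons

for delivered in (50, 100, 150):      # assumed tons delivered per tanker flight
    for boiloff in (0.1, 0.2):        # assumed fraction lost before it can be used
        n = tanker_flights(PROPELLANT_TARGET_T, delivered, boiloff)
        print(f"{delivered:>3} t/flight, {boiloff:.0%} boiloff -> {n} launches")
```

Under those placeholder assumptions, the answer swings from eight launches to 25, which is roughly the spread of estimates circulating in the industry.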

The bottom line: It’s a lot. SpaceX is far and away the highest-performing space company in the Solar System. But putting all of the pieces together for a lunar landing will require time. Privately, SpaceX officials are telling NASA it can meet a 2028 timeline for Starship readiness for Artemis astronauts.

But that seems very optimistic. Very. It’s not something I would feel comfortable betting on, especially if China plans to land on the Moon “before” 2030, and the country continues to make credible progress toward this date.

What are the alternatives?

Duffy’s continued public insistence that he will not let China beat the United States back to the Moon rings hollow. The shrewd people in the industry I’ve spoken with say Duffy is an intelligent person and is starting to realize that betting the entire farm on SpaceX at this point would be a mistake. It would be nice to have a plan B.

But please, stop gaslighting us. Stop blustering about how we’re going to beat China while losing a quarter of NASA’s workforce and watching your key contractors struggle with growing pains. Let’s have an honest discussion about the challenges and how we’ll solve them.

What few people have done is offer solutions to Duffy’s conundrum. Fortunately, we’re here to help. As I have conducted interviews in recent weeks, I have always closed by asking this question: “You’re named NASA administrator tomorrow. You have one job: get NASA astronauts safely back to the Moon before China. What do you do?”

I’ve received a number of responses, which I’ll boil down into the following buckets. None of these strike me as particularly practical solutions, which underscores the desperation of NASA’s predicament. However, recent reporting has uncovered one solution that probably would work. I’ll address that last. First, the other ideas:

  • Stubby Starship: Multiple people have suggested this option. Tim Dodd has even spoken about it publicly. Two of the biggest issues with Starship are the need for many refuelings and its height, making it difficult to land on uneven terrain. NASA does not need Starship’s incredible capability to land 100–200 metric tons on the lunar surface. It needs fewer than 10 tons for initial human missions. So shorten Starship, reduce its capability, and get it down to a handful of refuelings. It’s not clear how feasible this would be beyond armchair engineering. But the larger problem is that Musk wants Starship to get taller, not shorter, so SpaceX would probably not be willing to do this.
  • Surge CLPS funding: Since 2019, NASA has been awarding relatively small amounts of funding to private companies to land a few hundred kilograms of cargo on the Moon. NASA could dramatically increase funding to this program, say up to $10 billion, and offer prizes for the first and second companies to land two humans on the Moon. This would open the competition to other companies beyond SpaceX and Blue Origin, such as Firefly, Intuitive Machines, and Astrobotic. The problem is that time is running short, and scaling up from 100 kilograms to 10 metric tons is an extraordinary challenge.
  • Build the Lunar Module: NASA already landed humans on the Moon in the 1960s with a Lunar Module built by Grumman. Why not just build something similar again? In fact, some traditional contractors have been telling NASA and Trump officials this is the best option, that such a solution, with enough funding and cost-plus guarantees, could be built in two or three years. The problem with this is that, sorry, the traditional space industry just isn’t up to the task. It took more than a decade to build a relatively simple rocket based on the space shuttle. The idea that a traditional contractor will complete a Lunar Module in five years or less is not supported by any evidence in the last 20 years. The flimsy Lunar Module would also likely not pass NASA’s present-day safety standards.
  • Distract China: I include this only for completeness. As for how to distract China, use your imagination. But I would submit that ULA snipers or starting a war in the South China Sea is not the best way to go about winning the space race.

OK, I read this far. What’s the answer?

The answer is Blue Origin’s Mark 1 lander.

The company has finished assembly of the first Mark 1 lander and will soon ship it from Florida to Johnson Space Center in Houston for vacuum chamber testing. A pathfinder mission is scheduled to launch in early 2026. It will be the largest vehicle to ever land on the Moon. It is not rated for humans, however. It was designed as a cargo lander.

There have been some key recent developments, though. About two weeks ago, NASA announced that a second mission of Mark 1 will carry the VIPER rover to the Moon’s surface in 2027. This means that Blue Origin intends to start a production line of Mark 1 landers.

At the same time, Blue Origin already has a contract with NASA to develop the much larger Mark 2 lander, which is intended to carry humans to the lunar surface. Realistically, though, this will not be ready until sometime in the 2030s. Like SpaceX’s Starship, it will require multiple refueling launches. As part of this contract, Blue has worked extensively with NASA on a crew cabin for the Mark 2 lander.

A full-size mock-up of the Blue Origin Mk. 1 lunar lander. Credit: Eric Berger

Here comes the important part. Ars can now report, based on government sources, that Blue Origin has begun preliminary work on a modified version of the Mark 1 lander—leveraging learnings from Mark 2 crew development—that could be part of an architecture to land humans on the Moon this decade. NASA has not formally requested Blue Origin to work on this technology, but according to a space agency official, the company recognizes the urgency of the need.

How would it work? Blue Origin is still architecting the mission, but it would involve “multiple” Mark 1 landers to carry crew down to the lunar surface and then ascend back up to lunar orbit to rendezvous with the Orion spacecraft. Enough work has been done, according to the official, that Blue Origin engineers are confident the approach could work. Critically, it would not require any refueling.

It is unclear whether this solution has reached Duffy, but he would be smart to listen. According to sources, Blue Origin founder Jeff Bezos is intrigued by the idea. And why wouldn’t he be? For a quarter of a century, he has been hearing about how Musk has been kicking his ass in spaceflight. Bezos also loves the Apollo program and could now play an essential role in serving his country in an hour of need. He could beat SpaceX to the Moon and stamp his name in the history of spaceflight.

Jeff and Sean? Y’all need to talk.

Photo of Eric Berger

Eric Berger is the senior space editor at Ars Technica, covering everything from astronomy to private space to NASA policy, and author of two books: Liftoff, about the rise of SpaceX; and Reentry, on the development of the Falcon 9 rocket and Dragon. A certified meteorologist, Eric lives in Houston.


In their own words: The Artemis II crew on the frenetic first hours of their flight

No one will be able to sleep when the launch window opens, however.

Wiseman: About seven seconds prior to liftoff, the four main engines light, and they come up to full power. And then the solids light, and that’s when you’re going. What’s crazy to me is that it’s six and a half seconds into flight before the solids clear the top of the tower. Five million pounds of machinery going straight uphill. Six and a half seconds to clear the tower. As a human, I can’t wait to feel that force.

A little more than two minutes into flight, the powerful side-mounted boosters will separate. They will have done the vast majority of lifting to that point, with the rocket already reaching a velocity of 3,100 mph (5,000 km/h) and an altitude of 30 miles (48 km), well on its way to space. As mission specialists, Koch and Hansen will largely be along for the ride. Wiseman, the commander, and Glover, the pilot, will be tracking the launch, although the rocket’s flight will be fully automated unless something goes wrong.

Wiseman: Victor and I, we have a lot of work. We have a lot of systems to monitor. Hopefully, everything goes great, and if it doesn’t, we’re very well-trained on what to do next.

After 8 minutes and 3 seconds, the rocket’s core stage will shut down, and the upper stage and Orion spacecraft will separate about 10 seconds later. They will be in space, with about 40 minutes to prepare for their next major maneuver.

In orbit

Koch: The wildest thing in this mission is that literally, right after main-engine cutoff, the first thing Jeremy and I do is get up and start working. I don’t know of a single other mission, certainly not in my memory, where that has been the case in terms of physical movement in the vehicle, setting things up.

Koch, Wiseman, and Glover have all flown to space before, either on a SpaceX Dragon or Russian Soyuz vehicle, and spent several months on the International Space Station. So they know how their bodies will react to weightlessness. Nearly half of all astronauts experience “space adaptation syndrome” during their first flight to orbit, and there is really no way to predict who it will afflict beforehand. This is a real concern for Hansen, a first-time flier, who is expected to hop out of his seat and start working.

Canadian Astronaut Jeremy Hansen is a first-time flier on Artemis II. Credit: NASA

Hansen: I’m definitely worried about that, just from a space motion sickness point of view. So I’ll just be really intentional. I won’t move my head around a lot. Obviously, I’m gonna have to get up and move. And I’ll just be very intentional in those first few hours while I’m moving around. And the other thing that I’ll do—it’s very different from Space Station—is I just have everything memorized, so I don’t have to read the procedure on those first few things. So I’m not constantly going down to the [tablet] and reading, and then up. And I’ll just try to minimize what I do.

Koch and Hansen will set up and test essential life support systems on the spacecraft because if the bathroom does not work, they’re not going to the Moon.

Hansen: We kind of split the vehicle by side. So Christina is on the side of the toilet. She’s taking care of all that stuff. I’m on the side of the water dispenser, which is something they want to know: Can we dispense water? It’s not a very complicated system. We just got to get up, get the stuff out of storage, hook it up. I’ll have some camera equipment that I’ll pull out of there. I’ve got the masks we use if we have a fire and we’re trying to purge the smoke. I’ve got to get those set up and make sure they’re good to go. So it’s just little jobs, little odds and ends.

Unlike a conventional rocket mission, the Artemis II vehicle’s upper stage, known as the Interim Cryogenic Propulsion Stage, will not fire right away. Rather, after separating from the core stage, Orion will be in an elliptical orbit that will take it out to an apogee of 1,200 nautical miles, nearly five times higher than the International Space Station. There, the crew will be farther from Earth than anyone since the Apollo program.


The SUV that saved Porsche goes electric, and the tech is interesting


It will be the most powerful production Porsche ever, but that’s not the cool bit.

The next time we see the Cayenne Electric, it probably won’t be wearing fake body panels like the cars you see here. Credit: Jonathan Gitlin

LEIPZIG, Germany—Porsche is synonymous with sports cars in which the engine lives behind the driver. From the company’s first open-top 356/1—which it let us drive a couple of years ago—to the latest stupendously clever 911 variants, these are the machines most of us associate with the Stuttgart-based brand. And indeed, the company has sold more than a million 911s since the model’s introduction in 1963. But here’s the bald truth: It’s the SUVs that keep the lights on. Without their profit, there would be no money to develop the next T-Hybrid or GT3. The first Cayenne was introduced just 23 years ago; since then, Porsche has sold more than 1.5 million of them. And the next one will be electric.

Of course, this won’t be Porsche’s first electric SUV. That honor goes to the electric Macan, which is probably already a more common sight on the streets of well-heeled neighborhoods. Like the Macan, the Cayenne Electric is based on Volkswagen Group’s Premium Platform Electric, but this is no mere scaled-up Macan.

“It’s not just a product update; it’s a complete new chapter in the story,” said Sajjad Khan, a member of Porsche’s management board in charge of car IT.

Compared to the Macan, there’s an all-new battery pack design, not to mention more efficient and powerful electric motors. Inside, the cockpit is also new, with OLED screens for the main instrument panel and a curved infotainment display that will probably dominate the discussion.

We were given a passenger ride in the most powerful version of the Cayenne Electric, which is capable of brutal performance. Porsche

In fact, Ars already got behind the wheel of the next Cayenne during a development drive in the US earlier this summer. But we can now tell you about the tech behind the camouflaged body panels.

OLED me tell you about my screens

Although the 14.25-inch digital main instrument display looks pretty similar to the one you’ll find in most modern Porsches, all of the hardware for the Cayenne Electric is new and now uses an OLED panel. The curved central 12.25-inch infotainment screen is also an OLED panel, which keeps customizable widgets on its lower third and allows for a variety of content on the upper portion, including Android Auto or Apple CarPlay. The UI has taken cues from iOS, but it retains a look and feel that’s consistent with other Porsches.

The bottom of the infotainment screen has some persistent icons for things like seat heaters, but there are at least dedicated physical controls for the climate temperature and fan speed, the demisters, and the volume control.

The interior is dominated by new OLED screens. Porsche

New battery

At the heart of the new Cayenne Electric is an all-new 113 kWh battery pack (108 kWh net) that Porsche describes as “functionally integrated” into the car. Unlike previous PPE-based EVs (like the Macan or the Audi Q6), there’s no frame around the pack. Instead, it’s composed of six modules, each housed in its own protective case and bolted to the chassis.

The module cases provide the same kind of added stiffness as a battery frame might, but without devoting so much interior volume (and also mass) to the structure as opposed to the cells. Consequently, energy density is increased by around seven percent compared to the battery in the Taycan sedan.

Inside each module are four small packs, each comprising eight pouch cells connected in series. A new cooling system uses 15 percent less energy, and a new predictive thermal management system uses cloud data to condition the battery during driving and charging. (Porsche says the battery will still condition itself during a loss of connectivity but with less accuracy from the model.)

This all translates into greater efficiency. The pack is able to DC fast charge at up to 400 kW, going from 10 to 80 percent in as little as 16 minutes. Impressively, the curve actually slopes upward a little, only beginning to ramp down once the state of charge passes 55 percent. Even so, it will still accept 270 kW until hitting 70 percent SoC. For those looking for a quick plug-and-go option, Porsche told us you can expect to add 30 kWh in the first five minutes.
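Those figures hang together if you run the arithmetic against the 108 kWh net capacity. The sketch below is a back-of-the-envelope check on the implied average charging power, not measured data.

```python
# Sanity check on the charging claims using the 108 kWh net capacity and
# the figures quoted above. These are simple averages, not measured data.

net_capacity_kwh = 108

# 10 to 80 percent in 16 minutes
energy_added_kwh = net_capacity_kwh * (0.80 - 0.10)
avg_power_kw = energy_added_kwh / (16 / 60)
print(f"10-80%: {energy_added_kwh:.1f} kWh added, ~{avg_power_kw:.0f} kW average")

# 30 kWh added in the first five minutes
print(f"First 5 minutes: ~{30 / (5 / 60):.0f} kW average")
# -> a bit over 280 kW average across the 10-80% run and 360 kW across the
#    first five minutes, both comfortably below the 400 kW peak
```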

An illustration of the Porsche Cayenne Electric battery pack. Porsche

You’ll find a NACS port for DC charging on one side and a J1772 port for AC on the other. Porsche thinks many Cayenne Electric customers will opt for the 11 kW inductive charging pad at home instead of bothering with a plug. This uses Wi-Fi to detect the car’s proximity and will guide you onto the pad, with charging occurring seamlessly. (Unlike your consumer electronic experience, inductive charging for EVs is only a few percent less efficient than using a cable.)

The most powerful production Porsche yet

Less-powerful Cayenne Electrics are in the works, but the one Porsche was ready to talk about was the mighty Turbo, which will boast more torque and power output than any other regular-series production Porsche. The automaker is a little coy on the exact output, but expect nominal power to be more than 804 hp (600 kW). Not enough? The push-to-pass button on the steering wheel ups that to more than 938 hp (700 kW) for bursts of up to 10 seconds.

Still not enough? Engage launch control, which raises power to more than 1,072 hp (800 kW). Let me tell you, that feels brutal when you’re sitting in the passenger seat as the car hits 62 mph (100 km/h) in less than three seconds and carries on to 124 mph (200 km/h) in under eight seconds. This is a seriously quick SUV, despite a curb weight in excess of 5,500 lbs (2.5 tonnes).

A new rear drive unit helps make that happen. (Up front is a second drive unit we’ve seen in the Macan.) Based on lessons learned from the GT4 ePerformance (a technology test bed for a potential customer racing EV), the unit directly cools the stator with a non-conductive oil and benefits from some Formula E-derived tech (like silicon carbide inverters) that pushes the motor efficiency to 98 percent.

A very low center of gravity helps bank angles. Jonathan Gitlin

Regenerative braking performance is even more impressive than fast charging—this SUV will regen up to 600 kW, and the friction brakes won’t take over until well past 0.5 Gs of deceleration. Only around three percent of braking events will require the friction brakes to do their thing—in this case, they’re standard carbon ceramics that save weight compared to conventional iron rotors, which again translates to improved efficiency.
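A rough physics check, using the roughly 2.5-tonne curb weight quoted earlier and ignoring aerodynamic drag, shows why 600 kW of regen is enough to cover a 0.5 G stop at anything short of autobahn speeds.

```python
# Power needed to decelerate is mass x deceleration x speed. The 2,500 kg
# figure is the approximate curb weight quoted above; aero drag and rolling
# resistance (which would help the brakes) are ignored.

mass_kg = 2_500
decel_ms2 = 0.5 * 9.81   # the 0.5 G point where the friction brakes take over

for speed_kmh in (60, 100, 140, 180):
    speed_ms = speed_kmh / 3.6
    power_kw = mass_kg * decel_ms2 * speed_ms / 1000
    print(f"{speed_kmh:>3} km/h at 0.5 G: ~{power_kw:.0f} kW")
# -> roughly 200 kW at 60 km/h rising to about 610 kW at 180 km/h, so 600 kW
#    of regen can absorb most 0.5 G stops on its own
```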

Sadly, you need to push the brake pedal to get all that regen. Deep in the heart of the company, key decision makers remain philosophically opposed to the concept of one-pedal driving, so the most lift-off regen you’ll experience will be around 0.15 Gs. I remain unconvinced that this is the correct decision; the Cayenne is a software-defined vehicle, so a one-pedal driving setting is perfectly possible, and Porsche could offer it as an option for drivers to engage, like many other EVs out there.

While we might have had to test the 911 GTS’s rough-road ability this summer, the Cayenne is positively made for that kind of thing. There are drive modes for gravel/sand, ice, and rocks, and plenty of wheel articulation thanks to the absence of traditional antiroll bars. It’s capable of fording depths of at least a foot (0.35 m), and as you can see from some of the photos, it will happily drive along sloped banks at angles that make passengers look for the grab handles.

A new traction management system helps here, and its 5 ms response time makes it five times faster than the previous iteration.

The big SUV’s agility on the handling track was perhaps even more remarkable. It was actually nauseating at times, given the brutality with which it can accelerate, brake, and change direction. There’s up to 5 degrees of rear-axle steering: below 62 mph (a higher threshold than usual), the rear wheels turn opposite the fronts, shrinking the turning circle; above that speed, they turn with the fronts to improve stability during high-speed lane changes.

The suspension combines air springs and hydraulic adaptive dampers, and like the Panamera we recently tested, comfort mode can enable an active ride setting that counteracts weight transfer during cornering, accelerating, and braking to give passengers the smoothest ride possible.

More detailed specs will follow in time. As for pricing, expect it to be similar to or slightly higher than that of the current Cayenne.

Photo of Jonathan M. Gitlin

Jonathan is the Automotive Editor at Ars Technica. He has a BSc and PhD in Pharmacology. In 2014 he decided to indulge his lifelong passion for the car by leaving the National Human Genome Research Institute and launching Ars Technica’s automotive coverage. He lives in Washington, DC.


ZR1, GTD, and America’s new Nürburgring war


Drive quickly and make a lot of horsepower.

Ford and Chevy set near-identical lap times with very different cars; we drove both. Credit: Tim Stevens | Aurich Lawson

There’s a racetrack with a funny name in Germany that, in the eyes of many international enthusiasts, is the de facto benchmark for automotive performance. But the Nürburgring, a 13-mile (20 km) track often called the Green Hell, rarely hits the radar of mainstream US performance aficionados. That’s because American car companies rarely take the time to run cars there, and if they do, it’s in secrecy, to test pre-production machines cloaked in camouflage without publishing official times.

The track’s domestic profile has lately been on the rise, though. Late last year, Ford became the first American manufacturer to run a sub-7-minute lap: 6:57.685 from its ultra-high-performance Mustang GTD. It then did better, announcing a 6:52.072 lap time in May. Two months later, Chevrolet set a 6:49.275 lap time with the hybrid Corvette ZR1X, becoming the new fastest American car around that track.

It’s a vehicular war of escalation, but it’s about much more than bragging rights.

The Green Hell as a must-visit for manufacturers

The Nürburgring is a delightfully twisted stretch of purpose-built asphalt and concrete strewn across the hills of western Germany. It dates back to the 1920s and hosted the German Grand Prix for a half-century before it was finally deemed too unsafe in the late 1970s.

It’s still a motorsports mecca, with sports car racing events like the 24 Hours of the Nürburgring drawing hundreds of thousands of spectators, but today, it’s better known as the ultimate automotive performance proving ground.

It offers an unmatched variety of high-speed corners, elevation changes, and differing surfaces that challenge the best engineers in the world. “If you can develop a car that goes fast on the Nürburgring, it’s going to be fast everywhere in the whole world,” said Brian Wallace, the Corvette ZR1’s vehicle dynamics engineer and the driver who set that car’s fast lap of 6:50.763.

“When you’re going after Nürburgring lap time, everything in the car has to be ten tenths,” said Greg Goodall, Ford’s chief program engineer for the Mustang GTD. “You can’t just use something that is OK or decent.”

Thankfully, neither of these cars is merely decent.

Mustang, deconstructed

You know the scene in Robocop where a schematic displays how little of Alex Murphy’s body remains inside that armor? Just enough of Peter Weller’s iconic jawline remains to identify the man, but the focus is clearly on the machine.

That’s a bit like how Multimatic creates the GTD, which retains just enough Mustang shape to look familiar, but little else.

Multimatic, which builds the wild Ford GT and also helms many of Ford’s motorsports efforts, starts with partially assembled Mustangs pulled from the assembly line, minus fenders, hood, and roof. Then the company guts what’s left in the middle.

Ford’s partner Multimatic cut as much of the existing road car chassis as it could for the GTD. Tim Stevens

“They cut out the second row seat area where our suspension is,” Ford’s Goodall said. “They cut out the rear floor in the trunk area because we put a flat plate on there to mount the transaxle to it. And then they cut the rear body side off and replace that with a wide-body carbon-fiber bit.”

A transaxle is simply a fun name for a rear-mounted transmission—in this case, an eight-speed dual-clutch unit mounted on the rear axle to help balance the car’s weight.

The GTD needs as much help as it can get to offset the heft of the 5.2-liter supercharged V8 up front. It gets a full set of carbon-fiber bodywork, too, but the resulting package still weighs over 4,300 lbs (1,950 kg).

With 815 hp (608 kW) and 664 lb-ft (900 Nm) of torque, it’s the most powerful road-going Mustang of all time, and it received other upgrades to match, including carbon-ceramic brake discs at the corners and the wing to end all wings slung off the back. It’s not only big; it’s smart, featuring a Formula One-style drag-reduction system.

At higher speeds, the wing’s element flips up, enabling a 202 mph (325 km/h) top speed. No surprise, that makes this the fastest factory Mustang ever. At a $325,000 starting price, it had better be, but when it comes to the maximum-velocity stakes, the Chevrolet is in another league.

More Corvette

You lose the frunk but gain cooling and downforce. Tim Stevens

On paper, when it comes to outright speed and value, the Chevrolet Corvette ZR1 seems to offer far more bang for what is still a significant number of bucks. To be specific, the ZR1 starts at about $175,000, which gets you a 1,064 hp (793 kW) car that will do 233 mph (375 km/h) if you point it down a road long enough.

Where the GTD is a thorough reimagining of what a Mustang can be, the ZR1 sticks closer to the Corvette script, offering more power, more aerodynamics, and more braking without any dramatic internal reconfiguration. That’s because it was all part of the car’s original mission plan, GM’s Brian Wallace told me.

“We knew we were going to build this car,” he said, “knowing it had the backbone to double the horsepower, put 20 percent more grip in the car, and oodles of aero.”

At the center of it all is a 5.5-liter twin-turbocharged V8. You can get a big wing here, too, but it isn’t active like the GTD’s.

Chevrolet engineers bolstered the internal structure at the back of the car to handle the extra downforce at the rear. Up front, the frunk is replaced by a duct through the hood, providing yet more grip to balance things. Big wheels, sticky tires, and carbon-ceramic brakes round out a package that looks a little less radical on the outside than the Mustang and substantially less retooled on the inside, but clearly no less capable.

A pair of turbochargers lurk behind that rear window. Credit: Tim Stevens

And if that’s not enough, Chevrolet has the 1,250 hp (932 kW), $208,000 ZR1X on offer, which adds the Corvette E-Ray’s hybrid system into the mix. That package does add more weight, but the result is still a roughly 4,000-lb (1,814 kg) car, hundreds less than the Ford.

’Ring battles

Ford and Chevy’s battle at the ‘ring blew up this summer, but both brands have tested there for years. Chevrolet has even set official lap times in the past, including the previous-generation Corvette Z06’s 7:22.68 in 2012. Despite that, a fast lap time was not in the initial plan for the new ZR1 and ZR1X. Drew Cattell, ZR1X vehicle dynamics engineer and the driver of that 6:49.275 lap, told me it “wasn’t an overriding priority” for the new Corvette.

But after developing the cars there so extensively, they decided to give it a go. “Seeing what the cars could do, it felt like the right time. That we had something we were proud of and we could really deliver with,” he said.

Ford, meanwhile, had never set an official lap time at the ‘ring, but it was part of the GTD’s raison d’être: “That was always a goal: to go under seven minutes. And some of it was to be the first American car ever to do it,” Ford’s Goodall said.

That required extracting every bit of performance, necessitating a last-minute change during final testing. In May of 2024, after the car’s design had been finalized by everyone up the chain of command at Ford, the test team in Germany determined the GTD needed a little more front grip.

To fix it, Steve Thompson, a dynamic technical specialist at Ford, designed a prototype aerodynamic extension to the vents in the hood. “It was 3D-printed, duct taped,” Goodall said. That design was refined and wound up on the production car, boosting frontal downforce on the GTD without adding drag.

Chevrolet’s development process relied not only on engineers in Germany but also on work in the US. “The team back home will keep on poring over the data while we go to sleep, because of the time difference,” Cattell said, “and then they’ll have something in our inbox the next morning to try out.”

When it was time for the Corvette’s record-setting runs, there wasn’t much left to change, just a few minor setup tweaks. “Maybe a millimeter or two,” Wallace said, “all within factory alignment settings.”

A few months later, it was my turn.

Behind the wheel

No, I wasn’t able to run either of these cars at the Nürburgring, but I was lucky enough to spend one day with both the GTD and the ZR1. First was the Corvette at one of America’s greatest racing venues: the Circuit of the Americas, a 3.5-mile track and host of the Formula One United States Grand Prix since 2012.

How does 180 mph on the back straight at the Circuit of the Americas sound? Credit: Tim Stevens

I’ve been lucky to spend a lot of time in various Corvettes over the years, but none with performance like this. I was expecting a borderline terrifying experience, but I couldn’t have been more wrong. Despite its outrageous speed and acceleration, the ZR1 really is still a Corvette.

On just my second lap behind the wheel of the ZR1, I was doing 180 mph down the back straight and running a lap time close to the record set by a $1 million McLaren Senna a few years before. The Corvette is outrageously fast—and frankly exhausting to drive thanks to the monumental G forces—but it’s more encouraging than intimidating.

The GTD was more of a commitment. I sampled one at The Thermal Club near Palm Springs, California, a less auspicious but more technical track with tighter turns and closer walls separating them. That always amps up the pressure a bit, but the challenging layout of the track really forced me to focus on extracting the most out of the Mustang at low and high speeds.

The GTD has a few tricks up its sleeve to help with that, including an advanced multi-height suspension that drops it by about 1.5 inches (4 cm) at the touch of a button, optimizing the aerodynamic performance and lowering the roll height of the car.

Heavier and less powerful than the Corvette, the Mustang GTD has astonishing levels of cornering grip. Credit: Tim Stevens

While road-going Mustangs typically focus on big power in a straight line, the GTD’s real skill is astonishing grip and handling. Remember, the GTD is only a few seconds slower on the ‘ring than the ZR1, despite weighing somewhere around 400 pounds (181 kg) more and having nearly 250 fewer hp (186 kW).
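Run the power-to-weight numbers from this article and the gap is stark. The ZR1’s curb weight isn’t quoted directly, so the sketch below infers it from the “around 400 pounds” difference and should be read as approximate.

```python
# Power-to-weight from the figures quoted in this piece: 815 hp and about
# 4,300 lbs for the GTD, 1,064 hp for the ZR1, with the ZR1's weight
# inferred from the quoted ~400 lb gap (treat it as approximate).

cars = {
    "Mustang GTD": (815, 4300),
    "Corvette ZR1": (1064, 4300 - 400),
}

for name, (hp, lbs) in cars.items():
    print(f"{name}: {lbs / hp:.1f} lbs per hp")
# -> about 5.3 lbs/hp for the GTD vs. 3.7 lbs/hp for the ZR1, a gap the
#    Mustang has to close with grip and aerodynamics
```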

The biggest difference in feel between the two, though, is how they accelerate. The ZR1’s twin-turbocharged V8 delivers big power when you dip in the throttle and then just keeps piling on more and more as the revs increase. The supercharged V8 in the Mustang, on the other hand, is more like an instantaneous kick in the posterior. It’s ferocious.

Healthy competition

The ZR1 is brutally fast, yes, but it’s still remarkably composed, and it feels every bit as usable and refined as any of the other flavors of modern Corvette. The GTD, on the other hand, is a completely different breed than the base Mustang, every bit the purpose-built racer you’d expect from a race shop like Multimatic.

Chevrolet did the ZR1 and ZR1X development in-house. Cattell said that is a huge point of pride for the team. So, too, is setting those ZR1 and ZR1X lap times using General Motors’ development engineers. Ford turned to a pro race driver for its laps.

Ford factory racing driver Dirk Muller was responsible for setting the GTD’s time at the ‘ring. Credit: Giles Jenkyn Photography LTD/Ford

GM vehicle dynamics engineer Drew Cattell set the ZR1X’s Nordschleife time. Credit: Chevrolet

That, though, was as close to a barb as I could get out of any engineer on either side of this new Nürburgring war. Both teams were extremely complimentary of each other.

“We’re pretty proud of that record. And I don’t say this in a snarky way, but we were first, and you can’t ever take away first,” Ford’s Goodall said. “Congratulations to them. We know better than anybody how hard of an accomplishment or how big of an accomplishment it is and how much effort goes into it.”

But he quickly added that Ford isn’t done. “You’re not a racer if you’re just going to take that lying down. So it took us approximately 30 seconds to align that we were ready to go back and do something about it,” he said.

In other words, this Nürburgring war is just beginning.
