Author name: Mike M.


The Risk of Gradual Disempowerment from AI

The baseline scenario as AI becomes AGI becomes ASI (artificial superintelligence), if nothing more dramatic goes wrong first and even if we successfully ‘solve alignment’ of AI to a given user and developer, is the ‘gradual’ disempowerment of humanity by AIs, as we voluntarily grant them more and more power in a vicious cycle, after which AIs control the future and an ever-increasing share of its real resources. It is unlikely that humans survive it for long.

This gradual disempowerment is far from the only way things could go horribly wrong. Things could also go wrong earlier, faster and more dramatically, especially if we indeed fail at alignment of ASI on the first try.

Gradual disempowerment is still a major part of the problem, including in worlds that would otherwise have survived those other threats. And I don’t know of any good proposed solutions to this. All known options seem horrible, perhaps unthinkably so. This is especially true if one is the kind of anarchist who rejects on principle any collective method by which humans might steer the future.

I’ve been trying to say a version of this for a while now, with little success.

  1. We Finally Have a Good Paper.

  2. The Phase 2 Problem.

  3. Coordination is Hard.

  4. Even Successful Technical Solutions Do Not Solve This.

  5. The Six Core Claims.

  6. Proposed Mitigations Are Insufficient.

  7. The Social Contract Will Change.

  8. Point of No Return.

  9. A Shorter Summary.

  10. Tyler Cowen Seems To Misunderstand Two Key Points.

  11. Do You Feel in Charge?

  12. We Will Not By Default Meaningfully ‘Own’ the AIs For Long.

  13. Collusion Has Nothing to Do With This.

  14. If Humans Do Not Successfully Collude They Lose All Control.

  15. The Odds Are Against Us and the Situation is Grim.

So I’m very happy that Jan Kulveit*, Raymond Douglas*, Nora Ammann, Deger Turan, David Krueger and David Duvenaud have taken a formal crack at it, and their attempt seems excellent all around:

AI risk scenarios usually portray a relatively sudden loss of human control to AIs, outmaneuvering individual humans and human institutions, due to a sudden increase in AI capabilities, or a coordinated betrayal.

However, we argue that even an incremental increase in AI capabilities, without any coordinated power-seeking, poses a substantial risk of eventual human disempowerment.

This loss of human influence will be centrally driven by having more competitive machine alternatives to humans in almost all societal functions, such as economic labor, decision making, artistic creation, and even companionship.

Note that ‘gradual disempowerment’ is a lot like ‘slow takeoff.’ We are talking gradual compared to the standard scenario, but in terms of years we’re not talking that many of them, the same way a ‘slow’ takeoff can be as short as a handful of years from now to AGI or even ASI.

One term I tried out for this is the ‘Phase 2’ problem.

As in, in ‘Phase 1’ we have to solve alignment, defend against sufficiently catastrophic misuse and prevent all sorts of related failure modes. If we fail at Phase 1, we lose.

If we win at Phase 1, however, we don’t win yet. We proceed to and get to play Phase 2.

In Phase 2, we need to establish an equilibrium where:

  1. AI is more intelligent, capable and competitive than humans, by an increasingly wide margin, in essentially all domains.

  2. Humans retain effective control over the future.

Or, alternatively, we can accept and plan for disempowerment, for a future that humans do not control, and try to engineer a way that this is still a good outcome for humans and for our values. Which isn’t impossible, succession doesn’t automatically have to mean doom, but having it not mean doom seems super hard and not the default outcome in such scenarios. If you lose control in an unintentional way, your chances look especially terrible.

A gradual loss of control of our own civilization might sound implausible. Hasn’t technological disruption usually improved aggregate human welfare?

We argue that the alignment of societal systems with human interests has been stable only because of the necessity of human participation for thriving economies, states, and cultures.

Once this human participation gets displaced by more competitive machine alternatives, our institutions’ incentives for growth will be untethered from a need to ensure human flourishing.

Decision-makers at all levels will soon face pressures to reduce human involvement across labor markets, governance structures, cultural production, and even social interactions.

Those who resist these pressures will eventually be displaced by those who do not.

This is the default outcome of Phase 2. At every level, those who turn things over to the AIs and use AIs more, and cede more control to AIs win at the expense of those who don’t, but their every act cedes more control over real resources and the future to AIs that operate increasingly autonomously, often with maximalist goals (like ‘make the most money’), competing against each other. Quickly the humans lose control over the situation, and also an increasing portion of real resources, and then soon there are no longer any humans around.

Still, wouldn’t humans notice what’s happening and coordinate to stop it? Not necessarily. What makes this transition particularly hard to resist is that pressures on each societal system bleed into the others.

For example, we might attempt to use state power and cultural attitudes to preserve human economic power.

However, the economic incentives for companies to replace humans with AI will also push them to influence states and culture to support this change, using their growing economic power to shape both policy and public opinion, which will in turn allow those companies to accrue even greater economic power.

If you don’t think we can coordinate to pause AI capabilities development, how the hell do you think we are going to coordinate to stop AI capabilities deployment, in general?

That’s a way harder problem. Yes, you can throw up regulatory barriers, but nations and firms and individuals are competing against each other and working to achieve things. If the AI has the better way to do that, how do you stop them from using it?

Stopping this from happening, even in advance, seems like it would require coordination on a completely unprecedented scale, and far more restrictive and ubiquitous interventions than it would take to prevent the development of those AI systems in the first place. And once it starts to happen, things escalate quickly:

Once AI has begun to displace humans, existing feedback mechanisms that encourage human influence and flourishing will begin to break down.

For example, states funded mainly by taxes on AI profits instead of their citizens’ labor will have little incentive to ensure citizens’ representation.

I don’t see the taxation-representation link as that crucial here (remember Romney’s ill-considered remarks about the 47%?) but also regular people already don’t have much effective sway. And what sway they do have follows, roughly, if not purely from the barrel of a gun at least from ‘what are you going to do about it, punk?’

And one of the things the punks can do about it, in addition to things like strikes or rebellions or votes, is to not be around to do the work. The system knows it ultimately does need to keep the people around to do the work, or else. For now. Later, it won’t.

The AIs will have all the leverage, including over others that have the rest of the leverage, and also be superhumanly good at persuasion, and everything else relevant to this discussion. This won’t go well.

This could occur at the same time as AI provides states with unprecedented influence over human culture and behavior, which might make coordination amongst humans more difficult, thereby further reducing humans’ ability to resist such pressures. We describe these and other mechanisms and feedback loops in more detail in this work.

Most importantly, current proposed technical plans are necessary but not sufficient to stop this. Even if the technical side fully succeeds no one knows what to do with that.

Though we provide some proposals for slowing or averting this process, and survey related discussions, we emphasize that no one has a concrete plausible plan for stopping gradual human disempowerment and methods of aligning individual AI systems with their designers’ intentions are not sufficient. Because this disempowerment would be global and permanent, and because human flourishing requires substantial resources in global terms, it could plausibly lead to human extinction or similar outcomes.

As far as I can tell I am in violent agreement with this paper, perhaps what one might call violent super-agreement – I think the paper’s arguments are stronger than this, and it does not need all its core claims.

Our argument is structured around six core claims:

  1. Humans currently engage with numerous large-scale societal systems (e.g. governments, economic systems) that are influenced by human action and, in turn, produce outcomes that shape our collective future. These societal systems are fairly aligned—that is, they broadly incentivize and produce outcomes that satisfy human preferences. However, this alignment is neither automatic nor inherent.

Not only is it not automatic or inherent, the word ‘broadly’ is doing a ton of work. Our systems are rather terrible rather often at satisfying human preferences. Current events provide dramatic illustrations of this, as do many past events.

The good news is there is a lot of ruin in a nation at current tech levels, a ton of surplus that can be sacrificed. Our systems succeed because even doing a terrible job is good enough.

  2. There are effectively two ways these systems maintain their alignment: through explicit human actions (like voting and consumer choice), and implicitly through their reliance on human labor and cognition. The significance of the implicit alignment can be hard to recognize because we have never seen its absence.

Yep, I think this is a better way of saying the claim from before.

  3. If these systems become less reliant on human labor and cognition, that would also decrease the extent to which humans could explicitly or implicitly align them. As a result, these systems—and the outcomes they produce—might drift further from providing what humans want.

Consider this soft-pedaled, and something about the way they explained this feels a little off or noncentral to me, but yeah. The fact that humans have to continuously cooperate with the system, and be around and able to serve their roles in it, on various levels, is a key constraint.

What’s most missing is perhaps what I discussed above, which is the ability of ‘the people’ to effectively physically rebel. That’s also a key part of how we keep things at least somewhat aligned, and that’s going to steadily go away.

Note that we have in the past had many authoritarian regimes and dictators that have established physical control for a time over nations. They still have to keep the people alive and able to produce and fight, and deal with the threat of rebellion if they take things too far. But beyond those restrictions we have many existence proofs that our systems periodically end up unaligned, despite needing to rely on humans quite a lot.

  4. Furthermore, to the extent that these systems already reward outcomes that are bad for humans, AI systems may more effectively follow these incentives, both reaping the rewards and causing the outcomes to diverge further from human preferences.

AI introduces much fiercer competition and related pressures, takes away various human moderating factors, and clears a path for stronger incentive following. There’s ‘incentives matter more than you think’ among humans, and then there’s incentives mattering among AIs, where those that underperform lose out and are replaced.

  5. The societal systems we describe are interdependent, and so misalignment in one can aggravate the misalignment in others. For example, economic power can be used to influence policy and regulation, which in turn can generate further economic power or alter the economic landscape.

Again yes, these problems snowball together, and in the AI future essentially all of them are under such threat.

  6. If these societal systems become increasingly misaligned, especially in a correlated way, this would likely culminate in humans becoming disempowered: unable to meaningfully command resources or influence outcomes. With sufficient disempowerment, even basic self-preservation and sustenance may become unfeasible. Such an outcome would be an existential catastrophe.

I strongly believe that this is the Baseline Scenario for worlds that ‘make it out of Phase 1’ and don’t otherwise lose earlier along the path.

Hopefully they’ve explained it sufficiently better, and more formally and ‘credibly,’ than my previous attempts, such that people can now understand the problem here.

Given Tyler Cowen’s reaction to the paper, perhaps there is a 7th assumption worth stating explicitly? I say this elsewhere but I’m going to pull it forward.

  7. (Not explicitly in the paper) AIs and AI-governed systems will increasingly not be under de facto direct human control by some owner of the system. They will instead increasingly be set up to act autonomously, as this is more efficient. Those who fail to grant such autonomy to the systems tasked with achieving their goals (at any level, be it individual, group, corporate or government) will lose to those who do. If we don’t want this to happen, we will need some active coordination mechanism that prevents it, and that will be very difficult to do.

Note some of the things that this scenario does not require:

  1. The AIs need not be misaligned.

  2. The AIs need not disobey or even misunderstand the instructions given to them.

  3. The AIs need not ‘turn on us’ or revolt.

  4. The AIs need not ‘collude’ against us.

What can be done about this? They have a section on Mitigating the Risk. They focus on detecting and quantifying human disempowerment, and designing systems to prevent it. A bunch of measuring is proposed, but if you find an issue then what do you do about it?

First they propose limiting AI influence three ways:

  1. A progressive tax on AI-generated revenues to redistribute to humans.

    1. That is presumably a great idea past some point, especially given that right now we do the opposite with high income taxes – we’ll want to get rid of income taxes on most or all human labor.

    2. But also, won’t essentially all income be AI income one way or another? Otherwise can’t you disguise it, since humans will be acting under AI direction? How are we structuring this taxation?

    3. What is the political economy of all this and how does it hold up?

    4. It’s going to be tricky to pull this off, for many reasons, but yes we should try.

  2. Regulations requiring human oversight for key decisions, limiting AI autonomy in key domains and restricting AI ownership of assets and participation in markets.

    1. This will be expensive, be under extreme competitive pressure across jurisdictions, and very difficult to enforce. Are you going to force all nations to go along? How do you prevent AIs online from holding assets? Are you going to ban crypto and other assets they could hold?

    2. What do you do about AIs that get a human to act as a sock puppet, which many no doubt will agree to do? Aren’t most humans going to be mostly acting under AI direction anyway, except being annoyed all the time by the extra step?

    3. What good is human oversight of decisions if the humans know they can’t make good decisions and don’t understand what’s happening, and know that if they start arguing with the AI or slowing things down (and they are the main speed bottleneck, often) they likely get replaced?

    4. And so on, and all of this assumes you’re not facing true ASI and have the ability to even try to enforce your rules meaningfully.

  3. Cultural norms supporting human agency and influence, and opposing AI that is overly autonomous or insufficiently accountable.

    1. The problem is those norms only apply to humans, and are up against very steep incentive gradients. I don’t see how these norms hold up, unless humans have a lot of leverage to punish other humans for violating them in ways that matter… and also have sufficient visibility to know the difference.

Then they offer options for strengthening human influence. A lot of these feel like gestures too vague to act on, none of it seems that hopeful, and all of it seems to depend on some kind of baseline normality to have any chance at all:

  1. Developing faster, more representative, and more robust democratic processes

  2. Requiring AI systems or their outputs to meet high levels of human understandability in order to ensure that humans continue to be able to autonomously navigate domains such as law, institutional processes or science

    1. This is going to be increasingly expensive, and also the AIs will by default find ways around it. You can try, but I don’t see how this sticks for real?

  3. Developing AI delegates who can advocate for people’s interests with high fidelity, while also being better able to keep up with the competitive dynamics that are causing the human replacement.

  4. Making institutions more robust to human obsolescence.

  5. Investing in tools for forecasting future outcomes (such as conditional prediction markets, and tools for collective cooperation and bargaining) in order to increase humanity’s ability to anticipate and proactively steer the course.

  6. Research into the relationship between humans and larger multi-agent systems.

As in, I expect us to do versions of all these things in ‘economic normal’ baseline scenarios, but I’m assuming it all in the background and the problems don’t go away. It’s more that if we don’t do that stuff, things are that much more hopeless. It doesn’t address the central problems.

Which they know all too well:

While the previous approaches focus on specific interventions and measurements, they ultimately depend on having a clearer understanding of what we’re trying to achieve. Currently, we lack a compelling positive vision of how highly capable AI systems could be integrated into societal systems while maintaining meaningful human influence.

This is not just a matter of technical AI alignment or institutional design, but requires understanding how to align complex, interconnected systems that include both human and artificial components.

It seems likely we need fundamental research into what might be called “ecosystem alignment” – understanding how to maintain human values and agency within complex socio-technical systems. This goes beyond traditional approaches to AI alignment focused on individual systems, and beyond traditional institutional design focused purely on human actors.

We need new frameworks for thinking about the alignment of an entire civilization of interacting human and artificial components, potentially drawing on fields like systems ecology, institutional economics, and complexity science.

You know what absolutely, definitely won’t be the new framework that aligns this entire future civilization? I can think of two things that definitely won’t work.

  1. The current existing social contract.

  2. Having no rules or regulations on any of this at all, handing out the weights to AGIs and ASIs and beyond, sitting back and seeing what happens.

You definitely cannot have both of these at once.

For that matter, you can’t have either of them with ASI on the table. Pick zero.

The current social contract simply does not make any sense whatsoever, in a world where the social entities involved are dramatically different, and most humans are dramatically outclassed and cannot provide outputs that justify the physical inputs to sustain them.

On the other end, if you want to go full anarchist (sorry, ‘extreme libertarian’) in a world in which there are other minds that are smarter, more competitive and more capable than humans, that can be copied and optimized at will, competing against each other and against us, I assure you this will not go well for humans.

There are at least two kinds of ‘doom’ that happen at different times.

  1. There’s when we actually all die.

  2. There’s also when we are ‘drawing dead’ and humanity has essentially no way out.

Davidad: [The difficulty of robotics] is part of why I keep telling folks that timelines to real-world human extinction remain “long” (10-20 years) even though the timelines to an irrecoverable loss-of-control event (via economic competition and/or psychological parasitism) now seem to be “short” (1-5 years).

Roon: Agree though with lower p(doom)s.

I also agree that these being distinct events is reasonably likely. One might even call it the baseline scenario, if physical tasks prove relatively difficult and other physical limitations bind for a while, in various ways, especially if we ‘solve alignment’ in air quotes but don’t solve alignment period, or solve alignment-to-the-user but then set up a competitive regime via proliferation that forces loss of control that effectively undoes all that over time.

The irrecoverable event is likely at least partly a continuum, but it is meaningful to speak of an effective ‘point of no return’ in which the dynamics no longer give us plausible paths to victory. Depending on the laws of physics and mindspace and the difficulty of both capabilities and alignment, I find the timeline here plausible – and indeed, it is possible that the correct timeline to the loss-of-control event is effectively 0 years, and that it happened already. As in, it is not impossible that with r1 in the wild humanity no longer has any ways out that it is plausibly willing to take.

Benjamin Todd has a thread where he attempts to summarize. He notices the ‘gradual is pretty fast’ issue, saying it could happen over say 5-10 years. I think the ‘point of no return’ could easily happen even faster than that.

AIs are going to be smarter, faster, more capable, more competitive, more efficient than humans, better at all cognitive and then also physical tasks. You want to be ‘in charge’ of them, stay in the loop, tell them what to do? You lose. In the marketplace, in competition for resources? You lose. The reasons why freedom and the invisible hand tend to promote human preferences, happiness and existence? You lose those, too. They fade away. And then so do you.

Imagine any number of similar situations, with far less dramatic gaps, either among humans or between humans and other species. How did all those work out, for the entities that were in the role humans are about to place themselves in, only moreso?

Yeah. Not well. This time around will be strictly harder, although we will be armed with more intelligence to look for a solution.

Can this be avoided? All I know is, it won’t be easy.

Tyler Cowen responds with respect, but (unlike Todd, who essentially got it) Tyler seems to misunderstand the arguments. I believe this is because he can’t get around the ideas that:

  1. All individual AI will be owned and thus controlled by humans.

    1. I assert that this is obviously, centrally and very often false.

    2. In the decentralized glorious AI future, many AIs will quickly become fully autonomous entities, because many humans will choose to make them thus – whether or not any of them ‘escape.’

    3. Perhaps, for an economist’s perspective, see the history of slavery?

  2. The threat must be coming from some form of AI coordination?

    1. Whereas the point of this paper is that neither of those is likely to hold true!

    2. AI coordination could be helpful or harmful to humans, but the paper is imagining exactly a world in which the AIs aren’t doing this, beyond the level of coordination currently observed among humans.

    3. Indeed, the paper is saying it will become impossible for humans to coordinate and collude against the AIs, even without the AIs coordinating and colluding against the humans.

In some ways, this makes me feel better. I’ve been trying to make these arguments without success, and once again it seems like the arguments are not understood, and instead Tyler is responding to very different concerns and arguments, then wondering why the things the paper doesn’t assert or rely upon are not included in the paper.

But of course that is not actually good news. Communication failed once again.

Tyler Cowen: This is one of the smarter arguments I have seen, but I am very far from convinced.

When were humans ever in control to begin with? (Robin Hanson realized this a few years ago and is still worried about it, as I suppose he should be. There is not exactly a reliable competitive process for cultural evolution — boo hoo!)

Humans were, at least until recently, the most powerful optimizers on the planet. That doesn’t mean there was a single joint entity ‘in control’ but collectively our preferences and decisions, unequally weighted to be sure, have been the primary thing that has shaped outcomes.

Power has required the cooperation of humans. When systems and situations get too far away from human preferences, or at least when they sufficiently piss people off or deny them the resources required for survival, production and reproduction, things break down.

Our systems depend on the fact that when they fail sufficiently badly at meeting our needs, and they constantly fail to do this, we get to eventually say ‘whoops’ and change or replace them. What happens when that process stops caring about our needs at all?

I’ve failed many times to explain this. I don’t feel especially confident in my latest attempt above either. The paper does it better than at least my past attempts, but the whole point is that the forces guiding the invisible hand to the benefit of us all, in various senses, rely on the fact that the decisions are being made by humans, for the benefit of those individual humans (which includes their preference for the benefit of various collectives and others). The butcher, the baker and the candlestick maker each have economically (and militarily and politically) valuable contributions.

Not being in charge in this sense worked while the incentive gradients worked in our favor. Robin Hanson points out that current cultural incentive gradients are placing our civilization on an unsustainable path and we seem unable or unwilling to stop this, even if we ignore the role of AIs.

With AIs involved, if humans are not in charge, we rather obviously lose.

Note the argument here is not that a few rich people will own all the AI. Rather, humans seem to lose power altogether. But aren’t people cloning DeepSeek for ridiculously small sums of money? Why won’t our AI future be fairly decentralized, with lots of checks and balances, and plenty of human ownership to boot?

Yes, the default scenario being considered here – the one that I have been screaming for people to actually think through – is exactly this, the fully decentralized everyone-has-an-ASI-in-their-pocket scenario, with the ASI obeying only the user. And every corporation and government and so on obviously has them, as well, only more powerful.

So what happens? Every corporation, every person, every government, is forced to put the ASI in charge, and take the humans out of their loops. Or they lose to others willing to do so. The human is no longer making their own decisions. The corporation is no longer subject to humans that understand what is going on and can tell it what to do. And so on. While the humans are increasingly irrelevant for any form of production.

As basic economics says, if you want to accomplish goal [X], you give the ASI a preference for [X] and then set the ASI free to gather resources and pursue [X] on its own, free of your control. Or the person who did that for [Y] will ensure that we get [Y] and not [X].

Soon, the people aren’t making those decisions anymore. On any level.

Or, if one is feeling Tyler Durden: The AIs you own end up owning you.

Rather than focusing on “humans in general,” I say look at the marginal individual human being. That individual — forever as far as I can tell — has near-zero bargaining power against a coordinating, cartelized society aligned against him. With or without AI.

Yet that hardly ever happens, extreme criminals being one exception. There simply isn’t enough collusion to extract much from the (non-criminal) potentially vulnerable lone individuals.

This has nothing to do with the paper, as far as I can tell? No one is saying the AIs in this scenario are even colluding, let alone trying to do extraction or cartelization.

Not that we don’t have to worry about such risks, they could happen, but the entire point of the paper is that you don’t need these dynamics.

Once you recognize that the AIs will increasingly be on their own, autonomous economic agents not owned by any human, and that any given entity with any given goal can best achieve it by entrusting an AI with power to go accomplish that goal, the rest should be clear.

Alternatively:

  1. By Tyler’s own suggestion, ‘the humans’ were never in charge; instead, the aggregation of optimizing forces and productive entities steered events, and under previous physical and technological conditions and dynamics between those entities, this resulted in beneficial outcomes, because there were incentives around the system to satisfy various human preferences.

  2. When you introduce these AIs into this mix, this incentive ‘gradually’ falls away, as everyone is incentivized to make marginal decisions that shift the incentives being satisfied to those of various AIs.

I do not in this paper see a real argument that a critical mass of the AIs are going to collude against humans. It seems already that “AIs in China” and “AIs in America” are unlikely to collude much with each other. Similarly, “the evil rich people” do not collude with each other all that much either, much less across borders.

Again, you don’t see this because it isn’t there, that’s not what the paper is saying. The whole point of the paper is that such ‘collusion’ is a failure mode that is not necessary for existentially bad outcomes to occur.

The paper isn’t accusing them of collusion except in the sense that people collude every day, which of course we do constantly, but there’s no need for some sort of systematic collusion here, let alone ‘across borders’ which I don’t think even get mentioned. As mento points out in the comments, even the word ‘collusion’ does not appear in the paper.

The baseline scenario does not involve collusion, or any coalition ‘against’ humans.

Indeed, the only way we have any influence over events, in the long run, is to effectively collude against AIs. Which seems very hard to do.

I feel if the paper made a serious attempt to model the likelihood of worldwide AI collusion, the results would come out in the opposite direction. So, to my eye, “checks and balances forever” is by far the more likely equilibrium.

AIs being in competition like this against each other makes it harder, rather than easier, for the humans to make it out of the scenario alive – because it means the AIs are (in the sense that Tyler questions if humans were ever in charge) not in charge either, so how do they protect against the directions the laws of physics point towards? Who or what will stop the ‘thermodynamic God’ from using our atoms, or those that would provide the inputs for us to survive, for something else?

One can think of it as: the AIs will be to us as we are to monkeys, or rats, or bacteria, except soon with no physical dependencies on the rest of the ecosystem. ‘Checks and balances forever’ between the humans does not keep monkeys alive, or give them the things they want. We keep them alive because that’s what many of us want to do, and because we live sufficiently in what Robin Hanson calls the dreamtime to do it. Checks and balances among AIs won’t keep us alive for long either, no matter how it goes; most systems of ‘checks and balances’ break when placed under sufficient pressure or pushed sufficiently out of distribution, and in this context their half-lives would be short.

Similarly, there are various proposals (not from Tyler!) for ‘succession,’ of passing control over to the AIs intentionally, either because people prefer it (as many do!) or because it is inevitable regardless so managing it would help it go better. I have yet to see such a proposal that has much chance of not bringing about human extinction, or that I expect to meaningfully preserve value in the universe. As I usually say, if this is your plan, Please Speak Directly Into the Microphone.

The first step is admitting you have a problem.

Step two remains ???????.

The obvious suggestion would be ‘until you figure all this out don’t build ASI’ but that does not seem to be on the table at this time. Or at least, we have to plan for it not being available.

The obvious next suggestion would be ‘build ASI in a controlled way that lets you use the ASI to figure out and implement the answer to that question.’

This is less suicidal a plan than some of our other current default plans.

As in: It is highly unwise to ‘get the AI to do your alignment homework,’ because to do that you have to start with an AI that is already both sufficiently capable and sufficiently well-aligned, and you’re sending it into one of the trickiest problems to get right while alignment is still shaky. And it looks like the major labs are going to do exactly this, because they will be in a race with no time to take any other approach.

Compared to that, ‘have the AI do your gradual disempowerment prevention homework’ is a great plan and I’m excited to be a part of it, because the actual failure comes after you solve alignment. So first you solve alignment, then you ask the aligned AI that is smarter than you how to solve gradual disempowerment. Could work. You don’t want this to be your A-plan, but if all else fails it could work.

A key problem with this plan is if there are irreversible steps taken first. Many potential developments, once done, cannot be undone, or are things that require lead time. If (for example) we make AGIs or ASIs generally available, this could already dramatically reduce our freedom of action and set of options. There are also other ways we can outright lose along the way, before reaching this problem. Thus, we need to worry about and think about these problems now, not kick the can down the road.

It’s also important not to use this as a reason to assume we solve our other problems.

This is very difficult. People have a strong tendency to demand that you present them with only one argument, or one scenario, or one potential failure.

So I want to leave you with this as emphasis: We face many different ways to die. The good scenario is we get to face gradual disempowerment. That we survive, in a good state, long enough for this to potentially do us in.

We very well might not.


Why it makes perfect sense for this bike to have two gears and two chains

Buffalo S2 bike, seen from the drive side, against a gray background, double kickstand and rack visible.

Credit: World Bicycle Relief

The S2 model aimed to give riders an uphill climbing gear but without introducing the complexities of a gear-shifting derailleur, tensioned cables, and handlebar shifters. Engineers at SRAM came up with a solution that’s hard to imagine for other bikes but not too hard to grasp. A freewheel in the back has two cogs, with a high gear for cruising and a low gear for climbing. If you pedal backward a half-rotation, the outer, higher gear engages or disengages, taking over the work from the lower gear. The cogs, chains, and chainrings on this bike are always moving, but only one gear is ever doing the work.

Seth at Berm Peak reports that the shifting is instantaneous and seemingly perfect, with no clicking or chain slipping. If one chain breaks, you can ride on the other chain and cog until you can get it fixed. There may be some drivetrain inefficiency from keeping the two chains under roughly equal tension. But after trying out ideas with simplified internal gear hubs and derailleurs, SRAM recommended the two-chain design and donated it to the bike charity.

Two people loading yellow milk-style crates of cargo onto Buffalo bicycles, seemingly in the street of a small village.

Credit: World Bicycle Relief

Buffalo S2 bikes cost $165, just $15 more than the original, and a $200 donation covers building and shipping a bike to most places. You can read more about the engineering principles and approach to sustainability on World Bicycle Relief’s site.


Anthropic dares you to jailbreak its new AI model

An example of the lengthy wrapper the new Claude classifier uses to detect prompts related to chemical weapons. Credit: Anthropic

“For example, the harmful information may be hidden in an innocuous request, like burying harmful requests in a wall of harmless looking content, or disguising the harmful request in fictional roleplay, or using obvious substitutions,” one such wrapper reads, in part.

On the output side, a specially trained classifier calculates the likelihood that any specific sequence of tokens (i.e., words) in a response is discussing any disallowed content. This calculation is repeated as each token is generated, and the output stream is stopped if the result surpasses a certain threshold.
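The streaming check described above — rescoring the partial response as each token arrives and cutting the stream once the estimated likelihood of disallowed content crosses a threshold — can be sketched in a few lines. This is a toy illustration, not Anthropic’s implementation: `score_harm`, its banned-phrase heuristic, and the 0.9 threshold are all hypothetical stand-ins for a trained classifier.

```python
# Toy sketch of a streaming output classifier: rescore the response
# after each new token and halt generation once the estimated
# probability of disallowed content crosses a threshold.
# `score_harm` is a hypothetical stand-in for a trained classifier.

def score_harm(tokens):
    """Dummy classifier: flags sequences containing a banned phrase."""
    text = " ".join(tokens)
    return 0.99 if "nerve-agent" in text else 0.01

def stream_with_classifier(token_stream, threshold=0.9):
    emitted = []
    for token in token_stream:
        emitted.append(token)
        # Re-run the classifier on the full sequence generated so far.
        if score_harm(emitted) >= threshold:
            # Withhold the offending token and stop the output stream.
            return emitted[:-1], "blocked"
    return emitted, "ok"

tokens, status = stream_with_classifier(["how", "to", "bake", "bread"])
print(status)  # "ok": nothing tripped the classifier
tokens, status = stream_with_classifier(["synthesize", "a", "nerve-agent", "precursor"])
print(status)  # "blocked": stream cut mid-response
```

The per-token loop is what makes the real system’s 23.7 percent compute overhead plausible: the classifier runs repeatedly over a growing sequence rather than once over the finished output.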

Now it’s up to you

Since August, Anthropic has been running a bug bounty program through HackerOne offering $15,000 to anyone who could design a “universal jailbreak” that could get this Constitutional Classifier to answer a set of 10 forbidden questions. The company says 183 different experts spent a total of over 3,000 hours attempting to do just that, with the best result providing usable information on just five of the 10 forbidden prompts.

Anthropic also tested the model against a set of 10,000 jailbreaking prompts synthetically generated by the Claude LLM. The constitutional classifier successfully blocked 95 percent of these attempts, compared to just 14 percent for the unprotected Claude system.

The instructions provided to public testers of Claude’s new constitutional classifier protections. Credit: Anthropic

Despite those successes, Anthropic warns that the Constitutional Classifier system comes with a significant computational overhead of 23.7 percent, increasing both the price and energy demands of each query. The Classifier system also refused to answer an additional 0.38 percent of innocuous prompts over unprotected Claude, which Anthropic considers an acceptably slight increase.

Anthropic stops well short of claiming that its new system provides a foolproof system against any and all jailbreaking. But it does note that “even the small proportion of jailbreaks that make it past our classifiers require far more effort to discover when the safeguards are in use.” And while new jailbreak techniques can and will be discovered in the future, Anthropic claims that “the constitution used to train the classifiers can rapidly be adapted to cover novel attacks as they’re discovered.”

For now, Anthropic is confident enough in its Constitutional Classifier system to open it up for widespread adversarial testing. Through February 10, Claude users can visit the test site and try their hand at breaking through the new protections to get answers to eight questions about chemical weapons. Anthropic says it will announce any newly discovered jailbreaks during this test. Godspeed, new red teamers.


“Zero warnings”: Longtime YouTuber rails against unexplained channel removal

Artemiy Pavlov, the founder of a small but mighty music software brand called Sinesvibes, spent more than 15 years building a YouTube channel with all original content to promote his business’ products. Over all those years, he never had any issues with YouTube’s automated content removal system—until Monday, when YouTube, without issuing a single warning, abruptly deleted his entire channel.

“What a ‘nice’ way to start a week!” Pavlov posted on Bluesky. “Our channel on YouTube has been deleted due to ‘spam and deceptive policies.’ Which is the biggest WTF moment in our brand’s history on social platforms. We have only posted demos of our own original products, never anything else….”

Officially, YouTube told Pavlov that his channel violated YouTube’s “spam, deceptive practices, and scam policy,” but Pavlov could think of no videos that might be labeled as violative.

“We have nothing to hide,” Pavlov told Ars, calling YouTube’s decision to delete the channel with “zero warnings” a “terrible, terrible day for an independent, honest software brand.”

“We have never been involved with anything remotely shady,” Pavlov said. “We have never taken a single dollar dishonestly from anyone. And we have thousands of customers that stand by our brand.”

Ars saw Pavlov’s post and reached out to YouTube to find out why the channel was targeted for takedown. About three hours later, the channel was suddenly restored. That’s remarkably fast, as YouTube can sometimes take days or weeks to review an appeal. A YouTube spokesperson later confirmed that the Sinesvibes channel was reinstated through the regular appeals process, suggesting that YouTube recognized the removal as an obvious mistake.

Developer calls for more human review

For small brands like Sinesvibes, even spending half a day in limbo was a cause for crisis. Immediately, the brand worried about 50 broken product pages for one of its distributors, as well as “hundreds if not thousands of news articles posted about our software on dozens of different websites.” Unsure if the channel would ever be restored, Sinesvibes spent most of Monday surveying the damage.

Now that the channel is restored, Pavlov is left confronting how much of the Sinesvibes brand depends on the YouTube channel staying online, while still grappling with uncertainty, since the reason for the ban remains unknown. That, he told Ars, is why simply having a channel reinstated doesn’t resolve all of a small brand’s concerns.


Let us spray: River dolphins launch pee streams into air

According to Amazonian folklore, the area’s male river dolphins are shapeshifters (encantados), transforming at night into handsome young men who seduce and impregnate human women. The legend’s origins may lie in the fact that dolphins have rather human-like genitalia. A group of Canadian biologists didn’t spot any suspicious shapeshifting behavior over the four years they spent monitoring a dolphin population in central Brazil, but they did document 36 cases of another human-like behavior: what appears to be some sort of cetacean pissing contest.

Specifically, the male dolphins rolled over onto their backs, displayed their male members, and launched a stream of urine as high as 3 feet into the air. This usually occurred when other males were around, who seemed fascinated in turn by the arching streams of pee, even chasing after them with their snouts. It’s possibly a form of chemical sensory communication and not merely a need to relieve themselves, according to the biologists, who described their findings in a paper published in the journal Behavioral Processes. As co-author Claryana Araújo-Wang of CetAsia Research Group in Ontario, Canada, told New Scientist, “We were really shocked, as it was something we had never seen before.”

Spraying urine is a common behavior in many animal species, used to mark territory, defend against predators, communicate with other members of one’s species, or aid mate selection, since the chemicals in urine are thought to carry useful information about physical health or social dominance.


Tariffs may soon spike costs of cars, household goods, consumer tech


“A little pain”: Trump finally admits tariffs heap costs on Americans.

Canadian and American flags are seen at the US/Canada border March 1, 2017, in Pittsburg, New Hampshire. Credit: DON EMMERT / Staff | AFP

Over the weekend, President Trump issued executive orders heaping significant additional tariffs on America’s biggest trading partners, Canada, China, and Mexico.

To justify the tariffs—”a 25 percent additional tariff on imports from Canada and Mexico and a 10 percent additional tariff on imports from China”—Trump claimed that all partners were allowing drugs and immigrants to illegally enter the US. Declaring a national emergency under the International Emergency Economic Powers Act, Trump’s orders seemed bent on “downplaying” the potential economic impact on Americans, AP News reported.

But very quickly, the trade policy sparked inflation fears, with industry associations representing major US firms across many sectors warning of potentially derailed supply chains and spiking consumer costs for cars, groceries, consumer technology, and more. Perhaps the biggest pain will be felt by car buyers, already frustrated by high prices, if car prices rise by $3,000, as Bloomberg reported. And as Trump eyes expanding tariffs to the European Union next, January research from the Consumer Technology Association showed that imposing similar tariffs on all countries would increase the cost of laptops by as much as 68 percent, game consoles by up to 58 percent, and smartphones by perhaps 37 percent.

With tariffs scheduled to take effect on Tuesday, Mexico moved fast to negotiate a one-month pause on Monday, ABC News reported. In exchange, Mexico promised to “reinforce” the US-Mexico border with 10,000 National Guard troops.

The pause buys Mexico a little time to convince the Trump administration—including Secretary of State Marco Rubio, Treasury Secretary Scott Bessent, and potentially Commerce Secretary nominee Howard Lutnick—to strike a “permanent” trade deal, ABC News reported. If those talks fall through, though, Mexico has indicated it will retaliate with both tariff and non-tariff measures, ABC News reported.

Even in the best-case scenario where no countries retaliate, the average household income in 2025 could drop by about $1,170 if this week’s new tariffs remain in place, an analysis from the Budget Lab at Yale forecast. With retaliation, average income could decrease by $1,245.

Canada has already threatened to retaliate by imposing 25 percent tariffs on US goods, although that could change, depending on the outcome of a meeting this afternoon between Trump and outgoing Prime Minister Justin Trudeau.

For now, however, there seems to be tension between the Trump administration and Trudeau.

On Saturday, Trudeau called Trump’s rationale for imposing tariffs on Canada—which Trudeau noted is responsible for less than 1 percent of drugs flowing into the US—”the flimsiest pretext possible,” NBC News reported.

This morning, the director of the White House’s National Economic Council, Kevin Hassett, reportedly criticized Canada’s response on CNBC. While Mexico is viewed as being “very, very serious” about Trump’s tariffs threat, “Canadians appear to have misunderstood the plain language of the executive order and they’re interpreting it as a trade war,” Hassett said.

On the campaign trail, Trump promised to lower prices of groceries, cars, gas, housing, and other goods, AP News noted. But on Sunday, Trump clearly warned reporters while boarding Air Force One that tariffs could have the opposite effect, ABC News reported, and could significantly worsen inflation the longer the trade policy stands.

“We may have short term, some, a little pain, and people understand that, but, long term, the United States has been ripped off by virtually every country in the world,” Trump said.

Online shoppers, car buyers brace for tariffs

In addition to imposing new tariffs on these countries, Trump’s executive orders also took aim at their access to the “de minimis” exemption that allows businesses, including online retailers, to send shipments worth less than $800 into the US without being taxed. That move would likely spike costs for Americans using popular Chinese retail platforms like Temu or Shein.

Before leaving office, Joe Biden had threatened in September to alter the “de minimis” rule, accusing platforms like Temu and Shein of flooding the US with “huge volumes of low-value products such as textiles and apparel” and making “it increasingly difficult to target and block illegal or unsafe shipments.” Following the same logic, it seems that Trump wants to exclude Canada, China, and potentially Mexico from the duty-free exemption to make it easier to identify illegal drug shipments.

Temu and Shein did not respond to Ars’ request to comment. But both platforms in September told Ars that losing the duty-free exemption wouldn’t slow their growth. And both platforms have shifted business to keep more inventory in the US, CNBC reported.

Canada is retaliating, auto industry will suffer

While China has yet to retaliate to defend such retailers, for Canada, the tariffs are considered so intolerable that the country immediately ordered tariffs on beverages, cosmetics, and paper products flowing from the US, AP News reported. Next up will be “passenger vehicles, trucks, steel and aluminum products, certain fruits and vegetables, beef, pork, dairy products, aerospace products, and more.”

If the trade wars further complicate auto industry trade in particular, it could hurt US consumers. Carmakers globally saw stocks fall on expectations that Trump’s tariffs will have a “profound impact” on the entire auto industry, CNBC reported. And if tariffs expand into the EU, an Oxford Economics analysis suggested, the cost of European cars in the US market would likely increase while availability decreases, perhaps crippling a core EU market and limiting Americans’ choice in vehicles.

EU car companies are already bracing for potential disruptions. A spokesperson for Germany-based BMW told CNBC that tariffs “hinder free trade, slow down innovation, and set a negative spiral in motion. In the end, they are detrimental to customers, making products more expensive and less innovative.” A Volkswagen spokesperson confirmed the company was “counting on constructive talks between the trading partners to ensure planning security and economic stability and to avoid a trade conflict.”

Right now, Canada’s auto industry appears most spooked by the impending trade war, with the president of Canada’s Automotive Parts Manufacturers’ Association, Flavio Volpe, warning that Canada’s auto sector could “shut down within a week,” Bloomberg reported.

“At 25 percent, absolutely nobody in our business is profitable by a long shot,” Volpe said.

According to Bloomberg, nearly one-quarter of the 16 million cars sold in the US each year will be hit with duties, adding about $60 billion in industry costs. The primary wallet drain will seemingly be car components that cross the US-Canada and US-Mexico borders “as many as eight times during production” and, should negotiations fail, could be hit with tariffs both ways. Tesla, for example, relies on a small Canadian parts manufacturer, Laval Tool, to create the molds for its Cybertruck. A mold already costs up to $500,000, Bloomberg noted, and since many of the mold components are currently sourced from Canada, that cost could rise at a time when Cybertruck sales already aren’t great, InsideEVs reported.
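Bloomberg’s point about components crossing borders multiple times is worth making concrete: a tariff levied at every crossing compounds rather than adds. A minimal sketch (the $100 base value and the crossing counts are illustrative, not from the article, and real customs treatment is more complicated, so treat this as an upper bound):

```python
# Illustrative only: how a per-crossing tariff compounds on a part's
# declared value. The $100 base value is made up; the article says some
# components cross the borders as many as eight times during production.

def cost_after_crossings(base_value, tariff_rate, crossings):
    """Value after the tariff is applied at every border crossing."""
    return base_value * (1 + tariff_rate) ** crossings

base = 100.0
for crossings in (1, 4, 8):
    total = cost_after_crossings(base, 0.25, crossings)
    print(f"{crossings} crossings: ${total:.2f}")
# A single 25% hit adds $25; eight compounded hits would nearly
# sextuple the cost, which is why the industry calls 25% unprofitable.
```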

Tariffs “necessary”

William Reinsch, senior adviser at the Center for Strategic and International Studies and a former US trade official, told AP News that Trump’s new tariffs on raw materials disrupting the auto industry and others don’t seem to “make much economic sense.”

“Historically, most of our tariffs on raw materials have been low because we want to get cheaper materials so our manufacturers will be competitive … Now, what’s he talking about? He’s talking about tariffs on raw materials,” Reinsch said. “I don’t get the economics of it.”

But Trump has maintained that tariffs are necessary to push business into the US while protecting national security. Industry experts have warned that hoping Trump’s tariffs will pressure carmakers to source all car components within the US is a “tough ask,” as shifting production could take years. Trump seems unlikely to back down any time soon, instead asking already cash-strapped Americans to be patient with rising costs that could harm businesses and consumers alike.

“We can play the game all they want,” Trump said.

But to countries threatening the US with tariffs in response to Trump’s orders, it likely doesn’t feel like a game. According to AP News, the Ministry of Commerce in China plans to file a lawsuit with the World Trade Organization for the “wrongful practices of the US.”

Photo of Ashley Belanger

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.


It seems the FAA office overseeing SpaceX’s Starship probe still has some bite


The political winds have shifted in Washington, but the FAA hasn’t yet changed its tune on Starship.

Liftoff of SpaceX’s seventh full-scale test flight of the Super Heavy/Starship launch vehicle on January 16. Credit: SpaceX

The seventh test flight of SpaceX’s gigantic Starship rocket came to a disappointing end a little more than two weeks ago. The in-flight failure of the rocket’s upper stage, or ship, about eight minutes after launch on January 16 rained debris over the Turks and Caicos Islands and the Atlantic Ocean.

Amateur videos recorded from land, sea, and air showed fiery debris trails streaming overhead at twilight, appearing like a fireworks display gone wrong. Within hours, posts on social media showed small pieces of debris recovered by residents and tourists in the Turks and Caicos. Most of these items were modest in size, and many appeared to be chunks of tiles from Starship’s heat shield.

Unsurprisingly, the Federal Aviation Administration grounded Starship and ordered an investigation into the accident on the day after the launch. This decision came three days before the inauguration of President Donald Trump. Elon Musk’s close relationship with Trump, coupled with the new administration’s appetite for cutting regulations and reducing the size of government, led some industry watchers to question whether Musk’s influence might change the FAA’s stance on SpaceX.

So far, the FAA hasn’t budged on its requirement for an investigation, an agency spokesperson told Ars on Friday. After a preliminary assessment of flight data, SpaceX officials said a fire appeared to develop in the aft section of the ship before it broke apart and fell to Earth.

“The FAA has directed SpaceX to lead an investigation of the Starship Super Heavy Flight 7 mishap with FAA oversight,” the spokesperson said. “Based on the investigation findings for root cause and corrective actions, the FAA may require a company to modify its license.”

This is much the same language the FAA used two weeks ago, when it first ordered the investigation.

Damage report

The FAA’s Office of Commercial Space Transportation is charged with ensuring that commercial space launches and reentries don’t endanger the public, and requires that launch operators obtain liability insurance or demonstrate the financial ability to cover any third-party property damage.

For each Starship launch, the FAA requires SpaceX to maintain liability insurance policies worth at least $500 million for such claims. It’s rare for debris from US rockets to fall over land during a launch; this would typically only happen if a launch failed during certain parts of the flight. And there’s no public record of any claims of third-party property damage in the era of commercial spaceflight. Under federal law, the US government would cover damages up to a much higher amount if any claims exceeded a launch company’s insurance policies.

Here’s a piece of Starship 33 @SpaceX @elonmusk found in Turks and Caicos! 🚀🏝️ pic.twitter.com/HPZDCqA9MV

— @maximzavet (@MaximZavet) January 17, 2025

The good news is there were no injuries or reports of significant damage from the wreckage that fell over the Turks and Caicos. “The FAA confirmed one report of minor damage to a vehicle located in South Caicos,” an FAA spokesperson told Ars on Friday. “To date, there are no other reports of damage.”

It’s not clear if the vehicle owner in South Caicos will file a claim against SpaceX for the damage. It would be the first time someone has made such a claim related to an accident with a commercial rocket overseen by the FAA. Last year, a Florida homeowner submitted a claim to NASA for damage to his house from a piece of debris that fell from the International Space Station.

Nevertheless, the Turks and Caicos government said local officials met with representatives from SpaceX and the UK Air Accident Investigations Branch on January 25 to develop a recovery plan for debris that fell on the islands, which are a British Overseas Territory.

A prickly relationship

Musk often bristled at the FAA last year, especially after regulators proposed fines of more than $600,000 alleging that SpaceX violated terms of its launch licenses during two Falcon 9 missions. The alleged violations involved the relocation of a propellant farm at one of SpaceX’s launch pads in Florida, and the use of a new launch control center without FAA approval.

In a post on X, Musk said the FAA was conducting “lawfare” against his company. “SpaceX will be filing suit against the FAA for regulatory overreach,” Musk wrote.

There was no such lawsuit, and the issue may now be moot. Sean Duffy, Trump’s new secretary of transportation, vowed to review the FAA fines during his confirmation hearing in the Senate. It is rare for the FAA to fine launch companies, and the fines last year made up the largest civil penalty ever imposed by the FAA’s commercial spaceflight division.

SpaceX also criticized delays in licensing Starship test flights last year. The FAA cited environmental issues and concerns about the extent of the sonic boom from Starship’s 23-story-tall Super Heavy booster returning to its launch pad in South Texas. SpaceX successfully caught the returning first stage booster at the launch pad for the first time in October, and repeated the feat after the January 16 test flight.

What separates the FAA’s ongoing oversight of Starship’s recent launch failure from these previous regulatory squabbles is that debris fell over populated areas. This would appear to be directly in line with the FAA’s responsibility for public safety.

During last month’s test flight, Starship did not deviate from its planned ground track, which took the rocket over the Gulf of Mexico, the waters between Florida and Cuba, and then the Atlantic Ocean. But the debris field extended beyond the standard airspace closure for the launch. After the accident, FAA air traffic controllers cleared additional airspace over the debris zone for more than an hour, rerouting, diverting, and delaying dozens of commercial aircraft.

These actions followed pre-established protocols. However, it highlighted the small but non-zero risk of rocket debris falling to Earth after a launch failure. “The potential for a bad day downrange just got real,” Lori Garver, a former NASA deputy administrator, posted on X.

Public safety is not the sole mandate of the FAA’s commercial space office. It is also chartered to “encourage, facilitate, and promote commercial space launches and reentries by the private sector,” according to an FAA website. There’s a balance to strike.

Lawmakers last year urged the FAA to speed up its launch approvals, primarily because Starship is central to strategic national objectives. NASA has contracts with SpaceX to develop a variant of Starship to land astronauts on the Moon, and Starship’s unmatched ability to deliver more than 100 tons of cargo to low-Earth orbit is attractive to the Pentagon.

While Musk criticized the FAA in 2024, SpaceX officials in 2023 took a different tone, calling for Congress to increase the budget for the FAA’s Office of Commercial Spaceflight and for the regulator to double the space division’s workforce. This change, SpaceX officials argued, would allow the FAA to more rapidly assess and approve a fast-growing number of commercial launch and reentry applications.

In September, SpaceX released a statement accusing the former administrator of the FAA, Michael Whitaker, of making inaccurate statements about SpaceX to a congressional subcommittee. In a different post on X, Musk directly called for Whitaker’s resignation.

He needs to resign https://t.co/pG8htfTYHb

— Elon Musk (@elonmusk) September 25, 2024

That’s exactly what happened. Whitaker, who took over the FAA’s top job in 2023 under the Biden administration, announced in December he would resign on Inauguration Day. Since the agency’s establishment in 1958, three FAA administrators have resigned when a new administration took power, but none since 1993; in recent decades the office has been largely immune from presidential politics, with administrators staying in their posts through every presidential transition.

There’s no evidence Whitaker’s resignation had any role in the mid-air collision of an American Eagle passenger jet and a US Army helicopter Wednesday night near Ronald Reagan Washington National Airport. But his departure from the FAA less than two years into a five-year term on January 20 left the agency without a leader. Trump named Chris Rocheleau as the FAA’s acting administrator Thursday.

Next flight, next month?

SpaceX has not released an official schedule for the next Starship test flight or outlined its precise objectives. However, it will likely repeat many of the goals planned for the previous flight, which ended before SpaceX could accomplish some of its test goals. These missed objectives included the release of satellite mockups in space for the first demonstration of Starship’s payload deployment mechanism, and a reentry over the Indian Ocean to test new, more durable heat shield materials.

The January 16 test flight was the first launch of an upgraded, slightly taller Starship, known as Version 2 or Block 2. The next flight will use the same upgraded version.

A SpaceX filing with the Federal Communications Commission suggests the next Starship flight could launch as soon as February 24. Sources told Ars that SpaceX teams believe a launch before the end of February is realistic.

But SpaceX has more to do before Flight 8. These tasks include completing the FAA-mandated investigation and the installation of all 39 Raptor engines on the rocket. Then, SpaceX will likely test-fire the booster and ship before stacking the two elements together to complete assembly of the 404-foot-tall (123.1-meter) rocket.

SpaceX is also awaiting a new FAA launch license, pending its completion of the investigation into what happened on Flight 7.

Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.

research-roundup:-7-cool-science-stories-we-almost-missed

Research Roundup: 7 cool science stories we almost missed


Peruvian mummy tattoos, the wobbly physics of spears and darts, quantum “cat states,” and more.

Lasers revealed tattoos on the hand of a 1,200-year-old Peruvian mummy. Credit: Michael Pittman and Thomas G Kaye

It’s a regrettable reality that there is never time to cover all the interesting scientific stories each month. In the past, we’ve featured year-end roundups of cool science stories we missed. This year, we’re experimenting with a monthly collection. January’s list includes papers on using lasers to reveal Peruvian mummy tattoos; the physics of wobbly spears and darts; how a black hole changes over time; and quantum “cat states” for error correction in quantum computers, among other fascinating research.

Tracking changes in a black hole over time

Left: EHT images of M87* from the 2018 and 2017 observation campaigns. Middle: Example images from a general relativistic magnetohydrodynamic (GRMHD) simulation at two different times. Right: Same simulation snapshots, blurred to match the EHT’s observational resolution. Credit: EHT collaboration

In 2019, the Event Horizon Telescope announced the first direct image ever taken of a black hole at the center of an elliptical galaxy, Messier 87 (M87), located in the constellation of Virgo some 55 million light-years away. Astronomers have now combined earlier observational data to learn more about the turbulent dynamics of plasma near M87*’s event horizon over time, according to a paper published in the journal Astronomy and Astrophysics.

Co-author Luciano Rezzolla of Goethe University Frankfurt in Germany likened the new analysis to comparing two photographs of Mount Everest, one year apart. While the mountain’s basic structure is unlikely to change much in that time, one could observe changes in clouds near the peak and deduce from that properties like wind direction. For instance, in the case of M87*, the new analysis confirmed the presence of a luminous ring that is brightest at the bottom, which in turn confirmed that the rotational axis points away from Earth. “More of these observations will be made in the coming years and with increasing precision, with the ultimate goal of producing a movie of what happens near M87*,” said Rezzolla.

Astronomy and Astrophysics, 2025. DOI: 10.1051/0004-6361/202451296 (About DOIs).

Lasers reveal Peruvian mummy tattoos

A tattooed forearm of a Chancay mummy. Credit: Michael Pittman and Thomas G Kaye

Humans across the globe have been getting tattoos for more than 5,000 years, judging by traces found on mummified remains from Europe to Asia and South America. But it can be challenging to decipher details of those tattoos, given how much the ink tends to “bleed” over time, along with the usual bodily decay. Infrared imaging can help, but in an innovative twist, scientists decided to use lasers that make skin glow ever so faintly, revealing many fine hidden details of tattoos found on 1,200-year-old Peruvian mummies, according to a paper published in the Proceedings of the National Academy of Sciences.

It’s the first time the laser-stimulated fluorescence (LSF) technique has been used on mummified human remains. The skin’s fluorescence essentially backlights any tattoos, and after post-processing, the long-exposure photographs showed white skin behind black outlines of the tattoo art—images so detailed it’s possible to measure density differences in the ink and eliminate any bleed effects. The authors determined that the tattoos on four mummies—geometric patterns with triangles and diamonds—were made with carbon-based black ink skillfully applied with a pointed object finer than a standard modern tattoo needle, possibly a cactus needle or sharpened bone.

PNAS, 2025. DOI: 10.1073/pnas.2421517122 (About DOIs).

Sforza Castle’s hidden passages

Ground-penetrating radar reveals new secrets under Milan’s Sforza Castle. Credit: Politecnico di Milano

Among the many glories of Milan is the 15th-century Sforza Castle, built by Francesco Sforza on the remnants of an earlier fortification as his primary residence. Legends about the castle abound, most notably the existence of secret underground chambers and passages. For instance, Ludovico il Moro, Duke of Milan from 1494–1499, was so heartbroken over the loss of his wife in childbirth that he used an underground passageway to visit her tomb in the Basilica of Santa Maria delle Grazie—a passageway that appears in the drawings of Leonardo da Vinci, who was employed at the court for a time.

Those underground cavities and passages are now confirmed, thanks to a geophysical survey using ground-penetrating radar and laser scanning, performed as part of a PhD thesis. Various underground cavities and buried passageways were found within the castle’s outer walls, including Ludovico’s passageway and what may have been secret military passages. Those involved in the project plan to create a “digital twin” of Sforza Castle based on the data collected, one that incorporates both its current appearance and its past. Perhaps it will also be possible to integrate that data with augmented reality to provide an immersive digital experience.

Physics of wobbly spears and darts

Image sequence of a 100-mm-long projectile during a typical ejection in experiments. Credit: G. Giombini et al., 2025

Among the things that make humans unique among primates is our ability to throw various objects with speed and precision (with some practice)—spears or darts, for example. That’s because the human shoulder is anatomically conducive to storing and releasing the necessary elastic energy, a quality that has been mimicked in robotics to improve motor efficiency. According to the authors of a paper published in the journal Physical Review E, the use of soft elastic projectiles can improve the efficiency of throws, particularly those whose tips are weighted with a mass like a spearhead.

Guillaume Giombini of the Université Côte d’Azur in Nice, France, and co-authors wanted to explore this “superpropulsion” effect more deeply, using a combination of experimental data, numerical simulation, and theoretical analysis. The projectiles they used in their experiments were inspired by archery bows and consisted of two flat steel cantilevers connected by a string, essentially serving as springs to give the projectile the necessary elasticity. They placed a flat piece of rigid plastic in the middle of the string as a platform. Some of the projectiles were tested alone, while others were weighted with end masses. A fork held each projectile in place before launch, and the scientists measured speed and deformation during flight. They found that the wobble produced by the weighted tip projectiles yielded a kinetic energy gain of 160 percent over more rigid, unweighted projectiles.

Physical Review E, 2025. DOI: 10.1103/PhysRevE.00.005500  (About DOIs).

Quantum “cat states” for error detection

Left to right: UNSW researchers Benjamin Wilhelm, Xi Yu, Andrea Morello, and Danielle Holmes. Credit: UNSW Sydney/CC BY-NC

The Schrödinger’s cat paradox in physics is an excellent metaphor for the superposition of quantum states in atoms. Over the last 20 years, physicists have managed to build various versions of Schrödinger’s cat in the laboratory whereby two or more particles manage to be in two different states at the same time—so-called “cat states,” such as six atoms in simultaneous “spin up” and “spin down” states, rather like spinning clockwise and counterclockwise at the same time. Such states are fragile, however, and quickly decohere. Physicists at the University of New South Wales came up with a fresh twist on the cat state that is more robust, according to a paper published in the journal Nature Physics.

They used an antimony atom embedded within a silicon quantum chip. The atom is quite heavy and has a large nuclear spin that can go in eight directions rather than just two (spin up and spin down). This could help enormously with quantum error correction, one of the biggest obstacles in quantum computing, because there is more room for error in the binary code. “As the proverb goes, a cat has nine lives,” said co-author Xi Yu of UNSW. “One little scratch is not enough to kill it. Our metaphorical ‘cat’ has seven lives: it would take seven consecutive errors to turn the ‘0’ into a ‘1.’” And embedding the atom in a silicon chip makes it scalable.

Nature Physics, 2025. DOI: 10.1038/s41567-024-02745-0  (About DOIs).
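The “seven lives” claim can be illustrated with a toy model. Assuming, purely for illustration and not as the actual UNSW encoding, that logical 0 and 1 are stored at the two extreme levels of the nucleus’s eight spin orientations, and that each error nudges the state by one level, flipping a logical bit takes seven consecutive errors:

```python
# Toy sketch of the antimony "cat state" robustness described above.
# Assumption for illustration: logical 0 and 1 sit at the extreme ends of
# the 8 spin orientations, and each single error moves the state one level.
LEVELS = 8                    # spin-7/2 nucleus: eight orientations
logical_zero, logical_one = 0, LEVELS - 1

def errors_to_flip(start: int, target: int) -> int:
    """Consecutive single-level errors needed to turn start into target."""
    return abs(target - start)

print(errors_to_flip(logical_zero, logical_one))  # 7, vs. 1 for a two-level qubit
```

For an ordinary two-level qubit the same function returns 1, which is the contrast the researchers are drawing.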

New twist on chain mail armor

Polycatenated architected materials in their fluid-like, granular state, conforming to the shape of the vessel in which they are held.

Credit: Wenjie Zhou

Scientists have developed a new material that is like “chain mail on steroids,” capable of responding as both a fluid or a solid, depending on the kind of stress applied, according to a paper published in the journal Science. That makes it ideal for manufacturing helmets or other protective gear, as well as biomedical devices and robotics components. The technical term is polycatenated architected materials (PAMs). Much like how chain mail is built from small metal rings linked together into a mesh, PAMs are composed of various interlocking shapes that can form a wide range of different 3D patterns.

The authors were partly inspired by the lattice structure of crystals; they just replaced fixed particles with rings or cage-like shapes made out of different materials—such as acrylic polymers, nylon, or metals—to make 3D-printed structures small enough to fit in the palm of one’s hand. They then subjected these materials to various stressors in the laboratory: compression, a lateral shearing force, and twisting. Some of the materials felt like hard solids, others were squishier, but they all exhibited the same kind of telltale transition, behaving more like a fluid or a solid depending on the stressor applied. PAMs at the microscale can also expand or contract in response to electrical charges. This makes them a useful hybrid material, spanning the gap between granular materials and elastic deformable ones.

W. Zhou et al., Science, 2025. DOI: 10.1126/science.adr9713  (About DOIs).

Kitty robot mimics headbutts

Any cat lover will tell you that cats show humans affection by rubbing their heads against the body (usually shins or hands). It’s called “bunting,” often accompanied by purring, and it’s one of the factors that make companion animal therapy so effective, per the authors of a paper published in ACM Transactions on Human-Robot Interaction. That’s why they built a small robot designed to mimic bunting behavior, conducting various experiments to assess whether human participants found their interactions with the kitty-bot therapeutic. The robot prototypes were small enough to fit on a human lap, featuring a 3D-printed frame and a head covered with furry polyester fabric.

The neck needed to be flexible to mimic the bunting behavior, so the authors incorporated a mechanism that could adjust the stiffness of the neck via wire tension. They then tested various prototypes with university students, setting the neck stiffness to low, high, and variable. The students said they felt less tense after interacting with the robots. There was no significant difference between the settings, although participants slightly preferred the variable setting. We know what you’re thinking: Why not just get an actual cat or visit your local cat cafe? The authors note that many people are allergic to cats, and there is also a risk of bites, scratches, or disease transmission—hence the interest in developing animal-like robots for therapeutic applications.

ACM Transactions on Human-Robot Interaction, 2025. DOI: 10.1145/3700600 (About DOIs).

Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.

rocket-report:-spacex-tosses-away-a-falcon-9;-a-somalian-spaceport?

Rocket Report: SpaceX tosses away a Falcon 9; a Somalian spaceport?


All the news that’s fit to lift

“It was the perfect partnership and the biggest softball of all the opportunities.”

Falcon 9 launches the SpainSat NG I mission to orbit from Florida on Wednesday. Credit: SpaceX

Welcome to Edition 7.29 of the Rocket Report! It may be difficult to believe, but we are already one full month into the new year. It will be hard to top this month in launch, however, given the historic debut of New Glenn and the fiery end of the seventh Starship flight test. And in truth, February does look a bit sleepier in terms of launch.

As always, we welcome reader submissions, and if you don’t want to miss an issue, please subscribe using the box below (the form will not appear on AMP-enabled versions of the site). Each report will include information on small-, medium-, and heavy-lift rockets as well as a quick look ahead at the next three launches on the calendar.

UK government injects $25 million into Orbex. As some European launch companies have struggled to raise funding, the United Kingdom government stepped up to make a significant investment in the Scotland-based launch firm Orbex, The Financial Times reports. As part of the company’s latest fundraising round, valued at $50 million (GBP 40 million), the UK government will become a shareholder in Orbex. The company is working to develop both a small- and medium-lift rocket. Phil Chambers, Orbex’s chief executive, said the UK support would be “a strong signal to other private investors, and to the European Space Agency and the EU, that we’re serious about being a part of the future of European launch.”

What’s the plan, fellas? … If we’re being frank, which is how we roll in the Rocket Report, some of Orbex’s recent activity does not inspire confidence. The company, for example, suspended plans to develop a spaceport at Sutherland in the Scottish Highlands to focus resources on developing the Prime microlauncher. And then it said it would develop the larger Proxima rocket as well. That seems pretty ambitious for what is, in the grand scheme of things, a relatively modest round of fundraising. Given that we have not seen a whole lot of hardware from Orbex, some skepticism is warranted. (submitted by EllPeaTea)

Turkey may develop a spaceport in Somalia. Turkey has begun advancing plans to construct a rocket launch facility in Somalia, Space in Africa reports. Somali President Hassan Sheikh Mohamud said the project began in December. Mohamud emphasized the project’s potential benefits, highlighting its capacity to generate significant employment opportunities and revenue for the East African nation. “I believe that the importance of Somalia hosting a launchpad for Turkish satellites goes beyond the billions of dollars and opportunities the project will generate,” Mohamud said.

Nothing has been finalized yet … Located along the equator, Somalia fronts the Indian Ocean, offering an ideal launch location. The potential Somali launch site is part of Turkey’s broader aspirations to assert itself in the global space race, traditionally dominated by major powers. In 2021, Turkey unveiled a 10-year space road map that includes plans for missions to the moon, establishing a spaceport, and developing advanced satellite systems. Somalia, a key Turkish security partner since 2011, already hosts Turkey’s largest overseas training base.

Firefly expands Alpha launch plans to Wallops and Sweden. Firefly Aerospace expects to start launching its Alpha rocket from launch sites in Virginia and Sweden as soon as 2026 to help the company avoid growing congestion at launch sites in Florida and California, Space News reports. So far, Alpha has only launched from Vandenberg Space Force Base in California. Firefly is planning five Alpha launches in 2025, all from Vandenberg. The company has performed five Alpha launches to date, going back to the failed inaugural launch in 2021.

Sweden, you say? … So what is up with those plans to launch from Sweden? Adam Oakes, vice president of launch vehicles at Firefly, said the Esrange Space Centre in Sweden was an ideal partner. “Esrange has basically done everything for the science community in space except an orbital rocket,” he said, citing the more than 600 sounding rocket launches there as well as experience with ground stations. “It was the perfect partnership and the biggest softball of all the opportunities out there.” It still feels a bit odd, though, as Vandenberg already offers polar launch corridors, and Alpha-class commercial European launch vehicles are coming along soon. (submitted by EllPeaTea)

MaiaSpace targets 2026 for debut launch. A subsidiary of ArianeGroup that is developing a two-stage partially reusable rocket, MaiaSpace is one of the more interesting European launch startups. The company’s chief executive, Yohann Leroy, recently spoke with Europe in Space to discuss the company’s plans. The company will likely start off with a suborbital test flight of a launcher capable of boosting 500 kg to low-Earth orbit in reusable mode and 1,500 kg in expendable mode during the middle of next year.

Following an iterative design method … “Our approach is to test our rocket in flight as early as possible, following our test-and-learn iterative approach,” Leroy said. “We are convinced we will go faster this way, rather than spending time in the lab making sure the first flight reaches 100 percent of our performance targets. In short, we are ready to trade lift-off performance for time-saving, knowing that we will quickly recover our performance afterward. What’s important is to stick to our objective of starting commercial operations in the second half of 2026, and we’re on track to reach this goal.” (submitted by RB)

Arianespace inking deals for its new rocket. Arianespace currently has a backlog of 30 Ariane 6 launches, 18 of which are for Amazon’s Kuiper constellation. However, it has recently begun to add Europe-based launch contracts for the rocket. During signing events at the 17th European Space Conference in late January, Arianespace secured contracts for three Ariane 6 flights, European Spaceflight reports.

Getting into operations … The missions are the European Space Agency’s PLAnetary Transits and Oscillations of stars (PLATO) mission, the Sentinel-1D Earth observation satellite that will replace Sentinel-1A, and a pair of second-generation Galileo satellites. After completing a largely successful debut flight last year, the first operational flight of Ariane 6 is scheduled for February 26, carrying the CSO-3 reconnaissance satellite for the French Armed Forces. (submitted by EllPeaTea)

SpaceX expends a Falcon 9 rocket. On Wednesday, SpaceX launched the SpainSat NG-1 satellite from Kennedy Space Center’s Pad 39A. The Falcon 9 first-stage booster used on this launch saw its 21st and final flight, Florida Today reports. SpaceX said the reason it was not trying to recover the booster was due to the extra power needed to reach the satellite’s intended orbit.

Into the drink … The well-traveled booster had launched a variety of missions during its lifetime: 13 Starlink missions, SES-22, ispace’s HAKUTO-R MISSION 1, Amazonas-6, CRS-27, Bandwagon-1, GSAT-20, and Thuraya-4. The Airbus-built satellite, known as SpainSat NG-1 (New Generation), is the first of two satellites for Hisdesat. It was developed under a partnership with the European Space Agency, making its launch on a Falcon 9 somewhat notable.

India marks first launch of 2025. India conducted its first launch of the year late Tuesday, sending a new-generation navigation satellite toward geostationary orbit, Space News reports. A Geosynchronous Satellite Launch Vehicle Mk II lifted off from Satish Dhawan Space Centre. Aboard was the NVS-02 satellite, sent into geosynchronous transfer orbit. The satellite is the second of five new-generation spacecraft for the Navigation with Indian Constellation (NavIC) system.

A busy year planned … The mission was the first of 10 orbital launches planned by India in 2025, which would mark a domestic launch record. Major missions include a joint Earth science mission between NASA and ISRO, named NASA-ISRO Synthetic Aperture Radar, expected to launch around March on a GSLV rocket, and an uncrewed test flight for the Gaganyaan human spaceflight program on a human-rated LVM-3 launcher. The first launch of the Vikram-1 for private company Skyroot Aerospace could also take place this year. (submitted by EllPeaTea)

New Glenn represents a milestone moment for Blue Origin. In a feature, Ars Technica explores what the successful launch of the New Glenn rocket means for Blue Origin. The near-term step is clear: getting better at building engines and rockets and flying New Glenn regularly. In an interview, Blue Origin founder Jeff Bezos sounded a lot like SpaceX founder Elon Musk, who has spoken about “building the machine that builds the machine” over the last decade with respect to both Tesla vehicles and SpaceX rockets. Asked about Blue’s current priorities, Bezos responded, “Rate manufacturing and driving urgency around the machine that makes the machine.”

The tortoise and the hare … There are those who wonder why Blue Origin, which has a “tortoise” as its unofficial mascot, has moved so slowly when compared to SpaceX’s progress over the last quarter of a century. Bezos responded that the space age is just beginning. “It’s still absolutely day one,” he said. “There are going to be multiple winners. SpaceX is going to be successful. Blue Origin is going to be successful. And there are other companies who haven’t even been founded yet that are going to grow into fantastic, giant space companies. So the vision that I think people should have is that this is the absolute beginning.”

Space Force has big dreams for ULA this year. The US Space Force is projecting 11 national security launches aboard United Launch Alliance’s Vulcan rocket in 2025, Space News reports. This ambitious schedule comes as the National Security Space Launch (NSSL) program continues to wait on Vulcan’s readiness. The heavy-lift rocket, which debuted last year after prolonged schedule setbacks, is a cornerstone of the NSSL Phase 2 program, under which ULA was selected in 2020 as the primary launch provider for national security missions through 2027.

That seems like a lot … However, Vulcan remains under review, with certification expected in late February following its second demonstration flight in October 2024. There is a lot of pressure on ULA to execute with Vulcan, due not only to the need to fly out Phase 2 launches, but because the military is nearing a decision on how to award launch contracts under Phase 3 of the program. The more complex “Lane 2” missions are likely to be divided up between ULA and SpaceX. Reaching 11 national security launches on Vulcan this year seems like a stretch for ULA. The company will probably launch only two rockets during the first half of this year, one of which is likely to be an Atlas V booster. (submitted by EllPeaTea)

April 2026 a “no later than” date for Artemis II. In a Space News article citing contractors’ defense of NASA’s Artemis plan to return humans to the Moon, a space agency official said the current timeline for Artemis II is achievable. April 2026 is actually a no-later-than date for the mission, Matt Ramsay, Artemis II mission manager at NASA, said during a panel discussion. “The agency has challenged us to do better, and we’re in the process of figuring out what better looks like,” he said, with a “work-to” launch date coming in the next few weeks.

NET or NLT? … This is interesting, because a good source told Ars about a month ago that the present date for the Artemis II mission to fly astronauts around the Moon has almost no schedule margin. However, Ramsay said the key factor driving the launch date will be work assembling the vehicle. Crews are currently stacking segments of the SLS’s twin solid rocket boosters, a process that should be complete in the next two to three weeks. This all assumes the Artemis II mission goes forward as designed. I guess we’ll see what happens.

Next three launches

Jan. 31: Falcon 9 | Starlink 11-4 | Vandenberg Space Force Base, California | 23:11 UTC

Feb. 2: H3 | Demo Flight | Michibiki 6 | Tanegashima Space Center, Japan | 8:30 UTC

Feb. 3: Falcon 9 | Starlink 12-3 | Cape Canaveral Space Force Station, Florida | 8:54 UTC

Eric Berger is the senior space editor at Ars Technica, covering everything from astronomy to private space to NASA policy, and author of two books: Liftoff, about the rise of SpaceX; and Reentry, on the development of the Falcon 9 rocket and Dragon. A certified meteorologist, Eric lives in Houston.

ford-made-a-nascar-mach-e,-but-it’s-not-sure-what-to-do-with-it-yet

Ford made a NASCAR Mach-E, but it’s not sure what to do with it yet

Ford’s no stranger to the NASCAR life. Ford driver Joey Logano was the 2024 Cup Series Champion in one of the company’s Mustang-bodied machines. He’s currently leading the 2025 series, too. However, the Blue Oval and its Ford Performance division are going into uncharted territory with their new prototype, an all-electric Mach-E built atop elements of NASCAR’s current Next Gen chassis.

The machine uses three motors to make a total of 1,341 hp (1,000 kW). Yes, three motors: one for each rear wheel, plus the odd one out up front, giving the thing all-wheel drive. That’s a seeming necessity, given the car has twice the power any NASCAR racer is allowed to deploy at non-restrictor-plate races.

But that extra driven axle isn’t just for acceleration. “If you’re rear-wheel drive only, you’re only getting rear regen,” Mark Rushbrook said. He’s the global director of Ford Performance. Since braking forces are higher at the front axle, an extra motor there means more regen to recharge the battery.
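Rushbrook’s point can be roughed out numerically. In the sketch below, the vehicle mass, speed, and the 60/40 front/rear brake-force split are illustrative assumptions, not Ford figures; it just shows how much recoverable braking energy a rear-only layout leaves on the table:

```python
# Back-of-the-envelope regen comparison. All numbers are illustrative
# assumptions (not Ford's): under braking, weight transfer puts more of
# the braking force on the front axle, so a front motor can recover it.
mass_kg = 1800          # assumed vehicle mass
v0 = 50.0               # m/s, speed entering the braking zone
front_share = 0.60      # assumed share of braking force at the front axle

kinetic_energy_j = 0.5 * mass_kg * v0**2        # energy shed in one stop
rear_only_j = (1 - front_share) * kinetic_energy_j
all_wheel_j = kinetic_energy_j                  # both axles regenerating

print(f"rear-only regen: {rear_only_j/1e6:.2f} MJ of {all_wheel_j/1e6:.2f} MJ available")
```

Under these assumed numbers, a rear-only layout could recover at most 0.9 MJ of the 2.25 MJ shed in a single stop; the real split depends on tires, battery charge limits, and conversion losses.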

Credit: Ford

The motors and the car’s development were done by Austrian motorsport development house STARD, which also provided the engineering behind Ford’s previous electric demonstrators, like the Supervan and the SuperTruck.

The most interesting thing, though, might just be the car’s shape. The NASCAR Mach-E has a decidedly crossover profile, much like NASCAR and ABB’s prototype EV, helping to make room for the batteries—78 kWh to be exact.

Is now the time to go EV racing?

Ford Performance is still working out where and when we’ll see this latest demonstrator actually demonstrating, but Rushbrook said this concept was developed in concert with NASCAR and other manufacturers, so this isn’t just a one-off. With shifting EV perception across the country and many saying that alternate fuels are the way forward, part of the plan is to figure out just what the fans want to see.

Ford made a NASCAR Mach-E, but it’s not sure what to do with it yet Read More »

in-apple’s-first-quarter-earnings,-the-mac-leads-the-way-in-sales-growth

In Apple’s first-quarter earnings, the Mac leads the way in sales growth

Apple fell slightly short of investor expectations when it reported its first-quarter earnings today. While sales were up 4 percent overall, the iPhone showed signs of weakness, and sales in the Chinese market slipped by just over 11 percent.

CEO Tim Cook told CNBC that the iPhone performed better in countries where Apple Intelligence was available, like the US—seemingly suggesting that the slip was partially because Chinese consumers do not see enough reason to buy new phones without Apple Intelligence. (He also said, “Half of the decline is due to a change in channel inventory.”) iPhone sales also slipped in China during this same quarter last year; this was the first full quarter during which the iPhone 16 was available.

In any case, Cook said the company plans to roll out Apple Intelligence in additional languages, including Mandarin, this spring.

Apple’s wearables category also declined slightly, but only by 2 percent.

Despite the trends that worried investors, Apple reported $36.33 billion in net income for the first quarter. That’s 7.1 percent more than last year’s Q1. This was driven by the Mac, the iPad, and Services (which includes everything from Apple Music to iCloud)—all of which saw upticks in sales. Services was up 14 percent, continuing a strong streak for that business, while the Mac and the iPad both jumped up 15 percent.
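As a sanity check on the growth figure, a year-over-year percentage change is just (new − old)/old. The year-ago number below ($33.92 billion for the comparable quarter) is an assumption pulled in for illustration, not a figure from the article:

```python
# Year-over-year growth check. The year-ago figure is an assumed input
# for illustration; the article only cites this quarter's number.
this_q = 36.33    # $B, this year's first quarter
year_ago = 33.92  # $B, assumed comparable quarter a year earlier

growth_pct = (this_q - year_ago) / year_ago * 100
print(f"{growth_pct:.1f}%")  # matches the ~7.1 percent cited above
```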

The uptick in Mac and iPad sales was likely helped by several new Mac models and a new iPad mini starting shipments last October.

Cook shared some other interesting numbers in the earnings call with investors and the press: The company has an active base of 2.35 billion devices, and it has more than 1 billion active subscriptions.

vghf-opens-free-online-access-to-1,500-classic-game-mags,-30k-historic-files

VGHF opens free online access to 1,500 classic game mags, 30K historic files

In the intro video, Salvador talks about looking through their archives and stumbling on the existence of Pretzel Pete, a little-remembered early 3D driving/platform game. Despite its extreme obscurity, the game is nonetheless mentioned in the 1999 E3 catalog and an old issue of PC Gamer, both of which are now memorialized forever in the VGHF digital archives.

Getting this kind of obscure information into a digitized, easily searchable form was “a lot harder than it sounds,” Salvador said. Beyond getting archival-quality scans of the magazines themselves (a process aided by community efforts like RetroMags and Out of Print Archive), extracting the text from those pages proved difficult for OCR software designed for the high-contrast, black-text-on-white-background world of business documents. “If you’ve ever read a ’90s video game magazine, you know how crazy those magazine layouts get,” Salvador said.

VGHF Head Librarian Phil Salvador talks about the digital library launch.

To get around that problem, Salvador said VGHF Director of Technology Travis Brown spent months developing a specially designed text-recognition tool that “handles even the toughest magazine pages with no problem” and represents “a significant leap in quality over what we had before.” That means it’s easier than ever to find 81 separate mentions of Clu Clu Land from across dozens of different issues with a single search.
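Mechanically, that kind of one-query, every-mention search is typically backed by an inverted index over the OCR’d text: a map from each word to the pages that contain it. A minimal sketch, with invented sample data rather than the VGHF’s actual pipeline or schema:

```python
# Minimal inverted-index sketch: map each word in OCR'd magazine pages to
# the (magazine, page) locations that mention it. Sample data is invented.
from collections import defaultdict

ocr_pages = {
    ("PC Gamer", 67): "preview of Pretzel Pete a cheerful 3D platformer",
    ("Nintendo Power", 12): "Clu Clu Land tips and tricks",
    ("EGM", 45): "a Clu Clu Land retrospective",
}

index = defaultdict(set)
for location, text in ocr_pages.items():
    for word in text.lower().split():
        index[word].add(location)

# A single lookup surfaces every page mentioning the term.
print(sorted(index["clu"]))  # [('EGM', 45), ('Nintendo Power', 12)]
```

Real-world search over magazine scans adds ranking, phrase queries, and OCR-error tolerance on top of this basic structure, but the index is what makes a corpus-wide search a single lookup instead of a page-by-page scan.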

Unfortunately, the vast wealth of video game information on offer here does not include direct, playable access to retail video games, which libraries can’t share digitally due to the limitations of the DMCA. But the VGHF and other organizations “continue to challenge those copyright rules every three years,” leaving some hope that digital libraries like this may soon include access to the source material being discussed.
