
Vandals deface ads for AI necklaces that listen to all your conversations

In addition to backlash over feared surveillance capitalism, critics have accused Schiffman of taking advantage of the loneliness epidemic. Conducting a survey last year, researchers with Harvard Graduate School of Education’s Making Caring Common found that people between “30-44 years of age were the loneliest group.” Overall, 73 percent of those surveyed “selected technology as contributing to loneliness in the country.”

But Schiffman rejects these criticisms, telling the NYT that his AI Friend pendant is intended to supplement human friends, not replace them, supposedly helping to raise the “average emotional intelligence” of users “significantly.”

“I don’t view this as dystopian,” Schiffman said, suggesting that “the AI friend is a new category of companionship, one that will coexist alongside traditional friends rather than replace them,” the NYT reported. “We have a cat and a dog and a child and an adult in the same room,” the Friend founder said. “Why not an AI?”

The MTA has not commented on the controversy, but Victoria Mottesheard—a vice president at Outfront Media, which manages MTA advertising—told the NYT that the Friend campaign blew up because AI “is the conversation of 2025.”

Website lets anyone deface Friend ads

So far, the Friend ads have not yielded significant sales, Schiffman confirmed, telling the NYT that only 3,100 have sold. He suspects that society isn’t ready for AI companions to be promoted at such a large scale and expects that his ad campaign will help normalize AI friends.

In the meantime, critics have rushed to attack Friend on social media, inspiring a website where anyone can vandalize a Friend ad and share it online. That website has received close to 6,000 submissions so far, its creator, Marc Mueller, told the NYT, and visitors can take a tour of these submissions by choosing to “ride train to see more” after creating their own vandalized version.

For visitors to Mueller’s site, riding the train displays a carousel documenting backlash to Friend, as well as “performance art” by visitors poking fun at the ads in less serious ways. One example showed a vandalized ad changing “Friend” to “Fries,” with a crude illustration of McDonald’s French fries, while another transformed the ad into a campaign for “fried chicken.”

Others were seemingly more serious about turning the ad into a warning. One vandal drew a bunch of arrows pointing to the “end” in Friend while turning the pendant into a cry-face emoji, seemingly drawing attention to research on the mental health risks of relying on AI companions—including the alleged suicide risks of products like Character.AI and ChatGPT, which have spawned lawsuits and prompted a Senate hearing.


Medical Roundup #5

Some amazing things are going on, not all of which involve mRNA, although please, please, those of you with the ability to do so, do your part to ensure it stays funded, either via investment or grants.

As for mRNA, please do what you can to help save it, so we can keep getting more headlines like ‘a new cancer vaccine just wiped out tumors’ even if it is sufficiently early that the sentence this time inevitably concludes ‘IN MICE.’

Wait, what, you’re saying we might soon ‘mostly defeat’ heart disease?

Cremieux: It’s hard to oversell how big a discovery this is.

Heart disease is America’s #1 cause of death. With a combination of two drugs administered once every six months, it might be mostly defeated.

Just think about how big this is! You will know your great-grandkids!

It is insanely optimistic that we have drugs that can reduce Lp(a) levels by [65%-]98% in trials right now.

If they succeed, they could help to crush heart disease and stroke.

Unlike LDL in general, Lp(a) levels are basically entirely genetic in origin and not open to lifestyle intervention. It is also widely accepted that the race differences are down to genes.

These drugs are examples of genetic discovery leading to a group difference fix via tech.

The future of medicine is still very bright.

I wouldn’t go that far. Even if these trials are successful, it seems unlikely we’re talking about ‘mostly defeat,’ although we could plausibly be at ‘greatly reduce.’ Which could still be worth several years of life expectancy across the board. If we could also similarly help with other major causes of death and aging, you’d see compounding gains, but without that, aging still catches up with everyone.

Unfortunately, America under the current administration is making deep cuts in basic research funding that leads to advances like this. Hopefully AI can make up for that.

A new major study finds that alcohol causes cancer, so the government worked to bury the study. Time and time again we are presented with the fact that small amounts of drinking correlate with improved health in various ways, fooling many into thinking a little alcohol is healthy.

As opposed to the reality, which is that alcohol is bad for you no matter what, but that inability to drink is highly correlated with alcoholism and other traits that go along with poor health, whereas ability to drink only in moderation is a good sign.

So drinking in moderation, which is only slightly bad for your health, is still a good sign for your health. Whereas heavier drinking consistently looks bad, perhaps even worse than it is.

Trump himself is not fooled, and does not drink at all, as he has seen the dangers of alcoholism in his own family. A wise choice.

Dylan Scott: They broke out their findings by different drinking levels — from one drink per day to three — and focused on health outcomes that have been proven to be associated with alcohol use. Their big-picture conclusion:

Among the US population, the negative health effects of drinking alcohol start at low levels of consumption and begin to increase sharply the more a person drinks. A man drinking one drink per day has roughly a one in 1,000 chance of dying from any alcohol-related cause, whether an alcohol-associated cancer or liver disease or a drunk driving accident. Increase that to two drinks per day, and the odds increase to one in 25.

In that context, the report is a harrowing read: Alcohol use is associated with increased mortality for seven types of cancer — colorectal, breast cancer in women, liver, oral, pharynx, larynx, and esophagus. Risk for these cancers increases with any alcohol use and continues to grow with higher levels of use, the study’s authors concluded.

Women experience a higher risk of an alcohol-attributable cancer per drink consumed than men. Men and women who die from an alcohol-attributable cause die 15 years earlier on average.

A 4% chance of dying on average 15 years earlier is a big deal.
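To make the arithmetic behind that explicit (my back-of-envelope reading of the quoted figures, treating the one-in-25 as a lifetime risk):

$$\frac{1}{25} = 4\%, \qquad \mathbb{E}[\text{years lost}] \approx 0.04 \times 15 = 0.6 \text{ years}.$$

That averages out to roughly seven months of life expectancy, but concentrated in a 4% tail that each loses fifteen years.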

This is not the first time a Trump administration defied the data on this.

US officials always solicit expert opinion as they prepare a fresh set of dietary guidelines. The input is usually compiled into one massive report from a group of experts called the US Dietary Guidelines Advisory Committee and then submitted to the Department of Health and Human Services and the Department of Agriculture, the two agencies that produce the guidelines.

That was how the process went in 2020, and at that time, the subcommittee of researchers dedicated to alcohol (including Naimi) advised the government to reduce the recommended limit down to one drink per day for men, from two. The Trump administration ultimately decided not to follow the recommendation.

Montana passes SB 535 with broad bipartisan backing, further expanding its ‘right to try’ rules with a license path for experimental treatment centers to administer any drug that got through Phase I trials, to anyone who wants it. This is The Way.

Alex Tabarrok is correct that we could greatly improve healthcare if we allowed telemedicine across state lines. As long as a doctor is licensed where the doctor is physically located, it shouldn’t matter where the patient is located. The best part is that this could be done with a two-word administrative rule change.

A call to make statins available over the counter (OTC). This seems obviously correct, even if you are highly skeptical that the correlations cited here imply causation, or that intervening via statins would deliver those benefits without important side effects. That’s a decision people can make for themselves at this point. But then a whole host of things should be OTC at this point that aren’t.

A new embryo selection company, Herasight, has launched, allowing users to select for IQ. They claim that with as few as 10 embryos you can already go from an expected IQ of 100 to a new average of 107. Or you can do things like go from a 45% chance of Type 2 Diabetes to 25%.

Alex Young: I’ve been working with an IVF startup, @herasight, that has already screened hundreds of embryos. Today we come out of stealth with a paper showing that our predictors for 17 diseases — validated within-family — beat the competition, with improved performance in non-Europeans.

In our paper, we detail our polygenic scores (PGS) for 17 diseases using a custom meta-analysis. We used state-of-the-art methods to create PGSs based on 7.3M SNPs. Our most predictive PGSs explained ~20% of the variance in liability for prostate cancer and type-II diabetes.

In practice what happens is you get your 5, 10 or 20 embryos, they profile each one, and you are choosing based on a variety of traits. What do you actually care about most? You are about to find out.
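For intuition on where numbers like ‘ten embryos gets you from 100 to 107’ can come from, here is a minimal Monte Carlo sketch in Python. The parameters are illustrative assumptions on my part, not Herasight’s figures: a within-family trait SD of 12 points and a predictor that correlates 0.4 with the true trait. The expected gain is roughly r × SD × E[max of n standard normals].

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_gain(n_embryos, r=0.4, sd=12.0, trials=100_000):
    # Each embryo's true trait, as a deviation from the family mean (SD units).
    true = rng.normal(size=(trials, n_embryos))
    # A noisy polygenic predictor that correlates r with the true trait.
    pred = r * true + np.sqrt(1 - r**2) * rng.normal(size=(trials, n_embryos))
    # Pick the embryo with the highest predicted score; record its true value.
    picked = true[np.arange(trials), pred.argmax(axis=1)]
    return sd * picked.mean()

for n in (5, 10, 20):
    print(f"{n} embryos: +{expected_gain(n):.1f} expected points")
```

With those assumed parameters, ten embryos comes out to roughly +7 points, and the returns diminish: the expected maximum of n draws grows only logarithmically in n.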

Kitten: Within a few years they’re going to be able to show you a very accurate picture of what each baby will look like as an adult.

That will trump every other consideration by a large margin.

I agree that this technology is coming. I agree it will matter to people. And why shouldn’t it matter, at least from a selfish point of view? It also might be a better way to select for other good things than you might think, as in general health and other positive traits are correlated with beauty. I do not think it will be anything like ‘this overrides everything else.’

Even the graceful failure mode for things like this is a really big deal.

Mason: I think this just ignores the realities of IVF tbh

20 healthy embryos is, for most people, 3-5 IVF cycles

Success for a single embryo transfer is 40-60%.

“Well, your dad and I wanted a boy and you were the second highest IQ male after Embryo 6 failed to implant.”

Yeah, these numbers are for mothers <35. It's crazy how many people think IVF is a safety net for aging out of your fertility window when you actually need to start the process quite young in order for it to be likely to work.

I mean that sounds pretty great? You got your choice of gender and a couple of IQ points, while presumably also dodging a variety of potential genetic disorders. That’s a pretty good haul.

What you actually get is not maximizing on one to two traits, although you do have that option. What you get is to do a general maximization over many traits. That is a lot more valuable if you are making reasonable decisions.

Mason is however making the very important point that if you want to use IVF and have confidence it will work at all, let alone confidence it will give you selection, you need to do it early. The younger you are, the better all of this will on average go, until such time as we figure out how to generate new eggs (which is plausibly only a few years away).
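To put Mason’s numbers together (a minimal calculation, taking the per-transfer figure at face value and assuming independent attempts): if each transfer succeeds with probability p, then

$$P(\text{success within } k \text{ transfers}) = 1 - (1-p)^k,$$

so at p = 0.5 three transfers gets you to 87.5% and five to about 97%, while every failed transfer also uses up one of the embryos you were hoping to select among.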

As Gene Smith points out in this thread, academics are super against all of this.

Gene Smith: We’re in this insane world right now where rich parents are going through IVF just to do embryo selection, and middle class parents who are ALREADY doing IVF are being told by doctors that embryo selection doesn’t work or is unethical.

If you’ve been to conferences on embryo selection or ones where the topic is discussed, you will realize the truth is basically the opposite. Most academic discussion of this is dominated by ethicists talking about how problematic it is or academics talking about its limitations.

And make no mistake, these discussions matter. They’re the main reason why even doctors who understand embryo selection rarely recommend it to their patients, even when it could significantly reduce the odds of a patient passing along a disease to their child.

Among the academics that care about this, a large majority are opposed to embryo selection for ideological reasons. There hasn’t been all that much quantitative analysis done on it. And what has been done often fails to capture how embryo selection is actually done in industry.

A lot of work has focused on selection on a single trait, which is easier to model, but not how selection actually works in practice. And many of the conclusions about its efficacy rely on outdated predictors which have improved substantially since publication.

The effect size of embryo selection for IQ has nearly tripled since the publication of this paper, which was still being cited relatively recently.

He also reminds us that we are spending all our gene editing resources on rare diseases, where we can stick the government with gigantic bills and the ‘ethicists’ stop trying to ban you from helping, whereas the places where most of the value lies are completely ignored, even by people who would directly benefit.

Gene Smith: The situation in the gene editing field right now is kind of unbelievable. We’ve spent over a decade throwing billions of dollars at gene editing for rare diseases and we still can’t get the editors inside most of the cell types we need to modify.

Everything is either for the liver or the eye or the bone marrow. We can’t get to the brain. We can’t get to the lungs. We can’t get to the heart. And we can barely fit a single editor into the best delivery vector so no one is even thinking about polygenic disease.

There is an extremely simple way to fix this: edit in embryos. All the stuff we can’t target right now because of delivery issues becomes almost trivial in embryos. You can literally stick a needle into the egg and squirt the editors in.

Alzheimer’s? Tractable. Diabetes? You can pretty much get rid of it. Heart disease? You can mostly make it a thing of the past. Breast cancer? Same deal.

The blame for this delay lies at the feet of academics. There are literally professors who go to conferences on gene editing and brag about how they’ve gotten governments to ban embryo gene editing.

[thread continues]

Bex: surely this is being done black budget and for the uber wealthy already. but I guess all that learning is hidden:(

Gene Smith: As someone working on the cutting edge of this field, I can tell you it’s definitely NOT. You underestimate how maddeningly myopic wealthy people can be about new tech. Fred Koch literally didn’t ask the cancer research center he funded to look into ways to treat his cancer.

The most cutting edge thing wealthy people are actually doing right now is embryo selection a la Herasight and Orchid.

I mean, it’s not that simple, of course it is not that simple, but it is a little bit that simple?

If you have the option to use the technology we do have, it seems crazy not to use it. Noor Siddiqui is being too righteous and superior about it even for me here, but if it really does cost as little as $2,500 to get selection over potential embryos, that is a better deal than essentially any remotely optional interventions you have access to after birth. The replies to such proposals are full of people saying how horrible all this is and having instinctual disgust and purity reactions, and calling everyone involved and the proposal itself various names, all of it patently absurd without any actual justifications.

I do strongly agree with Vitalik Buterin that while we should be very much in favor of embryo selection, and most of the attacks on it are deranged, it is counterproductive and wrong to strike back by arguing that not using selection is unethical because it condemns your children to be worse off.

Vitalik Buterin: If you publicly telegraph that if you win your fight for acceptance you will immediately follow it up with a fight for imposition, do not be surprised if people decide to oppose your fight for acceptance from day one with a vigor only reserved for resisting imposition.

We’ve been over this one many times. If that which is not forbidden is compulsory, then that which I do not want to be compulsory must be forbidden.

Oh no, all these GLP-1 drugs are going to prevent people from dying, and that could have a negative impact on pensions?

Eliezer does a survey and finds that ~80% of those who tried GLP-1 drugs report they helped a lot, versus the minority who report they didn’t, roughly confirming industry claims. That’s a fantastic success rate.

Now we are potentially seeing Eli Lilly have a pill that’s ~55% as effective as Ozempic in phase 3 trials; their stock was up 14% on the news. They also have Retatrutide coming in a year or two, which is claimed to be a more effective GLP-1 drug that also causes less muscle loss.

These marginal improvements make a huge practical difference. GLP-1s are now a lot like AI, in that we keep getting better versions and people’s impressions don’t update.

Unfortunately, it’s not always easy to get a line on the necessary supply.

Scott Alexander: Update on Ozempocalypse: some pharmacies have stopped selling compounded GLP-1 drugs, others continue, with various flimsy legal excuses. Cremieux has a guide (partly subscriber-only) on how to order and use cheap “research chemical” GLP-1 from peptide companies. And the Trump administration cancelled a Biden initiative to make GLP-1 drugs available via insurance.

There were official shortages of GLP-1 drugs, which allowed compounders to make and sell those drugs cheaply, on the order of $300/month, in essentially unlimited quantities. Alas, there is now no longer an official shortage, so the price is shooting back up again (~$1k/month) and supply is harder to find. We really should find a way to buy out those patents and make these drugs available for low prices (ideally for free, with at most a minimal consultation) for whoever wants them.

Is it possible that if you combine a GLP-1 with the anabolic agent bimagrumab, you can lose fat without losing muscle? Eli Lilly is in phase 2b of trying to find out.

Egan Peltan: Today, Lilly revealed the weight loss & composition at 48 weeks

Sema: -13.5% (72% Fat)

Bima: -8.6% (100% Fat)

Bima + Sema: -16.4% (>90% Fat)

Patients on bima+sema saw a >40% decrease in body fat with no change in lean mass at 24 weeks. Overall, bima and bima+sema were well tolerated, with no striking SAE imbalances.
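Translating those numbers into body-weight terms (my arithmetic from the quoted figures; lean loss is total loss times the non-fat fraction):

$$\text{sema: } 13.5\% \times (1 - 0.72) \approx 3.8\% \text{ lean}, \qquad \text{bima+sema: } 16.4\% \times (1 - 0.90) \approx 1.6\% \text{ lean}.$$

So the combination loses more total weight while giving up less than half as much lean mass.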

What we don’t know works with GLP-1s is microdosing.

Eric Topol: The GLP-1 microdosing wellness and longevity craze without any data for dose or evidence of effect.

As usual, this wording confuses a lack of formalized data and evidence with a lack of data and evidence. These are not the same thing. With a drug that has big benefits but also downsides, and that is clearly safe, it is an entirely sensible strategy to experiment with different doses and choose. Self-experimentation can provide highly useful data and evidence, and clearly different people respond differently in terms of benefits and side effects at different doses.

Babb (from the WSJ article): What feels healthiest is taking the lowest amount that’s providing what I’m perceiving to be these health benefits.

The marketing involving microdosing does not sound great, especially its claims of other vague and hard-to-measure benefits.

There are still some potential safety issues where it would be good to have more confidence, and I agree that we are not treating checking for such possibilities as seriously as we should given how many people are on these drugs. But also given how many people are on these drugs (about 6% of Americans right now), how many people are inherently suspicious that there is a catch, and how long people have been taking them (including previously for diabetes), I am confident that if there were a showstopper problem it would have been found by now.

Andrew Rettek: Ozempic type drugs have been on the market for over seven years. It’s been about five years since 1 million Americans have been taking them. We have a pretty good idea of their side effects at this point. It’s longer than you thought, because people have been taking them for diabetes for years before they became popular for weight loss.

People instinctively think there must be a catch to GLP-1s. But for people in most circumstances there mostly isn’t one, at least not one of magnitude similar to the benefits, other than that for some people it doesn’t work?

GSV Bemusement Park: Look if you are trying to lose weight there is no reason to not be padding your efforts with a GLP-1 drug unless everyone in your family gets thyroid cancer.

If you buy smartly the cost is literally negative after accounting for the food you don’t eat, and you can always go back on if you regain weight.

If you follow me then your mutuals almost certainly include several helpful, literate people who will be thrilled to advise you.

You have nothing to lose but waistline.

Peter Hague: Ozempic skepticism seems entirely based on the animist-like idea that there is a conservation law for good fortune – all good things must have some equal cost imposed elsewhere. It’s not scientific skepticism, it’s superstition.

I’m really concerned too how “big pharma is trying to kill you/sterilise you with fake medicine” has gone from the fringes of Facebook mummy groups to the mainstream. It used to be the main complaint was they were greedy and withholding good medicine from sick people.

Dystopina GF: Anti-Ozempic people don’t understand that the average person isn’t supposed to be an iron-willed Übermensch. Average people simply reflect the quality of their society; they’re obese bc modernity is sick, not bc of some moral failing. Being thin should be effortless, the default!

A well-designed society is a system that makes good things as frictionless as possible. A poorly-designed society is one where it takes a Herculean effort to achieve the most basic elements of human health and fulfillment

John Pressman: The replies on this inspire serious misanthropy. The QT is straightforwardly correct and the replies make it obvious that all good things come from brutally, ruthlessly destroying the perverse costly signaling equilibria people naturally cover up utility with.

Of course, here I am saying that and then not taking them, but I am a bizarro case where I was able to get to my target through sheer willpower, and I have decades of experience dealing with essentially permanent hunger. That almost never works.

Andrew Rettek reports from a week discussing diet, exercise and GLP-1s at LessOnline and Manifest. Lots of people wanted to exercise, but felt they needed ‘demystifying’ of the gym and the general procedures involved. I very much feel this. Our on ramps suck, and mostly amount to ‘find a source and trust it.’ Which is way better than not exercising but doesn’t fill one with confidence or motivation – I’m spending a substantial portion of my time and willpower and energy on this, and different implementations differ a lot. Whereas for diet, Andrew observed people mostly weren’t interested, my guess is because they’ve already heard so many contradictory things.

GLP-1s served as a large scale experiment in drug compounding, and in sidestepping the FDA’s various usual regulatory requirements. The results were amazingly great, everything went smoothly. This is even more evidence for FDA Delenda Est, while noting that ‘fire a bunch of people at the FDA’ makes things worse rather than better. Removing requirements is good, but if there are going to be requirements it is necessary to be able to handle the paperwork and meetings promptly.

Going forward, the bigger question is: What happens if this actually works?

Cremieux: Eli Lilly just showed that you can lose tons of fat while barely losing any muscle using their activin type-II receptor inhibitor, bimagrumab.

We are approaching a golden era of weight loss, where everyone can easily be muscular and skinny.

Prepare for hordes of hot Americans.

Devon Eriksen: And I am predicting, here and now, that there will be a massive social pushback from people who would rather see their fellow Americans live out their lives sick, fat, and miserable than give up their just-universe fallacy.

At what point do we start to think of excess weight as fully treatable?

Shako: Increasingly in public I see obese people as glp1 deficient. I mean this seriously. There is a 200 pound 15 year old girl at this playground. It’s heartbreaking and we can treat it now.

Lots of people are commenting that diet and exercise work better. And yet vast numbers of Americans, often stupid and poor, die every year from preventable metabolic diseases. Do you think they just don’t know about diets?

Eliezer Yudkowsky: Tirzepatide doesn’t work on 20% of people, including me.

Shako: 🙏🏻 hope something does soon.

Not every case can be treated, since we have a non-response problem, and some people will run into side effects or risk factors. But a large majority of people with a problem still haven’t attempted treatment. Most of them should try. If you were already going to lose the weight on your own, you could do it more easily with help?

What about the equilibrium, signaling and status arguments? That if we allow people to lose weight without willpower or personal virtue then that will make the signals harder to read and be Just Awful? Yeah, I don’t care.

Cremieux claims that it is a myth that yo-yo weight loss reduces muscle mass and makes you fatter. Andrew Rettek responds that the studies claiming to bust this were on wrestlers, which doesn’t apply to normal people, and I buy his explanation of the mechanism here, which is:

  1. If you lose weight without resistance training you’ll lose some muscle mass.

  2. If you gain weight without resistance training you won’t gain muscle mass.

  3. Thus, if you do both, yes, you end up worse off.

  4. But that’s a choice, and it’s fixable by attacking ‘without resistance training.’

There are also other exercise methods. The point is clear.

Weight loss helps resolve back problems and otherwise makes things hurt less. Few appreciate how big a deal this is. This was a big game changer for me on its own.

Man Down Ted: I may have a bit less hair, but I have FAR fewer body aches and pains than I did 10 years ago.

A lot of that is due to lowering the weight my body has to carry, but at least some of it is due to the fact that I will just make myself do 10 mins of yoga to loosen up, any time.

Meanwhile, half my male friends near my age have thrown their back out this year. Possibly more than half wake up stiff and sore from sleep every day.

Dropping the weight is good for dating, but it’s GREAT for your long-term health and self image. And not THAT hard.

That is, no one knows much about how to do it right.

We do know many ways in which one can do it wrong.

Cremieux points out that in general ‘nutrition beliefs are just-so stories.’

If there were two concrete specific things I would have said I was confident about, one of them would have been ‘vegetables mostly good’ and the other would be ‘sugar bad.’ I don’t think it’s bad enough to stop me all that much on either front, but I do buy that sugar is bad on the margin.

And yet when Americans cut sugar consumption (modestly by ~10%, note the y-axis) the obesity rate still mostly kept moving up.

He blames this mess on selection. Once sugar got blamed, choosing to consume more sugar became correlated with other health decisions, some of which matter. The associations between sugar and health only show up, he claims, after 2012. And he says this generalizes – certain people eat certain diets. He finishes with a survey of some of the other problems.

I am inclined to be kinder, and see the epistemic problems here as actually super hard. Nutrition is remarkably different for different people, doing proper studies on humans is extremely hard, there are distinct effects for short, medium and long terms, a lot of this is about how you psychologically respond, lots of details matter and are hard to measure and can be changed without you realizing, and so on.

As far as anyone can tell, aspartame, a zero-calorie sweetener, seems benign. I agree that this is great news so long as no one forces anyone to use it. I think the people this most annoys are people who believe that it is not as good an experience as sugar, and don’t want their sugar taken away, either legally or by those ‘worried about their health.’ Also, since no one knows anything about nutrition, I wouldn’t assume that there isn’t a catch.

From what I’ve seen, they go wrong enough often enough that starting to mess with them probably doesn’t make sense if you aren’t already messing with them.

Do ketogenic diets dramatically raise risk of stroke and heart attack?

Christoffer Bausted Nilsen: “When the facts change, I change my mind. What do you do, sir?” (source)

The graphs show dramatically accelerated PAV (Percent Atheroma Volume) and NCPV (Non-Calcified Plaque Volume) in those on Keto diets.

As always, beware conclusions from intermediate endpoints, and also different people often have very different reactions and averages can be misleading. But I agree this does not look good.

Cremieux: I’m a little annoyed that I spent so many years thinking keto was obviously good.

Now that we have a lot of data, it turns out that the people who said “fat—especially saturated fat—clogs your arteries” were just totally correct.

I should not have believed the contrarians.

I don’t think we have enough evidence here to draw that conclusion either?

Here’s a counterargument from someone thoughtful and sympathetic to Keto:

William Eden (not endorsed): As someone who has recommended paleo/keto diets in the past, let me lay out my current thinking:

ApoB is directly related to atherosclerosis progression (experts are right)

*Some* people who go on keto skyrocket ApoB, and yes I think keto is bad for those people. This is where ultimately empirical results overcame the same things he derides in this discussion (“mechanistic” reasoning to support the answer you’re hoping to see).

Human PCSK9 mutants with no CVD changed my mind – and are the basis for a class of very strong therapeutics! However… I still think mechanisms are real. Some are better established, others speculative. Good theories predict the (very complicated) data. My favorite thinker on the subject is Peter over at Hyperlipid, though he’s almost impossible to understand.

Just because high sat fat keto increases heart disease doesn’t mean the answer is to eat huge amounts of polyunsaturated fats either. Yes, I am a seed oil disrespecter. Mechanistically I think it causes the liver to hang onto fat instead of circulating it, but we can do better.

This video from Peter Attia is just 10 minutes and lays out the good drug options available today, including ones with better risk/reward than statins (though a *low dose* statin I think is fine with CoQ10 supplementation).

(Btw I think it’s absolutely clear that LDL must play some role in the body or it wouldn’t exist. I don’t 100% know all of its functions, but when you look across countries, low LDL is seen in developing countries with lots of infectious disease and other cardiovascular issues)

In terms of my diet advice, I do think a higher-fat diet that dips occasionally in and out of ketosis is likely optimal for long term performance and health. If you get high ApoB, swap out sat fat for monounsaturated fat, and consider drugs. Keto for >>1 month I have concerns.

In terms of my epistemic advice, I do think contrarians should try harder to grapple with mainstream evidence instead of dismissing it. *Especially* the really strong evidence, and being able to see it as such. That’s why short term diet studies didn’t get me, but PCSK9 did. Fin.

Nathan Cofnas defends Keto on multiple fronts while also offering warnings.

  1. He gives us the standard ‘correlation is not causation’ warnings that are at their peak in nutrition and a reminder that no one knows anything beyond that you need a basic set of nutrients and that different people are different.

  2. The key thing about Keto, as its fans envision it, is that it is the ultimate case of never doing anything by halves. The idea of Keto is that you force your body into ketosis. That means consistent hardcore restricting of carbs. Most people can’t do it.

  3. So almost any observational study is going to include a bunch of people who don’t actually do the thing and stay in ketosis, and being kind-of-keto doesn’t work.

  4. Remember, when evaluating diets, that people basically never stick to one for very long. The majority of people who say they’re ‘vegetarian’ will then admit they’ve eaten meat in the past week.

  5. Keto is not optimized for longevity. You do it to feel and look better now, not to live for longer. Even if it has longevity costs, it might still be worth it. If you do want primarily longevity he suggests caloric restriction and pescetarianism, which is kind of the opposite of keto.

  6. Keto is the only diet that worked for him in terms of current health, but of course different things work for different people.

I find these defenses:

  1. Reasonable arguments.

  2. Clear indications that very few people should be trying to be on a keto diet.

For keto to make sense, even if you mostly buy the counterarguments, given the risks involved (over a range of possible worlds):

  1. You need to have extraordinary willpower and discipline, keeping in mind that most people who think they have it, don’t have it.

  2. You need to be willing to spend that, and give up carbs, and endure all the hedonic and social costs involved, to stick to keto for real.

  3. You need to prioritize short term health over longevity.

  4. You need to have a body where keto happens to be relatively beneficial for you.

  5. You need to keep a very close eye on various markers and your experience as you try it, and abort on the spot if an issue emerges, either in terms of your health or experience, or simply exposing that the willpower price is too damn high.

And that’s if you buy the counterarguments.

In practice, even if you think this is right for you to try, I’m willing to go ahead and say that on average you are wrong about that.

Congratulations to Allyson Taft on her transformation, including losing 120+ pounds in just under a year by walking a lot, drinking a lot of water and tracking calories.

Not everyone can do this. Not everyone should attempt this. And most people in the position to need to do it should probably take GLP-1s.

It was still very nice to see a clear example of someone pulling off the pure willpower route who wasn’t me. And it sounds like she didn’t even use GLP-1s.

One thing I would push back on very hard is the idea that there is something perverse and modern about the fact that if we eat until we feel satiated with reasonable access to a variety of foods then we will gain weight:

exfatloss: If you can maintain a healthy body weight by restricting your total food intake, but never give in to eat ad libitum because you know you’d gain fat, your satiety & appetite are broken. Until very recently, nobody needed to count calories to remain thin.

I’m not saying that staying thin did not get harder on many fronts, and it is entirely possible that there are other things making things harder on top of that, but the idea that in the past eating as much as you wanted of what you wanted wouldn’t have made you fat is Obvious Nonsense. Rich people from the past who could eat as much as they wanted, and didn’t care, totally got fat. Sure, people mostly stayed thin, but largely out of lack of opportunity.

Needing to control yourself to stay thin does not mean that your satiety is broken. You’re asking way too much of satiety. You’re asking it to give you the right highly calibrated signal in a radically different world than the one it was trained for.

This doesn’t require toxins or any particular villain beyond widespread variety and caloric availability, including many calorically dense foods, and greatly increased food variety and quality, and various cultural expectations of eating frequently. It is enough, and regardless of the original cause life is not fair.

Cremieux: Attempts to explain the obesity epidemic through contamination, toxins, conspiracies, seed oils, sugar, high-fructose corn syrup, lithium, or whatever else always strike me as annoying.

What we must explain is an increase of ~200-350 calories a day in energy balance. That’s all.

My answer is simple, and similar to Cremieux’s in the thread, that there was never really any robust natural ‘balance.’

For full disclosure, I say this as someone whose satiety has always been utterly broken to the point of uselessness. If left to my own devices I would put on insane amounts of weight very quickly. I might double my caloric intake. That’s what actually broken looks like. But don’t ask an evolutionary mechanism to give the right answer in a radically changed world, and don’t call it ‘broken’ when you have to adapt.

Yes, until recently that meant you had to work for it. That’s one option. The other? Good news, now we have Ozempic and other GLP-1s.

Key Smash Bandit: I started taking a million supplements and I do feel meaningfully better but the new problem is I have no idea which one did it, and taking all of them is expensive and annoying

Ryan Moulton: This is a fun task to try to design an experiment to resolve.

I would randomize to taking 50% each day (ideally blinded, but whatever) and then record how I felt each day. Then after you have ~a month of data, fit a regression model from what you took to how you felt, probably with at least terms for a 1-day lag. Lean on the L1 penalty (lasso) a lot, because you’re probably looking for a single thing rather than a large set.

Unfortunately there are what the economists call ‘long and variable lags’ in vitamin impact, both positive and negative. Even if it originally helped or hurt in a day, it might take a while for a supplement to wear off. Others only work via accumulation over time, or work on things other than observable mood.

Experimental design here is very hard. I would not want to maximize purely on the basis of how I feel within 24 hours.

Also, for various reasons, you cannot by default confidently equate any given pill with other pills from another source that claim to deliver the same dose of the same vitamin.

If I were going to run such tests, as much as I would like to randomize every day, I would at minimum want to wait three or so days between swaps, and I would limit the multiple hypothesis testing. The data is going to be super noisy at best.
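As a sketch of what Moulton’s proposed analysis could look like in practice (hypothetical file and column names; LassoCV is scikit-learn’s cross-validated lasso, and I’ve added the one-day-lag terms he suggests):

```python
import pandas as pd
from sklearn.linear_model import LassoCV

# Hypothetical log: one row per day, a 0/1 column per supplement taken,
# plus a subjective 1-10 "mood" rating. Column names are made up.
log = pd.read_csv("supplement_log.csv")
supplements = [c for c in log.columns if c != "mood"]

# Design matrix: today's intake plus yesterday's, per Moulton's lag terms.
X = pd.concat(
    [log[supplements], log[supplements].shift(1).add_suffix("_lag1")],
    axis=1,
).dropna()
y = log.loc[X.index, "mood"]

# L1 (lasso) regularization shrinks most coefficients to exactly zero,
# which is what you want if one or two supplements are doing the work.
model = LassoCV(cv=5).fit(X.values, y.values)
for name, coef in sorted(zip(X.columns, model.coef_), key=lambda t: -abs(t[1])):
    if coef != 0.0:
        print(f"{name}: {coef:+.2f}")
```

Per the concerns above, you could also aggregate days into three-day blocks before fitting, and restrict the candidate set to a handful of supplements to limit the multiple hypothesis testing.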

A story worth sharing.

Josh Whiton: Halfway through university I was diagnosed with clinical depression. After a battery of tests and interviews with psychologists I eventually met with the psychiatrist who was to dispense my medication.

Instead he asked me a question that no one had ever asked.

“Why are you depressed?”

So I told him about the meaninglessness of life in an accidental universe where all life was just the product of chance.

“You want me to put you on medication because you’re an intellectual?” he said.

Then he said the wildest thing: “My concern is that your depression is part of a process and the drugs will slow it down.”

He told me to go home and observe all the thoughts in my mind instead of trying to escape from them. If in three days I still wanted the drugs, to come back and he’d give them to me.

So I went home and spent three days journaling, had three epiphanies about the nature of reality, and the year-long depression immediately lifted.

I wonder about all the kids like me who got the drugs instead.

Zac Hill: One of the things I was most brazenly wrong about was just basically making fun of people who thought that Canada’s euthanasia program would achieve scale.

As in, MAID already accounts for 5% of all deaths in Canada and is about to be available for mental conditions, parliament wants to grant access to minors, and I see a bunch of claims about it being aggressively pushed onto people in various ways. One wonders where this stops, once it starts. If it stops.

Former Quebec officials suggest that the best “solution” for an intellectually disabled woman without adequate support might be death. Which might be true, but wowie. This highlights where things might be going. Do you trust the state not to push such solutions on people who are financial burdens? To not cut off aid to them as part of an attempt to force such choices upon them? I notice my answer is very much no.

A lot of ‘ethics’ people think the only ethical thing for you to do is hurry up and die.

Joao Pedro de Magalhães: “It is highly unethical to stop aging” – reviewer commenting on one of my grant applications.

The grant focused on cellular rejuvenation, with no mention of curing aging, but it shows we still have a long way to go to convince even fellow scientists that curing aging is desirable.

As in, you are trying to solve people’s health problems, but this might also cure aging, And That’s Terrible.

Once again: Bioethicists know nothing of ethics. This is them answering:

Here’s a bioethicist giving a Ted talk advocating that humans be ‘engineered’ to be, no, not more intelligent or healthy or happy, but (checks notes) intolerant to meat, at the same time that other ‘ethicists’ are screaming about how awful it is that we might do embryo selection on positive traits.

And 41% stand ready here to outright deny physical reality.

How exactly would we design society such that it is not a disadvantage to be blind?

If I was blind and I heard people were claiming this, I’d be pretty pissed off at them.

Do you think, by answering that way, you are helping anyone?

Learning about your HIV status, when treatment is not available, dramatically reduces long term survival rates. Some of this is people adjusting rationally to the news, engaging in generally riskier (to themselves) behaviors and having higher discount rates. Some of it is despair or overreaction. However, it isn’t really an option not to tell them or not to run the tests, for obvious reasons.

Here in New York they take every opportunity to push you to get tested for HIV, even when there is no reason to do that, as I found out on a recent hospital visit (it was an infection, I am 100% fine now, don’t worry).

Clash Report: Hot-mic moment at the Beijing parade.

Xi: “People rarely lived past 70 before. Now at 70 you’re still a child.”

Putin: “With biotech, organs can be replaced endlessly… people could even reach immortality.”

Xi: “Some predict people might live to 150 this century.”

I do see hope that some people might live to 150 this century, or even that some people alive today might live far longer than that. It will not happen because ‘organs can be replaced endlessly’ because that does not reverse aging, so the rate of new problems will keep doubling roughly every seven years, but we have other options.
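The ‘doubling roughly every seven years’ is the Gompertz pattern: the hazard of dying grows exponentially with age, so patching individual organs lowers the current level but not the growth rate. In symbols, using the seven-year figure from above:

$$\mu(t) = \mu_0 \, e^{gt}, \qquad g = \frac{\ln 2}{7\ \text{yr}} \approx 0.10/\text{yr},$$

so even an intervention that cut today’s hazard by 90% would be fully eroded about 23 years later (since $2^{23/7} \approx 10$); only lowering g itself, i.e., actually slowing aging, changes the long-run picture.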

Also, yeah, thinking ‘at 70 you’re still a child’ is both obviously false and reflective of the geriatric ruling class that is doing so much damage. Until your anti-aging technology gets a lot better no one 70+ should be in high office.

In response to RFK’s War on Vaccinations and as Florida repeals all vaccine requirements, there are now two state coalitions trying to defend our health. California, Oregon, Washington and Hawaii are going to form one set of recommendations. New York, Massachusetts, Connecticut, Delaware, Pennsylvania and Rhode Island will form another, likely with New Jersey, Vermont and Maine.

Freezing sperm young, if you can spare a little cash to do it, is a pretty great deal. Even if everything keeps working fine you cut down on mutation load a lot, and if something does go wrong you’re going to be very happy about it. It’s a lot of optionality and insurance at a remarkably cheap price, even if you factor in the chance that in the long term we’ll have other tech that renders it unnecessary.

This could end up being a big one: Germany’s national academy of sciences correctly urges that we treat aging like a disease. If we can get momentum behind this, it will supercharge progress towards the (non-AI) thing that is actually killing most of us.

New study claims living near (within 1-3 miles of) a golf course doubles risk of Parkinson’s Disease, which is presumed to be due to pesticides.

Ross Rheingans-Yoo is doing a video podcast series on why our drug development process is so broken and expensive.

Scott Adams has prostate cancer, and we wish him well; the thread has illustrations of exactly how unhinged a certain type of thinking got, and across how wide a set of medical topics. Luckily he himself has realized the mistake, and although the odds are against him, he now appears to be setting aside the quacks and getting real treatment.

A new claim from a randomized trial that getting immunotherapy in the morning yields a doubled survival rate versus getting it in the evening; Abhishaike Mahajan speculates about potential reasons why.

A potentially serious incentive problem: If CRISPR treatments can permanently cure conditions, insurance companies have no reason to value a permanent solution more than a temporary one, so the whole system will underinvest in such treatments. Ultimately it should be fine, since at most this is a limited cost factor and thus not enough distortion to stop this from happening, and AI should improve development costs enough to compensate.
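A toy version of that incentive gap (illustrative numbers of my own, not from the linked piece): if a chronic therapy avoids cost c per year and the average member stays with an insurer for m years, the insurer’s private value of a permanent cure is only about c · m rather than the lifetime value c · L:

$$\text{WTP}_{\text{insurer}} \approx c \cdot m \ll c \cdot L, \qquad \text{e.g. } c = \$50\text{k},\ m = 3,\ L = 30:\ \$150\text{k vs. } \$1.5\text{M}.$$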

They did finally run a decade-long study of GMOs in monkeys, and of course they found zero adverse effects on any measured health indicator.



F1 in Singapore: “Trophy for the hero of the race”

The scandal became public the following year when Piquet was dropped halfway through the season, and he owned up. In the fallout, Briatore was issued a lifetime ban from the sport, with a five-year ban for the team’s engineering boss, Pat Symonds. Those were later overturned, and Symonds went on to serve as F1’s CTO before recently becoming an advisor to the nascent Cadillac Team.

Even without possible RF interference or race-fixing, past Singaporean races were often interrupted by the safety car. The streets might be wider than Monaco’s, but the walls are just as solid, and overtaking is almost as hard. And Monaco doesn’t take place with nighttime temperatures above 86°F (30°C) and heavy humidity. Those are the kinds of conditions that cause people to make mistakes.

The McLaren F1 Team celebrates its Constructors’ World Championship title on the podium at the Singapore Grand Prix at Marina Bay Street Circuit on October 5, 2025. This is the first time McLaren has won back-to-back WCC titles since the early 1990s. Credit: Robert Szaniszlo/NurPhoto via Getty Images

But in 2023, a change was made to the layout, the fourth since 2008. The removal of a chicane lengthened a straight but also removed a hotspot for crashes. Since the alteration, the Singapore Grand Prix has run caution-free.

What about the actual race?

Last time, I cautioned McLaren fans not to worry about a possibly resurgent Red Bull. Monza and Baku are outliers: tracks that require low downforce and low drag. Well, Singapore benefits from downforce, and the recent upgrades to the Red Bull have, in Max Verstappen’s hands at least, made it a competitor again.

The McLarens of Oscar Piastri (leading the drivers’ championship) and Lando Norris (just behind Piastri in second place) are still fast, but they no longer have an advantage of several tenths of a second over the rest of the field. They started the race in third and fifth places, respectively. Ahead of Piastri on the grid, Verstappen would start the race on soft tires; everyone else around him was on the longer-lasting mediums.


Here’s the real reason Endurance sank


The ship wasn’t designed to withstand the powerful ice compression forces—and Shackleton knew it.

The Endurance, frozen and keeled over in the ice of the Weddell Sea. Credit: BF/Frank Hurley

In 1915, intrepid British explorer Sir Ernest Shackleton and his crew were stranded for months in the Antarctic after their ship, Endurance, was trapped by pack ice, eventually sinking into the freezing depths of the Weddell Sea. Miraculously, the entire crew survived. The prevailing popular narrative surrounding the famous voyage features two key assumptions: that Endurance was the strongest polar ship of its time, and that the ship ultimately sank after ice tore away the rudder.

However, a fresh analysis reveals that Endurance would have sunk even with an intact rudder; it was crushed by the cumulative compressive forces of the Antarctic ice with no single cause for the sinking. Furthermore, the ship wasn’t designed to withstand those forces, and Shackleton was likely well aware of that fact, according to a new paper published in the journal Polar Record. Yet he chose to embark on the risky voyage anyway.

Author Jukka Tuhkuri of Aalto University is a polar explorer and one of the leading researchers on ice worldwide. He was among the scientists on the Endurance22 mission that discovered the Endurance shipwreck in 2022, documented in a 2024 National Geographic documentary. The ship was in pristine condition partly because of the lack of wood-eating microbes in those waters. In fact, the Endurance22 expedition’s exploration director, Mensun Bound, told The New York Times at the time that the shipwreck was the finest example he’s ever seen; Endurance was “in a brilliant state of preservation.”

As previously reported, Endurance set sail from Plymouth on August 6, 1914, with Shackleton joining his crew in Buenos Aires, Argentina. By the time they reached the Weddell Sea in January 1915, accumulating pack ice and strong gales slowed progress to a crawl. Endurance became completely icebound on January 24, and by mid-February, Shackleton ordered the boilers to be shut off so that the ship would drift with the ice until the weather warmed sufficiently for the pack to break up. It would be a long wait. For 10 months, the crew endured the freezing conditions. In August, ice floes pressed into the ship with such force that the ship’s decks buckled.

The ship’s structure nonetheless remained intact, but by October 25, Shackleton realized Endurance was doomed. He and his men opted to camp out on the ice some two miles (3.2 km) away, taking as many supplies as they could with them. Compacted ice and snow continued to fill the ship until a pressure wave hit on November 13, crushing the bow and splitting the main mast—all of which was captured on camera by crew photographer Frank Hurley. Another pressure wave hit in the late afternoon on November 21, lifting the ship’s stern. The ice floes parted just long enough for Endurance to finally sink into the ocean before closing again to erase any trace of the wreckage.

Once the wreck had been found, the team recorded as much as they could with high-resolution cameras and other instruments. Filmmaker Elizabeth Chai Vasarhelyi, who co-directed the documentary, particularly noted the technical challenge of deploying a remote digital 4K camera with lighting at 9,800 feet underwater, and the first deployment at that depth of photogrammetric and laser technology. This resulted in a millimeter-scale digital reconstruction of the entire shipwreck to enable close study of the finer details.

Challenging the narrative

The ice and wave tank at Aalto University. Credit: Aalto University

It was shortly after the Endurance22 mission found the shipwreck that Tuhkuri realized that there had never been a thorough structural analysis conducted of the vessel to confirm the popular narrative. Was Endurance truly the strongest polar ship of that time, and was a broken rudder the actual cause of the sinking? He set about conducting his own investigation to find out, analyzing Shackleton’s diaries and personal correspondence, as well as the diaries and correspondence of several Endurance crew members.

Tuhkuri also conducted a naval architectural analysis of the vessel under the conditions of compressive ice, which had never been done before. He then compared those results with the underwater images of the Endurance shipwreck. He also looked at comparable wooden polar expedition ships and steel icebreakers built in the late 1800s and early 1900s.

Endurance was originally named Polaris; Shackleton renamed it when he purchased the ship in 1914 for his doomed expedition. Per Tuhkuri, the ship had a lower (tween) deck, a main deck, and a short bridge deck above them that stopped at the machine room in order to make space for the steam engine and boiler. There were no beams in the machine room area, nor any reinforcing diagonal beams, which weakened this significant part of the ship’s hull.

This is because Endurance was originally built for polar tourism and for hunting polar bears and walruses in the Arctic; at the ice edge, ships only needed sufficiently strong planking and frames to withstand the occasional collision from ice floes. However, “In pack ice conditions, where compression from the ice needs to be taken into account, deck beams become of key importance,” Tuhkuri wrote. “It is the deck beams that keep the two ship sides apart and maintain the shape of a ship. Without strong enough deck beams, a vessel gets crushed by compressive ice, more or less irrespective of the thickness of planking and frames.”

The Endurance was nonetheless sturdy enough to withstand five serious ice compression events before her final sinking. On April 4, 1915, one of the scientists on board reported hearing loud rumbling noises from a 3-meter-high ice ridge that formed near the ship, causing the ship to vibrate. Tuhkuri believes this was due to a “compressive failure process” as ice crushed against the hull. On July 14, a violent snowstorm hit, and crew members could hear the ice breaking beneath the ship. The ice ridges that formed over the next few days were sufficiently concerning that Shackleton instituted four-hour watches on deck and insisted on having everything packed in case they had to abandon ship.

Crushed by the ice

Idealized cross sections of early Antarctic ships. Endurance was type (a); Fram and Deutschland were type (b). Credit: J. Tuhkuri, 2025

On August 1, an ice floe fractured and grinding noises were heard beneath the ship as the floe piled underneath it, lifting Endurance and causing her to heel first to starboard and then to port, as several deck beams began to buckle. Similar compression events kept happening until there was a sudden escalation on September 30. The hull began vibrating hard enough to shake the whole rigging as even more ice crushed against the hull. Even the linoleum on the floors buckled; Harry McNish wrote in his diary that it looked like Endurance “was going to pieces.”

Yet another ice compression event occurred on October 17, pushing the vessel one meter into the air as the iron plates on the engine room’s floor buckled and slid over each other. Ship scientist Reginald James wrote that “for a time things were not good as the pressure was mostly along the region of the engine room where there are no beams of any strength,” while Captain Worsley described the engine room as “the weakest part of the ship.”

By the afternoon, Endurance was heeled almost 30 degrees to port, so much so that the keel was visible from the starboard side, per Tuhkuri, although the ice started to fracture in the evening so that the ship could shift upright again. The crew finally abandoned ship on October 27 after an even more severe compression event hit a few days before. Endurance finally sank below the ice on November 21.

Tuhkuri’s analysis of the structural damage to Endurance revealed that the rudder and the stern post were indeed torn off, confirmed by crew correspondence and diaries and by the underwater images taken of the wreck. The keel was also ripped off, with McNish noting in his diary that the ship broke into two halves as a result. The underwater images are less clear on this point, but Tuhkuri writes that there is something “some distance forward from the rudder, on the port side” that “could be the end of a displaced part of the keel sticking up from under the ship.”

All the diaries mentioned the buckling and breaking of deck beams, and there was much structural damage to the ship’s sides; for instance, Worsley writes of “great spikes of ice… forcing their way through the ship’s sides.” There are no visible holes in the wreck’s sides in the underwater images, but Tuhkuri posits that the damage is likely buried in the mud on the sea bed, given that by late October, Endurance “was heavily listed and the bottom was exposed.”

Jukka Tuhkuri on the ice. Credit: Aalto University

Based on his analysis, Tuhkuri concluded that the rudder wasn’t the sole or primary reason for the ship’s sinking. “Endurance would have sunk even if it did not have a rudder at all,” Tuhkuri wrote; it was crushed by the ice, with no single reason for its eventual sinking. Shackleton himself described the process as ice floes “simply annihilating the ship.”

Perhaps the most surprising finding is that Shackleton knew of Endurance’s structural shortcomings even before undertaking the voyage. Per Tuhkuri, the devastating effects of compressive ice on ships were known to shipbuilders in the early 1900s. An early Swedish expedition was forced to abandon its ship Antarctic in February 1903 when it became trapped in the ice. Things progressed much as they later would with Endurance: the ice lifted Antarctic up so that the ship heeled over, with ice-crushed sides, buckling beams, broken planking, and a damaged rudder and stern post. The final sinking occurred when an advancing ice floe ripped off the keel.

Shackleton knew of Antarctic’s fate and had even been involved in the rescue operation. He also helped Wilhelm Filchner make final preparations for Filchner’s 1911–1913 polar expedition with a ship named Deutschland; he even advised his colleague to strengthen the ship’s hull by adding diagonal beams, the better to withstand the Weddell Sea ice. Filchner did so, and as a result, Deutschland survived eight months of being trapped in compressive ice until the ship was finally able to break free and sail home. (It took a torpedo attack in 1917 to sink the good ship Deutschland.)

The same shipyard that modified Deutschland had also just signed a contract to build Endurance (then called Polaris). So both Shackleton and the shipbuilders knew how destructive compressive ice could be and how to bolster a ship against it. Yet Endurance was not outfitted with diagonal beams to strengthen its hull. And knowing this, Shackleton bought Endurance anyway for his 1914–1915 voyage. In a 1914 letter to his wife, he even compared the strength of its construction unfavorably with that of the Nimrod, the ship he used for his 1907–1909 expedition. So Shackleton had to know he was taking a big risk.

“Even simple structural analysis shows that the ship was not designed for the compressive pack ice conditions that eventually sank it,” said Tuhkuri. “The danger of moving ice and compressive loads—and how to design a ship for such conditions—was well understood before the ship sailed south. So we really have to wonder why Shackleton chose a vessel that was not strengthened for compressive ice. We can speculate about financial pressures or time constraints, but the truth is, we may never know. At least we now have more concrete findings to flesh out the stories.”

Polar Record, 2025. DOI: 10.1017/S0032247425100090 (About DOIs).

Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.

Here’s the real reason Endurance sank Read More »

apple-iphone-17-pro-review:-come-for-the-camera,-stay-for-the-battery

Apple iPhone 17 Pro review: Come for the camera, stay for the battery


A weird-looking phone for taking pretty pictures

If your iPhone is your main or only camera, the iPhone 17 Pro is for you.

The iPhone 17 Pro’s excellent camera is the best reason to buy it instead of the regular iPhone 17. Credit: Andrew Cunningham

Apple’s “Pro” iPhones usually look and feel a lot like the regular ones, just with some added features stacked on top. They’ve historically had better screens and more flexible cameras, and there has always been a Max option for people who really wanted to blur the lines between a big phone and a small tablet (Apple’s commitment to the cheaper “iPhone Plus” idea has been less steadfast). But the qualitative experience of holding and using one wasn’t all that different compared to the basic aluminum iPhone.

This year’s iPhone 17 Pro looks and feels like more of a departure from the basic iPhone, thanks to a new design that prioritizes function over form. It’s as though Apple anticipated the main complaints about the iPhone Air—why would I want a phone with worse battery and fewer cameras, why don’t they just make the phone thicker so they can fit in more things—and made a version of the iPhone that they could point to and say, “We already make that phone—it’s that one over there.”

Because the regular iPhone 17 is so good, and because it uses the same 6.3-inch OLED ProMotion screen, I think the iPhone 17 Pro is playing to a narrower audience than usual this year. But Apple’s changes and additions are also tailor-made to serve that audience. In other words, fewer people even need to consider the iPhone Pro this time around, but there’s a lot to like here for actual “pros” and people who demand a lot from their phones.

Design

The iPhone 17 Pro drops the titanium frame of the iPhone 15 Pro and 16 Pro in favor of a return to aluminum. But it’s no longer the aluminum-framed glass-sandwich design that the regular iPhone 17 still uses; it’s a reformulated “aluminum unibody” design that also protects a substantial portion of the phone’s back. It’s the most metal we’ve seen on the back of an iPhone since 2016’s iPhone 7.

But remember that part of the reason the 2017 iPhone 8 and iPhone X switched to the glass sandwich design was wireless charging. The aluminum iPhones always featured some kind of cutouts or gaps in the aluminum to allow Wi-Fi, Bluetooth, and cellular signals through. But the addition of wireless charging to the iPhone meant that a substantial portion of the phone’s back now needed to be permeable by wireless signals, and the solution to that problem was simply to embrace it with a full sheet of glass.

The iPhone 17 Pro returns to the cutout approach, and while it might be functional, it leaves me pretty cold, aesthetically. Small stripes on the sides of the phone and running all the way around the “camera plateau” provide gaps between the metal parts so that you can’t mess with your cellular reception by holding the phone wrong; on US versions of the phone with support for mmWave 5G, there’s another long oval cutout on the top of the phone to allow those signals to pass through.

But the largest and most obvious of these is the sheet of glass on the back that Apple needed to add to make wireless charging work. The aluminum, the cell signal cutouts, and this sheet of glass are all different shades of the phone’s base color (it’s least noticeable on the Deep Blue phone and most noticeable on the orange one).

The result is something that looks sort of unfinished and prototype-y. There are definitely people who will like or even prefer this aesthetic, which makes it clearer that this piece of technology is a piece of technology rather than trying to hide it—the enduring popularity of clear plastic electronics is a testament to this. But it does feel like a collection of design decisions that Apple was forced into by physics rather than choices it wanted to make.

That also extends to the camera plateau area, a reimagining of the old iPhone camera bump that now stretches all the way across the top of the phone. It’s a bit less slick-looking than the one on the iPhone Air because of the multiple lenses. And because the camera bumps are still additional protrusions on top of the plateau, the phone wobbles when it’s resting flat on a table instead of resting on the plateau in a way that stabilizes the phone.

Finally, there’s the weight of the phone, which isn’t breaking records but is a step back from a substantial weight reduction that Apple was using as a first-sentence-of-the-press-release selling point just two years ago. The iPhone 17 Pro weighs the same amount as the iPhone 14 Pro, and it has a noticeable heft to it that the iPhone Air (say) does not have. You’ll definitely notice if (like me) your current phone is an iPhone 15 Pro.

Apple sent me one of its $59 “TechWoven” cases with the iPhone 17 Pro, and it solved a lot of what I didn’t like about the design—the inconsistent materials and colors everywhere, and the bump-on-a-bump camera. There’s still a bump on the top, but at least the aperture of a case evens it out so that your phone isn’t tilted by the plateau and wobbling because of the bump.

I liked Apple’s TechWoven case for the iPhone Pro, partly because it papered over some of the things I don’t love about the design. Credit: Andrew Cunningham

The original FineWoven cases were (rightly) panned for how quickly and easily they scratched, but the TechWoven case might be my favorite Apple-designed phone case of the ones I’ve used. It doesn’t have the weird soft lint-magnet feel of some of the silicone cases, FineWoven’s worst problems seem solved, and the texture on the sides of the case provides a reassuring grippiness. My main issue is that the opening for the USB-C port on the bottom is relatively narrow. Apple’s cables will fit fine, but I had a few older or thicker USB-C connectors that didn’t.

This isn’t a case review, but I bring it up mainly to say that I stand by my initial assessment of the Pro’s function-over-form design: I am happy I put it in a case, and I think you will be, too, whichever case you choose (when buying for myself or family members, I have defaulted to Smartish cases for years, but your mileage may vary).

On “Scratchgate”

Early reports from Apple’s retail stores indicated that the iPhone 17 Pro’s design was more susceptible to scratches than past iPhones and that some seemed to be showing marks from as simple and routine an activity as connecting and disconnecting a MagSafe charging pad.

Apple says the marks left by its in-store MagSafe chargers weren’t permanent scratches and could be cleaned off. But independent testing from the likes of iFixit has found that the anodization process Apple uses to add color to the iPhone’s aluminum frame is more susceptible to scratching and flaking on non-flat surfaces like the edges of the camera bump.

Like “antennagate” and “bendgate” before it, many factors will determine whether “scratchgate” is actually something you’ll notice. Independent testing shows there is something to the complaints, but it doesn’t show how often this kind of damage will appear in actual day-to-day use over the course of months or years. Do keep it in mind when deciding which iPhone and accessories you want—it’s just one more reason to keep the iPhone 17 Pro in a case, if you ask me—but I wouldn’t say it should keep you from buying this phone if you like everything else about it.

Camera

I have front-loaded my complaints about the iPhone 17 Pro to get them out of the way, but the fun thing about an iPhone in which form follows function is that you get a lot of function.

When I made the jump from the regular iPhone to the Pro (I went from an 11 to a 13 Pro and then to a 15 Pro), I did it mainly for the telephoto lens in the camera. For both kid photos and casual product photography, it was game-changing to be able to access the functional equivalent of optical zoom on my phone.

The iPhone 17 Pro’s telephoto lens in 4x mode. Andrew Cunningham

The iPhone 16 Pro changed the telephoto lens’s zoom level from 3x to 5x, which was useful if you wanted maximum zoom but which left a gap between it and the Fusion Camera-enabled 2x mode. The 17 Pro switches to a 4x zoom by default, closing that gap, and it further extends its zoom range by switching to a 48 MP sensor.

Like the main and ultrawide cameras, which had already switched to 48 MP sensors in previous models, the telephoto camera saves 24 MP images when shooting in 4x mode. But it can also crop a 12 MP image out of the center of that sensor to provide a native-resolution 12 MP image at an 8x zoom level, albeit without the image quality improvements from the “pixel binning” process that 4x images get.
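The crop arithmetic is simple to verify. A minimal sketch, assuming an idealized 48 MP sensor behind the 4x lens; the numbers mirror the modes described above rather than Apple's exact sensor geometry:

```python
# Sensor-crop zoom math: cropping to 1/4 of the pixels keeps 1/2 of the
# frame in each linear dimension, doubling the effective focal length.
# Values mirror the modes described above; the geometry is idealized.

full_mp = 48.0   # native telephoto sensor resolution, megapixels
crop_mp = 12.0   # center crop used for the longer zoom level
base_zoom = 4.0  # optical zoom of the telephoto lens

linear_crop_factor = (full_mp / crop_mp) ** 0.5
effective_zoom = base_zoom * linear_crop_factor

print(linear_crop_factor)  # 2.0
print(effective_zoom)      # 8.0 -- the "optical-quality" 8x mode
```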

You can debate how accurate it is to market this as “optical-quality zoom” as Apple does, but it’s hard to argue with the results. The level of detail you can capture from a distance in 8x mode is consistently impressive, and Apple’s hardware and software image stabilization help keep these details reasonably free of the shake and blur you might see if you were shooting at this zoom level with an actual hardware lens.

It’s my favorite feature of the iPhone 17 Pro, and it’s the thing about the phone that comes closest to being worth the $300 premium over the regular iPhone 17.

The iPhone 17 Pro, main lens, 1x mode. Andrew Cunningham

Apple continues to gate several other camera-related features to the Pro iPhones. All phones can shoot RAW photos in third-party camera apps that support it, but only the Pro iPhones can shoot Apple’s ProRAW format in the first-party camera app (ProRAW performs Apple’s typical image processing for RAW images but retains all the extra information needed for more flexible post-processing).

I don’t spend as much time shooting video on my phone as I do photos, but for the content creator and influencer set (and the “we used phones and also professional lighting and sound equipment to shoot this movie” set) Apple still reserves several video features for the Pro iPhones. That list includes 120 fps 4K Dolby Vision video recording and a four-mic array (both also supported by the iPhone 16 Pro), plus ProRes RAW recording and Genlock support for synchronizing video from multiple sources (both new to the 17 Pro).

The iPhone Pro also remains the only iPhone to support 10 Gbps USB transfer speeds over the USB-C port, making it faster to transfer large video files from the phone to an external drive or a PC or Mac for additional processing and editing. It’s likely that Apple built this capability into the A19 Pro’s USB controller, but both the iPhone Air and the regular iPhone 17 are restricted to the 25-year-old 480 Mbps data transfer speeds of USB 2.0.
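The practical difference is easy to put in numbers. A rough sketch, treating the link rates as ideal throughput (real-world speeds are lower) and assuming a hypothetical 50 GB video file:

```python
# Rough transfer-time arithmetic for the two USB speeds mentioned above.
# Link rates are treated as ideal throughput; the 50 GB file size is an
# assumed example, not a specific Apple workload.

file_gb = 50      # assumed size of a large video file, GB
usb2_bps = 480e6  # USB 2.0 rate, bits/s
usb3_bps = 10e9   # USB 3 (10 Gbps) rate, bits/s

def transfer_seconds(size_gb: float, rate_bps: float) -> float:
    return size_gb * 8e9 / rate_bps  # GB -> bits, then divide by bits/s

print(transfer_seconds(file_gb, usb2_bps) / 60)  # ~13.9 minutes on USB 2.0
print(transfer_seconds(file_gb, usb3_bps))       # ~40 seconds at 10 Gbps
```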

The iPhone 17 Pro gets the same front camera treatment as the iPhone 17 and the Air: a new square “Center Stage” sensor that crops a 24 MP square image into an 18 MP image, allowing users to capture approximately the same aspect ratios and fields-of-view with the front camera regardless of whether they’re holding the phone in portrait or landscape mode. It’s definitely an image-quality improvement, but it’s the same as what you get with the other new iPhones.

Specs, speeds, and battery

You still need to buy a Pro phone to get a USB-C port with 10 Gbps USB 3 transfer speeds instead of 480 Mbps USB 2.0 speeds. Credit: Andrew Cunningham

The iPhone 17 Pro uses, by a slim margin, the fastest and most capable version of the A19 Pro chip, partly because it has all of the A19 Pro’s features fully enabled and partly because its thermal management is better than the iPhone Air’s.

The A19 Pro in the iPhone 17 Pro uses two high-performance CPU cores and four smaller high-efficiency CPU cores, plus a fully enabled six-core GPU. Like the iPhone Air, the iPhone Pro also includes 12GB of RAM, up from 8GB in the iPhone 16 Pro and the regular iPhone 17. Apple has added a vapor chamber to the iPhone 17 Pro to help keep it cool rather than relying on metal alone to conduct heat away from the chips: a tiny amount of water inside a small metal pocket continually evaporates and condenses inside the closed copper-lined chamber. This spreads the heat evenly over a large area, compared to just using metal to conduct the heat; spreading the heat over a larger area then allows it to be dissipated more quickly.
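The latent-heat arithmetic shows why so little water is needed. A back-of-envelope sketch; the evaporation rate is an assumed figure for illustration, not a measured spec:

```python
# Each gram of water evaporated absorbs its latent heat of vaporization,
# which is what lets a tiny amount of fluid move a lot of heat. The
# evaporation rate below is an assumed illustrative figure.

latent_heat_j_per_g = 2260.0  # water, approximate
evap_rate_g_per_s = 0.005     # assumed: 5 mg of water evaporated per second

heat_moved_watts = latent_heat_j_per_g * evap_rate_g_per_s
print(heat_moved_watts)  # ~11 W, on the order of a phone SoC under load
```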

All phones were tested with Adaptive Power turned off.

We saw in our iPhone 17 review how that phone’s superior thermals helped it outrun the iPhone Air’s version of the A19 Pro in many of our graphics tests; the iPhone Pro’s A19 Pro beats both by a decent margin, thanks to both thermals and the extra hardware.

The performance line graph that 3DMark generates when you run its benchmarks actually gives us a pretty clear look at the difference between how the iPhones act. The graphs for the iPhone 15 Pro, the iPhone 17, and the iPhone 17 Pro all look pretty similar, suggesting that they’re cooled well enough to let the benchmark run for a couple of minutes without significant throttling. The iPhone Air follows a similar performance curve for the first half of the test or so but then drops noticeably lower for the second half—the ups and downs of the line actually look pretty similar to the other phones, but the performance is just a bit lower because the A19 Pro in the iPhone Air is already slowing down to keep itself cool.

The CPU performance of the iPhone 17 Pro is also marginally better than this year’s other phones, but not by enough that it will be user-noticeable.

As for battery, Apple’s own product pages say it lasts for about 10 percent longer than the regular iPhone 17 and between 22 and 36 percent longer than the iPhone Air, depending on what you’re doing.

I found the iPhone Air’s battery life to be tolerable with a little bit of babying and well-timed use of the Low Power Mode feature, and the iPhone 17’s battery was good enough that I didn’t worry about making it through an 18-hour day. But the iPhone 17 Pro’s battery really is a noticeable step up.

One day, I forgot to plug it in overnight and awoke to a phone that still had a 30 percent charge, enough that I could make it through the morning school drop-off routine and plug it in when I got back home. Not only did I not have to think about the iPhone 17 Pro’s battery, but it’s good enough that even a battery with 85-ish percent capacity (where most of my iPhone batteries end up after two years of regular use) should still feel pretty comfortable. After the telephoto camera lens, it’s definitely the second-best thing about the iPhone 17 Pro, and the Pro Max should last for even longer.

Pros only

Apple’s iPhone 17 Pro. Credit: Andrew Cunningham

I’m taken with a lot of things about the iPhone 17 Pro, but the conclusion of our iPhone 17 review still holds: If you’re not tempted by the lightness of the iPhone Air, then the iPhone 17 is the one most people should get.

Even more than most Pro iPhones, the iPhone 17 Pro and Pro Max will make the most sense for people who actually use their phones professionally, whether that’s for product or event photography, content creation, or some other camera-centric field where extra flexibility and added shooting modes can make a real difference. The same goes for people who want a bigger screen, since there’s no iPhone 17 Plus.

Sure, the 17 Pro also performs a little better than the regular 17, and the battery lasts longer. But the screen was always the most immediately noticeable upgrade for regular people, and the exact same display panel is now available in a phone that costs $300 less.

The benefit of the iPhone Pro becoming a bit more niche is that it’s easier to describe who each of these iPhones is for. The Air is the most pleasant to hold and use, and it’s the one you’ll probably buy if you want people to ask you, “Oh, is that one of the new iPhones?” The Pro is for people whose phones are their most important camera (or for people who want the biggest phone they can get). And the iPhone 17 is for people who just want a good phone but don’t want to think about it all that much.

The good

  • Excellent performance and great battery life
  • It has the most flexible camera in any iPhone, and the telephoto lens in particular is a noticeable step up from a 2-year-old iPhone 15 Pro
  • 12GB of RAM provides extra future-proofing compared to the standard iPhone
  • Not counting the old iPhone 16, it’s Apple’s only iPhone to be available in two screen sizes
  • Extra photography and video features for people who use those features in their everyday lives or even professionally

The bad

  • Clunky, unfinished-looking design
  • More limited color options compared to the regular iPhone
  • Expensive
  • Landscape layouts for apps only work on the Max model

The ugly

  • Increased weight compared to previous models, which actually used their lighter weight as a selling point

Andrew is a Senior Technology Reporter at Ars Technica, with a focus on consumer tech including computer hardware and in-depth reviews of operating systems like Windows and macOS. Andrew lives in Philadelphia and co-hosts a weekly book podcast called Overdue.

Apple iPhone 17 Pro review: Come for the camera, stay for the battery Read More »

world-famous-primatologist-jane-goodall-dead-at-91

World-famous primatologist Jane Goodall dead at 91

A sculpture of Jane Goodall and David Greybeard outside the Field Museum of Natural History in Chicago Credit: Geary/CC0

David Greybeard’s behavior also challenged the long-held assumption that chimpanzees were vegetarians. Goodall found that chimps would hunt and eat smaller primates like colobus monkeys as well, sometimes sharing the carcass with other troop members. She also recorded evidence of strong bonds between mothers and infants, altruism, compassion, and aggression and violence. For instance, dominant females would sometimes kill the infants of rival females, and from 1974 to 1978, there was a violent conflict between two communities of chimpanzees that became known as the Gombe Chimpanzee War.

Almost human

One of the more colorful chimps Goodall studied was named Frodo, who grew up to be an alpha male with a temperament very unlike his literary namesake. “As an infant, Frodo proved mischievous, disrupting Jane Goodall’s efforts to record data on mother-infant relationships by grabbing at her notebooks and binoculars,” anthropologist Michael Wilson of the University of Minnesota in Saint Paul recalled on his blog when Frodo died from renal failure in 2013. “As he grew older, Frodo developed a habit of throwing rocks, charging at, hitting, and knocking over human researchers and tourists.” Frodo attacked Wilson twice on Wilson’s first trip to Gombe and even beat Goodall herself in 1989, although he eventually lost his alpha status and “mellowed considerably” in his later years, per Wilson.

Goodall became so renowned around the world that she even featured in one of Gary Larson’s Far Side cartoons, in which two chimps are shown grooming when one finds a blonde hair on the other. “Conducting a little more ‘research’ with that Jane Goodall tramp?” the caption read. The JGI was not amused, sending Larson a letter (without Goodall’s knowledge) calling the cartoon an “atrocity,” but its objections were not shared by Goodall herself, who thought the cartoon was very funny when she heard of it. Goodall even wrote a preface to The Far Side Gallery 5. Larson, for his part, visited Goodall’s research facility in Tanzania in 1988, where he experienced Frodo’s alpha aggressiveness firsthand.

A young Jane Goodall in the field. Credit: YouTube/Jane Goodall Institute

Goodall founded the JGI in 1977 and authored more than 27 books, most notably My Friends, the Wild Chimpanzees (1967), In the Shadow of Man (1971), and Through a Window (1990). There was some initial controversy around her 2014 book Seeds of Hope, co-written with Gail Hudson, when portions were found to have been plagiarized from online sources; the publisher postponed publication so that Goodall could revise the book and add 57 pages of endnotes. (She blamed her “chaotic note-taking” for the issue.) National Geographic released a full-length documentary last year about her life’s work, drawing from over 100 hours of previously unseen archival footage.

World-famous primatologist Jane Goodall dead at 91 Read More »

tesla-reverses-sales-decline-in-q3,-sells-50k-more-cars-than-it-built

Tesla reverses sales decline in Q3, sells 50k more cars than it built

This morning, Tesla published its production and delivery numbers for the third quarter of the year. We’ve heard the same story for a while, one of diminishing sales as customers tire of a stale product lineup and are repulsed by the politics of the company’s CEO. But Q3 2025 tells a different tale. It’s been a good three months for the beleaguered automaker, one that appears to have cleared out a lot of old inventory.

Tesla built a total of 447,450 electric vehicles between July and September this year. That’s actually a 4.8 percent decrease compared to the same three months last year.

The Models 3 and Y production lines saw less of a slowdown—Tesla built 435,826 of these EVs, a 1.8 percent decline on last year. But the Models S and X, grouped together with the US-only Cybertruck, saw the greatest cutbacks. Just 11,624 of these combined models were produced, a 55.1 percent decrease compared to Q3 2024.

By contrast, Tesla managed to sell 497,099 cars during Q3 2025, a 7.4 percent increase compared to Q3 2024. The Models 3 and Y did all the heavy lifting here, increasing sales by 9.4 percent year over year to 481,166. But the near-antique Models S and X, and the divisive Cybertruck kept playing the old tune: sales of these models dropped by 30.5 percent to just 15,933 units.

That’s well above most analysts’ estimates for Q3, which predicted that the automaker would sell fewer than 450,000. The end of the IRS clean vehicle tax credit in the US is believed to be a significant contributing factor to the sales growth, although registration data from Europe has shown sales growth in France, Spain, Denmark, and Norway.

It’s quite the clear-out of inventory—more than 45,000 Models 3 and Y and more than 4,000 of Tesla’s other EVs have been cleared from Tesla’s books.
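The arithmetic behind that headline checks out against the figures quoted above; a quick sketch:

```python
# Deliveries minus production, per line, from the figures quoted above.

built = {"3/Y": 435_826, "S/X/Cybertruck": 11_624}
sold = {"3/Y": 481_166, "S/X/Cybertruck": 15_933}

for line, b in built.items():
    print(line, sold[line] - b)  # 3/Y: 45,340; S/X/Cybertruck: 4,309

print(sum(sold.values()) - sum(built.values()))  # 49,649 -- the ~50k gap
```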

Tesla reverses sales decline in Q3, sells 50k more cars than it built Read More »

how-automakers-are-reacting-to-the-end-of-the-$7,500-ev-tax-credit

How automakers are reacting to the end of the $7,500 EV tax credit

Just after midnight this morning, in addition to getting a federal government shutdown, we also lost all federal tax credits for new electric vehicles, used electric vehicles, and commercial electric vehicles.

Sadly, this was not a surprise. During last year’s election, the Trump campaign made no secret of its disgust toward clean vehicles (and clean energy in general), and it promised to end subsidies meant to encourage Americans to switch from internal combustion engines to EVs. Once in power, the Republicans moved quickly to make this happen.

Federal clean vehicle incentives had only recently been revamped as part of then-US President Joe Biden’s massive investment in clean technologies, the Inflation Reduction Act of 2022. To qualify for the $7,500 tax credit, a new EV had to have its final assembly in North America, and certain percentages of its battery content needed to be domestically sourced.

A separate $7,500 commercial tax credit for new EVs was created, which did not require domestic assembly or content and which applied to leased EVs. And Congress finally added a $4,000 tax credit for the purchase of a used EV.

Visiting the relevant IRS page today, though, you’ll see an update declaring that the “New Clean Vehicle Credit, Previously-Owned Clean Vehicle Credit, and Qualified Commercial Clean Vehicle Credit are not available for vehicles acquired after Sept. 30, 2025.”

How automakers are reacting to the end of the $7,500 EV tax credit Read More »

in-2022,-the-world-axed-a-disease-name-seen-as-racist-us-just-switched-back.

In 2022, the world axed a disease name seen as racist. US just switched back.

Switching names

In November 2022, the WHO decided to change the name. The United Nations health agency noted that it had received reports from individuals and countries about the “racist and stigmatizing language online, in other settings, and in some communities.” The WHO decided to switch to the name “mpox” with a one-year grace period.

The agency also clarified its authority to make such a change, saying: “Assigning names to new and, very exceptionally, to existing diseases is the responsibility of WHO under the International Classification of Diseases (ICD) and the WHO Family of International Health Related Classifications through a consultative process which includes WHO Member States.”

The WHO does not, however, have the authority to change the names of viruses. That power belongs to the International Committee on Taxonomy of Viruses, which has not changed the name of the virus.

While the virus remains the same, the world has shifted to using mpox to discuss the disease. The US CDC followed suit, changing its websites and health information to use the new name.

This month, however, the CDC reverted to monkeypox. The change was first reported by NPR. When journalists have asked about the change, the Department of Health and Human Services (which includes the CDC) has responded only by saying “Monkeypox is the name of the viral disease caused by the monkeypox virus,” which is not accurate: the WHO-designated name for the disease is mpox, even though the virus itself is still called monkeypox virus.

In 2022, the world axed a disease name seen as racist. US just switched back. Read More »

on-dwarkesh-patel’s-podcast-with-richard-sutton

On Dwarkesh Patel’s Podcast With Richard Sutton

This seems like a good opportunity to do some of my classic detailed podcast coverage.

The conventions are:

  1. This is not complete, points I did not find of note are skipped.

  2. The main part of each point is descriptive of what is said, by default paraphrased.

  3. For direct quotes I will use quote marks, by default this is Sutton.

  4. Nested statements are my own commentary.

  5. Timestamps are approximate and from his hosted copy, not the YouTube version; in this case I didn’t bother, because the section divisions in the transcript should make this very easy to follow without them.

Full transcript of the episode is here if you want to verify exactly what was said.

Well, that was the plan. This turned largely into me quoting Sutton and then expressing my mind boggling. A lot of what was interesting about this talk was in the back and forth, or in ways Sutton lays things out that I found impossible to excerpt, so one could consider following along with the transcript while listening.

  1. (0:33) RL and LLMs are very different. RL is ‘basic’ AI. Intelligence and RL are about understanding your world. LLMs mimic people; they don’t figure out what to do.

    1. RL isn’t strictly about ‘understanding your world’ except insofar as it is necessary to do the job. The same applies to LLMs, no?

    2. To maximize RL signal you need to understand and predict the world, aka you need intelligence. To mimic people, you have to understand and predict them, which in turn requires understanding and predicting the world. Same deal.

  2. (1:19) Dwarkesh points out that mimicry requires a robust world model; indeed, LLMs have the best world models to date. Sutton disagrees: you’re mimicking people, and he questions whether people have a world model. He says a world model would allow you to predict what would happen, whereas people can’t do that.

    1. People don’t always have an explicit world model, but sometimes they do, and they have an implicit one running under the hood.

    2. Even if people didn’t have a world model in their heads, their outputs in a given situation depend on the world, which you then have to model, if you want to mimic those humans.

    3. People predict what will happen all the time, on micro and macro levels. On the micro level they are usually correct. On sufficiently macro levels they are often wrong, but this still counts. If the claim is ‘if you can’t reliably predict what will happen then you don’t have a model’ then we disagree on what it means to have a model, and I would claim no such-defined models exist at any interesting scale or scope.

  3. (1:38) “What we want, to quote Alan Turing, is a machine that can learn from experience, where experience is the things that actually happen in your life. You do things, you see what happens, and that’s what you learn from. The large language models learn from something else. They learn from “here’s a situation, and here’s what a person did”. Implicitly, the suggestion is you should do what the person did.”

    1. That’s not the suggestion. If [X] is often followed by [Y], then the suggestion is not ‘if [X] then you should do [Y],’ it is ‘[X] means [Y] is likely.’ So yes, if you are asked ‘what is likely after [X]’ it will respond [Y], but it will also internalize everything implied by this fact, and the fact is not in any way normative.

    2. That’s still ‘learning from experience’; it’s simply not continual learning.

    3. Do LLMs do continual learning, e.g. ‘from what actually happens in your life’ in particular? Not in their current forms, not technically, but there’s no inherent reason they couldn’t; you’d just do [mumble], except that doing so would get rather expensive.

    4. You can also have them learn via various forms of external memory, broadly construed, including having them construct programs. It would work.

    5. Not that it’s obvious that you would want an LLM or other AI to learn specifically from what happens in your life, as opposed to learning from things that happen in lives in general plus having context and memory.

  4. (2:39) Dwarkesh responds with a potential crux that imitation learning is a good prior or reasonable approach, and gives the opportunity to get answers right sometimes, then you can train on experience. Sutton says no, that’s the LLM perspective, but the LLM perspective is bad. It’s not ‘actual knowledge.’ You need continual learning, so you need to know what’s right during interactions, but the LLM setup can’t tell, because there’s no ground truth and no prediction about what will happen next.

    1. I don’t see Dwarkesh’s question as a crux.

    2. I think Sutton’s response is quite bad, relying on invalid sacred word defenses.

    3. I think Sutton wants to draw a distinction between events in the world and tokens in a document. I don’t think you can do that.

    4. There is no ‘ground truth’ other than the feedback one gets from the environment. I don’t see why a physical response is different from a token, or from a numerical score. The feedback involved can come from anywhere, including from self-reflection if verification is easier than generation or can be made so in context, and it still counts. What is this special ‘ground truth’?

    5. Almost all feedback is noisy because almost all outcomes are probabilistic.

    6. You think that’s air you’re experiencing breathing? Does that matter?

  5. (5:29) Dwarkesh points out you can literally ask “What would you anticipate a user might say in response?” but Sutton rejects this because it’s not a ‘substantive’ prediction and the LLM won’t be ‘surprised’ or “they will not change because an unexpected thing has happened. To learn that, they’d have to make an adjustment.”

    1. Why is this ‘not substantive’ in any meaningful way, especially if it is a description of a substantive consequence, which speech often is?

    2. How is it not ‘surprise’ when a low-probability token appears in the text? (A minimal surprisal sketch appears after this list.)

    3. There are plenty of times a human is surprised by an outcome but does not learn from it out of context. For example, I roll a d100 and get a 1. Okie dokie.

    4. LLMs do learn from a surprising token in training. You can always train. This seems like an insistence that surprise requires continual learning? Why?

  6. Dwarkesh points out LLMs update within a chain-of-thought, so flexibility exists in a given context. Sutton reiterates they can’t predict things and can’t be surprised. He insists that “The next token is what they should say, what the actions should be. It’s not what the world will give them in response to what they do.”

    1. What is Sutton even saying, at this point?

    2. Again, there is this claim that outputting or predicting a token is distinct from ‘taking an action,’ and that getting a token back is not the world responding.

    3. I’d point out the same applies to the rest of the tokens in context without CoT.

  7. (6:47) Sutton claims something interesting, that intelligence requires goals: “I like John McCarthy’s definition that intelligence is the computational part of the ability to achieve goals. You have to have goals or you’re just a behaving system.” And he asks Dwarkesh if he agrees that LLMs don’t have goals (or don’t have ‘substantive’ goals, and that next token prediction is not a goal, because it doesn’t influence the tokens).

    1. Okay, seriously, this is crazy, right?

    2. What is this ‘substantive’ thing? If you say something on the internet, it gets read in real life. It impacts real life. It causes real people to do ‘substantive’ things, and achieving many goals within the internet requires ‘substantive’ changes in the offline world. If you’re dumb on the internet, you’re dumb in real life. If you die on the internet, you die in real life (e.g. in the sense of an audience not laughing, or people not supporting you, etc).

    3. I feel dumb having to type that, but I’m confused what the confusion is.

    4. Of course next token prediction is a goal. You try predicting the next token (it’s hard!) and then tell me you weren’t pursuing a goal.

    5. Next token prediction does influence the tokens in deployment, because the LLM will output the next most likely token, which changes what tokens come after, both its own and the user’s, and also the real world.

    6. Next token prediction does influence the world in training, because the feedback on that prediction’s accuracy will change the model’s weights, if nothing else. Those are part of the world.

    7. If intelligence requires goals, and something clearly displays intelligence, then that something must have a goal. If you conclude that LLMs ‘don’t have intelligence’ in 2025, you’ve reached a wrong conclusion. Wrong conclusions are wrong. You made a mistake. Retrace your steps until you find it.

  8. Dwarkesh next points out you can do RL on top of LLMs, and they get IMO gold, and asks why Sutton still doesn’t think that is anything. Sutton doubles down that math operations still aren’t the empirical world, doesn’t count.

    1. Are you kidding me? So symbolic things aren’t real, period, and manipulating them can’t be intelligence, period?

  9. Dwarkesh notes that Sutton is famously the author of The Bitter Lesson, which is constantly cited as inspiring and justifying the whole ‘stack more layers’ scaling of LLMs that basically worked, yet Sutton doesn’t see LLMs as ‘bitter lesson’ pilled. Sutton says they’re also putting in lots of human knowledge, so kinda yes, kinda no; he expects new systems that ‘learn from experience,’ ‘perform much better,’ and are ‘more scalable’ to then be another instance of the Bitter Lesson?

    1. This seems like backtracking on the Bitter Lesson? At least kinda. Mostly he’s repeating that LLMs are one way and it’s the other way, and therefore Bitter Lesson will be illustrated the other way?

  10. “In every case of the bitter lesson you could start with human knowledge and then do the scalable things. That’s always the case. There’s never any reason why that has to be bad. But in fact, and in practice, it has always turned out to be bad. People get locked into the human knowledge approach, and they psychologically… Now I’m speculating why it is, but this is what has always happened. They get their lunch eaten by the methods that are truly scalable.”

    1. I do not get where ‘truly scalable’ is coming from here, as it becomes increasingly clear that he is using words in a way I’ve never seen before.

    2. If anything it is the opposite. The real objection is training efficiency, or failure to properly update from direct relevant experiences, neither of which has anything to do with scaling.

    3. I also continue not to see why there is this distinction ‘human knowledge’ versus other information? Any information available to the AI can be coded as tokens and be put into an LLM, regardless of its ‘humanness.’ The AI can still gather or create knowledge on its own, and LLMs often do.

  11. “The scalable method is you learn from experience. You try things, you see what works. No one has to tell you. First of all, you have a goal. Without a goal, there’s no sense of right or wrong or better or worse. Large language models are trying to get by without having a goal or a sense of better or worse. That’s just exactly starting in the wrong place.”

    1. Again, the word ‘scaling’ is being used in a completely alien manner here. He seems to be trying to say ‘successful’ or ‘efficient.’

    2. You have to have a ‘goal’ in the sense of a means of selecting actions, and a way of updating based on those actions, but in this sense LLMs in training very obviously have ‘goals’ regardless of whether you’d use that word that way.

    3. Except Sutton seems to think this ‘goal’ needs to exist in some ‘real world’ sense or it doesn’t count and I continue to be boggled by this request, and there are many obvious counterexamples, but I risk repeating myself.

    4. No sense of better or worse? What do you think thumbs up and down are? What do you think evaluators are? Does he not think an LLM can do evaluation?
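On the ‘surprise’ question in point 5 above: token-level surprise has a standard formalization, the surprisal -log p(token), and pretraining literally adjusts weights to reduce it. A minimal sketch, with a toy fragment of a next-token distribution assumed for illustration:

```python
import math

# "Surprise" as token surprisal, -log2 p(token). The distribution below
# is a toy fragment assumed for illustration; a real LLM produces one
# over its entire vocabulary at every step.

next_token_probs = {"the": 0.60, "a": 0.30, "zebra": 0.001}

def surprisal_bits(token: str) -> float:
    return -math.log2(next_token_probs[token])

print(surprisal_bits("the"))    # ~0.74 bits: expected, low surprise
print(surprisal_bits("zebra"))  # ~10 bits: a genuinely surprising token

# Training minimizes exactly this quantity (cross-entropy) over data, so
# a high-surprisal token produces a large gradient, i.e. an adjustment.
```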

Sutton has a reasonable hypothesis that a different architecture, that uses a form of continual learning and that does so via real world interaction, would be an interesting and potentially better approach to AI. That might be true.

But his uses of words do not seem to match their definitions or common usage, his characterizations of LLMs seem deeply confused, and he’s drawing a bunch of distinctions and treating them as meaningful in ways that I don’t understand. This results in absurd claims like ‘LLMs are not intelligent and do not have goals’ and that feedback from digital systems doesn’t count, and so on.

It seems like a form of essentialism, the idea that ‘oh LLMs can never [X] because they don’t [Y]’ where when you then point (as people frequently do) to the LLM doing [X] and often also doing [Y] and they say ‘la la la can’t hear you.’

  1. Dwarkesh claims humans initially do imitation learning, Sutton says obviously not. “When I see kids, I see kids just trying things and waving their hands around and moving their eyes around. There’s no imitation for how they move their eyes around or even the sounds they make. They may want to create the same sounds, but the actions, the thing that the infant actually does, there’s no targets for that. There are no examples for that.”

    1. GPT-5 Thinking says partly true, but only 30% in the first months, more later on. Gemini says yes. Claude says yes: “Imitation is one of the core learning mechanisms from birth onward. Newborns can imitate facial expressions within hours of birth (tongue protrusion being the classic example). By 6-9 months, they’re doing deferred imitation – copying actions they saw earlier. The whole mirror neuron system appears to be built for this.”

    2. Sutton’s claim seems clearly so strong as to be outright false here. He’s not saying ‘they do more non-imitation learning than imitation learning in the first few months,’ he is saying ‘there are no examples of that’ and there are very obviously examples of that. Here’s Gemini: “Research has shown that newborns, some just a few hours old, can imitate simple facial expressions like sticking out their tongue or opening their mouth. This early imitation is believed to be a reflexive behavior that lays the groundwork for more intentional imitation later on.”

  2. “School is much later. Okay, I shouldn’t have said never. I don’t know, I think I would even say that about school. But formal schooling is the exception. You shouldn’t base your theories on that.” “Supervised learning is not something that happens in nature. Even if that were the case with school, we should forget about it because that’s some special thing that happens in people.”

    1. At this point I kind of wonder if Sutton has met humans?

    2. As in, I do imitation learning. All. The Time. Don’t you? Like, what?

    3. As in, I do supervised learning. All. The. Time. Don’t you? Like, what?

    4. A lot of this supervised and imitation learning happens outside of ‘school.’

    5. You even see supervised learning in animals, given the existence of human supervisors who want to teach them things. Good dog! Good boy!

    6. You definitely see imitation learning in animals. Monkey see, monkey do.

    7. The reason not to do supervised learning is the cost of the supervisor, or (such as in the case of nature) their unavailability. Thus nature supervises, instead.

    8. The reason not to do imitation learning in a given context is the cost of the thing to imitate, or the lack of a good enough thing to imitate to let you continue to sufficiently progress.

  3. “Why are you trying to distinguish humans? Humans are animals. What we have in common is more interesting. What distinguishes us, we should be paying less attention to.” “I like the way you consider that obvious, because I consider the opposite obvious. We have to understand how we are animals. If we understood a squirrel, I think we’d be almost all the way there to understanding human intelligence. The language part is just a small veneer on the surface.”

    1. Because we want to create something that has what only humans have and animals don’t, which is a high level of intelligence and ability to optimize the arrangements of atoms according to our preferences and goals.

    2. Understanding an existing intelligence is not the same thing as building a new intelligence, which we have also managed to build without understanding.

    3. The way animals have (limited) intelligence does not mean this is the One True Way that intelligence can ever exist. There’s no inherent reason an AI needs to mimic a human let alone an animal, except for imitation learning, or in ways we find this to be useful. We’re kind of looking for our keys under the streetlamp here, while assuming there are no keys elsewhere, and I think we’re going to be in for some very rude (or perhaps pleasant?) surprises.

    4. I don’t want to make a virtual squirrel and scale it up. Do you?

  4. Dwarkesh describes the process of humans learning things over 10k years a la Henrich, of figuring out many-step processes where you can’t one-shot the reasoning. This knowledge evolves over time and is passed down through imitation learning, as are other cultural practices and gains. Sutton agrees, but calls this a ‘small thing.’

    1. You could of course one-shot the process with sufficient intelligence and understanding of the world, what Henrich is pointing out is that in practice this was obviously impossible and not how any of this went down.

    2. Seems like Sutton is saying again that the difference between humans and squirrels is a ‘small thing’ and we shouldn’t care about it? I disagree.

  5. They agree that mammals can do continual learning and LLMs can’t. We all agree that Moravec’s paradox is a thing.

    1. Moravec’s paradox is misleading. There will of course be all four quadrants of things, where for each of [AI, human] things will be [easy, hard].

    2. The same is true for any pair of humans, or any pair of AIs, to a lesser degree.

    3. The reason it is labeled a paradox is that there are some divergences that look very large, larger than one might expect, but this isn’t obvious to me.

  1. “The experiential paradigm. Let’s lay it out a little bit. It says that experience, action, sensation—well, sensation, action, reward—this happens on and on and on for your life. It says that this is the foundation and the focus of intelligence. Intelligence is about taking that stream and altering the actions to increase the rewards in the stream…. This is what the reinforcement learning paradigm is, learning from experience.”

    1. Can be. Doesn’t have to be.

    2. A priori knowledge exists. Paging Descartes’ meditator! Molyneux’s problem.

    3. Words, written and voiced, are sensation, and can also be reward.

    4. Thoughts and predictions, and saying or writing words, are actions.

    5. All of these are experiences. You can do RL on them (and humans do this).

  2. Sutton agrees that the reward function is arbitrary, and can often be ‘seek pleasure and avoid pain.’

    1. That sounds exactly like ‘make number go up’ with extra steps.

  3. Sutton wants to say ‘network’ instead of ‘model.’

    1. Okie dokie, this does cause confusion with ‘world models’ that minds have, as Sutton points out later, so using the same word for both is unfortunate.

    2. I do think we’re stuck with ‘model’ here, but I’d be happy to support moving to ‘network’ or another alternative if one got momentum.

  4. He points out that copying minds is a huge cost savings, more than ‘trying to learn from people.’

    1. Okie dokie, again, but these two are not rivalrous actions.

    2. If anything they are complements. If you learn from general knowledge and experiences it is highly useful to copy you. If you are learning from local particular experiences then your usefulness is likely more localized.

    3. As in, suppose I had a GPT-5 instance, embodied in a humanoid robot, that did continual learning, which let’s call Daneel. I expect that Daneel would rapidly become a better fit to me than to others.

    4. Why wouldn’t you want to learn from all sources, and then make copies?

    5. One answer would be ‘because to store all that info the network would need to be too large and thus too expensive’ but that again pushes you in the other direction, and towards additional scaffolding solutions.

  5. They discuss temporal difference learning and finding intermediate objectives (a minimal TD(0) sketch appears after this list).

  6. Sutton brings up the ‘big world hypothesis’ where to be maximally useful a human or AI needs particular knowledge of a particular part of the world. In continual learning the knowledge goes into weights. “You learn a policy that’s specific to the environment that you’re finding yourself in.”

    1. Well sure, but there are any number of ways to get that context, and to learn that policy. You can even write the policy down (e.g. in claude.md).

    2. Often it would be actively unwise to put that knowledge into weights. There is a reason humans will often use forms of external memory. If you were planning to copy a human into other contexts you’d use it even more.
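For reference on the temporal difference point above, here is a minimal TD(0) value-learning sketch in the spirit of Sutton and Barto; the two-state chain and its rewards are toy assumptions:

```python
# Minimal TD(0) sketch. Toy assumed dynamics: A -> B (reward 0),
# then B -> end (reward 1). V learns bootstrapped long-run value.

states = ["A", "B", "end"]
V = {s: 0.0 for s in states}
alpha, gamma = 0.1, 0.9

def step(s: str) -> tuple[str, float]:
    return ("B", 0.0) if s == "A" else ("end", 1.0)

for _ in range(1000):
    s = "A"
    while s != "end":
        s_next, r = step(s)
        # TD(0): move V(s) toward the bootstrapped target r + gamma * V(s').
        V[s] += alpha * (r + gamma * V[s_next] - V[s])
        s = s_next

print(V)  # V("B") -> ~1.0, V("A") -> ~0.9 (= gamma * V("B"))
```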

  1. Sutton lays out the common model of the agent discussed above (a minimal sketch of the loop appears below). The new claim seems to be that you learn from all the sensation you receive, not just from the reward. And there is emphasis on the importance of the ‘transition model’ of the world.

    1. I once again don’t see the distinction between this and learning from a stream of tokens, whether one or two directional, or even from contemplation, where again (if you had an optimal learning policy) you would pay attention to all the tokens and not only to the formal reward, as indeed a human does when learning from a text, or from sending tokens and getting tokens back in various forms.

    2. In terms of having a ‘transition model,’ I would say that again this is something all agents or networks need similarly, and can ‘get away with not having’ to roughly similar extents.

So do humans.
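Since the common model of the agent keeps coming up, here is a minimal sketch of the sensation-action-reward loop. The environment, the reward, and the epsilon-greedy value-learning policy are all stand-ins assumed purely for illustration:

```python
import random

# Minimal sensation-action-reward loop (the standard RL agent-environment
# interface). The environment and reward are assumed stand-ins.

def environment(action: int) -> tuple[float, float]:
    """Returns (sensation, reward); a stand-in for 'the world'."""
    sensation = random.gauss(action, 1.0)
    reward = -abs(sensation - 3.0)  # assumed goal: sensations near 3
    return sensation, reward

values = {a: 0.0 for a in range(5)}  # the agent's action-value estimates
alpha = 0.05

for _ in range(5000):
    if random.random() < 0.1:
        action = random.choice(list(values))  # explore occasionally
    else:
        action = max(values, key=values.get)  # exploit the best estimate
    sensation, reward = environment(action)   # act; the world responds
    values[action] += alpha * (reward - values[action])  # learn from stream

print(max(values, key=values.get))  # settles on action 3
```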

  1. Sutton claims people live in one world that may involve chess or Atari games, and can generalize across not only games but states, and that this will happen whether that generalization is good or bad. Whereas gradient descent will not make you generalize well, and we need algorithms where the generalization is good.

    1. I’m not convinced that LLMs or SGD generalize out-of-distribution (OOD) poorly relative to other systems, including humans or RL systems, once you control for various other factors.

    2. I do agree that LLMs will often do pretty dumb or crazy things OOD.

    3. All algorithms will solve the problem at hand. If you want that solution to generalize, you need to either make the expectation of such generalization part of the de facto evaluation function, develop heuristics and methods that tend to lead to generalization for other reasons, or otherwise incorporate the general case, or choose or get lucky with a problem where the otherwise ‘natural’ solution does still generalize.

  2. “Well maybe that [LLMs] don’t need to generalize to get them right, because the only way to get some of them right is to form something which gets all of them right. If there’s only one answer and you find it, that’s not called generalization. It’s just it’s the only way to solve it, and so they find the only way to solve it. But generalization is when it could be this way, it could be that way, and they do it the good way.”

    1. Sutton only thinks you can generalize given the ability to not generalize, the way good requires the possibility of evil. It is a relative descriptor.

    2. I don’t understand why you’d find that definition useful or valid. I care about the generality of your solution in practice, not whether there was a more or less general alternative solution also available.

    3. Once again there’s this focus on whether something ‘counts’ as a thing. Yes, of course, if the only or simplest or easiest way to solve a special case is to solve the general case, which often happens, and thus you solve the general case, and this happens to solve a bunch of problem types you didn’t consider, then you have done generalization. Your solution will work in the general case, whether or not you call that OOD.

    4. If there’s only one answer and you find it, you still found it.

    5. This seems pretty central. SGD or RL or other training methods, of both humans and AIs, will solve the problem you hand to them. Not the problem you meant to solve, the problem and optimization target you actually presented.

    6. You need to design that target and choose that method, such that this results in a solution that does what you want it to do. You can approach that in any number of ways, and ideally (assuming you want a general solution) you will choose to set the problem up such that the only or best available solution generalizes, if necessary via penalizing solutions that don’t in various ways.

  3. Sutton claims coding agents trained via SGD will only find solutions to problems they have seen, and yes sometimes the only solution will generalize but nothing in their algorithms will cause them to choose solutions that generalize well.

    1. Very obviously coding agents generalize to problems they haven’t seen.

    2. Not fully to ‘all coding of all things’ but they generalize quite a bit and are generalizing better over time. Seems odd to deny this?

    3. Sutton is making at least two different claims.

    4. The first claim is that coding agents only find solutions to problems they have seen. This is at least a large overstatement.

    5. The second claim is that the algorithms will not cause the network to choose solutions that generalize well over alternative solutions that don’t.

    6. The second claim is true by default. As Sutton notes, sometimes the default or only solution does indeed generalize well. I would say this happens often. But yeah, sometimes by default this isn’t true, and then by construction and default there is nothing pushing towards finding the general solution.

    7. Unless you design the training algorithms and data to favor the general solution. If you select your data well, often you can penalize or invalidate non-general solutions, and there are various algorithmic modifications available.

    8. One solution type is giving the LLM an inherent preference for generality, or have the evaluator choose with a value towards generality, or both.

    9. No, it isn’t going to be easy, but why should it be? If you want generality you have to ask for it. Again, compare to a human or an RL program. I’m not going for a more general solution unless I am motivated to do so, which can happen for any number of reasons.

  1. Dwarkesh asks what has been surprising in AI’s big picture. Sutton says the effectiveness of artificial neural networks. He says ‘weak’ methods like search and learning have totally won over ‘strong’ methods that come from ‘imbuing a system with human knowledge.’

    1. I find it interesting that Sutton in particular was surprised by ANNs. He is placing a lot of emphasis on copying animals, which seems like it would lead to expecting ANNs.

    2. It feels like he’s trying to make ‘don’t imbue the system with human knowledge’ happen? To me that’s not what makes the ‘strong’ systems strong, or the thing that failed. The thing that failed was GOFAI, the idea that you would hardcode a bunch of logic and human knowledge in particular ways, and tell the AI how to do things, rather than letting the AI find solutions through search and learning. But that can still involve learning from human knowledge.

    3. It doesn’t have to (see AlphaZero and, before it, TD-Gammon, as Sutton points out), and yes that was somewhat surprising but also kind of not, in the sense that with More Dakka within a compact space like chess you can just solve the game from scratch.

    4. As in: We don’t need to use human knowledge to master chess, because we can learn chess through self-play beyond human ability levels, and we have enough compute and data that way that we can do it ‘the hard way.’ Sure.

  1. Dwarkesh asks what happens to scaling laws after AGI is created that can do AI research. Sutton says: “These AGIs, if they’re not superhuman already, then the knowledge that they might impart would be not superhuman.”

    1. This seems like more characterization insistence combined with category error?

    2. And it ignores or denies the premise of the question, which is that AGI allows you to scale researcher time with compute the same way we previously could scale compute spend in other places. Sutton agrees that doing bespoke work is helpful; it just doesn’t scale. But what if it did?

    3. Even if the AGI is not ‘superhuman’ per se, the ability to run it faster and in parallel and with various other advantages means it can plausibly produce superhuman work in AI R&D. Already we have AIs that can do ‘superhuman’ tasks in various domains, even regular computers are ‘superhuman’ in some subdomains (e.g. arithmetic).

  2. “So why do you say, “Bring in other agents’ expertise to teach it”, when it’s worked so well from experience and not by help from another agent?”

    1. Help from another agent is experience. It can also directly create experience.

    2. The context is chess where this is even more true.

    3. Indeed, the way AlphaZero was trained was not to avoid other agents. Its training involved heavy use of other agents; it is just that all of those other agents were also AlphaZero (a toy sketch of the idea follows below).
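
A toy illustration of that point, with the game, the update rule, and every name below invented for the example (this is a sketch of the self-play idea, not AlphaZero’s actual method): the only ‘other agents’ generating training experience are frozen snapshots of the learner itself, and that experience still improves it.

```python
# Self-play sketch: every opponent is a frozen copy of the learner.
import random

def play(policy_a, policy_b):
    """One round: whoever guesses closer to a hidden target wins."""
    target = random.random()
    return 1 if abs(policy_a() - target) < abs(policy_b() - target) else -1

class Learner:
    def __init__(self):
        self.guess = 0.0  # the entire "policy" is one number

    def snapshot(self):
        frozen = self.guess  # a frozen copy of ourselves: the "other agent"
        return lambda: frozen

learner = Learner()
for _ in range(2000):
    opponent = learner.snapshot()
    trial = learner.guess + random.gauss(0, 0.1)  # try a perturbed policy
    if play(lambda: trial, opponent) > 0:
        learner.guess += 0.1 * (trial - learner.guess)  # keep what beats us

print(f"learned guess: {learner.guess:.2f}")  # should drift toward 0.5, the optimum
```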

  3. Dwarkesh focuses specifically on the ‘billions of AI researchers’ case; Sutton says that’s an interesting case, very different from today, where The Bitter Lesson doesn’t have to apply. Better to ask questions like whether you should use compute to enhance a few agents or spread it around to spin up more of them, and how they will interact. “More questions, will it be possible to really spawn it off, send it out, learn something new, something perhaps very new, and then will it be able to be reincorporated into the original? Or will it have changed so much that it can’t really be done? Is that possible or is that not?”

    1. I agree that things get strange and different and we should ask new questions.

    2. Asking whether it is possible for an ASI (superintelligent AI) copy to learn something new and then incorporate it into the original seems like such a strange question.

      1. It presupposes this ‘continual learning’ thesis where the copy ‘learns’ the information via direct incorporation into its weights.

      2. It then assumes that passing on this new knowledge requires incorporation directly into weights or something weird?

      3. As opposed to, ya know, writing the insight down and the other ASI reading it? If ASIs are indeed superintelligent and do continual learning, why can’t they learn via reading? Wouldn’t they also get very good at knowing how to describe what they know?

      4. Also, yes, I’m pretty confident you can also do this via direct incorporation of the relevant experiences, even if the full Sutton model holds here in ways I don’t expect. You should be able to merge deltas directly in various ways we already know about, and in better ways that these ASIs will be able to figure out (a toy sketch of delta merging follows after this item).

      5. Even if nothing else works, you can simply have the ‘base’ version of the ASI in question rerun the relevant experiences once it is verified that they led to something worthwhile, reducing this to the previous problem, says the mathematician.
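
For what ‘merge deltas directly’ could look like at its simplest, here is a toy sketch in the spirit of what the model-merging literature calls task arithmetic; the helper name, the scaling factor, and the toy tensors are all my own invention.

```python
# Toy sketch of folding each copy's weight delta back into a base model.
import torch

def merge_deltas(base_state, finetuned_states, alpha=1.0):
    """Add each copy's delta (finetuned minus base) back into the base."""
    merged = {k: v.clone() for k, v in base_state.items()}
    for ft_state in finetuned_states:
        for k in merged:
            merged[k] += alpha * (ft_state[k] - base_state[k])
    return merged

# Three "copies" of a tiny base model, each having learned something different.
base = {"w": torch.zeros(3)}
copies = [{"w": torch.tensor([1.0, 0.0, 0.0])},
          {"w": torch.tensor([0.0, 1.0, 0.0])},
          {"w": torch.tensor([0.0, 0.0, 1.0])}]
print(merge_deltas(base, copies)["w"])  # tensor([1., 1., 1.])
```

Real merges have to worry about interference between deltas, which is close to the corruption concern Sutton raises next.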

  4. Sutton also speculates about potential for corruption or insanity and similar dangers, if a central mind is incorporating the experiences or knowledge of other copies of itself. He expects this to be a big concern, including ‘mind viruses.’

    1. Seems fun to think about, but nothing an army of ASIs couldn’t handle.

    2. In general, when imagining scenarios with armies of ASIs, you have to price into everything the fact that they can solve problems way better than you.

    3. I don’t think the associated ‘mind viruses’ in this scenario are fundamentally different than the problems with memetics and hazardous information we experience today, although they’ll be at a higher level.

    4. I would of course expect lots of new unexpected and weird problems to arise.

It’s Sutton, so eventually we were going to have to deal with him being a successionist.

  1. He argues that succession is inevitable for four reasons: Humanity is incapable of a united front, we will eventually figure out intelligence, we will eventually figure out superhuman intelligence, and it is inevitable that over time the most intelligent things around would gain intelligence and power.

    1. We can divide this into two parts. Let “it” equal superintelligence.

    2. Let’s call part one Someone Will Build It.

    3. Let’s call part two If Anyone Builds It, Everyone Dies.

      1. Okay, sure, not quite as you see below, but mostly? Yeah, mostly.

    4. Therefore, Everyone Will Die. Successionism is inevitable.

    5. Part two is actually a very strong argument! It is simpler and cleaner and in many ways more convincing than the book’s version, at least in terms of establishing this as a baseline outcome. It doesn’t require (or give the impression it requires) any assumptions whatsoever about the way we get to superintelligence, what form that superintelligence takes, nothing.

    6. I actually think this should be fully convincing of the weaker argument that by default (rather than inevitably) this happens, and that there is a large risk of this happening, and something has to go very right for it to not happen.

    7. If you say ‘oh even if we do build superintelligence there’s no risk of this happening’ I consider this to be Obvious Nonsense and you not to be thinking.

    8. I don’t think this argument is convincing that it is ‘inevitable.’ Facts not in evidence, and there seem to be two very obvious counterexamples.

      1. Counterexample one is that if the intelligence gap is not so large in practical impact, other attributes, both mental and physical, can more than compensate for it. Alas, this seems unlikely to be relevant given the expected intelligence gaps.

      2. Counterexample two is that you could ‘solve the alignment problem’ in a sufficiently robust sense that the more intelligent minds optimize for a world in which the less intelligent minds retain power in a sufficiently robust way. Extremely tricky, but definitely not impossible in theory.

    9. However his definition of what is inevitable, and what counts as ‘succession’ here, is actually much more optimistic than I previously realized…

    10. If we agree that If Anyone Builds It, Everyone Dies, then the logical conclusion is ‘Then Let’s Coordinate To Ensure No One F***ing Builds It.’

    11. He claims nope, can’t happen, impossible, give up. I say, if everyone was convinced of part two, then that would change this.

  2. “Put all that together and it’s sort of inevitable. You’re going to have succession to AI or to AI-enabled, augmented humans. Those four things seem clear and sure to happen. But within that set of possibilities, there could be good outcomes as well as less good outcomes, bad outcomes. I’m just trying to be realistic about where we are and ask how we should feel about it.”

    1. If ‘AI-enabled, augmented humans’ count here, well, that’s me, right now.

    2. I mean, presumably that’s not exactly what he meant.

    3. But yeah, conditional on us building ASIs or even AGIs, we’re at least dealing with some form of augmented humans.

    4. Talk of ‘merge with the AI’ is nonsense; you’re not adding anything to it. But it can enhance you.

  3. “I mark this as one of the four great stages of the universe. First there’s dust, it ends with stars. Stars make planets. The planets can give rise to life. Now we’re giving rise to designed entities. I think we should be proud that we are giving rise to this great transition in the universe.”

    1. Designed is being used rather loosely here, but we get the idea.

    2. We already have created designed things, and yeah that’s pretty cool.

  4. “It’s an interesting thing. Should we consider them part of humanity or different from humanity? It’s our choice. It’s our choice whether we should say, “Oh, they are our offspring and we should be proud of them and we should celebrate their achievements.” Or we could say, “Oh no, they’re not us and we should be horrified.””

    1. It’s not about whether they are ‘part of humanity’ or our ‘children.’ They’re not.

    2. They can still have value. One can imagine aliens (as many stories have) that are not these things and still have value.

    3. That doesn’t mean that us going away would therefore be non-horrifying.

  5. “A lot of it has to do with just how you feel about change. If you think the current situation is really good, then you’re more likely to be suspicious of change and averse to change than if you think it’s imperfect. I think it’s imperfect. In fact, I think it’s pretty bad. So I’m open to change. I think humanity has not had a super good track record. Maybe it’s the best thing that there has been, but it’s far from perfect.” “I think it’s appropriate for us to really work towards our own local goals. It’s kind of aggressive for us to say, “Oh, the future has to evolve this way that I want it to.””

    1. So there you have it.

    2. I disagree.

  6. “So we’re trying to design the future and the principles by which it will evolve and come into being. The first thing you’re saying is, “Well, we try to teach our children general principles which will promote more likely evolutions.” Maybe we should also seek for things to be voluntary. If there is change, we want it to be voluntary rather than imposed on people. I think that’s a very important point. That’s all good.”

    1. This is interestingly super different from, and in conflict with, the previous claim.

    2. It goes so far the other way that I don’t even fully endorse it, this idea that change must always be voluntary rather than imposed on people. That neither seems like a reasonable ask, nor does it historically end well, as seen in the paralysis of the West and especially the Anglosphere in many ways, above all in housing.

    3. I am very confident in what would happen if you asked about the changes Sutton is anticipating, and put them to a vote.

Fundamentally, I didn’t pull direct quotes on this but Sutton repeatedly emphasizes that AI-dominated futures can be good or bad, that he wants us to steer towards good futures rather than bad futures, and that we should think carefully about which futures we are steering towards and choose deliberately.

I can certainly get behind that. The difference is that I don’t think we need to accept this transition to AI dominance as our only option, including that I don’t think we should accept that humans will always be unable to coordinate.

Mostly what I found interesting were the claims around the limitations and nature of LLMs, in ways that don’t make sense to me. This did help solidify a bunch of my thinking about how all of this works, so it felt like a good use of time for that alone.


On Dwarkesh Patel’s Podcast With Richard Sutton Read More »

taiwan-pressured-to-move-50%-of-chip-production-to-us-or-lose-protection

Taiwan pressured to move 50% of chip production to US or lose protection

The Trump administration is pressuring Taiwan to rapidly move 50 percent of its chip production into the US if it wants ensured protection against a threatened Chinese invasion, US Commerce Secretary Howard Lutnick told NewsNation this weekend.

In the interview, Lutnick noted that Taiwan currently makes about 95 percent of chips used in smartphones and cars, as well as in critical military defense technology. It’s bad for the US, Lutnick said, that “95 percent of our chips are made 9,000 miles away,” while China is not being “shy” about threats to “take” Taiwan.

Were the US to lose access to Taiwan’s supply chain, the US could be defenseless as its economy takes a hit, Lutnick alleged, asking, “How are you going to get the chips here to make your drones, to make your equipment?”

“The model is: if you can’t make your own chips, how can you defend yourself, right?” Lutnick argued. That’s why he confirmed his “objective” during his time in office is to shift US chip production from 2 percent to 40 percent. To achieve that, he plans to bring Taiwan’s “whole supply chain” into the US, a move experts have suggested could take much longer than a single presidential term to accomplish.

In 2023, Nvidia CEO Jensen Huang forecast that the US was “somewhere between a decade and two decades away from supply chain independence,” emphasizing that “it’s not a really practical thing for a decade or two.”

Deal is “not natural for Taiwan”

Lutnick acknowledged this will be a “herculean” task. “Everybody tells me it’s impossible,” he said.

To start with, Taiwan must be convinced that it’s not getting a raw deal, he noted, explaining that it’s “not natural for Taiwan” to mull a future where it cedes its dominant role as a global chip supplier, as well as the long-running protections from allies that come with it.

Taiwan pressured to move 50% of chip production to US or lose protection Read More »

30-years-later,-i’m-still-obliterating-planets-in-master-of-orion-ii—and-you-can,-too

30 years later, I’m still obliterating planets in Master of Orion II—and you can, too

I love 4X games. I’ve tried other strategy game genres, but frankly, they don’t stick if they’re not first and foremost 4X games—at the heart of it, it must be about exploring, expanding, exploiting, and yes, exterminating.

I suspect that the first 4X game most people played was some entry in the Civilization franchise—though certainly, a select few played precursors dating back to text-based games in the 1970s.

But for me, the title that kicked off my obsession was Master of Orion II (MOO2), a game in which you develop and build up planets across a simple galaxy map, research speculative future technologies, and ultimately wipe out your opponents and claim dominion over the known universe. (There are other victory conditions too, but that one is the most fun.)

There is something satisfying about making a couple thousand small choices that all add up to that galaxy map gradually changing color in your favor until the final cut scene plays, declaring you the true Master of Orion.

The games I love the most are the ones where you make decisions that compound over many, many hours into a long-term payoff. I’ll take that over games with bite-sized, contained challenges and short play times any day. The deeper and longer the experience, the better the payoff can be. To me, that’s ultimately what makes 4X games great. MOO2 is no exception.

[Image: a high score screen declares the player the ultimate master of the universe. Caption: “I needed this validation.” Credit: Samuel Axon]

Nostalgic but flawed

That said, it’s not a perfect game. It benefited from the lessons it could learn from more than a decade of 4X games before it, and its designers were clearly thinking about how to make it balanced and fun.

They just missed the mark sometimes. For example, a big part of the game is choosing perks that customize your empire before the first turn even begins. One of those perks, called “Creative,” allows you to learn multiple technologies at once rather than one at a time. It’s pretty hard to imagine anyone consciously declining that perk unless they’re looking to make things a lot harder for themselves.

30 years later, I’m still obliterating planets in Master of Orion II—and you can, too Read More »