Author name: DJ Henderson


On Dwarkesh Patel’s 2026 Podcast With Dario Amodei

Some podcasts are self-recommending on the ‘yep, I’m going to be breaking this one down’ level. This was very clearly one of those. So here we go.

As usual for podcast posts, the baseline bullet points describe key points made, and then the nested statements are my commentary. Some points are dropped.

If I am quoting directly I use quote marks, otherwise assume paraphrases.

What are the main takeaways?

  1. Dario mostly stands by his predictions of extremely rapid advances in AI capabilities, both in coding and in general, and in expecting the ‘geniuses in a data center’ to show up within a few years, possibly even this year.

  2. Anthropic’s actions do not seem to fully reflect this optimism, but also when things are growing on a 10x per year exponential if you overextend you die, so being somewhat conservative with investment is necessary unless you are prepared to fully burn your boats.

  3. Dario reiterated his stances on China, export controls, democracy, AI policy.

  4. The interview downplayed catastrophic and existential risk, including relative to other risks, although it was mentioned and Dario remains concerned. There was essentially no talk about alignment at all. The dog did not bark in the nighttime.

  5. Dwarkesh remains remarkably obsessed with continual learning.

  1. The Pace of Progress.

  2. Continual Learning.

  3. Does Not Compute.

  4. Step Two.

  5. The Quest For Sane Regulations.

  6. Beating China.

  1. AI progress is going at roughly Dario’s expected pace plus or minus a year or two, except coding is going faster than expected. His top level model of scaling is the same as it was in 2017.

    1. I don’t think this is a retcon, but he did previously update too aggressively on coding progress, or at least on coding diffusion.

  2. Dario still believes the same seven things matter: Compute, data, data quality and distribution, length of training, an objective function that scales, and two things around normalization or conditioning.

    1. I assume this is ‘matters for raw capability.’

  3. Dwarkesh asks about Sutton’s perspective that we’ll get human-style learners. Dario says there’s an interesting puzzle there, but it probably doesn’t matter. LLMs are blank slates in ways humans aren’t. In-context learning will be in-between human short and long term learning. Dwarkesh asks then why all of this RL and building RL environments? Why not focus on learning on the fly?

    1. Because the RL and giving it more data clearly works?

    2. Whereas learning on the fly doesn’t work, and even if it did, what happens when the model resets every two months?

    3. Dwarkesh has pushed on this many times and is doing so again.

  4. Timeline time. Why does Dario think we are at ‘the end of the exponential’ rather than ten years away? Dario says his famous ‘country of geniuses in a data center’ is 90% within 10 years without biting a bullet on faster. One concern is needing verification. Dwarkesh pushes that this means the models aren’t general, Dario says no we see plenty of generalization, but the world where we don’t get the geniuses is still a world where we can do all the verifiable things.

    1. As always, notice the goalposts. Ten years from human-level AI is a ‘long time.’

    2. Dario is mostly right on generalization, in that you need verification to train in distribution but then things often work well (albeit less well) out of distribution.

    3. The class of verifiable things is larger than one might think, if you include all necessary subcomponents of those tasks and then the combination of those subcomponents.

  5. Dwarkesh challenges whether you could automate an SWE without generalization outside verifiable domains. Dario says yes you can, you just can’t verify the whole company.

    1. I’m 90% with Dario here.

  6. What’s the metric of AI in SWE? Dario addresses his prediction of AI writing 90% of the lines of code within 3-6 months. He says it happened at Anthropic, and that ‘100% of today’s SWE tasks are done by the models,’ but that this is not yet true overall, and says people were reading too much into the prediction.

    1. The prediction was still clearly wrong.

    2. A lot of that was Dario overestimating diffusion at this stage.

    3. I do agree that the prediction was ‘less wrong,’ or more right, than those who predicted a lack of big things for AI coding, who thought things would not escalate quickly.

    4. Dario could have reliably looked great if he’d made a less bold prediction. There’s rarely reputational alpha in going way beyond others. If everyone else says 5 years, and you think 3-6 months, you can say 2 years and then if it happens in 3-6 months you still look wicked smart. Whereas the super fast predictions don’t sound credible and can end up wrong. Predicting 3-6 months here only happens if you’re committed to a kind of epistemic honesty.

    5. I agree with Dario that going from 90% of code to 100% of code written by AI is a big productivity unlock; his prediction on this has already been confirmed by events. This is standard Bottleneck Theory.

  7. “Even when that happens, it doesn’t mean software engineers are out of a job. There are new higher-level things they can do, where they can manage. Then further down the spectrum, there’s 90% less demand for SWEs, which I think will happen but this is a spectrum.”

    1. It would take quite a lot of improved productivity to reduce demand by 90%.

    2. I’d go so far as to say that if we reduce SWE demand by 90%, then we have what one likes to call ‘much bigger problems.’

  8. Anthropic went from zero ARR to $100 million in 2023, to $1 billion in 2024, to $9-$10 billion in 2025, and added a few more billion in January 2026. He guesses the 10x per year starts to level off some time in 2026, although he’s trying to speed it up further. Adoption is fast, but not infinitely fast.

    1. Dario’s predictions on the speed of automating coding were an outlier in being too aggressive, whereas the revenue predictions for OpenAI and Anthropic have consistently come in too low. I think those projections are intentional lowballs, both to ensure they beat the projections and because the normies would never believe the real number.
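As a sanity check on these growth figures, here is a toy projection in Python. The 2023-2025 numbers are rounded from those cited above; the taper schedule and every later figure are purely my own illustrative assumptions, not anything Anthropic has said:

```python
# Toy ARR projection in $B. 2023-2025 rounded from the cited figures;
# the halving growth schedule afterward is an invented assumption.
arr = {2023: 0.1, 2024: 1.0, 2025: 10.0}
growth = 10.0
for year in range(2026, 2029):
    growth = max(growth / 2, 1.5)  # assumed leveling-off, floored at 1.5x
    arr[year] = arr[year - 1] * growth
print(arr)  # growth "levels off" yet revenue still compounds hard
```

Even with the growth multiple halving every year in this toy model, the curve keeps compounding fast, which is the sense in which ‘leveling off’ from 10x per year is still extraordinary.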

  9. Dwarkesh pulls out the self-identified hot take that ‘diffusion is cope’ used to justify when models can’t do something. Hiring humans is much more of a hassle than onboarding an AI. Dario says you still have to do a lot of selling in several stages, the procurement processes are often shortcutted but still take time, and even geniuses in a datacenter will not be ‘infinitely’ compelling as a product.

    1. I’ve basically never disagreed with a Dwarkesh take as much as I do here.

    2. Yes, of course diffusion is a huge barrier.

    3. The fact that if the humans knew to set things up, and how to set things up, that the cost of deployment and diffusion would be low? True, but completely irrelevant.

    4. The main barrier to Claude Code is not that it’s hard to install, it’s that it’s hard to get people to take the plunge and install it, as Dario notes.

    5. In practice, very obviously, even the best of us miss out on a lot of what LLMs can do for us, and most people barely scratch the surface at best.

    6. A simple intuition pump: If diffusion is cope, what do you expect would happen if there were an ‘AI pause’ starting right now, and no new frontier models were ever created?

    7. Dwarkesh sort of tries to backtrack on what he said as purely asserting that we’re not currently at AGI, but that’s an entirely different claim?

  10. Dario says we’re not at AGI, and that if we did have a ‘country of geniuses in a datacenter’ then everyone would know this.

    1. I think it’s possible that we might not know, in the sense that they might be sufficiently both capable and misaligned to disguise this fact, in which case we would be pretty much what we technically call ‘toast.’

    2. I also think it is very possible in the future that an AI lab might get the geniuses and then disguise this fact from the rest of us, and not release the geniuses directly, for various reasons.

    3. Barring those scenarios? Yes, we would know.

It’s a Dwarkesh Patel AI podcast, so it’s time for continual learning in two senses.

  1. Dwarkesh thinks Dario’s prediction for today, from three years ago, of “We should expect systems which, if you talk to them for the course of an hour, it’s hard to tell them apart from a generally well-educated human” was basically accurate. Dwarkesh however is spiritually unsatisfied because that system can’t automate large parts of white-collar work. Dario points out OSWorld scores are already at 65%-70%, up from 15% a year ago, and computer use will improve.

    1. I think it is very easy to tell, but I think the ‘spirit of the question’ is not so off, in the sense that on most topics I can have ‘at least as good’ a conversation with the LLM for an hour as with the well-educated human.

    2. Can such a system automate large parts of white-collar work? Yes. Very obviously yes, if we think in terms of tasks rather than full jobs. If you gave us ten years (as an intuition pump) to adapt to existing systems, then I would predict a majority of current white-collar digital job tasks get automated.

    3. The main current barrier to the next wave of practical task automation is that computer use is still not so good (as Dario says), but that will get fixed.

  2. Dwarkesh asks about the job of video editor. He says they need six months of experience to understand the trade-offs and preferences and tastes necessary for the job and asks when AI systems will have that. Dario says the ‘country of geniuses in a datacenter’ can do that.

    1. I bet that if you took Claude Opus 4.6 and Claude Code, and you gave it the same amount of human attention to improving its understanding of trade-offs, preferences and taste over six months that a new video editor would have, and a similar amount of time training video editing skills, that you could get this to the point where it could do most of the job tasks.

    2. You’d have to be building up copious notes and understandings of the preferences and considerations, and you’d need for now some amount of continual human supervision and input, but yeah, sure, why not.

    3. Except that by the time you were done you’d use Opus 5.1, but same idea.

  3. Dwarkesh says he still has to have humans do various text-to-text tasks, and LLMs have proved unable to do them, for example on ‘identify what the best clips would be in this transcript’ they can only do a 7/10 job.

    1. If you see the LLMs already doing a 7/10 job, the logical conclusion is that this will be 9/10 reasonably soon especially if you devote effort to it.

    2. There are a lot of things one could try here, and my guess is that Dwarkesh has mostly not tried them, largely because until recently trying them was a lot slower and more expensive than it is now.

  4. Dwarkesh asks if a lot of LLM coding ability is the codebase as massive notes. Dario points out this is not an accounting of what a human needs to know, and the model is much faster than humans at understanding the code base.

    1. I think the metaphor is reasonably apt, in that in code the humans or prior AIs have written things down, and in other places we haven’t written similar things down. You could fix that, including over time.

  5. Dwarkesh cites the ‘the developers using LLMs thought they were faster but actually went slower’ study and asks where the renaissance of software and the productivity benefits from AI coding are. Dario says it’s unmistakable within Anthropic, and cites that they’ve cut their competitors off from using Claude.

    1. Not letting OpenAI use Claude is a big costly signal that they view agentic coding as a big productivity boost, and even that theirs is a big boost over OpenAI’s versions of the same tools.

    2. It seems very difficult, watching the pace of developments in AI inside and outside of the frontier labs, to think coding productivity isn’t accelerating.

  6. Dario estimates current coding models give a 15%-20% speedup, versus 5% six months ago, and that Amdahl’s law means you eventually get a much bigger speedup once you start closing full loops.

    1. It’s against his interests to come up with a number that small.

    2. I also don’t believe a number that small, especially since the pace of coding now seems to be largely rate limited by compute and frequency of human interruptions to parallel agents. It’s very hard to thread the needle and have the gains be this small.

    3. The answer will vary a lot. I can observe that for me, given my particular set of skills, the speedup is north of 500%. I’m vastly faster and better.
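The Amdahl’s law point is easy to make concrete. A minimal sketch in Python, where the fraction automated and the per-task speedup factor are illustrative assumptions, not Dario’s numbers:

```python
def overall_speedup(p, s):
    """Amdahl's law: total speedup when a fraction p of the work
    is accelerated by factor s and the rest is unchanged."""
    return 1.0 / ((1.0 - p) + p / s)

# If coding is ~30% of the full loop and AI makes that part 5x faster,
# the whole loop only speeds up ~1.3x -- consistent with a modest
# 15-20% gain while humans gate the remaining steps.
print(overall_speedup(0.30, 5.0))
# Close the full loop (p near 1) and the ceiling lifts dramatically.
print(overall_speedup(0.95, 5.0))
```

This is also why individual results vary so wildly: someone whose workflow is nearly all automatable sits at high p and sees multi-hundred-percent gains, while an organization with many human-gated steps sits near the low end.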

  7. Dwarkesh asks again ‘continual learning when?’ and Dario says he has ideas.

    1. There are cathedrals for those with eyes to see.

  1. How does Dario reconcile his general views on progress with his radically fast predictions on capabilities? Fast but finite diffusion, especially economic. Curing diseases might take years.

    1. Diffusion is real but Dario’s answer to this, which hasn’t changed, has never worked for me. His predictions on impact do not square with his predictions on capabilities, period, and it is not a small difference.

  2. Why not buy the biggest data center you can get? If Anthropic managed to buy enough compute for their anticipated demand, they burn the boats. That’s on the order of $5 trillion two years from now. If the revenue does not materialize, they’re toast. Whereas Anthropic can ensure financial stability and profitability by not going nuts, as their focus is enterprise revenue with higher margins and reliability.

    1. Being early in this sense, when things keep going 10x YoY, is fatal.

    2. That’s not strictly true. You’re only toast if you can’t resell the compute at the same or a better price. But yes, you’re burning the boats if conditions change.

    3. Even if you did want to burn the boats, it doesn’t mean the market will let you burn the boats. The compute is not obviously for sale, nor is Anthropic’s credit good for it, nor would the investors be okay with this.

    4. This does mean that Anthropic is some combination of insufficiently confident to burn the boats or unable to burn them.

  3. Dario won’t give exact numbers, but he’s predicting more than 3x to Anthropic compute each year going forward.

  1. Why is Anthropic planning on turning a profit in 2028 instead of reinvesting? “I actually think profitability happens when you underestimated the amount of demand you were going to get and loss happens when you overestimated the amount of demand you were going to get, because you’re buying the data centers ahead of time.” He says they could potentially even be profitable in 2026.

    1. Thus, the theory is that Anthropic needs to underestimate demand because it is death to overestimate demand, which means you probably turn a profit ‘in spite of yourself.’ That’s so weird, but it kind of makes sense.

    2. Dario denies this is Anthropic ‘systematically underinvesting in compute’ but that depends on your point of view. You’re underinvesting post-hoc with hindsight. That doesn’t mean it was a mistake over possible worlds, but I do think that it counts as underinvesting for these purposes.

    3. Also, Dario is saying (in the toy model) you split compute 50/50 internal use versus sales. You don’t have to do that. You could double the buy, split it 75/25 and plan on taking a loss and funding the loss by raising capital, if you wanted that.

  2. Dwarkesh suggests exactly doing an uneven split, Dario says there are log returns to scale, diminishing returns after spending e.g. $50 billion a year, so it probably doesn’t help you that much.

    1. I basically don’t buy this argument. I buy the diminishing return but it seems like if you actually believed Anthropic’s projections you wouldn’t care. As Dwarkesh says ‘diminishing returns on a genius could be quite high.’

    2. If you actually did have a genius in your datacenters, I’d expect there to be lots of profitable ways to use that marginal genius. The world is your oyster.

    3. And that’s if you don’t get into an AI 2027 or other endgame scenario.

  3. Dario says AI companies need revenue to raise money and buy more compute.

    1. In practice I think Dario is right. You need customers to prove your value and business model sufficiently to raise money.

    2. However, I think the theory here is underdeveloped. There is no reason why you couldn’t keep raising at higher valuations without a product. Indeed, see Safe Superintelligence, and see Thinking Machines before they lost a bunch of people, and so on, as Matt Levine often points out. It’s better to be a market leader, but the no product, all research path is very viable.

    3. The other advantage of having a popular product is gaining voice.

  4. Dwarkesh claims Dario’s view is compatible with us being 10 years away from AI generating trillions in value. Dario says it might take 3-4 years at most, he’s very confident in the ‘geniuses’ showing up by 2028.

    1. Dario feels overconfident here, and also more confident than his business decisions reflect. If he’s that confident he’s not burning enough boats.

  5. Dario predicts a Cournot equilibrium, with a small number of relevant firms, which means there will be economic profits to be captured. He points out that gross margins are currently very positive, and the reason AI companies are taking losses is that each model turns a profit but you’re investing in the model that costs [10*X] while collecting the profits from the model that costs [X]. At some point the compute stops multiplying by 10 each cycle, and then you notice that you were turning a profit the whole time. The economy is going to grow faster, but that’s like 10%-20% fast, not 300% a year fast.

    1. I don’t understand what is confusing Dwarkesh here. I do get that this is confusing to many but it shouldn’t confuse Dwarkesh.

    2. Of course if we do start seeing triple-digit economic growth, things get weird, and also we should strongly suspect we will all soon die or lose control, but in the meantime there’ll be some great companies and I wouldn’t worry about Anthropic’s business model while that is happening.
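Dario’s accounting argument can be sketched with a toy cash-flow model; all the numbers below are invented for illustration:

```python
def cash_flows(train_costs, payoff_ratio=2.0):
    """Period i pays for model i's training while collecting
    payoff_ratio times the previous model's training cost as
    gross profit from serving that previous model."""
    flows = []
    for i, cost in enumerate(train_costs):
        revenue = payoff_ratio * train_costs[i - 1] if i > 0 else 0.0
        flows.append(revenue - cost)
    return flows

# Training budgets multiply 10x per cycle, then plateau.
costs = [1, 10, 100, 1000, 1000, 1000]
print(cash_flows(costs))
# Every model individually returns 2x its cost, yet cash flow stays
# negative while budgets 10x -- and flips positive once they stop.
```

The point of the toy model: losses are an artifact of the growth schedule, not of unprofitable models, which is exactly the distinction Dario is drawing.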

  6. Dario says he feels like he’s in an economics class.

    1. Honestly it did feel like that. This is the first time in a long time it felt like Dwarkesh flat out was not prepared on a key issue, and is getting unintentionally taken to school (as opposed to when someone like Sarah Paine is taking us to school, by design).

  7. Dario predicts an oligopoly, not a monopoly, because of lack of network effects combined with high fixed costs, similar to cloud providers.

    1. This is a bet on there not being win-more or runaway effects.

    2. For a while, the battle had catch-up mechanics rather than runaway effects. If you were behind, you could distill and you could copy ideas, so it was hard to maintain much of a lead.

    3. This feels like it is starting to change as RSI sets in. Claude is built by Claude Code, Codex is built by Codex, Google has to make its own choices and so on. The models are in many ways charged with training their successors.

    4. Also the cycle may be speeding up a la AI 2027. If you’re six months behind that used to be one generation behind. Now it is three.

    5. And of course, once sufficiently powerful RSI (recursive self-improvement) sets in, and the models become sufficiently capable, that edge starts to translate into various other advantages far more readily.

    6. Many fates are possible, but don’t rule out monopoly or winner-take-most.

  8. Dario points out different models have different comparative advantages, often in subtle ways.

    1. True, but a sufficient lead would apply across the board. We’re not there right now, but we’re not that far from it either.

  9. Dario worries that Silicon Valley and those connected to it could grow at 50% while everyone else grows at not much above the normal 2%. He says that would be ‘a pretty messed up world.’

    1. I think that turns out fine. You tax the part growing at 50%, everyone wins.

    2. That’s distinct from issues about the AI taking over, or the people in charge of the AI taking over, you still do have to dodge those problems. But if well-meaning humans are in control I don’t worry about distributional issues under extreme growth.

  10. Will robotics get solved soon after we get the ‘geniuses’? Dario says it doesn’t depend on learning like a human, there are many options, and it will happen, we will learn to control robots, and yes the robotics industry will then make trillions. It tacks on maybe a year or two to get going.

    1. This seems obviously correct if you understand the premise, that we have the geniuses and the geniuses are playing nice for whatever reason.

    2. That premise is not obvious.

  11. Dwarkesh Patel keeps talking about continual learning, Dario Amodei keeps saying that we don’t need it.

    1. I agree with Dario. We don’t need it as such, if nothing else we can easily solve such problems already via [CENSORED].

  12. How should we price AGI? Dario thinks API is durable and will exist alongside other options, including forms of ‘pay for results.’

  13. How did Anthropic end up being the ones to build Claude Code? Dario encouraged experimentation internally, they used it internally, and then Dario said they should launch it externally.

Finally, we ask about making AI ‘go well.’ With that framing you know that everyone is mostly conspicuously ignoring the biggest issues.

  1. Soon there will be lots of misaligned or crazy AIs running around. What to do? Dario correctly reiterates his dismissal of the idea that having a bunch of different AIs keeps them meaningfully in check. He points to alignment work, and classifiers, for the short run. For the long run, we need governance and some sort of monitoring system, but it needs to be consistent with civil liberties, and we need to figure this out really fast.

    1. We’ve heard Dario’s take on this before, he gives a good condensed version.

    2. For my response, see my discussion of The Adolescence of Technology. I think he’s dodging the difficult questions, problems and clashes of sacred values, because he feels it’s the strategically correct play to dodge them.

    3. That’s a reasonable position, in that if you actively spell out any plan that might possibly work, even in relatively fortunate scenarios, this is going to involve some trade-offs that are going to create very nasty pull quotes.

    4. The longer you wait to make those trade-offs, the worse they get.

  2. Dwarkesh asks, what do we do in an offense-dominated world? Dario says we would need international coordination on forms of defense.

    1. Yes. To say (less than) the least.

  3. Dwarkesh asks about Tennessee’s latest crazy proposed bill (it’s often Tennessee), which says “It would be an offense for a person to knowingly train artificial intelligence to provide emotional support, including through open-ended conversations with a user” and a potential patchwork of state laws. Dario (correctly) points out that particular law is dumb and reiterates that a blanket moratorium on all state AI bills for 10 years is a bad idea, we should only stop states once we have a federal framework in place on a particular question.

    1. Yes, that is the position we still need to argue against, my lord.

  4. Dario points out that people talk about ‘thousands of state laws’ but those are only proposals, almost all of them fail to pass, and when really stupid laws pass they often don’t get implemented. He points out that there are many things in AI he would actively deregulate, such as around health care. But he says we need to ramp up the safety and security legislation quite significantly, especially transparency. Then we need to be nimble.

    1. I agree with all of this, as far as it goes.

    2. I don’t think it goes far enough.

    3. Colorado passed a deeply stupid AI regulation law, and didn’t implement it.

  5. What can we do to get the benefits of AI better instantiated? Dwarkesh is worried about ‘kinds of moral panics or political economy problems’ and he worries benefits are fragile. Dario says no, markets actually work pretty well in the developed world.

    1. Whereas Dwarkesh does not seem worried about the actual catastrophic or existential risks from AI.

  1. Dario is fighting for export controls on chips, and he will ‘politely call the counterarguments fishy.’

  2. Dwarkesh asks, what’s wrong with China having its own geniuses? Dario says we could be in an offense-dominant world, and even if we are not then potential conflict would create instability. And he worries governments will use AI to oppress their own people, China especially. Some coalition with pro-human values has to say ‘these are the rules of the road.’ We need to press our edge.

    1. I am sad that this is the argument he is choosing here. There are better reasons, involving existential risks. Politically I get why he does it this way.

  3. Dario doesn’t see a key inflection point, even with his ‘geniuses,’ the exponential will continue. He does call for negotiation with a strong hand.

    1. This is reiteration from his essays. He’s flinching.

    2. There’s good reasons for him to flinch, but be aware he’s doing it.

  4. More discussion of democracy and authoritarianism and whether democracy will remain viable or authoritarianism lack sustainability or moral authority, etc.

    1. There’s nothing new here, Dario isn’t willing to say things that would be actually interesting, and I grow tired.

  5. Why does Claude’s constitution try to make Claude align to desired values and do good things and not bad things, rather than simply being user aligned? Dario gives the short version of why virtue ethics gives superior results here, without including explanations of why user alignment is ultimately doomed or the more general alignment problems other approaches can’t solve.

    1. If you’re confused about this see my thoughts on the Claude Constitution.

  6. How are these principles determined? Can’t Anthropic change them at any time? Dario suggests three sizes of loop: Within Anthropic, different companies putting out different constitutions people can compare, and society at large. He says he’d like to let representative governments have input but right now the legislative process is too slow therefore we should be careful and make it slower. Dwarkesh likes control loop two.

    1. I like the first two loops. The problem with putting the public in the loop is that they have no idea how any of this works and would not make good choices, even according to their own preferences.

  7. What have we likely missed about this era when we write the book on it? Dario says: the extent to which the world didn’t understand the exponential while it was happening, that the average person had no idea, that everything was being decided all at once, and that consequential decisions are often made very quickly, on almost no information and spending very little human compute.

    1. I really hope we are still around to write the book.

    2. From the processes we observe and what he says, I don’t love our chances.


“It ain’t no unicorn”: These researchers have interviewed 130 Bigfoot hunters

It was the image that launched a cultural icon. In 1967, in the Northern California woods, a 7-foot-tall, ape-like creature covered in black fur and walking upright was captured on camera, at one point turning around to look straight down the lens. The image is endlessly copied in popular culture—it’s even become an emoji. But what was it? A hoax? A bear? Or a real-life example of a mysterious species called the Bigfoot?

The film has been analysed and re-analysed countless times. Although most people believe it was some sort of hoax, there are some who argue that it’s never been definitively debunked. One group of people, dubbed Bigfooters, is so intrigued that they have taken to the forests of Washington, California, Oregon, Ohio, Florida, and beyond to look for evidence of the mythical creature.

But why? That’s what sociologists Jamie Lewis and Andrew Bartlett wanted to uncover. They were itching to understand what prompts this community to spend valuable time and resources looking for a beast that is highly unlikely to even exist. During lockdown, Lewis started interviewing more than 130 Bigfooters (and a few academics) about their views, experiences, and practices, culminating in the duo’s recent book “Bigfooters and Scientific Inquiry: On the Borderlands of Legitimate Science.”

Here, we talk to them about their academic investigation.

What was it about the Bigfoot community that you found so intriguing?

Lewis: It started when I was watching either the Discovery Channel or Animal Planet and a show called Finding Bigfoot was advertised. I was really keen to know why this program was being scheduled on what certainly at the time was a nominally serious and sober natural history channel. The initial plan was to do an analysis of these television programmes, but we felt that wasn’t enough. It was lockdown and my wife was pregnant and in bed a lot with sickness, so I needed to fill my time.

Bartlett: One of the things that I worked on when Jamie and I shared an office in Cardiff was a sociological study of fringe physicists. These are people mostly outside of academic institutions trying to do science. I was interviewing these people, going to their conferences. And that led relatively smoothly into Bigfoot, but it was Jamie’s interest in Bigfoot that brought me to this field.

How big is this community?

Lewis: It’s very hard to put a number on it. There is certainly a divide between what are known as “apers,” who believe that Bigfoot is just a primate unknown to science, and those who are perhaps more derogatorily called “woo-woos,” who believe that Bigfoot is some sort of interdimensional traveller, an alien of sorts. We’re talking in the thousands of people. But there are a couple of hundred really serious people, of whom I probably interviewed at least half.

Many people back them. A YouGov survey conducted as recently as November 2025 suggested that as many as one quarter of Americans believe that Bigfoot either definitely or probably exists.

Were the interviewees suspicious of your intentions?

Lewis: I think there was definitely a worry that they would be caricatured. And I was often asked, “Do I believe in Bigfoot?” I had a standard answer that Andy and I agreed on, which was that mainstream, institutional science says there is absolutely no compelling evidence that Bigfoot exists. We have no reason to dissent from that consensus. But as sociologists, what does exist is a community (or communities) of Bigfooting, and that’s what interests us.


A Valentine’s Day homage to Crouching Tiger, Hidden Dragon

That thief turns out to be Jen, who has secretly been studying martial arts. And Jen’s really not keen on her upcoming arranged marriage because she has fallen in love with a bandit named Lo “Dark Cloud” Xiao Hou (Chang Chen). They are the symbolic tiger and dragon, with Lo as the unchanging yin (tiger) and Jen as the dynamic yang (the hidden dragon).

(WARNING: Major spoilers below. Stop reading now if you haven’t seen the entire film.)

Longtime friends Mu Bai (Chow Yun-Fat) and Shu Lien (Michelle Yeoh) have refrained from declaring their love out of respect for Shu Lien’s late fiancé. Credit: Sony Pictures Classics

There are multiple clashes between our main characters, most notably Jen battling Shu Lien, and a famous sequence where Mu Bai pursues Jen across the treetops of a bamboo forest, deftly balancing on the swaying branches and easily evading Jen’s increasingly undisciplined sword thrusts. It’s truly impressive wire work (all the actors performed their own stunts), in fine wuxia tradition. Jen is gifted, but arrogant and defiant, refusing Mu Bai’s offer of mentorship; she thinks with Green Destiny she will be invincible and has nothing more to learn. Ah, the arrogance of youth.

Eventually, Jen is betrayed by her former teacher, Jade Fox, who is bitter because Jen has surpassed her skills—mostly because Jade Fox is illiterate and had to rely on a stolen manual’s diagrams, while the literate Jen could read the text yet did not share those insights with her teacher. Jade Fox is keeping her drugged in a cave, intending to poison her, when Mu Bai and Shu Lien come to the rescue. In the ensuing battle, Mu Bai is struck by one of Jade Fox’s poison darts. Jen rushes off to bring back the antidote, but arrives too late. Mu Bai dies in Shu Lien’s arms, as the two finally confess how much they love each other.

(sniff) Sorry, something in my eye. Anyway, the ever-gracious Shu Lien forgives the young woman and tells her to be true to herself, and to join Lo on Mount Wudang. But things don’t end well for our young lovers either. After spending the night together, Lo finds Jen standing on a bridge at the edge of the mountain. Legend has it that a man once made a wish and jumped off the mountain. His heart was pure so his wish was granted and he flew away unharmed, never to be seen again. Jen asks Lo to make a wish before swan-diving into the mist-filled chasm. Was her heart pure? Did Lo get his wish for them to be back in the desert, happily living as renegades? Or did she plunge to her death? We will never know. Jen is now part of the legend.



When Amazon badly needed a ride, Europe’s Ariane 6 rocket delivered

The Ariane 64 flew with an extended payload shroud to fit all 32 Amazon Leo satellites. Combined, the payload totaled around 20 metric tons, or about 44,000 pounds, according to Arianespace. This is close to maxing out the Ariane 64’s lift capability.

Amazon has booked more than 100 missions across four launch providers to populate the company’s planned fleet of more than 3,200 satellites. With Thursday’s launch, Amazon has launched 214 production satellites on eight missions with United Launch Alliance, SpaceX, and now Arianespace.

The Amazon Leo constellation is a competitor with SpaceX’s Starlink Internet network. SpaceX now has more than 9,000 satellites in orbit beaming broadband to more than 9 million subscribers, and all have launched on the company’s own Falcon 9 rockets. Amazon, meanwhile, initially bypassed SpaceX when selecting which companies would launch satellites for the Amazon Leo program, formerly known as Project Kuiper.

Amazon booked the last nine launches on ULA’s soon-to-retire Atlas V, five of which have now flown, and reserved the rest of its launches in 2022 on rockets that had never launched before: 38 flights on ULA’s new Vulcan rocket, 24 launches on Blue Origin’s New Glenn, and 18 on Europe’s Ariane 6.

An artist’s illustration of the Ariane 6’s upper stage in orbit with a stack of Amazon Leo satellites awaiting deployment. Credit: Arianespace

Meanwhile, in Florida

All three new rockets suffered delays but are now in service. The Ariane 6 has enjoyed the fastest ramp-up in launch cadence, with six flights under its belt after Thursday’s mission from French Guiana. ULA’s Vulcan rocket has flown four times, and Amazon says its first batch of satellites to fly on Vulcan is now complete. But a malfunction with one of the Vulcan launcher’s solid rocket boosters on a military launch from Florida early Thursday—the second such anomaly in three flights—raises questions about when Amazon will get its first ride on Vulcan.

Blue Origin, owned by Amazon founder Jeff Bezos, is gearing up for the third flight of its heavy-lift New Glenn rocket from Florida as soon as next month. Amazon and Blue Origin have not announced when the first group of Amazon Leo satellites will launch on New Glenn.



Ring cancels Flock deal after dystopian Super Bowl ad prompts mass outrage

Both statements verified that the integration never launched and that no Ring customers’ videos were ever sent to Flock.

Ring did not credit users’ privacy concerns for its change of heart. Instead, the company claimed that a joint decision was made “following a comprehensive review” where Ring “determined the planned Flock Safety integration would require significantly more time and resources than anticipated.”

Separately, Flock said that “we believe this decision allows both companies to best serve their respective customers and communities.”

The only hint that Ring gave users that their concerns had been heard came in the last line of its blog, which said, “We’ll continue to carefully evaluate future partnerships to ensure they align with our standards for customer trust, safety, and privacy.”

Sharing his views on X and Bluesky, John Scott-Railton, a senior cybersecurity researcher at the Citizen Lab, joined critics calling Ring’s statement insufficient. He posted an image of the ad frame that Markey found creepy next to a statement from Ring, writing, “On the left? A picture of mass surveillance from #Ring’s ad. On the right? A ring [spokesperson] saying that they are not doing mass surveillance. The company cannot have it both ways.”

Ring’s statements so far do not “acknowledge the real issue,” Scott-Railton said, which is privacy risks. For Ring, it seemed like a missed opportunity to discuss or introduce privacy features to reassure concerned users, he suggested, noting the backlash showed “Americans want more control of their privacy right now” and “are savvy enough to see through sappy dog pics.”

“Stop trying to build a surveillance dystopia consumers didn’t ask for” and “focus on shipping good, private products,” Scott-Railton said.

He also suggested that lawmakers should take note of the grassroots support that could possibly help pass laws to push back on mass surveillance. That could help block not just a potential future partnership with Flock, but possibly also stop Ring from becoming the next Flock.

“Ring communications not acknowledging the lesson they just got publicly taught is a bad sign that they hope this goes away,” Scott-Railton said.



Byte magazine artist Robert Tinney, who illustrated the birth of PCs, dies at 78

On February 1, Robert Tinney, the illustrator whose airbrushed cover paintings defined the look and feel of pioneering computer magazine Byte for over a decade, died at age 78 in Baker, Louisiana, according to a memorial posted on his official website.

As the primary cover artist for Byte from 1975 to the late 1980s, Tinney became one of the first illustrators to give the abstract world of personal computing a coherent visual language, translating topics like artificial intelligence, networking, and programming into vivid, surrealist-influenced paintings that a generation of computer enthusiasts grew up with.

Tinney went on to paint more than 80 covers for Byte, working almost entirely in airbrushed Designers Gouache, a medium he chose for its opaque, intense colors and smooth finish. He said the process of creating each cover typically took about a week of painting once a design was approved, following phone conversations with editors about each issue’s theme. He cited René Magritte and M.C. Escher as two of his favorite artists, and fans often noticed their influence in his work.

A phone call that changed his life

A recent photo portrait of Robert Tinney provided by the family. Credit: Family of Robert Tinney

Born on November 22, 1947, in Penn Yan, New York, Tinney moved with his family to Baton Rouge, Louisiana, as a child. He studied illustration and graphic design at Louisiana Tech University, and after a tour of service during the Vietnam War, he began his career as a commercial artist in Houston.

His connection to Byte came through a chance meeting with Carl Helmers, who would later found the magazine. In a 2006 interview I conducted with Tinney for my blog, Vintage Computing and Gaming, he recalled how the relationship began: “One day the phone rang in my Houston apartment and it was Carl wanting to know if I would be interested in painting covers for Byte.” His first cover appeared on the December 1975 issue, just three months after the magazine launched.

Over time, his covers became so popular that he created limited-edition signed prints that he sold on his website for decades. “A friend suggested once that I should select the best covers and reproduce them as signed prints,” he said in 2006. “Byte was gracious enough to let me advertise the prints when they could fit in an ad (it did get bumped occasionally), and the prints were very popular in the Byte booth at the big computer shows, two or three of which my wife, Susan, and I attended per year. When an edition sold out, I then put the design on a T-shirt.”



Apple releases iOS 26.3 with updates that mainly benefit non-Apple devices

Other additions, and other OSes

Another iOS 26.3 update is also aimed at interoperability, though it may only apply to iPhones covered by European Union regulations. A feature called “notification forwarding” will send your iPhone’s notifications to third-party accessories, including Google’s Android-based Wear OS smartwatches. Once the setting is enabled, users will be able to decide which apps can forward notifications to the third-party device, similar to how Apple Watch notifications work.

In current betas, Apple allows notifications to be forwarded to only one device at a time, and forwarding notifications to a third-party device means you can’t send them to an Apple Watch.

Finally, both iOS 26.3 and iPadOS 26.3 are introducing a feature for some newer devices with Apple’s in-house C1 and C1X modems: a “limit precise location” toggle that Apple says “enhances your location privacy by reducing the precision of location data available to cellular networks.”

This feature is currently only available on a handful of devices and even fewer carriers: In the US, Boost Mobile is the only one. Only the iPhone Air, iPhone 16e, and M5 iPad Pro will offer the toggle; devices like the iPhone 17, iPhone 17 Pro, and older phones with Qualcomm or Intel modems won’t support the feature.

Apple has also updated all of its other major operating systems today. But macOS 26.3, iPadOS 26.3, watchOS 26.3, tvOS 26.3, visionOS 26.3, and version 26.3 of the HomePod software are all quieter updates of the bug-fixes-and-performance-improvements variety. Beta testers have found early evidence of support for the M5 Max and M5 Ultra chips, pointing to pending refreshes for some higher-end Macs, but that doesn’t tell us much we didn’t already know.

The 26.3 updates are mostly sleepy, but the 26.4 releases may be a bigger deal. These are said to be the first to include Apple’s “more intelligent Siri,” a feature initially promised as part of the first wave of Apple Intelligence updates in iOS 18 but delayed after it failed to meet Apple’s quality standards.

Apple and Google jointly announced in January that the new Siri would be powered by Google’s Gemini language models rather than OpenAI’s ChatGPT or other competing models. As with other Apple Intelligence features, we’d expect the new Siri to be available to testers via Apple’s developer and public beta programs before being released to all devices.



After Republican complaints, judicial body pulls climate advice

In short, the state attorneys general object to the document treating facts as facts, as there have been lawsuits that contested them. “Among other things, the Manual states that human activities have ‘unequivocally warmed the climate,’ that it is ‘extremely likely’ human influence drives ocean warming, and that researchers are ‘virtually certain’ about ocean acidification,” their letter states, “treating contested litigation positions as settled fact.” In other words, they’re arguing that, if someone is ignorant enough to start a suit based on ignorance of well-established science, then the Federal Judicial Center should join them in their ignorance.

The attorneys general also complain that the report calls the Intergovernmental Panel on Climate Change an “authoritative science body,” citing a conservative Canadian public policy think tank that disagreed with that assessment.

These complaints were mixed in with some more potentially reasonable complaints about how the climate chapter gave specific suggestions on how to legally approach some issues and assigned significance to one or two recent studies that haven’t yet been validated by follow-on work. But the letter’s authors would not settle for revisions based on a few reasonable complaints; instead, they demand the entire chapter be removed because it accurately reflects the status of climate science.

Naturally, the Federal Judicial Center has agreed. We have confirmed that the current version of the document no longer includes a chapter on climate science, even though the foreword by Supreme Court Justice Elena Kagan still mentions it. The full text of the now-deleted chapter has been posted by the RealClimate blog, though.



Under Trump, EPA’s enforcement of environmental laws collapses, report finds


The Environmental Protection Agency has drastically pulled back on holding polluters accountable.

Enforcement against polluters in the United States plunged in the first year of President Donald Trump’s second term, a far bigger drop than in the same period of his first term, according to a new report from a watchdog group.

By analyzing a range of federal court and administrative data, the nonprofit Environmental Integrity Project found that civil lawsuits filed by the US Department of Justice in cases referred by the Environmental Protection Agency dropped to just 16 in the first 12 months after Trump’s inauguration on Jan. 20, 2025. That is 76 percent less than in the first year of the Biden administration.

Trump’s first administration filed 86 such cases in its first year, which was in turn a drop from the Obama administration’s 127 four years earlier.

“Our nation’s landmark environmental laws are meaningless when EPA does not enforce the rules,” Jen Duggan, executive director of the Environmental Integrity Project, said in a statement.

The findings echo two recent analyses from the nonprofits Public Employees for Environmental Responsibility and Earthjustice, which both documented dwindling environmental enforcement under Trump.

From day one of Trump’s second term, the administration has pursued an aggressive deregulatory agenda, scaling back regulations and health safeguards across the federal government that protect water, air and other parts of the environment. This push to streamline industry activities has been particularly favorable for fossil fuel companies. Trump declared an “energy emergency” immediately after his inauguration.

At the EPA, Administrator Lee Zeldin launched in March what the administration called the “biggest deregulatory action in U.S. history”: 31 separate efforts to roll back restrictions on air and water pollution; to hand over more authority to states, some of which have a long history of supporting lax enforcement; and to relinquish EPA’s mandate to act on climate change under the Clean Air Act.

The new report suggests the agency is also relaxing enforcement of existing law. Neither the White House nor the EPA responded to a request for comment.

A “compliance first” approach

Part of the decline in lawsuits against polluters could be due to the lack of staff to carry them out, experts say. According to an analysis from E&E News, at least a third of lawyers in the Justice Department’s environment division have left in the past year. Meanwhile, the EPA in 2025 laid off hundreds of employees who monitored pollution that could hurt human health.

Top agency officials are also directing staff to issue fewer violation notices and reduce other enforcement actions. In December, the EPA formalized a new “compliance first” enforcement policy that stresses working with suspected violators to correct problems before launching any formal action that could lead to fines or mandatory correction measures.

“Formal enforcement … is appropriate only when compliance assurance or informal enforcement is inapplicable or insufficient to achieve rapid compliance,” wrote Craig Pritzlaff, who is now a principal deputy assistant EPA administrator, in a Dec. 5 memo to all enforcement officials and regional offices.

Only in rare cases involving an immediate hazard should enforcers use traditional case tools, Pritzlaff said. “Immediate formal enforcement may be required in certain circumstances, such as when there is an emergency that presents significant harm to human health and the environment,” he wrote.

Federal agencies like the EPA, whose staffs are dwarfed by the vast sectors of the economy they oversee, typically have used enforcement actions not only to deal with violators but to deter other companies from breaking the law. Environmental advocates worry that without environmental cops visible on the beat, compliance will erode.

Pritzlaff joined the EPA last fall after five years heading up enforcement for the Texas Commission on Environmental Quality, where nonprofit watchdog group Public Citizen noted that he was known as a “reluctant regulator.” Public Citizen and other advocacy groups criticized TCEQ under Pritzlaff’s leadership for its reticence to take decisive action against repeat violators.

One example: An INEOS chemical plant had racked up close to 100 violations over a decade before a 2023 explosion that sent one worker to the hospital, temporarily shut down the Houston Ship Channel and sparked a fire that burned for an hour. Public Citizen said it was told by TCEQ officials that the agency allowed violations to accumulate over the years, arguing it was more efficient to handle multiple issues in a single enforcement action.

“But that proved to be untrue, instead creating a complex backlog of cases that the agency is still struggling to resolve,” Public Citizen wrote last fall after Pritzlaff joined the EPA. “That’s not efficiency, it’s failure.”

Early last year, TCEQ fined INEOS $2.3 million for an extensive list of violations that occurred between 2016 and 2021.

“A slap on the wrist”

The EPA doesn’t always take entities to court when they violate environmental laws. At times, the agency can resolve these issues through less-formal administrative cases, which actually increased during the first eight months of Trump’s second term when compared to the same period in the Biden administration, according to the new report.

However, most of these administrative actions involved violations of requirements for risk management plans under the Clean Air Act or municipalities’ violations of the Safe Drinking Water Act. The Trump administration did not increase administrative cases that involve pollution from industrial operations, Environmental Integrity Project spokesperson Tom Pelton said over email.

Another signal of declining enforcement: Through September of last year, the EPA issued $41 million in penalties—$8 million less than the same period in the first year of the Biden administration, after adjusting for inflation. This suggests “the Trump Administration may be letting more polluters get by with a slap on the wrist when the Administration does take enforcement action,” the report reads.
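The penalty comparison above rests on a simple inflation adjustment: restating a past nominal dollar amount in current dollars by scaling with the ratio of price indices. A minimal sketch of that arithmetic, using made-up CPI values and a made-up nominal figure purely for illustration (none of these numbers come from the report):

```python
def inflation_adjust(nominal_dollars: float, cpi_then: float, cpi_now: float) -> float:
    """Restate a past dollar amount in today's dollars:
    real = nominal * (CPI_now / CPI_then)."""
    return nominal_dollars * (cpi_now / cpi_then)

# Hypothetical example: $45M of penalties issued when the CPI stood at 270,
# restated for a later period when the CPI is 294.
adjusted = inflation_adjust(45.0, 270.0, 294.0)
print(round(adjusted, 1))  # 49.0 (in millions)
```

The same scaling works in either direction; dividing instead of multiplying deflates a current figure back to a past year's dollars.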

Combined, the lack of lawsuits, penalties, and other enforcement actions for environmental violations could impact communities across the country, said Erika Kranz, a senior staff attorney in the Environmental and Energy Law Program at Harvard Law School, who was not involved in the report.

“We’ve been seeing the administration deregulate by repealing rules and extending compliance deadlines, and this decline in enforcement action seems like yet another mechanism that the administration is using to de-emphasize environmental and public health protections,” Kranz said. “It all appears to be connected, and if you’re a person in the US who is worried about your health and the health of your neighbors generally, this certainly could have effects.”

The report notes that many court cases last longer than a year, so it will take time to get a clearer sense of how environmental enforcement is changing under the Trump administration. However, the early data compiled by the Environmental Integrity Project and other nonprofits shows a clear and steep shift away from legal actions against polluters.

Historically, administrations have a “lot of leeway on making enforcement decisions,” Kranz said. But this stark of a drop could prompt lawsuits against the Trump administration, she added.

“Given these big changes and trends, you might see groups arguing that this is more than just an exercise of discretion or choosing priorities [and] this is more of an abdication of an agency’s core mission and its statutory duties,” Kranz said. “I think it’s going to be interesting to see if groups make those arguments, and if they do, how courts look at them.”

This article originally appeared on Inside Climate News, a nonprofit, non-partisan news organization that covers climate, energy, and the environment. Sign up for their newsletter here.




Bad sleep made woman’s eyelids so floppy they flipped inside out, got stuck

Exhausted elastin

As such, the correct next step for addressing her floppy eyelids wasn’t eye surgery or medication—it was a referral for a sleep test.

The patient did the test, which found that while she was sleeping, she stopped breathing 27 times per hour. On the apnea–hypopnea index, that yields a diagnosis of moderate-level OSA.
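The "moderate" label follows the commonly cited clinical cutoffs for the apnea–hypopnea index (roughly: under 5 events per hour normal, 5 to 15 mild, 15 to 30 moderate, 30 and above severe). As a minimal sketch of that grading, with `classify_ahi` a hypothetical helper and the cutoffs my assumption of the convention used, not something stated in the article:

```python
def classify_ahi(events_per_hour: float) -> str:
    """Map an apnea-hypopnea index (breathing interruptions per hour
    of sleep) to a severity grade, using commonly cited cutoffs."""
    if events_per_hour < 5:
        return "normal"
    if events_per_hour < 15:
        return "mild"
    if events_per_hour < 30:
        return "moderate"
    return "severe"

# The patient in the article stopped breathing 27 times per hour.
print(classify_ahi(27))  # moderate
```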

With this finding, the woman started using a continuous positive airway pressure (CPAP) machine, which delivers a steady stream of air into the airway during sleep, preventing it from closing up. Combined with eye lubricants, nighttime eye patches, and a weight-loss plan, the treatment brought rapid improvement. After two weeks, her eyelids were no longer inside out, and she could properly close her eyes. She was also sleeping better and no longer had daytime drowsiness.

Doctors don’t entirely understand the underlying mechanisms that cause floppy eyelid syndrome, and not all cases are linked to OSA. Researchers have hypothesized that genetic predispositions or anatomical anomalies may contribute to the condition. Some studies have found links to underlying connective tissue disorders. Tissue studies have clearly pointed to decreased amounts or abnormalities in the elastin fibers of the tarsal plate, the dense connective tissue in the eyelids.

For people with OSA, researchers speculate that the sleep disorder leads to hypoxic conditions (a lack of oxygen) in their tissue. This, in turn, could increase oxidative stress and reactive oxygen species in the tissue, which can spur the production of enzymes that break down elastin in the eyelid. Thus, the eyelids become lax and limp, allowing them to get into weird positions (such as inside out) and leading to chronic irritation of the eye surface.

The good news is that most people with floppy eyelid syndrome can manage the condition with conservative measures, such as CPAP for those with OSA, as did the woman in New York. But some may end up needing corrective surgery.



FBI stymied by Apple’s Lockdown Mode after seizing journalist’s iPhone

Apple made Lockdown Mode for people at high risk

CART couldn’t get anything from the iPhone. “Because the iPhone was in Lockdown mode, CART could not extract that device,” the government filing said.

The government also submitted a declaration by FBI Assistant Director Roman Rozhavsky that said the agency “has paused any further efforts to extract this device because of the Court’s Standstill Order.” The FBI did extract information from the SIM card “with an auto-generated HTML report created by the tool utilized by CART,” but “the data contained in the HTML was limited to the telephone number.”

Apple says that Lockdown Mode “helps protect devices against extremely rare and highly sophisticated cyber attacks,” and is “designed for the very few individuals who, because of who they are or what they do, might be personally targeted by some of the most sophisticated digital threats.”

Introduced in 2022, Lockdown Mode is available for iPhones, iPads, and Macs. It must be enabled separately for each device. To enable it on an iPhone or iPad, a user would open the Settings app, tap Privacy & Security, scroll down and tap Lockdown Mode, and then tap Turn on Lockdown Mode.

The process is similar on Macs. In the System Settings app that can be accessed via the Apple menu, a user would click Privacy & Security, scroll down and click Lockdown Mode, and then click Turn On.

“When Lockdown Mode is enabled, your device won’t function like it typically does,” Apple says. “To reduce the attack surface that potentially could be exploited by highly targeted mercenary spyware, certain apps, websites, and features are strictly limited for security and some experiences might not be available at all.”

Lockdown Mode blocks most types of message attachments, blocks FaceTime calls from people you haven’t contacted in the past 30 days, restricts the kinds of browser technologies that websites can use, limits photo sharing, and imposes other restrictions. Users can exclude specific apps and websites they trust from these restrictions, however.



Trump admin is “destroying medical research,” Senate report finds

Senators also pressed the director on the future of the NIH, noting that it has been hamstrung by the ongoing chaos, putting upcoming grant funding at risk, too. Of the NIH’s 27 institutes and centers, Bhattacharya testified, “I think it’s 15” that are without a director. Sen. Patty Murray (D-Wash.), meanwhile, noted that more than half of the institutes are on track to lose all their voting advisory committee members by the end of the year—and grants cannot be approved without sign-off from these committees. Bhattacharya responded that they’re working on it.

Weasely answers on vaccines

In the course of the hearing, senators also tried to assess Bhattacharya’s loyalty to Kennedy’s dangerous anti-vaccine ideology, which includes the false and thoroughly debunked claim that vaccines cause autism.

Sanders asked Bhattacharya directly: “Do vaccines cause autism? Yes/no?”

“I do not believe that the measles vaccine causes autism,” Bhattacharya responded.

“No, uh-uh,” Sanders quickly interjected. “I didn’t ask [about] measles. Do vaccines cause autism?”

“I have not seen a study that suggests any single vaccine causes autism,” Bhattacharya responded.

But this, too, is an evasive answer. Note that he said “any single vaccine,” leaving open the possibility that he believes vaccines collectively or in some combination could cause autism. The measles vaccine, for instance, is given in combination with immunizations against mumps, rubella, and sometimes varicella (chickenpox).

It would also be false to suggest vaccines in combination are linked to autism; numerous studies have found no link between autism and vaccination generally. Still, this is a false idea that Kennedy and the like-minded anti-vaccine advocates he has installed into critical federal vaccine advisory roles are now pursuing.

Later in the hearing, Bhattacharya also indicated that when he said “I have not seen a study,” he was suggesting that it was because such studies have not been done—which is also false; routine childhood vaccines have been extensively studied for safety and efficacy.

“I’ve seen so many studies on measles vaccines and autism that established that there is no link,” he said in an exchange with Hassan on the subject. “The other vaccines are less well studied.”
