
Monthly Roundup #35: October 2025

It is increasingly strange compiling the monthly roundup, because life comes at us fast. I look at various things I've written, and it feels like they are from a different time. Remember that whole debate over free speech? Yeah, that was a few weeks ago. Many such cases. Gives one a chance to reflect.

In any case, here we go.

  1. Don’t Provide Bad Training Data.

  2. Maybe Don’t Say Maybe.

  3. Throwing Good Parties Means Throwing Parties.

  4. Air Travel Gets Worse.

  5. Bad News.

  6. You Do Not Need To Constantly Acknowledge That There Is Bad News.

  7. Prediction Market Madness.

  8. No Reply Necessary.

  9. While I Cannot Condone This.

  10. Antisocial Media.

  11. Government Working.

  12. Tylenol Does Not Cause Autism.

  13. Jones Act Watch.

  14. For Science!.

  15. Work Smart And Hard.

  16. So Emotional.

  17. Where Credit Is Due.

  18. Good News, Everyone.

  19. I Love New York.

  20. For Your Entertainment.

  21. Gamers Gonna Game Game Game Game Game.

  22. I Was Promised Flying Self-Driving Cars.

  23. Sports Go Sports.

  24. Opportunity Knocks.

People should be free to squander their money, but when other people make bad choices, this tends to not go well for you either, even when talking about consumer choices, let alone things like ‘building superintelligence thus causing everyone to die.’

Bryan Caplan: If everyone but you squanders their money, everyone but you suffers.

If everyone but you votes for terrible policies, everyone including you suffers.

Eliezer Yudkowsky: Idiots with buying power have negative externalities, not just idiots with voting power. It means going on Amazon and seeing crap. It’s an easier problem to attack but not at all trivial.

Eg broken Amazon reviews won’t let you find one non-crappy product even if it exists.

Or to put it in another way: we live in an economy, not just a legal system. I too feel like there could and should be a libertarian solution rather than a tyrannical one, but I’m not in denial about the problem.

There are also times when you don't want to compete for the good stuff, or when others' bad choices make it easy for you to save money or turn a profit, and it goes well for you.

Most of the time, no, you want everyone to choose wisely. When others have good taste, and choose quality products, the market produces quality products. If not, not. When they rate things properly, you can find the good stuff. When we collectively make choices that enrich everyone, that too is good for everyone, and so on. It’s #NotOnlyPolicy.

Again, no, you shouldn’t coerce these choices, but you mostly don’t want to live in a world where everyone else is squandering their money in dumb ways.

Hosts don't like it when you reply 'maybe' to an event invitation; they'd feel more respected if you said 'no.' Certainly by saying 'maybe' you are making life easier for yourself and harder for the host. Invitees told themselves the 'maybe' indicated interest, which it does, but to the host it is mainly annoying, since they have to plan for both outcomes.

Thus, you should only reply ‘maybe’ if you get high value from the option, or you attending provides a lot of value to the host and you’re genuinely unsure. Assume that by doing so you are imposing a cost.

Uri gives us 21 Facts About Throwing Good Parties (via MR). Mostly this seems like great advice, starting with 'announce the start at a quarter-hour so people will only be 15 minutes late,' especially if you are trying to optimize the party as a public service.

My biggest new takeaway is that I was underweighting the 'know who else is going' value of using apps like Partiful or Luma. I strongly endorse that if you want to leave a group at a party, you straight up walk away and don't say anything (and if a group is bigger than ~5, strongly consider leaving it).

One thing I challenge is his claim that if a party is gender imbalanced, the sparse gender will stop attending. It's definitely true that women become apprehensive if too outnumbered, but the effect running the other way is probably symptomatic of a party design that didn't appeal to men in the first place. Men are not, in general, less likely to show up to a party when there are going to be more women.

My biggest old takeaway, the first one on the list this time, is that throwing a party at all is good: you are not throwing enough parties, and you should not let the perfect be the enemy of the good. The MVP (minimum viable party) is inviting friends over, providing some food, and chilling. That's a remarkably valuable public service. If obsessing over details would stop you from throwing or enjoying the party, skip those details.

One can compare this to Auren Hoffman’s advice for dinner parties, which I discussed in the January 2025 roundup, where again a central theme is not letting perfect be the enemy of the good. I stand by my disagreement there that it is important in a dinner party that the food be good. It is less important for other party formats, but also in a pinch not so difficult to make the food good because delivery services exist.

Also one can reiterate this thread from Kasay and my commentary on it from June 2024, especially that you only need 14 square feet per person, and that ultimately the most important decision is who to invite, with another key factor being engineering a space to create alcoves of conversation.

Airplanes seem to have a growing systemic problem with exposing passengers to fumes in a way that is importantly bad for their health, which they have handled badly and covered up in similar fashion to other industries with similar health issues.

Eliezer Yudkowsky: If it’s true, aircraft manufacturers and airlines are engaging in a classic denial-coverup in the style of cigarette companies, leaded gasoline makers, or AI builders. (At smaller scale.)

Patrick Collison: I’ve been tracking it for a few years (outside of MSM); I’m pretty sure it’s true in a way that does not reflect well on the sector.

My quick math says that this is not common enough for you to worry much about it as a passenger, and that air travel remains far safer than other forms of travel even in the worst case. Claude did a worst-case scenario estimate based on the claims and came up with a cost per flight of 0.00003 QALY, which is still at least one order of magnitude lower than any alternative form of transportation.
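To make that concrete, here is a back-of-the-envelope version of the comparison. The 0.00003 QALY per flight figure is the one quoted above; the trip length, road fatality rate, and QALYs lost per fatality are my own rough assumed numbers, not anything from Claude's estimate.

```python
# Rough sanity check: worst-case fume exposure vs. driving the same trip instead.
# All inputs besides QALY_COST_PER_FLIGHT are assumed round numbers.

QALY_COST_PER_FLIGHT = 3e-5    # worst-case estimate quoted in the text

TRIP_MILES = 800               # assumed length of the drive that replaces the flight
FATALITIES_PER_MILE = 1.2e-8   # roughly 1.2 deaths per 100 million vehicle-miles (US)
QALY_PER_FATALITY = 35         # assumed average life-years lost per road death

driving_qaly_cost = TRIP_MILES * FATALITIES_PER_MILE * QALY_PER_FATALITY
print(f"Driving fatality risk alone: {driving_qaly_cost:.5f} QALY per trip")
print(f"Ratio vs worst-case fume estimate: {driving_qaly_cost / QALY_COST_PER_FLIGHT:.0f}x")
# Comes out to roughly 0.0003 QALY, an order of magnitude above the fume figure,
# before counting non-fatal injuries on the road.
```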

But this is worth noticing.

The French left is being heavily influenced by a so-called ‘economist’ advocating for ultimately instituting an 8% annual wealth tax, also known as full confiscatory taxation of all wealth subject to such a tax, which can then serve as a warning to the next ten generations. Nine of which will presumably forget, but that’s the way it goes.

Halloween, a holiday so not deep that you don’t even get the day off of work or school, is out of control, complete with not only sales and banners but people who put intentional nightmare fuel (as in, items intentionally created to be scary especially to kids) on display for over a month.

Lady Nimby: Why would you want this at your doorstep for 10% of your life? Why are my kids seeing this?

Mason: Yeah, I love Halloween but I think you should be able to easily opt out of nightmare fuel. This is great decor for your indoor/backyard party.

I’m not saying we should ban you from displaying that stuff if you want to, it’s your house. I am definitely saying that you shouldn’t want to beyond a few days tops, because it is cool in the context of a party or active trick-or-treating, and totally not cool left in place for several days before or after that.

Halloween is of course only a secondary offender when compared to Christmas.

Broadway musicals are in deep trouble. Since 2020 there have been 46 new musicals, costing $800 million, and 43 of them have lost money, including all 18 last season. One neglected explanation is that New York has de facto banned hotel construction and largely banned Airbnb, so hotel prices are way up. The good news is that this would be easy to fix.

An article I suspect is itself slop (but in this case, fair!) says We Are The Slop, living our lives in order to generate entertainment slop for others. Certainly influencers do this, that is their job, but the basic math says that like rock stars there is a lot of aspiration to that job but there are not so many slots that pay real money. Most of us do indeed live our lives with the cameras off.

There is a lot of bad news in the world. There is also a lot of good news in the world.

I cannot emphasize enough that the existence of bad news, and especially of political bad news, should not be a constant preface to conversation or good news.

You do not need to continuously self-flagellate to show that you care about the bad news. If others do insist that you do this, that is no good, you need to push back.

This applies across all causes, across the political spectrum, and it especially applies to AI and the fact that it is likely that someone is going to build superintelligence and then everyone will die. Yes, And That’s Terrible, and we should spend some of the time trying to prevent it, but at other times the show must go on. Worry, but also at other times Be Happy.

Tyler Alterman: “Share your happiness with the world.” Duh, right? That’s some basic bh stuff. But recently a Buddhist nun said this to me in this sort of knowing way and it’s been changing my life

I realized that I am often not only hiding my happiness but actively turning it down. I’m doing this to fit in, to connect with the zeitgeist. And today’s zeitgeist has made it borderline offensive to be happy

“You’re happy? What about the war? And misaligned AI? And Tr*mp???”

Being happy is uncool right now in academia, amongst liberals, amongst humanitarians, and in art circles. It’s cringe in many locales of NYC and twitter. So I noticed that when I walk smiling through the streets, I start to feel like I’m Out Of Touch

This nun, however, was pointing out that if you don’t share your happiness, if you don’t let thy cup runneth over, you’re depriving other people of something that can light them up

No one wants to be seen as naive, spiritually bypassing, or brushing aside the horrors of the world. But a mature form of happiness, one that acknowledges these horrors, and which shines despite them…? that strikes me as exactly the sort of thing we need right now

Nick Cammarata: almost got attacked in a russian subway over this. someone was angrily like what the f are you smiling about bc you’re not supposed to smile there but i don’t speak russian so i ignored him and he freaked but it was okay

he was coming up behind me and someone I was with noticed I didn’t realize and grabbed me and pointed me towards him. door happened to be open so we hopped out and took the next one a min later, whole thing was like 15s

Chris Lakin: “I must suffer to show that I care” is a memetic virus.

Jake Eaton (again, this applies to a wide variety of groups): among a fraction of my leftiest friends, there appears to be an internalized norm that you cannot celebrate life if you don’t first acknowledge political reality. i’ve seen birth announcements that begin with the state of American politics, family photos captioned with “the world sucks but here’s this,” instagram carousels that begin with “this year has been hard,” so that i need to read on to figure out whether it’s cancer or trump

maybe it’s an expression of grief — fine. but my sense is that the most progressive environments demand an outward expression of despair before personal celebration, either as some sort of act of solidarity or otherwise guilt. this started in the workplace before moving into our personal lives

I wish I could find ways to explain how quietly corrosive that is, both socially, but more so personally. it makes me sad not for the world but for them! but my experience — having been in those environments for years — is that you have to find your own way out

Robin Hanson points out that if you are trying to do futarchy, you can avoid any non-causal correlation issues by creating enough different markets based on possible random decisions and decisions at different time points. I mean, yeah, okay, technically that works, but even if you can get clean definitions for all of this, how are you going to get price discovery on all of them? This is not a realistic ask.

There was insider trading of the winner of the Nobel Peace Prize. Which is good.

Jason Furman: The other day a student asked me about the prevalence of insider trading in prediction markets. I now have an answer.

If I was asked to draw a graph of what insider trading on a prediction market looks like, I would draw this graph. At a time when the outcome could plausibly be known, a rapid shoot upwards of the winner, up to some upper limit that still allows substantial gains, then settling at that limit, until the jump to ~100%.

The even better news is that since insider trading mostly exhibits such obvious patterns, it is easy to avoid it. Here, there are three easy principles to learn.

  1. Do not trade after the decision has already been made and could be known.

  2. If you must trade, do not trade against this kind of sharp move up.

  3. Definitely, absolutely do not have resting sell orders on the book this late.

The person buying is claiming to know the outcome, that the fair price is 100. You might choose to trade against that person if you think they’re often wrong, but understand that this is what you will be doing.
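As a purely illustrative sketch (not from the post, and with arbitrary thresholds), here is roughly what a check for that pattern might look like before you place a trade:

```python
# Flag the classic insider pattern described above: a rapid shoot upward once the
# outcome could plausibly already be known. Threshold and data are illustrative only.

def looks_like_insider_move(prices, decision_index, jump_threshold=0.25):
    """prices: chronological market probabilities in [0, 1].
    decision_index: first index at which the outcome could plausibly be known."""
    post_decision = prices[decision_index:]
    return any(
        later - earlier >= jump_threshold   # sharp single-step move up
        for earlier, later in zip(post_decision, post_decision[1:])
    )

# Example: quiet market, then a jump from ~20% to ~70% once insiders could know.
history = [0.18, 0.20, 0.19, 0.21, 0.70, 0.72, 0.71]
print(looks_like_insider_move(history, decision_index=3))  # True: do not sell into this
```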

Should insider trading be allowed here, as it is on Polymarket? I say yes. It depends on your goal. Do you want to be able to trade at and know the non-insider price, or to know the insider price and info? You can get at most one of those two things.

Norwegian officials are predictably not amused by this particular bout of insider trading, and are investigating what they call a ‘criminal actor who wants to earn money on our information.’

Polymarket: JUST IN: It has been revealed only 5 people at the Nobel Peace Prize foundation knew the winner before they were announced.

Everyone checking Polymarket knew.

A good periodic reminder of one reason people often reject prediction markets:

Robin Hanson: Years ago a NYC based software firm ran some prediction markets, hoping in part to find & promote “diamonds in the rough” employees who predict especially well. They did find such, but then said “Oh, not them”; such folks didn’t have the polish & style they wanted.

Let that be a warning to those who think that being proven right will gain them more respect and inclusion.

Duncan Sabien notes that comments often create an obligation to respond, and suggests a new way of differentiating ask culture versus guess culture. I see what he's trying to do in connecting them, and both ideas are interesting, but I think they are mostly distinct.

The obligation to respond is that if others see a criticism [Z] to which you don’t respond, or others mischaracterize your [X] as if you said [Y] and respond to [Y], and especially if others then argue about [Y], then you’re in trouble. In the first case, your failure to respond will imply you don’t have a good response to [Z]. In the second case, they’ll start to believe you really did say [Y].

The ultimate source of this obligation to respond is, essentially, that your failure to respond would be Bayesian evidence of inability to respond, or to respond well.

As in, if I get a comment [C] that says [Z], and I had a good response [R] that answers [Z], then why didn’t I respond to [Z] with [R]? A conspicuous non-answer suggests I don’t know of any such [R]. A bad answer [B] also suggests I don’t have a good [R].

A non-conspicuous non-answer does not. One price of consistently engaging with critical comments or statements, in any context, is that an increasing share of non-answers become conspicuous.

Indeed, one of the reasons I rarely respond to comments these days is that I do not wish to create this obligation to respond to other comments, to avoid the time sink. When I do consider a comment worth responding to, because many would want to see that, I will often do so by quoting it in a full post.

The theory on guess versus ask culture is that the distinction is about how many ‘echoes’ you trace. As in, ask culture traces zero echoes, you simply ask for what you want, and they are responsible for saying no and not holding the ask against you. Whereas guess culture traces one echo, you think about how they would respond, and you can imagine sophisticated others (guess harder!) tracking many echoes.

I think this is an interesting idea but I don’t think it is right. In both guess and ask culture, you are responsible for an unlimited number of potential echoes. The difference is what consequences and thus echoes an action causes.

Technically speaking, I think the actual dial is either or both of these:

  1. Narrowly: Ask culture is created by radically raising the penalty for imposing a penalty upon someone for refusing an explicit request. As you turn up that penalty, and turn it up under a broader set of circumstances, you get Ask culture.

  2. Broadly: Guess culture is created by, in an increasing variety of circumstances, punishing fully generally the creation of common knowledge.

In case #1, I am less penalized for saying no, which means that there is far less reason to penalize you for asking, which in turn means you should ask, and indeed because you should ask I can then put the impetus upon you to ask, and impose various penalties upon you for instead trying to have me guess, and also my guess if you don’t ask is that you probably aren’t implicitly asking either.

Explanation #2 is more complete, more general, and cleaner, once you grok it.

Ask culture very much does not get you out of tracking social echoes in general.

The ultimate move here, in both guess and ask culture, is the anti-ask: No Reply Necessary, or writing 'NRN' the way you would in an email. Duncan mentions less blunt ways to say this, but I prefer the classic version, the straight NRN, without further explanation. As in: here is some information, and it is 100% fine on all levels to simply ignore it if you do not find it useful.

John Wentworth advises us how to dress to improve our epistemics; the central thesis is that coolness is status countersignaling. This oversimplifies but is a helpful note.

Have you tried making any effort at all to talk to people in the industry you are studying or want to know about, including any of the many many free ways to request this? It seems very often the answer for PhD students is no. So you get papers where data is analyzed in detail but there has been zero contact with anyone in the real world. In general, remarkably many people are willing to talk to you if you ask.

Scott Sumner points to the concept of ‘defining deviancy up’ by extending words that refer to extremely wrong and bad things [X] (such as genocide, slave labor or pedophilia) to include related things [Y] that most people would agree are, at minimum, a lot less wrong or bad. If you try to respond that [Y] is less bad than [X], or that the new expansive definition covers things it shouldn’t, people respond by claiming you’re saying [X] is fine (or you’re condoning [Y]). Other times, or eventually, things loop around, and definitions are so expansive the word becomes meaningless or fine, the same way mere ‘speeding’ is now ignored so they invented ‘reckless driving.’

People are, according to a new study, 'much more likely' to purchase 'stigmatized' items like condoms and pregnancy tests at self-checkout counters.

Abstract: On the intensive margin, we show that stigmatized items are much more likely to be purchased at self-checkout than at cashier registers, especially condoms and pregnancy tests. We estimate that customers are willing to pay 8.5 cents in additional time cost for the privacy of purchasing stigmatized items at self-checkout.

I totally buy that if there is an open self-checkout line and an open cashier, and you are buying a pregnancy test, you are going to go for self-checkout on the intensive margin. Sure.

But if anything, this effect looks surprisingly tiny. Customers are only willing to pay 8.5 cents in additional time? That’s not a lot of stigma. If one values time at $20 per hour, then this is on the order of fifteen seconds. Do you have any idea what people will do in other contexts to avoid mild social awkwardness? If people had the opportunity to pay money to not have to look anyone in the eye, some would pay $0, but you could get some people for dollars, and also you can stimulate new demand.
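For reference, here is the arithmetic behind that fifteen-second figure, using the $20 per hour value of time from the paragraph above (other wages shown for comparison):

```python
# Convert the study's 8.5-cent privacy premium into seconds of checkout time.
extra_privacy_cost = 0.085  # dollars, from the study's estimate
for wage in (10, 20, 40):   # dollars per hour; $20/hour is the value used in the text
    seconds = extra_privacy_cost / wage * 3600
    print(f"At ${wage}/hour: {seconds:.0f} seconds")
# At $20/hour this comes to roughly 15 seconds.
```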

Tyler Cowen: I even draw distinctions across automated models. For instance, if I have “a stupid question,” I am more likely to ask Grok, since I would rather GPT maintain a higher opinion of what I do and do not know.

Dismalist: Reminds me of a line in Mad About You when after hiring a cleaner to start coming the next day, Jamie starts cleaning the night before. Paul sees her, and says: We don’t need a cleaner. We need a credible threat of a cleaner!

If you worry about ChatGPT or Claude’s memory on this, which I wouldn’t, you can use a temporary chat, or delete the chat afterwards. Let’s not panic and use Grok.

Also, yeah, I absolutely hate the thing where you hire a person to do [X], and then you get pressure to do [X] for them in advance to avoid looking bad or being rude or what not. The whole point of hiring them is to get them to do [X] so you don’t have to, or so they can do it better.

Another commentator notes that the right amount of stigma is sometimes not zero, indeed one can see this because for some prosocial items we approve of the existing negative stigma (as in, you buy wisely and the cashier looks at you approvingly).

China cracks down on 'negative emotional contagion' and 'excessively pessimistic' social media users. I do agree with Tyler Cowen that if you are spreading negative emotional contagion, 'there is a very good chance' you are likely to be 'part of the problem,' but it is a hell of a thing to 'crack down' on it.

Lily Kuo (NYT): The authorities have punished two bloggers who advocated for a life of less work and less pressure; an influencer who said that it made financial sense not to marry and have children; and a commentator known for bluntly observing that China still lags behind Western countries in terms of quality of life.

… Beijing is concerned that such pessimism doesn’t just discourage citizens from being productive members of society. It could turn into criticism of the ruling Communist Party.

… In the city of Zhengzhou in central China, officials said two social media account owners were investigated for portraying the city in an unflattering light.

… Weibo, a popular microblog, said last week that it suspended more than 1,200 accounts that “spread rumors” about the economy and government welfare programs.

Banning not only political dissent but any and all pessimistic speech in this way is, shall we say, highly pessimistic speech, not a sign things are going well.

Lily Kuo: “The official message of positivity is contrasted by an economic reality that is just starkly different compared with the last decades,” said Katja Drinhausen, head of Chinese politics and society at the Mercator Institute for China Studies. “It will not be enough to keep online negative emotions in check.”

I do not expect this to end well.

Are you excited for this new method of sharing content on Twitter? I know I am.

Nikita Bier (Twitter): Starting next week we’ll be testing a new way to share and engage with web links on Twitter. The goal will be to ensure all content on the platform has equal visibility on Timeline.

Johnny v5: oh wow. hoping this is legit. we all know AOL is the future. but hyperlinks — while mostly an untested technology — have shown some promise as a niche applications

Ashkhen Kazaryan sounds the alarm about the case Anderson v. TikTok, in which it was held that if an algorithm promotes harmful content, it cuts through Section 230 immunity and becomes the platform's speech. As Kazaryan argues, the modern internet does not work without curation or without algorithms.

This is a tricky problem. Obviously you can’t make everything in every algorithmic feed (or ‘for you’ page) the responsibility or speech of the platform, as the Third Circuit did here, or you effectively ban such feeds. Also obviously, if you intentionally steer users towards particular content sufficiently strongly, then that should be on you. So you need a limiting principle to determine what constitutes a sufficiently non-neutral algorithm.

Sound advice from Rob Miles that bears repeating.

Rob Miles: There are a bunch of really basic and easy ways to improve your social media experience that I see smart people not doing.

  1. Turn off auto-playing wherever possible

  2. When you see something that you would prefer not to have seen, consider why it’s on your feed, and use the tools to remove it. You can unfollow people, mute people, mute words, or turn off retweets from people

  3. Deliberately don’t engage with things you want to see less of. If you engage with things because they make you angry or scared, social media will dump more of those things on you. Engage with what you want to see more of

  4. One thing I do is ‘tending the garden’: Scroll through your feed one item at a time, and for every single one, consider if you want more or less of that, and take action. Feed what you want, weed out what you don’t. Just a few minutes of deliberate regular maintenance helps a lot.

  5. Try to never use social media apps, just view in the browser, where you’re in control, and use tools like UBlock Origin and TamperMonkey to change things. LLMs are great at writing Tampermonkey scripts, I can simply ask my buddy Claude to make the website just how I want it!

I cannot emphasize #3 enough, and I should try #5. With notably rare exceptions in high value spots, the rule is to never, ever, ever interact with something you want to not see in the future, no matter how wrong someone is on the internet. Negative interaction is interaction. That does not include muting or blocking, or saying ‘see less of this,’ which might not do much but are at least not going to make it worse.

Twitter’s new algorithm has a ‘reputation score’ from 0-100, where low scores reduce reach, and there is no way to check your own rating. I am actually rather sympathetic to this in theory, because reputation should absolutely play a role in reach, and also if you shared people’s reputations you can imagine what would happen next and all the accusations that would fly. The problem is I absolutely do not trust Elon Musk or Twitter to not put a thumb on the scale for various reasons both ideological and otherwise, and I also don’t trust them to not mess this up. If we are going to do this, the algorithm needs to be transparent, even if that doesn’t make it calculable from the outside.

[Editor’s note: As always, if you like avoiding politics as much or even more than I do, especially in 2025, consider skipping this section. The failure to mention other political topics here or elsewhere does not mean that I am not aware of or do not care about them, or that the ones I did choose to mention are the most important.]

It is currently not working due to a shutdown. This is mostly not about that.

President Donald Trump (link has video, from Kirk’s memorial service): [Charlie Kirk] did not hate his opponents. He wanted the best for them.

That’s where I disagreed with Charlie. I hate my opponent and I don’t want the best for them. I’m sorry.

The free speech situation is extremely terrible. I don’t care who started it, or who did what first, free speech is the most important, most sacred principle, period.

FIRE: President Trump suggested today that media outlets are engaging in “hate speech” by being “unfair” to him and “maybe” should be prosecuted.

Trump’s statement demonstrates the inherent danger of “hate speech” laws: Those in power will always weaponize them to silence dissent.

Trump's statements have repeatedly made it clear that he does not believe in free speech and demands control over the media, that he thinks the media needs to support him, or at least not oppose him, or else. Carr's jawboning, as warned about by Ted Cruz, is only the most blatant incident.

While the situation is extremely dire, we need to avoid saying things like ‘unprecedented attacks’ on free speech, or that it’s all over for free speech, or anything like that. This is a clear misunderstanding of the history of free speech, and an example of the kind of ‘spreading negativity’ that does indeed make you part of the problem, except unlike China I would never want to tell you that you couldn’t say it.

Free speech has always been constantly under attack even in America, and I’m mostly not talking about the last decade. Our second president, John Adams, went hard after the speech of his opponents. We’ve been going back and forth on this for a very long time. Woodrow Wilson went after it hard. McCarthyism went after it extremely hard. Things after 9/11 or around 2020 were very not good. And so on. Social pressure on speech, including by the government, is universal.

It was only this month that YouTube agreed (under new government pressure) to reinstate the accounts of those who were suspended for saying the wrong things about Covid under a very broad supposed 'misinformation' crackdown instigated by heavy pressure from the Biden Administration. YouTube admitted that the original suspensions came under Biden Administration pressure to censor speech that did not violate YouTube's policies, which it now says was 'unacceptable and wrong.'

Many of the statements that got accounts suspended ultimately proved accurate, although of course many others were both highly irresponsible and highly false. Presumably, if I had done readings of my Covid posts on YouTube, or reposted the texts to Facebook, I would have been suspended many times over.

Rep. Jim Jordan: But that’s not all. YouTube is making changes to its platform to prevent future censorship.

YouTube is committing to the American people that it will NEVER use outside so-called “fact-checkers” to censor speech.

No more telling Americans what to believe and not believe.

YouTube also is trying out Community Notes.

@elonmusk was ahead of the curve. Meta followed suit. And now YouTube.

I am glad to see these changes, but it does make one ask: what is the limiting principle?

eigenrobot: ok here’s a fun one is it restricting free speech to pressure a media platform to reinstate accounts that it had previously removed perhaps in response to government pressure. good luck figuring this one out using principles that are both coherent and non-exploitable

I think my answer is ‘the pressure on YouTube here is in practice okay because it is (1) a push towards more speech and (2) undoing previous government pressure,’ and it would be unacceptable if either clause was untrue.

For this go around, I'd draw a clear distinction, so far, between incidents directly related to Charlie Kirk, and incidents about other things. Performative lawsuits and wishful or general statements aside, so far from what I have seen actual consequences have been confined to people who decided to say ill-advised things specifically related to Charlie Kirk or his assassination, or at least to people's actions surrounding that. Which is a relatively reasonable and compact thing to get big mad about. Even comedians have the rule of 'too soon.'

Let us not forget all the hard earned progress we have made, even if the last decade has involved some backsliding and dialectic escalation. That doesn’t mean we can relax. We have to fight for this all the more. It does mean don’t despair.

Things are quite bad, but don’t catastrophize. Whenever you see a decision like Disney indefinitely caving on Kimmel, suspending a show that was actively hemorrhaging money, you get people saying this means that they are now ‘state owned media’ or fascist or otherwise fully under state control. That’s not how any of this works, and wouldn’t be even if he had stayed off the air, although it was of course very good and right to exert a lot of pressure on Disney to bring him back.

I also notice that, as terrible as this is, we don’t need to be too concerned about broadcast television or its licenses any longer. Only 4% of households rely on broadcast television. If you strike down a Kimmel or Colbert, and demand is there, you only make them stronger. I don’t think we should sell off the broadcast spectrum, at least not quite yet. I think there’s value in preserving the low-end solutions to things. But I wouldn’t lose sleep over it if we did pull that plug entirely.

If you strike down a Kimmel, and then there’s enough noise that Disney puts him back, you’ve very much gone into Streisand Effect territory and also royally pissed everyone involved off, pretty much across the comedy and journalist spectrums.

Then if you respond to the restoration by announcing you’re going to sue ABC, because they dared give into your previous lawsuit to bend the knee and keep the peace? Yeah, I’m guessing that is not going to go the way he would like. It also makes it impossible to pretend Trump wasn’t trying to coerce the network.

AppleTV+ has decided to postpone Jessica Chastain in The Savant in the wake of events. I agree with Aramide Tinubu and also Jessica Chastain that this is a mistake, but it is the type of decision that has previously often been made in similar circumstances. The show will still be there for us in a few months.

Nate Silver notes that liberals who remember what it was like after 9/11 tended to be more wary about progressive cancel culture. Whereas now it seems like we have the opposite, realizing how bad things were and wanting to dish it out even worse. That only ends in one place.

As long as the primary platforms for free speech are mostly owned by companies with a wide array of business interests, upon which the government can exercise broad discretion, it is very difficult for them to push back too hard against attacks on speech, although some good news is that a large part of the public will still to some large extent turn against any platform seen to be caving. It is easy to see why a Disney or Paramount would fold, at least up to a point. Disney found out that not folding has its own dangers.

It is also easy to see why The New York Times didn’t fold.

Michael Schmidt: NEW: Trump just sued The New York Times for $15 billion over stories written by me, @peterbakernyt @russbuettner @susannecraig. The suit has no merit. It’s just “an attempt to stifle and discourage independent reporting. The New York Times will not be deterred by intimidation tactics. We will continue to pursue the facts without fear or favor and stand up for journalists’ First Amendment right to ask questions on behalf of the American people.”

Full NYT statement. “This lawsuit has no merit. It lacks any legitimate legal claims and instead is an attempt to stifle and discourage independent reporting. The New York Times will not be deterred by intimidation tactics. We will continue to pursue the facts without fear or favor and stand up for journalists’ First Amendment right to ask questions on behalf of the American people.”

Matthew Yglesias: One of the benefits of the New York Times being a company whose *only* business is journalism is that unlike Disney or Paramount or whatever they have no choice but to fight for the integrity of their news operation.

I am very confident Michael is correct that the lawsuit against the New York Times has no merit. I mean, you may have thought previous lawsuits had no merit, but this is a new level of not having merit. We're talking a complete and profound absence of even the fig leaf of potential merit, full common knowledge of absolutely no merit. He's literally suing them for things like endorsing Kamala Harris too loudly and saying so, himself, out loud, where we can hear. This really is profoundly not okay.

Looking back on that now that the time for a monthly roundup has come, I notice that we have largely moved on, and the pressure on this already feels like it is subsiding.

It would be nice if we stopped committing murders, by which I mean sinking ‘suspected’ drug ships, accused of non-capital offenses, without due process of law. I don’t want to hear ‘experts warn this raises serious international law questions’ when it’s clearly just straight up murder.

The Trump Administration released its new rule for prioritizing higher wage jobs for H1-B visas (good, and important if we still hit the cap). Except instead of looking at the number known as ‘dollars paid to the employee,’ also called salary, they are using the complete bullshit system called DOL ‘wage levels.’

Jeremy Neufeld: The new Trump H-1B rule just dropped!

It prioritizes DOL “Wage Levels,” not real wages. DOL thinks an experienced acupuncturist making $40k is a higher “Wage Level” than an early-career AI scientist making $280k.

That means more visas for outsourcers, fewer for real talent.

As in, don’t worry about whether a job produces anything of value, or anyone is willing to pay you a lot to do it, ask whether someone is ‘experienced’ in that job.

I have found zero people making any argument whatsoever, even an invalid one, in favor of using these ‘wage levels’ rather than salary.

I never want Robin Hanson to stop Robin Hansoning, and I would never want Tyler Cowen to stop Tyler Cowening, as embodied by his claim that we should not auction off all H1-B visas because this would have failed to attract the best upwardly mobile talent such as Sundar Pichai. This follows a long line of arguments of the form ‘do not allocate [X] by price because the most valuable but neglected talent, especially prospective travelers, would then not buy enough [X], and that is the most important thing’ where famously one [X] was traffic via congestion pricing.

There is a real objection in such cases, which is that externalities exist that can’t be properly priced in, and we are unwilling or unable to reasonably price such externalities, and thus pure allocation by price will fail to be a full first best solution.

It still beats the current strategy of allocation via lottery, or via willingness to wait in line, including for the purposes Tyler worries about, and is vastly better in the vast majority of allocation decisions. The current system already has a huge invisible graveyard of trips and talent and so on. Vivian Darkbloom points out that in this case Pichai is a terrible example and would definitely have made it in under the $100k proposed fee, whereas without the fee he has to survive a lottery draw.

I would bet that under a pure auction system (as in, you choose a fee that roughly clears the market), the amount of top talent secured goes way up, as there will be a huge correlation between talent and willingness to put up the $100k fee. If you want to additionally subsidize extraordinary people? Sure, if you can identify them, and also we have the O-1.

Perhaps this is the best way to make the simple case: Tariffs Mean You Pay More For Worse Products. I prefer paying less for better products.

It seems Trump took the government shutdown as a reason to fire a lot of people in the CDC, with the final total expected to be between 1,100 and 1,200 people?

Sam Stein: As the dust settles, it’s clear that Vought’s RIFs amount to a Friday night massacre at the CDC. Lots of confusion as to the total number gone. But several sources tell me top officials and many staff at the center for Chronic Disease Prevention and Health Promotion and the center for immunization and respiratory diseases are OUT. Am told the Ebola response team has been hit hard too.

Again, there is mass confusion. but it appears the government’s chief agencies responding to outbreaks and studying infectious diseases have been gutted. if you know more we have a secure tip line here. I’m also on signal asteinindc.09

To put a finer point on it. I’m told the ACTING DIRECTOR and CHIEF MEDICAL OFFICER for the National Center for Immunization and Respiratory Diseases are now gone.

Am told CDC’s HR department had been furloughed because of the government shut down. They were then, un-furloughed so that they could process the RIFs to fire their colleagues. Can confirm. Some CDC experts who were RIFed on Friday have already had their firings rescinded by the administration.

This is on top of the loss of 2,400 staff, or 18% of the agency, earlier in the year. About half the initial firings were rescinded this time around; it seems this government has a pattern of thinking it can fire a bunch of people and then say 'oops' on some of them later and it's no big deal, and in this case it seems they're blaming many of them on 'coding errors in their job classifications,' which shows the level of attention to detail going on. Matthew Herper called the situation 'chaos,' not normal, and unprecedented in his 20 years of reporting.

Trump seems to be framing this as retaliation for the shutdown because the CDC is a ‘Democratic’ program, and taking a very ‘look what you made me do’ attitude?

Others are pushing back with the theory that the CDC is bad and incompetent, actually, so this is good actually, a line I've seen both from MAGA people and also some others.

I have not been especially impressed with the CDC, shall we say, on Covid-19 related fronts or in other places I’ve gotten a close look. The problem is that presumably we can all agree that we need a well-staffed, highly functional and effective Centers for Disease Control in order to, ya know, track and control disease? Does ‘the Ebola response team has been hard hit’ sound like a wise move?

With AI potentially enabling biological threats this now more than ever is not a program you cut. It seems highly plausible that CDC wasn’t doing a great job, but in that case we should be replacing or reforming the agency. I don’t see any sign of doing that.

I continue to see a steady stream of nightmare stories coming out of the UK. I don’t consider this my beat, but I must note that things seem deeply, horribly wrong.

We see things like UK’s NHS talking about the supposed benefits of first-cousin marriage, almost on a weekly basis. And we get the kind of authoritarian ‘how has this person not been sacked and no one seems to care?’ statements such as this one:

Paul Graham: A spectacular example of Orwellian doublespeak from the UK Home Secretary: “Just because you have a freedom doesn’t mean you have to use it at every moment of every day.”

In fact the ability to do something whenever you want is practically the definition of a freedom.

It is sufficiently bad that ACX Grants are giving Sam Glover $60k to fight for UK free speech, you can DM him to volunteer.

When, one must ask, will the people rise up as one…

NewsWire: UK government outlaws free drink refills on hot chocolate, mocha and Coca-Cola.

…and say ‘but this time, you’ve gone too far’?

Sections I did not expect to have to write.

I am ashamed of every news article and comment on the subject that does not lead with, or at least put very high up, the obvious fact that Tylenol does not cause autism.

It's not only that the quality of evidence is godawful, or that the evidence actually points heavily in the other direction, which it does, with the correlations both going away under reasonable controls and also being very easy to explain if you think about common cause for five seconds. It's that our prior on this should be extremely low, and even if there were somehow a non-zero effect size it would be greatly eclipsed by the risks of not taking Tylenol when you need it, given the lack of alternatives available.

The White House is also citing uncontrolled rises in autism rates over time that are very obviously caused mostly by expanded diagnostic criteria and active pushes to diagnose more high-functioning individuals, including calls for ‘diagnostic equality.’ The vast majority of people I know that are considered on the spectrum would have been undiagnosed back when I was a child.

To be fair, there is a possible mechanism that isn’t completely crazy, and this is less terrible than if they had gone after vaccines. So the whole thing isn’t quite maximally bonkers, but again the whole thing is bonkers, deeply irresponsible and deeply stupid.

Steven Pinker: Autism expert (and friend & graduate school classmate) Helen Tager-Flusberg: “I was shocked and appalled to hear the extreme statements without evidence in support of what any of the presenters said. … the most unhinged discussion of autism that I have ever listened to. It was clear that none of the presenters knew much about autism … and nothing about the existing science.”

Key quote:

“Singer: The new recommendations are not based on the science. The largest study in the systematic review that the administration cited found no association between prenatal Tylenol use and autism. The smaller studies that did indicate an association were of different sizes, did different analyses, used different doses and even measured autism in different ways.

The key question is: Why are these pregnant women taking Tylenol in the first place? We know that fever during pregnancy is a risk factor for autism. So if they were taking Tylenol, was it the fever that caused the autism or the Tylenol? The smaller studies did not control sufficiently for this.”

Also there is this:

Jerome Adams MD: The White House, HHS, and all of the media have (completely) buried the lede. Every news headline should actually read:

Despite bringing the full resources of the U.S. government to bear, RFK fails to find a connection between vaccines and autism!

So can we put that to bed?🙏🏽

This is all actually a really big deal. Giving pregnant women access to zero painkillers and no way to bring down fevers is dangerous and extremely painful, and would lead to obvious reactions if we actually act this stupidly and cruelly:

Elizabeth Bennett: Just popping in to say that if we tell pregnant women there are no OTC pain relievers they can take for any reason, good luck getting that birth rate up 😬

We could also leave this here:

Rebecca Robbins and Azeen Ghorayshi (NYT): The dean of the Harvard T.H. Chan School of Public Health, who consulted with top Trump health officials ahead of Monday’s warning about Tylenol and autism, was paid at least $150,000 to serve as an expert witness on behalf of plaintiffs in lawsuits against the maker of Tylenol.

In the decision to dismiss the lawsuits, the judge, Denise Cote, agreed with lawyers for the defendants that Dr. Baccarelli had “cherry-picked and misrepresented study results” in his testimony and was therefore “unreliable.”

Jay Wall III is the latest to point out the Jones Act is a monument to inefficiency that costs American families and businesses a fortune while delivering almost none of (I would say the opposite of) its promised benefits. Some highlights:

American-built coastal and feeder ships cost between $190 million and $250 million, while a similar vessel built in a foreign shipyard runs about $30 million.

And what’s the result? One Chinese shipbuilder alone constructed more commercial vessels by tonnage in 2024 than the entire U.S. industry has built since the end of World War II.

The U.S. share of the global commercial shipbuilding market has fallen to a pathetic 0.1%.

Here’s another head-scratcher: The Jones Act is actually bad for the environment.

This is not a detailed case, nor does it lay out the full costs involved, but yes.

What would have happened with 40% less NIH funding over the last 40 years? Given recent events and a proposed 40% cut in the NIH budget, that is a great question, but it is deeply tricky to answer.

As in, while the study described here was worth doing, it doesn’t answer the question.

(To be clear, I strongly believe we should not be cutting NIH’s budget at this time.)

Matt Esche: The study connects NIH grants with the papers they produced and the patents that build on the funded work, whether directly or via citation.

It’s difficult to trace out exactly what an alternate world would look like, but simulations using NIH review scores and outcomes linkages reveal what it could mean.

The alternate world with a 40% smaller NIH could mean a world with 65 fewer FDA-approved drugs, 11% of those approved between 2000 and 2023.

Even under the most strict linkage — a drug patent that directly cites NIH funding — 14 of the 40 FDA-approved drugs with these patents are at risk when cutting the bottom 40% of funding.

And, medicines under threat with a 40% smaller NIH are on average more highly valued, whether measured by the FDA’s priority review process or stock market reactions.

Funding is helpful, but this does not tell us the counterfactual world where the funding was cut, even if we fully trust that we can get the counterfactual funding decisions via measuring prioritization rankings. If you hadn't received the federal funding, would you have gotten other funding instead, or not? If you hadn't been able to fund your project, would the project have happened later anyway? Here or overseas?

Would lack of federal funding have caused others to step up, or collapsed the ecosystem? What happens to the displaced talent, and does talent not enter the field at all? Would a lot less time have been wasted seeking grants?

Parallel and inevitable discovery are common, and so are drugs that could have been discovered long ago but happened not to be. It goes both ways. Would lack of progress and experience compound our losses, or would we have more low-hanging fruit?

Work hard or work smart? Jeremy Giffon suggests looking to see who prefers which, whether they maximize effort or elegance, with ‘work hard’ people looking to outwork you and ‘work smart’ people looking for ‘mate in one’ tactics. I reject the dichotomy. If you ‘work hard’ because you find value in working hard, and don’t look for the better way before you start working hard, you are confusing costs and benefits. Looking for the right way to optimize results as a function of work, including the planning work, is hard work.

I’d also note that contra Giffon’s example, Trump in defeating Cruz for the 2016 nomination very much did not ‘mate in one’ but instead worked very hard, in his way, for quite a while, and pulled off a very complex set of moves, even if he often did not consciously know what he was doing. And that game, like Cruz’s, started decades before the campaign. And in his nightclub example, it’s not obvious which solution (work hard at the gate or work hard to skip the gate) is which.

A standard DARVO strategy is:

  1. Gaslight people. Do and say false totally insane awful things. In a calm manner.

  2. Others react strongly, and in various ways point out what happened.

  3. Invoke the mellow heuristic, that whoever is more emotional is usually wrong.

The origin of The Mellow Heuristic seems to be, highly appropriately, Bryan Caplan. Bryan is an expert at staying mellow and calm while saying things that are anywhere from contrarian and correct all the way to patently insane.

The original justification is that 'emotion clouds judgment,' which is true, but emotion can also be highly useful, or highly appropriate or typical given the facts and circumstances. There are cases where that logic applies, but they seem rare, and more often the evidence and causation largely run the other way. As in, sometimes the emotion isn't causing poor thinking; the emotion in context is instead evidence of poor thinking, if there's no other explanation for it. But if the emotion is justified by circumstances, I think it provides little, no, or even negative evidence.

Stefan Schubert (quoting the original Mellow Heuristic post): I think that people who disagree with Mechanize should engage with them with logical arguments, not sarcasm and mockery.

Oliver Habryka: The Mellow Heuristic seems pretty terrible to me. My guess is in most historical disputes I’ve been involved in it would get the wrong answer. Pretty close to as uninformative as it gets.

[is challenged for examples, gives examples, is challenged that the examples are because rationalists perhaps pre-apply too much mellow heuristic, creating selection bias]

I think the selection bias here is merely the result of needing reference points that can be pointed to. I could tell you “that time when I adjudicated a work dispute yesterday” but that’s of course useless.

For lower stakes, my sense is the mellow heuristic is actively anti-correlated. If I have one programmer who has very strong feelings about a topic, and one who doesn’t, the one who has very strong feelings is a decent amount more likely to be right. There is a reason for their feelings!

I think, in this particular case (IYKYK), the correct answer is to engage with the actual arguments, but also it seems right to use sarcasm and mockery, because they deserve it and because it is funny, and because it is often the easiest way to point out the underlying illogic of an argument.

Where the Mellow Heuristic is useful is when the emotion is disproportionate to the situation even if they are telling the truth, or it is clearly so strong it is clouding judgment in ways that hurt their chance of being right, and most importantly when it is being used as itself an argument, an appeal to emotion, in a way that reveals a lack of a more logical argument, in response to something deserving of a counterargument.

It is least useful in cases like this one where, as I put it in the weekly, that’s bait.

Although what’s even funnier is, I think if you properly apply the Mellow Heuristic and similar questions to the Mechanize case, it does not go well for Mechanize, such as here where they respond to Bernie Sanders saying that Mechanize aims to ‘make it easier to pay workers less’ by claiming that they pay their employees vastly more than Bernie Sanders pays his staffers, which (1) is a highly emotional-style claim, (2) which is clearly designed to rile other people up, and (3) is completely irrelevant given the difference in role types.

Again, this is Emergent Misalignment, saying the awful thing because it is awful. It’s play acting at being a villain, because you woke up and realized you didn’t die the hero.

Credit scores, I say with love, are one of our most valuable inventions.

The math behind credit scores is, and this too I say with love, deeply stupid.

Rachel, Spirited Sparrow: My husband’s credit score was 841. He made a final payment on our vehicle and it immediately dropped to 795. Never missed a payment. But they want you to carry debt.

EigenGender: It’s kinda funny that some analyst probably ran a bad linear regression decades ago and it fuels constant conspiracy theories on here.

Jai: Currently making payments predicts paying back further loans. Not currently doing that weakly predicts disengagement. I think the math is sound and fits reality – they do want you to carry debt to prove that you’re still the kind of person who pays back loans on time.

There's no mystery about what is going on here. Having long-duration open credit accounts that you've been consistently paying absolutely predicts future repayment. Credit scores also take into account various other correlates, to the extent they are legal to take into account, and exclude the ones that aren't.

It's all very Level 1 thinking, squarely in the territory of Goodhart's Law and riddled with obvious mistakes. The measures would be strictly better if they looked backwards in a sensible fashion, included the length of paid-off loans in the average age of loans, otherwise took into account what evidence there actually is that you'll pay your debts on time, and ideally did some Level 2 (or even Level 3) thinking, but those involved aren't that clever, so they don't.
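To make that concrete, here is a minimal sketch of the kind of adjustment being suggested; the account data and the 'average age' framing are purely illustrative assumptions, not how any bureau actually computes anything.

```python
# A toy illustration of the proposed fix: keep paid-off loans in the
# "average age of accounts" input instead of dropping them when they close.
# The account data here is made up.

from datetime import date

TODAY = date(2025, 10, 1)

accounts = [
    {"name": "credit card", "opened": date(2020, 1, 1), "closed": None},
    {"name": "car loan", "opened": date(2012, 6, 1), "closed": date(2025, 9, 1)},  # just paid off
]

def average_age_years(accounts, include_closed):
    ages = [
        (TODAY - a["opened"]).days / 365.25
        for a in accounts
        if include_closed or a["closed"] is None
    ]
    return sum(ages) / len(ages)

print(f"Open accounts only:         {average_age_years(accounts, include_closed=False):.1f} years")
print(f"Counting the paid-off loan: {average_age_years(accounts, include_closed=True):.1f} years")
# Dropping the paid-off loan shortens the apparent credit history even though
# nothing about the borrower's actual repayment record got worse.
```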

Perhaps some expert will chime in and say no, we ran the backtests and that doesn’t actually help, to which I reply that is a Skill Issue, you did it wrong, try again.

The alternative hypothesis is that ‘they want you to carry debt’ is literal.

The resulting scores are still highly useful, and still mostly 'get it right,' partly because most of the time the obvious answer is right and partly because if you have the discipline to work to raise your credit score, that is strong evidence of good credit.

TIL you can type docs.new or sheets.new into your browser and get a new doc or sheet.

I like the fun fact I learned that Italian has two words for regret, ‘rimorsi’ for something you did and wish you didn’t, and ‘rimpianti’ for something you didn’t do and wish you did. Emmett Shear points out there is no such distinction in machine learning algorithms that work on ‘regret,’ but the distinction is very important for actual outputs and decisions, and for many questions involving alignment.

A report from the Abundance DC conference.

Influencer posts a video asking for help cleaning an ancient temple, gets 60 people to help. There’s an obvious win-win opportunity here for a lot of similar content.

A searchable collection of all Slate Star Codex book reviews.

Home production work by women has gone way down over the last century.

Welcome to yet another graph where things get way better until the 1970s, at which point we stop seeing progress and everything stalls out. Great stagnation strikes again.

This is contrasted with much higher investment by both parents in child care, as demands there move to absurd levels but we haven’t had technology to lighten the load. We can get things cleaner faster, but not provide more child care faster, unless you count giving the children screens.

Matthew Lewis praises the transformations of the last decade that have happened in New York City, finding it now the ultimate city that we need more of, where everything is right there for you and highly walkable and everyone is friendly and gets along and no one bats an eye at profoundly different cultural happenings. I agree. Yes, we should turn as many places as possible into Manhattans, which would then make them all much better than Manhattan because housing costs would go down.

David Perell in praise of New York. Mostly this also seems right, especially the secondary note about New York having multiple core industries (and I’d add cultures and worlds) and this making things far less transactional than San Francisco despite New York having the finance world, because the status hierarchies are parallel and also you don’t constantly have business interests with everyone.

The only place that felt wrong to me is he says people in New York are flaky due to an excess of options, but I haven’t experienced that, and find people in San Francisco far more flaky even when they are literally the same people.

I agree that transportation to and from the airports is one of the biggest obvious failures, although it’s ultimately a minor cost compared to rent even if you use taxis end to end. I am stubborn and mostly take the subway to and from JFK despite everything (and the train to EWR as well), but it’s certainly not fast and if I ever was using LGA that wouldn’t work.

I also agree that friendships can be difficult and won’t become close by accident. The city is too large, you will meet lots of people but you have to actively make real friendships happen after that initial step.

As he notes, the Duane Reades and Best Buys of New York have locked up remarkably many goods, and this is super annoying on occasion. In some ways I agree it reflects loss of social trust, but mostly I think it reflects a narrow failure to enforce the shoplifting laws in an otherwise high-trust society. As in, I feel exceedingly safe and like I can relax, but It Is Known that if you shoplift you basically get away with it, so in certain spots they have to play defense to avoid being the juiciest target.

David Perell: The ideal distance to live away from your best friends is Walkie-Talkie distance: close enough where you can easily walk to each other’s place but far enough away so everyone has some space. And if you get enough friends in the neighborhood, it starts to feel like college again.

Michael Miraflor: This is what NYC feels like when you’re young and just out of school and working at an office where people generally live a short train ride away. The city is small, your friends are close, and the city is a museum, playground, and source of inspiration wrapped into one.

The office part matters imo. You can meet your best friends or your partner at your first office job. To be young and working hard in the trenches together and also celebrating and making real friendships IRL is an important part of it all – professional camaraderie, personal development, how to function in the world, etc, and a lot of it has been wrecked a bit by WFH.

Alas, most of us are not young anymore, but yes everything being at your fingertips is the big secret weapon, both walking and via subways. You do still have to put in the effort if you want the friendships to be real.

Sasha praises New York City by contrast with the Bay Area, which he refers to as cursed because everything must have purpose and beauty and grace are frowned upon. Really this is 5% praising New York and 95% absolutely unloading on San Francisco, or rather on the San Francisco that reads this blog and is full of technology. The post is a joy to read purely for the experience of reading, even if you disagree with all of it.

Sasha Chapin: In the Bay, beauty (personal and otherwise) is looked down on and the famous gender imbalance has chilling effects. Is there a less sexual city than this? Perhaps Salt Lake, but I’d imagine it’s close. My gorgeous friend M is self-conscious about wearing pretty dresses, which is insane anywhere else, but reasonable here: hotness is a quality people aren’t sure what to do with.

Recently there was a themed Gender Ratio party where beautiful young women dressed glamorously, at least one for every man. In other cities this would be referred to as a party.

Sasha Chapin (from the comments): I lived in LA for a couple of years and deeply love it. LA is sincere pretend, the Bay is fake real.

Are concerts and sporting events underpriced?

More Perfect Union: The CEO of Live Nation-Ticketmaster says that concert tickets are “underpriced” and have been “for a long time.”

He also believes there’s plenty of room to raise prices.

Ashley Nowicki: I don’t think a single fan of sports, music, or live entertainment in general would say tickets are underpriced.

Arthur B: People see the cost they’re paying but do not intuitively associate missing out on sold-out shows with tickets being underpriced.

With that said super fans do help market the acts and it makes sense to subsidize their tickets.

A somewhat natural solution could be to increase prices across the board but keep tickets affordable for fans with reward / fidelity programs.

You want all seats filled, so events that do not sell out are usually overpriced even if the selected prices maximize short term revenue, although in some cases you’re stuck in a larger venue than you can plausibly sell out and have no interest in investing in the future, for example perhaps you are the Miami Marlins.

If you sell out, then the default is that you underpriced tickets if and only if resale prices are much higher than original ticket prices. If they’re similar or lower, great job.

There are two catches.

The first catch is that the scalper price is ‘not your fault,’ whereas the venue price is considered your fault. So you gain goodwill by charging less, even though this is dumb. This could be more valuable than the extra revenue and easier access to tickets that you get the other way? Maybe.

The other catch is that you often have preferences over distribution of tickets, and who attends your sold out show, that do not entirely match willingness to pay.

Another strong piece of evidence that prices are too low is that often people will spend lots of time and money to get to a concert, vastly in excess of the ticket price.

I’ve been to three concerts recently, and have realized I don’t do this enough, but that selection and preparation are crucial. You don’t want to miss the good opportunities, or pass them up over price without good reason, but the mediocre opportunities are meh.

The first was Weird Al Yankovic at Madison Square Garden, very much a ‘play the hits’ show and every inch a Weird Al Yankovic show, including S-tier use of the multimedia screens throughout. It was great fun and I was very happy I got a chance to see him, but at the same time I couldn’t actually see him in a meaningful way and I was mostly watching those screens, and the opening act of Puddles Pity Party was appropriate and did some interesting things but wasn’t ultimately my cup of tea.

The second was The Who, in their The Song Is Over tour, also at Madison Square Garden, with Feist as the opening act. A big problem was that with this amount of rocking out I was unable to hear the lyrics of either band well enough to understand them if I didn’t already know what they were. The majority of the time this wasn’t a problem for The Who, but for Feist it was extremely frustrating as I only knew the one song, so while they seemed great effectively everything else didn’t have lyrics. And you could feel every time they talked how much these guys appreciated and loved their jobs and their fans, and that they were pushing to do this until they physically couldn’t anymore.

The third was Garbage at The Brooklyn Paramount, which was standing room general admission, where the doors open at 7pm, opening act Starcrawler went on at 8pm, and Garbage only went on at 9pm, but not knowing this we showed up just before 7pm. Which despite a ton of waiting was ultimately a great decision, because by making a beeline to the stage, we got to be only about five effective rows deep. And that made a night and day difference. Starcrawler was effectively a (very strong) dancing performance by the lead singer since I couldn’t make out any lyrics at all, but we were close enough to appreciate it.

And then we got to see Garbage up close and that was fantastic, including being able to fully take in the joy on Shirley’s face as she turned the microphone towards the crowd. Find something you love as much as she loves the crowd singing her greatest hits, which resonated a lot with me based on how I feel when I see people citing my classic posts, except of course her version is way cooler. And even the new-to-me more recent stuff was great.

My overall conclusion is that yes, live music is Worth It, even if you’re somewhat old and busted and it requires a babysitter and so on, if and only if you do it right. And what doing it right means is (not that any of this is new or special, but I want to remember for the future and remind others):

  1. Show up for artists you resonate with.

  2. Do your homework. Know most of the songs cold. Ideally including the warmup.

  3. Pay up, in time or money, to get actually good seats if at all possible, and prioritize smaller venues to help do this.

  4. Have a plan for the downtime.

My plan is, of course, to set up an AI to periodically check for opportunities.

A chart of which movies men and women rate relatively highly on IMDB:

The patterns here are rather obvious. In addition to measuring the actual man versus woman gap, there is a clear contamination of the data based on women favoring movies based (I really, really hope!) on the preferences of their kids. If women actually think Despicable Me is the #145 best movie here, or The Hunger Games: Catching Fire is #83, I’m sorry ladies, you’re crazy, and honestly with Catching Fire no one involved has an excuse either way. And aside from Wonder Woman where this opinion is simply wrong, that movie was not good, the other half of that first list I find a highly acceptable expression of different preferences.

The male list definitely seems very male. It especially includes a bunch of long and slow older movies that many really appreciate, which typically is not a thing I like, such as my experiences with Seven Samurai (I get people love it but man it drags) and Lawrence of Arabia, where I couldn’t even.

Scott Sumner offers his latest set of movie reviews. As usual, his evaluations are almost always correct in an abstract Quality sense, but that’s not that big a portion of what I care about. This time around I have seen at most two of them.

I bought into A Big Bold Beautiful Journey (4.5/5 stars) and he didn’t, and I agree that the later scenes rely on somewhat unearned emotion, so I get his giving it only a 2.9; that seems like a reasonable ‘didn’t work for me’ rating here. The other one I think I saw was The Last Days of Disco, which he gives a 3.6, but I don’t remember it.

Robin Hanson reviews One Battle After Another and correctly identifies what you see if you take the movie at face value. You can of course respond ‘this is Paul Thomas Anderson and based on Pynchon’s Vineland and obviously not intended that way,’ and one can debate how relevant that fact is here, as well.

I did not like One Battle After Another, reluctantly giving it 3/5. The critical reaction being this positive, and seeing rave after rave, made me angry. I stand by that on reflection. The Quality level isn’t bad, but I think everyone thinks it is way higher Quality than it actually is, and most importantly even if you can look past the other stuff I hated you have to buy into the idea that Bob (Leo’s character) is sympathetic despite being among other things a completely unrepentant terrorist bomber, or the movie simply doesn’t work. I couldn’t do it.

Critics think we’re both wrong, and gave it a Metacritic 95, but audiences aren’t much biting, only giving it $22.4 million on opening weekend on a $200 million budget, so it almost has to win Best Picture to hope to break even. Film critic Jason Bailey says this is fine, they did it for the prestige and to mend relations with talent, they shouldn’t have to make money. Nice work if you can get it?

I do admit that the whole thing showed strong faith in and willingness to spend on talent for a three hour, R-rated ‘explosive political thriller’ from Paul Thomas Anderson, whose movies are consistently liked but whose box office record is spotty. That the critics think the movie is so political, and this makes them like it even more, helps explain why I like it less.

As for the rest of the movie reviews, as always you can find them on Letterboxd, where I make a point of reviewing everything I see no matter what, and I’ll have an end of year post. The trend of me being almost uncorrelated with the critics this year continues.

This month’s game is Hades 2. I’ve been enjoying it. It’s definitely ‘more Hades’ in very much a ‘meets expectations’ kind of way, so play it if and only if you played the first Hades, took on at least some Heat, and still want to go a second round.

Perfection.

It seems like there should be a good free or cheap version of ‘play against a GTO heads up poker bot and get live +/- EV feedback.’ I imagine the ultimate version of this is where you don’t output a particular action, you output a probabilistic action – you say ‘50% call, 25% fold, 25% raise pot’ or what not, and it then compares this to GTO, after which it selects a line at random and you continue.
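As a minimal sketch of what that feedback loop could look like, assuming you already have per-action EVs from a solver (the numbers below are made up and no real trainer is being described), you score the user’s probabilistic action against the best available action, then sample a line to continue the hand:

```python
import random

# Hypothetical solver output for one spot: each action's EV (in big blinds,
# against the bot's strategy) and the GTO mixing frequencies.
action_ev = {"fold": 0.00, "call": 0.45, "raise_pot": 0.45}
gto_mix   = {"fold": 0.25, "call": 0.50, "raise_pot": 0.25}

# The user submits a probabilistic action instead of a single action.
user_mix  = {"fold": 0.50, "call": 0.25, "raise_pot": 0.25}

user_ev = sum(p * action_ev[a] for a, p in user_mix.items())
best_ev = max(action_ev.values())
print(f"EV of your mix: {user_ev:.3f} bb; EV lost vs. best action: {best_ev - user_ev:.3f} bb")

# The trainer then samples a line at random from the user's mix and play continues.
actions, probs = zip(*user_mix.items())
chosen = random.choices(actions, weights=probs, k=1)[0]
print("Continuing the hand with:", chosen)
```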

I understand why Ben is apologizing here, but he was right the first time.

Ben Landau-Taylor: There’s something very poetic about the biggest technology breakthrough of the last decade being possible only because of a quarter century of investment into higher resolution graphics for computer games.

When I was a teenager I made fun of gamers who cared about graphics more than gameplay. Today I would like to apologize for my sins against technological progress. I didn’t understand and I’m sorry.

Graphics are cool, but if you care more about graphics than gameplay you deserve for us to make fun of you, and focus on graphics has been horrible for gaming. Yes, it turns out that pushing GPUs harder led to LLMs, and you can view that outcome as more important (for good and bad) than the games, but that’s not why they did it. They had bad taste, often still have bad taste, and should be mocked for it.

The lead writer of Clair Obscur: Expedition 33 had never played a video game. She had a cowriter (she kind of would have had to), but this is still super impressive. It definitely worked out, and the script was very strong, although it doesn’t branch much.

Kelsey Piper writes a plea to let the robots have this one, as in self-driving cars, in order to save over 30,000 American lives a year. Self-driving cars are so amazingly great that I consider preventing most car accidents a secondary benefit versus the lifestyle and mobility benefits. And yet support is perilous:

What’s craziest is that those over 65 want to ban self-driving cars. They stand to benefit the most, because they will soon be unable to drive. Self-driving equals freedom. Or perhaps what’s craziest is that people’s main objection is safety and trust; here is what people say to justify a ban, and it isn’t jobs:

These concerns clearly have nothing to do with the actual safety data, which presumably most people don’t know.

So I largely take back not wanting to primarily make the case in terms of safety, because people genuinely don’t understand that Waymos are vastly safer than human drivers. If we make the safety case, then people’s bigger objections go away. Except the whole ‘I hate AI’ objection, I guess.

Waymo is testing at SFO. Woo-hoo! Even if they can’t go to the East Bay directly yet, I would totally have them drive me to the last BART station and go from there.

Exposure makes the public like Waymo, with two thirds of San Francisco now approving, a major shift from two years ago. This is despite only 30% realizing that Waymos are safer than human drivers.

How much safer are they? A lot. We got 96 million new miles of Waymo safety data.

Waymos are involved in vastly fewer serious crashes and injuries than human-driven cars, as in 79% fewer airbag-deployment crashes and 91% fewer serious injuries over what is now a very large sample size, with very similar numbers for harm to pedestrians and bike riders.

Very few of Waymo’s most serious crashes were Waymo’s fault. In a majority of the major accidents the Waymo was not even moving. We can determine this because Waymos are full of cameras, and Kai Williams did exactly that. He could not find a single accident that was the fault of the self-driving itself, and 37 out of 41 were mostly or completely the fault of other drivers. The most serious accident where a Waymo was actually at fault involved the front left wheel literally detaching.

This suggests that if we replaced all cars with Waymos, we would get vastly more than a 79% reduction in crashes and injuries. We would get more like a 95%+ reduction.
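A rough back-of-the-envelope version of that claim, treating the ~79% figure and the 37-of-41 other-driver-fault finding as the only inputs (so this is illustrative, not a real projection):

```python
# Back-of-the-envelope: crash rate in an all-Waymo world, normalized so that
# today's human-driver rate = 1.0. Inputs are illustrative approximations.
waymo_rate = 1.0 - 0.79          # ~79% fewer serious crashes per mile in mixed traffic
other_driver_share = 37 / 41     # share of remaining serious crashes caused by human drivers

all_waymo_rate = waymo_rate * (1 - other_driver_share)
print(f"Waymo in mixed traffic: {waymo_rate:.2f}x the human rate")
print(f"All-Waymo world (human-caused crashes removed): {all_waymo_rate:.3f}x")
print(f"Implied reduction: {(1 - all_waymo_rate) * 100:.0f}%")
# Roughly 0.02x, i.e. about a 98% reduction -- comfortably past the 95%+ claim.
```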

As I said last month, I prefer not to rely on safety as the central argument, but the safety case is overwhelming. However, it is 2025, so you can just say things, and often people do. For example:

Vital City NYC: This morning, the New York Editorial Board interviewed Bill de Blasio. In light of his history with Uber, the group asked what the city’s posture should be toward Waymo. He said: “The driverless cars are a public safety danger period.”

The public safety danger is that there is the danger they might create public safety?

Joe Weisenthal (who to be clear loves Waymo) worries we aren’t ready for cars to turn from a symbol of freedom to another thing run by Big Cloud. I’m not worried. It will take a while before there is any danger to ‘if you like your car you can keep your car’ or your ability to buy a new one, if you want that. For many of us, the symbol of practical freedom, and also the actual freedom, is the ability to call upon the cloud and get moved around at will.

This is distinct from the ‘what I can do if the authorities or others with power are Out To Get Me’ type of freedom. Yes I do think there will be some loss there and impact on the psyche, but almost no one wants to pay the cost to keep that. A lot of things are going to change because of AI, whether or not we are ready, and the old thing here was not going to get preserved for so many reasons.

Mothers Against Drunk Driving is strongly in favor of autonomous vehicles, as one would expect given their name.

How did Bill Belichick’s coaching stint at UNC go so perfectly terribly? Couldn’t have happened to a nicer guy, or rather if it did we wouldn’t all be so happy about it.

Ollie Connolly: This is as embarrassing as it gets for Belichick. But it’s also a damning (and funny) indictment of Mike Lombardi. It’s his roster — and UNC looks like a bad FCS school compared to any FBS school.

The plan to hand things off to Steve Belichick after two years is not going well.

They turned over the entire roster, and it seems they chose poorly, often fighting teams in second tier conferences for players. Whoops. I’m sad for the kids that got filtered into the wrong tier, but they did choose to play for Bill Belichick, UNC is otherwise a nice school and they are getting paid. The players will be fine.

Kicker quality keeps increasing and field goal range keeps expanding in the NFL. This has a number of downstream effects. If even modest field position gives you 3 points, but you don’t that often get the full 7, this is going to create a lot of risk aversion.

Derek Thompson: interesting that strategic optimization has pushed basketball and football in opposite directions: long shots vs. short passes

NFL in 2025 has:

– highest QB completion % ever

– lowest INT% ever

– fewest yards/catch ever

– and, sort of a separate thing, but: most long field goal attempts ever

fewest punts per game ever, too!

Nate Silver: The field goals thing is sort of related to it. If you’re anywhere past the opponents’ 45-yard line or so, it’s riskier to gamble with downfield passes that may result in an INT because the expected value of the possession is higher when you’re almost assured of a FG.

And really, these considerations apply even before then. With teams also starting out with better field position with the new kickoff rules, they’re often in a position where two first downs = field goal range. Don’t love it, don’t hate it, but it’s definitely different.
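A toy expected-points illustration of the incentive Silver describes, with invented probabilities (real win-probability models are far more granular):

```python
# Toy expected-points model: the same downfield gamble that is worth it at
# midfield stops being worth it once a field goal is nearly assured, because
# an interception now throws away ~2.7 expected points. Numbers are invented.

def ev_after_pass(base_ev, ev_if_complete, p_complete=0.35, p_int=0.10):
    # Incomplete pass leaves the possession value roughly unchanged.
    return p_complete * ev_if_complete + p_int * 0.0 + (1 - p_complete - p_int) * base_ev

in_range = ev_after_pass(base_ev=2.7, ev_if_complete=3.4)   # already in FG range
midfield = ev_after_pass(base_ev=1.2, ev_if_complete=2.5)   # not yet in range

print(f"In FG range: keep it safe = 2.70, throw deep = {in_range:.2f}")
print(f"At midfield: keep it safe = 1.20, throw deep = {midfield:.2f}")
# The deep shot gains EV at midfield but loses EV once you're already in range,
# which is exactly the risk aversion the new kicking environment encourages.
```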

I primarily watch college football these days, but I’m pretty sure that in the NFL this has gone too far. You shouldn’t spend this big a percentage of the time in confident field goal range, as it reduces the number of interesting decisions and distinctions between situations, and causes too much risk aversion. The NFL over the years assembled a bunch of moving parts into something that seemed ‘natural and dynamic’ in various ways, and it feels like various changes are moving it to feel more arbitrary and flow less well and also reduce variety.

What should we do about it?

I would start with changing kickoffs back to a default or average of about the 20 yard line, and if you don’t know how to do them safely without them looking bizarre and playing badly, why not just get rid of them and start the ball on the 20, or the option to start on your own 20 with a 4th and [X], for whatever [X] makes sense (maybe only if you’re behind in the second half?), as an onside kick alternative? You don’t actually need a kickoff. Or you could even just start with 4th and 15 all the time from e.g. your own 40, you’re not allowed to attempt a FG unless you make at least one first down, and by default you just punt, since punts don’t require this whole distinct logic and seem safe enough.

Then we need to do something about the kickers. I’m sorry, but it’s bad for the game if 50+ yard field goals are reliable. If we can’t think of anything else, narrow and raise the uprights or even move them further back until it’s hard enough again (and adjust the extra point yard line to the desired difficulty level).

Americans increasingly see legal sports betting as a bad thing for society and sports. I share Josh Barro’s expectation that this will become a campaign issue, if AI doesn’t overwhelm all other issues. As he notes, the Federal government can still ban it, if they’re willing to pull the trigger in full. I don’t think a full ban is the first best solution here, but if the choice is a ban or the status quo of DraftKings and FanDuel, the ban is an upgrade. If the future ends up being prediction markets with lots of liquidity at great prices and no discrimination against winners? That’s way better. I have a dream that we kill DraftKings and FanDuel and instead we have Polymarket and Kalshi battle it out with Pinnacle.

CFAR is running new workshops on rationality, November 5-9 in California and January 21-25 near Austin, Texas.



new-apple-m5-is-the-centerpiece-of-an-updated-14-inch-macbook-pro

New Apple M5 is the centerpiece of an updated 14-inch MacBook Pro

Apple often releases a smaller second wave of new products in October after the dust settles from its September iPhone announcement, and this year that wave revolves around its brand-new M5 chip. The first Mac to get the new processor will be the new 14-inch MacBook Pro, which the company announced today on its press site alongside a new M5 iPad Pro and an updated version of the Vision Pro headset.

But unlike the last couple MacBook Pro refreshes, Apple isn’t ready with Pro and Max versions of the M5 for higher-end 14-inch MacBook Pros and 16-inch MacBook Pros. Those models will continue to use the M4 Pro and M4 Max for now, and we probably shouldn’t expect an update for them until sometime next year.

Aside from the M5, the 14-inch M5 MacBook Pro has essentially identical specs to the outgoing M4 version. It has a notched 14-inch screen with ProMotion support and a 3024×1964 resolution, three USB-C/Thunderbolt 4 ports, an HDMI port, an SD card slot, and a 12 MP Center Stage webcam. It still weighs 3.4 pounds, and Apple still estimates the battery should last for “up to 16 hours” of wireless web browsing and up to 24 hours of video streaming. The main internal difference is an option for a 4TB storage upgrade, which will run you $1,200 if you’re upgrading from the base 512GB SSD.


with-considerably-less-fanfare,-apple-releases-a-second-generation-vision-pro

With considerably less fanfare, Apple releases a second-generation Vision Pro

Apple’s announcement of the Vision Pro headset in 2023 was pretty hyperbolic about the device’s potential, even by Apple’s standards. CEO Tim Cook called it “the beginning of a new era for computing,” placing the Vision Pro in the same industry-shifting echelon as the Mac and the iPhone.

The Vision Pro could still eventually lead to a product that ushers in a new age of “spatial computing.” But it does seem like Apple is a bit less optimistic about the headset’s current form—at least, that’s one possible way to read the fact that the second-generation Vision Pro is being announced via press release, rather than as the centerpiece of a product event.

The new Vision Pro is available for the same $3,499 as the first model, which will likely continue to limit the headset’s appeal outside of a die-hard community of early adopters and curious developers. It’s available for pre-order today and ships on October 22.

The updated Vision Pro is a low-risk, play-it-safe upgrade that updates the device’s processor without changing much else about its design or how the product is positioned. It’s essentially the same device as before, but with the M2 chip switched out for a brand-new M5—a chip that comes with a faster CPU and GPU, 32GB of RAM, and improved image signal processors and video encoding hardware that will doubtlessly refine and improve the experience of using the headset.


nvidia-sells-tiny-new-computer-that-puts-big-ai-on-your-desktop

Nvidia sells tiny new computer that puts big AI on your desktop

On the software side, the Spark is an ARM-based system that runs Nvidia’s DGX OS, an Ubuntu Linux-based operating system built specifically for GPU processing. It comes with Nvidia’s AI software stack preinstalled, including CUDA libraries and the company’s NIM microservices.

Prices for the DGX Spark start at US $3,999. That may seem like a lot, but given the cost of high-end GPUs with ample video RAM like the RTX Pro 6000 (about $9,000) or AI server GPUs (like $25,000 for a base-level H100), the DGX Spark may represent a far less expensive option overall, though it’s not nearly as powerful.

In fact, according to The Register, the GPU computing performance of the GB10 chip is roughly equivalent to an RTX 5070. However, the 5070 has only 12GB of video memory, which limits the size of AI models that can be run on such a system. With 128GB of unified memory, the DGX Spark can run far larger models, albeit at a slower speed than, say, an RTX 5090 (which ships with 32GB of VRAM). For example, to run the 120 billion-parameter larger version of OpenAI’s recent gpt-oss language model, you’d need about 80GB of memory, which is far more than you can get in a consumer GPU.
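A rough sketch of the memory arithmetic behind that claim, counting weights only and ignoring KV cache, activations, and framework overhead (so ballpark figures, not a deployment guide):

```python
# Ballpark memory needed just to hold model weights, ignoring KV cache,
# activations, and framework overhead.
PARAMS = 120e9  # the larger gpt-oss variant, ~120 billion parameters as stated above

for label, bytes_per_param in [("FP16/BF16", 2.0), ("8-bit", 1.0), ("4-bit", 0.5)]:
    gb = PARAMS * bytes_per_param / 1e9
    print(f"{label:>9}: ~{gb:,.0f} GB of weights")

# At ~4 bits per weight the weights alone are ~60 GB; with cache and overhead
# you land near the ~80 GB figure quoted above -- far beyond a 12 GB or 32 GB
# consumer GPU, but comfortable in 128 GB of unified memory.
```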

A callback to 2016

Nvidia founder and CEO Jensen Huang marked the occasion of the DGX Spark launch by personally delivering one of the first units to Elon Musk at SpaceX’s Starbase facility in Texas, echoing a similar delivery Huang made to Musk at OpenAI in 2016.

“In 2016, we built DGX-1 to give AI researchers their own supercomputer. I hand-delivered the first system to Elon at a small startup called OpenAI, and from it came ChatGPT,” Huang said in a statement. “DGX-1 launched the era of AI supercomputers and unlocked the scaling laws that drive modern AI. With DGX Spark, we return to that mission.”


hackers-can-steal-2fa-codes-and-private-messages-from-android-phones

Hackers can steal 2FA codes and private messages from Android phones


STEALING CODES ONE PIXEL AT A TIME

The malicious app that makes the “Pixnapping” attack work requires no permissions.

Samsung’s S25 phones. Credit: Samsung

Android devices are vulnerable to a new attack that can covertly steal two-factor authentication codes, location timelines, and other private data in less than 30 seconds.

The new attack, named Pixnapping by the team of academic researchers who devised it, requires a victim to first install a malicious app on an Android phone or tablet. The app, which requires no system permissions, can then effectively read data that any other installed app displays on the screen. Pixnapping has been demonstrated on Google Pixel phones and the Samsung Galaxy S25 phone and likely could be modified to work on other models with additional work. Google released mitigations last month, but the researchers said a modified version of the attack works even when the update is installed.

Like taking a screenshot

Pixnapping attacks begin with the malicious app invoking Android programming interfaces that cause the authenticator or other targeted apps to send sensitive information to the device screen. The malicious app then runs graphical operations on individual pixels of interest to the attacker. Pixnapping then exploits a side channel that allows the malicious app to map the pixels at those coordinates to letters, numbers, or shapes.

“Anything that is visible when the target app is opened can be stolen by the malicious app using Pixnapping,” the researchers wrote on an informational website. “Chat messages, 2FA codes, email messages, etc. are all vulnerable since they are visible. If an app has secret information that is not visible (e.g., it has a secret key that is stored but never shown on the screen), that information cannot be stolen by Pixnapping.”

The new attack class is reminiscent of GPU.zip, a 2023 attack that allowed malicious websites to read the usernames, passwords, and other sensitive visual data displayed by other websites. It worked by exploiting side channels found in GPUs from all major suppliers. The vulnerabilities that GPU.zip exploited have never been fixed. Instead, the attack was blocked in browsers by limiting their ability to open iframes, an HTML element that allows one website (in the case of GPU.zip, a malicious one) to embed the contents of a site from a different domain.

Pixnapping targets the same side channel as GPU.zip, specifically the precise amount of time it takes for a given frame to be rendered on the screen.

“This allows a malicious app to steal sensitive information displayed by other apps or arbitrary websites, pixel by pixel,” Alan Linghao Wang, lead author of the research paper “Pixnapping: Bringing Pixel Stealing out of the Stone Age,” explained in an interview. “Conceptually, it is as if the malicious app was taking a screenshot of screen contents it should not have access to. Our end-to-end attacks simply measure the rendering time per frame of the graphical operations… to determine whether the pixel was white or non-white.”

Pixnapping in three steps

The attack occurs in three main steps. In the first, the malicious app invokes Android APIs that make calls to the app the attacker wants to snoop on. These calls can also be used to effectively scan an infected device for installed apps of interest. The calls can further cause the targeted app to display specific data it has access to, such as a message thread in a messaging app or a 2FA code for a specific site. This call causes the information to be sent to the Android rendering pipeline, the system that takes each app’s pixels so they can be rendered on the screen. The Android-specific calls made include activities, intents, and tasks.

In the second step, Pixnapping performs graphical operations on individual pixels that the targeted app sent to the rendering pipeline. These operations choose the coordinates of target pixels the app wants to steal and begin to check if the color of those coordinates is white or non-white or, more generally, if the color is c or non-c (for an arbitrary color c).

“Suppose, for example, [the attacker] wants to steal a pixel that is part of the screen region where a 2FA character is known to be rendered by Google Authenticator,” Wang said. “This pixel is either white (if nothing was rendered there) or non-white (if part of a 2FA digit was rendered there). Then, conceptually, the attacker wants to cause some graphical operations whose rendering time is long if the target victim pixel is non-white and short if it is white. The malicious app does this by opening some malicious activities (i.e., windows) in front of the victim app that was opened in Step 1.”

The third step measures the amount of time required at each coordinate. By combining the times for each one, the attack can rebuild the images sent to the rendering pipeline one pixel at a time.
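A conceptual sketch of that final step in plain Python, with invented timing values standing in for the side-channel measurements (the real attack drives Android rendering APIs; nothing here is actual attack code):

```python
# Conceptual sketch of step three: classify each targeted pixel from how long
# its frames took to render, then reassemble the bits into an image row.
# The timing values are invented; the real attack measures them via the
# GPU.zip side channel while drawing over the victim app.

FRAME_TIME_THRESHOLD_MS = 11.0   # hypothetical cutoff between "white" and "non-white"

def classify_pixel(frame_times_ms):
    """Longer average render time => the masking graphical ops did more work
    => the underlying pixel was non-white."""
    avg = sum(frame_times_ms) / len(frame_times_ms)
    return 1 if avg > FRAME_TIME_THRESHOLD_MS else 0  # 1 = non-white, 0 = white

# One list of per-frame timings per targeted coordinate (16 samples each,
# mirroring the reduced sample count described in the paper).
measurements = [
    [10.1, 10.3, 10.2, 10.4] * 4,   # fast -> white background
    [12.0, 11.8, 12.1, 11.9] * 4,   # slow -> part of a digit
    [11.9, 12.2, 12.0, 11.7] * 4,
    [10.2, 10.1, 10.3, 10.2] * 4,
]

row = [classify_pixel(t) for t in measurements]
print("Recovered pixel row:", row)   # e.g. [0, 1, 1, 0]
```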

As Ars reader hotball put it in the comments below:

Basically the attacker renders something transparent in front of the target app, then using a timing attack exploiting the GPU’s graphical data compression to try finding out the color of the pixels. It’s not something as simple as “give me the pixels of another app showing on the screen right now.” That’s why it takes time and can be too slow to fit within the 30 seconds window of the Google Authenticator app.

In an online interview, paper co-author Ricardo Paccagnella described the attack in more detail:

Step 1: The malicious app invokes a target app to cause some sensitive visual content to be rendered.

Step 2: The malicious app uses Android APIs to “draw over” that visual content and cause a side channel (in our case, GPU.zip) to leak as a function of the color of individual pixels rendered in Step 1 (e.g., activate only if the pixel color is c).

Step 3: The malicious app monitors the side effects of Step 2 to infer, e.g., if the color of those pixels was c or not, one pixel at a time.

Steps 2 and 3 can be implemented differently depending on the side channel that the attacker wants to exploit. In our instantiations on Google and Samsung phones, we exploited the GPU.zip side channel. When using GPU.zip, measuring the rendering time per frame was sufficient to determine if the color of each pixel is c or not. Future instantiations of the attack may use other side channels where controlling memory management and accessing fine-grained timers may be necessary (see Section 3.3 of the paper). Pixnapping would still work then: the attacker would just need to change how Steps 2 and 3 are implemented.

The amount of time required to perform the attack depends on several variables, including how many coordinates need to be measured. In some cases, there’s no hard deadline for obtaining the information the attacker wants to steal. In other cases—such as stealing a 2FA code—every second counts, since each one is valid for only 30 seconds. In the paper, the researchers explained:

To meet the strict 30-second deadline for the attack, we also reduce the number of samples per target pixel to 16 (compared to the 34 or 64 used in earlier attacks) and decrease the idle time between pixel leaks from 1.5 seconds to 70 milliseconds. To ensure that the attacker has the full 30 seconds to leak the 2FA code, our implementation waits for the beginning of a new 30-second global time interval, determined using the system clock.

… We use our end-to-end attack to leak 100 different 2FA codes from Google Authenticator on each of our Google Pixel phones. Our attack correctly recovers the full 6-digit 2FA code in 73%, 53%, 29%, and 53% of the trials on the Pixel 6, 7, 8, and 9, respectively. The average time to recover each 2FA code is 14.3, 25.8, 24.9, and 25.3 seconds for the Pixel 6, Pixel 7, Pixel 8, and Pixel 9, respectively. We are unable to leak 2FA codes within 30 seconds using our implementation on the Samsung Galaxy S25 device due to significant noise. We leave further investigation of how to tune our attack to work on this device to future work.

In an email, a Google representative wrote, “We issued a patch for CVE-2025-48561 in the September Android security bulletin, which partially mitigates this behavior. We are issuing an additional patch for this vulnerability in the December Android security bulletin. We have not seen any evidence of in-the-wild exploitation.”

Pixnapping is useful research in that it demonstrates the limitations of Google’s security and privacy assurances that one installed app can’t access data belonging to another app. The challenges in implementing the attack to steal useful data in real-world scenarios, however, are likely to be significant. In an age when teenagers can steal secrets from Fortune 500 companies simply by asking nicely, more complicated and limited attacks like this one are probably of less value.

Post updated to add details about how the attack works.



why-signal’s-post-quantum-makeover-is-an-amazing-engineering-achievement

Why Signal’s post-quantum makeover is an amazing engineering achievement


COMING TO A PHONE NEAR YOU

New design sets a high standard for post-quantum readiness.

Credit: Aurich Lawson | Getty Images

The encryption protecting communications against criminal and nation-state snooping is under threat. As private industry and governments get closer to building useful quantum computers, the algorithms protecting Bitcoin wallets, encrypted Web visits, and other sensitive secrets will be useless. No one doubts the day will come, but as the now-common joke in cryptography circles observes, experts have been forecasting this cryptocalypse will arrive in the next 15 to 30 years for the past 30 years.

The uncertainty has created something of an existential dilemma: Should network architects spend the billions of dollars required to wean themselves off quantum-vulnerable algorithms now, or should they prioritize their limited security budgets fighting more immediate threats such as ransomware and espionage attacks? Given the expense and no clear deadline, it’s little wonder that less than half of all TLS connections made inside the Cloudflare network and only 18 percent of Fortune 500 networks support quantum-resistant TLS connections. It’s all but certain that many fewer organizations still are supporting quantum-ready encryption in less prominent protocols.

Triumph of the cypherpunks

One exception to the industry-wide lethargy is the engineering team that designs the Signal Protocol, the open-source engine that powers the world’s most robust and resilient form of end-to-end encryption for multiple private chat apps, most notably the Signal Messenger. Eleven days ago, the nonprofit entity that develops the protocol, Signal Messenger LLC, published a 5,900-word write-up describing its latest updates that make Signal fully quantum-resistant.

The complexity and problem-solving required for making the Signal Protocol quantum safe are as daunting as just about any in modern-day engineering. The original Signal Protocol already resembled the inside of a fine Swiss timepiece, with countless gears, wheels, springs, hands, and other parts all interoperating in an intricate way. In less adept hands, mucking about with an instrument as complex as the Signal protocol could have led to shortcuts or unintended consequences that hurt performance, undoing what would otherwise be a perfectly running watch. Yet this latest post-quantum upgrade (the first one came in 2023) is nothing short of a triumph.

“This appears to be a solid, thoughtful improvement to the existing Signal Protocol,” said Brian LaMacchia, a cryptography engineer who oversaw Microsoft’s post-quantum transition from 2015 to 2022 and now works at Farcaster Consulting Group. “As part of this work, Signal has done some interesting optimization under the hood so as to minimize the network performance impact of adding the post-quantum feature.”

Of the multiple hurdles to clear, the most challenging was accounting for the much larger key sizes that quantum-resistant algorithms require. The overhaul here adds protections based on ML-KEM-768, an implementation of the CRYSTALS-Kyber algorithm that was selected in 2022 and formalized last year by the National Institute of Standards and Technology. ML-KEM is short for Module-Lattice-Based Key-Encapsulation Mechanism, but most of the time, cryptographers refer to it simply as KEM.

Ratchets, ping-pong, and asynchrony

Like the Elliptic curve Diffie-Hellman (ECDH) protocol that Signal has used since its start, KEM is a key encapsulation mechanism. Also known as a key agreement mechanism, it provides the means for two parties who have never met to securely agree on one or more shared secrets in the presence of an adversary who is monitoring the parties’ connection. RSA, ECDH, and other encapsulation algorithms have long been used to negotiate symmetric keys (almost always AES keys) in protocols including TLS, SSH, and IKE. Unlike ECDH and RSA, however, the much newer KEM is quantum-safe.

Key agreement in a protocol like TLS is relatively straightforward. That’s because devices connecting over TLS negotiate a key over a single handshake that occurs at the beginning of a session. The agreed-upon AES key is then used throughout the session. The Signal Protocol is different. Unlike TLS sessions, Signal sessions are protected by forward secrecy, a cryptographic property that ensures the compromise of a key used to encrypt a recent set of messages can’t be used to decrypt an earlier set of messages. The protocol also offers Post-Compromise Security, which protects future messages from past key compromises. While a TLS session uses the same key throughout, keys within a Signal session constantly evolve.

To provide these confidentiality guarantees, the Signal Protocol updates secret key material each time a message party hits the send button or receives a message, and at other points, such as in graphical indicators that a party is currently typing and in the sending of read receipts. The mechanism that has made this constant key evolution possible over the past decade is what protocol developers call a “double ratchet.” Just as a traditional ratchet allows a gear to rotate in one direction but not in the other, the Signal ratchets allow messaging parties to create new keys based on a combination of preceding and newly agreed-upon secrets. The ratchets work in a single direction, the sending and receiving of future messages. Even if an adversary compromises a newly created secret, messages encrypted using older secrets can’t be decrypted.

The starting point is a handshake that performs three or four ECDH agreements that mix long- and short-term secrets to establish a shared secret. The creation of this “root key” allows the Double Ratchet to begin. Until 2023, the key agreement used X3DH; the handshake now uses PQXDH, which makes it quantum-resistant.

The first layer of the Double Ratchet, the Symmetric Ratchet, derives an AES key from the root key and advances it for every message sent. This allows every message to be encrypted with a new secret key. Consequently, if attackers compromise one party’s device, they won’t be able to learn anything about the keys that came earlier. Even then, though, the attackers would still be able to compute the keys used in future messages. That’s where the second, “Diffie-Hellman ratchet” comes in.

The Diffie-Hellman ratchet incorporates a new ECDH public key into each message sent. Consider Alice and Bob, the fictional characters often invoked when explaining asymmetric encryption: when Alice sends Bob a message, she creates a new ratchet keypair and computes the ECDH agreement between this key and the last ratchet public key Bob sent. This gives her a new secret, and she knows that once Bob gets her new public key, he will know this secret, too (because, as mentioned earlier, Bob previously sent that other key). With that, Alice can mix the new secret with her old root key to get a new root key and start fresh. The result: Attackers who learn her old secrets won’t be able to tell the difference between her new ratchet keys and random noise.
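A heavily simplified sketch of those two ratchets, using HKDF and X25519 from the Python `cryptography` package; this illustrates the key-evolution idea only and is not the actual Signal Protocol, which also handles headers, out-of-order messages, and much more:

```python
# Simplified double-ratchet key evolution. Illustrative only -- the real
# Signal Protocol adds headers, message ordering, and many other details.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.asymmetric import x25519

def kdf(key_material: bytes, info: bytes, n: int = 64) -> bytes:
    return HKDF(algorithm=hashes.SHA256(), length=n, salt=None, info=info).derive(key_material)

# Symmetric ratchet: advance the chain key and pop out a fresh message key
# every time a message is sent or received.
def symmetric_step(chain_key: bytes):
    out = kdf(chain_key, b"chain-step")
    return out[:32], out[32:]          # (next chain key, message key)

# Diffie-Hellman ratchet: mix a fresh X25519 agreement into the root key,
# so compromise of old secrets says nothing about the new ones.
def dh_step(root_key: bytes, my_new_priv, their_ratchet_pub):
    shared = my_new_priv.exchange(their_ratchet_pub)
    out = kdf(root_key + shared, b"dh-step")
    return out[:32], out[32:]          # (new root key, new chain key)

# Toy run: Bob has published a ratchet key; Alice ratchets and sends two messages.
root_key = b"\x00" * 32                # stands in for the handshake-derived root key
bob_ratchet_priv = x25519.X25519PrivateKey.generate()
alice_ratchet_priv = x25519.X25519PrivateKey.generate()

root_key, chain_key = dh_step(root_key, alice_ratchet_priv, bob_ratchet_priv.public_key())
chain_key, msg_key_1 = symmetric_step(chain_key)
chain_key, msg_key_2 = symmetric_step(chain_key)
print(msg_key_1.hex()[:16], msg_key_2.hex()[:16])   # distinct per-message keys
```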

The result is what Signal developers describe as “ping-pong” behavior, as the parties to a discussion take turns replacing ratchet key pairs one at a time. The effect: An eavesdropper who compromises one of the parties might recover a current ratchet private key, but soon enough, that private key will be replaced with a new, uncompromised one, and in a way that keeps it free from the prying eyes of the attacker.

The objective of the newly generated keys is to limit the number of messages that can be decrypted if an adversary recovers key material at some point in an ongoing chat. Messages sent prior to and after the compromise will remain off limits.

A major challenge designers of the Signal Protocol face is the need to make the ratchets work in an asynchronous environment. Asynchronous messages occur when parties send or receive them at different times—such as while one is offline and the other is active, or vice versa—without either needing to be present or respond immediately. The entire Signal Protocol must work within this asynchronous environment. What’s more, it must work reliably over unstable networks and networks controlled by adversaries, such as a government that forces a telecom or cloud service to spy on the traffic.

Shor’s algorithm lurking

By all accounts, Signal’s double ratchet design is state-of-the-art. That said, it’s wide open to an inevitable if not immediate threat: quantum computing. That’s because an adversary capable of monitoring traffic passing from two or more messenger users can capture that data and feed it into a quantum computer—once one of sufficient power is viable—and calculate the ephemeral keys generated in the second ratchet.

In classical computing, it’s infeasible, if not impossible, for such an adversary to calculate the key. Like all asymmetric encryption algorithms, ECDH is based on a mathematical one-way function. Also known as trapdoor functions, these are trivial to compute in one direction and substantially harder to compute in reverse. In elliptic curve cryptography, the one-way function is based on the discrete logarithm problem. The key parameters are based on specific points on an elliptic curve over the field of integers modulo some prime P.

On average, an adversary equipped with only a classical computer would spend billions of years guessing integers before arriving at the right ones. A quantum computer, by contrast, would be able to calculate the correct integers in a matter of hours or days. Shor’s algorithm—which runs only on a quantum computer—effectively turns this one-way discrete logarithm problem into a two-way one. Shor’s algorithm can similarly make quick work of the one-way function that’s the basis for the RSA algorithm.

As noted earlier, the Signal Protocol received its first post-quantum makeover in 2023. This update added PQXDH—a Signal-specific implementation that combined the key agreements from elliptic curves used in X3DH (specifically X25519) and the quantum-safe KEM—in the initial protocol handshake. (X3DH was then put out to pasture as a standalone implementation.)

The move foreclosed the possibility of a quantum attack being able to recover the symmetric key used to start the ratchets, but the ephemeral keys established in the ping-ponging second ratchet remained vulnerable to a quantum attack. Signal’s latest update adds quantum resistance to these keys, ensuring that forward secrecy and post-compromise security are safe from Shor’s algorithm as well.

Even though the ping-ponging keys are vulnerable to future quantum attacks, they are broadly believed to be secure against today’s attacks from classical computers. The Signal Protocol developers didn’t want to remove them or the battle-tested code that produces them. That led to their decision to add quantum resistance by adding a third ratchet. This one uses a quantum-safe KEM to produce new secrets much like the Diffie-Hellman ratchet did before, ensuring quantum-safe, post-compromise security.

The technical challenges were anything but easy. Elliptic curve keys generated in the X25519 implementation are 32 bytes long, small enough to be added to each message without creating a burden on already constrained bandwidth or computing resources. An ML-KEM-768 key, by contrast, is 1,184 bytes. Additionally, Signal’s design requires sending both an encryption key and a ciphertext, making the total size 2,272 bytes.
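The arithmetic behind those figures, using the standard ML-KEM-768 encapsulation-key and ciphertext sizes:

```python
X25519_KEY = 32             # bytes per ratchet public key in the classic design
MLKEM768_PUBKEY = 1184      # bytes, ML-KEM-768 encapsulation key
MLKEM768_CIPHERTEXT = 1088  # bytes, ML-KEM-768 ciphertext

total = MLKEM768_PUBKEY + MLKEM768_CIPHERTEXT
print(total, "bytes per message,", total // X25519_KEY, "x the old 32-byte overhead")
# -> 2272 bytes per message, 71x the old overhead
```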

And then there were three

To handle the 71x increase, Signal developers considered a variety of options. One was to send the 2,272-byte KEM key less often—say every 50th message or once every week—rather than every message. That idea was nixed because it doesn’t work well in asynchronous or adversarial messaging environments. Signal Protocol developers Graeme Connell and Rolfe Schmidt explained:

Consider the case of “send a key if you haven’t sent one in a week”. If Bob has been offline for 2 weeks, what does Alice do when she wants to send a message? What happens if we can lose messages, and we lose the one in fifty that contains a new key? Or, what happens if there’s an attacker in the middle that wants to stop us from generating new secrets, and can look for messages that are [many] bytes larger than the others and drop them, only allowing keyless messages through?

Another option Signal engineers considered was breaking the 2272-byte key into smaller chunks, say 71 of them that are 32 bytes each. Breaking up the KEM key into smaller chunks and putting one in each message sounds like a viable approach at first, but once again, the asynchronous environment of messaging made it unworkable. What happens, for example, when data loss causes one of the chunks to be dropped? The protocol could deal with this scenario by just repeat-sending chunks again after sending all 71 previously. But then an adversary monitoring the traffic could simply cause packet 3 to be dropped each time, preventing Alice and Bob from completing the key exchange.

Signal developers ultimately went with a solution that used this multiple-chunks approach.

Sneaking an elephant through the cat door

To manage the asynchrony challenges, the developers turned to “erasure codes,” a method of breaking up larger data into smaller pieces such that the original can be reconstructed using any sufficiently sized subset of chunks.

Charlie Jacomme, a researcher at INRIA Nancy on the Pesto team who focuses on formal verification and secure messaging, said this design accounts for packet loss by building redundancy into the chunked material. Instead of all x chunks having to be successfully received to reconstruct the key, the model requires only x-y chunks, where y is the acceptable number of lost packets. As long as that threshold is met, the new key can be established even when packet loss occurs.
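A toy single-loss erasure code (plain XOR parity) illustrates the threshold property Jacomme describes; Signal’s actual scheme is more sophisticated, but the idea that any sufficiently large subset of chunks reconstructs the original is the same:

```python
# Toy erasure code: split data into k chunks plus one XOR parity chunk, so the
# receiver can rebuild the original from any k of the k+1 pieces (i.e. it
# tolerates the loss of one chunk). Signal's real scheme generalizes this idea.
from functools import reduce

def encode(data: bytes, k: int):
    size = -(-len(data) // k)                      # ceiling division
    chunks = [data[i*size:(i+1)*size].ljust(size, b"\x00") for i in range(k)]
    parity = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), chunks)
    return chunks + [parity]

def decode(pieces, k, length):
    # pieces: list of (index, chunk); at most one of the k data chunks missing
    present = dict(pieces)
    if len(present) < k:
        raise ValueError("not enough chunks to reconstruct")
    missing = [i for i in range(k) if i not in present]
    if missing:
        others = [present[i] for i in range(k) if i != missing[0]]
        present[missing[0]] = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)),
                                     others, present[k])
    return b"".join(present[i] for i in range(k))[:length]

key = bytes(range(12))                     # stand-in for KEM key material
pieces = list(enumerate(encode(key, k=3)))
del pieces[1]                              # the network drops chunk 1
assert decode(pieces, k=3, length=len(key)) == key
print("reconstructed despite a lost chunk")
```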

The other part of the design was to split the KEM computations into smaller steps. These KEM computations are distinct from the KEM key material.

As Jacomme explained it:

Essentially, a small part of the public key is enough to start computing and sending a bigger part of the ciphertext, so you can quickly send in parallel the rest of the public key and the beginning of the ciphertext. Essentially, the final computations are equal to the standard, but some stuff was parallelized.

All this in fact plays a role in the end security guarantees, because by optimizing the fact that KEM computations are done faster, you introduce in your key derivation fresh secrets more frequently.

Signal’s post 10 days ago included several images that illustrate this design.

While the design solved the asynchronous messaging problem, it created a new complication of its own: This new quantum-safe ratchet advanced so quickly that it couldn’t be kept synchronized with the Diffie-Hellman ratchet. Ultimately, the architects settled on a creative solution. Rather than bolt KEM onto the existing double ratchet, they allowed it to remain more or less the same as it had been. Then they used the new quantum-safe ratchet to implement a parallel secure messaging system.

Now, when the protocol encrypts a message, it sources encryption keys from both the classic Double Ratchet and the new ratchet. It then mixes the two keys together (using a cryptographic key derivation function) to get a new encryption key that has all of the security of the classical Double Ratchet but now has quantum security, too.
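A minimal sketch of that final mixing step, with illustrative labels and lengths rather than Signal’s actual KDF inputs:

```python
# Mix the classical Double Ratchet key with the SPQR-derived key so the
# message key stays secure as long as EITHER ratchet remains unbroken.
# Labels and lengths are illustrative, not Signal's actual KDF inputs.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
import os

classical_key = os.urandom(32)   # stands in for the Double Ratchet output
pq_key = os.urandom(32)          # stands in for the SPQR (ML-KEM) output

message_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"hybrid-message-key").derive(classical_key + pq_key)
print(message_key.hex())
```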

The Signal engineers have given this third ratchet the formal name: Sparse Post Quantum Ratchet, or SPQR for short. The third ratchet was designed in collaboration with PQShield, AIST, and New York University. The developers presented the erasure-code-based chunking and the high-level Triple Ratchet design at the Eurocrypt 2025 conference. At the Usenix 25 conference, they discussed the six options they considered for adding quantum-safe forward secrecy and post-compromise security and why SPQR and one other stood out. Presentations at the NIST PQC Standardization Conference and the Cryptographic Applications Workshop explain the details of chunking, the design challenges, and how the protocol had to be adapted to use the standardized ML-KEM.

Jacomme further observed:

The final thing interesting for the triple ratchet is that it nicely combines the best of both worlds. Between two users, you have a classical DH-based ratchet going on one side, and fully independently, a KEM-based ratchet is going on. Then, whenever you need to encrypt something, you get a key from both, and mix it up to get the actual encryption key. So, even if one ratchet is fully broken, be it because there is now a quantum computer, or because somebody manages to break either elliptic curves or ML-KEM, or because the implementation of one is flawed, or…, the Signal message will still be protected by the second ratchet. In a sense, this update can be seen, of course simplifying, as doubling the security of the ratchet part of Signal, and is a cool thing even for people that don’t care about quantum computers.

As both Signal and Jacomme noted, users of Signal and other messengers relying on the Signal Protocol need not concern themselves with any of these new designs. To paraphrase a certain device maker, it just works.

In the coming weeks or months, various messaging apps and app versions will be updated to add the triple ratchet. Until then, apps will simply rely on the double ratchet as they always did. Once apps receive the update, they’ll behave exactly as they did before upgrading.

For those who care about the internal workings of their Signal-based apps, though, the architects have documented in great depth the design of this new ratchet and how it behaves. Among other things, the work includes a mathematical proof verifying that the updated Signal protocol provides the claimed security properties.

Outside researchers are applauding the work.

“If the normal encrypted messages we use are cats, then post-quantum ciphertexts are elephants,” Matt Green, a cryptography expert at Johns Hopkins University, wrote in an interview. “So the problem here is to sneak an elephant through a tunnel designed for cats. And that’s an amazing engineering achievement. But it also makes me wish we didn’t have to deal with elephants.”




How close are we to solid state batteries for electric vehicles?


Superionic materials promise greater range, faster charges and more safety.

In early 2025, Mercedes-Benz ran its first road tests of an electric passenger car powered by a prototype solid-state battery pack. The carmaker predicts the next-gen battery will increase the electric vehicle’s driving range to over 620 miles (1,000 kilometers). Credit: Mercedes-Benz Group

Every few weeks, it seems, yet another lab proclaims yet another breakthrough in the race to perfect solid-state batteries: next-generation power packs that promise to give us electric vehicles (EVs) so problem-free that we’ll have no reason left to buy gas-guzzlers.

These new solid-state cells are designed to be lighter and more compact than the lithium-ion batteries used in today’s EVs. They should also be much safer, with nothing inside that can burn the way today’s cells do in those rare but hard-to-extinguish lithium-ion fires. They should hold a lot more energy, turning range anxiety into a distant memory with consumer EVs able to go four, five, six hundred miles on a single charge.

And forget about those “fast” recharges lasting half an hour or more: Solid-state batteries promise EV fill-ups in minutes—almost as fast as any standard car gets with gasoline.

This may all sound too good to be true—and it is, if you’re looking to buy a solid-state-powered EV this year or next. Look a bit further, though, and the promises start to sound more plausible. “If you look at what people are putting out as a road map from industry, they say they are going to try for actual prototype solid-state battery demonstrations in their vehicles by 2027 and try to do large-scale commercialization by 2030,” says University of Washington materials scientist Jun Liu, who directs a university-government-industry battery development collaboration known as the Innovation Center for Battery500 Consortium.

Indeed, the challenge is no longer to prove that solid-state batteries are feasible. That has long since been done in any number of labs around the world. The big challenge now is figuring out how to manufacture these devices at scale, and at an acceptable cost.

Superionic materials to the rescue

Not so long ago, says Eric McCalla, who studies battery materials at McGill University in Montreal and is a coauthor of a paper on battery technology in the 2025 Annual Review of Materials Research, this heady rate of advancement toward powering electric vehicles was almost unimaginable.

Until about 2010, explains McCalla, “the solid-state battery had always seemed like something that would be really awesome—if we could get it to work.” Like current EV batteries, it would still be built with lithium, an unbeatable element when it comes to the amount of charge it can store per gram. But standard lithium-ion batteries use a liquid, a highly flammable one at that, to allow easy passage of charged particles (ions) between the device’s positive and negative electrodes. The new battery design would replace the liquid with a solid electrolyte that would be nearly impervious to fire—while allowing for a host of other physical and chemical changes that could make the battery faster charging, lighter in weight, and all the rest.

“But the material requirements for these solid electrolytes were beyond the state of the art,” says McCalla. After all, standard lithium-ion batteries have a good reason for using a liquid electrolyte: It gives the ionized lithium atoms inside a fluid medium to move through as they shuttle between the battery’s two electrodes. This back-and-forth cycle is how any battery stores and releases energy—the chemical equivalent of pumping water from a low-lying reservoir to a high mountain lake, then letting it run back down through a turbine whenever you need some power. This hypothetical new battery would somehow have to let those lithium ions flow just as freely—but through a solid.

Diagram of a rechargeable battery

Storing electrical energy in a rechargeable battery is like pumping water from a low-lying reservoir up to a high mountain lake. Likewise, using that energy to power an external device is like letting the water flow back downhill through a generator. The volume of the mountain lake corresponds to the battery’s capacity, or how much charge it can hold, while the lake’s height corresponds to the battery’s voltage—how much energy it gives to each unit of charge it sends through the device.

Credit: Knowable Magazine

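As a rough numerical illustration of the analogy (the cell and pack figures below are invented for the example, not taken from the article): a cell's stored energy is its capacity times its voltage, just as a lake's stored energy grows with both its volume and its height.

```python
# Illustrative numbers only: how capacity ("lake volume") and voltage
# ("lake height") combine into stored energy.
capacity_ah = 5.0                                 # charge the cell can hold, in amp-hours
voltage_v = 3.7                                   # energy per unit of charge, in volts
cell_energy_wh = capacity_ah * voltage_v          # 18.5 Wh per cell
pack_energy_kwh = cell_energy_wh * 4000 / 1000    # a hypothetical 4,000-cell EV pack
print(f"{cell_energy_wh:.1f} Wh per cell, about {pack_energy_kwh:.0f} kWh per pack")
```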

This seemed hopeless for larger uses such as EVs, says McCalla. Certain polymers and other solids were known to let ions pass, but at rates that were orders of magnitude slower than liquid electrolytes. In the past two decades, however, researchers have discovered several families of lithium-rich compounds that are “superionic”—meaning that some atoms behave like a crystalline solid while others behave more like a liquid—and that can conduct lithium ions as fast as standard liquid electrolytes, if not faster.

“So the bottleneck suddenly is not the bottleneck anymore,” says McCalla.

True, manufacturing these batteries can be a challenge. For example, some of the superionic solids are so brittle that they require special equipment for handling, while others must be processed in ultra-low humidity chambers lest they react with water vapor and generate toxic hydrogen sulfide gas.

Still, the suddenly wide-open potential of solid-state batteries has led to a surge of research and development money from funding agencies around the globe—not to mention the launch of multiple startup companies working in partnership with carmakers such as Toyota, Volkswagen, and many more. Although not all the numbers are public, investments in solid-state battery development are already in the billions of dollars worldwide.

“Every automotive company has said solid-state batteries are the future,” says University of Maryland materials scientist Eric Wachsman. “It’s just a question of, When is that future?”

The rise of lithium-ion batteries

Perhaps the biggest reason to ask that “when” question, aside from the still-daunting manufacturing challenges, is a stark economic reality: Solid-state batteries will have to compete in the marketplace with a standard lithium-ion industry that has an enormous head start.

“Lithium-ion batteries have been developed and optimized over the last 30 years, and they work really great,” says physicist Alex Louli, an engineer and spokesman at one of the leading solid-state battery startups, San Jose, California-based QuantumScape.

Diagram showing how li-ion battery works

Charging a standard lithium-ion battery (top) works by applying a voltage between cathode and anode. This pulls lithium atoms from the cathode and strips off an electron from each. The now positively charged lithium ions then flow across the membrane to the negatively charged anode. There, the ions reunite with the electrons, which flowed through an external circuit as an electric current. These now neutral atoms nest in the graphite lattice until needed again. The battery’s discharge cycle (bottom) is just the reverse: Electrons deliver energy to your cell phone or electric car as they flow via a circuit from anode to cathode, while lithium ions race through the membrane to meet them there.

Credit: Knowable Magazine


They’ve also gotten really cheap, comparatively speaking. When Japan’s Sony Corporation introduced the first commercial lithium-ion battery in 1991, drawing on a worldwide research effort dating back to the 1950s, it powered one of the company’s camcorders and cost the equivalent of $7,500 for every kilowatt-hour (kWh) of energy it stored. By April 2025 lithium-ion battery prices had plummeted to $115 per kWh, and were projected to fall toward $80 per kWh or less by 2030—low enough to make a new EV substantially cheaper than the equivalent gasoline-powered vehicle.
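As a back-of-the-envelope illustration of what those per-kilowatt-hour prices mean (the 75 kWh pack size is an assumed example, and these are cell prices only, not a full pack or vehicle cost):

```python
# Rough cost of the cells in a hypothetical 75 kWh EV pack at the article's
# price points; the pack size is an assumption for illustration.
pack_kwh = 75
for year, usd_per_kwh in [(1991, 7_500), (2025, 115), (2030, 80)]:
    print(f"{year}: ${pack_kwh * usd_per_kwh:,.0f}")
# 1991: $562,500    2025: $8,625    2030: $6,000
```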

“Most of these advancements haven’t really been down to any fundamental chemistry improvements,” says Mauro Pasta, an applied electrochemist at the University of Oxford. “What’s changed the game has been the economies of scale in manufacturing.”

Liu points to a prime example: the roll-to-roll process used for the cylindrical batteries found in most of today’s EVs. “You make a slurry,” says Liu, “then you cast the slurry into thin films, roll the films together with very high speed and precision, and you can make hundreds and thousands of cells very, very quickly with very high quality.”

Lithium-ion cells have also seen big advances in safety. The existence of that flammable electrolyte means that EV crashes can and do lead to hard-to-extinguish lithium-ion fires. But thanks to the circuit breakers and other safeguards built into modern battery packs, only about 25 EVs catch fire out of every 100,000 sold, versus some 1,500 fires per 100,000 conventional cars—which, of course, carry around large tanks of explosively flammable gasoline.

In fact, says McCalla, the standard lithium-ion industry is so far ahead that solid-state might never catch up. “EVs are going to scale today,” he says, “and they’re going with the technology that’s affordable today.” Indeed, battery manufacturers are ramping up their lithium-ion capacity as fast as they can. “So I wonder if the train has already left the station.”

But maybe not. Solid-state technology does have a geopolitical appeal, notes Ying Shirley Meng, a materials scientist at the University of Chicago and Argonne National Laboratory. “With lithium-ion batteries the game is over—China already dominates 70 percent of the manufacturing,” she says. So for any country looking to lead the next battery revolution, “solid-state presents a very exciting opportunity.”

Performance potential

Another plus is improved performance. At the very time that EV buyers are looking for ever greater range and charging speed, says Louli, the standard lithium-ion recipe is hitting a performance plateau. To do better, he says, “you have to go back and start doing some material innovations”—like those in solid-state batteries.

Take the standard battery’s liquid electrolyte, for example. It’s not only flammable, but also a limitation on charging speed. When you plug in an electric car, the charging cable acts as an external circuit that’s applying a voltage between the battery’s two electrodes, the cathode and the anode. The resulting electrical forces are strong enough to pull lithium atoms out of the cathode and to strip one electron from each atom. But when they drag the resulting ions through the electrolyte toward the anode, they hit the speed limit: Try to rush the ions along by upping the voltage too far and the electrolyte will chemically break down, ending the battery’s charging days forever.

So score one for solid-state batteries: Not only do the best superionic conductors offer a faster ion flow than liquid electrolytes, they also can tolerate higher voltages—all of which translates into EV recharges in under 10 minutes, versus half an hour or more for today’s lithium-ion power packs.
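To see what a sub-10-minute fill-up implies for charging hardware, here is a small illustrative calculation; the 80 kWh pack and the 10-to-80 percent charging window are assumed example figures, not numbers from the article.

```python
# Average charging power needed to add a given amount of energy in a given time.
pack_kwh = 80
energy_added_kwh = pack_kwh * (0.80 - 0.10)      # charging from 10% to 80% adds 56 kWh
for minutes in (30, 10):
    avg_power_kw = energy_added_kwh / (minutes / 60)
    print(f"{minutes} min session needs about {avg_power_kw:.0f} kW on average")
# 30 min ≈ 112 kW; 10 min ≈ 336 kW
```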

Score another win for solid-state when the ions arrive at the opposite electrode, the anode, during charging. This is where they reunite with their lost electrons, which have taken the long way around through the external circuit. And this is where standard lithium-ion batteries store the newly neutralized lithium atoms in a layer of graphite.

A solid-state battery doesn’t require a graphite cage to store lithium ions at the anode. This shrinks the overall size of the battery and increases its efficiency in uses such as an electric vehicle power pack. The solid-state design also replaces the porous membrane in the middle with a sturdier barrier. The aim is to create a battery that’s lighter and safer, stores more energy, and recharges more conveniently than current electric car batteries.

Credit: Knowable Magazine


Graphite anodes were a major commercial advance in 1991—the innovation that finally brought lithium-ion batteries out of the lab and into the marketplace. Graphite is cheap, chemically stable, excellent at conducting electricity, and able to slot those incoming lithium atoms into its hexagonal carbon lattice like so many eggs in an egg carton.

But graphite imposes yet another charging rate limit, since the lattice can handle only so many ions crowding in at once. And it’s heavy, wasting a lot of mass and volume on a simple container, says Louli: “Graphite is an accommodating host, but it does not deliver energy itself—it’s a passive component.” That’s why range-conscious automakers are eager for an alternative to graphite: The more capacity an EV can cram into the same-sized battery pack, and the less weight it has to haul around, the farther it can go on a single charge.

The ultimate alternative would be no cage at all, with no wasted space or weight—just incoming ions condensing into pure lithium metal with every charging cycle. In effect, such a metallic lithium anode would create and then dissolve itself with every charge and discharge cycle—while storing maybe 10 times more electrical energy per gram than a graphite anode.

Such lithium-metal anodes have been demonstrated in the lab since at least the 1970s, and even featured in some early, unsuccessful attempts at commercial lithium batteries. But even after decades of trying, says Louli, no one has been able to make metal anodes work safely and reliably in contact with liquid electrolytes. For one thing, he says, you get these reactions between your liquid electrolyte and the lithium metal that degrade them both, and you end up with a very bad battery lifetime.

And for another, adds Wachsman, “when you are charging a battery with liquids, the lithium going to the anode can plate out non-uniformly and form what are called dendrites.” These jagged spikes of metal can grow in unpredictable ways and pierce the battery’s separator layer: a thin film of electrically insulating polymer that keeps the two electrodes from touching one another. Breaching that barrier could easily cause a short circuit that abruptly ends the device’s useful life, or even sets it on fire.

Dendrite formation

Standard lithium-ion batteries don’t use lithium-metal anodes because there is too high a risk of the metal forming sharp spikes called dendrites. Such dendrites can easily pierce the porous polymer membrane that separates anode from cathode, causing a short-circuit or even sparking a fire. Solid-state batteries replace the membrane with a solid barrier.

Credit: Knowable Magazine


Now compare this with a battery that replaces both the liquid electrolyte and the separator with a solid-state layer tough enough to resist those spikes, says Wachsman. “It has the potential of, one, being stable to higher voltages; two, being stable in the presence of lithium metal; and three, preventing those dendrites”—just about everything you need to make those ultra-high-energy-density lithium-metal anodes a practical reality.

“That is what is really attractive about this new battery technology,” says Louli. And now that researchers have found so many superionic solids that could potentially work, he adds, “this is what’s driving the push for it.”

Manufacturing challenges

Increasingly, in fact, the field’s focus has shifted from research to practice, figuring out how to work the same kind of large-scale, low-cost manufacturing magic that’s made the standard lithium-ion architecture so dominant. These new superionic materials haven’t made it easy.

A prime example is the class of sulfides discovered by Japanese researchers in 2011. Not only were these sulfides among the first of the new superionics to be discovered, says Wachsman, they are still the leading contenders for early commercialization.

Major investments have come from startups such as Colorado-based Solid Power and Massachusetts-based Factorial Energy, as well as established battery giants such as China’s CATL and global carmakers such as Toyota and Honda.

And there’s one big reason for the focus on superionic sulfides, says Wachsman: “They’re easy to drop into existing battery cell manufacturing lines,” including the roll-to-roll process. “Companies have got billions of dollars invested in the existing infrastructure, and they don’t want to just displace that with something new.”

Yet these superionic sulfides also have some significant downsides—most notably, their extreme sensitivity to humidity. This complicates the drop-in process, says Oxford’s Pasta. The dry rooms that are currently used to manufacture lithium-ion batteries have a humidity content that is not nearly low enough for sulfide electrolytes, and would have to be retooled. That sensitivity also poses a safety risk if the batteries are ever ruptured in an accident, he says: “If you expose the sulfides to humidity in the air you will generate hydrogen sulfide gas, which is extremely toxic.”

All of which is why startups such as QuantumScape, and the Maryland-based Ion Storage Systems that spun out of Wachsman’s lab in 2015, are looking beyond sulfides to solid-state oxide electrolytes. These materials are essentially ceramics, says Wachsman, made in a high-tech version of pottery class: “You shape the clay, you fire it in a kiln, and it’s a solid.” Except that in this case, it’s a superionic solid that’s all but impervious to humidity, heat, fire, high voltage, and highly reactive lithium metal.

Yet that’s also where the manufacturing challenges start. Superionic or not, for example, ceramics are too brittle for roll-to-roll processing. Once they have been fired and solidified, says Wachsman, “you have to handle them more like a semiconductor wafer, with machines to cut the sheets to size and robotics to move them around.”

Then there’s the “reversible breathing” that plagues oxide and sulfide batteries alike: “With every charging cycle we’re plating and stripping lithium metal at the anode,” explains Louli. “So your entire cell stack will have a thickness increase when you charge and a thickness decrease when you discharge”—a cycle of tiny changes in volume that every solid-state battery design has to allow for.

At QuantumScape, for example, individual battery cells are made by stacking a number of gossamer-thin oxide sheets like a deck of cards, then encasing this stack inside a metal frame that is just thick enough to let the anode layer on each sheet freely expand and contract. The stack and the frame together are then vacuum-sealed into a soft-sided pouch, says Louli, “so if you pack the cells frame to frame, the stacks can breathe and not push on the adjacent cells.”

In a similar way, says Wachsman, all the complications of solid-state batteries have ready solutions—but solutions that inevitably add complexity and cost. Thus the field’s increasingly urgent obsession with manufacturing. Before an auto company will even consider adopting a new EV battery, he says, “it not only has to be better-performing than their current battery, it has to be cheaper.”

And the only way to make complicated technology cheaper is with economies of scale. “That’s why the biggest impediment to solid-state batteries is just the cost of standing up one of these gigafactories to make them in sufficient volume,” says Wachsman. “That’s why there’s probably going to be more solid-state batteries in early adopter-type applications that don’t require that kind of volume.”

Still, says Louli, the long-term demand is definitely there. “What we’re trying to enable by combining the lithium-metal anode with solid-state technology is threefold,” he says: “Higher energy, higher power and improved safety. So for high-performance applications like electric vehicles—or other applications that require high power density, such as drones or even electrified aviation—solid-state batteries are going to be well-suited.”

This story originally appeared in Knowable Magazine.




UK antitrust regulator takes aim at Google’s search dominance

Google is facing multiple antitrust actions in the US, and European regulators have been similarly tightening the screws. You can now add the UK to the list of Google’s governmental worries. The country’s antitrust regulator, known as the Competition and Markets Authority (CMA), has confirmed that Google has “strategic market status,” paving the way to more limits on how Google does business in the UK. Naturally, Google objects to this course of action.

The designation is connected to the UK’s new digital markets competition regime, which was enacted at the beginning of the year. Shortly after, the CMA announced it was conducting an investigation into whether Google should be designated with strategic market status. The outcome of that process is a resounding “yes.”

This label does not mean Google has done anything illegal or that it is subject to immediate regulation. It simply means the company has “substantial and entrenched market power” in one or more areas under the purview of the CMA. Specifically, the agency has found that Google is dominant in search and search advertising, holding a greater than 90 percent share of Internet searches in the UK.

In Google’s US antitrust trials, the rapid rise of generative AI has muddied the waters. Google has claimed on numerous occasions that the proliferation of AI firms offering search services means there is ample competition. In the UK, regulators note that Google’s Gemini AI assistant is not in the scope of the strategic market status designation. However, some AI features connected to search, like AI Overviews and AI Mode, are included.

According to the CMA, consultations on possible interventions to ensure effective competition will begin later this year. The agency’s first set of antitrust measures will likely expand on solutions that Google has introduced in other regions or has offered on a voluntary basis in the UK. This could include giving publishers more control over how their data is used in search and “choice screens” that suggest Google alternatives to users. Measures that require new action from Google could be announced in the first half of 2026.



Rocket Report: Bezos’ firm will package satellites for launch; Starship on deck


The long, winding road for Franklin Chang-Diaz’s plasma rocket engine takes another turn.

Blue Origin’s second New Glenn booster left its factory this week for a road trip to the company’s launch pad a few miles away. Credit: Blue Origin

Welcome to Edition 8.14 of the Rocket Report! We’re now more than a week into a federal government shutdown, but there’s been little effect on the space industry. Military space operations are continuing unabated, and NASA continues preparations at Kennedy Space Center, Florida, for the launch of the Artemis II mission around the Moon early next year. The International Space Station is still flying with a crew of seven in low-Earth orbit, and NASA’s fleet of spacecraft exploring the cosmos remains active. What’s more, so much of what the nation does in space is now done by commercial companies largely (but not completely) immune from the pitfalls of politics. But the effect of the shutdown on troops and federal employees shouldn’t be overlooked. They will soon miss their first paychecks unless political leaders reach an agreement to end the stalemate.

As always, we welcome reader submissions. Each report will include information on small-, medium-, and heavy-lift rockets, as well as a quick look ahead at the next three launches on the calendar.

Danger from dead rockets. A new listing of the 50 most concerning pieces of space debris in low-Earth orbit is dominated by relics more than a quarter-century old, primarily dead rockets left to hurtle through space at the end of their missions, Ars reports. “The things left before 2000 are still the majority of the problem,” said Darren McKnight, lead author of a paper presented October 3 at the International Astronautical Congress in Sydney. “Seventy-six percent of the objects in the top 50 were deposited last century, and 88 percent of the objects are rocket bodies. That’s important to note, especially with some disturbing trends right now.”

Littering in LEO … The disturbing trends mainly revolve around China’s actions in low-Earth orbit. “The bad news is, since January 1, 2024, we’ve had 26 rocket bodies abandoned in low-Earth orbit that will stay in orbit for more than 25 years,” McKnight told Ars. China is responsible for leaving behind 21 of those 26 rockets. Overall, Russia and the Soviet Union lead the pack with 34 objects listed in McKnight’s Top 50, followed by China with 10, the United States with three, Europe with two, and Japan with one. Russia’s SL-16 and SL-8 rockets are the worst offenders, combining to take 30 of the Top 50 slots. An impact with even a modestly sized object at orbital velocity would create countless pieces of debris, potentially triggering a cascading series of additional collisions clogging LEO with more and more space junk, a scenario called the Kessler Syndrome.


New Shepard flies again. Blue Origin, Jeff Bezos’ space company, launched its sixth crewed New Shepard flight so far this year Wednesday as the company works to increase the vehicle’s flight rate, Space News reports. This was the 36th flight of Blue Origin’s suborbital New Shepard rocket. The passengers included: Jeff Elgin, Danna Karagussova, Clint Kelly III, Will Lewis, Aaron Newman, and Vitalii Ostrovsky. Blue Origin said it has now flown 86 humans (80 individuals) into space. The New Shepard booster returned to a pinpoint propulsive landing, and the capsule parachuted into the desert a few miles from the launch site near Van Horn, Texas.

Two-month turnaround … This flight continued Blue Origin’s trend of launching New Shepard about once per month. The company has two capsules and two boosters in its active inventory, and each vehicle has flown about once every two months this year. Blue Origin currently has command of the space tourism and suborbital research market as its main competitor in this sector, Virgin Galactic, remains grounded while it builds a next-generation rocket plane. (submitted by EllPeaTea)

NASA still interested in former astronaut’s rocket engine. NASA has awarded the Ad Astra Rocket Company a $4 million, two-year contract for the continued development of the company’s Variable Specific Impulse Magnetoplasma Rocket (VASIMR) concept, Aviation Week & Space Technology reports. Ad Astra, founded by former NASA astronaut Franklin Chang-Diaz, claims the vehicle has the potential to reach Mars with human explorers within 45 days using a nuclear power source rather than solar power. The new contract will enable federal funding to support development of the engine’s radio frequency, superconducting magnet, and structural exoskeleton subsystems.

Slow going … Houston-based Ad Astra said in a press release that it sees the high-power plasma engine as “nearing flight readiness.” We’ve heard this before. The VASIMR engine has been in development for decades now, beset by a lack of stable funding and the technical hurdles inherent in designing and testing such demanding technology. For example, Ad Astra once planned a critical 100-hour, 100-kilowatt ground test of the VASIMR engine in 2018. The test still hasn’t happened. Engineers discovered a core component of the engine tended to overheat as power levels approached 100 kilowatts, forcing a redesign that set the program back by at least several years. Now, Ad Astra says it is ready to build and test a pair of 150-kilowatt engines, one of which is intended to fly in space at the end of the decade.

Gilmour eyes return to flight next year. Australian rocket and satellite startup Gilmour Space Technologies is looking to return to the launch pad next year after the first attempt at an orbital flight failed over the summer, Aviation Week & Space Technology reports. “We are well capitalized. We are going to be launching again next year,” Adam Gilmour, the company’s CEO, said October 3 at the International Astronautical Congress in Sydney.

What happened? … Gilmour didn’t provide many details about the cause of the launch failure in July, other than to say it appeared to be something the company didn’t test for ahead of the flight. The Eris rocket flew for 14 seconds, losing control and crashing a short distance from the launch pad in the Australian state of Queensland. If there’s any silver lining, Gilmour said the failure didn’t damage the launch pad, and the rocket’s use of a novel hybrid propulsion system limited the destructive power of the blast when it struck the ground.

Stoke Space’s impressive funding haul. Stoke Space announced a significant capital raise on Wednesday, a total of $510 million as part of Series D funding. The new financing doubles the total capital raised by Stoke Space, founded in 2020, to $990 million, Ars reports. The infusion of money will provide the company with “the runway to complete development” of the Nova rocket and demonstrate its capability through its first flights, said Andy Lapsa, the company’s co-founder and chief executive, in a news release characterizing the new funding.

A futuristic design … Stoke is working toward a 2026 launch of the medium-lift Nova rocket. The rocket’s innovative design is intended to be fully reusable from the payload fairing on down, with a regeneratively cooled heat shield on the vehicle’s second stage. In fully reusable mode, Nova will have a payload capacity of 3 metric tons to low-Earth orbit, and up to 7 tons in fully expendable mode. Stoke is building a launch pad for the Nova rocket at Cape Canaveral Space Force Station, Florida.

SpaceX took an unusual break from launching. SpaceX launched its first Falcon 9 rocket from Florida in 12 days during the predawn hours of Tuesday morning, Spaceflight Now reports. The launch gap was highlighted by a run of persistent, daily storms in Central Florida and over the Atlantic Ocean, including hurricanes that prevented deployment of SpaceX’s drone ships to support booster landings. The break ended with the launch of 28 more Starlink broadband satellites. SpaceX launched three Starlink missions in the interim from Vandenberg Space Force Base, California.

Weather still an issue … Weather conditions on Florida’s Space Coast are often volatile, particularly in the evenings during summer and early autumn. SpaceX’s next launch from Florida was supposed to take off Thursday evening, but officials pushed it back to no earlier than Saturday due to a poor weather forecast over the next two days. Weather still gets a vote in determining whether a rocket lifts off or doesn’t, despite SpaceX’s advancements in launch efficiency and the Space Force’s improved weather monitoring capabilities at Cape Canaveral.

ArianeGroup chief departs for train maker. Current ArianeGroup CEO Martin Sion has been named the new head of French train maker Alstom. He will officially take up the role in April 2026, European Spaceflight reports. Sion assumed the role as ArianeGroup’s chief executive in 2023, replacing the former CEO who left the company after delays in the debut of its main product: the Ariane 6 rocket. Sion’s appointment was announced by Alstom, but ArianeGroup has not made any official statement on the matter.

Under pressure … The change in ArianeGroup’s leadership comes as the company ramps up production and increases the launch cadence of the Ariane 6 rocket, which has now flown three times, with a fourth launch due next month. ArianeGroup’s subsidiary, Arianespace, seeks to increase the Ariane 6’s launch cadence to 10 missions per year by 2029. ArianeGroup and its suppliers will need to drastically improve factory throughput to reach this goal.

New Glenn emerges from factory. Blue Origin rolled the first stage of its massive New Glenn rocket from its hangar on Wednesday morning in Florida, kicking off the final phase of the campaign to launch the heavy-lift vehicle for the second time, Ars reports. In sharing video of the rollout to Launch Complex-36 on Wednesday online, the space company did not provide a launch target for the mission, which seeks to put two small Mars-bound payloads into orbit. The pair of identical spacecraft to study the solar wind at Mars is known as ESCAPADE. However, sources told Ars that on the current timeline, Blue Origin is targeting a launch window of November 9 to November 11. This assumes pre-launch activities, including a static-fire test of the first stage, go well.

Recovery or bust? … Blue Origin has a lot riding on this booster, named “Never Tell Me The Odds,” which it will seek to recover and reuse. Despite the name of the booster, the company is quietly confident that it will successfully land the first stage on a drone ship named Jacklyn. Internally, engineers at Blue Origin believe there is about a 75 percent chance of success. The first booster malfunctioned before landing on the inaugural New Glenn test flight in January. Company officials are betting big on recovering the booster this time, with plans to reuse it early next year to launch Blue’s first lunar lander to the Moon.

SpaceX gets bulk of this year’s military launch orders. Around this time each year, the US Space Force convenes a Mission Assignment Board to dole out contracts to launch the nation’s most critical national security satellites. The military announced this year’s launch orders Friday, and SpaceX was the big winner, Ars reports. Space Systems Command, the unit responsible for awarding military launch contracts, selected SpaceX to launch five of the seven missions up for assignment this year. United Launch Alliance (ULA), a 50-50 joint venture between Boeing and Lockheed Martin, won contracts for the other two. These missions for the Space Force and the National Reconnaissance Office are still at least a couple of years away from flying.

Vulcan getting more expensive … A closer examination of this year’s National Security Space Launch contracts reveals some interesting things. The Space Force is paying SpaceX $714 million for the five launches awarded Friday, for an average of roughly $143 million per mission. ULA will receive $428 million for two missions, or $214 million for each launch. That’s about 50 percent more expensive than SpaceX’s price per mission. This is in line with the prices the Space Force paid SpaceX and ULA for last year’s contracts. However, look back a little further and you’ll find ULA’s prices for military launches have, for some reason, increased significantly over the last few years. In late 2023, the Space Force awarded a $1.3 billion deal to ULA for a batch of 11 launches at an average cost per mission of $119 million. A few months earlier, Space Systems Command assigned six launches to ULA for $672 million, or $112 million per mission.
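The per-mission averages and the "about 50 percent" figure above follow directly from the contract totals; a quick recomputation (dollar figures from the article, rounding mine):

```python
# Recompute this year's per-launch averages from the award totals (in $ millions).
spacex_avg = 714 / 5        # ≈ 142.8, i.e. roughly $143M per mission
ula_avg = 428 / 2           # 214.0, i.e. $214M per mission
premium = ula_avg / spacex_avg - 1
print(f"SpaceX ≈ ${spacex_avg:.0f}M, ULA ${ula_avg:.0f}M, ULA ≈ {premium:.0%} higher")
```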

Starship Flight 11 nears launch. SpaceX rolled the Super Heavy booster for the next test flight of the company’s Starship mega-rocket out to the launch pad in Texas this week. The booster stage, with 33 methane-fueled engines, will power the Starship into the upper atmosphere during the first few minutes of flight. This booster is flight-proven, having previously launched and landed on a test flight in March.

Next steps … With the Super Heavy booster installed on the pad, the next step for SpaceX will be the rollout of the Starship upper stage. That is expected to happen in the coming days. Ground crews will raise Starship atop the Super Heavy booster to fully stack the rocket to its total height of more than 400 feet (120 meters). If everything goes well, SpaceX is targeting liftoff of the 11th full-scale test flight of Starship and Super Heavy as soon as Monday evening. (submitted by EllPeaTea)

Blue Origin takes on a new line of business. Blue Origin won a US Space Force competition to build a new payload processing facility at Cape Canaveral Space Force Station, Florida, Spaceflight Now reports. Under the terms of the $78.2 million contract, Blue Origin will build a new facility capable of handling payloads for up to 16 missions per year. The Space Force expects to use about half of that capacity, with the rest available to NASA or Blue Origin’s commercial customers. This contract award follows a $77.5 million agreement the Space Force signed with Astrotech earlier this year to expand the footprint of its payload processing facility at Vandenberg Space Force Base, California.

Important stuff … Ground infrastructure often doesn’t get the same level of attention as rockets, but the Space Force has identified bottlenecks in payload processing as potential constraints on ramping up launch cadences at the government’s spaceports in Florida and California. Currently, there are only a handful of payload processing facilities in the Cape Canaveral area, and most of them are only open to a single user, such as SpaceX, Amazon, the National Reconnaissance Office, or NASA. So, what exactly is payload processing? The Space Force said Blue Origin’s new facility will include space for “several pre-launch preparatory activities” that include charging batteries, fueling satellites, loading other gaseous and fluid commodities, and encapsulation. To accomplish those tasks, Blue Origin will create “a clean, secure, specialized high-bay facility capable of handling flight hardware, toxic fuels, and explosive materials.”

Next three launches

Oct. 11: Gravity 1 | Unknown Payload | Haiyang Spaceport, China Coastal Waters | 02:15 UTC

Oct. 12: Falcon 9 | Project Kuiper KF-03 | Cape Canaveral Space Force Station, Florida | 00:41 UTC

Oct. 13: Starship/Super Heavy | Flight 11 | Starbase, Texas | 23:15 UTC




Childhood vaccines safe for a little longer as CDC cancels advisory meeting

An October meeting of a key federal vaccine advisory committee has been canceled without explanation, sparing the evidence-based childhood vaccination schedule from more erosion—at least for now.

The Advisory Committee on Immunization Practices (ACIP) for the Centers for Disease Control and Prevention was planning to meet on October 22 and 23, which would have been the committee’s fourth meeting this year. But the meeting schedule was updated in the past week to remove those dates and replace them with “2025 meeting, TBD.”

Ars Technica contacted the Department of Health and Human Services to ask why the meeting was canceled. HHS press secretary Emily Hilliard offered no explanation, only saying that the “official meeting dates and agenda items will be posted on the website once finalized.”

ACIP is tasked with publicly reviewing and evaluating the wealth of safety and efficacy data on vaccines and then offering evidence-based recommendations for their use. Once the committee’s recommendations are adopted by the CDC, they set national vaccination standards for children and establish which shots federal programs and private insurance companies are required to fully cover.

In the past, the committee has been stacked with highly esteemed, thoroughly vetted medical experts, who diligently conducted their somewhat esoteric work on immunization policy with little fanfare. That changed when ardent anti-vaccine activist Robert F. Kennedy Jr. became health secretary. In June, Kennedy abruptly and unilaterally fired all 17 ACIP members, falsely accusing them of being riddled with conflicts of interest. He then installed his own hand-selected members. With the exception of one advisor—pediatrician and veteran ACIP member Cody Meissner—the members are poorly qualified, have gone through little vetting, and embrace the same anti-vaccine and dangerous fringe ideas as Kennedy.

Corrupted committee

So far this year, Kennedy’s advisors have met twice, producing chaotic meetings during which members revealed a clear lack of understanding of the data at hand and the process of setting vaccine recommendations, all while setting policy decisions long sought by anti-vaccine activists. The first meeting, in June, included seven members selected by Kennedy. In that meeting, the committee rescinded the recommendation for flu vaccines containing a preservative called thimerosal based on false claims from anti-vaccine groups that it causes autism. The panel also ominously said it would re-evaluate the entire childhood vaccination schedule, putting life-saving shots at risk.



Intel’s next-generation Panther Lake laptop chips could be a return to form

Intel says that systems with these chips in them should be shipping by the end of the year. In recent years, the company has launched a small handful of ultraportable-focused CPUs at the end of the year, and then followed that up with a more fully fleshed-out midrange and high-end lineup at CES in January—we’d expect Intel to stick to that basic approach here.

Panther Lake draws near

Panther Lake tries to combine different aspects of the last-generation Lunar Lake and Arrow Lake chips. Credit: Intel

Intel’s first Core Ultra chips, codenamed Meteor Lake, were introduced two years ago. There were three big changes that separated these from the 14th-generation Core CPUs and their predecessors: They were constructed of multiple silicon tiles, fused together into one with Intel’s Foveros packaging technologies; some of those tiles were manufactured by TSMC rather than Intel; and they added a neural processing unit (NPU) that could be used for on-device machine learning and generative AI applications.

The second-generation Core Ultra chips continued to do all three of those things, but Intel pursued an odd bifurcated strategy that gave different Core Ultra 200-series processors significantly different capabilities.

The most interesting models, codenamed Lunar Lake (aka Core Ultra 200V), integrated the system RAM on the CPU package, which improved performance and power consumption while making them more expensive to buy and complicated to manufacture. These chips included Intel’s most up-to-date Arc GPU architecture, codenamed Battlemage, plus an NPU that met the performance requirements for Microsoft’s Copilot+ PC initiative.

But Core Ultra 200V chips were mostly used in high-end thin-and-light laptops. Lower-cost and higher-performance laptops got the other kind of Core Ultra 200 chip, codenamed Arrow Lake, which was a mishmash of old and new. The CPU cores used the same architecture as Lunar Lake, and there were usually more of them. But the GPU architecture was older and slower, and the NPU didn’t meet the requirements for Copilot+. If Lunar Lake was all-new, Arrow Lake was mostly an updated CPU design fused to a tweaked version of the original Meteor Lake design (confused by all these lakes yet? Welcome to my world).



Man gets drunk, wakes up with a medical mystery that nearly kills him

And what about the lungs? A number of things could explain the problems in his lungs—including infections from soil bacteria he might encounter in his construction work or a parasitic infection found in Central America. But the cause that best fit was common pneumonia and, more specifically, based on the distribution of opacities in his lung, pneumonia caused by aspiration (inhaling food particles or other things that are not air)—which is something that can happen when people drink excessive amounts of alcohol, as the man regularly did.

“Ethanol impairs consciousness and blunts protective reflexes (e.g., cough and gag), which disrupts the normal control mechanisms of the upper aerodigestive tract,” Dhaliwal noted.

And this is where Dhaliwal made a critical connection. If the man’s drinking led him to develop aspiration pneumonia—accidentally getting food in his lungs—he may have also accidentally gotten nonfood into his gastrointestinal tract at the same time.

Critical connection

The things people most commonly swallow by accident include coins, button batteries, jewelry, and small bones. But these things tend to show up in imaging, and none of the imaging revealed a swallowed object. Things that don’t show up on images, though, are things made of plants.

“This reasoning leads to the search for an organic object that might be ingested while eating and drinking and is seemingly harmless but becomes invasive upon entering the gastrointestinal tract,” Dhaliwal wrote.

“The leading suspect,” he concluded, “is a wooden toothpick—an object commonly found in club sandwiches and used for dental hygiene. Toothpick ingestions often go unnoticed, but once identified, they are considered medical emergencies owing to their propensity to cause visceral perforation and vascular injury.”

If a toothpick had pierced the man’s duodenum, it would completely explain all of the man’s symptoms. He drank too much and lost control of his aerodigestive tract, leading to aspiration that caused pneumonia, and he then swallowed a toothpick, which perforated the duodenum and led to sepsis.

Dhaliwal recommended an endoscopic procedure to look for a toothpick in his intestines. On the man’s third day in the hospital, he had the procedure, and, sure enough, there was a toothpick, piercing through his duodenum and into his right kidney, just as Dhaliwal had deduced.

Doctors promptly removed it and treated the man with antibiotics. He went on to make a full recovery. At a nine-month follow-up, he continued to do well and had maintained abstinence from alcohol.
