Author name: Mike M.


OpenAI introduces Codex, its first full-fledged AI agent for coding

We’ve been expecting it for a while, and now it’s here: OpenAI has introduced an agentic coding tool called Codex in research preview. The tool is meant to allow experienced developers to delegate rote and relatively simple programming tasks to an AI agent that will generate production-ready code and show its work along the way.

Codex is a unique interface (not to be confused with the Codex CLI tool introduced by OpenAI last month) that can be reached from the side bar in the ChatGPT web app. Users enter a prompt and then click either “code” to have it begin producing code, or “ask” to have it answer questions and advise.

Whenever it’s given a task, that task is performed in a distinct container that is preloaded with the user’s codebase and is meant to accurately reflect their development environment.

To make Codex more effective, developers can include an “AGENTS.md” file in the repo with custom instructions, for example to contextualize and explain the code base or to communicate standardizations and style practices for the project—kind of a README.md but for AI agents rather than humans.
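As a rough illustration, an AGENTS.md file along the lines OpenAI describes might look like the sketch below. The file name and its purpose come from the source; the contents are entirely hypothetical.

```markdown
# AGENTS.md (hypothetical example)

## Project context
A TypeScript monorepo; the API server lives in packages/server and shared
code in packages/common.

## Conventions
- Use the project logger, not console.log.
- Follow the existing two-space indent and named-export style.

## Checks
- Run `npm run lint` and `npm test` before considering a task done.
```

Like a README.md, it sits in the repository; unlike one, its audience is the agent rather than a human contributor.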

Codex is built on codex-1, a fine-tuned variation of OpenAI’s o3 reasoning model that was trained using reinforcement learning on a wide range of coding tasks to analyze and generate code, and to iterate through tests along the way.


Meta argues enshittification isn’t real in bid to toss FTC monopoly trial

Further, Meta argued that the FTC did not show evidence that users sharing friends-and-family content were shown more ads. Meta noted that it “does not profit by showing more ads to users who do not click on them,” so it only shows more ads to users who click ads.

Meta also insisted that there’s “nothing but speculation” showing that Instagram or WhatsApp would have been better off or grown into rivals had Meta not acquired them.

The company claimed that without Meta’s resources, Instagram may have died off. Meta noted that Instagram co-founder Kevin Systrom testified that his app was “pretty broken and duct-taped” together, making it “vulnerable to spam” before Meta bought it.

Rather than enshittification, what Meta did to Instagram could be considered “a consumer-welfare bonanza,” Meta argued, while dismissing “smoking gun” emails from Mark Zuckerberg discussing buying Instagram to bury it as “legally irrelevant.”

Dismissing these as “a few dated emails,” Meta argued that “efforts to litigate Mr. Zuckerberg’s state of mind before the acquisition in 2012 are pointless.”

“What matters is what Meta did,” Meta argued, which was pump Instagram with resources that allowed it “to ‘thrive’—adding many new features, attracting hundreds of millions and then billions of users, and monetizing with great success.”

In the case of WhatsApp, Meta argued that nobody thinks WhatsApp had any intention to pivot to social media when the founders testified that their goal was to never add social features, preferring to offer a simple, clean messaging app. And Meta disputed any claim that it feared Google might buy WhatsApp as the basis for creating a Facebook rival, arguing that “the sole Meta witness to (supposedly) learn of Google’s acquisition efforts testified that he did not have that worry.”


Nintendo says more about how free Switch 2 updates will improve Switch games

When Nintendo took the wraps off the Switch 2 in early April, it announced that around a dozen first-party Switch games would be getting free updates that would add some Switch 2-specific benefits to older games running on the new console. We could safely assume that these updates wouldn’t be as extensive as the $10 and $20 paid upgrade packs for games like Breath of the Wild or Kirby and the Forgotten Land, but Nintendo’s page didn’t initially provide any game-specific details.

Earlier this week, Nintendo updated its support page with more game-by-game details about what players of these older games can expect on the new hardware. The baseline improvement for most games is “improved image quality” and optimizations for the Switch 2’s built-in display, but others include support for GameShare multiplayer, support for the new Joy-Cons’ mouse controls, support for HDR TVs, and other tweaks.

The most significant of the announced updates are frame rate improvements for Pokémon Scarlet and Violet, the main-series Pokémon games released in late 2022. Most latter-day Switch games suffered from frame rate dips here and there, as newer games outstripped the capabilities of a low-power tablet processor that had already been a couple of years old when the Switch launched in 2017. But the Pokémon performance problems were so pervasive and widely commented-upon that Nintendo released a rare apology promising to improve the game post-release. Subsequent patches helped somewhat but could never deliver a consistently smooth frame rate; perhaps new hardware will finally deliver what software patches couldn’t.


Apple’s new CarPlay Ultra is ready, but only in Aston Martins for now

It’s a few years later than we were promised, but an advanced new version of Apple CarPlay is finally here. CarPlay is Apple’s way of casting a phone’s video and audio to a car’s infotainment system, but with CarPlay Ultra it gets a big upgrade. Now, in addition to displaying compatible iPhone apps on the car’s center infotainment screen, CarPlay Ultra will also take over the main instrument panel in front of the driver, replacing the OEM-designed dials like the speedometer and tachometer with a number of different Apple designs.

“iPhone users love CarPlay and it has changed the way people interact with their vehicles. With CarPlay Ultra, together with automakers we are reimagining the in-car experience and making it even more unified and consistent,” said Bob Borchers, vice president of worldwide marketing at Apple.

However, to misquote William Gibson, CarPlay Ultra is unevenly distributed. In fact, if you want it today, you’re going to have to head over to the nearest Aston Martin dealership: to begin with, it’s only rolling out in North America with Aston Martin, inside the DBX SUV, as well as the DB12, Vantage, and Vanquish sports cars. It’s standard on all new orders, the automaker says, and will be available as a dealer-performed update for existing Aston Martins with the company’s in-house 10.25-inch infotainment system in the coming weeks.

“The next generation of CarPlay gives drivers a smarter, safer way to use their iPhone in the car, deeply integrating with the vehicle while maintaining the very best of the automaker. We are thrilled to begin rolling out CarPlay Ultra with Aston Martin, with more manufacturers to come,” Borchers said.


xAI’s Grok suddenly can’t stop bringing up “white genocide” in South Africa

Where could Grok have gotten these ideas?

The treatment of white farmers in South Africa has been a hobbyhorse of South African X owner Elon Musk for quite a while. In 2023, he responded to a video purportedly showing crowds chanting “kill the Boer, kill the White Farmer” with a post accusing South African President Cyril Ramaphosa of remaining silent while people “openly [push] for genocide of white people in South Africa.” Musk was posting other responses focusing on the issue as recently as Wednesday.

They are openly pushing for genocide of white people in South Africa. @CyrilRamaphosa, why do you say nothing?

— gorklon rust (@elonmusk) July 31, 2023

President Trump has long shown an interest in this issue as well, saying in 2018 that he was directing then-Secretary of State Mike Pompeo to “closely study the South Africa land and farm seizures and expropriations and the large scale killing of farmers.” More recently, Trump granted “refugee” status to dozens of white Afrikaners, even as his administration ends protections for refugees from other countries.

Former American Ambassador to South Africa and Democratic politician Patrick Gaspard posted in 2018 that the idea of large-scale killings of white South African farmers is a “disproven racial myth.”

In launching the Grok 3 model in February, Musk said it was a “maximally truth-seeking AI, even if that truth is sometimes at odds with what is politically correct.” X’s “About Grok” page says that the model is undergoing constant improvement to “ensure Grok remains politically unbiased and provides balanced answers.”

But the recent turn toward unprompted discussions of alleged South African “genocide” has many questioning what kind of explicit adjustments Grok’s political opinions may be getting from human tinkering behind the curtain. “The algorithms for Musk products have been politically tampered with nearly beyond recognition,” journalist Seth Abramson wrote in one representative skeptical post. “They tweaked a dial on the sentence imitator machine and now everything is about white South Africans,” a user with the handle Guybrush Threepwood glibly theorized.

Representatives from xAI were not immediately available to respond to a request for comment from Ars Technica.


After back-to-back failures, SpaceX tests its fixes on the next Starship

But that didn’t solve the problem. Once again, Starship’s engines cut off too early, and the rocket broke apart before falling to Earth. SpaceX said “an energetic event” in the aft portion of Starship resulted in the loss of several Raptor engines, followed by a loss of attitude control and a loss of communications with the ship.

The similarities between the two failures suggest a likely design issue with the upgraded “Block 2” version of Starship, which debuted in January and flew again in March. Starship Block 2 is slightly taller than the ship SpaceX used on the rocket’s first six flights, with redesigned flaps, improved batteries and avionics, and notably, a new fuel feed line system for the ship’s Raptor vacuum engines.

SpaceX has not released the results of the investigation into the Flight 8 failure, and the FAA hasn’t yet issued a launch license for Flight 9. Likewise, SpaceX hasn’t released any information on the changes it made to Starship for next week’s flight.

What we do know about the Starship vehicle for Flight 9—designated Ship 35—is that it took a few tries to complete a full-duration test-firing. SpaceX completed a single-engine static fire on April 30, simulating the restart of a Raptor engine in space. Then, on May 1, SpaceX aborted a six-engine test-firing before reaching its planned 60-second duration. Videos captured by media observing the test showed a flash in the engine plume, and at least one piece of debris was seen careening out of the flame trench below the ship.

SpaceX ground crews returned Ship 35 to the production site a couple of miles away, perhaps to replace a damaged engine, before rolling Starship back to the test stand over the weekend for Monday’s successful engine firing.

Now, the ship will head back to the Starbase build site, where technicians will make final preparations for Flight 9. These final tasks may include loading mock-up Starlink broadband satellites into the ship’s payload bay and touchups to the rocket’s heat shield.

These are two elements of Starship that SpaceX engineers are eager to demonstrate on Flight 9, beyond just fixing the problems from the last two missions. Those failures prevented Starship from testing its satellite deployer and an upgraded heat shield designed to better withstand scorching temperatures up to 2,600° Fahrenheit (1,430° Celsius) during reentry.


Monthly Roundup #30: May 2025

I hear word a bunch of new frontier AI models are coming soon, so let’s do this now.

  1. Programming Environments Require Magical Incantations.

  2. That’s Not How Any of This Works.

  3. Cheaters Never Stop Cheating.

  4. Variously Effective Altruism.

  5. Ceremony of the Ancients.

  6. Palantir Further Embraces Its Villain Edit.

  7. Government Working.

  8. Jones Act Watch.

  9. Ritual Asking Of The Questions.

  10. Why I Never Rewrite Anything.

  11. All The Half-Right Friends.

  12. Resident Expert.

  13. Do Anything Now.

  14. We Have A New Genuine Certified Pope So Please Treat Them Right.

  15. Which Was the Style at the Time.

  16. Intelligence Test.

  17. Constant Planking.

  18. RSVP.

  19. The Trouble With Twitter.

  20. TikTok Needs a Block.

  21. Put Down the Phone.

  22. Technology Advances.

  23. For Your Entertainment.

  24. Please Rate This Podcast.

  25. I Was Promised Flying Self-Driving Cars.

  26. Gamers Gonna Game Game Game Game Game.

  27. Sports Go Sports.

I don’t see it as gendered, but so much this, although I do have Cursor working fine.

Aella: Never ever trust men when they say setting up an environment is easy

I’ve been burned so bad I have trauma. Any time a guy says “omg u should try x” I start preemptively crying

Pascal Guay (top comment): Just use @cursor_ai agent chat and prompt it to make this or that environment. It’ll launch all the command lines for you; just need to accept everything and you’ll be done in no time.

Aella: THIS WAS SPARKED BY ME BEING UNABLE TO SET UP CURSOR.

Ronny Fernandez (comment #2): have you tried cursor? it’s really easy.

Piq: Who tf would ever say that regardless of gender? It’s literally the hardest part of coding.

My experience is that setting things up involves a series of exacting magical incantations, which are essentially impossible to derive on your own. Sometimes you follow the instructions and everything goes great but if you get things even slightly wrong it becomes hell to figure out how to recover. The same thing goes for many other aspects of programming.

AI helps with this, but not as much as you might think if you get outside the realms where vibe coding just works for you. Then, once you are set up, within the realm of the parts of the UI you understand things are relatively much easier, but there is very much temptation to keep using the features you understand.

People who play standard economic games, like Dictator, Ultimatum, Trust, Public Goods or Prisoner’s Dilemma, frequently don’t understand the rules. For Trust 70% misunderstood, for Dictator 22%, and incentivized comprehension checks didn’t help. Those who misunderstood typically acted more prosocial.

In many ways this makes the games more realistic, not less. People frequently don’t understand the implications of their actions, or the rules of the (literal or figurative) game they are playing. You have to account for this, and often this is what keeps the game in a much better (or sometimes worse) equilibrium, as is the tendency of many players to play ‘irrationally’ or based on vibes. Dictator is a great example. In a real-world one-shot dictator game situation it’s often wise to do a 50-50 split, and saying ‘but the game theory says’ will not change that.
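The gap between what “the game theory says” and what people actually do can be sketched with a toy Ultimatum game payoff function. This is purely illustrative; the pot size, offers, and acceptance thresholds below are made up.

```python
def ultimatum_payoffs(offer, threshold, pot=10):
    """Proposer offers `offer` out of `pot`; responder accepts iff offer >= threshold.
    On acceptance the proposer keeps the remainder; on rejection both get nothing."""
    if offer >= threshold:
        return pot - offer, offer
    return 0, 0

# Subgame-perfect play: offer the minimum, and the responder accepts anything.
print(ultimatum_payoffs(offer=1, threshold=0))  # (9, 1)
# What people actually do: offers near an even split, with lowball offers rejected.
print(ultimatum_payoffs(offer=5, threshold=3))  # (5, 5)
print(ultimatum_payoffs(offer=1, threshold=3))  # (0, 0)
```

The same logic applies to the Dictator game mentioned above: the “rational” play is to keep everything, yet a 50-50 split is often the wise real-world move.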

A recurring theme of life, also see Cheaters Gonna Cheat Cheat Cheat Cheat Cheat.

Jorbs: i have this ludicrous thing where if i see someone cheating at something and lying about it, i start to believe that they aren’t an honest person and that i should be suspicious of other things they say and do.

this is only semi tongue-in-cheek. the number of times in my life someone has directly told me about how they cheat and lie about something, with the expectation that that will not affect how i view them otherwise, is like, much much higher than i would expect it to be.

It happens to me too, as if I don’t know how to update on Bayesian evidence or something. I don’t even need them to be lying about it. The cheating is enough.

There are partial mitigations, where they explain why something is a distinct ‘cheating allowed’ magisteria. But only partial ones. It still counts.

This is definitely a special case of ‘how you do anything is how you do everything,’ and also ‘when people tell you who they are, believe them.’

Spaced Out Matt: This person appears to be an active participant in the “Effective Altruist” movement—and a good reminder that hyper-rational political movements often end up funding lifesaving work on critical health issues

Alexander Berger: Really glad that @open_phil was able to step in on short notice (<24h) to make sure Sarah Fortune's work on TB vaccines can continue.

“Much to the relief of a Harvard University researcher, a California-based philanthropic group is getting into the monkey business.”

Dana Gerber: Open Philanthropy, a grant advisor and funder, told the Globe on Friday that it authorized a $500,000 grant to allow researchers at the University of Pittsburgh School of Medicine to complete an ongoing tuberculosis vaccine study that was abruptly cut off from its NIH funding earlier this week, imperiling the lives of its rhesus macaque test subjects.

Am I the only one who thought of this?

In all seriousness, this is great, exactly what you want to happen – stepping in quickly in suddenly high leverage opportunities.

Nothing negative about this, man is an absolute legend.

Simeon: The media negativity bias is truly deranged.

Managing to frame a $200B pledge to philanthropy negatively is an all-time prowess.

Gates is doing what other charitable foundations and givers fail to do, which is to actually spend the damn money to help people and then say their work is done, within a reasonable time frame. Most foundations instead attempt to remain in existence indefinitely by refusing to spend the money.

John Arnold: This is a great decision by Gates that will maximize his impact. All organizations become less effective over time, particularly foundations that have no outside accountability. New institutions will be better positioned to deal with the problems of future generations.

I would allocate funds to different targets, but this is someone actually trying.

The Secular Solstice (aka Rationalist Solstice) is by far the best such ritual, it isn’t cringe but even if you think it is, if you reject things that work because they’re cringe you’re ngmi.

Guive Assadi: Steven Pinker: I’ve been part of some not so successful attempts to come up with secular humanist substitutes for religion.

Interviewer: What is the worst one you’ve been involved in?

Steven Pinker: Probably the rationalist solstice in Berkeley, which included hymns to the benefits of global supply chains. I mean, I actually completely endorse the lyrics of the song, but there’s something a bit cringe about the performance.

Rob Bensinger: Who wants to gather some more quotes like this and make an incredible video advertisement for the rat solstice

Rob Wiblin: This is very funny.

But people should do the cringe thing if they truly enjoy it. Cringe would ideally remain permanently fashionable.

Nathan: Pinker himself is perhaps answering why secular humanism hasn’t created a replacement for Christianity. It cares too much what it looks like.

The song he’s referring to is Landsailor. It is no Uplift, but it is excellent, now more than ever. Stop complaining about what you think others will think is cringe and start producing harmony and tears. Cringe is that which you believe is cringe. Stop giving power to the wrong paradox spirits.

Indeed, the central problem with this ritual is that it doesn’t go far enough. We don’t only need Bright Side of Life and Here Comes the Sun (yes you should have a few of these and if you wanted to add You Learn or Closer to Fine or something, yes, we have options), but mostly on the margin we need Mel’s Song, and Still Alive, and Little Echo. People keep trying to make it more accessible and less weird.

How are things going over at Palantir? Oh, you know, doubling down on the usual.

I do notice this is a sudden demand not to build software that can be misused to help violate the US Constitution.

You know what other software can and will be used this way?

Most importantly frontier LLMs, but also most everything else. Hmm.

And if nothing else, as always, I appreciate the candor in the reply. Act accordingly. And beware the Streisand Effect.

Drop Site: ICE Signs $30 Million Contract With Palantir to Build ‘ImmigrationOS’

ICE has awarded Palantir Technologies a $30 million contract to develop a new software platform to expand its surveillance and enforcement operations, building on Palantir’s decade-long collaboration with ICE.

Key features and functions:

➤ ImmigrationOS will give ICE “real-time visibility” into visa overstays, self-deportation cases, and individuals flagged for removal, including foreign students targeted for protesting.

➤ ImmigrationOS will integrate data from multiple government database systems, helping ICE track immigration violators and coordinate with agencies like Customs and Border Protection.

➤ The platform is designed to streamline the entire immigration enforcement process—from identification to removal—aiming to reduce time, labor, and resource costs.

Paul Graham: It’s a very exciting time in tech right now. If you’re a first-rate programmer, there are a huge number of other places you can go work rather than at the company building the infrastructure of the police state.

Incidentally, I’ll be happy to delete this if Palantir publicly commits never to build things that help the government violate the US constitution. And in particular never to build things that help the government violate anyone’s (whether citizens or not) First Amendment rights.

Ted Mabrey (start of a very long post): I am looking forward to the next set of hires that decided to apply to Palantir after reading your post. Please don’t delete it Paul. We work here in direct response to this world view and do not seek its blessing.

Paul Graham: As I said, I’ll be happy to delete it if you commit publicly on behalf of Palantir not to build things that help the government violate the US constitution. Will you do that, Ted?

Ted Mabrey: First, I really don’t want you to delete this and am happy for it to be on the record.

Second, the reason I’m not engaging in the question is because it’s so obviously in bad faith akin to the “will you promise to stop beating your wife” court room parlor trick. Let’s make the dynamics crystal clear. Just by engaging on that question it establishes a presumption of some kind of guilt in the present or future for us or the government. If I answer, you establish that we need to justify something we have done, which we do not, or accept as a given that we will be asked to break the law, which we have not.

or y’all…we have made this promise so many ways from Sunday but I’ll write out a few of them here for them.

Paul Graham: When you say “we have made this promise,” what does the phrase “this promise” refer to? Because despite the huge number of words in your answers, I can’t help noticing that the word “constitution” does not occur once.

Ted? What does “this promise” refer to?

I gave Ted Mabrey two days to respond, but I think we now have to conclude that he has run away. After pages of heroic-sounding doublespeak, the well has suddenly run dry. I was open to being proven wrong about Palantir, but unfortunately it’s looking like I was right.

Ted tried to make it seem like the issue is a complex one. Actually it’s 9 words. Will Palantir help the government violate people’s constitutional rights? And I’m so willing to give them the benefit of the doubt that I’d have taken Ted’s word for it if he said no. But he didn’t.

Continuing reminder: It is totally reasonable to skip this section. I am doing my best to avoid commenting on politics, and as usual my lack of comment on other fronts should not be taken to mean I lack strong opinions on them. The politics-related topics I still mention are here because they are relevant to this blog’s established particular interests, in particular AI, abundance including housing, energy and trade, economics or health and medicine.

In case it needs to be explained why trying to forcibly bring down drug prices via requiring Most Favored Nation status on those prices would be an epic disaster that hurts everyone and helps no one if we were so foolish as to implement it for real, Jason Abaluck is here to help. Do note this thread as well, which makes the case that there could be some benefit from preventing other governments from forcing prices down.

Then there’s the other terrible option, which is that it works in lowering the prices, or that Trump finds some other way to impose such price controls, going into what Tyler Cowen calls full supervillain mode. o3 estimates this would reduce global investment in drug innovation by between 33% and 50%. That seems low to me, and is also treating the move as a one-time price shock rather than a change in overall regime.

I would expect that the imposition of price controls here would actually greatly reduce investment in R&D and innovation essentially everywhere, because everyone would worry that their future profits would also be confiscated. Indeed, I would already be less inclined to such investments now, purely based on the stated intention to do this.

Meanwhile, other things are happening, like an EO that requires a public accounting for all regulatory criminal penalties and that they default to requiring mens rea. Who knew? And who knew? This seems good.

The good news is that Pfizer stock didn’t move that much on the announcement, so mostly people do not think the attempt will work.

There is an official government form where you can suggest deregulations. Use it early, use it often, program your AI to find lots of ideas and fill it out for you.

In all seriousness, if I understood the paperwork and other time sink requirements, I would not have created Balsa Research, and if the paperwork requirements mostly went away I would have founded quite a few other businesses along the way.

Katherine Boyle: We don’t talk enough about how many forms you have to fill out when raising kids. Constant forms, releases, checklists, signatures. There’s a reason why litigious societies have fewer children. People just get tired of filling out the forms.

Mike Solana: the company version of this is also insane fwiw. one of the hardest things about running pirate wires has just been keeping track of the paper work — letters every week, from every corner of the country, demanding something new and stupid. insanely time consuming.

people hear me talk shit about bureaucracy and hear something ‘secretly reactionary coded’ or something and it’s just like no, my practical experience with regulation is it prevents probably 90 to 95% of everything amazing in this world that someone might have tried.

treek: this is why lots of people don’t bother with business extreme blackpill ngl

Mike Solana: yes I genuinely believe this. years ago I was gonna build an app called operator that helped you build businesses. I tried to start with food trucks in LA. hundreds of steps, many of them ambiguous. just very clearly a system designed to prevent new businesses from existing.

A good summary of many of the reasons our government doesn’t work.

Tracing Woods: How do we overcome this?

Alec Stapp: This is the best one-paragraph explanation for what’s gone wrong with our institutions:

I could never give that good a paragraph-length explanation, because I would have split that into three paragraphs, but I am on board with the content.

At core, the problem is a ratcheting up of laws and regulatory barriers against doing things, as our legal structures focus on harms and avoiding lawsuits but ignore the ‘invisible graveyard’ of utility lost.

The abundance agenda says actually this is terrible, we should mostly do the opposite. In some places it can win at least small victories, but the ratchet continues, and at this point a lot of our civilization essentially cannot function.

Once again, cutting FDA staff without changing the underlying regulations doesn’t get rid of the stupid regulations, it only makes everything take longer and get worse.

Jared Hopkins (Wall Street Journal): “Biotech companies developing drugs for hard-to-treat diseases and other ailments are being forced to push back clinical trials and drug testing in the wake of mass layoffs at the Food and Drug Administration.”

“When you cut the administrative staff and you still have these product deadlines, you’re creating an unwinnable situation,” he said. The worst thing for companies is not getting guidance when needed while following all the steps for approval, only to “prepare a $100 million application and get denied because of something that could’ve been communicated or resolved before the trial was under way,” Scheineson said.

Paul Graham: I heard this directly from someone who works for a biotech startup. Layoffs at the FDA have slowed the development of new drugs.

Jim Cramer makes the case to get rid of the ‘ridiculous Jones Act.’ Oh well, we tried.

The recent proposals around restricting shipping even further caused so much panic (and Balsa to pivot) for a good reason. If enacted in their original forms, they would have been depression-level catastrophic. Luckily, we pulled back from the brink, and are now only proposing ordinary terrible additional restrictions, not ‘kill the world economy’ level restrictions.

Also note that for all the talk about the dangers of Chinese ships, the regulations were set to apply to all non-American ships, Jones Act style, with some amount of absolute requirement to use American ships.

That’s a completely different rule. If the rule only applies to Chinese ships in particular but not to ships built in Japan, South Korea or Europe, I don’t love it, but by 2025 standards it would be ‘fine.’

Ryan Peterson: Good to see the administration listened to feedback on their proposed rule on Chinese ships. The final rule published today is a lot more reasonable.

John Konrad: Nothing in my 18 years since founding gCaptain has caused more panic than @USTradeRep’s recent proposal to charge companies that own Chinese ships $1 million per port call in the US.

USTR held hearings on the fees and today issued major modifications.

The biggest problem with the original port fees proposed by Trump in late February was that they were ship size and type agnostic.

All Chinese-built ships would be charged $1.5 million per port, and $1 million for any ship owned by a company that operates Chinese-built ships.

This was ok for a very large containership with 17,000 boxes that could absorb the fee. But it would have been devastating for a bulker that only carries low value cement.

The new proposal differentiates between ship size and types of cargo.

Specific fees are $50 per net ton, with the following caveats that go into effect in 6 months.

•Fees on vessel owners & operators of China based on net cargo tonnage, increasing incrementally over the following years;

•Fees on operators of Chinese-built ships based on net tonnage or containers, increasing incrementally over the following years; and

•To incentivize U.S.-built car carrier vessels, fees on foreign-built car carrier vessels based on their capacity.

The second phase actions will not take place for 3 years and are specifically for LNG ships:

•To incentivize U.S.-built liquefied natural gas (LNG) vessels, limited restrictions on transporting LNG via foreign vessels. Restrictions will increase incrementally over 22 years.

… [more details of things we shouldn’t be doing, but probably aren’t catastrophic]

Another major complaint about the original proposal was that ships would be charged the fee each time they entered a US port. This meant a ship discharging at multiple ports in one voyage would suffer millions in fees, and would likely visit fewer small ports.

That cargo would have to be put on trucks, clogging already overburdened highways.

The new proposal charges the fee per voyage or string of U.S. port calls.

The proposal also excludes Jones Act ships and short sea shipping options (small ships and barges that move between ports)

In short, this new proposal is a lot more adaptable and reasonable, but it still puts heavy disincentives on owners that build ships in China.

These are just the highlights. The best way to learn more is to read @MikeSchuler’s article explaining the new proposal.

They also dropped fleet composition penalties, and the rule has at least some phase-in of the fees, along with dropping the per-port-of-call fee. Overall I see the new proposal as terrible but likely not the same kind of crisis-level situation we had before.
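The size-agnostic problem Konrad describes is simple arithmetic: a flat per-port fee is a tiny tax on high-value container cargo and a crushing one on low-value bulk cargo. A minimal sketch, with all cargo values assumed for illustration (they are not from the rule text):

```python
# Toy illustration (all numbers assumed) of why a flat $1.5M per-port-call
# fee hits low-value bulk cargo much harder than high-value container cargo.

FLAT_FEE = 1_500_000  # originally proposed flat fee per port call, USD

# A very large containership: 17,000 boxes, assuming ~$40,000 of goods per box.
container_cargo_value = 17_000 * 40_000
# A cement bulker: assuming 50,000 tons of cement at ~$120/ton.
bulker_cargo_value = 50_000 * 120

# Fee expressed as a share of the value of the cargo it rides on.
container_burden = FLAT_FEE / container_cargo_value
bulker_burden = FLAT_FEE / bulker_cargo_value

print(f"Containership: fee is {container_burden:.2%} of cargo value")
print(f"Cement bulker: fee is {bulker_burden:.2%} of cargo value")
```

Under these assumed numbers the flat fee is a rounding error for the containership and a quarter of the cargo’s entire value for the bulker, which is why scaling fees by net tonnage is the more reasonable design.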

Then there’s the crazy ‘phase 2’ that requires the LNG sector in particular to use a portion of US-built vessels. Which is hard, since only one such vessel exists and is 31 years old with an established route, and building new such ships to the extent it can be done is prohibitively expensive. The good news is this would start in 2028 and phase in over 22 (!) years, which is an actually reasonable time frame for trying to do this. There’s still a good chance this would simply kill America’s ability to export LNG, hurting our economy and worsening the climate. Again, if you want to use non-Chinese-built ships, that is something we can work around.

Ryan Peterson asks how to fix the fact that without the Jones Act he fears America would build zero ships, as opposed to currently building almost zero ships. Scott Lincicome suggests starting here, but it mostly doesn’t address the question. The bottom line is that American shipyards are not competitive, and are up against highly subsidized competition. If we feel the need for American shipyards to build our ships, we are going to have to subsidize that a lot plus impose export discipline.

Or we can choose not to spend enough to actually fix this, or simply accept that comparative advantage is a thing and it’s fine to get our ships from places like Japan, and redirect our shipyards to doing repairs on the newly vastly greater number of passing ships and on building Navy ships to ensure what is left is supported.

Someone clearly is neither culturally rationalist nor culturally Jewish.

Robin Hanson (I don’t agree): “Rituals” are habits and patterns of behavior where we are aware of not fully understanding why we should do them the way we do. A mark of modernity was the aspiration to end ritual by either understanding them or not doing them.

We of course still do lots of behavior patterns that we do not fully understand. Awareness of this fact varies though.

Yes we don’t understand this modern habit fully, making it a ritual.

In My Culture, the profoundest act of worship is to try and understand.

Ritual is not about not understanding, at most it is about not needing to understand at first in order to start, and about preserving something important without having to as robustly preserve understanding of the reasons.

Ritual is about Doing the Thing because it is The Thing You Do. That in no way precludes you understanding why you are doing it.

Indeed, one of the most important Jewish rituals is always asking ‘why do we do this thing, ritual or otherwise?’ This is most explicit in the Seder, where we ask the four questions and we answer them, but in a general sense if you don’t know why you’re doing a Jewish thing and don’t ask why, you are doing it wrong.

This is good. The rationalists follow the same principle. The difference is that rather than carrying over many rituals and traditions for thousands of years, we mostly design them anew for the modern world.

But you can’t do that properly, or choose the right rituals for you, and you certainly can’t wisely choose to stop doing rituals you’re already doing, unless you understand what they are for. Which is a failure mode that is happening a lot, often justified by the invocation of a now-sacred moral principle that must stand above all, even if the all includes key load bearing parts of civilization.

Introducing the all-new Doubling-Back Aversion, the concept that we are reluctant to go backwards, on top of the distinct Sunk Cost Fallacy. I can see it, but I am suspicious, especially of their example of having flown SFO→LAX intending to then go to JFK, and then being more willing to go LAX→DEN→JFK than LAX→SFO→JFK even if the time saved is the same, because you started in SFO. I mean, I can see why it’s a little frustrating, but I suspect the bigger effect here is just that DEN is clearly ‘on the way’ to JFK, and SFO isn’t, and there’s a clear bias against ‘going backwards.’ They do try to cover this.

But I still don’t see a strong case here for this being a distinct new bias, as opposed to being the sum of existing known issues.

The case by Dr. Todd Kashdan for seeking out ‘48% opposites’ as friends and romantic partners. You want people who think different, he says, so sparks can fly and new ideas can form and fun can be had, not some boring static bubble of sameness. But then he also says to seek ‘slightly different’ people who will make you sweat, which seems very different to me. As in, you want 10%-20% opposites, maybe 30%, but not 48%, probably on the higher end for friends and lower end for romantic partners, and if you’re a man dating women or vice versa that 10%-20% is almost certainly covered regardless.

There are, in theory, exceptions. I do remember once back in the day finding a 99% match on OKCupid (those were the days!), a woman who said she only rarely and slowly ever responded to anyone but whose profile was like a bizarro world female version of me. In my opening email I told her as much, asking her to respond the way she’d respond to herself. I’ll always wonder what that would have been like if we’d ever met in person – would it have been ‘too good’ a match? She did eventually write back months later as per a notification I got, but by then I was with my wife, so I didn’t reply.

Patrick McKenzie is one of many to confirm that there are lots of things about the world that are not so hard to find out or become an expert in, but where no one has chosen to do the relevant work. If there is a particular policy area or other topic where you put your focus, it’s often very practical to become the World’s Leading Expert and even be the person who gets consulted, and for there to be big wins available to be found, simply because no one else is seriously trying or looking. Getting people’s attention? That part is harder.

Kelsey Piper: This is related to one of the most important realizations of my adult life, which is that there is just so much in the modern world that no one is doing; reasonably often if you can’t find the answer to a question it just hasn’t been answered.

If you are smart, competent, a fast learner and willing to really throw yourself into something, you can answer a question to which our civilization does not have an answer with weeks to months of work. You can become an expert in months to years.

There is not an efficient market in ideas; it’s not even close. There are tons and tons of important lines of thought and work that no one is exploring, places where it’d be valuable to have an expert and there simply isn’t one.

Patrick McKenzie: Also one of the most important and terrifying lessons of my adult life.

Mine too.

Michael Nielsen: This is both true *and* can be hard to recognize. A friend once observed that an organization had been important for his formative growth, but it was important to move away, because it was filled with people who didn’t realize how derivative their work was; they thought they were pushing frontiers, but weren’t.

One benefit of a good PhD supervisor is that they’ll teach you a lot about how to figure out when you’re on that frontier

And yes, by default you get to improve some small corner of the world, but that’s already pretty good, and occasionally you strike gold.

Zy (QTing Kelsey Piper): There’s so much diminishing returns to this stuff it’s not even funny. 400 years ago you could do this and discover Neptune or cellular life

Today you can do it and figure out a condition wherein SSRIs cause 3% less weight gain or an antenna with 5% better fidelity or something

Marko Jukic: Guy 400 years ago: “There’s so much diminishing returns to this stuff it’s not even funny. 400 years ago you could do this and discover Occam’s Razor or the Golden Rule. Today the best you can do is prove that actually 4% more angels can dance on the head of a pin.”

Autumn: 7 years ago a fairly small team in san francisco figured out how to make machines think.

Alternatively, even if there are diminishing returns, so what? Even the diminished returns, even excluding the long tail of big successes, are still very, very good.

Apologies with longer words are perceived as more genuine. I think this perception is correct. The choice to bother using longer words is a costly signal, which is the point of apologizing in the first place. Even if you’re ‘faking it’ it still kind of counts.

Endorsed:

Cate Hall: Amazing how big the quality of life improvements are downstream of “let me take this off future me’s plate.”

It’s not just shifting work up in time — it’s saving you all the mental friction b/w now & when you do it. Total psychic cost is the integral of cognitive load over time.

Sam Martin: conversely, “I’ll deal with this later” is like swiping a high-interest cognitive load credit card (said the man whose CLCC is constantly maxed out)

Thus there is a huge distinction between ‘things you can deal with later without having to otherwise think about it’ and other things. If you can organize things such that you’ll be able to deal with something later in a way that lets you not otherwise think about it, that’s much better. Whereas if that’s not possible, my lord, do it now.

If you can reasonably do it now, do it now anyway. Time saved in the future is typically worth more than time now, because this gives you slack. When you need time, sometimes you suddenly really desperately need time.
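Cate Hall’s framing of total psychic cost as the integral of cognitive load over time can be made concrete with a toy model (all numbers here are assumed, purely for illustration):

```python
# Toy model (all numbers assumed) of "total psychic cost is the integral of
# cognitive load over time": a deferred task costs the same effort to execute,
# plus a carrying load for every day it sits on your plate.

def psychic_cost(execution_cost, carrying_load_per_day, days_deferred):
    # Discrete integral of the carrying load over the deferral window,
    # plus the one-time cost of actually doing the task.
    return execution_cost + carrying_load_per_day * days_deferred

do_now = psychic_cost(execution_cost=5, carrying_load_per_day=1, days_deferred=0)
do_later = psychic_cost(execution_cost=5, carrying_load_per_day=1, days_deferred=10)

print(f"Do it now: {do_now} units; defer 10 days: {do_later} units")
```

The gap between the two is pure ‘interest’ on the cognitive load credit card: the task itself never got cheaper, you just paid rent on it every day it sat there.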

How to make $100k betting on the next Pope, from someone who did so.

I did not wager because I try not to do that anymore and because it’s specifically a mortal sin to bet on a Papal election and I actually respect the hell out of that, but I also thought that the frontrunners almost had to be rich given the history of Conclaves and how diverse the Cardinals are, and the odds seemed to be favoring Italians too much. I wouldn’t have picked out Prevost without doing the research.

I also endorse not doubling down after the white smoke, if anything the odds seemed more reasonable at that point rather than less. Peter Wildeford similarly made money betting purely against Parolin, the clear low-effort move.

The past sucked in so many ways. The quality of news and info was one of them.

Roon: If you read old analytical news articles, im talking even just 30 years old, most don’t even stand to muster against the best thread you read on twitter on any given day. The actual longform analysis pieces in most newspapers are also much better.

we’ve done a great amount of gain of function research on Content.

Roon then tries to walk it back a bit, but I disagree with the walking back. The attention to detail is better now, too. Or rather, we used to pay more attention to detail, but we still get the details much more right today, because it’s just way way easier to check details. It used to be they’d get them wrong and no one would know.

Here’s a much bigger and more well known way the past sucked.

Hunter Ash: People who are desperate to retvrn to the past can’t understand how nightmarish the past was. When you tell them, they don’t believe it.

Tyler Cowen asks how very smart people meet each other. Dare I say ‘at Lighthaven’? My actual answer is that you meet very smart people by going to and participating in the things and spaces smart people are drawn to or that select for smart people. That can include a job, and frequently does.

Also, you meet them by noticing particular very smart people and then reaching out to them, they’re mostly happy to hear from you if you bring interestingness.

Will Bachman: I’m the host of a podcast, The 92 Report, which has the goal of interviewing every member of the Harvard-Radcliffe Class of 1992. Published 130 episodes so far. (~1,500 left to go)

Based on this sample, most friendships start through some extracurricular activity, which provides the opportunity to work together over a sustained period, longer than one course. Also people care about it more than any particular class.

At the Harvard Crimson for example on a typical day in 1990 you’d find in the building Susan B Glasser (New Yorker), Josh Gerstein (Politico), Michael Grunwald (Time, Politico), Julian E Barnes, Ira Stoll, Sewell Chan, Jonathan Cohn, and a dozen other individuals whose bylines are now well known.

Many current non-profit leaders met through their work at Philips Brooks House.

Many top TV writers met at the Harvard Lampoon.

Many Hollywood names met through theatre productions.

Strong lifelong friendships formed in singing groups.

Asking Harvard graduates how they met people is quite the biased sample. ‘Go to Harvard’ is indeed one of the best ways to meet smart or destined-to-be-successful people. That’s the best reason to go to Harvard. Of course they met each other in Harvard-related activities a lot. But this is not an actionable plan, although you can and should attempt to do lesser versions of this. Go where the smart people are, do the things they are doing, and also straight up introduce yourself.

Here’s a cool idea, the key is to ignore the statement when it’s wrong:

Bryan Johnson: when this happens, my team and I now say “plank” and the person speaking immediately stops. Everyone is now much happier.

Gretchen Lynn: This is funny, because every time a person with ADHD interrupts/responds too quickly to me because they think they already understood my sentence, they end up being wrong about what I was saying or missing important context. I see this meme all the time like it’s a superpower, but…be aware you may be driving the people in your life insane 😂

Gretchen is obviously mistaken. Whether or not one has ADHD, very often it is very clear where a sentence (or paragraph, or entire speech) is going well before it is finished. Similarly, there are often scenes in movies or shows where you can safely skip large chunks of them, confident you missed nothing.

That can be a Skill Issue, but often it is not. It is often important that the full version of a statement, scene or piece of writing exists – some people might need it, you’re not putting that work on the other person, and also it’s saying you have thought this through and have brought the necessary receipts. But that doesn’t mean, in this case, you actually have to bother with it.

Then there are situations where there is an ‘obvious’ version of the statement, but that’s not actually what someone was going for.

So when you say ‘plank’ here, what you’re saying is ‘there is an obvious-to-me version of where you are going with this, I get it, if that’s what you are saying you can stop, and if it’s more than that you can skip ahead.’

But, if that’s wrong, or you’re unsure it’s right? Carry on, or give me the diff, or give me the quick version. And this in turn conveys the information that you think the ‘plank’ call was premature.

Markets in everything!

Allie: I’m not usually the type to get jealous over other people’s weddings

But I saw a girl on reels say she incentivized people to RSVP by making the order in which people RSVP determine the order in which they get up to get dinner, and I am being driven to insanity by how genius that is.

No walking it back, this is The Way.

Why do posts with links get limited on Twitter?

Predatory myopic optimization for ‘user-seconds on site,’ Musk explains.

Elon Musk: To be clear, there is no explicit rule limiting the reach of links in posts. The algorithm tries (not always successfully) to maximize user-seconds on X, so a link that causes people to cut short their time here will naturally get less exposure.

xlr8harder: i’m old enough to remember when he used to use the word “unregretted” before “user-seconds”

yes, people, i know unregretted is subjective and hard to measure. the point is it was aspirational and provided some countervailing force against the inexorable tug toward pure engagement optimization.

“whelp. turns out it was hard!” is not a good reason to abandon it.

caden: An MLE who used to work on the X algo told me Elon was far more explicit in maximizing user-seconds than previous management. The much-maligned hall monitors pre-Elon cared more about the “unregretted” caveat.

Danielle Fong: deleting “unregretted” in “unregretted user seconds” rhymes with deleting “don’t” in “don’t be evil.”

I am also old enough to remember that. Oh well. It’s hard to measure ‘unregretted.’

Even unregretted, of course, would still not capture what is at stake here. You want to provide value to the user, and this is what gets them to want to use your service, to come back, and builds up a vibrant internet with Twitter at its center. Deprioritizing links is a hostile act, quite similar in its destructiveness to a massive tariff, destroying the ability to trade.

It is sad that major corporations consistently prove unable to understand this.

Elon Musk has also systematically crippled the reach and views of Twitter accounts that piss him off, and by ‘piss him off’ we usually mean disagree with him but also he has his absurd beef with Substack.

Stuart Thompson (NYT): The New York Times found three users on X who feuded with Mr. Musk in December only to see their reach on the social platform practically vanish overnight.

Mr. Musk has offered several clues to what happened, writing on X amid the feud that if powerful accounts blocked or muted others, their reach would be sharply limited. (Mr. Musk is the most popular user on X with more than 219 million followers, so his actions to block or mute users could hold significant sway.)

Timothy Lee: This is pretty bad.

At other times It Gets Better. This is Laura Loomer, who explicitly lost her monetization over this and then got it back at the end of the feud.

There’s also a third user listed, Owen Shroyer, who did not recover.

One could say that all three of these are far-right influencers, and this seems unlikely to be a coincidence. It’s still not okay to put one’s thumb on the scale like this, even if it doesn’t carry over to others, but it does change the context and practical implications a lot. He who lives by also dies by, and all that.

Tracing Woods: see also: Taibbi, Matt.

As a general rule, even though technically there Ain’t No Rule, it is not okay and a breach of decorum to ‘bring the receipts’ from text conversations even without an explicit privacy agreement. And most importantly, remember that if you do it to them then it’s open season for them to also do it to you.

Matt Taibbi remains very clearly shadowbanned up through April 2025. If you go to his Twitter page and look at the views on each post, they are flattened out the way Substack view counts are, and are largely uncorrelated with other engagement measures, which indicates they are coming from the Following tab and not from the algorithmic feed. No social media algorithm works this way.

A potential counterargument is that Musk feuds rather often, there are a lot of other claims of similar impacts, and NYT only found these three definitive examples. But three times by default should be considered enemy action, and the examples are rather stark.

The question is, in what other ways is Musk messing with the algorithm?

Here’s a post that Elon Musk retweeted, that seems to have gotten far more views than the algorithm could plausibly have given it on its own, even with that retweet.

Geoffrey Hinton: I like OpenAI’s mission of ‘ensure that artificial general intelligence benefits all of humanity’, and I’d like to stop them from completely gutting it. I’ve signed on to a new letter to @AGRobBonta & @DE_DOJ asking them to halt the restructuring.

AGI is the most important and potentially dangerous technology of our time. OpenAI was right that this technology merits strong structures and incentives to ensure it is developed safely, and is wrong now in attempting to change these structures and incentives. We’re urging the AGs to protect the public and stop this.


Hasan Can: I was serious when I said Elon Musk will keep messing with OpenAI as long as he holds power in USA. Geoffrey’s [first] tweet hit a full 31 million views. Getting that level of view with just 6k likes isn’t typically possible; I think Elon himself pushed that post.

Putting together everything that has happened, what should we now make of Elon Musk’s decision to fire 80% of Twitter employees without replacement?

Here is a debate.

Shin Megami Boson: the notion of a “fake email job” is structurally the same as a belief in communism. the communist looks at a system far more complex than he can understand and decides the parts he doesn’t understand must have no real purpose & are instead due to human moral failing of some kind.

Marko Jukic: Would you have told that to Elon Musk before he fired 80% of the people working at Twitter with no negative effect?

Do you think Twitter is the only institution in our society where 80% of people could be fired? What do you think those people are doing besides shuffling emails?

Alexander Doria: Yes, this. He mostly removed salespeople and marketing teams that were the core commercial activity of old Twitter.

Marko Jukic (who somehow doesn’t follow Gwern): You are completely delusional if you think this and so is Gwern, though I can’t see his reply.

Gwern: Yes, and I would have been right. Twitter revenue and users crashed into the floor, and after years of his benevolent guidance, they weren’t even breakeven before the debt interest – and he just bailed out Twitter using xAI, eating a loss of something like $30b to hide it all.

Alexander Doria: If I remember correctly, main ad campaigns stopped primarily as their usual commercial contact was not there anymore. And Musk strategy on this front was totally unclear and unable to reassure.

Marko Jukic: Right, please ignore the goons celebrating their victory and waving around a list of scalps and future targets. Pay no mind to that. This was all just a simple brain fart, where Elon Musk just *forgot* how to accept payments for ads, and advertisers forgot how to make them! Duh!

Quite an explanation. “My single best example of how 80% of employees can be cut is Twitter.” “Twitter was one of the biggest disasters ever.” “Ah yes, well, of course, all those goons and scalps. Naturally it failed. What, are you dense? Anyway, 80% of employees are useless.”

There’s no question Twitter has, on a technical and functional level, held up far better than median expectations, although it sure seems like having more productive employees to work on things like the bot problems and Twitter search being a disaster would have been a great idea. And a lot of what Musk did, for good and bad, was because he said so not because of a lack of personnel – if you put me in charge of Twitter I would be able to improve it a lot even if I wasn’t allowed to add headcount.

There’s also no question that Twitter’s revenue collapsed, and that xAI ultimately more or less bailed it out. One can argue that the advertisers left for reasons other than the failures of the marketing department (as in, failing to have a marketing department), and certainly there were other factors, but I find it rather suspicious to think that gutting the marketing department without replacement didn’t hurt the marketing efforts quite a bit. I mean, if your boss is out there alienating all the advertisers, whose job do you think it is to convince them to stop that and come back? Yes, it’s possible the old employees were terrible, but then hire new ones.

In some sense wow, in another sense there are no surprises here and all these TikTok documents are really saying is they have a highly addictive product via the TikTok algorithm, and it comes with all the downsides of social media platforms, and they’re not that excited to do much about those downsides.

On the other hand, these quotes are doozies. Some people were very much not following the ‘don’t write down what you don’t want printed in the New York Times’ rule.

Neil O’Brien: WOW: @JonHaidt got info from inside TikTok [via Attorneys General] admitting how they target kids: “The product in itself has baked into it compulsive use… younger users… are particularly sensitive to reinforcement in the form of social reward and have minimal ability to self-regulate effectively”

Jon Haidt and Zack Rausch: We organize the evidence into five clusters of harms:

  1. Addictive, compulsive, and problematic use

  2. Depression, anxiety, body dysmorphia, self-harm, and suicide

  3. Porn, violence, and drugs

  4. Sextortion, CSAM, and sexual exploitation

  5. TikTok knows about underage use and takes little action

As one internal report put it:

“Compulsive usage correlates with a slew of negative mental health effects like loss of analytical skills, memory formation, contextual thinking, conversational depth, empathy, and increased anxiety,” in addition to “interfer[ing] with essential personal responsibilities like sufficient sleep, work/school responsibilities, and connecting with loved ones.”

Although these harms are known, the company often chooses not to act. For example, one TikTok employee explained,

“[w]hen we make changes, we make sure core metrics aren’t affected.” This is because “[l]eaders don’t buy into problems” with unhealthy and compulsive usage, and work to address it is “not a priority for any other team.”2

“The reason kids watch TikTok is because the algo[rithm] is really good. . . . But I think we need to be cognizant of what it might mean for other opportunities. And when I say other opportunities, I literally mean sleep, and eating, and moving around the room, and looking at somebody in the eyes.”

“Tiktok is particularly popular with younger users who are particularly sensitive to reinforcement in the form of social reward and have minimal ability to self-regulate effectively.”

As Defendants have explained, TikTok’s success “can largely be attributed to strong . . . personalization and automation, which limits user agency” and a “product experience utiliz[ing] many coercive design tactics,” including “numerous features”—like “[i]nfinite scroll, auto-play, constant notifications,” and “the ‘slot machine’ effect”—that “can be considered manipulative.”

Again, nothing there that we didn’t already know.

Similarly, for harm #2, this sounds exactly like various experiments done with YouTube, and also I don’t really know what you were expecting:

In one experiment, Defendants’ employees created test accounts and observed their descent into negative filter bubbles. One employee wrote, “After following several ‘painhub’ and ‘sadnotes’ accounts, it took me 20 mins to drop into ‘negative’ filter bubble. The intensive density of negative content makes me lower down mood and increase my sadness feelings though I am in a high spirit in my recent life.” Another employee observed, “there are a lot of videos mentioning suicide,” including one asking, “If you could kill yourself without hurting anybody would you?”

The evidence on harms #3 and #4 seemed unremarkable and less bad than I expected.

And it is such a government thing to quote things like this, for #5:

TikTok knows this is particularly true for children, admitting internally: (1) “Minors are more curious and prone to ignore warnings” and (2) “Without meaningful age verification methods, minors would typically just lie about their age.”

To start, TikTok has no real age verification system for users. Until 2019, Defendants did not even ask TikTok users for their age when they registered for accounts. When asked why they did not do so, despite the obvious fact that “a lot of the users, especially top users, are under 13,” founder Zhu explained that “those kids will anyway say they are over 13.”

Over the years, other of Defendants’ employees have voiced their frustration that “we don’t want to [make changes] to the For You feed because it’s going to decrease engagement,” even if “it could actually help people with screen time management.”

The post ends with a reminder of the study where students on average would ask $59 for TikTok and $47 for Instagram in exchange for deleting their accounts, but less than zero if everyone did it at once.

Once again, let’s run this experiment. Offer $100 to every student at some college or high school, in exchange for deleting their accounts. See what happens.

Tyler Cowen links to another study on suspending social media use, which was done in 2020 and came out in April 2025 – seriously, academia, that’s an eternity, we gotta do something about this, just tweet the results out or something. In any case, what they found was that users who were convinced to deactivate Facebook for six weeks before the election reported a 0.06 standard deviation improvement in happiness, depression and anxiety, and it was 0.041 SDs for Instagram.

Obviously that is a small enough effect to mostly ignore. But once again, we are not comparing to the ‘control condition’ of no social media. We are comparing to the control condition of everyone else being on social media without you, and you previously having invested in social media and now abandoning it, while expecting to come back and being worried about what you aren’t seeing, and also being free to transfer to other platforms.

Again, note the above study – you’d have to pay people to get off TikTok and Instagram, but if you could get everyone else off as well, they’d pay you.

Tyler Cowen: What is wrong with the simple model that Facebook and Instagram allow you to achieve some very practical objectives, such as staying in touch with friends or expressing your opinions, at the cost of only a very modest annoyance (which to be clear existed in earlier modes of communication as well)?

What is wrong with this model is that using Facebook and Instagram also imposes costs on others for not using them, which is leading to a bad equilibrium for many. And also that these are predatory systems engineered to addict users, so contra Zuckerberg’s arguments to Thompson and Patel in recent interviews we should not assume that the users ‘know best’ and are using internet services only when they are better off for it.

Tom Meadowcroft: I regard social media as similar to alcohol.

1. It is not something that we’ve evolved to deal with in quantity.

2. It is mildly harmful for most people.

3. It is deeply harmful for a significant minority for whom it is addictive.

4. Many people enjoy it because it seems to ease social engagement.

5. It triggers receptors in our brains that make us desire it.

6. There are better ways to get those pleasure spikes, but they are harder and rarer IRL.

7. If we were all better people, we wouldn’t need or desire either, but we are who we are.

I use alcohol regularly and social media rarely.

I think social media has a stronger case than alcohol. It does provide real and important benefits when used wisely in a way that you can’t easily substitute for otherwise, whereas I’m not convinced alcohol does this. However, our current versions of social media are not great for most people.

So if the sign of impact for temporary deactivation is positive at all, that’s a sign that things are rather not good, although magnitude remains hard to measure. I would agree that (unlike in the case of likely future highly capable AIs) we do not ‘see a compelling case for apocalyptic interpretations’ as Tyler puts it, but that shouldn’t be the bar for realizing you have a problem and doing something about it.

Court rules against Apple, says it willfully defied the court’s previous injunction and has to stop charging commissions on purchases outside its software marketplace and open up the App Store to third-party payment options.

Stripe charges 2.9% versus Apple’s 15%-30%. Apple will doubtless keep fighting every way it can, but the end of the line is now likely to come at some point.

Market reaction was remarkably muted, on the order of a few percent, to what is a central threat to Apple’s entire business model, unless you think this was already mostly priced in or is likely to be reversed on appeal.

Recent court documents seem to confirm the claim that Google actively wanted their search results to be worse so they could serve more ads? This is so obviously insane a thing to do. Yes, it might benefit you in the short term if you can get away with it, but come on.

A theory about A Minecraft Movie being secretly much more interesting than it looks.

A funny thing that happens these days is running into holiday episodes from an old TV show at random, rather than having all the Halloween, Thanksgiving or Christmas episodes show up at the right times of year. There’s no good fix for this given continuity issues, but maybe AI could fix that soon?

Gallabytes’s stroll down memory lane there reminds me that the actual biggest changes in TV programs are that you previously had to go with whatever happened to be on or that you’d taped – which was a huge pain and disaster and people structured their day around it, this was a huge deal – and that even ignoring that the old shows really did suck. Man, with notably rare exceptions they sucked, on every level, until at least the late 90s. You can defend old movies but you cannot in good faith defend most older television.

Fun fact:

Samuel Hammond: Over half the NYT’s subscriber time on site is now just for the games.

That’s about half a billion in subscriber revenue driven by a crossword and a handful of basic puzzle games.

It is a stunning fact, but I don’t think that’s quite what this means. Time spent on site is very different from value extracted. The ability to read news when it matters is a ton more valuable per minute than the games, even if you spend more time on the games. It’s not obvious what is driving subscriptions.

Further praise for Thunderbolts*, which I rated 4.5/5 stars and for now is my top movie of 2025 (although that probably won’t hold, in 2024 it would have been ~4th), from the perspective of someone treating it purely as a Marvel movie in a fallen era.

Zac Hill: Okay Thunderbolts is in the Paddington 2 tier of “movies that have no business being nearly as good as they somehow are”. Like this feels like the first definitive take on whatever weird era we find ourselves inhabiting now. Also the first great Marvel film in years.

What more is there to want: overt grappling with oblivion-inducing despair stemming from how to construct meaning in a world devoid of load-bearing institutions? Violent Night references? Selina Meyer? Florence Pugh having tons of fun???

Okay I can’t/wont shut up about this movie (Thunderbolts). For every reason New Cap America sucked and was both bad and forgettable, this movie was great – in a way that precisely mirrors the turning of the previous era into this strange new world in which we’re swimming.

Even the credits sequence is just like the graveyarding of every institution whose legitimacy has been hemorrhaged, executed with a subtlety and craftsmanship that is invigorating. But WITHOUT accepting, and giving into, cynicism!

Indeed, it is hard for words to describe the amount of joy I got from the credits sequence, that announced very clearly We Hear You, We Get It, and We Are So Back.

Gwern offers a guide to finding good podcast content, as opposed to the podcasts that will get the most clicks. You either need to find Alpha from undiscovered voices, or Beta from getting a known voice ‘out of their book’ and producing new content rather than repeating talking points and canned statements. As a host you want to seek out guests where you can extract either Alpha or Beta, and as a listener or reader look for podcasts where you can do the same.

Alpha is relative to your previous discoveries. As NBC used to say, if you haven’t seen it, it’s new to you. If you haven’t ever heard (Gwern’s example) Mark Zuckerberg talk, his Lex Fridman interview will have Alpha to you despite Lex’s ‘sit back, lob softballs and let them talk’ strategy which lacks Beta.

Another way of putting that is, you only need to hear about any given person’s book (whether or not it involves a literal book, which it often does) once every cycle of talking points. You can get that one time from basically any podcast, and it’s fine. But you then wouldn’t want to do that again.

Gwern lists Mark Zuckerberg and Satya Nadella as tough nuts to crack, and indeed the interviews Dwarkesh did with them showed this, with Nadella being especially ‘well-coached,’ and someone too PR-savvy like MrBeast as a bad guest who won’t let you do anything interesting and might torpedo the whole thing.

My pick for toughest nut to crack is Tyler Cowen. No one has a larger, more expansive book, and most people interviewing him never seem to get him to start thinking. Plus, because he’s Tyler Cowen, he’s the one person Tyler Cowen won’t do the research for.

There are of course also other reasons to listen to or host podcasts.

Surge pricing comes to Waymo. You can no longer raise supply, but you can still ration supply and limit demand, so it is still the correct move. But how will people react? There is a lot of pearl clutching about how this hurts the poor or ‘creates losers,’ but may I suggest that if you can’t take the new prices you can call an Uber or Lyft without them being integrated into the same app? Or you can wait.

Waymo hits 250k rides per week in April 2025, two months after 200k.

Waymo is partnering with Toyota for a new autonomous vehicle platform. Right now, Waymo faces multiple bottlenecks, but one key one is that it is tough to build and equip enough vehicles. Solving that problem would go a long way.

Waymo’s injury rate reductions imply that fully self-driving cars would reduce road deaths by 34,800 annually. It’s probably more than that, because most of the remaining crashes by Waymos are caused by human drivers.

Aurora begins commercial driverless trucking in Texas between Dallas and Houston.

Europa Universalis 5 is coming. If you thought EU4 was complex, this is going to be a lot more complex. It looks like it will be fascinating and a great experience for those who have that kind of time, but this is unlikely to include me. It is so complex they let you automate large portions of the game, with the problem that, if you automate it, how will you then learn it?

They’re remaking the Legend of Heroes games, a classic Japanese RPG series a la Final Fantasy and Dragon Quest, starting with Trails in the Sky in September. Oh to have this kind of time.

They’re considering remaking Chrono Trigger. I agree with the post here that a remake is unnecessary. The game works great as it is.

Proposal for a grand collaboration to prove you cannot beat Super Mario Bros. in less than 17685 frames; the best human time remains 17703. This would be an example of proving things about real world systems, and we’ve already put a ton of effort into optimizing this. Peter is about 50% confident that there is indeed no way to do better than 17685.
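As a quick sanity check on those frame counts, here is a minimal sketch converting them to wall-clock time. The frame counts come from the proposal above; the one assumption is the standard NES NTSC frame rate of roughly 60.0988 fps.

```python
# Sketch: convert Super Mario Bros. frame counts to wall-clock time.
# Assumes the NES NTSC frame rate (~60.0988 fps); frame counts are from the post.
NES_FPS = 60.0988


def frames_to_time(frames: int) -> str:
    """Render a frame count as M:SS.mmm at the NES NTSC frame rate."""
    seconds = frames / NES_FPS
    minutes = int(seconds // 60)
    return f"{minutes}:{seconds - 60 * minutes:06.3f}"


print(frames_to_time(17703))  # → 4:54.565 (best human time)
print(frames_to_time(17685))  # → 4:54.265 (conjectured optimum)
```

So the proof would establish that no input sequence, human or tool-assisted, can finish faster than about 4 minutes 54.265 seconds.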

If you know, you know:

Emmett Shear: This is pure genius and would be incredible for teaching about a certain kind of danger. Please please someone do this.

RedJ: i think sama is working on it?

Emmett Shear: LOL wrong game I don’t want them in the game of life.

College sports are allocating talent efficiently. You didn’t come here to play school.

And That’s Terrible?

John Arnold: College sports broken:

“Among the top eight quarterbacks in the Class of 2023, Texas’ Arch Manning is now the only one who hasn’t transferred from the school he signed with out of high school.” –@TheAthletic

I do think it is terrible. Every trade and every transfer makes sports more confusing and less enjoyable. The stories are worse. It is harder to root for players and teams. It makes it harder to work as a team or to invest in the future of players, both as athletes and as students. And it entrenches the top teams as permanently the top teams. In the long run, I find it deeply corrosive.

I find it confusing that there is this much transferring going on. There are large costs to transferring for the player. You have an established campus life and friends. You have connections to the team and the coach and have established goodwill. There are increasing returns to staying in one place. So you would think that there would be strong incentives to stay put and work out a deal that benefits everyone.

The flip side is that there are a lot of teams out there, so the one you sign with is unlikely to be the best fit going forward, especially if you outperform expectations, which changes your value and also your priorities and needs.

I love college football, but they absolutely need to get the transferring under control. It’s gone way too far. My guess is the best way forward is to allow true professional contracts with teams that replace the current NIL system, which would allow for win-win deals that involve commitment or at least backloading compensation, and various other incentives to limit transfers.

I am not saying the NBA fixes the draft lottery, but… no wait I am saying the NBA fixes the draft lottery, given Dallas getting the first pick this year combined with previous incidents. I don’t know this for certain, but at this point, come on.

As Seth Burn puts it, there are ways to get provably random outcomes. The NBA keeps not using those methods. This keeps resulting in outcomes that are unlikely and suspiciously look like fixes. Three times is enemy action. This is more than three.

On the other hand, I do like that tanking for the first pick is being actively punished, even if it’s being done via blatant cheating. At some point everyone knows the league is choosing the outcome, so it isn’t cheating, and I’m kind of fine with ‘if we think you tanked without our permission you don’t get the first pick.’


Monthly Roundup #30: May 2025


A Live Look at the Senate AI Hearing

So is the plan then to have AI developers not vet their systems before rolling them out? Is OpenAI not planning to vet their systems before rolling them out? This ‘sensible regulation that does not slow us down’ seems to translate to no regulation at all, as per usual.

And that was indeed the theme of the Senate hearing, both from the Senators and from the witnesses. The Senate is also setting up to attempt full preemption of all state and local AI-related laws, without any attempt to replace them, or realistically without any attempt to later pass any laws whatsoever to replace the ones that are barred.

Most of you should probably skip the main section that paraphrases the Senate hearing. I do think it is enlightening, and at times highly amusing at least to me, but it is long and one must prioritize; I only managed to cut it down by ~80%. You can also pick and choose from the pull quotes at the end.

I will now give the shortened version of this hearing, gently paraphrased, in which we get bipartisan and corporate voices saying among other things rather alarmingly:

Needless to say, I do not agree with most of that.

It is rather grim out there. Almost everyone seems determined to not only go down the Missile Gap road into a pure Race, but also to use that as a reason to dismiss any other considerations out of hand, and indeed not to even acknowledge that there are any other worries out there to dismiss, beyond ‘the effect on jobs.’ This includes both the Senators and also Altman and company.

The discussion was almost entirely whether we should move to lock out all AI regulations and whether we should impose any standards of any kind on AI at all, except narrowly on deepfakes and such. There was no talk about even trying to make the government aware of what was happening. SB 1047 and other attempts at sensible rules were routinely completely mischaracterized.

There was no sign that anyone was treating this as anything other than a (very important) Normal Technology and Mere Tool.

If you think most of the Congress has any interest in not dying? Think again.

That could of course rapidly change once more. I expect it to. But not today.

The most glaring pattern was the final form of Altman’s pivot to jingoism and opposing all meaningful regulation, while acting as if AI poses no major downside risks, not even technological unemployment let alone catastrophic or existential risks or loss of human control over the future.

Peter then offers a summary of testimony, including how much Cruz is driving the conversation towards ‘lifting any finger anywhere dooms us to become the EU and to lose to China’ style rhetoric.

You also get to very quickly see which Senators are serious about policy and understanding the world, versus those here to create partisan soundbites or push talking points and hear themselves speak, versus those who want to talk about or seek special pork for their state. Which ones are curious and inquisitive versus which ones are hostile and mean. Which ones think they are clever and funny when they aren’t.

It is amazing how consistently and quickly the bad ones show themselves. Every time.

And to be clear, that has very little to do with who is in which party.

Indeed, the House Energy and Commerce Committee is explicitly trying for outright preemption, without replacement, trying to slip it into their budget proposal (edit: I originally thought this was the Senate, not the House):

That is, of course, completely insane, in addition to presumably being completely illegal to put into a budget. Presumably the Byrd rule kills it for now.

It would be one thing to pass this supposed ‘light touch’ AI regulation, that presented a new legal regime to handle AI models at the federal level, and do so while preempting state action.

It is quite another to have your offer be nothing. Literally nothing. As in, we are Congress, we cannot pass laws, but we will prevent you from enforcing any laws, or fixing any laws, for ten years.

They did at least include a carveout for laws that actively facilitate AI, including power generation for AI, so that states aren’t prevented from addressing the patchwork of existing laws that might kneecap AI and definitely will prevent adaptation and diffusion in various ways.

Even with that, the mind boggles to think of even the mundane implications. You couldn’t pass laws against anything, including CSAM or deepfakes. That’s in addition to the inability of the states to do the kinds of things we need them to do that this law is explicitly trying to prevent, such as SB 1047’s requirements that if you want to train a frontier AI model, you have to tell us you are doing that and share your safety and security protocol, and take reasonable care against catastrophic (and existential) risks.

Again, this is with zero sign of any federal rule at all.

In the following, if quote marks are used they literally said it. If not it’s a dramatization. I was maximizing truthiness, not aiming for a literal translation, read this as if it’s an extended SNL sketch, written largely for my own use and amusement.

Senator Ted Cruz (R-Texas): AI innovation. No rules. Have to beat China. Europe overregulates. Yay free internet. Biden had bad vibes and was woke. Not selling advanced AI chips to places that would let our competitors have them would cripple American tech companies. Trump. We need sandboxes and deregulation.

Senator Maria Cantwell (D-Washington): Pacific northwest. University of Washington got Chips act money. Microsoft contracted a fusion company in Washington. Expand electricity production. “Export controls are not a trade strategy.” Broad distribution of US-made AI chips. “American open AI systems” must be dominant across the globe.

Senator Cruz: Thank you. Our witnesses are OpenAI CEO Sam Altman, Lisa Su the CEO of AMD who is good because Texas, Michael Intrator the CEO of Core Weave, and Brad Smith the vice Chair and President of Microsoft.

Sam Altman (CEO OpenAI): Humility. Honor to be here. Scientists are now 2-3 times more productive and other neat stuff. America. Infrastructure. Texas. American innovation. Internet. America. Not Europe. America is Magic.

Dr. Lisa Su (CEO AMD): Honor. America. We make good chips. They do things. AI is transformative technology. Race. We could lose. Must race faster. Infrastructure. Open ecosystems. Marketplace of ideas. Domestic supply chain. Talent. Public-private partnership. Boo export controls, we need everyone to use our chips because otherwise they won’t use our technology and they might use other technology instead, this is the market we must dominate. AMD’s market. I had a Commodore 64 and an Apple II. They were innovative and American.

Senator Cruz: I also had an Apple II, but sadly I’m a politician now.

Michael Intrator (CEO Coreweave): I had a Vic 20. Honor. American infrastructure. Look at us grow. Global demand for AI infrastructure. American competitive edge. Productivity. Prosperity. Infrastructure. Need more compute. AI infrastructure will set the global economic agenda and shape human outcomes. China. AI race.

We need: Strategic investment stability. Stable, predictable policy frameworks, secure supply chains, regulatory environments that foster innovation. Energy infrastructure development. Permitting and regulatory reform. Market access. Trade agreements. Calibrated export controls. Public-private partnership. Innovation. Government and industry must work together.

Brad Smith (President of Microsoft): Chart with AI tech stack. All in this together. Infrastructure. Platform. Applications. Microsoft.

We need: Innovation. Infrastructure. Support from universities and government. Basic research. Faster adaptation, diffusion. Productivity. Economic growth. Investing in skilling and education. Right approach to export controls. Trust with the world. We need to not build machines that are better than people, only machines that make people better. Machines that give us jobs, and make our jobs better. We can do that, somehow.

Tech. People. Ambition. Education. Opportunity. The future.

Senator Tim Sheehy (R-MT, who actually said this verbatim): “Thank you for your testimony. Certainly makes me sleep better at night. Worried about Terminator and Skynet coming after us, knowing that you guys are behind the wheel, but in five words or less, start with you Mr. Smith. What are the five words you need to see from our government to make sure we win this AI race?”

Brad Smith: “More electricians. That’s two words. Broader AI education.”

Senator Sheehy: “And no using ChatGPT as a friend.”

Michael Intrator: “We need to focus on streamlining the ability to build large things.”

Dr. Lisa Su: “Policies to help us run faster in the innovation race.”

Sam Altman: “Allow supply chain-sensitive policy.”

Senator Sheehy: So we race. America wins races. Government support and staying out of your way are how America wins races. How do we incentivize companies so America wins the race? A non-state actor could win.

Sam Altman (a non-state actor currently winning the race): Stargate. Texas. We need electricity, permitting, supply chain. Investment. Domestic production. Talent recruitment. Legal clarity and clear rules. “Of course there will be guardrails” but please assure us you won’t impose any.

Dr. Lisa Su: Compute. AMD, I mean America, must build compute. Need domestic manufacturing. Need simple export rules.

Senator Sheehy: Are companies weighing doing AI business in America versus China?

Dr. Lisa Su: American tech is the best, but if it’s not available they’ll buy elsewhere.

Senator Sheehy: Infrastructure, electricians, universities, regulatory framework, can do. Innovation. Talent. Run faster. Those harder. Can’t manufacture talent, can’t make you run faster. Can only give you tools.

Senator Cantwell: Do we need NIST to set standards?

Sam Altman: “I don’t think we need it. It can be helpful.”

Michael Intrator: “Yes, yes.”

Senator Cantwell: Do we want NIST standards that let us move faster?

Brad Smith: “What I would say is this, first of all, NIST is where standards go to be adopted, but it’s not necessarily where they first go to be created.” “We will need industry standards, we will need American adoption of standards, and you are right. We will need US efforts to really ensure that the world buys into these standards.”

Michael Intrator: Standards need to be standardized.

Senator Cantwell: Standards let you move fast. Like HTTP or HTML. So, on exports, Malaysia. If we sell them chips can we ensure they don’t sell those chips to China?

Michael Intrator (‘well no, but…’): If we don’t sell them chips, someone else will.

Senator Cantwell: We wouldn’t want Huawei to get a better chip than us and put in a backdoor. Again. Don’t buy chips with backdoors. Also, did you notice this new tariff policy that also targets our allies is completely insane? We could use some allies.

Michael Intrator: Yes. Everyone demands AI.

Dr. Lisa Su: We need an export strategy. Our allies need access. Broad AI ecosystem.

Senator Bernie Moreno (R-Ohio): You all need moar power. TSMC making chips in America good. Will those semiconductor fabs use a lot of energy?

Dr. Lisa Su: Yes.

Senator Moreno: We need to make the highest-performing chips in America, right?

Dr. Lisa Su: Right.

Senator Moreno: Excuse me while rant that 90% of new power generation lately has been wind and solar when we could use natural gas. And That’s Terrible.

Brad Smith: Broad based energy solutions.

Senator Moreno: Hey I’m ranting here! Renewables suck. But anyway… “Mr. Altman, thank you for first of all, creating your platform and an open basis and agreeing to stick to the principles of nonprofit status. I think that’s very important.” So, Altman, how do we protect our children from AI? From having their friends be AI bots?

Sam Altman: Iterative deployment. Learn from mistakes. Treat adults like adults. Restrict child access. Happy to work with you on that. We must beware AI and social relationships.

Senator Moreno: Thanks. Mr. Intrator, talk about stablecoins?

Michael Intrator: Um, okay. Potential. Synergy.

Senator Klobuchar (D-Minnesota): AI is exciting. Minnesota. Renewables good, actually. “I think David Brooks put it the best when he said, I found it incredibly hard to write about AI because it is literally unknowable whether this technology is leading us to heaven or hell. We want it to lead us to heaven. And I think we do that by making sure we have some rules of the road in place so it doesn’t get stymied or set backwards because of scams or because of use by people who want to do us harm.” Mr. Altman, do you agree that a risk-based approach to regulation is the best way to place necessary guardrails for AI without stifling innovation?

Sam Altman: “I do. That makes a lot of sense to me.”

Senator Klobuchar: “Okay, thanks. And did you figure that out in your attic?”

Sam Altman: “No, that was a more recent discovery.”

Senator Klobuchar: Do you agree that consumers need to be more educated?

Brad Smith: Yes.

Senator Klobuchar: Pivot. Altman, what evals do you use for hallucinations?

Sam Altman: Hallucinations are getting much better. Users are smart, they can handle it.

Senator Klobuchar: Uh huh. Pivot. What about my bill about sexploitation and deepfakes? Can we build models that can detect deepfakes?

Brad Smith: Working on it.

Senator Klobuchar: Pivot. What about compensating content creators and journalists?

Brad Smith: Rural newspapers good. Mumble, collective negotiations, collaborative action, Congress, courts. Balance. They can get paid but we want data access.

Senator Cruz: I am very intelligent. Who’s winning, America or China, how close is it and how can we win?

Sam Altman: American models are best, but not by a huge amount of time. America. Innovation. Entrepreneurship. America. “We just need to keep doing the things that have worked for so long and not make a silly mistake.”

Dr. Lisa Su: I only care about chips. America is ahead in chips, but even without the best chips you can get a lot done. They’re catching up. Spirit of innovation. Innovate.

Michael Intrator: On physical infrastructure it’s not going so great. Need power and speed.

Brad Smith: America in the lead but it is close. What matters is market share and adaptation. For America, that is. We need to win trust of other countries and win the world’s markets first.

Ted Cruz: Wouldn’t it be awful if we did something like SB 1047? That’s just like something the EU would do, it’s exactly the same thing. Totally awful, am I right?

Sam Altman: Totally, sure, pivot. We need algorithms and data and compute and the best products. Can’t stop, won’t stop. Need infrastructure, need to build chips in this country. It would be terrible if the government tried to set standards. Let us set our own standards.

Lisa Su: What he said.

Brad Smith (presumably referring to when SB 1047 said you would have had to tell us the model exists and is being trained and what your safety plan was): Yep, what he said. Especially important is no pre-approval requirements.

Michael Intrator: A patchwork of regulatory overlays would cause friction.

Senator Brian Schatz (D-Hawaii): You do know no one is proposing these EU-style laws, right? And the alternative proposal seems to be nothing? Does nothing work for you?

Sam Altman: “No, I think some policy is good. I think it is easy for it to go too far and as I’ve learned more about how the world works, I’m more afraid that it could go too far and have really bad consequences. But people want use products that are generally safe. When you get on an airplane, you kind of don’t think about doing the safety testing yourself. You’re like, well, maybe this is a bad time to use the airplane example, but you kind of want to just trust that you can get on it.”

Senator Brian Schatz: Great example. We need to know what we’re racing for. American values. That said, should AI content be labeled as AI?

Brad Smith: Yes, working on it.

Senator Brian Schatz: “Data really is intellectual property. It is human innovation, human creativity.” You need to pay for it. Isn’t the tension that you want to pay as little as possible?

Brad Smith: No? And maybe we shouldn’t have to?

Brian Schatz: How can an AI agent deliver services and reduce pain points while interacting with government?

Sam Altman: The AI on your phone does the thing for you and answers your questions.

Brad Smith: That means no standing in line at the DMV. Abu Dhabi does this already.

Senator Ted Budd (R-North Carolina): Race. China. Energy. Permitting problems. They are command and control so they’re good at energy. I’m working on permit by rule. What are everyone’s experiences contracting power?

Michael Intrator: Yes, power. Need power to win race. Working on it. Regulation problems.

Brad Smith: We build a lot of power. We do a lot of permitting. Federal wetlands permit is our biggest issue, that takes 18-24 months, state and local is usually 6-9.

Senator Ted Budd: I’m worried people might build on Chinese open models like DeepSeek and the CCP might promote them. How important is American leadership in open and closed models? How can we help?

Sam Altman: “I think it’s quite important in both.” You can help with energy and infrastructure.

Senator Andy Kim (D-New Jersey): What is this race? You said it’s about adaptation?

Brad Smith: It’s about limiting chip export controls to tier 2 countries.

Senator Kim: Altman, is that the right framing of the race?

Sam Altman: It’s about the whole stack. We want them to use US chips and also ChatGPT.

Senator Kim (asking a good question): Does building on our chips mean they’ll use our products and applications?

Sam Altman: Marginally. Ideally they’ll use our entire stack.

Senator Kim: How’re YOU doin? On tools and applications.

Sam Altman: Really well. We’re #1, not close.

Senator Kim: How are we doing on talent?

Dr. Lisa Su: We have the smartest engineers and a great talent base but we need more. We need international students, high skilled immigration.

Senator Eric Schmitt (R-Missouri): St. Louis, Missouri. What can we learn from Europe’s overregulation and failures?

Sam Altman: We’d love to invest in St. Louis. Our EU releases take longer, that’s not good.

Senator Schmitt: How does vertical AI stack integration work? I’ve heard China is 2-6 months behind on LLMs. Does our chip edge give us an advantage here?

Sam Altman: People everywhere will make great models and chips. What’s important is to get users relying on us for their hardest daily tasks. But also chips, algorithm, infrastructure, data. Compound effects.

Senator Schmitt: EU censorship is bad. NIST mentioned misinformation, oh no. How do we not do all that here?

Michael Intrator: It makes Europe uninvestable but it’s not our focus area.

Sam Altman: Putting people in jail for speech is bad and un-American. Freedom.

Senator Hickenlooper (D-Colorado): How does Microsoft evaluate Copilot’s accuracy and performance? What are the independent reviews?

Brad Smith: Don’t look at me, those are OpenAI models, that’s their job, then we have a joint Deployment Safety Board. We evaluate using tools and ensure it passes tests.

Senator Hickenlooper: “Good. I like that.” Altman, have you considered using independent standard and safety evaluations?

Sam Altman: We do that.

Senator Hickenlooper: Chips act. What is the next frontier in Chip technology in terms of energy efficiency? How can we work together to improve direct-to-chip cooling for high-performance computing?

Lisa Su: Innovation. Chips act. AI is accelerating chip improvements.

Senator John Curtis (R-Utah): Look at me, I had a TRS-80 made by Radio Shack with upgraded memory. So what makes a state, say Utah, attractive to Stargate?

Sam Altman: Power cooling, fast permitting, electricians, construction workers, a state that will partner to work quickly, you in?

Senator Curtis: I’d like to be, but for energy how do we protect rate payers?

Sam Altman: More power. If you permit it, they will build.

Brad Smith: Microsoft helps build capacity too.

Senator Curtis: Yay small business. Can ChatGPT help small business?

Sam Altman: Can it! It can run your small business, write your ads, review your legal docs, answer customer emails, you name it.

Senator Duckworth (D-Illinois): Lab-private partnerships. Illinois. National lab investments. Trump and Musk are cutting research. That’s bad. Doge bad. Innovation. Don’t cut innovation. Help me out here with the national labs.

Sam Altman: We partner with national labs, we even shared model weights with them. We fing love science. AI is great for science, we’ll do ten years of science in a year.

Brad Smith: All hail national labs. We work with them too. Don’t take them for granted.

Dr. Lisa Su: Amen. We also support public-private partnerships with national labs.

Michael Intrator: Science is key to AI.

Senator Duckworth: Anyone want to come to a lab in Illinois? Everyone? Cool.

Senator Cruz: Race. Vital race. Beyond jobs and economic growth. National security. Economic security. Need partners and allies. Race is about market share of AI models and solutions in other countries. American values. Not CCP values. Digital trade rules. So my question: If we don’t adopt standards via NIST or otherwise, won’t others pick standards without us?

Brad Smith: Yes, sir. Europe won privacy law. Need to Do Something. Lightweight. Can’t go too soon. Need a good model standard. Must harmonize.

Senator Lisa Blunt Rochester (D-Delaware): Future of work is fear. But never mind that, tell me about your decision to transition to a PBC and attempt to have the PBC govern your nonprofit.

Sam Altman: Oceania has always been at war with Eastasia, we’ve talked to lawyers and regulators about the best way to do that and we’re excited to move forward.

Senator Rochester: I have a bill, Promoting Resilient Supply Chains. Dr. Su, what specific policies would help you overcome supply chain issues?

Dr. Lisa Su: Semiconductors are critical to The Race. We need to think end-to-end.

Senator Rochester: Mr. Smith, how do you see interdependence of the AI stack sections creating vulnerabilities or opportunities in the AI supply chain?

Brad Smith: More opportunities than vulnerabilities, it’s great to work together.

Senator Moran (R-Kansas): Altman, given Congress is bad at its job and can’t pass laws, how can consumers control their own data?

Altman: You will happily give us all your data so we can create custom responses for you, mwahahaha! But yeah, super important, we’ll keep it safe, pinky swear.

Senator Moran: Thanks. I hear AI matters for cyberattacks, how can Congress spend money on this?

Brad Smith: AI is both offense and defense and faster than any human. Ukraine. “We have recognize it’s ultimately the people who defend not just countries, but companies and governments” so we need to automate that, stat. America. China. You should fund government agencies, especially NSA.

Senator Moran: Kansas. Rural internet connectivity issues. On-device or low bandwidth AI?

Altman: No problem, most of the work is in the cloud anyway. But also shibboleth, rural connectivity is important.

Senator Ben Ray Lujan (D-New Mexico): Thanks to Altman and Smith for your involvement with NIST AISI, and Su and Altman for partnerships with national labs. Explain the lab thing again?

Altman: We give them our models. o3 is good at helping scientists. Game changer.

Dr. Lisa Su: What he said.

Senator Lujan: What investments in those labs are crucial to you?

Altman: Just please don’t standardize me, bro. Absolutely no standards until we choose them for ourselves first.

Dr. Lisa Su: Blue sky research. That’s your comparative advantage.

Senator Lujan: My bipartisan bill is Test AI Act, to build state capacity for tests and evals. Seems important. Trump is killing basic research, and that’s bad, we need to fix that. America. NSF, NIH, DOE, OSTP. But my question for you is about how many engineers are working optimizations for reduced energy, and what is your plan to reduce water use by data centers?

Brad Smith: Okay, sure, I’ll have someone track that number down. The data centers don’t actually use much water, that’s misinformation, but we also have more than 90 water replenishment projects including in New Mexico.

Michael Intrator: Yeah I also have no idea how many engineers are working on those optimizations, but I assure you we’re working on it, it turns out compute efficiency is kind of a big deal these days.

Senator Lujan: Wonderful. Okay, a good use of time would be to ask “yes or no: Is it important to ensure that in order for AI to reach its full prominence that people across the country should be able to connect to fast affordable internet?”

Dr. Su: Yes.

Senator Lujan: My work is done here.

Senator Cynthia Lummis (R-Wyoming): AI is escalating quickly. America. Europe overregulates. GDPR limits AI. But “China appears to be fast-tracking AI development.” Energy. Outcompete America. Must win. State frameworks burdensome. Only 6 months ahead of China. Give your anti-state-regulation speech?

Sam Altman: Happy to. Hard to comply. Slow us down. Need Federal only. Light touch.

Michael Intrator: Preach. Infrastructure could be trapped with bad regulations.

Senator Lummis: Permitting process. Talk about how it slows you down?

Michael Intrator: It’s excruciating. Oh, the details I’m happy to give you.

Senator Lummis: Wyoming. Natural gas. Biden doesn’t like it. Trump loves it. How terrible would it be if another president didn’t love natural gas like Trump?

Brad Smith: Terrible. Bipartisan natural gas love. Wyoming. Natural gas.

Senator Lummis (here let me Google that for you): Are you exploring small modular nuclear? In Wyoming?

Brad Smith: Yes. Wyoming.

Senator Lummis: Altman, it’s great you’re releasing an open… whoops time is up.

Senator Jacky Rosen (D-Nevada): AI is exciting. We must promote its growth. But it could also be bad. I’ll start with DeepSeek. I want to ban it on government devices and for contractors. How should we deal with PRC-developed models? Could they co-opt AI to promote an ideology? Collect sensitive data? What are YOU doing to combat this threat?

Brad Smith: DeepSeek is both a model and an app. We don’t let employees use the app, we didn’t even put it in our app store, for data reasons. But the model is open, we can analyze and change it. Security first.

Senator Rosen: What are you doing about AI and antisemitism? I heard AI is perpetuating stereotypes. Will you collaborate with civil society on a benchmark?

Sam Altman (probably wondering if Rosen knows that Altman is Jewish): We do collaborate with civil society. We’re not here to be antisemitic.

Senator Rosen (except less coherently than this, somehow): AI using Water. Energy. We’re all concerned. Also data center security. I passed a bill on that, yay me. Got new chip news? Faster and cooler is better. How can we make data more secure? Talk about interoperability.

Dr. Lisa Su (wisely): I think for all our sakes I’ll just say all of that is important.

Senator Dan Sullivan (R-Alaska): Agree with Cruz, national economic and national security. Race. China. Everyone agree? Good. Huge issue. Very important. Are we in the lead?

Sam Altman: We are leading and will keep leading. America. Right thing to do. But we need your help.

Senator Sullivan: Our help, you say. How can we help?

Sam Altman: Infrastructure. Supply chain. Everything in America. Infrastructure. Supply chain. Stargate. Full immunity from copyright for model training. “Reasonable fair like touch regulatory framework.” Ability to deploy quickly. Let us import talent.

Senator Sullivan (who I assume has not seen the energy production charts): Great. “One of our comparative advantages over China in my view has to be energy.” Alaska. Build your data centers here. It is a land. A cold land. With water. And gas.

Sam Altman: “That’s very compelling.”

Senator Sullivan: Alaska is colder than Texas. It has gas. I’m frustrated you can’t see that Alaska is cold. And has gas. And yet Americans invest in China. AI, quantum. Benchmark Capital invested $75 million in Chinese AI. How dare they.

Brad Smith: China can build power plants better than we can. Our advantage is the world’s best people plus venture capital. Need to keep bringing best people here. America.

Senator Sullivan: “American venture capital funds Chinese AI. Is that in our national interest?”

Brad Smith: Good question, good that you’re focusing on that but stop focusing on that, just let us do high-skilled immigration.

Senator Markey (D-Massachusetts): Environmental impact of AI. AI weather forecasting. Climate change. Electricity. Water. Backup diesel generators can cause respiratory and cardiovascular issues and cancer. Need more info. Do you agree more research is needed, like in the AIEIA bill?

Brad Smith: “Yes. One study was just completed last December.” Go ahead and convene your stakeholders and measure things.

Sam Altman: Yes, the Federal government should study and measure that. Use AI.

Senator Markey: AI could cure cancer, but it could also cause climate change. Equally true. Trump wants to destroy incentives for wind, solar, battery. “That’s something you have to weigh in on. Make sure he does not do that.” Now, AI’s impact on disadvantaged communities. Can algorithms be biased and cause discrimination?

Brad Smith: Yes. We test to avoid that outcome.

Senator Markey (clip presumably available now on TikTok): Altman doesn’t want privacy regulations. But AI can cause harm. To marginalized communities. Bias. Discrimination. Mortgage discrimination. Bias in hiring people with disabilities. Not giving women scholarships. Real harms. Happening now. So I wrote the AI Civil Rights Act to ensure you all eliminate bias and discrimination, and we hold you accountable. Black. Brown. LGBTQ. Virtual world needs same protections as real world. No question.

Senator Gary Peters (D-Michigan): America. Must be world leader in AI. Workforce. Need talent here. Various laws for AI scholarships, service, training. University of Michigan, providing AI tools to students. “Mr. Altman, when we met last year in my office and had a great, you said that upwards of 70% of jobs could be limited by AI.” Prepare for social disruption. Everyone must benefit. How can your industry mitigate job loss and social disruption?

Sam Altman: AI will come at you fast. But otherwise it’s like other tech, jobs change, we can handle change. Give people tools early. That’s why we do iterative development. We can make change faster by making it as fast as possible. But don’t worry, it’s just faster change, it’s just a transition.

Senator Peters: We need AI to enhance work, not displace work. Like last 100 years.

Sam Altman (why yes, I do mean the effect on jobs, senator): We can’t imagine the jobs on the other side of this. Look how much programming has changed.

Senator Peters: Open tech is good. Open hardware. What are the benefits of open standards and system interoperability at the hardware level? What are the supply chain implications?

Dr. Lisa Su: Big advantages. Open is the best solution. Also good for security.

Senator Peters: You’re open, Nvidia is closed. Why does that make you better?

Dr. Lisa Su: Anyone can innovate so we don’t have to. We can put that together.

Senator John Fetterman (D-Pennsylvania, I hope he’s okay): Yay energy. National security. Fossil. Nuclear. Important. Concern about Pennsylvania rate payers. Prices might rise 20%. Concerning. Plan to reopen Three Mile Island. But I had to grab my hamster and evacuate in 1979. I’m pro nuclear. Microsoft. But rate payers.

Brad Smith: Critical point. We invest to bring on as much power as we use, so it doesn’t raise prices. We’ll pay for grid upgrades. We create construction jobs.

Senator Fetterman: I’m a real senator which means I get to meet Altman, squee. But what about the singularity? Address that?

Sam Altman: Nice hoodies. I am excited by the rate of progress, also cautious. We don’t understand where this, the biggest technological revolution ever, is going to go. I am curious and interested, yet things will change. Humans adapt, things become normal. We’ll do extraordinary things with these tools. But they’ll do things we can’t wrap our heads around. And do recursive self-improvement. Some call that singularity, some call it takeoff. New era in humanity. Exciting that we get to live through that and make it a wonderful thing. But we’ve got to approach it with humility and some caution, always twirling twirling towards freedom.

Senator Klobuchar (oh no not again): There’s a new pope. I work really hard on a bill. YouTube, RIAA, SAG, MPAA all support it. 75k songs with unauthorized deepfakes. Minnesota has a Grammy-nominated artist. Real concern. What about unauthorized use of people’s image? How do we protect them? If you don’t violate people’s rights someone else will.

Brad Smith: Genuflection. Concern. Deepfakes are bad. AI can identify fakes. We apply voluntary guardrails.

Senator Klobuchar: And will you both read my bill? I worked so hard.

Brad Smith: Absolutely.

Sam Altman: Happy to. Deepfakes. Big issue. Can’t stop them. If you allow open models you can’t stop them from doing it. Guardrails. Have people on lookout.

Senator Klobuchar: “It’s coming, but there’s got to be some ways to protect people. We should do everything privacy. And you’ve got to have some way to either enforce it, damages, whatever, there’s just not going to be any consequences.”

Sam Altman: Absolutely. Bad actors don’t always follow laws.

Senator Cruz: Sorry about tweeting out that AI picture of Senator Fetterman as the Pope of Greenland.

Senator Klobuchar: Whoa, parody is allowed.

Senator Cruz: Yeah, I got him good. Anyway, Altman, what’s the most surprising use of ChatGPT you’ve seen?

Sam Altman: Me figuring out how to take care of my newborn baby.

Senator Cruz: My teenage daughter sent me this long detailed emotional text, and it turns out ChatGPT wrote it.

Sam Altman (who lacks that option): “I have complicated feelings about that.”

Senator Cruz: “Well use the app and then tell me what your thoughts. Okay, Google just revealed that their search traffic on Safari declined for the first time ever. They didn’t send me a Christmas card. Will chat GPT replace Google as the primary search engine? And if so when?”

Sam Altman: Probably not, Google’s good, Gemini will replace it instead.

Senator Cruz: How big a deal was DeepSeek? A major seismic shocking development? Not that big a deal? Somewhere in between? What’s coming next?

Sam Altman: “Not a huge deal.” They made a good open-source model and they made a highly downloaded consumer app. Other companies are going to put out good models. If it was going to beat ChatGPT that would be bad, but it’s not.

Dr. Lisa Su: Somewhere in between. Different ways of doing things. Innovation. America. We’re #1 in models. Being open was impactful. But America #1.

Michael Intrator (saying the EMH is false and it was only news because of lack of situational awareness): It raised the specter of China’s AI capability. People became aware. Financial market implications. “China is not theoretically in the race for AI dominance, but actually is very much a formidable competitor.” Starting gun for broader population and consciousness that we have to work. America.

Brad Smith: Somewhere in between. Wasn’t shocking. We knew. All 200 of their employees are four or fewer years out of college.

Ted Cruz: I’m glad the AI diffusion rule was rescinded. Bad rule. Too complex. Unfair to our trading partners. “That doesn’t necessarily mean there should be no restrictions and there are a variety of views on what the rules should be concerning AI diffusion.” Nvidia wants no rules. What should be the rule?

Sam Altman: I’m also glad that was rescinded. Need some constraints. We need to win diffusion not stop diffusion. America. Best data centers in America. Other data centers elsewhere. Need them to use ChatGPT not DeepSeek. Want them using US chips and US data center technology and Microsoft. Model creation needs to happen here.

Dr. Lisa Su: Happy they were rescinded. Need some restrictions. National security. Simplify. Need widespread adoption of our tech and ecosystem. Simple rules, protect our tech also give out our tech. Devil’s in details. Balance. Broader hat. Stability.

Michael Intrator: Copy all that. National security. Work with regulators. Rule didn’t let us participate enough.

Brad Smith: Eliminate tier 2 restrictions to ensure confidence and access to American tech, even most advanced GPUs. Trusted providers only. Security standards. Protection against diversion or certain use cases. Keep China out, guard against catastrophic misuse like CBRN risks.

Ted Cruz: Would you support a 10-year ban on state AI laws, if we call it a ‘learning period’ and we can revisit after the singularity?

Sam Altman: Don’t meaningfully regulate us, and call it whatever you need to.

Dr. Lisa Su: “Aligned federal approach with really thoughtful regulation would be very, very much appreciated.”

Michael Intrator: Agreed.

Brad Smith: Agreed.

Ted Cruz: We’re done, thanks everyone, and live from New York it’s Saturday night.

This exchange was pretty wild, the one time someone asked about this, and Altman dodged it.

For the time being, we are not hoping the Congress will help us not die, or help the economy and society deal with the transformations that are coming, or even that it can help with the mess that is existing law.

We are instead hoping the Congress will not actively make things worse.

Congress has fully abdicated its responsibility to consider the downside risks of AI, including catastrophic and existential risks. It is fully immersed in jingoism and various false premises, and under the sway of certain corporations. It is attempting to actively prevent states from being able to do literally anything on AI, even to fix existing non-AI laws that have unintended implications no one wants.

We have no real expectation that Congress will be able to step up and pass the laws it is preventing the states from passing. In many cases, it can’t, because fixes are needed for existing state laws. At most, we can expect Congress to manage things like laws against deepfakes, but that doesn’t address the central issues.

On the sacrificial altars of ‘competitiveness’ and ‘innovation’ and ‘market share’ they are going to attempt to sacrifice even the export controls that keep the most important technology, models and compute out of Chinese hands, accomplishing the opposite. They are vastly underinvesting in state capacity and in various forms of model safety and security, even for straight commercial and mundane utility purposes, out of spite and a ‘if you touch anything you kill innovation’ madness.

Meanwhile, they seem little interested in addressing that the main actions of the Federal government in 2025 have been to accomplish the opposite of these goals. Where we need talent, we drive it away. Where we need trade and allies, we alienate our allies and restrict trade.

There is talk of permitting reform and helping with energy. That’s great, as far as it goes. It would at least be one good thing. But I don’t see any substantive action.

They’re not only not coming to save us. They’re determined to get in the way.

That doesn’t mean give up. It especially doesn’t mean give up on the fight over the new diffusion rules, where things are very much up in the air, and these Senators, if they are good actors, have clearly been snookered.

These people can be reached and convinced. It is remarkably easy to influence them. Even under their paradigm of America, Innovation, Race, we are making severe mistakes, and it would go a long way to at least correct those. We are far from the Pareto frontier here, even ignoring the fact that we are all probably going to die from AI if we don’t do something about that.

Later, events will overtake this consensus, the same way the vibes have previously shifted at the rate of at least once a year. We need to be ready for that, and also to stop them from doing something crazy when the next crisis happens and the public or others demand action.

A Live Look at the Senate AI Hearing Read More »

new-pope-chose-his-name-based-on-ai’s-threats-to-“human-dignity”

New pope chose his name based on AI’s threats to “human dignity”

“Like any product of human creativity, AI can be directed toward positive or negative ends,” Francis said in January. “When used in ways that respect human dignity and promote the well-being of individuals and communities, it can contribute positively to the human vocation. Yet, as in all areas where humans are called to make decisions, the shadow of evil also looms here. Where human freedom allows for the possibility of choosing what is wrong, the moral evaluation of this technology will need to take into account how it is directed and used.”

History repeats with new technology

While Pope Francis led the call for respecting human dignity in the face of AI, it’s worth looking a little deeper into the historical inspiration for Leo XIV’s name choice.

In the 1891 encyclical Rerum Novarum, the earlier Leo XIII directly confronted the labor upheaval of the Industrial Revolution, which generated unprecedented wealth and productive capacity but came with severe human costs. At the time, factory conditions had created what the pope called “the misery and wretchedness pressing so unjustly on the majority of the working class.” Workers faced 16-hour days, child labor, dangerous machinery, and wages that barely sustained life.

The 1891 encyclical rejected both unchecked capitalism and socialism, instead proposing Catholic social doctrine that defended workers’ rights to form unions, earn living wages, and rest on Sundays. Leo XIII argued that labor possessed inherent dignity and that employers held moral obligations to their workers. The document shaped modern Catholic social teaching and influenced labor movements worldwide, establishing the church as an advocate for workers caught between industrial capital and revolutionary socialism.

Just as mechanization disrupted traditional labor in the 1890s, artificial intelligence now potentially threatens employment patterns and human dignity in ways that Pope Leo XIV believes demand similar moral leadership from the church.

“In our own day,” Leo XIV concluded in his formal address on Saturday, “the Church offers to everyone the treasury of her social teaching in response to another industrial revolution and to developments in the field of artificial intelligence that pose new challenges for the defense of human dignity, justice, and labor.”

New pope chose his name based on AI’s threats to “human dignity” Read More »

the-tinkerers-who-opened-up-a-fancy-coffee-maker-to-ai-brewing

The tinkerers who opened up a fancy coffee maker to AI brewing

(Ars contacted Fellow Products for comment on AI brewing and profile sharing and will update this post if we get a response.)

Opening up brew profiles

Fellow’s brew profiles are typically shared with buyers of its “Drops” coffees or between individual users through a phone app.

Credit: Fellow Products


Aiden profiles are shared and added to Aiden units through Fellow’s brew.link service. But the profiles are not offered in an easy-to-sort database, nor are they easy to scan for details. So Aiden enthusiast and hobbyist coder Kevin Anderson created brewshare.coffee, which gathers both general and bean-based profiles, makes them easy to search and load, and adds optional but quite helpful suggested grind sizes.

As a non-professional developer jumping into a public offering, he had to work hard on data validation, backend security, and mobile-friendly design. “I just had a bit of an idea and a hobby, so I thought I’d try and make it happen,” Anderson writes. With his tool, brew links can be stored and shared more widely, which helped both Dixon and another AI/coffee tinkerer.
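Anderson’s actual validation code isn’t public, so as a rough illustration of what validating a user-submitted brew profile might involve, here is a minimal Python sketch. The field names and acceptable ranges are invented for the example; they are not Fellow’s real schema.

```python
# Hypothetical shape of a shared Aiden brew profile. Fellow's real
# schema is not public, so these fields and ranges are made up.
from dataclasses import dataclass

@dataclass
class BrewProfile:
    name: str
    water_temp_c: float   # brew temperature in Celsius
    ratio: float          # grams of water per gram of coffee
    pulses: int           # number of pour pulses

def validate(profile: BrewProfile) -> list[str]:
    """Return a list of validation errors; an empty list means valid."""
    errors = []
    if not profile.name.strip():
        errors.append("name must be non-empty")
    if not 80 <= profile.water_temp_c <= 100:
        errors.append("water_temp_c must be between 80 and 100")
    if not 10 <= profile.ratio <= 20:
        errors.append("ratio must be between 10 and 20")
    if not 1 <= profile.pulses <= 10:
        errors.append("pulses must be between 1 and 10")
    return errors
```

Collecting every error rather than rejecting on the first one makes it easier to show a submitter everything wrong with a profile at once, which matters for a public-facing form.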

Gabriel Levine, director of engineering at retail analytics firm Leap Inc., lost his OXO coffee maker (aka the “Barista Brain”) to malfunction just before the Aiden debuted. The Aiden appealed to Levine as a way to move beyond his coffee rut—a “nice chocolate-y medium roast, about as far as I went,” he told Ars. “This thing that can be hyper-customized to different coffees to bring out their characteristics; [it] really kind of appealed to that nerd side of me,” Levine said.

Levine had also been doing AI stuff for about 10 years, or “since before everyone called it AI—predictive analytics, machine learning.” He described his career as “both kind of chief AI advocate and chief AI skeptic,” alternately driving real findings and talking down “everyone who… just wants to type, ‘how much money should my business make next year’ and call that work.” Like Dixon, Levine’s work and fascination with Aiden ended up intersecting.

The coffee maker with 3,588 ideas

The author’s conversation with the Aiden Profile Creator, which pulled in both brewing knowledge and product info for a widely available coffee:

Levine’s Aiden Profile Creator is a custom ChatGPT bot, set up with a tailored prompt and told to weight certain knowledge more heavily. What kind of prompt and knowledge? Levine didn’t want to give away his exact work. But he cited resources like the Specialty Coffee Association of America and James Hoffmann’s coffee guides as examples of what he fed it.

What it does with that knowledge is something of a mystery to Levine himself. “There’s this kind of blind leap, where it’s grabbing the relevant pieces of information from the knowledge base, biasing toward all the expert advice and extraction science, doing something with it, and then I take that something and coerce it back into a structured output I can put on your Aiden,” Levine said.

It’s a blind leap, but it has landed just right for me so far. I’ve made four profiles with Levine’s prompt based on beans I’ve bought: Stumptown’s Hundred Mile; a light-roasted batch from Jimma, Ethiopia, from Small Planes; Lost Sock’s Western House filter blend; and some dark-roast beans given as a gift. With the Western House, Levine’s profile creator said it aimed to “balance nutty sweetness, chocolate richness, and bright cherry acidity, using a slightly stepped temperature profile and moderate pulse structure.” The resulting profile has worked great, even if the chatbot named it “Cherry Timber.”

Levine’s chatbot relies on two important things: Dixon’s work in revealing Fellow’s Aiden API and his own workhorse Aiden. Every Aiden profile link is created on a machine, so every profile created by Levine’s chat is launched, temporarily, from the Aiden in his kitchen, then deleted. “I’ve hit an undocumented limit on the number of profiles you can have on one machine, so I’ve had to do some triage there,” he said. As of April 22, nearly 3,600 profiles had passed through Levine’s Aiden.
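As a toy illustration of the “triage” Levine describes, the sketch below evicts the oldest profile when a per-machine cap is hit. The cap and the eviction policy here are hypothetical; Fellow’s actual limit is undocumented.

```python
# Sketch of managing a fixed number of profile slots on one machine,
# evicting the oldest profile to make room. The cap is hypothetical.
from collections import OrderedDict

class ProfileSlots:
    def __init__(self, cap: int):
        self.cap = cap
        self.slots = OrderedDict()  # profile name -> share link

    def add(self, name: str, link: str) -> list[str]:
        """Add a profile, returning the names evicted to stay under cap."""
        evicted = []
        while len(self.slots) >= self.cap:
            oldest, _ = self.slots.popitem(last=False)  # FIFO eviction
            evicted.append(oldest)
        self.slots[name] = link
        return evicted
```

A first-in-first-out policy matches the described workflow, where each generated profile is only needed long enough to hand its link to a user before it can be deleted.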

“My hope with this is that it lowers the bar to entry,” Levine said, “so more people get into these specialty roasts and it drives people to support local roasters, explore their world a little more. I feel like that certainly happened to me.”

Something new is brewing


Having admitted to myself that I find something generated by ChatGPT prompts genuinely useful, I’ve softened my stance slightly on LLM technology, if not the hype. Used within very specific parameters, with everything second-guessed, I’m getting more comfortable asking chat prompts for formatted summaries on topics with lots of expertise available. I do my own writing, and I don’t waste server energy on things I can, and should, research myself. I even generally resist calling language model prompts “AI,” given the term’s baggage. But I’ve found one way to appreciate its possibilities.

This revelation may not be new to someone already steeped in the models. But having tested—and tasted—my first big experiment while willfully engaging with a brewing bot, I’m a bit more awake.

This post was updated at 8:40 am with a different capture of a GPT-created recipe.

The tinkerers who opened up a fancy coffee maker to AI brewing Read More »

the-last-of-us-episode-5-recap:-there’s-something-in-the-air

The Last of Us episode 5 recap: There’s something in the air

New episodes of season 2 of The Last of Us are premiering on HBO every Sunday night, and Ars’ Kyle Orland (who’s played the games) and Andrew Cunningham (who hasn’t) will be talking about them here every Monday morning. While these recaps don’t delve into every single plot point of the episode, there are obviously heavy spoilers contained within, so go watch the episode first if you want to go in fresh.

Andrew: We’re five episodes into this season of The Last of Us, and most of the infected we’ve seen have still been of the “mindless, screeching horde” variety. But in the first episode of the season, we saw Ellie encounter a single “smart” infected person, a creature that retained some sense of strategy and a self-preservation instinct. It implied that the show’s monsters were not done evolving and that the seemingly stable fragments of civilization that had managed to take root were founded on a whole bunch of incorrect assumptions about what these monsters were and what they could do.

Amidst all the human-created drama, the changing nature of the Mushroom Zombie Apocalypse is the backdrop of this week’s entire episode, starting and ending with the revelation that a 2003-vintage cordyceps nest has become a hotbed of airborne spores, ready to infect humans with no biting required.

This is news to me, as a Non-Game Player! But Kyle, I’m assuming this is another shoe that you knew the series was going to drop.

Kyle: Actually, no. I suppose it’s possible I’m forgetting something, but I think the “some infected are actually pretty smart now” storyline is completely new to the show. It’s just one of myriad ways the show has diverged enough from the games at this point that I legitimately don’t know where it’s going to go or how it’s going to get there at any given moment, which is equal parts fun and frustrating.

I will say that the “smart zombies” made for my first real “How are Ellie and Dina going to get out of this one?” moment, as Dina’s improvised cage was being actively torn apart by a smart and strong infected. But then, lo and behold, here came Deus Ex Jesse to save things with a timely re-entrance into the storyline proper. You had to know we hadn’t seen the last of him, right?

Ellie is good at plenty of things, but not so good at lying low. Credit: HBO

Andrew: As with last week’s subway chase, I’m coming to expect that any time Ellie and Dina seem to be truly cornered, some other entity is going to swoop down and “save” them at the last minute. This week it was an actual ally instead of another enemy that just happened to take out the people chasing Ellie and Dina. But it’s the same basic narrative fake-out.

I assume their luck will run out at some point, but I also suspect that if it comes, that point will be a bit closer to the season finale.

Kyle: Without spoiling anything from the games, I will say you can expect both Ellie and Dina to experience their fair share of lucky and unlucky moments in the episodes to come.

Speaking of unlucky moments, while our favorite duo is hiding in the park we get to see how the local cultists treat captured WLF members, and it is extremely not pretty. I’m repeating myself a bit from last week, but the lingering on these moments of torture feels somehow more gratuitous in an HBO show, even when compared to similarly gory scenes in the games.

Andrew: Well we had just heard these cultists compared to “Amish people” not long before, and we already know they don’t have tanks or machine guns or any of the other Standard Issue The Last of Us Paramilitary Goon gear that most other people have, so I guess you’ve got to do something to make sure the audience can actually take the cultists seriously as a threat. But yeah, if you’re squeamish about blood-and-guts stuff, this one’s hard to watch.

I do find myself becoming more of a fan of Dina and Ellie’s relationship, or at least of Dina as a character. Sure, her tragic backstory’s a bit trite (she defuses this criticism by pointing out in advance that it is trite), but she’s smart, she can handle herself, and she’s a good counterweight to Ellie’s rush-in-shooting impulses. They are still, as Dina points out, doing something stupid and reckless. But I am at least rooting for them to make it out alive!

Kyle: Personality-wise, the Dina/Ellie pairing has just as many charms as the Joel/Ellie pairing from last season. But while I always felt like Joel and Ellie had a clear motivation and end goal driving them forward, the thirst for revenge pushing Dina and Ellie deeper into Seattle starts to feel less and less relevant the more time goes on.

The show seems to realize this, too, stopping multiple times since Joel’s death to kind of interrogate whether tracking down these killers is worth it when the alternative is just going back to Jackson and prepping for a coming baby. It’s like the writers are trying to convince themselves even as they’re trying (and somewhat failing, in my opinion) to convince the audience of their just and worthy cause.

Andrew: Yeah, I did notice the points where Our Heroes paused to ask “are we sure we want to be doing this?” And obviously, they are going to keep doing this, because we have spent all this time setting up all these different warring factions and we’re going to use them, dang it!! But this has never been a thing that was going to bring Joel back, and it only seems like it can end in misery, especially because I assume Jesse’s plot armor is not as thick as Ellie or Dina’s.

Kyle: Personally, I think an “Ellie and Dina give up on revenge and prepare to start a post-apocalyptic family (while holding off zombies)” storyline would have been a brave and interesting direction for a TV show. It would have been even braver for the game, although very difficult for a franchise where the main verbs are “shoot” and “stab.”

Andrew: Yeah if The Last of Us Part II had been a city-building simulator where you swap back and forth between managing the economy of a large town and building defenses to keep out the hordes, fans of the first game might have been put off. But as an Adventure of Link fan I say: bring on the sequels with few-if-any gameplay similarities to their predecessors!

The cordyceps threat keeps evolving. Credit: HBO

Kyle: “We killed Joel” team member Nora definitely would have preferred if Ellie and Dina were playing that more domestic kind of game. As it stands, Ellie ends up pursuing her toward a miserable-looking death in a cordyceps-infested basement.

The chase scene leading up to this mirrors a very similar one in the game in a lot of ways. But while I found it easy to suspend my disbelief for the (very scripted) chase on the PlayStation, watching it in a TV show made me throw up my hands and say “come on, these heavily armed soldiers can’t stop a little girl who’s making this much of a ruckus?”

Andrew: Yeah, Jesse can pop half a dozen “smart” zombies in half a dozen shots, but when it’s a girl with a giant backpack running down an open hallway, everyone suddenly has Star Wars Stormtrooper aim. The visuals of the cordyceps den, with the fungified guys breathing out giant clouds of toxic spores, are effective in their unsettling-ness, at least!

This episode’s other revelation is that what Joel did to the Fireflies in the hospital at the end of last season is apparently not news to Ellie, when she hears it from Nora in the episode’s final moments. It could be that Ellie, Noted Liar, is lying about knowing this. But Ellie is also totally incapable of controlling her emotions, and I’ve got to think that if she had been surprised by this, we would have been able to tell.

Kyle: Yeah, saying too much about what Ellie knows and when would be risking some major spoilers. For now I’ll just say the way the show decided to mix things up by putting this detailed information in Nora’s desperate, spore-infested mouth kind of landed with a wet thud for me.

I was equally perplexed by the sudden jump cut from “Ellie torturing a prisoner” to “peaceful young Ellie flashback” at the end of the episode. Is the audience supposed to assume that this is what is going on inside Ellie’s head or something? Or is the narrative just shifting without a clutch?

Andrew: I took it to mean that we were about to get a timeline-breaking departure episode next week, one where we spend some time in flashback mode filling in what Ellie knows and why before we continue on with Abby Quest. But I guess we’ll see, won’t we!

Kyle: Oh, I’ve been waiting with bated breath for a bevy of flashbacks I knew were coming in some form or another. But the particular way they shifted to the flashback here, with mere seconds left in this particular brutal episode, was baffling to me.

Andrew: I think you do it that way to get people hyped about the possibility of seeing Joel again next week. Unless it’s just a cruel tease! But it’s probably not, right? Unless it is!

Kyle: Now I kind of hope the next episode just goes back to Ellie and Dina and doesn’t address the five seconds of flashback at all. Screw you, audience!

The Last of Us episode 5 recap: There’s something in the air Read More »

the-justice-league-is-not-impressed-in-peacemaker-s2-teaser

The Justice League is not impressed in Peacemaker S2 teaser

Cena, Brooks, Holland, Agee, and Stroma are all back for S2, along with Nhut Le as Judomaster and Eagly, of course. Robert Patrick is also listed in the S2 cast, reprising his role as Chris’ father, Auggie; since Chris killed him in S1, one assumes Auggie will appear in flashbacks, hallucinations, or perhaps an alternate universe. (This is a soft reboot, after all.) New cast members include Frank Grillo as Rick Flag Sr. (Grillo voiced the role in the animated Creature Commandos), now head of A.R.G.U.S. and out to avenge his son’s death; Tim Meadows as A.R.G.U.S. agent Langston Fleury; and Sol Rodriguez as Sasha Bordeaux.

Set to “Oh Lord” by Foxy Shazam, the teaser opens with Leota driving Chris to a job interview, assuring him, “They’re gonna be doing backflips to get you to join.” It turns out to be an interview with Justice League members Green Lantern/Guy Gardner (Nathan Fillion), Hawkgirl/Kendra Saunders (Isabela Merced), and Maxwell Lord (Sean Gunn), but they are not really into the interviewing process or taking note of Chris’ marksmanship and combat skills. They even diss poor Chris while accidentally leaving the microphone turned on: “This guy sucks.” (All three reprise their roles from Superman and are listed as S2 cast members, but it’s unclear how frequently they will appear.)

The other team members aren’t faring much better. They saved the world from the butterflies; you’d think people would treat them with a bit more respect, if not as outright heroes. Leota is “living in the worst level of Grand Theft Auto,” per John Economos; Emilia Harcourt has anger management issues and is diagnosed with “a particularly severe form of toxic masculinity”; and Vigilante is working in the food service industry. There’s not much detail as to the plot, apart from Chris going on the run from A.R.G.U.S., but the final scene shows Chris walking through a door and encountering another version of himself. So things are definitely about to get interesting.

The second season of Peacemaker will premiere on Max on August 21, 2025.

The Justice League is not impressed in Peacemaker S2 teaser Read More »