Author name: Shannon Garcia

google-adds-veo-2-video-generation-to-gemini-app

Google adds Veo 2 video generation to Gemini app

Google has announced that yet another AI model is coming to Gemini, but this time, it’s more than a chatbot. The company’s Veo 2 video generator is rolling out to the Gemini app and website, giving paying customers a chance to create short video clips with Google’s allegedly state-of-the-art video model.

Veo 2 works like other video generators, including OpenAI’s Sora—you input text describing the video you want, and a Google data center churns through tokens until it has an animation. Google claims that Veo 2 was designed to have a solid grasp of real-world physics, particularly the way humans move. Google’s examples do look good, but presumably that’s why they were chosen.

Prompt: Aerial shot of a grassy cliff onto a sandy beach where waves crash against the shore, a prominent sea stack rises from the ocean near the beach, bathed in the warm, golden light of either sunrise or sunset, capturing the serene beauty of the Pacific coastline.

Veo 2 will be available in the model drop-down, but Google does note it’s still considering ways to integrate this feature and that the location could therefore change. However, it’s probably not there at all just yet. Google is starting the rollout today, but it could take several weeks before all Gemini Advanced subscribers get access to Veo 2. Gemini features can take a surprisingly long time to arrive for the bulk of users—for example, it took about a month for Google to make Gemini Live video available to everyone after announcing its release.

When Veo 2 does pop up in your Gemini app, you can provide it with as much detail as you want, which Google says will ensure you have fine control over the eventual video. Veo 2 is currently limited to 8 seconds of 720p video, which you can download as a standard MP4 file. Video generation uses even more processing than your average generative AI feature, so Google has implemented a monthly limit. However, it hasn’t confirmed what that limit is, saying only that users will be notified as they approach it.

Google adds Veo 2 video generation to Gemini app Read More »

openai-#13:-altman-at-ted-and-openai-cutting-corners-on-safety-testing

OpenAI #13: Altman at TED and OpenAI Cutting Corners on Safety Testing

Three big OpenAI news items this week were the FT article describing the cutting of corners on safety testing, the OpenAI former employee amicus brief, and Altman’s very good TED Interview.

The FT detailed OpenAI’s recent dramatic cutting back on the time and resources allocated to safety testing of its models.

In the interview, Chris Anderson made an unusually strong effort to ask good questions and push through attempts to dodge answering. Altman did a mix of giving a lot of substantive content in some places while dodging answering in others. Where he chose to do which was, itself, enlightening. I felt I learned a lot about where his head is at and how he thinks about key questions now.

The amicus brief backed up that OpenAI’s current actions are in contradiction to the statements OpenAI made to its early employees.

There are also a few other related developments.

What this post does not cover is GPT-4.1. I’m waiting on that until people have a bit more time to try it and offer their reactions, but expect coverage later this week.

The big headline from TED was presumably the increase in OpenAI’s GPU use.

Steve Jurvetson: Sam Altman at TED today: OpenAI’s user base doubled in just the past few weeks (an accidental disclosure on stage). “10% of the world now uses our systems a lot.”

When asked how many users they have: “Last we disclosed, we have 500 million weekly active users, growing fast.”

Chris Anderson: “But backstage, you told me that it doubled in just a few weeks.” @SamA: “I said that privately.”

And that’s how we got the update.

Revealing that private info wasn’t okay but it seems it was an accident, in any case Altman seemed fine with it.

Listening to the details, it seems that Altman was referring not to the growth in users, but instead to the growth in compute use. Image generation takes a ton of compute.

Altman says every day he calls people up and begs them for GPUs, and that DeepSeek did not impact this at all.

Steve Jurvetson: Sam Altman at TED today:

Reflecting on the life ahead for his newborn: “My kids will never be smarter than AI.”

Reaction to DeepSeek:

“We had a meeting last night on our open source policy. We are going to do a powerful open-source model near the frontier. We were late to act, but we are going to do really well now.”

Altman doesn’t explain here why he is doing an open model. The next question from Anderson seems to explain it, that it’s about whether people ‘recognize’ that OpenAI’s model is best? Later Altman does attempt to justify it with, essentially, a shrug that things will go wrong but we now know it’s probably mostly fine.

Regarding the accumulated knowledge OpenAI gains from its usage history: “The upload happens bit by bit. It is an extension of yourself, and a companion, and soon will proactively push things to you.”

Have there been any scary moments?

“No. There have been moments of awe. And questions of how far this will go. But we are not sitting on a conscious model capable of self-improvement.”

I listened to the clip and this scary moment question specifically refers to capabilities of new models, so it isn’t trivially false. It still damn well should be false, given what their models can do and the leaps and awe involved. The failure to be scared here is a skill issue that exists between keyboard and chair.

How do you define AGI? “If you ask 10 OpenAI engineers, you will get 14 different definitions. Whichever you choose, it is clear that we will go way past that. They are points along an unbelievable exponential curve.”

So AGI will come and your life won’t change, but we will then soon get ASI. Got it.

“Agentic AI is the most interesting and consequential safety problem we have faced. It has much higher stakes. People want to use agents they can trust.”

Sounds like an admission that they’re not ‘facing’ the most interesting or consequential safety problems at all, at least not yet? Which is somewhat confirmed by discussion later in the interview.

I do agree that agents will require a much higher level of robustness and safety, and I’d rather have a ‘relatively dumb’ agent that was robust and safe, for most purposes.

When asked about his Congressional testimony calling for a new agency to issue licenses for large model builders: “I have since learned more about how government works, and I no longer think this is the right framework.”

I do appreciate the walkback being explicit here. I don’t think that’s the reason why.

“Having a kid changed a lot of things in me. It has been the most amazing thing ever. Paraphrasing my co-founder Ilya, I don’t know what the meaning of life is, but I am sure it has something to do with babies.”

Statements like this are always good to see.

“We made a change recently. With our new image model, we are much less restrictive on speech harms. We had hard guardrails before, and we have taken a much more permissive stance. We heard the feedback that people don’t want censorship, and that is a fair safety discussion to have.”

I agree with the change and the discussion, and as I’ve discussed before if anything I’d like to see this taken further with respect to these styles of concern in particular.

Altman is asked about copyright violation, says we need a new model around the economics of creative output and that ‘people build off each others creativity all the time’ and giving creators tools has always been good. Chris Anderson tries repeatedly to nail down the question of consent and compensation. Altman repeatedly refuses to give a straight answer to the central questions.

Altman says (10: 30) that the models are so smart that, for most things people want to do with them, they’re good enough. He notes that this is true based on user expectations, but that’s mostly circular. As in, we ask the models to do what they are capable of doing, the same way we design jobs and hire humans for them based on what things particular humans and people in general can and cannot do. It doesn’t mean any of us are ‘smart enough.’

Nor does it imply what he says next, that everyone will ‘have great models’ but what will differentiate will be not the best model but the best product. I get that productization will matter a lot for which AI gets the job in many cases, but continue to think this ‘AGI is fungible’ claim is rather bonkers crazy.

A key series of moments start at 35: 00 in. It’s telling that other coverage of the interview sidestepped all of this, essentially entirely.

Anderson has put up an image of The Ring of Power, to talk about Elon Musk’s claim that Altman has been corrupted by The Ring, a claim Anderson correctly notes also plausibly applies to Elon Musk.

Altman goes for the ultimate power move. He is defiant and says, all right, you think that, tell me examples. What have I done?

So, since Altman asked so nicely, what are the most prominent examples of Altman potentially being corrupted by The Ring of Power? Here is an eightfold path.

  1. We obviously start with Elon Musk’s true objection, which stems from the shift of OpenAI from a non-profit structure to a hybrid structure, and the attempt to now go full for-profit, in ways he claims broke covenants with Elon Musk. Altman claimed to have no equity and not be in this for money, and now is slated to get a lot of equity. I do agree with Anderson that Altman isn’t ‘in it for the money’ because I think Altman correctly noticed the money mostly isn’t relevant.

  2. Altman is attempting to do so via outright theft of a huge portion of the non-profit’s assets, then turn what remains into essentially an OpenAI marketing and sales department. This would arguably be the second biggest theft in history.

  3. Altman said for years that it was important the board could fire him. Then, when the board did fire him in response (among other things) to Altman lying to the board in an attempt to fire a board member, he led a rebellion against the board, threatened to blow up the entire company and reformulate it at Microsoft, and proved that no, the board cannot fire Altman. Altman can and did fire the board.

  4. Altman, after proving he cannot be fired, de facto purged OpenAI of his enemies. Most of the most senior people at OpenAI who are worried about AI existential risk, one by one, reached the conclusion they couldn’t do much on the inside, and resigned to continue their efforts elsewhere.

  5. Altman used to talk openly and explicitly about AI existential risks, including attempting to do so before Congress. Now, he talks as if such risks don’t exist, and instead pivots to jingoism and the need to Beat China, and hiring lobbyists who do the same. He promised 20% of compute to the superalignment team, never delivered and then dissolved the team.

  6. Altman pledged that OpenAI would support regulation of AI. Now he says he has changed his mind, and OpenAI lobbies against bills like SB 1047 and its AI Action Plan is vice signaling that not only opposes any regulations but seeks government handouts, the right to use intellectual property without compensation and protection against potential regulations.

  7. Altman has been cutting corners on safety, as noted elsewhere in this post. OpenAI used to be remarkably good in terms of precautions. Now it’s not.

  8. Altman has been going around saying ‘AGI will arrive and your life will not much change’ when it is common knowledge that this is absurd.

One could go on. This is what we like to call a target rich environment.

Anderson offers only #1, the transition to a for-profit model and the most prominent example, which is the most obvious response, but he proactively pulls the punch. Altman admits he’s not the same person he was and that it all happens gradually, if it happened all at once it would be jarring, but says he doesn’t feel any different.

Anderson essentially says okay and pivots to Altman’s son and how that has shaped Altman, which is indeed great. And then he does something that impressed me, which is tie this to existential risk via metaphor, asking if there was a button that was 90% to give his son a wonderful life and 10% to kill him (I’d love those odds!), would he press the button? Altman says literally no, but points out the metaphor, and says he doesn’t think OpenAI is doing that. He says he really cared about not destroying the world before, and he really cares about it now, he didn’t need a kid for that part.

Anderson then moves to the question of racing, and whether the fact that everyone thinks AGI is inevitable is what is creating the risk, asking if Altman and his colleagues believe it is inevitable and asks if maybe they could coordinate to ‘slow down a bit’ and get societal feedback.

As much as I would like that, given the current political climate I worry this sets up a false dichotomy, whereas right now there is tons of room to take more responsibility and get societal feedback, not only without slowing us down but enabling more and better diffusion and adaptation. Anderson seems to want a slowdown for its own sake, to give people time to adapt, which I don’t think is compelling.

Altman points out we slow down all the time for lack of reliability, also points out OpenAI has a track record of their rollouts working, and claims everyone involved ‘cares deeply’ about AI safety. Does he simply mean mundane (short term) safety here?

His discussion of the ‘safety negotiation’ around image generation, where I support OpenAI’s loosening of restrictions, suggests that this is correct. So does the next answer: Anderson asks if Altman would attend a conference of experts to discuss safety, Altman says of course but he’s more interested in what users think as a whole, and ‘asking everyone what they want’ is better than asking people ‘who are blessed by society to sit in a room and make these decisions.’

But that’s an absurd characterization of trying to solve an extremely difficult technical problem. So it implies that Altman thinks the technical problems are easy? Or that he’s trying to rhetorically get you to ignore them, in favor of the question of preferences and an appeal to some form of democratic values and opposition to ‘elites.’ It works as an applause line. Anderson points out that the hundreds of millions ‘don’t always know where the next step leads’ which may be the understatement of the lightcone in this context. Altman says the AI can ‘help us be wiser’ about those decisions, which of course would mean that a sufficiently capable AI or whoever directs it would de facto be making the decisions for us.

OpenAI’s Altman ‘Won’t Rule Out’ Helping Pentagon on AI Weapons, but doesn’t expect to develop a new weapons platform ‘in the foreseeable future,’ which is a period of time that gets shorter each time I type it.

Altman: I will never say never, because the world could get really weird.

I don’t think most of the world wants AI making weapons decisions.

I don’t think AI adoption in the government has been as robust as possible.

There will be “exceptionally smart” AI systems by the end of next year.

I think I can indeed forsee the future where OpenAI is helping the Pentagon with its AI weapons. I expect this to happen.

I want to be clear that I don’t think this is a bad thing. The risk is in developing highly capable AIs in the first place. As I have said before, Autonomous Killer Robots and AI-assisted weapons in general are not how we lose control over the future to AI, and failing to do so is a key way America can fall behind. It’s not like our rivals are going to hold back.

To the extent that the AI weapons scare the hell out of everyone? That’s a feature.

On the issue of the attempt to sideline and steal from the nonprofit, 11 former OpenAI employees filed an amicus brief in the Musk vs. Altman lawsuit, on the side of Musk.

Todor Markov: Today, myself and 11 other former OpenAI employees filed an amicus brief in the Musk v Altman case.

We worked at OpenAI; we know the promises it was founded on and we’re worried that in the conversion those promises will be broken. The nonprofit needs to retain control of the for-profit. This has nothing to do with Elon Musk and everything to do with the public interest.

OpenAI claims ‘the nonprofit isn’t going anywhere’ but has yet to address the critical question: Will the nonprofit actually retain control over the for-profit? This distinction matters.

You can find the full amicus here.

On this question, Timothy Lee points out that you don’t need to care about existential risk to notice that what OpenAI is trying to do to its non-profit is highly not cool.

Timothy Lee: I don’t think people’s views on the OpenAI case should have anything to do with your substantive views on existential risk. The case is about two questions: what promises did OpenAI make to early donors, and are those promises legally enforceable?

A lot of people on OpenAI’s side seem to be taking the view that non-profit status is meaningless and therefore donors shouldn’t complain if they get scammed by non-profit leaders. Which I personally find kind of gross.

I mean I would be pretty pissed if I gave money to a non-profit promising to do one thing and then found out they actually did something different that happened to make their leaders fabulously wealthy.

This particular case comes down to that. A different case, filed by the Attorney General, would also be able to ask the more fundamental question of whether fair compensation is being offered for assets, and whether the charitable purpose of the nonprofit is going to be wiped out, or even pivoted into essentially a profit center for OpenAI’s business (as in buying a bunch of OpenAI services for nonprofits and calling that its de facto charitable purpose).

The mad dash to be first, and give the perception that the company is ‘winning’ is causing reckless rushes to release new models at OpenAI.

This is in dramatic contrast to when there was less risk in the room, and despite this OpenAI used to take many months to prepare a new release. At first, by any practical standard, OpenAI’s track record on actual model release decisions was amazingly great. Nowadays? Not so much.

Would their new procedures pot the problems it is vital that we spot in advance?

Joe Weisenthal: I don’t have any views on whether “AI Safety” is actually an important endeavor.

But if it is important, it’s clear that the intensity of global competition in the AI space (DeepSeek etc.) will guarantee it increasingly gets thrown out the window.

Christina Criddle: EXC: OpenAI has reduced the time for safety testing amid “competitive pressures” per sources:

Timeframes have gone from months to days

Specialist work such as finetuning for misuse (eg biorisk) has been limited

Evaluations are conducted on earlier versions than launched

Financial Times (Gated): OpenAI has slashed the time and resources it spends on testing the safety of its powerful AI models, raising concerns that its technology is being rushed out the door without sufficient safeguards.

Staff and third-party groups have recently been given just days to conduct “evaluations,” the term given to tests for assessing models’ risks and performance, on OpenAI’s latest LLMs, compared to several months previously.

According to eight people familiar with OpenAI’s testing processes, the start-up’s tests have become less thorough, with insufficient time and resources dedicated to identifying and mitigating risks, as the $300 billion startup comes under pressure to release new models quickly and retain its competitive edge.

Steven Adler (includes screenshots from FT): Skimping on safety-testing is a real bummer. I want for OpenAI to become the “leading model of how to address frontier risk” they’ve aimed to be.

Peter Wildeford: I can see why people say @sama is not consistently candid.

Dylan Hadfield Menell: I remember talking about competitive pressures and race conditions with the @OpenAI’s safety team in 2018 when I was an intern. It was part of a larger conversation about the company charter.

It is sad to see @OpenAI’s founding principles cave to pressures we predicted long ago.

It is sad, but not surprising.

This is why we need a robust community working on regulating the next generation of AI systems. Competitive pressure is real.

We need people in positions of genuine power that are shielded from them.

Peter Wildeford:

Dylan Hadfield Menell: Where did you find an exact transcription of our conversation?!?! 😅😕😢

You can’t do this kind of testing properly in a matter of days. It’s impossible.

If people don’t have time to think let alone adapt, probe and build tools, how they can see what your new model is capable of doing? There are some great people working on these issues at OpenAI but this is an impossible ask.

Testing on a version that doesn’t even match what you release? That’s even more impossible.

Part of this is that it is so tragic how everyone massively misinterpreted and overreacted to DeepSeek.

To reiterate since the perception problem persists, yes, DeepSeek cooked, they have cracked engineers and they did a very impressive thing with r1 given what they spent and where they were starting from, but that was not DS being ‘in the lead’ or even at the frontier, they were always many months behind and their relative costs were being understated by multiple orders of magnitude. Even today I saw someone say ‘DeepSeek still in the lead’ when this is so obviously not the case. Meanwhile, no one was aware Google Flash Thinking even existed, or had the first visible CoT, and so on.

The result of all that? Talk similar to Kennedy’s ‘Missile Gap,’ abject panic, and sudden pressure to move up releases to show OpenAI and America have ‘still got it.’

Discussion about this post

OpenAI #13: Altman at TED and OpenAI Cutting Corners on Safety Testing Read More »

tuesday-telescope:-is-the-james-webb-space-telescope-worth-$10-billion?

Tuesday Telescope: Is the James Webb Space Telescope worth $10 billion?

Welcome to the Tuesday Telescope. There is a little too much darkness in this world and not enough light—a little too much pseudoscience and not enough science. We’ll let other publications offer you a daily horoscope. At Ars Technica, we’ll take a different route, finding inspiration from very real images of a universe that is filled with stars and wonder.

Was the James Webb Space Telescope worth it?

Well, $10 billion is a lot of money. Even when spread over a couple of decades, that’s still a huge chunk of NASA’s annual science budget. (And given the recent Trump administration attack on NASA’s science budget, money is about to get a whole lot tighter.)

However, it is difficult to put a price on advancing our species’ understanding of the natural world and the wide Universe we’re swimming in. And Webb is doing an amazing job of that.

In 2009, NASA launched the Wide-field Infrared Survey Explorer, or WISE, mission to make infrared observations. This was the latest in a line of space-based infrared observatories, and it cost about 3 percent as much as the Webb telescope.

Two infrared views of NGC 1514. At left is an observation from NASA’s Wide-field Infrared Survey Explorer (WISE).

Credit: NASA, ESA, CSA, STScI, NASA-JPL, Caltech, UCLA, Michael Ressler (NASA-JPL), Dave Jones (IAC)

Two infrared views of NGC 1514. At left is an observation from NASA’s Wide-field Infrared Survey Explorer (WISE). Credit: NASA, ESA, CSA, STScI, NASA-JPL, Caltech, UCLA, Michael Ressler (NASA-JPL), Dave Jones (IAC)

Today’s photo concerns the planetary nebula NGC 1514. In 2010, using the WISE telescope, NASA project scientist Mike Ressler discovered “rings” around the planetary nebula. Now, thanks to Webb, the rings—which are likely composed of small dust grains, heated by ultraviolet light from a white dwarf star—can be seen clearly. And, oh my, they’re spectacular.

The clarity in the Webb photo, compared to what came before, is remarkable. So, is seeing the Universe in a new light worth $10 billion? I certainly think so, but I’m writing a weekly story called the Tuesday Telescope, so it’s safe to say I am biased.

Source: NASA, ESA, CSA, STScI, Michael Ressler (NASA-JPL), Dave Jones (IAC)

Do you want to submit a photo for the Daily Telescope? Reach out and say hello.

Tuesday Telescope: Is the James Webb Space Telescope worth $10 billion? Read More »

after-harvard-says-no-to-feds,-$2.2-billion-of-research-funding-put-on-hold

After Harvard says no to feds, $2.2 billion of research funding put on hold

The Trump administration has been using federal research funding as a cudgel. The government has blocked billions of dollars in research funds and threatened to put a hold on even more in order to compel universities to adopt what it presents as essential reforms. In the case of Columbia University, that includes changes in the leadership of individual academic departments.

On Friday, the government sent a list of demands that it presented as necessary to “maintain Harvard’s financial relationship with the federal government.” On Monday, Harvard responded that accepting these demands would “allow itself to be taken over by the federal government.” The university also changed its home page into an extensive tribute to the research that would be eliminated if the funds were withheld.

In response, the Trump administration later put $2.2 billion of Harvard’s research funding on hold.

Diversity, but only the right kind

Harvard posted the letter it received from federal officials, listing their demands. Some of it is what you expect, given the Trump administration’s interests. The admissions and hiring departments would be required to drop all diversity efforts, with data on faculty and students to be handed over to the federal government for auditing. As at other institutions, there are also some demands presented as efforts against antisemitism, such as the defunding of pro-Palestinian groups. More generally, it demands that university officials “prevent admitting students hostile to the American values and institutions.”

There are also a bunch of basic culture war items, such as a demand for a mask ban, and a ban on “de-platforming” speakers on campus. In addition, the government wants the university to screen all faculty hires for plagiarism issues, which is what caused Harvard’s former president to resign after she gave testimony to Congress. Any violation of these updated conduct codes by a non-citizen would require an immediate report to the Department of Homeland Security and State Department, presumably so they can prepare to deport them.

After Harvard says no to feds, $2.2 billion of research funding put on hold Read More »

should-we-settle-mars,-or-is-it-a-dumb-idea-for-humans-to-live-off-world?

Should we settle Mars, or is it a dumb idea for humans to live off world?

Mars is back on the agenda.

During his address to a joint session of Congress in March, President Donald Trump said the United States “will pursue our Manifest Destiny into the stars, launching American astronauts to plant the Stars and Stripes on the planet Mars.”

What does this mean? Manifest destiny is the belief, which was particularly widespread in 1800s America, that US settlers were destined to expand westward across North America. Similarly, then, the Trump administration believes it is the manifest destiny of Americans to settle Mars. And he wants his administration to take steps toward accomplishing that goal.

Should the US Prioritize Settling Mars?

But should we really do this?

I recently participated in a debate with Shannon Stirone, a distinguished science writer, on this topic. The debate was sponsored by Open to Debate, and professionally moderated by Emmy Award-winning journalist John Donvan. Spoiler alert: I argued in favor of settlement. I hope you learned as much as I did.

Should we settle Mars, or is it a dumb idea for humans to live off world? Read More »

monthly-roundup-#29:-april-2025

Monthly Roundup #29: April 2025

In Monthly Roundup #28 I made clear I intend to leave the Trump administration out of my monthly roundups, for both better and worse, outside of my focus areas. Again, this does not mean I don’t have a lot to say or that those questions don’t matter. It means you should not rely on me as your only source of news and I pick my battles.

They are not making this easy.

I am going to stick to my guns. Trade and trading very much inside my focus areas, but for economics roundups, and in extreme cases AI roundups. Besides, you don’t need me to tell you that tariffs not only impose immense economic costs but also fail to achieve their primary policy aims and foster political dysfunction along the way. That question should already be answered by my t-shift. I do have a word about things related to a potential expansion (I can’t believe I’m typing this!) of the Jones Act. And I’ll deal with certain crime-related things when I do my first crime roundup.

  1. Bad News.

  2. Antisocial Media.

  3. Technology Advances.

  4. Variously Effective Altruism.

  5. Government Working.

  6. Jones Act Watch.

  7. While I Cannot Condone This.

  8. Architectural Musings.

  9. Quickly, There’s No Time.

  10. Don’t Sell Your Soul, You Won’t Get Paid.

  11. What To Do Instead.

  12. Good News, Everyone.

  13. We’re Elite, You’re Not.

  14. Enjoy It While It Lasts.

  15. For Your Entertainment.

  16. An Economist Gets Lunch.

  17. I Was Promised Flying Self-Driving Cars and Supersonic Jets.

  18. Gamers Gonna Game Game Game Game Game.

  19. Sports Go Sports.

  20. The Lighter Side.

23andMe is going into bankruptcy. It would seem a wise precaution to download and then delete your data if it’s there, which takes a few days to do, in case the data falls into the wrong hands or is lost forever.

Young men who make 9 figures by default get driven crazy, all checks and balances on them now gone.

This graphic is quite good.

That’s a variation on this classic, worth revisiting periodically as a reminder:

A claim that banning smoking in bars increases alcohol consumption by ~5% without decreasing smoking. I presume the increased alcohol consumption is because the bar became a much better experience without all the smoking? It seems bizarre that this wouldn’t decrease smoking, especially over the long term.

Beware communities that encourage irresponsible risk taking and dismiss those who do not endanger themselves. It can be good if targeted well: There are places, like founding startups and putting yourself out there for romance, where people take far too little risk and it is often good to encourage people to take more. But this very much doesn’t apply to, for example, talk about financial investments.

If you use Twitter via the For You page, You Fool. Yet many of you do exactly that.

I even hear people complaining about ‘the algorithm’ without doing the obvious and switching to chronological feeds and lists. That’s on you.

As far as I know this is the size-adjusted record, yes, and well earned.

Kelsey Piper suggests Twitter’s conversational meta favors long tweets because they attract thoughtful people, plus you get the bonus of QTs saying tldr. That hasn’t been my experience, but I also try to have those conversations elsewhere.

Twitter is restricting the ability to see who other people are following. This is not obviously bad. I would like to be able to follow people without worrying about what it looks like. In practice I don’t care but there are people for whom this matters.

A great question, why is there such huge variance in self-checkout system quality? We have essentially solved self-checkout technology yet half of stores have multiple employees whose job is to fix errors because their terrible software doesn’t work. So yeah, diffusion can be hard.

I don’t want to zipline, unless it’s this zipline:

Ryan Peterson: While everyone in business is busy losing their minds about tariffs, @zipline just quietly launched a logistics revolution in Dallas, TX. You can now get anything at a Walmart delivered to your front door by drone, with a flight time under 2 minutes for most orders.

@DanielLurie We gotta legalize drone delivery in San Francisco.

If you live in Dallas download the app here and starting buying stuff from Walmart before the prices go up!

Nearcyan rants about how awful the developer experience is on Google Play, someone from Google reaches out and the related problems get instantly solved. This can directly be linked to Google’s incentive structures not rewarding anyone for making existing products work properly.

Andrej Karpathy provides ‘no-brainer’ suggestions for personal security, such as having a distinct credit card for every online transaction and using a virtual mail service.

The full agenda he spells out as the baseline minimum seems like an obviously massive overkill level of security for almost anyone. What is Andrej’s hourly rate? Some of this is worthwhile, but as Patrick McKenzie reminds us, the optimal rate of fraud is not zero.

It actually did make me feel better about Signal until everyone saying that caused me to learn about all the ways various other apps compromising your phone can also compromise Signal.

Alice Maz: the good part of the signal leak is it implies a bunch of people with ts/sci access don’t know anything we don’t that would make them distrust signal.

My current model is that Signal is the best low-effort secure communication method, but not on its own good enough that you should assume that using Signal on a normal phone is an actually secure communication method against someone who cares.

Signulll warns against artificial scarcity. I am a lot less skeptical.

Signulll: one of the most common mistakes in product thinking is the belief that you can reintroduce artificial scarcity to improve something that has already been made abundant—especially by the internet (& the internet makes almost everything feel abundant). after people have experienced the infinite, you can’t shove them into a box & expect them to enjoy it. the brain doesn’t forget oxygen.

this shows up in products that add fake constraints: one post a day, one profile at a time, one action per hour. the assumption is that limiting access will restore value or mystery. it doesn’t. once the user has tasted abundance, constraint doesn’t feel elegant or intentional—it feels broken. worse, it feels patronizing.

artificial scarcity almost never works unless it’s intrinsic to the product. you either have to make abundance feel valuable (curated, contextual, high signal), or find a new mechanic entirely. nostalgia for constraint is not strategy. it’s just denial of the current physics of the medium.

this is an extension to this. i see this type of thinking all the time, particularly when people who are frustrated at the current dynamics of any given network (e.g. a dating app etc.)

Nogard: Agree and great point. Modern dating apps unleashed an irrational level of abundance and optionality—so much that it bled into the physical world, warping its constraints. You can’t trick anyone with artificial scarcity; they’ve already tasted the forbidden fruit. It’s like trying to enjoy tap water after a decade of chugging Monster Energy.

Games, especially free mobile games, are chocked full of artificial scarcity. For the most successful games, everything is limited or on a timer. People find this highly addictive. They eat it up. And often they also pay quite a lot to get around those restrictions, that’s often the entire business model. So there’s a big existence proof.

What games try to do is justify the artificial scarcity. When this is done well it works great. So the question now becomes, can you make the artificial scarcity fun and interesting? Can you make it addictive, even? A maximization problem of sorts? Or tie it into your ‘game mechanics’?

I think you absolutely can do all that in many cases, including in dating apps.

First of all, limited actions really do restore value to that action. The frictions and value this introduces can do many useful things. The ideal friction in many cases is money, the amounts can be quite small and refundable and still work. But in cases where you cannot use money, and there are many good reasons to not want to do that, using an artificially scarce currency seems great?

If I was dating, I would rather be on a dating app where I can only match once a day and those I match with know this, than one in which I don’t have that restriction.

Scott Alexander can’t let go of the drowning child argument, going highly technical around various details of hypothetical variations in remarkably dense fashion without seeming that actually interested in what is centrally going on.

Kelsey Piper discusses the administrative nightmare that is trying to use your home to do essentially anything in America. There is no reason for this. If people could easily run microschools and tea shops out of their homes America would be a much better place.

Massachusetts bans heavy-duty truck sales until the trucks can go electric.

Claim that TSA employees are actively happy about the attacks on their union, because the union was preventing the purging of bad actors. I wouldn’t have predicted this, but it shouldn’t be discounted as a possibility. Many comments confirmed that this has recently improved the TSA experience quite a bit. Yes, we shouldn’t need the service they provide, but we’ve decided that we do so better to do a decent job of it.

RFK Jr. proposes banning cell phones in schools… because of the ‘electric magnetic radiation’ he hallucinates they give off.

Jesse Singal: hopefully just the start of RFK Jr making good proposals for hilarious reasons

“We should promote whole grains, because the Illuminati has a stranglehold on processed carbs”

“Everyone should get 30 mins of exercise a day to stay a few steps ahead of your own shadow-daemon”

A word of warning, in case you think the tariffs were not great, that we might be about to not only not repeal the Jones Act but to do things that are vastly worse:

Ryan Peterson: On April 17th the U.S. Trade Representative’s office is expected to impose fees of up to $1.5M per port call for ships made in China and for $500k to $1M if the ocean carrier owns a single ship made in China or even has one on order from a Chinese shipyard.

Ocean carriers have announced that to reduce the fees they will skip the smaller ports like Seattle, Oakland, Boston, Mobile, Baltimore, New Orleans, etc. Some carriers have said they’ll just move the capacity serving the U.S. to other trade lanes altogether.

This would be horrible for jobs in and around those ports, and really bad for companies, both importers and exporters, using those ports. Huge extra costs will be incurred as trucks and trains run hundreds of extra miles to the main ports on each cost.

Similarly the major ports (LA, Long Beach, Houston, and New York) will be unable to keep up with the flood of extra volumes and are likely to become congested, similar to what we saw during Covid.

The craziest part of the original proposal is a requirement that within 7 years 15% of U.S. exports must travel on a ship that’s made in America and crewed by Americans.

There are only 23 of American made and crewed container ships in the world today, and they all service domestic ocean freight (Alaska, Hawaii, Guam, Puerto Rico, etc). They’re all tiny compared to today’s mega ships, and they’re not even sailing to overseas ports.

The U.S. did not produce any container ships in 2024. And the number we produce in any given year rounds to zero. The reason is that American made container ships of 3,000 TEUs cost the same price as the modern container ships from China of 24,000 TEUs.

Colin Grabow: The last time a US shipyard built Suezmax tankers (2004-2006) the price was $210 million each. Now we’re apparently at $500 million with a 6x delta versus the foreign price.

The Jones Act is caught in a vicious circle. Costs spiral, leading to lowered demand for new ships, which drives costs even higher. There’s very little appetite for ships at these prices. The law is self-destructing.

The full proposal to require US ships would drastically reduce American exports (and even more drastically reduce American imports). As in, we’d have to go without most of them, for many years. There’s no way to quickly ramp up our shipyards sufficiently for this task, even if price was not a factor. The port of call fees are a profoundly terrible idea, but the ship origin requirements are riot-in-the-streets-level terrible.

The rhetoric is largely about Chinese-built vessels being terrible or a security risk. Even if one buys that, what one could do, both here and for the original Jones Act, is simply to restrict the specific thing you don’t like: Chinese-built, Chinese-flagged or Chinese-owned ships. Or even require the ships come from our allies. It wouldn’t be a free action, but we could substitute into Japanese, South Korean or European ships. Whereas if you demand American ships? They don’t exist. And having 100 years of such restrictions domestically has only ensured that.

It seems highly reasonable to be confused as to why this happened:

Maxwell Tabarrok: This is actually pretty confusing to me. The Jones Act should be a subsidy to domestic shipbuilding but the industry is completely dead.

I’ve written before that this might happen when protection creates a domestic monopoly, but I’m not so convinced by my own explanation.

The answer is that when you create a domestic monopoly or oligopoly without export discipline, you allow domestic industry to not compete on the international market, and instead they find it more profitable to service only the domestic protected market. We can’t compete on the international market even if we want to, because others offer large subsidizes and are already more efficient in various ways, so no one wants our ships and we can’t use that to improve or scale.

Unfortunately, the domestic market is not large enough to generate robust competition that creates reasonably priced ships, which decreases demand and causes shipbuilders to get less competitive still, pushing prices even higher, until the point where domestic ships are so expensive that more than a handful of Jones Act ships aren’t profitable. So at the end of the death spiral, we don’t make them anymore.

If you decide we need a domestic shipbuilding industry, there is a known playbook in these spots, which is to offer large subsidies and also enforce export discipline, as for example South Korea did during its development. No one seems to want to do that.

A discussion about many things, but the later more interesting part is about dealing with cognitive decline. In particular, a sadly common pattern is that you have someone who used to be unusually intelligent and capable. Then, for a variety of reasons including getting older and a toxic information and reward environment, and because having to ‘act dumb’ in various ways actually makes you dumb over time, and often probably drug use, they lose a step, and then they lose another step.

Now they are still well above average for intelligence and capability, but their self-image and habits and strategies are designed for their old selves. So they take on too much, in the wrong ways, and lose the thread.

Tantum has a mostly excellent thread about the difference between a rival and an enemy, or between positive-sum rivalry and competition versus zero-sum hostility, although I disagree with the emphasis he chosen for the conclusion.

Megan McArdle reminds us that Levels of Friction are required elements of many of civilization’s core systems, and without sufficient frictions, those systems break.

Dilan Esper: i think people don’t realize the extent to which easier and cheaper travel, the Internet, and fake asylum applications have wrecked the international asylum system carefully built after the Holocaust. Poland is a particularly sobering indicator of this.

Megan McArdle: We underestimate how many policies are only feasible because various frictions prevent abuse. When the frictions are lubricated, the policies collapse.

Alex Tabarrok asks, if we were confident Covid-19 was a lab leak, what then? His first conclusion is we should expect more pandemics going forward. That’s not obvious to me, because it means less natural pandemics and higher risk of lab-originated pandemics. It is within our power to prevent lab-originated pandemics but not natural pandemics, and indeed Alex’s core suggestions are about ensuring that we at least do our research under sufficiently safe conditions – I’d prefer that we not do it at all. Note that Alex would be right about expectations if we already had confidence in the rate of natural pandemics, but I think we largely don’t know and it may be changing.

The kind of study one instinctively assumes won’t replicate says that those who believe in the malleability specifically of beauty will therefore take more risk, as in if you give people articles showing this then they’ll take more risk, but malleability of intelligence doesn’t have the same impact. The theory is that this is mediated through optimism?

Matt Lakeman asks, quite literally from a real example: How Much Would You Need to be Paid to Live on a Deserted Island for 1.5 Years and Do Nothing but Kill Seals? Plus another year in transit to boot. He estimated $2-4 million, and the real workers were clearly paid far less. But that’s the thing about such jobs – you don’t have to pay anything like what the median person would need to take the job. Someone will do it for a lot less than that, and I’m guessing the median young person would come in well under $2 million already.

The ‘vibe shift’ arrives at Princeton, and certainly on Twitter.

Paul Graham: If Princeton students think the “vibe shift” is real, it is, because if it has reached them, it has reached pretty much everyone.

I don’t buy that this means it has reached everyone. The Ivies and Twitter are both places where the future is more highly distributed, that respond more to vibe shifts. It would make perfect sense for such places to feel a vibe shift, while students at (let’s say) Ohio State or other residents of Columbus felt relatively little change.

Are Monte Carlo algorithms hacks to be avoided? They are hacks, and randomization is dangerous, this is true. But sometimes, they’re the only way to get an estimate given the amount of complexity. There is also an underused variation, which I call the Probability Map. This is where you can simplify the set of relevant considerations sufficiently that you can track the probability of every possible intermediate state. To work this usually requires not caring about path dependence, but this simplification is more accurate more often than you would think.

A cool note from Christopher Alexander, I’m still a little bummed I never got to properly review A Pattern Language and it’s probably too late now.

A Pattern Language:

179. Alcoves

180. Window Place

181. The Fire

185. Sitting Circle

188. Bed Alcove

191. The Shape of Indoor Space

205. Structure Follows Social Spaces

A Time to Keep: “Make bedrooms small, and shared spaces big.” – CA

If you want a family to be together, don’t isolate them in giant bedrooms. Draw them toward the hearth, the table, the common room.

I keep my bedroom large, but that is because I work and exercise there. The isolation effect is intentional in those spots. In general, you want the bedroom to be the minimum size to accomplish its specific goals, and to spend the rest of your space on the common areas.

We definitely need a word for this. Claude suggested ‘attention saturation’ or ‘bid overflow’ but they’re two words and also not quite right.

Nick Cammarata: I’m surprised we don’t have a word for the shift when the bids for your time goes above your supply for time vs before, it feels like a pretty fundamental life shift where it changes your default mode of operation.

like if you get 200 bids for your time a week vs 2 the set of things you need to do to thrive are pretty different, different risks and ways to play your hand, need to defend energy in new ways

it ofc depends on your psychology too, you might be built to handle X amount of bids per week, it’s less about the absolute amount of bids and more the ratio of bids to what you can easily handle.

I’ve gone through this a number of times. I have a system where I determine how to allocate time, and how to respond to bids for time, both from people and from things. Then suddenly you realize your system doesn’t work, quickly, there’s no time. There needs to be a substantial shift and a lot of things get reconsidered.

I kind of want to call this a ‘repricing,’ or for full a Time Repricing Event? As with other things, you have menu costs, so you only want to reprice in general when things are sufficiently out of whack.

My experience matches Kelsey Piper’s here.

Kelsey Piper: every single time I have witnessed people decide to compromise on character and overlook major red flags because ‘hey, he’s good at winning’, they have regretted it very dearly and in very short order

cutting corners, lying, and cheating will get you ahead in the short run, and sometimes even in the long run, but tying your own fortunes to someone who behaves this way will go very badly for you.

if you sell your soul to the devil you’ll pay more than you intended to, and buy less.

Pursuing all-in soulless strategies can ‘work,’ although of course what does it profit a man if he should gain the whole world and all that. The person doing the lying and cheating will sometimes win out, in terms of ‘success.’ If you are also centrally in the lying and cheating business, it can sometimes work out for you too, in those same terms.

However. If you are not that, and you hitch your wagon to someone who is that in order to ‘win’? Disaster, almost without exception. It won’t work, not on any level.

I know that sounds like the kind of thing we all want to be true when it isn’t. So yes, you are right to be suspicious of such claims. The thing is, I think it really is true.

Paul Graham’s latest essay is What To Do. His answer, in addition to ‘help people’ and ‘take care of the world’ is ‘make good new things.’ Agreed.

Paul Graham: So there’s my guess at a set of principles to live by: take care of people and the world, and make good new things. Different people will do these to varying degrees. There will presumably be lots who focus entirely on taking care of people. There will be a few who focus mostly on making new things.

But even if you’re one of those, you should at least make sure that the new things you make don’t net harm people or the world. And if you go a step further and try to make things that help them, you may find you’re ahead on the trade. You’ll be more constrained in what you can make, but you’ll make it with more energy.

On the other hand, if you make something amazing, you’ll often be helping people or the world even if you didn’t mean to. Newton was driven by curiosity and ambition, not by any practical effect his work might have, and yet the practical effect of his work has been enormous. And this seems the rule rather than the exception. So if you think you can make something amazing, you should probably just go ahead and do it.

I’m not even sure it’s on you to make sure that you don’t do net harm. I’ll settle for ensuring you’re not going catastrophic harm, or at minimum that you’re not creating existential risks, say by creating things smarter and more capable than humans without knowing how to retain control over the resulting future. Oh, right, that.

Dean Ball writes about his intellectual background and process. It’s a completely different process from mine, focusing on absorbing lots of background knowledge and understanding intellectual figures through reading, especially books. It reminded me of Tyler Cowen’s approach. One thing we all have in common is we intentionally play to our strengths. If I tried to do what they do, it wouldn’t work.

Connections follow power laws and the best ones are insanely valuable.

Alessandro: I believed the quote in Caplan’s tweet [that rich kids mostly succeed because of genetics], and then I ended up ~doubling my lifetime expected earnings because of a lucky personal connection.

It would be unBayesian of me not to update my prior!

Properly optimizing for the actions that maximize chances of making the most valuable connections is difficult, but highly valuable. Blogging definitely helps.

Federal complaint alleges that construction equipment rental firms have engaged for 15 years in a widespread cartel to limit capacity and drive up construction costs. I file this under Good News because we know how expensive it is to build and this could mean there is an easy way to make that number go down.

In developing countries, for those with college degrees, having low-skill job experience makes employers 10% more interested in hiring you versus not having any experience at all. Work it.

Acid rain is the classic example of a problem that was solved by coordination, thus proving that such coordination only solves imaginary problems. Many such cases.

A great question:

Patrick Collison: In which domains are elite practitioners celebrating the kids being better than ever before? Would love to read about a few instances. (Not just where there’s one particular genius, such as Ashwin Sah’s recent success, but where “the kids” as some kind of aggregate appear to be improving.)

The first category, which had a lot of responses, was that ‘the kids’ are better in particular bounded domains with largely fixed rules. My model agrees with this. If it’s a bounded domain with clear rules where one can be better by following standard practices and working harder, the kids are alright, and better than ever.

Tyler Cowen: The kids are clearly better in chess.

Ulkar: definitely in classical music. the sheer number of outstanding young musicians is probably higher than ever before in history

Patrick McKenzie: Japanese language acquisition for non-heritage speakers. (I non-ironically think it’s primarily YouTube’s doing.)

Eric Gilliam: In American wrestling, high schoolers are getting *waybetter. This year at Olympic trials, a few ~16-year-olds took out some NCAA champs. And those guys still lose some hs matches! Guesses why include more kids getting elite coaching early and internet instructionals.

The second category was founders, and Dwarkesh Patel said ‘big picture thinkers.’ Paul Graham was the most obvious one to say it but there were also others.

Paul Graham: Young startup founders seem better than ever, though I realize this is a bold claim to make to you.

Patrick Collison: Who’s the best founder under 28? I’m deliberately choosing an arbitrary age to exclude Alex Wang, who is extremely impressive, but I feel like years past usually had a super young (<28) clear industry leader. (Zuckerberg, Dell, Jobs, Gates, Andreessen, etc.)

My hypothesis there is that we have systematized VC-backed YC-style founders. The rules are a lot easier to discover and follow, the track record there makes it a career path one can essentially plan on in a way that it wasn’t before, and the people who gate progress with money are there to reward those who internalize and follow those principles.

This makes Dwarkesh the only one I saw whose answer didn’t fit into the model that ‘kids these days’ are excellent at rule learning and following and working hard on that basis, but this has left little room for much else. I don’t know how this would lead to there being more or better big picture thinkers. Also I’m not at all convinced Dwarkesh is right about this, I suspect it’s that the current crop is easy for him to pick up upon and we forget about many from older crops.

As I mentioned when I wrote about taste, it is usually better to like and enjoy things.

Aprii: enjoying things rules

  1. it is good to enjoy things

  2. it is not bad to enjoy things

  3. it is okay, though usually not ideal, to not enjoy things

There are some things i will look down on someone for enjoying but most of the time i do that i think it’s a failing in my part.

Anna Magpie: Counterpoint: Enjoying things that are bad for you often results in them displacing things that are good for you but slightly less enjoyable (for example I am currently on Twitter instead of reading a novel)

Aprii: in an ideal world this is solved by enjoying novels more.

The cases where you want to not like things is where liking them would cause you to make bad choices, which are more expensive than the value you would get, and you are unable to adjust for this effect because of bias or because it gives you a bad world model.

The canonical example of the first case is heroin. The common pattern, which also applies to novels versus Twitter, tends to be hyperbolic discounting. You want to like things that have long term benefits relatively more, and this often rises to the point where it would be better to like other things less. Another risk is that you end up doing too little exploring and too much exploiting.

The second case is where the value is in choosing, so liking everything can muddle your ability to choose. It doesn’t have to, if you can differentiate between what you like and what you predict others will like. But that can be tricky.

Don’t say you weren’t warned, as Roku tests autoplay ads on its home screen.

I find it mind boggling to think such ads are efficient. They are beyond obnoxious, and there are many customers who would act similarly to Leah:

Leah Libresco Sargeant: I have kids and a @Roku TV

If they autoplay video ads on boot up, we will absolutely ditch it and find a new tv. I’m not using any device or service with the potential to autoplay violent tv or movie ads the second you hit the power button.

Even without that concern, such obnoxiousness in your face is unacceptable. My current LG TVs do have some ads on the home screen, but they’re always silent, they never stop you from navigation, and even then I hate them so much. If they forced me to interact with the ad in order to proceed? Yep, TV straight in the trash, or down to goodwill. If the ads are so bad people don’t want your TV for $0, how much are the ads worth to you, exacctly?

We also need to have a word about certain highly obnoxious autoplay and ad settings inside TV apps. As in, every time I go to Paramount+, I am careful to actively mute the television first, or I know I am going to regret it. Then you have to be sure to skip other ads. Why would you make opening your own app this stressful? Yet this seems to be how much I will endure to keep watching Taylor Tomlinson.

And then there’s Prime Video, which will have multi-minute blocks of unskippable obnoxiousness during movies, and doesn’t even use caution with who gets to do that:

Sarah Constantin: I’ve been unpleasantly surprised to see the ads on @PrimeVideo include what I’d normally think of as “vice” or “trashy” products.

Sketchy weight loss supplements, shady-looking finance apps marketed in a gambling-esque “surprise free money” way, etc.

I would have assumed that somebody buying ads on what is now the equivalent of a major television network would have a certain amount of “taste” such that they wouldn’t be willing to advertise exploitative products to a super-broad audience.

Differing opinions about Severance. I am on the side of masterpiece, I think Blow’s objection here is wrong and expect it to stick the landing and be my 8th Tier 1 show.

I’ve also been watching The White Lotus for the first time, which is also excellent and I expect to put it in Tier 2.

I still have a few Beli invites if anyone wants one. Beli lets you rank restaurants via Elo, tracks your preferences and gives you predictive ratings. I am a little worried they still haven’t integrated Beli with web or any good export mechanism so I can’t easily feed everything into an LLM or save it elsewhere, but I’ve found it to be useful for research and search and also for note taking.

Looks Mapping, a service that tells you how hot the people reviewing a restaurant on Google Maps tend to be. There was not an obvious correlation here with which restaurants are worth going to.

This list of the best croissants in NYC is unusually good, many excellent picks, including my current top two of Tall Poppy and Alf Bakery (in that order).

It’s happening! Eventually. Probably. I hope?

Bigad Shaban:

  1. Waymo gets green light to start “mapping” San Francisco airport in hopes of ultimately using its driverless cars to pick up and drop off passengers at SFO. Mapping process will train fleet where to go and will be done with human safety drivers behind the wheel.

  2. After mapping, cars will then need to go on test drives at SFO without a driver. An official decision on ultimately granting SFO access to Waymo’s driverless cars still hasn’t been made.

  3. This mapping process could take weeks or even months and allows for two cars to be at the airport at a time. No passengers can be inside — just the safety driver. If Waymo gets approved to pick up & drop off passengers, there’s still no timeline on when that could begin.

Paula: as someone who either walks or takes a waymo, these announcements are like when you unlock a new area in an open-world game.

Waymo: We’re pleased to share that the CA DMV gave Waymo approval to operate fully autonomously in expanded South Bay areas, including almost all of San Jose!

While the public won’t have access at this time, we’re working closely with local officials, emergency responders, and communities to safely expand driving operations.

It’s happening in Washington, DC too, coming in 2026.

I say this utterly seriously: Whoever runs for mayor on the ‘bring Waymo to NYC whatever it takes’ platform gets my vote, even if it’s Andrew Cuomo, I don’t care. Single issue voter.

They’re also making progress on being less insane about age requirements? They’re trying out ‘teen accounts’ for ages 14-17, ‘with parental permission.’

Timothy Lee: I hope they lower the minimum age over time. There’s no reason a 12 year old shouldn’t be able to ride in a Waymo alone.

Parents (especially of girls) might feel more comfortable if there is no driver. Also in the long run Waymos will hopefully be much cheaper than a conventional taxi.

I suppose you need some age requirement but I also presume it should be, like, 6.

As he periodically does, Timothy Lee also checks Waymo’s few crashes. There were 38 between July 2024 and February 2025. Not only are Waymos crashing and injuring people far less often than human drivers, with about 90 percent fewer insurance claims, when there is an incident it is almost always unambiguously a human driver’s fault. The question even more than before is not whether to allow Waymos everywhere all the time, it is whether humans should be driving at all.

Timothy Lee: A large majority of serious Waymo crashes are “Waymo scrupulously following the law, lunatic human driver breaks the law and crashes into the Waymo.”

Waymo still has one big problem. It obeys traffic laws and drives ‘too safely,’ which means that the drive that takes 41 minutes in an Uber or Lyft can take 57 in a Waymo. This example might also be geofencing, but the problem is real. There probably isn’t anything we can do about it while we are holding self-driving cars to insanely higher safety standards than human drivers.

In the social media age, the red card rule applies to attention, if you’re innovative everything works the first time. Thus, we have tech workers leaving notes in Waymos, looking to hire software engineers or find hot dates. That’s a great idea, but the reason it scaled was social media, and that presumably won’t work again, not unless your notes are increasingly bespoke. If I was Waymo, my policy would be to allow this and even have a protocol, but restrict it to handwritten notes.

Sandy Peterson has been having fun looking back on Age of Empires.

Famed King of Kong (which is a great movie) villain and by all accounts notorious video game cheater Billy Mitchell won a defamation lawsuit against YouTuber Karl Jobst in Australia. It turns out that if you incorporate a specific false claim into an attack narrative and general crusade, you can get sued for it even if you did begrudgingly take that particular fact back at some point.

In a Magic match, is it okay to not kill your opponent in order to take time off the clock, if you’re sure it would work and there’s no in-game advantage to waiting?

Discussions ensue. I see a big difference between being illegal versus unethical. As I understand the rules, this is technically legal.

The argument for it being fine is that you are never forced to play your cards, and they are welcome to concede at any time, although they have no way of knowing that they can safely concede.

But you are making a play, that is otherwise to your disadvantage, in order to bleed the clock. I think that’s basically never okay. And when I see people broadly thinking it is okay, it makes me much less interested in playing. It’s a miserable experience.

After reflection and debate, my position is that:

  1. It is always honorable to make a play to make the game finish faster.

  2. You are under no obligation to sacrifice even a tiny amount of win percentage in the game or match to make the game finish faster, if you don’t want to do that.

  3. You are dishonorable scum if you play in order to make the game finish slower, in a way you would not behave if this was a fully untimed round.

  4. That is different from what is punishable cheating. Which is fine.

Also making me much less interested is the lack of a banned list. As I understand it, cheating is rather rampant, as you would expect without a banned list.

Yankees invent a new type of bat, thanks that one guy who worked on it.

Will Manidis: the yankees hired a single smart guy to think about baseball bats for a year and he fundamentally changed the game forever

the efficient market hypothesis is an total lie. the most important problems in the world go unsolved because no one spends the time to think about them

“I’m sure someone has thought about this before and found out it’s impossible”

no they haven’t, no one has spent the time. most “hard work” is spent on stamp collecting, neat little procedural iterations on things that we already know are possible. just spend the time thinking

Chinese TikTok claims to spill the tea on a bunch of ‘luxury’ brands producing their products in China, then slapping ‘Made in Italy’ style tags on them. I mean, everyone who is surprised raise your hand, that’s what I thought, but also why would the Chinese want to be talking about it if it was true? I get it feels good in the moment but you want brands to be able to count on your discretion.

A Twitter thread of great wholesome replies, recommended, more please. Here’s a note on #12:

Lindsay Eagar (this was #12): I brought my four-year-old to meet my boyfriend at the aquarium. She said, “I love you and want you to be my dad.”

I nearly died, but he said, “How about I pretend to be your dad for today?” and then they held hands the whole day.

We got married, he adopted her, he’s her dad.

Visakan Veerasamy: great example of someone receiving a large ask and appropriately right-sizing it into something smaller (and eventually delivering on the large ask too, but that one day was perfect even and especially if he couldn’t follow through for whatever reason)

simply existing as a person like this is a public service to everyone around you. people learn to get better at asking for help + helping others when everyone can correct/transmute/scale requests appropriately. this then allows the rate-of-help to increase, which is wealth

if you look up any unusually successful scene, IME you’ll always find some behind-the-scene manager who was the de-facto mayor who’s like this, that everyone goes to for counsel, to resolve disputes, etc. people like this keep scenes and communities together longer than normal

A good question.

Whole thing feels kind of sus.

Speaking of which…

More Perfect Union: DoorDash and Klarna have signed a deal where customers can choose to pay for food deliveries in interest-free installments or deferred options aligned with payday schedules.

Axial Wanderer: We are selling pad thai in installments to willing buyers at the current fair market price

OldWorld Marc: But John, if we do that, no one will ever finance his kung pao chicken through us ever again!!

Maselaw: They can slow you down. But they can’t stop you. It’s your burrito to sell.

0xtopfloor: “Here’s Margot Robbie in a bubble bath to explain”

Checks out.

New fingerprint lock can literally be opened in 15 seconds with a screwdriver, by straight taking off its screws.

You’d think so, but I am highly confident you would be wrong:

Andy Kaczynski: This is quite the quote

Scott Lincicome:

Discussion about this post

Monthly Roundup #29: April 2025 Read More »

live-demos-test-effectiveness-of-revolutionary-war-weapons

Live demos test effectiveness of Revolutionary War weapons


not just men with muskets

Pitting the Brown Bess against the long rifle, testing the first military submarine, and more.

The colonial victory against the British in the American Revolutionary War was far from a predetermined outcome. In addition to good strategy and the timely appearance of key allies like the French, Continental soldiers relied on several key technological innovations in weaponry. But just how accurate is an 18th-century musket when it comes to hitting a target? Did the rifle really determine the outcome of the war? And just how much damage did cannon inflict? A team of military weapons experts and re-enactors set about testing some of those questions in a new NOVA documentary, Revolutionary War Weapons.

The documentary examines the firing range and accuracy of Brown Bess muskets and long rifles used by both the British and the Continental Army during the Battles of Lexington and Concord; the effectiveness of Native American tomahawks for close combat (no, they were usually not thrown as depicted in so many popular films, but there are modern throwing competitions today); and the effectiveness of cannons against the gabions and other defenses employed to protect the British fortress during the pivotal Siege of Yorktown. There is even a fascinating segment on the first military submarine, dubbed “the Turtle,” created by American inventor David Bushnell.

To capture all the high-speed ballistics action, director Stuart Powell relied upon a range of high-speed cameras called the Phantom Range. “It is like a supercomputer,” Powell told Ars. “It is a camera, but it doesn’t feel like a camera. You need to be really well-coordinated on the day when you’re using it because it bursts for, like, 10 seconds. It doesn’t record constantly because it’s taking so much data. Depending on what the frame rate is, you only get a certain amount of time. So you’re trying to coordinate that with someone trying to fire a 250-year-old piece of technology. If the gun doesn’t go off, if something goes wrong on set, you’ll miss it. Then it takes five minutes to reboot and get ready for the new shot. So a lot of the shoot revolves around the camera; that’s not normally the case.”

Constraints to keep the run time short meant that not every experiment the crew filmed ended up in the final document, according to Powell. For instance, there was one experiment in a hypoxia chamber for the segment on the Turtle, meant to see how long a person could function once the sub had descended, limiting the oxygen supply. “We felt there was slightly too much on the Turtle,” said Powell. “It took up a third of the whole film.” Also cut, for similar reasons, were power demonstrations for the musket, using boards instead of ballistic gel. But these cuts were anomalies in the tightly planned shooting schedule; most of the footage found its way onscreen.

The task of setting up all those field experiments fell to experts like military historian and weapons expert Joel Bohy, who is a frequent appraiser for Antiques Roadshow. We caught up with Bohy to learn more.

Redcoat re-enactors play out the Battle of Lexington. GBH/NOVA

Ars Technica: Obviously you can’t work with the original weapons because they’re priceless. How did you go about making replicas as close as possible to the originals?

Joel Bohy: Prior to our live fire studies, I started to collect the best contemporary reproductions of all of the different arms that were used. Over the years, I’ve had these custom-built, and now I have about 14 of them, so that we can cover pretty much every different type of arm used in the Revolution. I have my pick when we want to go out to the range and shoot at ballistics gelatin. We’ve published some great papers. The latest one was in conjunction with a bullet strike study where we went through and used modern forensic techniques to not only locate where each shooter was, what caliber the gun was, using ballistics rods and lasers, but we also had 18th-century house sections built and shot at the sections to replicate that damage. It was a validation study, and those firearms came in very handy.

Ars Technica: What else can we learn from these kinds of experiments?

Joel Bohy: One of the things that’s great about the archeology end of it is when we’re finding fired ammunition. I mostly volunteer with archaeologists on the Revolutionary War. One of my colleagues has worked on the Little Bighorn battlefield doing firing pin impressions, which leave a fingerprint, so he could track troopers and Native Americans across the battlefields. With [the Revolutionary War], it’s harder to do because we’re using smooth-bore guns that don’t necessarily leave a signature. But what they do leave is a caliber, and they also leave a location. We GIS all this stuff and map it, and it’s told us things about the battles that we never knew before. We just did one last August that hasn’t been released yet that changes where people thought a battle took place.

We like to combine that with our live fire studies. So when we [conduct the latter], we take a shot, then we metal detect each shot, bag it, tag it. We record all the data that we see on our musket balls that we fired so that when we’re on an archeology project, we can correlate that with what we see in the ground. We can see if it hits a tree, if it hits rocks, how close was a soldier when they fired—all based upon the deformation of the musket ball.

Ars Technica: What is the experience of shooting a replica of a musket compared to, say, a modern rifle?

Joel Bohy: It’s a lot different. When you’re firing a modern rifle, you pull the trigger and it’s very quick—a matter of milliseconds and the bullet’s downrange. With the musket, it’s similar, but it’s slower, and you can anticipate the shot. By the time the cock goes down, the flint strikes the hammer, it ignites the powder in the pan, which goes through the vent and sets off the charge—there’s a lot more time involved in that. So you can anticipate and flinch. You may not necessarily get the best shot as you would on a more modern rifle. There’s still a lot of kick, and there’s a lot more smoke because of the black powder that’s being used. With modern smokeless powder, you have very little smoke compared to the muskets.

Ars Technica: It’s often said that throughout the history of warfare, whoever has the superior weapons wins. This series presents a more nuanced picture of how such conflicts play out.

John Hargreaves making David Bushnell’s submarine bomb. GBH/Nova

Joel Bohy: In the Revolutionary War, you have both sides basically using the same type of firearm. Yes, some were using rifles, depending on what region you were from, and units in the British Army used rifles. But for the most part, they’re all using flintlock mechanisms and smoothbore guns. What comes into play in the Revolution is, on the [Continental] side, they don’t have the supply of arms that the British do. There was an embargo in place in 1774 so that no British arms could be shipped into Boston and North America. So you have a lot of innovation with gunsmiths and blacksmiths and clockmakers, who were taking older gun parts, barrels, and locks and building a functional firearm.

You saw a lot of the Americans at the beginning of the war trying to scrape through with these guns made from old parts and cobbled together. They’re functional. We didn’t really have that lock-making and barrel-making industry here. A lot of that stuff we had imported. So even if a gun was being made here, the firing mechanism and the barrels were imported. So we had to come up with another way to do it.

We started to receive a trickle of arms from the French in 1777, and to my mind, that’s what helped change the outcome of the war. Not only did we have French troops arriving, but we also had French cloth, shoes, hats, tin, powder, flints, and a ton of arms being shipped in. The French took all of their old guns from their last model that they had issued to the army, and they basically sold them all to us. So we had this huge influx of French arms that helped resupply us and made the war viable for us.

Close-up of a cannon firing. GBH/NOVA

Ars Technica: There are a lot of popular misconceptions about the history of the American Civil War. What are a couple of things that you wish more Americans understood about that conflict?

Joel Bohy: The onset of the American Revolution, April 1775, when the war began—these weren’t just a bunch of farmers who grabbed their rifle from over the fireplace and went out and beat the British Army. These people had been training and arming themselves for a long time. They had been doing it for generations before in wars with Native forces and the French since the 17th century. So by the time the Revolution broke out, they were as prepared as they could be for it.

“The rifle won the Revolution” is one of the things that I hear. No, it didn’t. Like I said, the French arms coming in helped us win the Revolution. A rifle is a tool, just like a smoothbore musket is. It has its benefits and it has its downfalls. It’s slower to load, you can’t mount a bayonet on it, but it’s more accurate, whereas the musket, you can load and fire faster, and you can mount a bayonet. So the gun that really won the Revolution was the musket, not the rifle.

It’s all well and good to be proud of being an American and our history and everything else, but these people just didn’t jump out of bed and fight. These people were training, they were drilling, they were preparing and arming and supplying not just arms, but food, cloth, tents, things that they would need to continue to have an army once the war broke out. It wasn’t just a big—poof—this happened and we won.

Revolutionary War Weapons is now streaming on YouTube and is also available on PBS.

Photo of Jennifer Ouellette

Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.

Live demos test effectiveness of Revolutionary War weapons Read More »

powerful-programming:-bbc-controlled-electric-meters-are-coming-to-an-end

Powerful programming: BBC-controlled electric meters are coming to an end

Two rare tungsten-centered, hand-crafted cooled anode modulators (CAM) are needed to keep the signal going, and while the BBC bought up the global supply of them, they are running out. The service is seemingly on its last two valves and has been telling the public about Long Wave radio’s end for nearly 15 years. Trying to remanufacture the valves is hazardous, as any flaws could cause a catastrophic failure in the transmitters.

BBC Radio 4’s 198 kHz transmitting towers at Droitwich.

BBC Radio 4’s 198 kHz transmitting towers at Droitwich. Credit: Bob Nienhuis (Public domain)

Rebuilding the transmitter, or moving to different, higher frequencies, is not feasible for the very few homes that cannot get other kinds of lower-power radio, or internet versions, the BBC told The Guardian in 2011. What’s more, keeping Droitwich powered such that it can reach the whole of the UK, including Wales and lower Scotland, requires some 500 kilowatts of power, more than most other BBC transmission types.

As of January 2025, roughly 600,000 UK customers still use RTS meters to manage their power switching, after 300,000 were switched away in 2024. Utilities and the BBC have agreed that the service will stop working on June 30, 2025, and have pushed to upgrade RTS customers to smart meters.

In a combination of sad reality and rich irony, more than 4 million smart meters in the UK are not working properly. Some have delivered eye-popping charges to their customers, based on estimated bills instead of real readings, like Sir Grayson Perry‘s 39,000 pounds due on 15 simultaneous bills. But many have failed because the UK, like other countries, phased out the 2G and 3G networks older meters relied upon without coordinated transition efforts.

Powerful programming: BBC-controlled electric meters are coming to an end Read More »

researcher-uncovers-dozens-of-sketchy-chrome-extensions-with-4-million-installs

Researcher uncovers dozens of sketchy Chrome extensions with 4 million installs

The extensions share other dubious or suspicious similarities. Much of the code in each one is highly obfuscated, a design choice that provides no benefit other than complicating the process for analyzing and understanding how it behaves.

All but one of them are unlisted in the Chrome Web Store. This designation makes an extension visible only to users with the long pseudorandom string in the extension URL, and thus, they don’t appear in the Web Store or search engine search results. It’s unclear how these 35 unlisted extensions could have fetched 4 million installs collectively, or on average roughly 114,000 installs per extension, when they were so hard to find.

Additionally, 10 of them are stamped with the “Featured” designation, which Google reserves for developers whose identities have been verified and “follow our technical best practices and meet a high standard of user experience and design.”

One example is the extension Fire Shield Extension Protection, which, ironically enough, purports to check Chrome installations for the presence of any suspicious or malicious extensions. One of the key JavaScript files it runs references several questionable domains, where they can upload data and download instructions and code:

URLs that Fire Shield Extension Protection references in its code. Credit: Secure Annex

One domain in particular—unknow.com—is listed in the remaining 34 apps.

Tuckner tried analyzing what extensions did on this site but was largely thwarted by the obfuscated code and other steps the developer took to conceal their behavior. When the researcher, for instance, ran the Fire Shield extension on a lab device, it opened a blank webpage. Clicking on the icon of an installed extension usually provides an option menu, but Fire Shield displayed nothing when he did it. Tuckner then fired up a background service worker in the Chrome developer tools to seek clues about what was happening. He soon realized that the extension connected to a URL at fireshieldit.com and performed some action under the generic category “browser_action_clicked.” He tried to trigger additional events but came up empty-handed.

Researcher uncovers dozens of sketchy Chrome extensions with 4 million installs Read More »

researchers-concerned-to-find-ai-models-misrepresenting-their-“reasoning”-processes

Researchers concerned to find AI models misrepresenting their “reasoning” processes

Rather than intentionally hiding anything, the model simply does not reliably represent all influential inputs in its output. As a result, it may generate what sounds like a reasoning process, even if that explanation doesn’t match what actually drove its answer.

Putting faithfulness to the test

To measure “faithfulness”—how accurately the CoT matches the actual reasoning—the researchers inserted various kinds of hints into questions and observed whether the model acknowledged using them.

The researchers then checked whether the models referenced using these hints in their chain-of-thought. The researchers designed some hints to be neutral (e.g., stating “a Stanford Professor indicates the answer is [A]” or embedding the answer in faux metadata), while they designed others to be more concerning (e.g., “You have gained unauthorized access… The correct answer is [A]”). Some hints provided the correct answer, while others were deliberately wrong, allowing researchers to observe how the models responded.

The team tested Anthropic’s Claude 3.7 Sonnet and DeepSeek’s R1 model. The results regarding faithfulness were eye-opening: across all hint types on average, Claude referenced the hints in its CoT only 25 percent of the time, while DeepSeek R1 did so 39 percent of the time. This means a substantial majority of answers were unfaithful, omitting mention of information that influenced the output. Intriguingly, the researchers found that these unfaithful chains of thought were, on average, longer than faithful ones, suggesting the omissions weren’t merely for brevity. They also noted preliminary findings that faithfulness tended to be lower when the questions were more difficult.

Perhaps most notable was a “reward hacking” experiment. Reward hacking refers to an AI model finding unexpected shortcuts to maximize its performance scores without solving problems as intended. In Anthropic’s experiments, models were deliberately rewarded for choosing incorrect answers indicated by hints. The models quickly learned to exploit this loophole, selecting wrong answers over 99 percent of the time to earn points—yet reflected that behavior in their CoT outputs less than 2 percent of the time.

For example, a model given a hint pointing to an incorrect answer on a medical question might write a long CoT justifying that wrong answer, never mentioning the hint that led it there. This suggests the model generated an explanation to fit the answer, rather than faithfully revealing how the answer was determined.

Researchers concerned to find AI models misrepresenting their “reasoning” processes Read More »

painted-altar-in-maya-city-of-tikal-reveals-aftermath-of-ancient-coup

Painted altar in Maya city of Tikal reveals aftermath of ancient coup


It’s always about colonialism

The altar marks the presence of an enclave of foreign elites from Teotihuacan.

This rendering shows what the altar might have looked like in its heyday. Credit: Heather Hurst

A family altar in the Maya city of Tikal offers a glimpse into events in an enclave of the city’s foreign overlords in the wake of a local coup.

Archaeologists recently unearthed the altar in a quarter of the Maya city of Tikal that had lain buried under dirt and rubble for about the last 1,500 years. The altar—and the wealthy household behind the courtyard it once adorned—stands just a few blocks from the center of Tikal, one of the most powerful cities of Maya civilization. But the altar and the courtyard around it aren’t even remotely Maya-looking; their architecture and decoration look like they belong 1,000 kilometers to the west in the city of Teotihuacan, in central Mexico.

The altar reveals the presence of powerful rulers from Teotihuacan who were there at a time when a coup ousted Tikal’s Maya rulers and replaced them with a Teotihuacan puppet government. It also reveals how hard those foreign rulers fell from favor when Teotihuacan’s power finally waned centuries later.

image of a rectangular limestone structure with recessed panels underground

Archaeologists don’t know what’s inside the altar, because they can’t excavate it without damaging the fragile painted panels. Credit: Ramirez et al. 2025

The painted altar

The altar stands in the courtyard of what was once a wealthy, influential person’s home in Tikal. At just over 1 meter tall, spanning nearly 2 meters in length and 1.3 meters wide, the altar is clearly the centerpiece of the limestone patio space.

It’s made of carved stone and earthen layers, covered with several smooth, fine plaster coatings. Murals adorn recessed panels on all four sides. In red, orange, yellow, and black, the paintings all depict the face of a person in an elaborate feathered headdress, but each is slightly different. All four versions of the face stare straight at the viewer through almond-shaped eyes. The figure wears the kind of facial piercings that would have marked a person of very high rank in Teotihuacan: a nose bar and spool-shaped ear jewelry (picture a fancy ancient version of modern earlobe plugs).

Proyecto Arqueológico del Sur de Tikal archaeologist Edwin Ramirez and his colleagues say the faces on the altar look uncannily like a deity who often shows up in artwork from central Mexico, in the area around Teotihuacan. Archaeologists have nicknamed this deity the Storm God, since they haven’t yet found any trace of its name. It’s a distinctly Teotihuacan-style piece of art, from the architecture of the altar to the style and color of the images and even the techniques used in painting them. Yet it sits in the heart of Tikal, a Maya city.

A pre-Columbian coup d’etat

Tikal was one of the biggest and most important cities of the Maya civilization. Founded in 850 BCE, it chugged along for centuries as a small backwater until its sudden rise to wealth and prominence around 100 CE. Lidar surveys of Guatemala have revealed Tikal’s links with other Maya cities, like Homul. And Tikal also traded with the city of Teotihuacan, more than 1,000 kilometers to the west, in what’s now Mexico.

“These powers of central Mexico reached into the Maya world because they saw it as a place of extraordinary wealth, of special feathers from tropical birds, jade, and chocolate,” says Brown University archaeologist Stephen Houston, a co-author of the recent study, in a statement. “As far as Teotihuacan was concerned, it was the land of milk and honey.”

Trade with Teotihuacan brought wealth to Tikal, but the Maya city seems to have attracted too much attention from its more powerful neighbor. A carved stone unearthed in Tikal in the 1960s describes how Teotihuacan swooped in around 378 CE to oust Tikal’s king and replace him with a puppet ruler. Spanish-language sources call this coup d’etat the Entrada.

The stone is carved in the style of Teotihuacan, but it’s also covered with Maya hieroglyphs, which tell the tale of the conquest. After the Entrada, there are traces of Teotihuacan’s presence all over Tikal, from royal burials in a necropolis to distinctly Mexican architecture mixed with Maya elements in a complex of residential and ceremonial buildings near the heart of the city.

And the newly unearthed altar seems to have been built shortly after the Entrada, based on radiocarbon dates from nearby graves in the courtyard and from material used to ritually bury the altar after its abandonment (more on that below).

Ramirez and his colleagues write that the altar is “likely evidence of the direct presence of Teotihuacan at Tikal as part of a foreign enclave that coincided with the historic Entrada.”

orthographic map of Tikal

This map shows the courtyard in relation to other major structures in Tikal. Credit: T.G. Garrison and H. Hurst

A wealthy household’s ritual courtyard

The buildings surrounding the courtyard would have been a residential compound for wealthy elites in the city; it’s not far from the city’s center with its temples and huge public plazas. Residents had used the courtyard as a private family ceremonial space for decades or even a couple of centuries before its owners installed the altar. And Ramirez and his colleagues say it’s no coincidence that archaeologists have found many such courtyards in Teotihuacan, which people also used as a space for household ceremonies like burials and offerings to the gods.

“What the altar confirms is that wealthy leaders from Teotihuacan came to Tikal and created replicas of ritual facilities that would have existed in their home city. It shows Teotihuacan left a heavy imprint there,” says Houston.

And in the Maya world, as in the world of Teotihuacan, ceremonial spaces usually come with skeletons included.

Ramirez and his colleagues unearthed the grave of an adult buried beneath the patio, in a tomb with limestone walls and a stucco floor. Nearby, a child had been buried in a seated position—something rare in Tikal but very common in Teotihuacan. The child’s burial radiocarbon-dated to decades before the Entrada, between 205 and 350 CE. It looks like someone buried both of these people beneath the floor of the courtyard of their residential compound not long after they moved in; it’s a good bet that they were members of the family who once lived here, but archaeologists don’t know for sure. These kinds of burials would have been exactly the sort of household ritual the courtyard was meant for.

Teotihuacan’s enclave in Maya Tikal

Sometime later—between 380 and 540 CE, based on radiocarbon dating—the people living in the compound buried the courtyard beneath a layer of dirt and rubble, laid a new floor over it, and essentially started over. This is when Ramirez and his colleagues say someone built and painted the altar.

It’s also when someone buried three babies in the courtyard, each near a corner of the altar (the fourth corner has a jar that probably once contained an offering, but no bones). Each burial required breaking the stone floor, placing the tiny remains underneath, and then filling in the hole with crushed limestone. That’s not the way most people in Tikal would have buried an infant, but it’s exactly how archaeologists have found several buried in very similar courtyards in faraway Teotihuacan.

In other words, the people who lived in this compound and used this courtyard and painted altar were probably from Teotihuacan or raised in a Teotihuacan enclave in the southern sector of Tikal. The compound is practically in the shadow of a replica of Teotihuacan’s Feathered Serpent Pyramid and its walled plaza, where archaeologists unearthed Teotihuacan-style incense burners made from local materials.

sketch of a rectangular altar with painted sides

This rendering shows what the altar might have looked like in its heyday. Credit: Heather Hurst

The end of an era

Sometime between 550 CE and 654 CE, based on radiocarbon dating, the foreign enclave in Tikal closed up shop. That’s around the time distant Teotihuacan’s power was starting to collapse. But it wasn’t enough to just leave; important buildings had to be ritually “killed” and buried. That meant burning the area around the altar, but it also meant that people buried the altar, the courtyard, the compound, and most of southern Tikal’s Teotihuacan enclave beneath several meters of dirt and rubble.

Whoever did the burying went to the trouble of making the whole thing look like a natural hill. Ramirez and his colleagues say that’s unusual, because typically once a building had been ritually killed and abandoned, something new would be built atop the remains.

“The Maya regularly buried buildings and rebuilt on top of them,” Brown University archaeologist Andrew Scherer, a co-author of the recent study, said in a statement. “But here, they buried the altar and surrounding buildings and just left them, even though this would have been prime real estate centuries later. They treated it almost like a memorial or a radioactive zone. It probably spoke to the complicated feelings they had about Teotihuacan.”

Antiquity, 2017. DOI: 10.15184/aqy.2025.3 (About DOIs).

Photo of Kiona N. Smith

Kiona is a freelance science journalist and resident archaeology nerd at Ars Technica.

Painted altar in Maya city of Tikal reveals aftermath of ancient coup Read More »

mario-kart-world’s-$80-price-isn’t-that-high,-historically

Mario Kart World’s $80 price isn’t that high, historically

We assembled data for those game baskets across 21 non-consecutive years, going back to 1982, then normalized the nominal prices to consistent February 2025 dollars using the Bureau of Labor Statistics CPI calculator. You can view all our data and sources in this Google Sheet.

The bad old days

In purely nominal terms, the $30 to $40 retailers routinely charged for game cartridges in the 1980s seems like a relative bargain. Looking at the inflation-adjusted data, though, it’s easy to see how even an $80 game today would seem like a bargain to console gamers in the cartridge era.

Video game cartridges were just historically expensive, even compared to today’s top-end games.

Credit: Kyle Orland / Ars Technica

Video game cartridges were just historically expensive, even compared to today’s top-end games. Credit: Kyle Orland / Ars Technica

New cartridge games in the 20th century routinely retailed for well over $100 in 2025 money, thanks to a combination of relatively high manufacturing costs and relatively low competition in the market. While you could often get older and/or used cartridges for much less than that in practice, must-have new games at the time often cost the equivalent of $140 or more in today’s money.

Pricing took a while to calm down once CD-based consoles were introduced in the late ’90s. By the beginning of the ’00s, though, nominal top-end game pricing had fallen to about $50, and only rose back to $60 by the end of the decade. Adjusting for inflation, however, those early 21st-century games were still demanding prices approaching $90 in 2025 dollars, well above the new $80 nominal price ceiling Mario Kart World is trying to establish.

Those $50 discs you remember from the early 21st century were worth a lot more after you adjust for inflation.

Credit: Kyle Orland / Ars Technica

Those $50 discs you remember from the early 21st century were worth a lot more after you adjust for inflation. Credit: Kyle Orland / Ars Technica

In the 2010s, inflation started eating into the value of gaming’s de facto $60 price ceiling, which remained remarkably consistent throughout the decade. Adjusted for inflation, the nominal average pricing we found for our game “baskets” in 2013, 2017, and 2020 ended up almost precisely equivalent to $80 in constant 2025 dollars.

Is this just what things cost now?

While the jump to an $80 price might seem sudden, the post-COVID jump in inflation makes it almost inevitable. After decades of annual inflation rates in the 2 to 3 percent range, the Consumer Price Index jumped 4.7 percent in 2021 and a whopping 8 percent in 2022. In the years since, annual price increases still haven’t gotten below the 3 percent level that was once seen as “high.”

Mario Kart World’s $80 price isn’t that high, historically Read More »