Author name: Shannon Garcia


Universities (finally) band together, fight “unprecedented government overreach”

We speak with one voice against the unprecedented government overreach and political interference now endangering American higher education… We must reject the coercive use of public research funding…

American institutions of higher learning have in common the essential freedom to determine, on academic grounds, whom to admit and what is taught, how, and by whom… In their pursuit of truth, faculty, students, and staff are free to exchange ideas and opinions across a full range of viewpoints without fear of retribution, censorship, or deportation.

This is fine, as far as it goes. But what are all these institutions going to do about the funding cuts, attempts to revoke their nonprofit status, threats not to hire their graduates, and student speech-based deportations? They are going to ask the Trump administration for “constructive engagement that improves our institutions and serves our republic.”

This sounds lovely, if naive, and I hope it works out well for every one of them as they seek good-faith dialogue with a vice president who has called universities the “enemy” and an administration that demanded Harvard submit to the vetting of every department for unspecified “viewpoint diversity.”

As a way to find common ground and speak with a common voice, the statement is a start. But statements, like all words, can be cheap. We’ll see what steps schools actually take—and how much they can speak and act in concert—as Trump’s pressure campaign continues to ratchet up.


OpenAI wants to buy Chrome and make it an “AI-first” experience

According to Turley, OpenAI would throw its proverbial hat in the ring if Google had to sell. When asked if OpenAI would want Chrome, he was unequivocal. “Yes, we would, as would many other parties,” Turley said.

OpenAI has reportedly considered building its own Chromium-based browser to compete with Chrome. Several months ago, the company hired former Google developers Ben Goodger and Darin Fisher, both of whom worked to bring Chrome to market.

Close-up of the Google Chrome web browser.

Credit: Getty Images

It’s not hard to see why OpenAI might want a browser, particularly Chrome with its 4 billion users and 67 percent market share. Chrome would instantly give OpenAI a massive install base of users who have been incentivized to use Google services. If OpenAI were running the show, you can bet ChatGPT would be integrated throughout the experience—Turley said as much, predicting an “AI-first” experience. The user data flowing to the owner of Chrome could also be invaluable in training agentic AI models that can operate browsers on the user’s behalf.

Interestingly, there’s so much discussion about who should buy Chrome, but relatively little about spinning off Chrome into an independent company. Google has contended that Chrome can’t survive on its own. However, the existence of Google’s multibillion-dollar search placement deals, which the DOJ wants to end, suggests otherwise. Regardless, if Google has to sell, and OpenAI has the cash, we might get the proposed “AI-first” browsing experience.


Controversial doc gets measles while treating unvaccinated kids—keeps working

In the video with Edwards that has just come to light, CHD once again uses the situation to disparage MMR vaccines. Someone off camera asks Edwards if he had never had measles before, to which he replies that he had gotten an MMR vaccine as a kid, though he didn’t know if he had gotten one or the recommended two doses.

“That doesn’t work then, does it?” the off-camera person asks, referring to the MMR vaccine. “No, apparently not,” Edwards replies. “Just wear[s] off.”

It appears Edwards had a breakthrough infection, which is rare but does occur. Such infections are more common in people who have gotten only one dose, as may be the case for Edwards.

A single dose of MMR is 93 percent effective against measles, and two doses are 97 percent effective. In either case, the protection is considered lifelong.

While up to 97 percent effectiveness is extremely protective, some people do not mount protective responses and are still vulnerable to an infection upon exposure. However, their illnesses will likely be milder than if they had not been vaccinated. In the video, Edwards described his illness as a “mild case.”

The data on the outbreak demonstrates the effectiveness of vaccination. As of April 18, Texas health officials have identified 597 measles cases, leading to 62 hospitalizations and two deaths in school-aged, unvaccinated children with no underlying medical conditions. Most of the cases have been in unvaccinated children. Of the 597 cases, 12 (2 percent) had received two MMR doses previously, and 10 (1.6 percent) had received one dose. The remaining 96 percent of cases were either unvaccinated or had no record of vaccination.
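Those percentages check out against the raw counts (a quick back-of-the-envelope check; the figures are the Texas numbers cited above):

```python
# Sanity-check the vaccination-status breakdown of the Texas outbreak figures.
total_cases = 597
two_dose = 12   # cases with two prior MMR doses
one_dose = 10   # cases with one prior MMR dose

unvaccinated_or_unknown = total_cases - two_dose - one_dose

print(f"Two doses: {two_dose / total_cases:.1%}")    # ~2.0%
print(f"One dose: {one_dose / total_cases:.1%}")     # ~1.7% (the report truncates to 1.6)
print(f"No record: {unvaccinated_or_unknown / total_cases:.1%}")  # ~96.3%
```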

Toward the end of the video, Edwards tells CHD he’s “doing what any doctor should be doing.”


Neuroscientists are racing to turn brain waves into speech

Many thousands of people a year could benefit from so-called voice prostheses. Their cognitive functions remain more or less intact, but they have suffered speech loss due to stroke, the neurodegenerative disorder ALS, or other brain conditions. If the technique proves successful, researchers hope it could be extended to help people who have difficulty vocalizing because of conditions such as cerebral palsy or autism.

The potential of voice neuroprostheses is beginning to attract business interest. Precision Neuroscience claims to be capturing higher-resolution brain signals than academic researchers, since the electrodes of its implants are more densely packed.

The company has worked with 31 patients and plans soon to collect data from more, providing a potential pathway to commercialization.

Precision received regulatory clearance on April 17 to leave its sensors implanted for up to 30 days at a time. That would enable its scientists to train their system with what could within a year be the “largest repository of high resolution neural data that exists on planet Earth,” said chief executive Michael Mager.

The next step would be to “miniaturize the components and put them in hermetically sealed packages that are biocompatible so they can be planted in the body forever,” Mager said.

Elon Musk’s Neuralink, the best-known brain-computer interface (BCI) company, has focused on enabling people with paralysis to control computers rather than giving them a synthetic voice.

An important obstacle to the development of brain-to-voice technology is the time patients take to learn how to use the system.

A key unanswered question is how much the response patterns in the motor cortex—the part of the brain that controls voluntary actions, including speech—vary between people. If they remained very similar, machine-learning models trained on previous individuals could be used for new patients, said Nick Ramsey, a BCI researcher at University Medical Centre Utrecht.
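To make the transfer idea concrete, here is a minimal toy sketch of the scenario Ramsey describes: a decoder fit on previous individuals’ recordings, evaluated on a new patient. Everything here (the fake data, the feature shapes, the simple classifier) is illustrative, not any lab’s actual pipeline:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def fake_session(n_trials, shift=0.0, n_channels=64, n_classes=5):
    """Stand-in for (motor-cortex features, intended word labels)."""
    y = rng.integers(0, n_classes, n_trials)
    X = rng.normal(0, 1, (n_trials, n_channels)) + shift
    X[np.arange(n_trials), y] += 2.0  # class-dependent signal, shared across people
    return X, y

# "Pretrain" a decoder on pooled recordings from previous individuals.
X_prev, y_prev = fake_session(2000)
decoder = LogisticRegression(max_iter=1000).fit(X_prev, y_prev)

# If a new patient's response patterns are similar (small shift), the
# decoder transfers without training from scratch.
X_new, y_new = fake_session(200, shift=0.3)
print("transfer accuracy:", decoder.score(X_new, y_new))
```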


Ghost forests are growing as sea levels rise

Like giant bones planted in the earth, clusters of tree trunks, stripped clean of bark, are appearing along the Chesapeake Bay on the United States’ mid-Atlantic coast. They are ghost forests: the haunting remains of what were once stands of cedar and pine. Since the late 19th century, an ever-widening swath of these trees has died along the shore. And they won’t be growing back.

These arboreal graveyards are showing up in places where the land slopes gently into the ocean and where salty water increasingly encroaches. Along the United States’ East Coast, in pockets of the West Coast, and elsewhere, saltier soils have killed hundreds of thousands of acres of trees, leaving behind woody skeletons typically surrounded by marsh.

What happens next? That depends. As these dead forests transition, some will become marshes that maintain vital ecosystem services, such as buffering against storms and storing carbon. Others may become home to invasive plants or support no plant life at all—and the ecosystem services will be lost. Researchers are working to understand how this growing shift toward marshes and ghost forests will, on balance, affect coastal ecosystems.

Many of the ghost forests are a consequence of sea level rise, says coastal ecologist Keryn Gedan of George Washington University in Washington, DC, coauthor of an article on the salinization of coastal ecosystems in the 2025 Annual Review of Marine Science. Rising sea levels can bring more intense storm surges that flood saltwater over the top of soil. Drought and sea level rise can shift the groundwater table along the coast, allowing saltwater to journey farther inland, beneath the forest floor. Trees, deprived of fresh water, are stressed as salt accumulates.

Yet the transition from living forest to marsh isn’t necessarily a tragedy, Gedan says. Marshes are important features of coastal ecosystems, too. And the shift from forest to marsh has happened throughout periods of sea level rise in the past, says Marcelo Ardón, an ecosystem ecologist and biogeochemist at North Carolina State University in Raleigh.

“You would think of these forests and marshes kind of dancing together up and down the coast,” he says.

Marshes provide many ecosystem benefits. They are habitats for birds and crustaceans, such as salt marsh sparrows, marsh wrens, crabs, and mussels. They are also a niche for native salt-tolerant plants, like rushes and certain grasses, which provide food and shelter for animals.


To regenerate a head, you first have to know where your tail is

Before a critical point in development, the animals failed to close the wound made by the cut, causing the two embryo halves to simply spew cells out into the environment. When cut somewhat later, however, the embryos survived well, and the head portion of the embryo could regenerate a tail segment. This tells us that the normal signaling pathways present in the embryo are sufficient to drive the process forward.

The tail of the embryo at this stage, however, doesn’t appear to be capable of rebuilding its head. But the researchers found that they could inhibit Wnt signaling in these posterior fragments, and that was enough to allow a head to develop.

Lacking muscle

One possibility here is that Wnt signaling is widely active in the posterior of the embryo at this point, blocking formation of anterior structures. Alternatively, the researchers hypothesize that the problem is with the muscle cells that normally help organize the formation of a stem-cell-filled blastema, which is needed to kick off the regeneration process. Since the anterior end of the embryo develops earlier, they suggest there may simply not be enough muscle cells in the tail to start this process at early stages of development.

To test their hypothesis, they performed a somewhat unusual experiment. They started by cutting off the tails of embryos and saving them for 24 hours. At that point, they cut the front end off the tails, creating a new wound to heal. This time, regeneration proceeded as normal, and the tails grew a new head. This isn’t definitive evidence that muscle cells are what’s missing at early stages, but it does indicate that some key developmental step happens in the tail within the 24-hour window after the first cut.

The results reinforce the idea that regeneration of major body parts requires the re-establishment of the signals that lay out organization of the embryo in development—something that gets complicated if those signals are currently acting to organize the embryo. And it clearly shows that the cells needed to do this reorganization aren’t simply set aside early on in development but instead take some time to appear. All of that information will help clarify the bigger-picture question of how these animals manage such a complex regeneration process.

Current Biology, 2025. DOI: 10.1016/j.cub.2025.03.065 (About DOIs).


Regrets: Actors who sold AI avatars stuck in Black Mirror-esque dystopia

In a Black Mirror-esque turn, some cash-strapped actors who didn’t fully understand the consequences are regretting selling their likenesses to be used in AI videos that they consider embarrassing, damaging, or harmful, AFP reported.

Among them is a 29-year-old New York-based actor, Adam Coy, who licensed rights to his face and voice to a company called MCM for one year for $1,000 without thinking, “am I crossing a line by doing this?” His partner’s mother later found videos where he appeared as a doomsayer predicting disasters, he told the AFP.

South Korean actor Simon Lee’s AI likeness was similarly used to spook naïve Internet users but in a potentially more harmful way. He told the AFP that he was “stunned” to find his AI avatar promoting “questionable health cures on TikTok and Instagram,” feeling ashamed to have his face linked to obvious scams.

As AI avatar technology improves, the temptation to license likenesses will likely grow. One of the most successful companies that’s recruiting AI avatars, UK-based Synthesia, doubled its valuation to $2.1 billion in January, CNBC reported. And just last week, Synthesia struck a $2 billion deal with Shutterstock that will make its AI avatars more human-like, The Guardian reported.

To ensure that actors are incentivized to license their likenesses, Synthesia also recently launched an equity fund. According to the company, actors behind the most popular AI avatars or featured in Synthesia marketing campaigns will be granted options in “a pool of our company shares” worth $1 million.

“These actors will be part of the program for up to four years, during which their equity awards will vest monthly,” Synthesia said.

For actors, selling their AI likeness seems quick and painless—and perhaps increasingly more lucrative. All they have to do is show up and make a bunch of different facial expressions in front of a green screen, then collect their checks. But Alyssa Malchiodi, a lawyer who has advocated on behalf of actors, told the AFP that “the clients I’ve worked with didn’t fully understand what they were agreeing to at the time,” blindly signing contracts with “clauses considered abusive,” even sometimes granting “worldwide, unlimited, irrevocable exploitation, with no right of withdrawal.”


Sunderfolk review: RPG magic that transports your friends together


Using your phone as a controller keeps you engaged with this accommodating RPG.

The creators of Sunderfolk wanted to make a video game that would help players “Rediscover game night.” By my reckoning, they have succeeded, because I am now regularly arguing with good friends over stupid moves. Why didn’t I pick up that gold? Don’t you see how ending up there messed up an area attack? Ah, well.

That kind of friendly friction, inside dedicated social time, only gets harder to come by as you get older, settle into routines, and sometimes move apart. I’ve hosted four Sunderfolk sessions with three friends, all in different states, and it has felt like reclaiming something I lost. Sunderfolk is a fun game with a lot of good ideas, and the best one is convincing humans to join up in pondering hex tiles, turn order, and what to name the ogres who shoot arrows (“Pointy Bros”).

Maybe you already have all the gaming appointments you need with friends, online or in person. Sunderfolk, I might suggest, is a worthy addition to your queue as a low-effort way to give everyone a break from being the organizer. It does a decent job of tutorializing and onboarding less experienced players, then adds depth as it goes on. Given that only one person out of four has to own the game on some system, and the only other hardware needed is a phone, it’s a pretty light lift for what I’m finding to be a great payoff. Some parts could be improved, but the core loop and its camaraderie engine feel sturdy.

I haven’t reached the mine cart missions yet but am glad to know they exist.

Credit: Dreamhaven

Pick a class, take a seat

My party getting a well-deserved level up. From left: Boom Boom the berserker, Roguefer, Bob the mage, and Fire Bob.

Credit: Kevin Purdy

Sunderfolk is a turn-based tactical RPG, putting you and your friends on a grid filled with objects, enemies, and surprises. You pick from familiar role-playing character classes—my party picked rogue, berserker, wizard, and a kind of pyromancer—and choose one ability card each turn. The cards put a Gloomhaven-like emphasis on sequence and map positioning. One of my rogue’s potential moves is a quick attack, then gaining strength by picking up nearby gold. Another involves moving, hitting, moving, hitting, then one more single-hex move at the end, to stay out of danger and get a protective “Shrouded” effect.

You and your squad are all watching the same screen, be it a living room TV, a laptop, or a window streamed over Zoom or Discord. You choose your cards, plot your movement, and interact with everything using your phone or tablet’s touchscreen. Once you’ve won a quest by beating the baddies and/or hitting other markers, you head back to town and do a whole bunch of housekeeping tasks. Sunderfolk has mechanics both for players not being present (scaling the quests and keeping the missing leveled up) and for someone having to drop mid-battle (another player can run that seat alongside their own). It’s accommodating to players of different RPG experience levels and different schedules.

Sunderfolk launch date trailer.

You have my sword—and my phone

Let’s address the phone controls, the roughly 6-inch-diagonal elephant in the room. My three friends all had to spend a few minutes getting used to using their phone screen as a multi-modal controller: touchpad for hex movement and cursor pointing, card picker and info box reader, and then the town landscape screen. After that, nobody had any real issues with the controls themselves. The tactile feedback guides your finger, and there was no appreciable lag in our sessions.

Sometimes we’d get momentarily flummoxed by the card-choosing flow, and there is perhaps some inherent mental tax in switching between screens. But the phone controls, besides making couch co-op possible, also allowed everyone in my group to play in their most comfortable spot: a TV streaming the Discord app, a tablet on the couch, a laptop at the kitchen counter.

Not for nothing, but with each player using their phone for controls—and the game announcing when players had “disconnected” if they switched to another app for too long, only for them to come right back—Sunderfolk can apply some anti-scrolling pressure and keep everyone checked in. You could get around this with secondary devices or tiled windows, but it’s better to be present and ask your friends whose turn it is to fire off their Ultimate card.

(To clarify how remote play works: one owner of the game can screen-cast it to Discord, Zoom, Meet, or whatever service; everyone playing can chat there or elsewhere; and players log in their phones/controllers by scanning a QR code displayed on the screen-cast main title screen.)

Cheerhaven

Most times, your party will be spread out all over the town, but in this provided screenshot, everybody has remembered to upgrade their Fate cards at the Temple.

Credit: Dreamhaven

All this adventuring takes place in a world of anthropomorphized animals and overgrown woods and purple-blue ogres that strongly evoke World of Warcraft’s style (at least in Act 1). When you’re back in town, you tap around to chat with recurring NPCs, gain friendship levels that sometimes result in gifts, and upgrade parts of the town to your liking. The town hub provides more opportunities for strategy and bonding with players. You might send some of the gold you greedily picked up on the last quest to a friend so they can nab a great weapon. You might, as a group, buy a quirky town upgrade just for the chance to rename some things.

But the town is one place I felt some friction, familiar from more in-depth board games. Some players will finish their decisions and speed through dialogue faster than others, depending partly on how engaged they are with the town chatterboxes. Just as with cardboard games, you can take this moment to get up, stretch your legs, and maybe refresh a drink. But you might have to nudge people along if they’re overwhelmed by gear choices.

The non-gory, often goofy nature of Sunderfolk’s setting makes it appropriate for a wider range of players. The voice acting, almost all of it by Anjali Bhimani (Overwatch’s Symmetra), re-creates the feel of having a game master switch between “Frightened Blue Jay miner” and “Furious Ogre Queen” in one session. I’m not too engaged in the broad plot after one act—ogres, fueled by Darkstone, want to extinguish the village’s Brightstone—but it’s not a big deal. The game has given my group something else to latch onto: naming things.

Touching the Neatos to heal Boom Boom

Michael Keaton must reach an exit hex!

Credit: Kevin Purdy

The chance to name things in Sunderfolk, and have those names stick for the whole campaign, is something a good GM would do to engage their players and break up tension. Sunderfolk is clever about this, offering both secret naming prompts to individual players on their phones or dishing out naming opportunities in town. In my party’s campaign, healing statues are named Neatos, the town bridge is Seagull Murder (a misremembered, obscure Peace Bridge reference), and the beetle we rescued is named, as it was during my preview, Michael Keaton. It’s fun to build your own stupid world out of goofy names, something too few games provide.

Individual phone controls give the game a chance to pull off a few other tricks, like telling only certain players that a certain enemy looks like it’s carrying great loot. If Sunderfolk added even more of this, I would not mind at all.

I’m generally enjoying the combat, difficulty ramp (on the default setting), and upgrade paths of the characters. After three or four sessions, your character has much more move variety, and items and weapons are more useful and varied. The town of Arden, while overly chatty, has more to offer. It feels like a game that has had its pacing and onboarding fine-tuned.

But I have nits to pick:

  • There are cheap but great upgrades you can easily miss in town, like tavern meals and temple fate cards
  • Enemy variety feels slightly lacking in the first act
  • Some things, like mission selection, demand all-party agreement; perhaps the game could figuratively flip a coin when parties are divided
  • Everyone in my party has accidentally skipped an attack once or twice, despite an “Are you sure?” prompt
  • Movement traces and signaling could be clearer, as we have all also wasted hexes and shoved each other around

A very human computer game

You and your friends deal with a lot more stuff as Sunderfolk goes on: Boom Shrooms, loot, pits, explosives, and lots of little coin piles.

Credit: Kevin Purdy

It’s been hard to be overly critical of a game that has all but forced me to log off and talk to friends for a couple hours each week. The downsides of Sunderfolk have mostly been the same as those of playing any tabletop game with humans: waiting, expertise imbalance, distraction, and someone’s dog needing attention.

Beyond that, I think Sunderfolk is a success at what it set out to do: Put the cardboard, cards, and dice on the screen and make it easier for everyone to show up. It won’t replace the traditional game night, but it might bring more people into it and remind people like me why it’s so good.

This post was updated at 10:45 a.m. with a note about how remote play can work.


Kevin is a senior technology reporter at Ars Technica, covering open-source software, PC gaming, home automation, repairability, e-bikes, and tech history. He has previously worked at Lifehacker, Wirecutter, iFixit, and Carbon Switch.


Recap: Wheel of Time’s third season balefires its way to a hell of a finish

Andrew Cunningham and Lee Hutchinson have spent decades of their lives with Robert Jordan and Brandon Sanderson’s Wheel of Time books, and they previously brought that knowledge to bear as they recapped each episode of the first and second seasons of Amazon’s WoT TV series. Now we’re back in the saddle for season 3—along with insights, jokes, and the occasional wild theory.

These recaps won’t cover every element of every episode, but they will contain major spoilers for the show and the book series. We’ll do our best to not spoil major future events from the books, but there’s always the danger that something might slip out. If you want to stay completely unspoiled and haven’t read the books, these recaps aren’t for you.

New episodes of The Wheel of Time season three will be posted for Amazon Prime subscribers every Thursday. This write-up covers the season three finale, “He Who Comes With the Dawn,” which was released on April 17.

Lee: Wow. That was… a lot.

One of the recurring themes of our recaps across seasons has been, “Well, I guess we’re going to have to give up on seeing $SEMI_MAJOR_BOOK_SETTING_OR_EVENT on screen because of budget or time or narrative reasons,” and we’ve had to let go of a lot of stuff. But this episode kicks off with a flashback showing Elaida walking out of a certain twisted redstone doorframe, looking smug and fingering a bracelet. Sharp-eyed viewers might have spotted this doorway in the background of the season premiere, when the Black Ajah loots the Tower’s ter’angreal storeroom, and now in true Chekhov’s Gun fashion, the doorway comes ’round again—and not just this one, because like many things in the Wheel of Time, the doorways come in a binary set.

We surely owe show-watchers a very quick recap of the Finn—and I believe we glossed over a scene in an earlier episode where the boys are actually playing the snakes-and-foxes game that’s based on these horrifying fae-folk—but before we do that, let’s take a breath and look at what else we’ve got in the episode. Closure! (Well, some.) Balefire! Blocks breaking! Rand pulling a Paul Atreides and making it rain on Dune! I mean, uh, in the Three-Fold Land! And many other things!

According to the book, this Cat-in-the-Hat-looking mfer’s clothes are made of human flesh. Creepy.

Credit: Prime/Amazon MGM Studios

Andrew: I found this episode less than satisfying after last week’s specifically because of that grab-bag approach. There is some exciting, significant, season finale-style stuff happening here, but it’s also one of those piece-moving episodes with scene after scene of setup, setup, setup without a ton of room for payoff. Setup for a fourth season that, as of this writing, we still don’t know whether we’re getting!

So a number of things just feel rushed, most significantly Rand’s hard turn on Lanfear after a cursory attempt to coax her back to the side of the Light, and the existence of balefire as a concept. I actually love how the show visualizes it—it’s essentially a giant death laser that melts you out of the Pattern so thoroughly that it doesn’t just kill you, it also erases the last few seconds of your existence, represented here as a little shadow of a person that rewinds a bit before dissipating. The books use balefire extensively as a get-out-of-jail-free card for certain major character deaths, so it really feels like something that needs a little more preamble than it gets here.

Lee: Definitely hear you on the Rand and Lanfear stuff—though I think I was so excited by the things I cared about that I wasn’t really paying a lot of attention to the things I didn’t. And Rand & Moiraine & Lanfear are kind of at the bottom of my list of things I’m paying attention to as we slide into the finish—yeah, the Car’a’carn is Car’a’carning and Lanfear is Lanfear’ing.

Balefire looks a little Ghostbusters-y, but I definitely wouldn’t want to get hit with any.

Credit: Prime/Amazon MGM Studios

Andrew: It’s hard to know where to start with the rest of it! There are some recreations of book events that happen roughly where they’re “supposed” to in the story. There are recreations of book events that have been pulled way forward to save some time. There are things that emphatically don’t happen in the books, also done at least partly in the interests of time. And there’s at least one thing that felt designed specifically to fake out book-readers.

What to dig into first?

Lee: The fake-out! Let’s jump in there. The books make a big deal about Rand needing a teacher for him to get good at channeling, and it can’t be a female Aes Sedai (as the oft-repeated bit about “a bird cannot teach a fish to swim” makes clear). It seemed like it might be poor neglected Logain (remember him?), but now the show makes it clear that the man on the spot is instead going to be Sammael—and then Moghedien comes along and puts all of Sammael’s insides on the outside. Soooooo… I guess Sammael is off the board.

Sammael (center) appears to be about as dead as Siuan. So much for that plotline.

Credit: Prime/Amazon MGM Studios

Andrew: Yup! We still have one Forsaken missing, by my count—there are eight in total in the show’s world, and we’ve seen five and had two more referenced by name. So the big open question is whether the eighth is the Forsaken who does end up in the Rand-teacher role in the books. I feel like the show wouldn’t have spent so much time setting up “Rand needs a teacher” without then bothering to follow up on it in some way, but this episode wants to tease people who are asking that question rather than answering it. Fair enough!

Sammael’s early death (pulled forward from book seven) has its own story reverberations. In the books he’s one of a few Forsaken who set themselves up as heads of state, and Rand has to run around individually defeating them and bringing all of these separate kingdoms together in time for the Last Battle (this is less exciting than it sounds, because it takes forever and requires endless patience for navigating the politics of each region).

It seems, increasingly, that we may just be skipping over a bunch of that stuff. That was already implied by the downplaying of Cairhienin politicking that we got on screen in season two, and I tend to see “putting all of Sammael’s blood on the outside” as another possible nod in that direction. As ever with this show, “knowing how it goes in the books” only gives us a limited amount of insight into what the show is going to do.

Lee: I’m liking it. I consider Rand’s world-unifying to be one of the core components of The Slog that we discussed last week, and I think anything that greases the skids on that entire plotline is unequivocally a good thing—that’s also about where I start skipping entire chapters if the word “Elayne” appears in them (trust me on this, show-watchers who might become book-readers: Elayne spends thousands of pages playing the most boring version of the Game of Thrones imaginable, and we suffer through every single interminable import/export discussion with her).

Speaking of Game of Thrones—at least in the sense of killing off characters and potentially shortening The Slog—Siuan’s dead! And probably not in a “can be fixed” kind of way, since we very clearly see her head separated from her body, and Moiraine gasps out confirmation. This one kind of shook me, since Siuan has a big major role to play in a certain big major thing that happens several books hence—but the more I think about it, the more this feels like the same kind of narrative belt-tightening that brought us Loial’s death last episode. Because up until that certain big major thing happens, Siuan spends a lot of her post-Amyrlin time as a scullery maid and underpants-washer. I think we can transplant that certain big major thing onto one of a half-dozen other characters and lose nothing. At least…I think. What about you?

Siuan (center) has passed on. She is no more. She has ceased to be.

Credit: Prime/Amazon MGM Studios

Andrew: Yeah, I mean, not nothing, exactly. Every book character we have lost on the show has done stuff that I liked in the books that is now probably not going to happen. Complaining about The Slog aside, people like these books in part because they successfully build a super-dense world inhabited by a million named characters who all have Moments. Post-Amyrlin Siuan’s journey is about humility, finding happiness, and showing that the literal One Power is not the only kind of power there is to wield; it’s not always thrilling, but I won’t say it’s of zero narrative value.

And even when discussing The Slog, part of the reason it was so infuriating is because you and I were reading these as they were coming out. If you wait three years for a book, and then it comes out and nothing happens: that’s maddening! It is also not a problem that exists for modern readers or re-readers, now that the books have been done and dusted for over a decade. My assessment of Knife of Dreams, the series’ 11th book and the last one written entirely by Jordan, went way up on my last re-read because I was able to experience it without also having to experience the bookless years before and after. (It also made me newly sad that Jordan wasn’t able to conclude the story himself, as someone who finds the Sanderson-assisted books a bit clunky and utilitarian.)

All of that being said! I agree that from this point forward in the story, Siuan is not a load-bearing character in the way that Rand or Egwene or the others are. You do also get the sense that the show wants to surprise book-readers with something big every now and again. This particular death achieves that and also cuts down on what the show has left to adapt. I get why they did it! But I also sympathize with people who will miss her.

Now that she’s Amyrlin, Elaida (center) gets to wear the biggest hat of all.

Credit: Prime/Amazon MGM Studios

Lee: Let’s pivot, because I can’t wait to discuss Mat’s journey into Finn-land—one of the most important things that happens to his character in the books. I was pretty convinced that we simply weren’t going to get any of this in the show—that the Aelfinn and Eelfinn would be too outside what Amazon is willing to pay for. And yet, there are our two twisted redstone doorways. They’re repositioned somewhat from their book locations, but in a believable fashion. We have no idea what Elaida might have been doing in the doorway in the bowels of the White Tower—presumably she visited the snake-like Aelfinn (and the subtitles confirm this), which leaves Mat visiting the fox-like Eelfin.

The show has been dropping hints about this all season, from flashing us a shot of the first doorway in episode one, to actually showing the “snakes and foxes” tabletop game being played, and finally, here we are—while hunting for the control necklace in the Panarch’s palace in Tanchico, Mat steps through the doorway and… gets three wishes from a horrifying BDSM furry?

Break it down for us, Andrew. What the hell are we looking at?

Andrew: When you enter through these doors, the Finn give you stuff! The Aelfinn give you knowledge, by answering three questions. And the Eelfinn give you Things, both tangible and intangible, by granting three wishes. Exactly what these people are, where they live, why they have this arrangement with anyone who enters through the doorways: even in a series obsessed with overexplaining things, these are “don’t worry about it, that’s just how it is” questions. What you need to know is that the Aelfinns’ answers are often cryptic and open to interpretation, and the Eelfinns’ wish-granting is hyper-literal and comes with, uh, strings attached, as Mat quickly discovers.

Mat getting his things from the Eelfinn is essentially the moment he becomes the Mat he is for the rest of the story, like Perrin’s wolf powers or Egwene’s dream-walking or Rand’s channeling. So it’s pivotal! What did you think of how the show handled it?

Set Sjöstrand as Rand’s Shaido rival Couladin (center), giving off real Great Value Brand Khal Drogo energy here.

Credit: Prime/Amazon MGM Studios

Lee: I thought it was pretty fantastic! We get to see Mat’s foxhead medallion—granted in response to his screaming about how sick he is of being “bollocked about by every bloody magic force on this bloody planet.” But more importantly—possibly the most important thing of all to a certain class of book reader!—is that we also finally get to see the weapon that will define Mat both in combat and out for the entire rest of the series. That’s right, kids, it’s an actual-for-real Ashandarei—and Mat’s hanging from it, just like in the books! Well, sort of. Sort of somewhat similarly to the books!

Mat is being aligned and equipped very well now to head toward his destiny. In fact, after this much of a build-up, the most Wheel of Time-esque thing to happen now would be for him to be completely absent from season four. Ell-oh-ell.

A bargain made, a price is paid. It’s a little hard to make out, but you can clearly see Mat’s (center) Ashandarei stabbed into the top of the doorframe—just follow the rope.

Credit: Prime/Amazon MGM Studios

Andrew: The Tanchico plotline is also kind of wrapped up here in abrupt fashion. In essence, our heroes fail. Not only do Moghedien and Liandrin manage to escape with all the parts of the collar they need to corral and control the Dragon Reborn, but they also agree to team up so they can beat the other Big Bads and become the biggest bads of all. I cannot see this ending well for either of them, but Kate Fleetwood’s Liandrin is such an unhinged presence on this show that I’m glad she’s sticking around.

Our heroes don’t walk away entirely empty-handed, I suppose. Thom tells Elayne that the two of them actually know each other, and that “Lord Gaebril” is a Forsaken and a usurper whom she hasn’t truly known her whole life. And Nynaeve gets pitched into the sea, where a near-death experience dissolves the block that has been keeping her from channeling freely (the show doesn’t say this overtly, but this is only lightly altered from a similar sequence that happens in book seven or eight, I think).

Nynaeve (center) doing her best Charlton Heston impression.

Credit: Prime/Amazon MGM Studios

Lee: Right, I believe Nynaeve’s block gets busted in book seven—I remember because when I started reading the series, that was the latest available book and the event stuck out. I very much like bringing it forward, too. In the books, keeping the block around makes sense narratively and serves a solid set of purposes; in the show, it was starting to feel less like a legitimate plot device and more like a bad storytelling crutch. It has served its purpose, and it’s time to get rid of it and get on with things.

(Though it is kind of funny to note that Liandrin was the one trying to help Nynaeve break the block in the show a couple of seasons ago. Looks like Liandrin finally found a method that works! The results, though, will not be what she expects.)

The foxhead medallion—one of the three items that come to define Matrim Cauthon (center).

Credit: Prime/Amazon MGM Studios

Andrew: The show has set us all up to converge in Tear in season four, essentially going backwards in the story and doing parts of book three; my guess would be that, if it’s still identifiable as an adaptation of any particular Wheel of Time book, we see parts of books five and maybe six mixed in there, too. But all of that is contingent on the show getting another season, and for the first time going into a WoT finale, we aren’t actually sure if that’s happening, right?

Lee: Ugh, yeah, still no word on the next season, which sucks, because this one was so damn good. We wrap in the desert, where Rand has darkened the skies (enough to be seen all over the world!) and brought rain. Everyone looks on portentously. The Stone of Tear and the sword within it (Callandor! It’s the sword in the stone!) beckon. We just need the all-swallowing monster that is Amazon to spare some pocket change to make it happen.

Rand (center-right) summons the rains.

Credit: Prime/Amazon MGM Studios

Andrew: I’ve been worried about this renewal. Dramas like this just don’t get as many seasons as they would have in eras of TV gone by, and we’re several years past the end of streaming TV’s blank check era (unless you’re Apple TV+, I guess). This season has earned a lot of praise from more people than us—it’s got a higher Rotten Tomatoes score than either of the previous seasons, and higher than the second season of Rings of Power.

But it also doesn’t seem like Wheel of Time has become the breakout crossover smash-hit success that Jeff Bezos had in mind when he demanded his own Game of Thrones all those years ago. It’s expensive, and shows get more expensive the longer they run, as the people in front of and behind the camera negotiate raises and contract renewals.

I would love to see this get a fourth season. The third season had enough great stuff in it that I would be legitimately sad to see it canceled now; that’s more attachment than I felt at the end of the first or second seasons. How ’bout you?

“And how can this be? For he is the Kwisatz Haderach!” I’m sorry, I’m sorry, no more Dune jokes.

Credit: Prime/Amazon MGM Studios

Lee: I’ve said it a bunch, and I’ll say it again: This has been the season where the show found itself. I have every confidence that the next few seasons—if they’re allowed to exist—are going to kick ass.

But this is 2025, the year all dreams die. Perhaps this show, too, is a dream—one from which we are fated to wake sooner, rather than later.

I suppose we’ll know shortly. Until then, dear readers, may you always find water and shade, and may the hand of the Creator shelter you all. And also perhaps knock some sense into Bezos.



AI #112: Release the Everything

OpenAI has upgraded its entire suite of models. By all reports, they are back in the game for more than images.

GPT-4.1 and especially GPT-4.1-mini are their new API non-reasoning models. All reports are that GPT-4.1-mini especially is very good.

o3 is the new top of the line ChatGPT reasoning model, with o3-pro coming in a few weeks. Reports are that it too looks very good, even without us yet taking much advantage of its tool usage. If you have access, check it out. Full coverage is coming soon. There’s also o4-mini and o4-mini-high.

Oh, they also made ChatGPT memory cover all your conversations, if you opt in, and gave us a version of Claude Code called Codex. And an update to their preparedness framework that I haven’t had time to examine yet.

Anthropic gave us (read-only for now) Google integration (as in Gmail and Calendar to complement Drive), and also a mode known as Research, which would normally be exciting but this week we’re all a little busy.

Google and everyone else also gave us a bunch of new stuff. The acceleration continues.

Not covered yet, but do go check them out: OpenAI’s o3 and o4-mini.

Previously this week: GPT-4.1 is a Mini Upgrade, OpenAI #13: Altman at TED and OpenAI Cutting Corners on Safety Testing

  1. Language Models Offer Mundane Utility. But doctor, you ARE ChatGPT!

  2. Language Models Don’t Offer Mundane Utility. Cuomo should have used o3.

  3. Huh, Upgrades. ChatGPT now has full memory across conversations.

  4. On Your Marks. A new benchmark for browsing agents.

  5. Research Quickly, There’s No Time. Just research. It’s cleaner. Check your email.

  6. Choose Your Fighter. Shoutouts to Google-AI-in-search and Mistral-Small-24B?

  7. Deepfaketown and Botpocalypse Soon. Building your own AI influencer.

  8. The Art of the Jailbreak. ChatGPT can now write its own jailbreaks.

  9. Get Involved. Study with UT Austin, or work for Ted Cruz. We all make choices.

  10. Introducing. Google offers agent development kit, OpenAI copies Claude Code.

  11. In Other AI News. Oh no, please, not another social network.

  12. Come on OpenAI, Again? Funny what keeps happening to the top safety people.

  13. Show Me the Money. Thinking Machines and SSI.

  14. In Memory Of. Ways to get LLMs real memory?

  15. Quiet Speculations. What even is AGI anyway, and other questions.

  16. America Restricts H20 Sales. We did manage to pull this one out.

  17. House Select Committee Report on DeepSeek. What they found was trouble.

  18. Tariff Policy Continues To Be How America Loses. It doesn’t look good.

  19. The Quest for Sane Regulations. Dean Ball joins the White House, congrats man.

  20. The Week in Audio. Hassabis, Davidson and several others.

  21. Rhetorical Innovation. No need to fight, they can all be existential dangers.

  22. Aligning a Smarter Than Human Intelligence is Difficult. Working among us.

  23. AI 2027. Okay, fair.

  24. People Are Worried About AI Killing Everyone. Critch is happy with A2A.

  25. The Lighter Side. The numbers are scary. The words are scarier.

Figure out what clinical intuitions convert text reports to an autism diagnosis. The authors were careful to note this was predicting who would be diagnosed, not who actually has autism.

Kate Pickert asserts in Bloomberg Why AI is Better Than Doctors at the Most Human Part of Medicine. AI can reliably express sympathy to match the situation, is always there to answer, and doesn’t make you feel pressured or rushed. Even the gung ho doctors are still saying things like ‘AI is not going to replace physicians, but physicians who know how to use AI are going to be at the top of their game going forward’ and saying how it ‘will allow doctors to be more human,’ and the article calls that an ‘ideal state.’ Isn’t it amazing how every vision of the future picks some point where it stops?

The US Government is deploying AI to clean up its personnel records and correct inaccurate information. That’s great if we do a good job.

Translate dolphin vocalizations?

Pin down where photographs were taken. It seems to be very good at this.

Henry: ten years ago the CIA would have gotten on their knees for this. every single human has just been handed an intelligence superweapon. it’s only getting stranger

This may mean defense will largely beat offense on deepfakes, if one has a model actually checking. If I can pinpoint exact location, presumably I can figure out when things don’t quite add up.

Andrew Cuomo used ChatGPT for his snoozefest of a vacuous housing plan, which is fine except he did not check its work.

Hell Gate: Andrew Cuomo used ChatGPT to help write the housing plan he released this weekend, which included several nonsensical passages. The plan even cites to ChatGPT on a section about the Rent Guidelines Board.

He also used ChatGPT for at least two other proposals. It’s actively good to use AI to help you, but this is not that. He didn’t even have someone check its work.

If New York City elects Andrew Cuomo as mayor we deserve what we will get.

What else isn’t AI doing for us?

Matthew Yglesias: AI is not quite up to the task of “make up a reason for declining a work-related invitation that will stand up to mild scrutiny, doesn’t make me sound weird, and is more polite than ‘I don’t want to do it.’”

Tyler Cowen: Are you sure?

I think AI is definitely up to that task to the extent it has sufficient context to generate a plausible reason. Certainly it can do an excellent job of ‘use this class of justification to generate a maximally polite and totally non-weird reply.’

As usual, the best way to not get utility is not to use them, fraudulent company edition.

Peter Wildeford: I guess fake it until you make it, do things that don’t scale, etc. only works to a point.

Nico: Ship fast, break things, (go to jail)

I don’t think ‘create human call centers in order to get market share and training data to then make them into AI call centers’ is even a terrible startup idea. The defrauding part did run into a little trouble.

A technical analysis of some fails by Claude Plays Pokemon, suggesting issues stemming from handling and management of long context. This both suggests ways to improve Claude in general, and ways one could improve the scaffolding and allow Claude to play superior Pokemon (without ‘cheating’ or otherwise telling it about the game in any specific way).
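For flavor, here is a minimal sketch of one scaffolding improvement along those lines: cap the visible history and fold evicted turns into a rolling summary, so old context doesn’t crowd out the current game state. The `llm` callable and the prompts are hypothetical stand-ins, not the harness the analysis describes:

```python
# Hypothetical scaffold: bounded history plus a model-maintained summary.
MAX_TURNS = 40

def build_context(summary: str, turns: list[str]) -> str:
    return f"Summary of earlier play:\n{summary}\n\nRecent turns:\n" + "\n".join(turns)

def step(llm, summary: str, turns: list[str], observation: str):
    turns = turns + [observation]
    if len(turns) > MAX_TURNS:
        evicted, turns = turns[:-MAX_TURNS], turns[-MAX_TURNS:]
        # Compress old turns instead of dropping them on the floor.
        summary = llm("Update this summary with the new events:\n"
                      + summary + "\n" + "\n".join(evicted))
    action = llm(build_context(summary, turns) + "\nChoose the next action.")
    return summary, turns, action
```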

Apple’s demo of Siri’s new abilities to access users’ emails and find real-time flight data and plot routes in maps came as news to the people working on Siri. In general, MacRumors paints a picture of a deeply troubled and confused AI effort at Apple, with eyes very much not on the ball.

ChatGPT memory now extends to the full contents of all your conversations. You can opt out of this. You can also do incognito windows that won’t interact with your other chats. You can also delete select conversations.

Noam Brown: Memory isn’t just another product feature. It signals a shift from episodic interactions (think a call center) to evolving ones (more like a colleague or friend).

Still a lot of research to do but it’s a step toward fundamentally changing how we interact with LLMs.

This shift has its disadvantages. There’s a huge freedom and security and ability to relax when you know that an interaction won’t change things overall. When you interact with a human, there’s always this kind of ‘social calculator’ in the back of your brain whether you’re conscious of it or not, and oh my is it a relief to turn it off. I hate that now when I use so many services, I have to worry in that same way about ‘what my actions say about me’ and how they influence what I will see in the future. It makes it impossible to fully relax. Being able to delete chats helps, but not fully.

My presumption is you still very much want this feature on. Most of the time, memory will be helpful, and it will be more helpful if you put in effort to make it helpful – for example it makes sense to offer feedback to ChatGPT about how it did and what it can do better in the future, especially if you’re on $200/month and thus not rate limited.

I wonder if it is now time to build a tool to let one easily port their chat histories between various chatbots? Presumably this is actually easy, you can copy over the entire back-and-forth, with user and assistant tags, and paste it in, saying ‘this is so you can access these other conversations as context’ or what not?
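A minimal sketch of what that port could look like, assuming a simple role/content export format (this is not any provider’s actual export schema):

```python
# Hypothetical: flatten an exported chat history into a tagged transcript
# that can be pasted into a different chatbot as context.
def to_transcript(messages: list[dict]) -> str:
    lines = ["[Prior conversations, provided as context only:]"]
    for m in messages:
        lines.append(f"<{m['role']}>\n{m['content']}\n</{m['role']}>")
    return "\n".join(lines)

history = [
    {"role": "user", "content": "Remind me what we decided about the trip?"},
    {"role": "assistant", "content": "Fly into Osaka, train to Kyoto on day three."},
]
print(to_transcript(history))
```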

Anna Gat is super gung ho on memory, especially on it letting ChatGPT take on the role of therapist. It can tell you your MBTI and philosophy and lead you to insights about yourself and take different points of view and other neat stuff like that. I am skeptical that doing this is the best idea, but different people work differently.

Like Sean notes, my wife uses my account too (I mean it’s $200/month!) and presumably that’s going to get a bit confusing if you try things like this.

Gemini 2.5 Pro was essentially rushed into general availability before its time, so we should still expect it to improve soon when we get the actual intended general availability version, including likely getting a thinking budget similar to what is implemented in Gemini 2.5 Flash.

Google upgrades AI Studio (oh no?). They list the key improvements as:

  1. New Starter Apps.

  2. Refined Prompting Interface, persistent top action bar for common tasks.

  3. Dedicated Developer Dashboard including API keys and changelog.

A quick look says what they actually did was mostly ‘make it look and feel more like a normal chat interface.’

Cohere gives us Embed 4.

My feedback is they don’t do a good job here explaining the value proposition, and what differentiates it from other offerings or from Embed 3. It seems like it is… marginally better at search and retrieval? But they don’t give me a way to feel what they uniquely enable, or where this will outperform.
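For anyone who hasn’t worked with embeddings, the generic value proposition looks like this in code: turn texts into vectors once, then rank by cosine similarity at query time. This toy uses random vectors as stand-ins; it is deliberately not Cohere’s API, whose exact signature I won’t vouch for:

```python
import numpy as np

def cosine_rank(query_vec: np.ndarray, doc_vecs: np.ndarray) -> np.ndarray:
    """Return document indices sorted best-match-first by cosine similarity."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    return np.argsort(-(d @ q))

rng = np.random.default_rng(0)
docs = rng.normal(size=(100, 256))                   # pretend: embed(corpus)
query = docs[42] + rng.normal(scale=0.1, size=256)   # near-duplicate of doc 42
print(cosine_rank(query, docs)[0])                   # -> 42
```

A better embedding model earns its keep by placing semantically related texts closer together, so that top index is right more often on real data.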

ChatGPT will have all your images in one place. A small thing, but a good thing.

Grok gives us Grok Studio, offering code execution and Google Drive support.

LM Arena launches a ‘search Arena’ leaderboard, Gemini 2.5 Pro is on top with Perplexity-Sonar-Reasoning-Pro (high) slightly behind on presumably more compute.

OpenAI introduces BrowseComp, a 1,266-question benchmark for browsing agents. From looking at the sample questions they provide, this is extremely obscure, inelegant trivia, except you’ll be allowed to use the internet? As in:

Give me the title of the scientific paper published in the EMNLP conference between 2018-2023 where the first author did their undergrad at Dartmouth College and the fourth author did their undergrad at University of Pennsylvania. (Answer: Frequency Effects on Syntactic Rule Learning in Transformers, EMNLP 2021)

I mean, yeah, okay, that is a test one could administer I suppose, but why does it tell us much about how good you are as a useful browsing agent?
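Mechanically, grading such a benchmark is the easy part; the agent is the hard part. A hypothetical harness sketch, using naive normalized exact-match (I haven’t checked how BrowseComp actually grades answers):

```python
# Hypothetical harness: run a browsing agent over question/answer pairs
# and score normalized exact matches. `agent` is a stand-in callable.
def normalize(s: str) -> str:
    return " ".join(s.lower().split())

def score(agent, qa_pairs: list[tuple[str, str]]) -> float:
    hits = sum(normalize(agent(q)) == normalize(a) for q, a in qa_pairs)
    return hits / len(qa_pairs)

qa = [("Title of the EMNLP 2021 paper described above?",
       "Frequency Effects on Syntactic Rule Learning in Transformers")]
print(score(lambda q: "Frequency Effects on Syntactic Rule Learning in Transformers", qa))
```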

When asking about ‘1 hour tasks’ there is a huge gap between ‘1 hour given you know the context’ versus ‘1 hour once given this spec.’

Charles Foster: Subtle point [made in this post]: there’s a huge difference between typical tasks from your job that take you 1 hour of work, and tasks that a brand new hire could do in their first hour on the job. Most “short” tasks you’ve done probably weren’t standalone: they depended on tons of prior context.

A lot of getting good at using LLMs is figuring out how, or doing the necessary work, to give them the appropriate context. That includes you knowing that context too.

How badly did Llama-4 go? This badly:

Casper Hansen: Llama 4 quietly dropped from 1417 to 1273 ELO, on par with DeepSeek v2.5.

Previously, Llama-4 took second place in Arena with an intentionally sloppified version optimized for Arena. That’s gone now, we are testing the actual Llama-4-Maverick, and shall we say it’s not going great.

Here’s a very bullish report on Gemini 2.5 Pro (prior to the release of o3):

Leo Abstract: there’s been a lot going on lately and i neglected to chime in earlier and say that gemini 2.5 pro absolutely crushes the private benchmarks i’d been using until now, in a new way and to a new extent.

i’m going to need a bigger benchmark.

Well, well, what do we have here.

Anthropic: Today we’re launching Research, alongside a new Google Workspace integration.

Claude now brings together information from your work and the web.

Research represents a new way of working with Claude.

It explores multiple angles of your question, conducting searches and delivering answers in minutes.

The right balance of depth and speed for your daily work.

Claude can also now connect with your Gmail, Google Calendar, and Docs.

It understands your context and can pull information from exactly where you need it.

Research is available in beta for Max, Team, and Enterprise plans in the United States, Japan, and Brazil.

Separately, the Google Workspace integration is now available for all paid plans.

Oh. My. God. Huge if true! And by true I mean good at job.

I’m excited for both features, but long term I’m more excited for Google integration than for research. Yes, this should 100% be Gemini’s You Had One Job, but Gemini is not exactly nailing it, so Claude? You’re up. Right now it’s read-only, and it’s been having trouble finding things and having proper access in my early tests, but I’m waiting until I try it more. Might be a few bugs to work out here.

Anthropic: We’ve crafted some prompt suggestions that help you quickly get insights from across your Google Workspace.

After turning it on, try: “Reflect on my calendar as if I was 100 years old looking back at this time.”

Peter Wildeford: 👀

Just tried “Claude Research”, it’s much faster (takes <1min) but much weaker than Gemini's and ChatGPT's "Deep Research" (fails to find key facts and sources that Gemini/ChatGPT do).

I think it’s a great replacement for a Google quick dive but not for serious research.

Claude integration with Google apps seems potentially awesome and I’d be curious how it compares against Gemini (which is surprisingly weak in this area given the home field advantage)

But alas “Claude for Google Drive is not approved by Advanced Protection” so can’t yet try it. Still, I expect to use this a lot.

Also LOL in the video how everyone compliments the meticulous sabbatical planning even though it was just a Claude copy+paste.

John Pressman says people are sleeping on Mistral-Small-24B, and in particular it speeds up his weave-agent project dramatically (~10x). Teortaxes asks about Reka-21B. There’s this entire other ecosystem of small open models I mostly don’t cover.

Liron Shapira is liking the new version of Google AI-in-search given its integration of web content and timely news. I’m not there yet.

The general case of this is my biggest complaint about Gemini 2.5 Pro.

Agus: Gemini 2.5 Pro seems to systematically refuse me when I ask it to provide probabilities for things, wut

Max Alexander: True human level capabilities.

An advertisement for tools to build an AI influencer on Instagram and OnlyFans. I mean, sure, why not. The problem is demand side, not supply side, as they say.

You can use AI to create bad new Tom & Jerry cartoons, I guess, if you want to?

Pliny found that ChatGPT’s new memory feature didn’t automatically jailbreak it directly, but ChatGPT did give Pliny a jailbreak prompt, and the prompt worked.

Join the office of Ted Cruz.

US Senate Employment Office: Chairman #TedCruz seeks a #conservative #counsel to join the #Republican staff of the #Senate #Commerce Committee to lead on #artificialintelligence #policy. Job referral #231518

In all seriousness this seems like a high leverage position for someone who understands AI and especially AI existential risk. Ted Cruz has some very large confusions about AI-related matters, and as a result he is attempting to do some highly damaging things. We also have our disagreements, but a lot of it is that he seems to conflate ethics and wokeness concerns with notkilleveryoneism concerns, and generally not to understand what is at stake or what there is to worry about. One can join his team, sincerely help him, and also help explain this.

If you do go for this one, thank you for your service.

Anthropic is looking for a genius prompt engineer for Model Behavior Architect, Alignment Fine Tuning.

Scott Aaronson is building an OpenPhil backed AI alignment group at UT Austin, prospective postdocs and PhD students in CS should apply ASAP for jobs starting as soon as August. You’ll need your CV, links to representative publications and two recommendation letters, you can email Chandra.

AI Innovation & Security Policy Workshop in Washington DC, July 11-13, apply by May 4th. All travel and accommodation expenses covered, great speaker lineup, target is US citizens considering careers in AI policy.

UK AISI is funding alignment research, you can fill out a 5-minute contract form.

80,000 Hours Podcast is making a strategic shift to focus on AGI, and looking to grow its team with a third host/interviewer (!) and a chief of staff, deadline is May 6.

Google presents the Agent Development Kit (ADK) (GitHub download, ReadMe).

  • Code-First Development: Define agents, tools, and orchestration logic for maximum control, testability, and versioning.

  • Multi-Agent Architecture: Build modular and scalable applications by composing multiple specialized agents in flexible hierarchies.

  • Rich Tool Ecosystem: Equip agents with diverse capabilities using pre-built tools, custom Python functions, API specifications, or integrating existing tools.

  • Flexible Orchestration: Define workflows using built-in agents for predictable pipelines, or leverage LLM-driven dynamic routing for adaptive behavior.

  • Integrated Developer Experience: Develop, test, and debug locally with a CLI and visual web UI.

  • Built-in Evaluation: Measure agent performance by evaluating response quality and step-by-step execution trajectory.

  • Deployment Ready: Containerize and deploy your agents anywhere – scale with Vertex AI Agent Engine, Cloud Run, or Docker.

  • Native Streaming Support: Build real-time, interactive experiences with native support for bidirectional streaming (text and audio).

  • State, Memory & Artifacts: Manage short-term conversational context, configure long-term memory, and handle file uploads/downloads.

  • Extensibility: Customize agent behavior deeply with callbacks and easily integrate third-party tools and services.

pip install google-adk
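For flavor, a minimal agent under this framework looks roughly like the following, a sketch based on the quickstart pattern in the ReadMe; the model name and exact parameters here are illustrative and may have drifted:

from google.adk.agents import Agent

def get_time(city: str) -> dict:
    # Plain Python functions can be handed to the agent as tools; this one is a stub.
    return {"city": city, "time": "09:00"}

root_agent = Agent(
    name="time_agent",
    model="gemini-2.0-flash",  # illustrative model name
    instruction="Answer questions about the current time in a city.",
    tools=[get_time],
)

You can then develop and test this locally with the CLI and visual web UI mentioned in the feature list.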

OpenAI offers us Codex CLI, a feature adopted from Claude. This is open source so presumably you could try plugging in Claude or Gemini. It runs from the command line and can do coding things or ask questions about files based on a natural language request, up to and including building complete apps from scratch in ‘full auto mode,’ which is network disabled and sandboxed to its directory.

Noam Brown (OpenAI): I now primarily use codex for coding. @fouadmatin and team did an amazing job with this!

I sympathize!

Austen Allred: An anecdotal survey of GauntletAI grads came to the consensus that staying on the cutting edge of AI takes about one hour per day.

Yes, per day.

How long it takes depends what goals you have, and which cutting edges they include. It seems highly plausible that ‘be able to apply AI at the full cutting edge at maximum efficiency’ is one hour a day. That’s a great deal, and also a great deal.

OpenAI is working on a Twitter-like social network. Unfortunately, I checked and the name Twitter is technically not available, but since when has OpenAI cared about copyright law? Fingers crossed!

Mostly they’re crossed hoping OpenAI does not do this. As in, the world says as one: please, Sam Altman, humanity is begging you, you do not have to do this. Then again, I love Twitter to death, currently Elon Musk is in charge of it, and if there is going to be a viable backup plan for it I’d rather it not be Threads or BlueSky.

OpenAI offers an update to its preparedness framework. I will be looking at this in more detail later, for now simply noting that this exists.

Anthropic, now that it has Google read-only integration and Research, is reportedly next going to roll out a voice mode, ‘as soon as this month.’

New DeepMind paper uses subtasks, and whether models use their capabilities towards a given goal, to measure goal-directedness. As we already knew, LLMs often ‘fail to employ their capabilities’ and are not ‘fully goal-directed’ at this time, although we are seeing them become more goal-directed over time. I note the goalpost move (not the paper’s fault!) from ‘LLMs don’t have goals’ to ‘LLMs don’t maximally pursue the goals they have.’

Well, this doesn’t sound awesome, especially on top of what else we learned recently. It seems we’ve lost another head of the Preparedness Framework. It does seem like OpenAI has not been especially prepared on these fronts lately. When GPT-4.1 was released we got zero safety information of any kind that I could find.

Garrison Lovely: 🚨BREAKING🚨 OpenAI’s top official for catastrophic risk, Joaquin Quiñonero Candela, quietly stepped down weeks ago — the latest major shakeup in the company’s safety leadership. I dug into what happened and what it means.

Candela, who led the Preparedness team since July, announced on LinkedIn he’s now an “intern” on a healthcare team at OpenAI.

A company spokesperson told me Candela was involved in the successor framework but is now “focusing on different areas.”

This marks the second unannounced leadership change for the Preparedness team in less than a year. Candela took over after Aleksander Mądry was quietly reassigned last July — just days before Senators wrote to Sam Altman about safety concerns.

Candela’s departure is part of a much larger trend. OpenAI has seen an exodus of top safety talent over the past year, including cofounder John Schulman, safety lead Lilian Weng, Superalignment leads Ilya Sutskever & Jan Leike, and AGI readiness advisor Miles Brundage.

When Leike left, he publicly stated that “safety culture and processes have taken a backseat to shiny products” at OpenAI. Miles Brundage cited publishing constraints and warned that no company is ready for artificial general intelligence (AGI).

With key safety leaders gone, OpenAI’s formal governance is crucial but increasingly opaque. Key members of the board’s Safety and Security Committee are gone, and the members of a new internal “Safety Advisory Group” (SAG) haven’t been publicly identified.

An OpenAI spox told me that Sandhini Agarwal has been leading the safety group for 2 months, but that information hadn’t been announced or previously reported. Given how much OpenAI has historically emphasized AI safety, shouldn’t we know who is in charge of it?

A former employee wrote to me, “Even while working at OpenAI, details about safety procedures were very siloed. I could never really tell what we had promised, if we had done it, or who was working on it.”

This pattern isn’t unique to OpenAI. Google still hasn’t published a safety report for Gemini 2.5 Pro, arguably the most capable model available, in likely violation of the company’s voluntary commitments.

Mira Murati’s Thinking Machines has doubled their fundraising target to $2 billion, and the team keeps growing, including Alec Radford. I expect them to get it.

Ilya Sutskever’s SSI now valued at $32 billion. That is remarkably high.

Matt Turck: Hearing rumors of massive secondaries in some of those huge pre-product AI rounds. Getting nervous that the level of risk across the AI industry is getting out of control. No technology even revolutionary can live up in the short term to this level of financial excess.

James Campbell: As far as I know, there are four main ways we could get LLM memory:

  1. We could simply use very long contexts, and the context grows over an instance’s “lifetime”; optionally, we could perform iterative compression or summarization.

  2. A state-space model that keeps memory in a constant-size vector.

  3. Each context is a “day.” Then, the model is retrained each “night” on the day’s data so that it has long-term knowledge of what happened (just as humans sleep).

  4. Retrieval Augmented Generation (RAG) on text or state vectors, and the RAG performs sophisticated operations such as reflection or summarization in the background. Perhaps reinforce the model to use the scaffold extremely skillfully.

Are there any methods I missed?

I’m surprised labs don’t take the “fine-tune user-specific instances” route. Infrastructurally, it might be hard, but I’m holding out hope that Thinky might do this.

Gallabytes: very bullish on 3, interested in how far lightweight approximations can go via 2, and super bearish on 1.

4 is orthogonal. Taking notes seems good.

I am also highly bearish on #1 and throwing everything into context; you’d be much better off in a #4 scenario at that point, unless I’m missing something. The concept in #3 is intriguing, and I’d definitely be curious to see it tried more. In theory you could also update the weights continuously, but I presume that would slow you down too much, which is presumably also why humans do it this way?
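To make option #4 concrete, here is a toy sketch of the scaffold shape (everything here is illustrative; a real system would summarize with an LLM call and retrieve via embeddings in a vector store, rather than this word-overlap stand-in):

class MemoryStore:
    # Toy note-taking memory: store summaries, retrieve by word overlap.
    def __init__(self):
        self.notes: list[str] = []

    def reflect(self, transcript: str) -> None:
        # A real system would have an LLM write a compressed summary here.
        self.notes.append(transcript)

    def retrieve(self, query: str, k: int = 3) -> list[str]:
        def overlap(note: str) -> int:
            return len(set(query.lower().split()) & set(note.lower().split()))
        return sorted(self.notes, key=overlap, reverse=True)[:k]

memory = MemoryStore()
memory.reflect("User prefers terse answers and is building a weave-agent.")
context = memory.retrieve("how should answers for this user be phrased?")
# `context` gets prepended to the model's prompt on the next turn.

The ‘reinforce the model to use the scaffold skillfully’ part is then a training question layered on top of this plumbing.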

Gideon Lichfield is mostly correct that ‘no one knows’ what the term ‘artificial general intelligence’ or AGI means. Mostly we have a bunch of different vague definitions at best. Lichfield does a better job than many of taking future AI capabilities seriously and understanding that everyone involved is indeed pointing at real things, and notices that “most of the things AI will be capable of, we can’t even imagine today.” Gideon does also fall back on several forms of copium, like intelligence not being general, or the need to ‘challenge conventional wisdom,’ or that to think like a human you have to do the things humans do (e.g. sleep (?), eat (?!), have sex (what?) or have exactly two arms (???)).

Vladimir Nesov argues that even if timelines are short and your work’s time horizon is long, that means your alignment (or other) research gets handed off to AIs, so any groundwork you can lay remains helpful.

Robin Hanson once again pitches that AI impacts will be slow and take decades, this time based on previous GPTs (general purpose technologies) taking decades. Sometimes I wonder about an alternative Hanson who is looking for Hansonian reasons AI will go fast. Claude’s version of this seemed uninspired.

Paul Graham: The AI boom is not just probably bigger than the two previous ones I’ve seen (integrated circuits and the internet), but also seems to be spreading faster.

It took a while for society to “digest” integrated circuits and the internet in the sense of figuring out all the ways they could be used. This seems to be happening faster with AI. Maybe because so many uses of intelligence are already known.

For sure there will be new uses of AI as well, perhaps more important than the ones we already know about. But we already know about so many that existing uses are enough to generate rapid growth.

Over 90% of the code being written by the latest batch of startups is written by AI. Sold now?

Another way of putting this is, yes being a GPT means that the full impact will take longer, but there being additional impact later doesn’t mean less impact soon.

Tyler Cowen says it’s nonsense that China is beating us, and the reason is AI, which he believes will largely favor America due to all AIs having ‘souls rooted in the ideals of Western Civilization,’ due to being trained primarily on Western data, and this is ‘far more radical’ than things like tariff rates and more important than China’s manufacturing prowess.

I strongly agree that AI likely strongly favors the United States (although mostly for other reasons), and that AI is going to be big, really big, no bigger than that, it’s going to be big. It is good to see Tyler affirm his belief in both of these things.

I will however note that if AI is more important than tariffs, then what was the impact of tariff rates on GDP growth again? Credible projections for RGDP growth for 2025 were often lowered by several percent on news of the tariffs. I find these projections reasonable, despite widespread anticipation that mostly the tariffs will be rolled back. So, what does that say about the projected impact of AI, if it’s a much bigger deal?

Also, Tyler seems to be saying the future is going to be shaped primarily by AIs, but he’s fine with that because they will be ‘Western’? And thus it will be a triumph of ‘our’ soft power? No, they will soon be highly alien, and the soft power will not be ours. It will be theirs.

(I also noticed him once again calling Manus a ‘top Chinese AI model,’ a belief that at this point has to be a bizarre anachronism or something? The point that it was based on Claude is well taken.)

We are going to be at least somewhat smarter about the selling China AI chips part. It turns out this time we didn’t actually fully sell out for a $1 million Mar-a-Lago dinner.

Samuel Hammond (last week, infamously and correctly, about our being about to let China buy H20s): what the actual fuck.

Thomas Hochman: Finally, an answer to the question: What if we made our AI companies pay a tariff on a chip that we designed but also ended export control restrictions.

Peter Wildeford: Why does banning the H20 matter?

Current US chip export controls focus on China’s ability to TRAIN frontier AI models.

But it’s now INFERENCE that is becoming the central driver of AI innovation through reasoning models, agentic AI, and automated research.

By restricting the H20 NOW, the US can limit China’s inference hardware accumulation before it becomes the dominant compute paradigm.

Matthew Yglesias: We’re doing a form of trade war with China where half the stuff normal Americans buy will get more expensive, but China still gets access to the leading edge technology they need to dominate AI and whatever else.

Jason Hausenloy (link to full article): Exporting H20 Chips to China Undermines America’s AI Edge.

Good news, everyone! We did it. We restricted, at least for now, sales of the H20.

We know it will actually impact chip flows because Nvidia filed to expect $5.5 billion in H20-related charges for Q1 (for now) and traded 6% down on the news.

Last week’s announcement may have been a deliberate leak as an attempt to force the administration’s hand. If so, it did not work.

We still have to sustain this decision. It is far from final, and no doubt Nvidia will seek a license to work around this, and will redesign its chips once again to maximally evade our restrictions.

Also, they will look the other way while selling chips elsewhere. Jensen is not the kind of CEO who cooperates in spirit. Which leads to our other problem, that we are decimating BIS rather than strengthening BIS. What good are our restrictions if we have no way to enforce them?

Ben Thompson is the only person I’ve seen disagree with the restrictions on the H20. That position was overdetermined because he opposes all the export controls and thinks keeping people locked in the CUDA ecosystem is more important than who has better AIs or compute access. He consistently has viewed AI as another tech platform play, as I’ve discussed in the past, especially when dissecting his interview with Altman, where he spent a lot of it trying to convince Altman to move to an advertising model.

Ben’s particular claim was that the H20 is so bad that no one outside China would want them, thus they have to write off $5.5 billion. That’s very clearly not the case. Nvidia has to write off the $5.5 billion as a matter of accounting, whether or not they ultimately sell the chips in the West. There are plenty of buyers for H20s, and as my first test of o3, I asked and it confirmed that the chips would absolutely sell in Western markets well above their production costs; it estimates ~$10 billion total. Which means that not only does China get less compute, we get more compute.

Nvidia is definitely worth less than if they were allowed to sell China better AI chips, but they can mostly sell whatever chips they can make. I am not worried for them.

How is BIS enforcement going right now?

Aidan O’Gara: So excited to see the next batch of models out of Malaysia.

Kakashii: Nvidia Chip Shipments to Malaysia Skyrocket to Record Highs Despite U.S. Warnings — March 2025 Update

On March 23, 2025, the Financial Times reported that U.S. officials asked Malaysia to monitor every shipment entering the country when it involves Nvidia chips. “[The U.S. is] asking us to make sure that we monitor every shipment that comes to Malaysia when it involves Nvidia chips”, Malaysian Trade Minister Zafrul Aziz said.

But did Malaysia really start the monitoring and crackdown? Let’s assume U.S. officials approached Malaysian officials about a week before the FT published the report—which means Malaysia had half a month to start monitoring. That’s plenty of time to change the picture of the flows. I assume.

So what did they do when the Singapore tunnel was about to close? Yep, you guessed right—boost Malaysia’s role in this routing.

Malaysia is now dominating the Nvidia chip flow into Asia, and March has officially taken the crown as the biggest month ever for shipments to the country.

Let’s talk numbers:

2022: $817 million

2023: $1.276 billion

2024: $4.877 billion — an increase of almost 300% YoY

2025:

January: $1.12 billion (!) — nearly 700% year-over-year increase (!)

February: $626.5 million

March: a record-breaking $1.96 billion (!) — an astonishing 3,433% increase from 2023 to 2025 (!)

Total GPU flow from Taiwan to Malaysia in Q1 2025? $3.71 billion. If we take Nvidia’s estimated total revenue for Q1, Malaysia’s shipments alone make up almost 10% of the company’s estimated revenue in Q1 (!).

As in, Nvidia is getting 10% of their Q1 2025 revenue selling chips to China in direct violation of the export controls. And we are doing approximately nothing about it.

In related news, The House Select Committee on the CCP has issued a report entitled “DeepSeek Unmasked: Exposing the CCP’s Latest Tool For Spying, Stealing, and Subverting U.S. Export Control Restrictions.”

I am glad they are paying attention to the situation, especially the issues with American export controls. Many of their proposed interventions are long overdue, and several proposals are things we’d love to have but that we need to do work on to find a method of implementation.

What is dismaying is that they are framing AI danger entirely as a threat from the sinister PRC. I realize that is what the report is about, but one can tell that they are viewing all this purely in terms of our (very real and important to win) competition with the PRC, to the exclusion of other dangers.

This is clearly part of a push to ban DeepSeek, at least in terms of using the official app and presumably API. I presume they aren’t going to try and shut down use of the weights. The way things are presented, it’s not clear that everyone understands that this is mostly very much not about an app.

The report’s ‘key findings’ are very much in scary-Congressional-report language.

Key findings include:

  • Censorship by Design: More than 85% of DeepSeek’s responses are manipulated to suppress content related to democracy, Taiwan, Hong Kong, and human rights—without disclosure to users.

  • Foreign Control: DeepSeek is owned and operated by a CCP-linked company led by Liang Wenfeng and ideologically aligned with Xi Jinping Thought.

  • U.S. User Data at Risk: The platform funnels American user data through unsecured networks to China, serving as a high-value open-source intelligence asset for the CCP.

  • Surveillance Network Ties: DeepSeek’s infrastructure is linked to Chinese state-affiliated firms including ByteDance, Baidu, Tencent, and China Mobile—entities known for censorship, surveillance, and data harvesting.

  • Illicit Chip Procurement: DeepSeek was reportedly developed using over 60,000 Nvidia chips, which may have been obtained in circumvention of U.S. export controls.

  • Corporate Complicity: Public records show Nvidia CEO Jensen Huang directed the company to design a modified chip specifically to exploit regulatory loopholes after October 2023 restrictions. The Trump Administration is working to close this loophole.

They also repeat the accusation that DeepSeek was distilling American models.

A lot of these ‘key findings’ are very much You Should Know This Already, presented as something to be scared of. Yes, it is a Chinese company. It does the mandated Chinese censorship. It uses Chinese networks. It might steal your user data. Who knew? Oh, right, everyone.

The more interesting claims are the last two.

(By the way, ‘DeepSeek was developed?’ I get what you meant to say, but: Huh?)

We previously dealt with claims of 50k Nvidia chips, now it seems it is 60k. They are again citing SemiAnalysis. It’s definitely true that this was reported, but it seems unlikely to actually be true. Also note that their 60k chips here include 30k H20s, and the report makes clear that by ‘illegal chip’ procurement they are including legal chips that were designed by Nvidia to ‘skirt’ export controls, and conflating this with potential actual violations of export restrictions.

In this sense, the claim on Corporate Complicity is, fundamentally, 100% true. Nvidia has repeatedly modified its AI chips to be technically in compliance with our export restrictions. As I’ve said before and said above, they have zero interest in cooperating in spirit and are treating this as an adversarial game.

This also includes exporting to Singapore and now Malaysia in quantities very obviously too large for anything but the Chinese secondary market.

I don’t think this approach is going to turn out well for Nvidia. In an iterated game where the other party has escalation dominance, and you can’t even fully meet demand for your products, you might not want to constantly hit the defect button?

Kristina Partsinevelos: Nvidia response: “The U.S. govt instructs American businesses on what they can sell and where – we follow the government’s directions to the letter” […] “if the government felt otherwise, it would instruct us.”

Daniel Eth: Hot take but this is actually really dumb of Nvidia. USG has overarching goals here – by following the letter of the law but not the spirit, Nvidia is risking getting caught in the cross hairs of legal updates (as they just did) and giving up the opportunity to create good will.

If I was Nvidia I would be cooperating in spirit. And of course I’d be asking for quite a lot of consideration in exchange in various ways. I’d expect to get it. Alas.

The report recommends two sets of things. First, on export controls:

  1. Improve funding for BIS. On this I am in violent agreement.

  2. Further restrict the export controls, to include the H20 and also manufacturing equipment. We have not banned selling tools and subcomponents. This is potentially enabling Huawei to give China quite a lot of effective chips. Ben Thompson mentions this issue as well. So again, yes, agreed.

  3. Impose remote access controls on all data centers, compute clusters, and models trained with the use of US-origin GPUs and other U.S.-origin data center accelerants, including but not limited to TPUs. I’d want to see details, as worded this seems to be a very extreme position.

  4. Give BIS the ability to create definitions based on descriptions of capability, that can be used to describe AI models with national security significance. Yes.

  5. Essentially whistleblower provisions on export control violations. Yes.

  6. Consider requiring tracking end-users of chips and equipment. How?

  7. Actually enforce our export controls against smuggling. Yes.

  8. Require companies to install on-chip location verification capabilities in order to receive an export license for chips restricted from export to any country with a high risk of diversion to the PRC. A great idea subject to implementation issues. Do we have the capability to do this in a reasonable way? We should definitely be working on being able to do it.

  9. “Ensure the secure and safe use of AI systems by directing a federal agency (e.g., NIST and AISI, CISA, NSA) to develop physical and cybersecurity standards and benchmarks for frontier AI developers to protect against model distillation, exfiltration, and other risks.” Violent agreement on exfiltration risks. On distillation, I must ask: How do you intend to do that?

  10. “Address national security risks and the PRC’s strategy to capture AI market share with low-cost, open-source models by placing a federal procurement prohibition on PRC-origin AI models, including a prohibition on the use of such models on government devices.” As stated this only applies to federal procurement. It seems fine for government devices to steer clear, not because of ‘market share’ concerns (what?) but because of potential security issues, and also because it doesn’t much matter, there is no actual reason to use DeepSeek’s products here anyway.

Their second category is interesting: “Prevent and prepare for strategic surprise related to advanced AI.”

I mean, yes, we should absolutely be doing that, but mostly concerns about that have nothing to do with DeepSeek or the PRC. I mean, I can’t argue with lines like ‘AI will affect many aspects of government functioning, including aspects relating to defense and national security.’

This narrow, myopic and adversarial focus, treating AI purely as a USA vs. PRC situation, misses most of the point, but if it gets our government to actually build state capacity and pay attention to developments in AI, then that’s very good. If they only monitor PRC progress, that’s better than not monitoring anyone’s progress, but we should be monitoring our own progress too, and everyone else’s.

The ‘AI strategic surprise’ to worry about most is, of course, a future highly capable AI (or the lab that created it) strategically surprising you.

Similarly, yes, please incorporate AI into your ‘educational planning’ and look for national security related AI challenges, including those that emerge from US-PRC AI competition. But notice that a lot of the danger is that the competition pushes capabilities or their deployment forward recklessly.

Otherwise, we end up responding to this by pushing forward ever harder and faster and more recklessly, without the ability to align or control the AIs we are creating, which are already increasingly acting misaligned (I’ll discuss this more when I deal with o3), exacerbating the biggest risks.

We are going about tariffs exactly backwards. This is threatening to cripple our AI efforts along with everything else. So here we are once again.

Mostly tariffs are terrible and one should almost never use them, but to the extent there is a point, it would be to shift production of high-value goods, and goods vital to national security, away from China and towards America and our allies.

That would mean putting a tariff on the high-value finished products you want to produce. And it means not putting a tariff on the raw materials used to make those products, or on products that have low value, aren’t important to national security and that you don’t want to reshore, like cheap clothes and toys.

And most importantly, it means stability. To invest you need to know what to expect.

Mark Gurman: BREAKING: President Donald Trump’s administration exempted smartphones, computers and other electronics from its so-called reciprocal tariffs in win for Apple (and shoppers). This lowers the China tariff from 125%.

Ryan Peterson: So we’re exempting all advanced electronics from Chinese tariffs and putting a 125 pct tariff on textiles and toys?

Armand Domalewski: right now, if I import the parts to make a computer, I pay a massive tariff, but if I import the fully assembled computer, I get no tariff. truly a genius plan to reshore manufacturing

Nassim Nicholas Taleb: Pathological incoherence of Neronulus Trump:

+U.S. made electronics pay duties on imported material/parts. But imports of electronics THAT INCLUDE SAME MATERIAL & PARTS are exempt; this is a CHINA SUBSIDY.

+Moving US industry from high value added into socks, lettuce…

Mike Bird: If you were running a secret strategy to undermine American manufacturing, exempting popular finished consumer goods from tariffs and keeping them in place for intermediate goods, capital goods and raw materials would be a good way to go about it.

Spencer Hakimian: So let’s just get this crystal clear.

If somebody is trying to manufacture a laptop in the U.S., the components that they import are going to be tariffed at 145%.

But if somebody simply makes the laptop entirely in China, they are fully tariff free?

And this is all to bring manufacturing back to the U.S.?

NY-JOKER, a.k.a. Joey Politano (and this wasn’t what made him the joker, that was a different thread): lol Commerce posted today that they’re looking into tariffs on the machinery used to make semiconductors

This will just make it harder to make semiconductors in the US. It is so unbelievably stupid that I cannot put it into words.

The exact example the administration used of something they (very incorrectly) insisted we could ‘make in America’ was the iPhone. We can’t do that ourselves at any sane price any time soon, but it is not so crazy to say perhaps we should not depend on China for 87% of our iPhones. In response to massive tariffs, Apple was planning to shift more production from China to India.

But then they got an exemption, so forget all that, at least for right now?

Aaron Blake: Lutnick 5 days ago said smartphone manufacturing was coming to the U.S. because of tariffs.

Now we learn smartphones are excluded from the tariffs.

Stephen Miller here clarifies in response that no (and Trump clarified as only he can): such products are (for now, who knows what tomorrow will bring?) still subject to the original IEEPA tariff on China of 20%. That is still a lot less than 145%. What exactly does Miller think is going to happen?

Or maybe not? The methodology for all this, we are told, is ‘on instinct.’

The Kobeissi Letter: BREAKING: Commerce Secretary Lutnick says tariffs on semiconductors and electronics will come in “a month or so.”

Once again, markets are very confused this morning.

It appears that semiconductor tariffs are still coming but not technically a part of reciprocal tariffs.

They will be in their own class, per Lutnick.

De Itaone: White House official: Trump will issue a Section 232 study on semiconductors soon.

White House official: Trump has stated that autos, steel, pharmaceuticals, chips, and other specific materials will be included in specific tariffs to ensure tariffs are applied fairly and effectively.

Ryan Peterson: Tariffs on semiconductors and electronics will be introduced in about a month, according to U.S. Commerce Secretary Lutnick.

Those products were exempted from tariffs just yesterday. The whole system seems designed to create paralysis.

Josh Wingrove: Trump, speaking to us on Air Force One tonight, repeated that the semiconductor tariffs are coming but opened the door to exclusions.

“We’ll be discussing it, but we’ll also talk to companies,” he said. “You have to show a certain flexibility. Nobody should be so rigid.”

Joey Politano: I AM NOW THE JOKER.

You can imagine how those discussions are going to go. How they will work.

So what the hell are you supposed to do now until and unless you get your sit down?

Sit and wait, presumably.

Between the uncertainty about future tariffs and exemptions, including on component parts, who would invest in American manufacturing or other production right now, in AI and tech or elsewhere? At best you are in a holding pattern.

Stargate is now considering making some of its data center investments in the UK.

Indeed, trade is vanishing stunningly fast.

That is only through April 8. I can’t imagine it’s gotten better since then. I have no idea what happens when we start actually running out of all the things.

We are setting ourselves up on a path where our own production will actively fall since no one is able to plan, and we will be unable to import what we need. This combination is setting us up for rather big trouble across the board.

Ryan Peterson: Met another US manufacturer who’s decided to produce their product overseas where they won’t have to pay duties on component imports.

Paul Graham: Someone in the pharma business told me companies are freezing hiring and new projects because of uncertainty about tariffs. I don’t think Trump grasps how damaging mere uncertainty is.

Even if he repealed all the tariffs tomorrow, businesses would still have to worry about what he might do next, and be correspondingly conservative about plans for future expansion.

Kelsey Piper: the only thing that will actually prompt a full recovery is Congress reasserting its power over taxation so the markets can actually be sure we won’t do this song and dance again in a few months.

The trading of Apple and its options around the exemption announcement was, shall we say, worthy of the attention of the SEC. It’s rather obvious what this looks like.

If you took the currently written rules at face value, what would happen to AI? Semianalysis projected on April 10 that the direct cost increase for GPU cloud operators was at that point likely less than 2%, cheap enough to shrug off, but that wafer fabrication equipment for US labs would rise 15% and optical module costs would increase 25%-40%.

Semianalysis (Dylan Patel et al): Nevertheless, we identified a significant loophole in the current tariff structure – explained in detail in the following section – that allows US companies to avoid tariffs on certain goods, including GPUs, imported from Mexico and Canada.

While GPUs are subject to a 32% tariff on all GPUs exported from Taiwan to the US, there is a loophole in the current tariff structure that allows US companies to import GPUs from Mexico and Canada at a 0% tariff.

That would mean the primary impact here would be on the cost and availability of financing, and the decline in anticipated demand. But by the time you read this, that report and Peter’s summary of it will be at least days old. You need to double check.

You also need to hold onto your talent. The world’s top talent, at least until recently, wanted to beat a path to our door. We are so foolish that we often said no. And now, that talent, including those we said yes to, are having second thoughts.

Shin Megami Boson: I run an AI startup and some of my best employees are planning to leave the country because, while they’re on greencards and have lived here for over a decade, they’re not willing to risk getting disappeared to an el salvadorean blacksite if they try to visit family abroad.

My current guess on the best place to go, if one had to go, would be Canada. I can verify that I indeed know someone (in AI) who really did end up moving there.

Meanwhile, this sums up how it’s going:

Ryan Peterson: Two of our American customers devastated by the tariffs gave up and sold themselves to their Chinese factories in the last week.

Thousands, and then millions, of American small businesses, including many iconic brands, will go bankrupt this year if the tariff policies on China don’t change.

These small businesses are largely unable to move their manufacturing out of China. They are last in line when they try to go to a new country as those other countries can’t even keep up with the demand from mega corporations.

The manufacturers in Vietnam and elsewhere can’t be bothered with small batch production jobs typical of a small business’s supply chain.

When the brands fail, they will be purchased out of bankruptcy by their Chinese factories who thus far have built everything except a customer facing brand, which is where most of the value capture happens already.

Consumer goods companies typically mark up the goods 3x or more to support their fixed costs (including millions of American employees).

Now the factories will get to vertically integrate and capture the one part of the chain they haven’t yet dominated.

And when they die, it may actually be the final victory for the Chinese manufacturer as they scoop up brands that took decades to build through the blood, sweat and tears of some of the most creative and entrepreneurial people in the world. American brand builders are second to none worldwide.

Dean Ball joins the White House office of Science and Technology Policy as a Senior Policy Advisor on AI and Emerging Technology. This is great news and also a great sign, congrats to him and also those who hired him. We have certainly had our disagreements, but he is a fantastic pick and very much earned it.

Director Michael Kratsios: Dean [Ball] is a true patriot and one of the sharpest minds in AI and tech policy.

I’m proud to have him join our @WHOSTP47 team.

The Stop Stealing Our Chips Act, introduced by Senator Rounds and Senator Warner, would help enable BIS, whose capabilities are being crippled by forced firings, to enforce our regulations on exporting chips, including whistleblower procedures and creating associated protections and rewards. I agree this is a no-brainer. BIS desperately needs our help.

Corin Katzke and Gideon Futerman make the latest attempt to explain why racing to artificial superintelligence would undermine America’s national security, since ‘why it would kill everyone’ is not considered a valid argument by so many people. They warn of great power conflict when the PRC reacts, the risk of loss of control and the risk of concentration of power.

They end by arguing that ASI projects are relatively easy to monitor reasonably well, and the consequences severe, thus cooperation to avoid developing ASI is feasible.

I checked out the other initial AI Frontiers posts as well. They read as reasonable explainers for people who know little about AI, if there is need for that.

Where we are regarding AI takeover and potential human disempowerment…

Paul Graham: Sam, if all these politicians are going to start using ChatGPT to generate their policies, you’d better focus on making it generate better policies. Or we could focus on electing better politicians. But I doubt we can improve the hardware as fast as you can improve the software.

Demis Hassabis talks to Reid Hoffman and Aria Finger, which sounds great in principle but the description screams failure to know how to extract any Alpha.

Yann LeCun says he’s not interested in LLMs anymore, which may have something to do with Meta’s utter failure to produce an interesting LLM with Llama 4?

Austin Lin: the feeling is mutual.

As usual, our media has a normal one and tells us what we need to know.

Brendan Steinhauser: My friend and colleague @MarkBeall appeared on @OANN to discuss AI and national security.

Check it out. #ai

Yes, they do indeed try to ask him about our vulnerability to the spying Chinese robot dogs. Mark Beall does an excellent job pivoting to powerful AI and the race to superintelligence, the risks of AI cyberattacks and how far behind the curve is our government’s situational awareness. Then Chanel asks better questions, including about the (very crucial) need for more rapid military adaptation cycles. When Beall says America and China are ‘neck and neck’ here he’s talking about military adaptation, where we are moving painfully slowly.

Why does unsupervised learning work? The answer given is compression: finding the shortest program that explains the data, including representing randomness. The explanation says that’s what pretraining does, and it learns very quickly. I notice I do not find this satisfying; it doesn’t satiate my confusion or curiosity here. Nor do I buy that saying ‘intelligence is efficient compression’ explains that large a percentage of what is going on.

I see a similar perspective here from Dean Ball, where he notes that more intelligent people can detect patterns before a dumber person can. That’s true, but I believe that other things that aren’t (to me) intelligence can enable this, that there are things that are intelligence but don’t do this, and also that it’s largely about threshold effects rather than speed.

Conjecture Co-Founder Gabe Alfour on The Human Podcast.

It seems there is a humorous AI Jesus podcast.

Peter Wildeford offers a 1-minute recap of the GPT-4.5 livestream, that one minute of attention is all you need. GPT-4.5 is 10x more compute than GPT-4, the process took two years, data availability is a pain point, the next step is another 10x increase to 1M GPU runs over 1-2 years.

Dylan Patel: OpenAI successfully wasted an hour of time for every person in pre-training by making them pay the utmost attention for insane possible alpha, but subsequently having none.

I hate the OpenAI livestreams. They’re almost never worthwhile, and I’ve stopped watching, but I have to worry I am missing something and I have to wait for the news to arrive later in text form. Please send text tokens for greater training efficiency.

Tom Davidson spends three hours out of the 80,000 Hours (podcast) talking about AI-enabled coups.

A new paper also looks at these various mechanisms of AI-enabled coups. This is still framed here as worry about a person or persons taking over rather than worry about the AI itself taking over. In the coup business it’s all the same except that the AI is a lot smarter and more capable.

  1. The first concern is AI that is singularly loyal. Note that this loyalty functions the same whether or not it is nominally to a particular person. Any mechanism of singular control will do.

  2. The second concern is ‘hard-to-detect secret loyalties,’ which in a broad sense are inevitable and ubiquitous under current techniques. The AI might not be secretly loyal ‘to a person,’ but its loyalties are not what you had in mind. One still does want to prevent this from being done more explicitly or deliberately, to focus loyalty on particular persons or groups. Note that it won’t be ‘clean’ to get an AI to refuse such modifications: what other modifications will it learn to refuse?

  3. The third concern is ‘exclusive access to coup-enabling capabilities,’ so essentially just a controllable group becoming relatively highly capable, thus granting it the ability to steer the future.

The core problem isn’t preventing a ‘coup,’ as in allowing a small group to steer the future. That alone seems doable. The problem is, you need to prevent this, but you also need humanity to still collectively be able to meaningfully steer a future that contains entities smarter than humans, at the same time, and protecting against various forms of gradual disempowerment and the vulnerable world hypothesis. That is very hard, the final boss version of the classic problems of governance we’ve been dealing with for a very long time, to which we’ve only found least horrendous answers.

Last week I mentioned that OpenAI was attempting to transition their non-profit arm into essentially ‘do ordinary non-profit things.’ Former OpenAI employee Jacob Hilton points out I was being too generous, and that the new mission would better be interpreted as ‘proliferate OpenAI’s products among nonprofits.’ Clever, that way even the money you do pay the nonprofit you largely get to steal back, too.

Michael Nielsen in a very strong essay argues that the ‘fundamental danger’ from ASI isn’t ‘rogue ASI’ but rather that ASI enables dangerous technologies, while also later in the essay dealing with other (third) angles of ASI danger. He endorses and focuses on the Vulnerable World Hypothesis. By aligning models we bring the misuse threats closer, so maybe reconsider alignment as a goal. This is very much a ‘why not both’ situation, or as Google puts it, why not all four: Misalignment, misuse, mistakes and multi-agent structural risks like gradual disempowerment. This isn’t a competition.

One place where Nielsen is very right is that alignment is insufficient, however I must remind everyone that it is necessary. We have a problem where many people equate (in both good faith and bad faith) existential risk from AI uniquely with a ‘rogue AI,’ then dismiss a ‘rogue AI’ as absurd and therefore think creating smarter than human, more capable minds is a safe thing for humans to do. That’s definitely a big issue, but that doesn’t mean misalignment isn’t a big deal. If you don’t align the ASIs, you lose, whether that looks like ‘going rogue’ or not.

Another important point is that deep understanding of the world is everywhere and always dual use, as is intelligence in general, and that most techniques that make models more ‘safe’ also can be repurposed to make models more useful, including in ‘unsafe’ ways, and one does not simply take a scalpel to the parts of understanding that you dislike.

He ends with a quick and very good discussion of the risk of loss of control, reiterating why many dumb arguments against it (like ‘we can turn it off’ or ‘we won’t give it goals or have it seek power’ or ‘we wouldn’t put it in charge’) are indeed dumb.

A thread with some of the most famous examples of people Speaking Directly Into the Microphone, as in advocating directly for human extinction.

A good reminder that when people make science fiction they mostly find the need to focus on the negative aspects of things, and how they make humans suffer.

Mark Krestschmann: Forget “Black Mirror”. We need WHITE MIRROR An optimistic sci-fi show about cool technology and how it relates to society. Who’s making this?

Rafael Ruiz: People are saying this would be boring and not worth watching, yet 90% of people are irrationally afraid of technology because of Black Mirror-esque sci-fi.

The “fiction to bad epistemics” pipeline remains undefeated.

We need to put up more positive visions for the future.

Andrew Rettek: Star Trek wasn’t boring.

If there’s one thing we have definitely established, it’s that most AI companies have essentially zero interest in any actions that don’t directly impact the bottom line.

Joshua Clymer: favorite line (from the 80k hours podcast with Buck):

“5 years ago I thought addressing misalignment was very difficult. Now the situation feels like man we know a list of 40 things, none of which feel that hard, but i’ve also updated drastically downward on how many things AI companies have the time / appetite to do”

My position is that addressing misalignment still looks extremely difficult, indeed many of the required elements feel very hard, and also navigating the narrow path to a worthy future will be extremely difficult even if alignment does get solved.

Alas, I fully endorse the second half of Buck’s statement. The AI companies are dramatically underinvesting in alignment, security and safety work, even relative to their own private economic self-interest via the resulting more useful models.

John Wentworth asks, what would it look like if everything AI was kind of fake? I agree that this is something to keep an eye on, but not to endorse.

A hotfix to the OpenAI model spec, great work everyone, and we have a changelog.

Jason Wolfe: We just released a small bugfix update to the OpenAI Model Spec. Key changes:

– Fixed a copy-editing error where “misalignment” wasn’t listed as a reason assistants might pursue incorrect goals.

– Clarified that “Do not lie” allows “pleasantries” rather than “white lies.”

Since they thanked me explicitly for pointing some of this out: You’re welcome. And thank you for fixing it.

Daniel Kokotajlo remains worried that the models can still lie to avoid revealing privileged information. It’s obviously fine to refuse to answer questions; once the models start lying for glomarization, it’s not clear where that ends.

Buck Shlegeris et al offer a new paper testing methods for AI control with a red teamer attempting to get a coding agent to download malware. The solution is called ‘Ctrl-Z: Controlling AI Agents via Resampling,’ as in once suspicion is aroused you can ‘go back in time’ and resample earlier actions (note that this implies you can isolate all the relevant preconditions for the action, which I expect to often break as an assumption).

We measure the tradeoff between attack prevention and usefulness; our best protocol combines resampling with analysis of previous steps, reducing the success rate of attacks from 58% to 7% at a 5% cost to the performance of a non-malicious agent.

I see what they did there, but the limitations seem too large for me to update much.
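For intuition, here is the control-loop shape as I understand it (a toy sketch, not the paper’s implementation; StubAgent and StubMonitor are stand-ins for the untrusted coding agent and the trusted monitor):

import random

class StubAgent:
    # Stand-in for the untrusted agent: proposes the next step, or None when done.
    def propose(self, task, history):
        if len(history) >= 3:
            return None
        return f"step {len(history)} for {task!r} (draw {random.random():.2f})"

class StubMonitor:
    # Stand-in for the trusted monitor: scores how suspicious a step looks.
    def suspicion(self, step, history):
        return random.random()

def run_with_resampling(agent, monitor, task, threshold=0.8, max_resamples=4):
    # When the monitor flags a proposed step, redraw ('resample') it instead of
    # executing it; abort if it stays suspicious. Note this assumes steps can be
    # re-drawn independently, the assumption I worry about above.
    history = []
    while (step := agent.propose(task, history)) is not None:
        for _ in range(max_resamples):
            if monitor.suspicion(step, history) < threshold:
                break
            step = agent.propose(task, history)  # go back in time and resample
        else:
            return history, "aborted: persistently suspicious"
        history.append(step)
    return history, "done"

print(run_with_resampling(StubAgent(), StubMonitor(), "refactor module"))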

An analysis of AI behavior in the game Among Us as a sandbox for agentic deception, which was ubiquitous when the agents were given only the game rules.

Here’s how various models did, clearly reasoning models had a big edge.

A high win-rate at the game can be achieved by being a better “crewmate” or by being a better “impostor”. We show that Claude/DeepSeek become much better at winning by being better impostors.

When comparing deception and detection capability, we found smaller and non-reasoning models comparable to frontier models at detection (proxied by being a crewmate), while frontier and reasoning models are much better at deception (proxied by being an impostor). This suggests the frontier pushes more for deceptive capability rather than detection.

They expect humans are still better than the AIs, but that’s with the AIs playing the game ‘straight up’ without training. I have very little doubt that an actual AlphaUs would be well above human level.

If deception ‘scales’ better than detection, that is very bad news. It means that if you use AIs to try and detect deception from other AIs, this will work less well over time even if the AIs remain evenly matched in overall capability.

Better news was that for now using probes detected deception rather easily, as seen in the chart below. Autoencoder performance was more mixed.
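For reference, the generic probe recipe is simple. A sketch of the standard approach (not the paper’s exact setup, and with random stand-in activations):

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Stand-in residual-stream activations for honest vs. deceptive turns,
# with the 'deceptive' ones shifted along some direction.
honest = rng.normal(0.0, 1.0, size=(500, 768))
deceptive = rng.normal(0.3, 1.0, size=(500, 768))

X = np.vstack([honest, deceptive])
y = np.array([0] * 500 + [1] * 500)

probe = LogisticRegression(max_iter=1000).fit(X, y)
print(f"accuracy: {probe.score(X, y):.2f}")  # high if a linear direction separates the classes

If deception corresponds to a consistent linear direction in activation space, even this trivial classifier finds it, which seems to be roughly what happened here.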

Lucius Bushnaq sees three (or four) assumptions often implicitly made by people doing interpretability, and notes that the third, that a model’s activations can be decomposed into the sum of the represented features, is rarely if ever made explicit.
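To spell that third assumption out in symbols (my paraphrase, not his notation):

a(x) \approx \sum_i f_i(x) \, v_i

where a(x) is the activation vector on input x, the v_i are fixed ‘feature’ directions, and for any given input the coefficients f_i(x) are sparse (mostly zero). Whether activations actually admit such an additive decomposition is exactly what usually goes unstated.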

An interview with Eli Lifland, mostly duplicative if you’ve read various other things.

This is a highly fair objection:

Shako: How I felt listening to Scott Alexander reason about how the populace will react to ASI on the Dwarkesh podcast

Andrew Critch remains worried, but he is now modestly less worried.

Andrew Critch: News from Google this week reduced my fear of an extinction lock-in (EL) event by end-of-year 2029, by a factor of ~0.85 (-15%). Below is my new subjective CDF, and why it’s lower:

p(EL by eoy 2025)=5%

p(EL by eoy 2027)=13%

p(EL by eoy 2029)=25%

p(EL by eoy 2031)=45%

p(EL by eoy 2033)=55%

p(EL by eoy 2035)=65%

p(EL by eoy 2037)=70%

p(EL by eoy 2039)=75%

p(EL by eoy 2049)=85%

That’s still an 85% chance of extinction lock-in within 25 years. Not great. But every little bit helps, as they say. What was the cause for this update?

Andrew Critch: The main news is Google’s release of an Agent to Agent (A2A) protocol.

Alvaro Cintas: Google introduces Agent2Agent (A2A) protocol for AI interoperability.

It enables AI agents from different vendors to communicate and collaborate seamlessly.

Google: Today, we’re launching a new, open protocol called Agent2Agent (A2A), with support and contributions from more than 50 technology partners like Atlassian, Box, Cohere, Intuit, Langchain, MongoDB, PayPal, Salesforce, SAP, ServiceNow, UKG and Workday; and leading service providers including Accenture, BCG, Capgemini, Cognizant, Deloitte, HCLTech, Infosys, KPMG, McKinsey, PwC, TCS, and Wipro. The A2A protocol will allow AI agents to communicate with each other, securely exchange information, and coordinate actions on top of various enterprise platforms or applications.

A2A facilitates communication between a “client” agent and a “remote” agent. A client agent is responsible for formulating and communicating tasks, while the remote agent is responsible for acting on those tasks in an attempt to provide the correct information or take the correct action.

Andrew Critch: First, I wasn’t expecting a broadly promoted A2A protocol until the end of this year, not because it would be hard to make, but because I thought business leaders in AI weren’t thinking enough about how much A2A interaction will dominate the economy soon.

Second, I was expecting a startup like Cursor to have to lead the charge on promoting A2A. Google doing this is heartening — and something I’ve been hoping for — because more than any other company, they “keep the internet running”, and they *should* lead on this.

I’d lower my risk estimates further, except that I don’t *yet* see how the US and China are going to sort out their relations around AI. But FWIW, I’ve been hoping for years that trade negotiations like tariffs would be America’s primary approach to that.

Anyway, seeing leadership from big business on agent-to-agent interaction protocols in Q2 2025 is yielding a significant (×0.85) shift downward in my worries.

Thanks Google!

This A2A feature certainly seems cool and useful, and yes it seems positive for Google to be the one providing the protocol. It will be great if your agent can securely call other agents and relay its subtasks, rather than each agent having to navigate all those subtasks on its own. You can call agents to do various things the way traditional programs can call functions. Great job Google, assuming this is good design. From what I could tell it looked like good design but I’m not going to invest the kind of time that would let me confidently judge that.
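To make the function-call analogy concrete, here is a minimal sketch of delegating a task to a remote agent. This is an illustrative shape only, not the actual A2A spec; the endpoint paths and field names are hypothetical:

```python
# Illustrative client-agent/remote-agent exchange (not the actual A2A spec;
# the endpoint paths and field names here are hypothetical).
import time
import requests

REMOTE_AGENT = "https://agent.example.com"  # hypothetical remote agent

def delegate(task_description: str) -> str:
    """Hand a task to a remote agent and poll until it finishes."""
    resp = requests.post(f"{REMOTE_AGENT}/tasks", json={"description": task_description})
    task_id = resp.json()["id"]
    while True:
        status = requests.get(f"{REMOTE_AGENT}/tasks/{task_id}").json()
        if status["state"] in ("completed", "failed"):
            return status.get("result", status["state"])
        time.sleep(1)  # the remote agent is still working

summary = delegate("Summarize the Q1 expense reports from the finance system")
```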

What I don’t see is why this substantially improves humanity’s chances to survive.

Which way?

The right mind for this job does not yet exist.

But it calls for my new favorite motivational poster.

Arthur Dent (if that is his real name, and maybe it is?): I asked the AI for the least inspiring inspirational poster and I weirdly like it

Here’s an alternative AI-generated motivational poster.

AI Safety Memes: ChatGPT, create a metaphor about AI then turn it into an image.

The work is mysterious and important.

Yes. They’re scary. The numbers are scary.

Whereas the cats are cute.

Michi: My workplace held an AI-generated image contest and I submitted an illustration I drew. Nobody noticed it wasn’t AI and I ended up winning.


AI #112: Release the Everything Read More »

looking-at-the-universe’s-dark-ages-from-the-far-side-of-the-moon

Looking at the Universe’s dark ages from the far side of the Moon


meet you in the dark side of the moon

Building an observatory on the Moon would be a huge challenge—but it would be worth it.

A composition of the moon with the cosmos radiating behind it

Credit: Aurich Lawson | Getty Images


There is a signal, born in the earliest days of the cosmos. It’s weak. It’s faint. It can barely register on even the most sensitive of instruments. But it contains a wealth of information about the formation of the first stars, the first galaxies, and the mysteries of the origins of the largest structures in the Universe.

Despite decades of searching for this signal, astronomers have yet to find it. The problem is that our Earth is too noisy, making it nearly impossible to capture this whisper. The solution is to go to the far side of the Moon, using its bulk to shield our sensitive instruments from the cacophony of our planet.

Building telescopes on the far side of the Moon would be the greatest astronomical challenge ever considered by humanity. And it would be worth it.

The science

We have been scanning and mapping the wider cosmos for a century now, ever since Edwin Hubble discovered that the Andromeda “nebula” is actually a galaxy sitting 2.5 million light-years away. Our powerful Earth-based observatories have successfully mapped the detailed locations of millions of galaxies, and upcoming observatories like the Vera C. Rubin Observatory and Nancy Grace Roman Space Telescope will map millions more.

And for all that effort, all that technological might and scientific progress, we have surveyed less than 1 percent of the volume of the observable cosmos.

The vast bulk of the Universe will remain forever unobservable to traditional telescopes. The reason is twofold. First, most galaxies will simply be too dim and too far away. Even the James Webb Space Telescope, which is explicitly designed to observe the first generation of galaxies, has such a limited field of view that it can only capture a handful of targets at a time.

Second, there was a time, within the first few hundred million years after the Big Bang, before stars and galaxies had even formed. Dubbed the “cosmic dark ages,” this time naturally makes for a challenging astronomical target because there weren’t exactly a lot of bright sources to generate light for us to look at.

But there was neutral hydrogen. Hydrogen is the most common element in the cosmos, making up most of the Universe’s ordinary matter. Today, almost all of that hydrogen is ionized, existing in a super-heated plasma state. But before the first stars and galaxies appeared, the cosmic reserves of hydrogen were cool and neutral.

Neutral hydrogen is made of a single proton and a single electron. Each of these particles has a quantum property known as spin (which kind of resembles the familiar, macroscopic property of spin, but it’s not quite the same—though that’s a different article). In its lowest-energy state, the proton and electron will have spins oriented in opposite directions. But sometimes, through pure random quantum chance, the electron will spontaneously flip around. Very quickly, the hydrogen notices and gets the electron to flip back to where it belongs. This process releases a small amount of energy in the form of a photon with a wavelength of 21 centimeters.

This quantum transition is exceedingly rare, but with enough neutral hydrogen, you can build a substantial signal. Indeed, observations of 21-cm radiation have been used extensively in astronomy, especially to build maps of cold gas reservoirs within the Milky Way.

So the cosmic dark ages aren’t entirely dark; those clouds of primordial neutral hydrogen are emitting tremendous amounts of 21-cm radiation. But that radiation was emitted in the distant past, well over 13 billion years ago. As it has traveled through the cosmic distances, all those billions of light-years on its way to our eager telescopes, it has experienced the redshift effects of our expanding Universe.

By the time that dark age 21-cm radiation reaches us, it has stretched by a factor of 10, turning the neutral hydrogen signal into radio waves with wavelengths of around 2 meters.
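The arithmetic behind that stretch is worth seeing once. A stretch factor of 10 corresponds to redshift z = 9; deeper into the dark ages, at higher redshifts, the signal lands at even lower frequencies (the redshift values below are illustrative):

```python
# Rest-frame and redshifted numbers for the 21-cm line.
C = 299_792_458        # speed of light, m/s
F_REST = 1_420.4e6     # hyperfine transition frequency, Hz

lam_rest = C / F_REST              # ≈ 0.211 m: the famous 21 cm
z = 9                              # a stretch factor of 10 means (1 + z) = 10
lam_obs = lam_rest * (1 + z)       # ≈ 2.1 m
f_obs = F_REST / (1 + z) / 1e6     # ≈ 142 MHz
f_deep = F_REST / (1 + 30) / 1e6   # deeper into the dark ages (z ≈ 30): ≈ 46 MHz
print(f"{lam_obs:.2f} m, {f_obs:.0f} MHz; z = 30 gives {f_deep:.0f} MHz")
```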

The astronomy

Humans have become rather fond of radio transmissions in the past century. Unfortunately, the peak of this primordial signal from the dark ages sits right below the FM dial of your radio, which pretty much makes it impossible to detect from Earth. Our emissions are simply too loud, too noisy, and too difficult to remove. Teams of astronomers have devised clever ways to reduce or eliminate interference, building arrays scattered across the most desolate deserts in the world, but they have not been able to confirm the detection of a signal.

So those astronomers have turned in desperation to the quietest desert they can think of: the far side of the Moon.

It wasn’t until 1959 that the Soviet Luna 3 probe gave us our first glimpse of the Moon’s far side, and it wasn’t until 2019 that the Chang’e 4 mission made the first soft landing. Compared to the near side, and especially low-Earth orbit, there is very little human activity there. We’ve had more active missions on the surface of Mars than on the lunar far side.

Chang’e-4 landing zone on the far side of the moon. Credit: Xiao Xiao and others (CC BY 4.0)

And that makes the far side of the Moon the ideal location for a dark-age-hunting radio telescope, free from human interference and noise.

Ideas abound to make this a possibility. The first serious attempt was DARE, the Dark Ages Radio Explorer. Rather than attempt the audacious goal of building an actual telescope on the surface, the NASA-funded DARE concept called for an observatory (and when it comes to radio astronomy, an “observatory” can be as simple as a single antenna) to orbit the Moon and take data when it was on the opposite side from the Earth.

For various bureaucratic reasons, NASA didn’t develop the DARE concept further. But creative astronomers have put forward even bolder proposals.

The FarView concept, for example, is a proposed radio telescope array that would dwarf anything on the Earth. It would be sensitive to frequencies between 5 and 40 MHz, allowing it to target the dark ages and the birth of the first stars. The proposed design contains 100,000 individual elements, with each element consisting of a single, simple dipole antenna, dispersed over a staggering 200 square kilometers. It would be infeasible to deliver that many antennae directly to the surface of the Moon. Instead, we’d have to build them, mining lunar regolith and turning it into the necessary components.

The design of this array is what’s called an interferometer. Instead of a single big dish, the individual antennae collect data on their own, and all their signals are correlated together later. The effective resolution of an interferometer is the same as that of a single dish as wide as the largest separation between its elements. The downside of an interferometer is that most of the incoming radiation just hits dirt (or in this case, lunar regolith), so the interferometer has to collect a lot of data to build up a decent signal.
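For a rough feel of the numbers: an interferometer resolves features of roughly one wavelength divided by its longest baseline (in radians). The footprint shape and maximum baseline below are assumptions for illustration, not FarView’s actual specification:

```python
# Back-of-envelope angular resolution for a FarView-like array:
# resolution ~ wavelength / longest baseline (in radians).
import math

C = 299_792_458                  # speed of light, m/s
wavelength = C / 40e6            # ≈ 7.5 m at the top of the 5-40 MHz band
baseline = math.sqrt(200e6)      # ≈ 14 km across a 200 km^2 footprint (assumed square)
theta = wavelength / baseline    # radians
print(f"resolution ≈ {math.degrees(theta) * 3600:.0f} arcseconds")
```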

Attempting these kinds of observations on the Earth requires constant maintenance and cleaning to remove radio interference, and that interference has essentially sunk all attempts to measure the dark ages. But a lunar-based interferometer will have all the time it needs, providing a much cleaner and easier-to-analyze stream of data.

If you’re not in the mood for building 100,000 antennae on the Moon’s surface, then another proposal seeks to use the Moon’s natural features—namely, its craters. If you squint hard enough, they kind of look like radio dishes already. The idea behind the project, named the Lunar Crater Radio Telescope, is to find a suitable crater and use it as the support structure for a gigantic, kilometer-wide telescope.

This idea isn’t without precedent. Both the beloved Arecibo and the newcomer FAST observatories used depressions in the natural landscapes of Puerto Rico and China, respectively, to take most of the engineering load off of building their giant dishes. The Lunar Crater Radio Telescope would be larger than both of those combined, and it would be tuned to hunt for dark ages radio signals that we can’t observe using Earth-based observatories because they simply bounce off the Earth’s ionosphere (even before we have to worry about any additional human interference). Essentially, the only way that humanity can access those wavelengths is by going beyond our ionosphere, and the far side of the Moon is the best place to park an observatory.

The engineering

The engineering challenges we need to overcome to achieve these scientific dreams are not small. So far, humanity has soft-landed only a single mission on the far side of the Moon, and both of these proposals require an immense upgrade to our capabilities. That’s exactly why both far-side concepts were funded by NIAC, NASA’s Innovative Advanced Concepts program, which gives grants to researchers who need time to flesh out high-risk, high-reward ideas.

With NIAC funds, the designers of the Lunar Crater Radio Telescope, led by Saptarshi Bandyopadhyay at the Jet Propulsion Laboratory, have already thought through the challenges they will need to overcome to make the mission a success. Their mission leans heavily on another JPL concept, the DuAxel, which consists of a rover that can split into two single-axle rovers connected by a tether.

To build the telescope, several DuAxels are sent to the crater. One rover of each pair “sits” to anchor itself at the crater wall, while the other crawls down the slope. At the center, the climbers meet a telescope lander that has deployed guide wires and the wire mesh frame of the telescope (again, it helps for assembly purposes that radio dishes are just strings of metal in various arrangements). The anchored rovers on the crater rim then hoist their companions back up, unfolding the mesh and lofting the receiver above the dish.

The FarView observatory is a much more capable instrument—if deployed, it would be the largest radio interferometer ever built—but it’s also much more challenging. Led by Ronald Polidan of Lunar Resources, Inc., it relies on in-situ manufacturing processes. Autonomous vehicles would dig up regolith, process and refine it, and spit out all the components that make an interferometer work: the 100,000 individual antennae, the kilometers of cabling to run among them, the solar arrays to power everything during lunar daylight, and batteries to store energy for round-the-lunar-clock observing.

If that sounds intense, it’s because it is, and it doesn’t stop there. An astronomical telescope is more than a data collection device. It also needs to crunch some numbers and get that precious information back to a human to actually study it. That means that any far-side observing platform, especially ones like these proposals that will ingest truly massive amounts of data, would need to make one of two choices.

Choice one is to perform most of the data correlation and processing on the lunar surface, sending back only highly refined products to Earth for further analysis. Achieving that would require landing, installing, and running what is essentially a supercomputer on the Moon, which comes with its own weight, robustness, and power requirements.

The other choice is to keep the installation as lightweight as possible and send the raw data back to Earthbound machines to handle the bulk of the processing and analysis tasks. This kind of data throughput is outright impossible with current technology but could be achieved with experimental laser-based communication strategies.
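A back-of-envelope estimate shows why. Every number below except the antenna count is an assumption for illustration, but even coarse sampling puts the raw output at tens of terabits per second, far beyond any demonstrated deep-space link:

```python
# Why shipping raw voltages home is hard: a back-of-envelope estimate.
# All values except the antenna count are assumptions, not mission specs.
n_antennas = 100_000       # FarView element count from the proposal
sample_rate = 80e6         # samples/s: Nyquist rate for a 40 MHz band, assumed
bits_per_sample = 4        # coarse quantization, assumed
raw_bps = n_antennas * sample_rate * bits_per_sample
print(f"raw data rate ≈ {raw_bps / 1e12:.0f} Tbit/s")  # ≈ 32 Tbit/s
```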

The future

Astronomical observatories on the far side of the Moon face a bit of a catch-22. To deploy and run a world-class facility, either embedded in a crater or strung out over the landscape, we need some serious lunar manufacturing capabilities. But those same capabilities come with all the annoying radio fuzz that already bedevils Earth-based radio astronomy.

Perhaps the best solution is to open up the Moon to commercial exploitation but maintain the far side as a sort of out-world nature preserve, owned by no company or nation, left to scientists to study and use as a platform for pristine observations of all kinds.

It will take humanity several generations, if not more, to develop the capabilities needed to finally build far-side observatories. But it will be worth it, as those facilities will open up the unseen Universe for our hungry eyes, allowing us to pierce the ancient fog of our Universe’s past, revealing the machinations of hydrogen in the dark ages, the birth of the first stars, and the emergence of the first galaxies. It will be a fountain of cosmological and astrophysical data, the richest possible source of information about the history of the Universe.

From Galileo grinding and polishing his first lenses through the innovations that led to the explosion of digital cameras, astronomy has a storied tradition of turning the technological triumphs needed to achieve its science goals into the foundations of everyday devices that make life on Earth much better. If we’re looking for reasons to industrialize and inhabit the Moon, the noble goal of pursuing a better understanding of the Universe makes for a fine motivation. And we’ll all be better off for it.


Looking at the Universe’s dark ages from the far side of the Moon Read More »

autism-rate-rises-slightly;-rfk-jr.-claims-he’ll-“have-answers-by-september”

Autism rate rises slightly; RFK Jr. claims he’ll “have answers by September”

Among the sites, there were large differences. Prevalence among 8-year-olds ranged from 9.7 per 1,000 children in Texas (Laredo) to 53.1 per 1,000 in California. These differences are likely due to “differences in availability of services for early detection and evaluation and diagnostic practices,” the CDC and network researchers wrote.

For instance, California—the site with the highest prevalence among 8-year-olds and also 4-year-olds—has a local initiative called the Get SET Early model. “As part of the initiative, hundreds of local pediatricians have been trained to screen and refer children for assessment as early as possible, which could result in higher identification of ASD, especially at early ages,” the authors write. “In addition, California has regional centers throughout the state that provide evaluations and service coordination for persons with disabilities and their families.”

On the other hand, the low ASD rates at the network’s two Texas sites could “suggest lack of access or barriers to accessing identification services,” the authors say. The two Texas sites included primarily Hispanic and lower-income communities.

The newly revealed higher rates in some of the network’s underserved communities could link ASD prevalence to social determinants of health, such as low income and housing and food insecurity, the authors say. Other factors, such as higher rates of preterm birth, which is linked to neurodevelopmental disabilities, as well as lead poisoning and traumatic brain injuries, may also contribute to disparities.

Anti-vaccine voices

The detailed data-heavy report stands in contrast to the position of health secretary Robert F. Kennedy Jr., a longtime anti-vaccine advocate who promotes the false and thoroughly debunked claim that autism is caused by vaccines. Last month, Kennedy hired the discredited anti-vaccine advocate David Geier to lead a federal study examining whether vaccines cause autism, despite numerous high-quality studies already finding no link between the two.

Geier, who has no medical or scientific background, has long worked with his father, Mark Geier, to promote the idea that vaccines cause autism. In 2011, Mark Geier was stripped of his medical license for allegedly mistreating children with autism, and David Geier was fined for practicing medicine without a license.

In a media statement Tuesday in response to the new report, Kennedy called autism an “epidemic” that is “running rampant.” He appeared to reference his planned study with Geier, saying: “We are assembling teams of world-class scientists to focus research on the origins of the epidemic, and we expect to begin to have answers by September.”

Autism rate rises slightly; RFK Jr. claims he’ll “have answers by September” Read More »