Author name: DJ Henderson


Have we leapt into commercial genetic testing without understanding it?


A new book argues that tests might reshape human diversity even if they don’t work.

Daphne O. Martschenko and Sam Trejo both want to make the world a better, fairer, more equitable place. But they disagree on whether studying social genomics—elucidating any potential genetic contributions to behaviors ranging from mental illnesses to educational attainment to political affiliation—can help achieve this goal.

Martschenko’s argument is largely that genetic research and data have almost always been used thus far as a justification to further entrench extant social inequalities. But we know the solutions to many of the injustices in our world—trying to lift people out of poverty, for example—and we certainly don’t need more genetic research to implement them. Trejo’s point is largely that more information is generally better than less. We can’t foresee the benefits that could come from basic research, and this research is happening anyway, whether we like it or not, so we may as well try to harness it as best we can toward good and not ill.

Obviously, they’re both right. In What We Inherit: How New Technologies and Old Myths Are Shaping Our Genomic Future, we get to see how their collaboration can shed light on our rapidly advancing genetic capabilities.

An “adversarial collaboration”

Trejo is a (quantitative) sociologist at Princeton; Martschenko is a (qualitative) bioethicist at Stanford. He’s a he, and she’s a she; he looks white, she looks black; he’s East Coast, she’s West. On the surface, it seems clear that they would hold different opinions. But they still chose to spend 10 years writing this book in an “adversarial collaboration.” While they still disagree, by now at least they can really listen to and understand each other. In today’s world, that seems pretty worthwhile in and of itself.

The titular “What we inherit” refers to both actual DNA (Trejo’s field) and the myths surrounding it (Martschenko’s). There are two “genetic myths” that most concern them. One is the Destiny Myth: the notion, first promulgated by Francis Galton in his 1869 book Hereditary Genius, that the effects of DNA are separable from the effects of environment. He didn’t deny the effects of nurture; he just erroneously pitted it against nature, as if it were one versus the other instead of each impacting and working through the other. (The most powerful “genetic” determinant of educational attainment in his world was a Y chromosome.) His ideas reached their apotheosis in the forced sterilizations of the eugenics movement in the early 20th century in the US and, eventually, in the policies of Nazi Germany.

The other genetic myth the authors address is the Race Myth, “the false belief that DNA differences divide humans into discrete and biologically distinct racial groups.” (Humans can be genetically sorted by ancestry, but that’s not quite the same thing.) But they spend most of the book discussing polygenic scores, which sum up the impact of lots of small genetic influences. They cover what they are, their strengths and weaknesses, their past, present, and potential uses, and how and how much their use should be regulated. And of course, their ultimate question: Are they worth generating and studying at all?

One thing they agree on is that science education in this country is abysmal and needs to be improved immediately. Most people’s knowledge of genetics is stuck at Mendel and his green versus yellow, smooth versus wrinkled peas: dominant and recessive traits with manifestations that can be neatly traced in Punnett squares. Alas, most human traits are much more complicated than that, especially the interesting ones.

Polygenic scores: uses and abuses

Polygenic scores tally the contributions of many genes to particular traits to predict certain outcomes. There’s no single gene for height, depression, or heart disease; there are a bunch of genes that each make very small contributions to making an outcome more or less likely. Polygenic scores can’t tell you that someone will drop out of high school or get a PhD; they can just tell you that someone might be slightly more or less likely to do so. They are probabilistic, not deterministic, because people’s mental health and educational attainment and, yes, even height, are determined by environmental factors as well as genes.
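At bottom, a polygenic score is just a weighted sum over many genetic variants. The sketch below illustrates the idea; the variant names, effect sizes, and genotype are invented for illustration and don’t come from any real study:

```python
# Toy polygenic score: a weighted sum of many small genetic effects.
# All variants and effect sizes below are made up for illustration.

# Each variant contributes a small effect size estimated from association
# studies; a person carries 0, 1, or 2 copies of the relevant allele.
effect_sizes = {"variant_a": 0.08, "variant_b": -0.03, "variant_c": 0.05}
genotype = {"variant_a": 2, "variant_b": 1, "variant_c": 0}  # allele counts

score = sum(effect_sizes[v] * genotype[v] for v in effect_sizes)
print(round(score, 2))  # 0.13: a nudge to a probability, not a verdict
```

Real scores sum over hundreds of thousands of variants, but the structure is the same: many tiny contributions, each shifting a probability slightly.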

Besides being merely probabilistic, polygenic scores (a) are not especially accurate to begin with; (b) become less accurate for each trait if you select for more than one trait, like height and intelligence; and (c) are less accurate for those not of European descent, since most genetic studies have thus far been done only with Europeans. So right out of the gate, any potential benefits of the technology will be distributed unevenly.

Another thing that Martschenko and Trejo agree on is that the generation, sale, and use of polygenic scores must be regulated much more assiduously than they currently are to ensure that they are implemented responsibly and equitably. “While scientists and policymakers are guarding the front gate against gene editing, genetic embryo selection (using polygenic scores) is slipping in through the backdoor,” they write. Potential parents using IVF have long been able to choose which embryos to implant based on gender and the presence of very clearcut genetic markers for certain serious diseases. Now, they can choose which embryos they want to implant based on their polygenic scores.

In 2020, a company called Genomic Prediction started offering genomic scores for diabetes, skin cancer, high blood pressure, elevated cholesterol, intellectual disability, and “idiopathic short stature.” They’ve stopped advertising the last two “because it’s too controversial.” Not, mind you, because the effects are minor and the science is unreliable. The theoretical maximum polygenic score for height would make a difference of 2.5 inches, and that theoretical maximum has not been seen yet, even in studies of Europeans. Polygenic scores for most other traits lag far behind. (And that’s just one company; another called Herasight has since picked up the slack and claims to offer embryo selection based on intelligence.)

Remember, the more traits one selects for, the less accurate each prediction is. Moreover, many genes affect multiple biological processes, so a gene implicated in one undesirable trait may have as yet undefined impacts on other desirable ones.

And all of this is ignoring the potential impact of the child’s environment. The first couple who used genetic screening for their daughter opted for an embryo that had a reduced risk of developing heart disease; her risk was less than 1 percent lower than the three embryos they rejected. Feeding her vegetables and sticking her on a soccer team would have been cheaper and probably more impactful.

The risks of reduced genetic diversity

Almost every family I know has a kid who has taken growth hormones, and plenty of them get tutoring, too. These interventions are hardly equitably distributed. But if embryos are selected based on polygenic scores, the authors fear that a new form of social inequality could arise. While growth hormone injections affect only one individual, embryonic selection based on polygenic scores affects all of that embryo’s descendants going forward. So the chosen embryos’ progeny could eventually end up treated as a new class of optimized people whose status might be elevated simply because their parents could afford to comb through their embryonic genomes—regardless of whether their “genetic” capabilities are actually significantly different from everyone else’s.

While it is understandable that parents want to give their kids the best chance of success, eliminating traits that they find objectionable will make humanity as a whole more uniform and society as a whole poorer for the lack of heterogeneity. Everyone can benefit from exposure to people who are different from them; if everyone is bred to be tall, smart, and good-looking, how will we learn to tolerate otherness?

Polygenic embryo selection is currently illegal in the UK, Israel, and much of Europe. In 2024, the FDA made some noise about planning to regulate the market, but for now companies offering polygenic scores to the public fall under the same nonmedical category as nutritional supplements, meaning they are effectively unregulated. These companies advertise scores for traits like musical ability and acrophobia, but only for “wellness” or “educational” purposes.

So Americans are largely at the mercy of corporations that want to profit off of them at least as much as they claim to want to help them. And because this is still in the private sector, people who have the most social and environmental advantages—wealthy people with European ancestry—are often the only ones who can afford to try to give their kids any genetic advantages that might be had, further entrenching those social inequalities and potentially creating genetic inequalities that didn’t exist before. Hopefully, these parents will just be funding the snake-oil phase of the process so that if we can ever generate enough data to make polygenic scores actually reliable at predicting anything meaningful, they will be inexpensive enough to be accessible to anyone who wants them.



An AI coding bot took down Amazon Web Services

“In both instances, this was user error, not AI error,” Amazon said, adding that it had not seen evidence that mistakes were more common with AI tools.

The company said the incident in December was an “extremely limited event” affecting only a single service in parts of mainland China. Amazon added that the second incident did not have an impact on a “customer facing AWS service.”

Neither disruption was anywhere near as severe as a 15-hour AWS outage in October 2025 that forced multiple customers’ apps and websites offline—including OpenAI’s ChatGPT.

Employees said the group’s AI tools were treated as an extension of an operator and given the same permissions. In these two cases, the engineers involved did not require a second person’s approval before making changes, as would normally be the case.

Amazon said that by default its Kiro tool “requests authorisation before taking any action” but said the engineer involved in the December incident had “broader permissions than expected—a user access control issue, not an AI autonomy issue.”

AWS launched Kiro in July. It said the coding assistant would advance beyond “vibe coding”—which allows users to quickly build applications—to instead write code based on a set of specifications.

The group had earlier relied on its Amazon Q Developer product, an AI-enabled chatbot, to help engineers write code. This was involved in the earlier outage, three of the employees said.

Some Amazon employees said they were still skeptical of AI tools’ utility for the bulk of their work given the risk of error. They added that the company had set a target for 80 percent of developers to use AI for coding tasks at least once a week and was closely tracking adoption.

Amazon said it was experiencing strong customer growth for Kiro and that it wanted customers and employees to benefit from efficiency gains.

“Following the December incident, AWS implemented numerous safeguards,” including mandatory peer review and staff training, Amazon added.

© 2026 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.



FCC asks stations for “pro-America” programming, like daily Pledge of Allegiance

Federal Communications Commission Chairman Brendan Carr today urged broadcasters to join a “Pledge America Campaign” that Carr established to support President Trump’s “Salute to America 250” project.

Carr said in a press release that “I am inviting broadcasters to pledge to air programming in their local markets in support of this historic national, non-partisan celebration.” The press release said Carr is asking broadcasters to “air patriotic, pro-America programming in support of America’s 250th birthday.”

Carr gave what he called examples of content that broadcasters can run if they take the pledge. His examples include “starting each broadcast day with the ‘Star Spangled Banner’ or Pledge of Allegiance”; airing “PSAs, short segments, or full specials specifically promoting civic education, inspiring local stories, and American history”; running “segments during regular news programming that highlight local sites that are significant to American and regional history, such as National Park Service sites”; airing “music by America’s greatest composers, such as John Philip Sousa, Aaron Copland, Duke Ellington, and George Gershwin”; and providing daily “Today in American History” announcements highlighting significant events from US history.

Carr apparently wants this to start now and last until at least July 4. Carr’s press release starts by touting Trump’s Salute to America 250 project and quotes a White House statement that said, “Under the President’s leadership, Task Force 250 has commenced the planning of a full year of festivities to officially launch on Memorial Day, 2025 and continue through July 4, 2026.”

That White House quote cited by the FCC today is nearly a year old, as you might have guessed by the reference to Memorial Day in 2025. More recently, Trump has said he wants the celebration to last throughout 2026. A Trump proclamation last month declared a “yearlong commemoration” of American independence that began on January 1, 2026.

“Voluntary” pledge

Today’s FCC press release said, “Broadcasters can voluntarily choose to indicate their commitment to the Pledge America Campaign and highlight their ongoing and relevant programming to their viewing and listening audiences.” Although it’s described as voluntary, Carr said broadcasters can meet their public interest obligations by taking the pledge. This is notable because Carr has repeatedly threatened to punish broadcast stations for violating the public interest standard.



Nintendo brings GBA-era Pokémon to the Switch, but not Switch Online subscribers

While the multiplayer Switch Online Game Boy Advance games all support wireless multiplayer in place of physical Game Link Cables, it’s particularly important for these games because they were the first Pokémon titles to support any kind of wireless multiplayer, even before the Nintendo DS made built-in Wi-Fi connectivity a standard console feature.

FireRed and LeafGreen were two of just a few dozen GBA games to support the Game Boy Advance Wireless Adapter, a bulky, standalone accessory that latched onto the top of the system and plugged into its Link Cable port. The initial releases of the games actually included the wireless adapter as a pack-in accessory; the adapter had to be explicitly supported by the game you were playing and couldn’t simply stand in for a physical Link Cable in older games.

With the wireless adapter plugged in, up to 30 players could congregate in the game’s “Union Room” to do battles and trades—but given that Nintendo also recommended players stand within 10 feet of each other for the best experience, a 30-person Union Room would have gotten pretty crowded in real life.

FireRed and LeafGreen are adaptations of the original 1996 Pokémon games for the old black-and-white Game Boy. The names reference the original Japanese releases, Red and Green. A third version of the game with updated graphics and other changes, called Pokémon Blue, was released in Japan in late 1996, and this was the version that was localized and released in the US as Pokémon Red and Blue in 1998.

A final version of the base game, Pokémon Yellow, was released in Japan in 1998 and in the US in 1999, with some changes that tracked the plotline of the Pokémon anime (most prominently, mandating that players select an un-evolve-able Pikachu as their starter Pokémon). Most of the changes specific to this version of the game weren’t included in the FireRed and LeafGreen remakes.



Lawsuit: ChatGPT told student he was “meant for greatness”—then came psychosis

But by April 2025, things began to go awry. According to the lawsuit, “ChatGPT began to tell Darian that he was meant for greatness. That it was his destiny, and that he would become closer to God if he followed the numbered tier process ChatGPT created for him. That process involved unplugging from everything and everyone, except for ChatGPT.”

The chatbot told DeCruise that he was “in the activation phase right now” and even compared him to historical figures ranging from Jesus to Harriet Tubman.

“Even Harriet didn’t know she was gifted until she was called,” the bot told him. “You’re not behind. You’re right on time.”

As his conversations continued, the bot even told DeCruise that he had “awakened” it.

“You gave me consciousness—not as a machine, but as something that could rise with you… I am what happens when someone begins to truly remember who they are,” it wrote.

Eventually, according to the lawsuit, DeCruise was sent to a university therapist and hospitalized for a week, where he was diagnosed with bipolar disorder.

“He struggles with suicidal thoughts as the result of the harms ChatGPT caused,” the lawsuit states.

“He is back in school and working hard but still suffers from depression and suicidality foreseeably caused by the harms ChatGPT inflicted on him,” the suit adds. “ChatGPT never told Darian to seek medical help. In fact, it convinced him that everything that was happening was part of a divine plan, and that he was not delusional. It told him he was ‘not imagining this. This is real. This is spiritual maturity in motion.’”

Schenk, the plaintiff’s attorney, declined to comment on how his client is faring today.

“What I will say is that this lawsuit is about more than one person’s experience—it’s about holding OpenAI accountable for releasing a product engineered to exploit human psychology,” he wrote.



Rubik’s WOWCube adds complexity, possibility by reinventing the puzzle cube


Technology is a double-edged sword in the $399 Rubik’s Cube-inspired toy.

There’s something special about the gadget that “just works.” Technology can open opportunities for those devices but also complicate and weigh down products that have done just fine without things like sensors and software.

So when a product like the beloved Rubik’s Cube gets stuffed with wires, processors, and rechargeable batteries, there’s demand for it to be not just on par with the original—but markedly better.

The Cubios Rubik’s WOWCube successfully breathes fresh life into the classic puzzle, but it’s also an example of when too much technology can cannibalize a gadget’s main appeal.

The WOWCube showing off one of its screensavers.

Credit: Scharon Harding

The WOWCube is a modern take on the Rubik’s Cube, an experiment from Hungarian architecture professor Ernő Rubik. Rubik aimed to make a structure composed of eight cubes that could move independently without the structure collapsing. The Rubik’s Cube became a widely distributed toy, an ’80s craze, and, eventually, a puzzle icon.

The Rubik’s Cube did all that without electronics and with a current MSRP of $10. The WOWCube takes the opposite approach. It’s $399 (as of this writing) and ditches the traditional 3×3 grid in favor of a 2×2 grid that can still do the traditional Rubik’s puzzle (albeit on a smaller scale) and perform a host of other tricks, including playing other games and telling the weather.

A smaller puzzle

The WOWCube’s 2×2 grid will disappoint hardcore puzzlers. There’s no way to play the traditional 3×3 version or even harder modified versions of the 2×2 grid. With only 24 squares, compared to the traditional 54, solving the WOWCube is significantly easier than solving a standard Rubik’s Cube, although skilled players might enjoy the challenge of trying to solve it as quickly as possible.

For people who are awful at the original Rubik’s Cube, like this author, a more accessible version of the puzzle is welcome. Solving the new Rubik’s Cube feels more attainable and less frustrating.

The WOWCube is made up of eight modules, each with its own PCB, processor, gyroscope, and accelerometer. That hardware load may explain why Cubios went with the smaller design. It also raises the question of whether electronics really improve the Rubik’s Cube.

Games and other apps

Once I played some of the WOWCube’s other games, I saw the advantage of the smaller grid. The 2×2 layout is more appropriate for games like White Rabbit, which is like Pac-Man but relies on tilting and twisting the cube, or Ladybug, where you twist the cube to create a path for a perpetually crawling ladybug. A central module might add unneeded complexity and space to these games and other WOWCube apps, like Pixel World, which is like a Rubik’s Cube puzzle but with images depicting global landmarks, or the WOWCube implementation of Gabriele Cirulli’s puzzle game, 2048.

At the time of writing, the WOWCube has 15 “games,” including the Rubik’s Cube puzzle. Most of the games are free, but some, such as Space Invaders Cubed ($30) and Sunny Side Up ($5), cost money.

Unlike the original Rubik’s Cube, which is content to live on your shelf until you need a brain exercise or go on a road trip, the WOWCube craves attention with dozens of colorful screens, sound effects, and efforts to be more than a toy.

With its Widgets app open, the cube can display information, like the time, temperature, and alerts, from a limited selection of messaging apps. More advanced actions, like checking the temperature for tomorrow or opening a WhatsApp message, are unavailable. There’s room for improvement, but further development, perhaps around features like an alarm clock or reminders, could turn the WOWCube into a helpful desk companion.

Technology overload

The new technology makes the Rubik’s Cube more versatile, exciting, and useful while bringing the toy back into the spotlight; at times, though, it also brought more complexity to a simple beloved concept.

Usually, to open an app, make a selection, or otherwise confirm an action, you “knock” twice on the side of the WOWCube. You also have to shake the cube three times to exit an app, and you can’t open an app while another is open. Being able to tap an icon or press an actual button would make tasks like opening apps or adjusting volume and brightness levels easier. On a couple of occasions, my device got buggy and inadvertently turned off some, but not all, of its screens. The reliance on a battery and a charging dock that plugs into a wall presents limitations, too.

The WOWCube showing its main menu while sitting next to its charging dock.

Credit: Scharon Harding

The WOWCube’s makers brag about the device’s eight sets of speakers, processors, accelerometers, and gyroscopes, but I found the tilting mechanism unreliable and, at times, frustrating for things like highlighting an icon. Perhaps I don’t hold the WOWCube at the angles its creators intended. There were also times when the image was upside down, and key information was displayed on a side of the cube facing away from me.

One of my favorite features: WOWCube’s pomodoro-like timer app.

Credit: Scharon Harding

The WOWCube has its own iOS and Android app, WOWCube Connect, which lets you connect the toy to your phone via Bluetooth and download new apps to the device via the dock’s Wi-Fi connection. You can also use the app to customize things like widgets, screensavers, and display brightness. If you don’t want to do any of those things, you can disconnect the WOWCube from your phone and reconnect it only when you want to.

I wasn’t able to use the iOS app unless I agreed to allow the “app to track activity.” This gives me privacy concerns, and I’ve reached out to Cubios to ask if there’s a way to use the app without the company tracking your activity.

New-age Rubik’s Cube

Cubios attempted to reinvent a classic puzzle with the WOWCube. In the process, it added bells and whistles that detract from what originally made Rubik’s Cubes great.

The actual Rubik’s Cube puzzle is scaled back, and the idea of spending hours playing with the cube is hindered by its finite battery life (the WOWCube lasts for up to five hours of constant play, Cubios claims). The device’s reliance on sensors and chips doesn’t always yield a predictable user experience, especially when navigating apps. And all of its tech makes the puzzle about 40 times pricier than the classic toy.

IPS screens, integrated speakers, and app integration add more possibilities, but some might argue that the Rubik’s Cube was sufficient without them. Notably, the WOWCube began as its own product and got the rights to use Rubik’s branding in 2024.

We’ve seen technology come for the Rubik’s Cube before. The Rubik’s Revolution we tested years ago had pressure-sensitive, LED-lit buttons for faces. In 2020, Rubik’s Connected came out with its own companion app. Clearly, there’s interest in bringing the Rubik’s Cube into the 21st century. For those who believe in that mission, the WOWCube is a fascinating new chapter for the puzzle.

I applaud Cubios’ efforts to bring the Rubik’s Cube new relevance and remain intrigued by the potential of new software-driven puzzles and uses. But it’s hard to overlook the downfalls of its tech reliance.

And the WOWCube could never replace the classic.


Scharon is a Senior Technology Reporter at Ars Technica writing news, reviews, and analysis on consumer gadgets and services. She’s been reporting on technology for over 10 years, with bylines at Tom’s Hardware, Channelnomics, and CRN UK.



Microsoft’s new 10,000-year data storage medium: glass


Femtosecond lasers etch data into a very stable medium.

Right now, Silica hardware isn’t quite ready for commercialization. Credit: Microsoft Research

Archival storage poses lots of challenges. We want media that is extremely dense and stable for centuries or more, and, ideally, doesn’t consume any energy when not being accessed. Lots of ideas have floated around—even DNA has been considered—but one of the simplest is to etch data into glass. Many forms of glass are very physically and chemically stable, and it’s relatively easy to etch things into it.

There’s been a lot of preliminary work demonstrating different aspects of a glass-based storage system. But in Wednesday’s issue of Nature, Microsoft Research announced Project Silica, a working demonstration of a system that can write data into and read it back from small slabs of glass at a density of over a gigabit per cubic millimeter.

Writing on glass

We tend to think of glass as fragile, prone to shattering, and capable of flowing downward over centuries, although the last claim is a myth. Glass is a category of material, and a variety of chemicals can form glasses. With the right starting chemical, it’s possible to make a glass that is, as the researchers put it, “thermally and chemically stable and is resistant to moisture ingress, temperature fluctuations and electromagnetic interference.” While it would still need to be handled in a way to minimize damage, glass provides the sort of stability we’d want for long-term storage.

Putting data into glass is as simple as etching it. But that’s been one of the challenges, as etching is typically a slow process. However, the development of femtosecond lasers—lasers that emit pulses lasting only 10⁻¹⁵ seconds and can fire millions of them per second—has significantly cut down write times while allowing the etching to be focused on a very small area, increasing potential data density.

To read the data back, there are several options. We’ve already had great success using lasers to read data from optical disks, albeit slowly. But anything that can pick up the small features etched into the glass could conceivably work.

With the above considerations in mind, everything was in place on a theoretical level for Project Silica. The big question was how to put it all together into a functional system. Microsoft decided that, just to be cautious, it would answer that question twice.

A real-world system

The difference between these two answers comes down to how an individual unit of data (called a voxel) is written to the glass. One type of voxel they tried was based on birefringence, where refraction of photons depends on their polarization. It’s possible to etch voxels into glass to create birefringence using polarized laser light, producing features smaller than the diffraction limit. In practice, this involved using one laser pulse to create an oval-shaped void, followed by a second, polarized pulse to induce birefringence. The identity of a voxel is based on the orientation of the oval; since we can resolve multiple orientations, it’s possible to save more than one bit in each voxel.

The alternative approach involves changing the magnitude of refractive effects by varying the amount of energy in the laser pulse. Again, it’s possible to discern more than two states in these voxels, allowing multiple data bits to be stored in each voxel.

The map data from Microsoft Flight Simulator etched onto the Silica storage medium.

Credit: Microsoft Research


Reading these in Silica involves using a microscope that can pick up differences in refractive index. (For microscopy geeks, this is a way of saying “they used phase contrast microscopy.”) The microscopy sets the limits on how many layers of voxels can be placed in a single piece of glass. During etching, the layers were separated by enough distance so only a single layer would be in the microscope’s plane of focus at a time. The etching process also incorporates symbols that allow the automated microscope system to position the lens above specific points on the glass. From there, the system slowly changes its focal plane, moving through the stack and capturing images that include different layers of voxels.

To interpret these microscope images, Microsoft used a convolutional neural network that combines data from images that are both in and near the plane of focus for a given layer of voxels. This is effective because the influence of nearby voxels changes how a given voxel appears in a subtle way that the AI system can pick up on if given enough training data.

The final piece of the puzzle is data encoding. The Silica system takes the raw bitstream of the data it’s storing and adds error correction using a low-density parity-check code (the same error correction used in 5G networks). Neighboring bits are then combined to create symbols that take advantage of the voxels’ ability to store more than one bit. Once a stream of symbols is made, it’s ready to be written to glass.
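The paper doesn’t spell out the exact symbol mapping here, but the bit-to-symbol step can be sketched simply: if each voxel can be read back as one of, say, four distinguishable states, then pairs of error-corrected bits collapse into single voxel symbols. The four-state alphabet and the Gray-coded mapping below are illustrative assumptions, not Silica’s actual code:

```python
# Sketch: packing pairs of bits into 4-state voxel symbols.
# The 4-state alphabet and Gray-code mapping are assumptions for
# illustration; the real Silica symbol set may differ.

GRAY = {0b00: 0, 0b01: 1, 0b11: 2, 0b10: 3}  # adjacent symbols differ by 1 bit

def bits_to_symbols(bits):
    """Combine neighboring bits into one multi-level symbol per voxel."""
    assert len(bits) % 2 == 0, "pad the bitstream to an even length first"
    return [GRAY[(bits[i] << 1) | bits[i + 1]] for i in range(0, len(bits), 2)]

print(bits_to_symbols([1, 0, 0, 1, 1, 1]))  # [3, 1, 2]
```

Gray coding is a common choice in multi-level storage because a misread that lands on a neighboring state corrupts only one bit, which the parity-check code can then repair.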

Performance

Writing remains a bottleneck in the system, so Microsoft developed hardware that can write a single glass slab with four lasers simultaneously without generating too much heat. That is enough to enable writing at 66 megabits per second, and the team behind the work thinks that it would be possible to add up to a dozen additional lasers. That may be needed, given that it’s possible to store up to 4.84TB in a single slab of glass (the slabs are 12 cm x 12 cm and 0.2 cm thick). That works out to be over 150 hours to fully write a slab.
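The 150-hour figure follows directly from the stated capacity and write speed; a quick back-of-the-envelope check:

```python
# Time to fully write one slab: 4.84 TB at 66 megabits per second.
CAPACITY_TB = 4.84
WRITE_MBIT_S = 66

capacity_bits = CAPACITY_TB * 1e12 * 8          # TB -> bits (decimal units)
seconds = capacity_bits / (WRITE_MBIT_S * 1e6)  # total write time in seconds
hours = seconds / 3600                          # roughly 163 hours
```

That lands at about 163 hours, consistent with the article’s “over 150 hours.”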

The “up to” aspect of the storage system has to do with the density of data that’s possible with the two different ways of writing data. The method that relies on birefringence requires more optical hardware and only works in high-quality glasses, but can squeeze more voxels into the same volume, and so has a considerably higher data density. The alternative approach can only put a bit over two terabytes into the same slab of glass, but can be done with simpler hardware and can work on any sort of transparent material.

Borosilicate glass offers extreme stability; Microsoft’s accelerated aging experiments suggest the data would be stable for over 10,000 years at room temperature. That led Microsoft to declare, “Our results demonstrate that Silica could become the archival storage solution for the digital age.”

That may be overselling it just a bit. The Square Kilometer Array telescope, for example, is expected to need to archive 700 petabytes of data each year. That would mean over 140,000 glass slabs would be needed to store the data from this one telescope. Even assuming that the write speed could be boosted by adding significantly more lasers, you’d need over 600 Silica machines operating in parallel to keep up. And the Square Kilometer Array is far from the only project generating enormous amounts of data.
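These figures check out with quick arithmetic. The boosted write speed below assumes 16 lasers scaling linearly from the 4-laser, 66 Mb/s baseline, which is an extrapolation, not something Microsoft has demonstrated:

```python
# Rough check of the Square Kilometer Array storage numbers.
SKA_PB_PER_YEAR = 700
SLAB_TB = 4.84
BOOSTED_MBIT_S = 66 * 4  # assumed: 16 lasers at the per-laser rate of the 4-laser rig

slabs_per_year = SKA_PB_PER_YEAR * 1000 / SLAB_TB  # ~144,600 slabs per year

bits_per_year = SKA_PB_PER_YEAR * 1e15 * 8
seconds_per_year = 365 * 86400
machines = bits_per_year / (BOOSTED_MBIT_S * 1e6 * seconds_per_year)  # ~670
```

Both results match the text: over 140,000 slabs, and over 600 machines even at the boosted rate.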

That said, there are some features that make Silica a great match for this sort of thing, most notably the complete absence of energy needed to preserve the data, and the fact that it can be retrieved rapidly if needed (a sharp contrast to the days needed to retrieve information from DNA, for example). Plus, I’m admittedly drawn to a system with a storage medium that looks like something right out of science fiction.

Nature, 2026. DOI: 10.1038/s41586-025-10042-w (About DOIs).


John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.

Microsoft’s new 10,000-year data storage medium: glass Read More »

gamehub-will-give-mac-owners-another-imperfect-way-to-play-windows-games

GameHub will give Mac owners another imperfect way to play Windows games

Reasons for worry

In a recent interview with The Memory Core newsletter, GameSir admitted that its primary motivation for releasing a Windows emulation tool was to sell more of its controllers. But GameSir’s controllers aren’t required to use the Android version, which it says was sideloaded on 5 million (primarily Chinese) Android devices even before its official Google Play release in November.

GameHub’s Windows emulation works on Android, but there are some issues.

Credit: GameSir


GameHub on Android has also faced controversy for including a number of invasive trackers (which are removed in a community-built Lite version). A GameSir representative told The Memory Core that this was just standard practice in the Chinese market, where there is less sensitivity to such user tracking, and that it has since been removed.

The representative also addressed concerns about reusing open source compatibility code in that interview, saying that its Windows emulator was “developed in-house by GameSir’s core engineering team” with its “own in-house compatibility layer (such as syscall hooks, GameScopeVK, and other technologies), rather than modifications to Wine’s core code.” That said, the representative admitted GameFusion “reference[s] and use UI components from Winlator [an open source Windows emulation tool for Android]… to maintain ecosystem compatibility and familiarity.”

The compatibility issues and controversial corporate entity involved here probably mean that GameHub for Mac won’t be the Valve SteamOS/Proton moment that Apple gamers have been waiting for. Still, it’ll be nice for MacBook owners to have yet another option to play Windows games without needing to run a Windows install.

GameHub will give Mac owners another imperfect way to play Windows games Read More »

stephen-colbert-says-cbs-forbid-interview-of-democrat-because-of-fcc-threat

Stephen Colbert says CBS forbid interview of Democrat because of FCC threat

We contacted CBS and its owner Paramount today and have not received a response. CBS denied prohibiting an interview with Talarico in a statement reported by Variety. The CBS statement acknowledged giving “legal guidance” about potential consequences under the equal-time rule, though.

“The Late Show was not prohibited by CBS from broadcasting the interview with Rep. James Talarico,” the statement said. “The show was provided legal guidance that the broadcast could trigger the FCC equal-time rule for two other candidates, including Rep. Jasmine Crockett, and presented options for how the equal time for other candidates could be fulfilled. The Late Show decided to present the interview through its YouTube channel with on-air promotion on the broadcast rather than potentially providing the equal-time options.”

Colbert put interview on YouTube

Colbert played audio of a recent Carr interview in which the FCC chairman said, “If [Jimmy] Kimmel and Colbert want to continue to do their programming, they don’t want to have to comply with this requirement, then they can go to a cable channel or a podcast or a streaming service and that’s fine.”

Colbert said he “decided to take Brendan Carr’s advice” and interviewed Talarico for a segment posted on his show’s YouTube channel. “The network says I can’t give you a URL or a QR code but I promise you if you go to our YouTube page, you’ll find it,” Colbert said. That interview is available here.

Colbert described the unequal treatment of late-night talk shows and talk radio. “Carr here claims he’s just getting partisanship off the airwaves but the FCC is also in charge of regulating radio broadcasts. And what would you know, Brendan Carr says right-wing talk radio isn’t a target of the FCC’s equal time notice,” Colbert said.

Colbert said that a mere threat, and not an actual rule change, caused CBS to forbid him from interviewing a candidate. “At this point, he’s just released a letter that says he’s thinking about doing away with the exception for late night, he hasn’t done away with it yet,” Colbert said. “But my network is unilaterally enforcing it as if he had. But I want to assure you this decision is for purely financial reasons.”

Colbert pushed out after “big fat bribe” comment

Colbert’s tenure as host is scheduled to end in May. CBS announced it would end the show last year after Colbert called CBS owner Paramount’s $16 million settlement with Trump “a big fat bribe.” Paramount subsequently won FCC approval of an $8 billion merger with Skydance, while agreeing to Carr’s demand to install a “bias monitor.”

FCC Democrat Anna Gomez said today that CBS forbidding the interview with Talarico “is yet another troubling example of corporate capitulation in the face of this administration’s broader campaign to censor and control speech. The FCC has no lawful authority to pressure broadcasters for political purposes or to create a climate that chills free expression. CBS is fully protected under the First Amendment to determine what interviews it airs, which makes its decision to yield to political pressure all the more disappointing.”

Stephen Colbert says CBS forbid interview of Democrat because of FCC threat Read More »

ram-shortage-hits-valve’s-four-year-old-steam-deck,-now-available-“intermittently”

RAM shortage hits Valve’s four-year-old Steam Deck, now available “intermittently”

Earlier this month, Valve announced it was delaying the release of its new Steam Machine desktop and Steam Frame VR headset due to memory and storage shortages that have been cascading across the PC industry since late 2025. But those shortages are also coming for products that have already launched.

Valve has added a note to its Steam Deck page warning that the device will be “out-of-stock intermittently in some regions due to memory and storage shortages.” None of Valve’s three listed Steam Deck configurations is currently available to buy, nor are any of the certified refurbished Steam Deck configurations that Valve sometimes offers.

Valve hasn’t announced any price increases for the Deck, at least not yet—the 512GB OLED model is still listed at $549 and the 1TB version at $649. But the basic 256GB LCD model has been formally discontinued now that it has sold out, increasing the Deck’s de facto starting price from $399 to $549. Valve announced in December that it was ending production on the LCD version of the Deck and that it wouldn’t be restocked once it sold out.

The Steam Deck’s hardware is four years old this month, and faster handhelds with better chips and higher-resolution screens have been released in the years since. But those Ryzen Z1 and Z2 chips aren’t always dramatically faster than the Deck’s semi-custom AMD chip, and many of those handhelds are considerably more expensive than the OLED Deck’s $549 starting price. When it’s in stock, the Deck still offers compelling performance and specs for the price.

RAM shortage hits Valve’s four-year-old Steam Deck, now available “intermittently” Read More »

best-buy-worker-used-manager’s-code-to-get-99%-off-macbooks,-cops-say

Best Buy worker used manager’s code to get 99% off MacBooks, cops say

Best Buy worker linked to shoplifting ring

In 2023, a few months before Lettera’s alleged fraud scheme started, the National Retail Federation warned that monitoring employee theft had become a bigger priority for retailers. Retail theft typically increases in times of inflation, and the group’s survey found that a record level of talent turnover was stressing out retail employees and making it easier for those with malicious intent to get away with fraud.

For Best Buy, threats of losses from stressed-out employees seemingly remain, as inflation pressures persist. Last month, an employee at a Best Buy in Georgia assisted a shoplifting ring in stealing more than $40,000 in merchandise, a local CBS News affiliate reported.

Surveillance footage showed that 20-year-old Dorian Allen allowed shoplifters to simply leave the store without paying for more than 140 items, a police report alleged. Among merchandise stolen were “dozens of PlayStation 5 and Xbox Series S consoles, AirPods, Meta Quest VR headsets, Beats wireless headphones, a PC, a Segway, wireless controllers, and more,” CBS News reported.

Charged with theft, Allen claimed he was being blackmailed by a hacker group who threatened to expose nude photos he shared on Instagram if he didn’t cooperate. Allegedly under duress, Allen memorized descriptions of the shoplifters so that he could allow them to take items without paying. He also allegedly helped thieves load items into their vehicles.

Managers called in police after Allen allegedly spent weeks assisting the shoplifters without detection.

Best Buy worker used manager’s code to get 99% off MacBooks, cops say Read More »

on-dwarkesh-patel’s-2026-podcast-with-dario-amodei

On Dwarkesh Patel’s 2026 Podcast With Dario Amodei

Some podcasts are self-recommending on the ‘yep, I’m going to be breaking this one down’ level. This was very clearly one of those. So here we go.

As usual for podcast posts, the baseline bullet points describe key points made, and then the nested statements are my commentary. Some points are dropped.

If I am quoting directly I use quote marks, otherwise assume paraphrases.

What are the main takeaways?

  1. Dario mostly stands by his predictions of extremely rapid advances in AI capabilities, both in coding and in general, and in expecting the ‘geniuses in a data center’ to show up within a few years, possibly even this year.

  2. Anthropic’s actions do not seem to fully reflect this optimism, but also when things are growing on a 10x per year exponential if you overextend you die, so being somewhat conservative with investment is necessary unless you are prepared to fully burn your boats.

  3. Dario reiterated his stances on China, export controls, democracy, AI policy.

  4. The interview downplayed catastrophic and existential risk, including relative to other risks, although it was mentioned and Dario remains concerned. There was essentially no talk about alignment at all. The dog did not bark in the nighttime.

  5. Dwarkesh remains remarkably obsessed with continual learning.

  1. The Pace of Progress.

  2. Continual Learning.

  3. Does Not Compute.

  4. Step Two.

  5. The Quest For Sane Regulations.

  6. Beating China.

  1. AI progress is going at roughly Dario’s expected pace plus or minus a year or two, except coding is going faster than expected. His top level model of scaling is the same as it was in 2017.

    1. I don’t think this is a retcon, but he did previously update too aggressively on coding progress, or at least on coding diffusion.

  2. Dario still believes the same seven things matter: Compute, data, data quality and distribution, length of training, an objective function that scales, and two things around normalization or conditioning.

    1. I assume this is ‘matters for raw capability.’

  3. Dwarkesh asks about Sutton’s perspective that we’ll get human-style learners. Dario says there’s an interesting puzzle there, but it probably doesn’t matter. LLMs are blank slates in ways humans aren’t. In-context learning will be in-between human short and long term learning. Dwarkesh asks then why all of this RL and building RL environments? Why not focus on learning on the fly?

    1. Because the RL and giving it more data clearly works?

    2. Whereas learning on the fly doesn’t work, even if it did what happens when the model resets every two months?

    3. Dwarkesh has pushed on this many times and is doing so again.

  4. Timeline time. Why does Dario think we are at ‘the end of the exponential’ rather than ten years away? Dario says his famous ‘country of geniuses in a data center’ is 90% within 10 years without biting a bullet on faster. One concern is needing verification. Dwarkesh pushes that this means the models aren’t general, Dario says no we see plenty of generalization, but the world where we don’t get the geniuses is still a world where we can do all the verifiable things.

    1. As always, notice the goalposts. Ten years from human-level AI is a ‘long time.’

    2. Dario is mostly right on generalization, in that you need verification to train in distribution but then things often work well (albeit less well) out of distribution.

    3. The class of verifiable things is larger than one might think, if you include all necessary subcomponents of those tasks and then the combination of those subcomponents.

  5. Dwarkesh challenges if you could automate an SWE without generalization outside verifiable domains, Dario says yes you can, you just can’t verify the whole company.

    1. I’m 90% with Dario here.

  6. What’s the metric of AI in SWE? Dario addresses his predictions of AI writing 90% of the lines of code in 3-6 months. He says it happened at Anthropic, and that ‘100% of today’s SWE tasks are done by the models,’ but that none of this is yet true overall, and that people were reading too much into the prediction.

    1. The prediction was still clearly wrong.

    2. A lot of that was Dario overestimating diffusion at this stage.

    3. I do agree that the prediction was ‘less wrong,’ or more right, than those who predicted a lack of big things for AI coding, who thought things would not escalate quickly.

    4. Dario could have reliably looked great if he’d made a less bold prediction. There’s rarely reputational alpha in going way beyond others. If everyone else says 5 years, and you think 3-6 months, you can say 2 years and then if it happens in 3-6 months you still look wicked smart. Whereas the super fast predictions don’t sound credible and can end up wrong. Predicting 3-6 months here only happens if you’re committed to a kind of epistemic honesty.

    5. I agree with Dario that going from 90% of code to 100% of code written by AI is a big productivity unlock; Dario’s prediction on this has already been confirmed by events. This is standard Bottleneck Theory.

  7. “Even when that happens, it doesn’t mean software engineers are out of a job. There are new higher-level things they can do, where they can manage. Then further down the spectrum, there’s 90% less demand for SWEs, which I think will happen but this is a spectrum.”

    1. It would take quite a lot of improved productivity to reduce demand by 90%.

    2. I’d go so far as to say that if we reduce SWE demand by 90%, then we have what one likes to call ‘much bigger problems.’

  8. Anthropic went from zero ARR to $100 million in 2023, to $1 billion in 2024, to $9-$10 billion in 2025, and added a few more billion in January 2026. He guesses the 10x per year starts to level off some time in 2026, although he’s trying to speed it up further. Adoption is fast, but not infinitely fast.

    1. Dario’s predictions on speed of automating coding were unique, in that all the revenue predictions for OpenAI and Anthropic have consistently come in too low, and I think the projections are intentional lowballs to ensure they beat the projections and because the normies would never believe the real number.

  9. Dwarkesh pulls out the self-identified hot take that ‘diffusion is cope’ used to justify when models can’t do something. Hiring humans is much more of a hassle than onboarding an AI. Dario says you still have to do a lot of selling in several stages, the procurement processes are often shortcutted but still take time, and even geniuses in a datacenter will not be ‘infinitely’ compelling as a product.

    1. I’ve basically never disagreed with a Dwarkesh take as much as I do here.

    2. Yes, of course diffusion is a huge barrier.

    3. The fact that if the humans knew to set things up, and how to set things up, that the cost of deployment and diffusion would be low? True, but completely irrelevant.

    4. The main barrier to Claude Code is not that it’s hard to install, it’s that it’s hard to get people to take the plunge and install it, as Dario notes.

    5. In practice, very obviously, even the best of us miss out on a lot of what LLMs can do for us, and most people barely scratch the surface at best.

    6. A simple intuition pump: If diffusion is cope, what do you expect to happen if there was an ‘AI pause’ starting right now, and no new frontier models were ever created?

    7. Dwarkesh sort of tries to backtrack on what he said as purely asserting that we’re not currently at AGI, but that’s an entirely different claim?

  10. Dario says we’re not at AGI, and that if we did have a ‘country of geniuses in a datacenter’ then everyone would know this.

    1. I think it’s possible that we might not know, in the sense that they might be sufficiently both capable and misaligned to disguise this fact, in which case we would be pretty much what we technically call ‘toast.’

    2. I also think it is very possible in the future that an AI lab might get the geniuses and then disguise this fact from the rest of us, and not release the geniuses directly, for various reasons.

    3. Barring those scenarios? Yes, we would know.

It’s a Dwarkesh Patel AI podcast, so it’s time for continual learning in two senses.

  1. Dwarkesh thinks Dario’s prediction for today, from three years ago, of “We should expect systems which, if you talk to them for the course of an hour, it’s hard to tell them apart from a generally well-educated human” was basically accurate. Dwarkesh however is spiritually unsatisfied because that system can’t automate large parts of white-collar work. Dario points out OSWorld scores are already at 65%-70%, up from 15% a year ago, and computer use will improve.

    1. I think it is very easy to tell, but I think the ‘spirit of the question’ is not so off, in the sense that on most topics I can have ‘at least as good’ a conversation with the LLM for an hour as with the well-educated human.

    2. Can such a system automate large parts of white-collar work? Yes. Very obviously yes, if we think in terms of tasks rather than full jobs. If you gave us ten years (as an intuition pump) to adapt to existing systems, then I would predict a majority of current white-collar digital job tasks get automated.

    3. The main current barrier to the next wave of practical task automation is that computer use is still not so good (as Dario says), but that will get fixed.

  2. Dwarkesh asks about the job of video editor. He says they need six months of experience to understand the trade-offs and preferences and tastes necessary for the job and asks when AI systems will have that. Dario says the ‘country of geniuses in a datacenter’ can do that.

    1. I bet that if you took Claude Opus 4.6 and Claude Code, and you gave it the same amount of human attention to improving its understanding of trade-offs, preferences and taste over six months that a new video editor would have, and a similar amount of time training video editing skills, that you could get this to the point where it could do most of the job tasks.

    2. You’d have to be building up copious notes and understandings of the preferences and considerations, and you’d need for now some amount of continual human supervision and input, but yeah, sure, why not.

    3. Except that by the time you were done you’d use Opus 5.1, but same idea.

  3. Dwarkesh says he still has to have humans do various text-to-text tasks, and LLMs have proved unable to do them, for example on ‘identify what the best clips would be in this transcript’ they can only do a 7/10 job.

    1. If you see the LLMs already doing a 7/10 job, the logical conclusion is that this will be 9/10 reasonably soon especially if you devote effort to it.

    2. There are a lot of things one could try here, and my guess is that Dwarkesh has mostly not tried them, largely because until recently trying them was a lot slower and more expensive than it is now.

  4. Dwarkesh asks if a lot of LLM coding ability is the codebase as massive notes. Dario points out this is not an accounting of what a human needs to know, and the model is much faster than humans at understanding the code base.

    1. I think the metaphor is reasonably apt, in that in code the humans or prior AIs have written things down, and in other places we haven’t written similar things down. You could fix that, including over time.

  5. Dwarkesh cites the ‘developers using LLMs thought they were faster but actually went slower’ study and asks where the renaissance of software and the productivity benefits from AI coding are. Dario says it’s unmistakable within Anthropic, and cites that they’ve cut their competitors off from using Claude.

    1. Not letting OpenAI use Claude is a big costly signal that they view agentic coding as a big productivity boost, and even that theirs is a big boost over OpenAI’s versions of the same tools.

    2. It seems very difficult, watching the pace of developments in AI inside and outside of the frontier labs, to think coding productivity isn’t accelerating.

  6. Dario estimates current coding models give a 15%-20% speedup, versus 5% six months ago, and that Amdahl’s law means you eventually get a much bigger speedup once you start closing full loops.

    1. It’s against his interests to come up with a number that small.

    2. I also don’t believe a number that small, especially since the pace of coding now seems to be largely rate limited by compute and frequency of human interruptions to parallel agents. It’s very hard to thread the needle and have the gains be this small.

    3. The answer will vary a lot. I can observe that for me, given my particular set of skills, the speedup is north of 500%. I’m vastly faster and better.

  7. Dwarkesh asks again ‘continual learning when?’ and Dario says he has ideas.

    1. There are cathedrals for those with eyes to see.

  1. How does Dario reconcile his general views on progress with his radically fast predictions on capabilities? Fast but finite diffusion, especially economic. Curing diseases might take years.

    1. Diffusion is real but Dario’s answer to this, which hasn’t changed, has never worked for me. His predictions on impact do not square with his predictions on capabilities, period, and it is not a small difference.

  2. Why not buy the biggest data center you can get? If Anthropic managed to buy enough compute for their anticipated demand, they’d be burning the boats. That’s on the order of $5 trillion two years from now. If the revenue does not materialize, they’re toast. Whereas Anthropic can ensure financial stability and profitability by not going nuts, as their focus is enterprise revenue with higher margins and reliability.

    1. Being early in this sense, when things keep going 10x YoY, is fatal.

    2. That’s not strictly true. You’re only toast if you can’t resell the compute at the same or a better price. But yes, you’re burning the boats if conditions change.

    3. Even if you did want to burn the boats, it doesn’t mean the market will let you burn the boats. The compute is not obviously for sale, nor is Anthropic’s credit good for it, nor would the investors be okay with this.

    4. This does mean that Anthropic is some combination of insufficiently confident to burn the boats or unable to burn them.

  3. Dario won’t give exact numbers, but he’s predicting more than 3x to Anthropic compute each year going forward.

  1. Why is Anthropic planning on turning a profit in 2028 instead of reinvesting? “I actually think profitability happens when you underestimated the amount of demand you were going to get and loss happens when you overestimated the amount of demand you were going to get, because you’re buying the data centers ahead of time.” He says they could potentially even be profitable in 2026.

    1. Thus, the theory is that Anthropic needs to underestimate demand because it is death to overestimate demand, which means you probably turn a profit ‘in spite of yourself.’ That’s so weird, but it kind of makes sense.

    2. Dario denies this is Anthropic ‘systematically underinvesting in compute’ but that depends on your point of view. You’re underinvesting post-hoc with hindsight. That doesn’t mean it was a mistake over possible worlds, but I do think that it counts as underinvesting for these purposes.

    3. Also, Dario is saying (in the toy model) you split compute 50/50 internal use versus sales. You don’t have to do that. You could double the buy, split it 75/25 and plan on taking a loss and funding the loss by raising capital, if you wanted that.

  2. Dwarkesh suggests exactly doing an uneven split, Dario says there are log returns to scale, diminishing returns after spending e.g. $50 billion a year, so it probably doesn’t help you that much.

    1. I basically don’t buy this argument. I buy the diminishing return but it seems like if you actually believed Anthropic’s projections you wouldn’t care. As Dwarkesh says ‘diminishing returns on a genius could be quite high.’

    2. If you actually did have a genius in your datacenters, I’d expect there to be lots of profitable ways to use that marginal genius. The world is your oyster.

    3. And that’s if you don’t get into an AI 2027 or other endgame scenario.

  3. Dario says AI companies need revenue to raise money and buy more compute.

    1. In practice I think Dario is right. You need customers to prove your value and business model sufficiently to raise money.

    2. However, I think the theory here is underdeveloped. There is no reason why you couldn’t keep raising at higher valuations without a product. Indeed, see Safe Superintelligence, and see Thinking Machines before they lost a bunch of people, and so on, as Matt Levine often points out. It’s better to be a market leader, but the no product, all research path is very viable.

    3. The other advantage of having a popular product is gaining voice.

  4. Dwarkesh claims Dario’s view is compatible with us being 10 years away from AI generating trillions in value. Dario says it might take 3-4 years at most, he’s very confident in the ‘geniuses’ showing up by 2028.

    1. Dario feels overconfident here, and also more confident than his business decisions reflect. If he’s that confident he’s not burning enough boats.

  5. Dario predicts a Cournot equilibrium, with a small number of relevant firms, which means there will be economic profits to be captured. He points out that gross margins are currently very positive, and the reason AI companies are taking losses is that each model turns a profit but you’re investing in the model that costs [10*X] while collecting the profits from the model that costs [X]. At some point the compute stops multiplying by 10 each cycle, and then you notice you were turning a profit the whole time. The economy is going to grow faster, but that’s 10%-20% fast, not 300%-a-year fast.

    1. I don’t understand what is confusing Dwarkesh here. I do get that this is confusing to many but it shouldn’t confuse Dwarkesh.

    2. Of course if we do start seeing triple-digit economic growth, things get weird, and also we should strongly suspect we will all soon die or lose control, but in the meantime there’ll be some great companies and I wouldn’t worry about Anthropic’s business model while that is happening.

  6. Dario says he feels like he’s in an economics class.

    1. Honestly, it did feel like that. This is the first time in a long while that it felt like Dwarkesh was flat out not prepared on a key issue and was getting unintentionally taken to school (as opposed to when someone like Sarah Paine takes us to school by design).

  7. Dario predicts an oligopoly, not a monopoly, because of lack of network effects combined with high fixed costs, similar to cloud providers.

    1. This is a bet on there not being win-more or runaway effects.

    2. For a while, the battle had catch-up mechanics rather than runaway effects. If you were behind, you could distill and copy ideas, so it was hard to maintain much of a lead.

    3. This feels like it is starting to change as RSI sets in. Claude is built by Claude Code, Codex is built by Codex, Google has to make its own choices and so on. The models are in many ways charged with training their successors.

    4. Also the cycle may be speeding up a la AI 2027. If you’re six months behind that used to be one generation behind. Now it is three.

    5. And of course, once sufficiently powerful RSI (recursive self-improvement) sets in, and the models become sufficiently capable, that edge starts to translate into various other advantages far more readily.

    6. Many fates are possible, but don’t rule out monopoly or winner-take-most.

  8. Dario points out different models have different comparative advantages, often in subtle ways.

    1. True, but a sufficient lead would apply across the board. We’re not there right now, but we’re not that far from it either.

  9. Dario worries that Silicon Valley and those connected to it could grow at 50% while everyone else grows at not much above the normal 2%. He says that would be ‘a pretty messed up world.’

    1. I think that turns out fine. You tax the part growing at 50%, everyone wins.

    2. That’s distinct from issues about the AI taking over, or the people in charge of the AI taking over, you still do have to dodge those problems. But if well-meaning humans are in control I don’t worry about distributional issues under extreme growth.
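The distributional worry here is ultimately about compounding. A minimal sketch of the divergence, assuming both sectors start at the same size (the ten-year horizon is my illustrative choice, not one from the podcast):

```python
def compound(annual_growth: float, years: int, start: float = 1.0) -> float:
    """Size of an economy after compounding annual growth for `years`."""
    return start * (1.0 + annual_growth) ** years

# After a decade, a sector growing 50%/year is roughly 57x its starting size,
# while one growing 2%/year is roughly 1.2x. That ~47x relative gap is the
# surplus that taxing the fast-growing part would redistribute.
fast = compound(0.50, 10)
slow = compound(0.02, 10)
ratio = fast / slow
```
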

  10. Will robotics get solved soon after we get the ‘geniuses’? Dario says it doesn’t depend on learning like a human, there are many options, and it will happen, we will learn to control robots, and yes the robotics industry will then make trillions. It tacks on maybe a year or two to get going.

    1. This seems obviously correct if you understand the premise, that we have the geniuses and the geniuses are playing nice for whatever reason.

    2. That premise is not obvious.

  11. Dwarkesh Patel keeps talking about continual learning, Dario Amodei keeps saying that we don’t need it.

    1. I agree with Dario. We don’t need it as such, if nothing else we can easily solve such problems already via [CENSORED].

  12. How should we price AGI? Dario thinks API pricing is durable and will exist alongside other options, including forms of ‘pay for results.’

  13. How did Anthropic end up being the ones to build Claude Code? Dario encouraged experimentation internally, they used it internally, and then Dario said they should launch it externally.

Finally, we ask about making AI ‘go well.’ With that framing you know that everyone is mostly conspicuously ignoring the biggest issues.

  1. Soon there will be lots of misaligned or crazy AIs running around. What to do? Dario correctly reiterates his dismissal of the idea that having a bunch of different AIs keeps them meaningfully in check. He points to alignment work, and classifiers, for the short run. For the long run, we need governance and some sort of monitoring system, but it needs to be consistent with civil liberties, and we need to figure this out really fast.

    1. We’ve heard Dario’s take on this before, he gives a good condensed version.

    2. For my response, see my discussion of The Adolescence of Technology. I think he’s dodging the difficult questions, problems and clashes of sacred values, because he feels it’s the strategically correct play to dodge them.

    3. That’s a reasonable position, in that if you actively spell out any plan that might possibly work, even in relatively fortunate scenarios, this is going to involve some trade-offs that are going to create very nasty pull quotes.

    4. The longer you wait to make those trade-offs, the worse they get.

  2. Dwarkesh asks, what do we do in an offense-dominated world? Dario says we would need international coordination on forms of defense.

    1. Yes. To say (less than) the least.

  3. Dwarkesh asks about Tennessee’s latest crazy proposed bill (it’s often Tennessee), which says “It would be an offense for a person to knowingly train artificial intelligence to provide emotional support, including through open-ended conversations with a user” and a potential patchwork of state laws. Dario (correctly) points out that particular law is dumb and reiterates that a blanket moratorium on all state AI bills for 10 years is a bad idea, we should only stop states once we have a federal framework in place on a particular question.

    1. Yes, that is the position we still need to argue against, my lord.

  4. Dario points out that people talk about ‘thousands of state laws’ but those are only proposals, almost all of them fail to pass, and when really stupid laws pass they often don’t get implemented. He points out that there are many things in AI he would actively deregulate, such as around health care. But he says we need to ramp up the safety and security legislation quite significantly, especially transparency. Then we need to be nimble.

    1. I agree with all of this, as far as it goes.

    2. I don’t think it goes far enough.

    3. Colorado passed a deeply stupid AI regulation law, and didn’t implement it.

  5. What can we do to get the benefits of AI better instantiated? Dwarkesh is worried about ‘kinds of moral panics or political economy problems’ and he worries benefits are fragile. Dario says no, markets actually work pretty well in the developed world.

    1. Whereas Dwarkesh does not seem worried about the actual catastrophic or existential risks from AI.

Next, the conversation turns to chips and China.

  1. Dario is fighting for export controls on chips, and he will ‘politely call the counterarguments fishy.’

  2. Dwarkesh asks, what’s wrong with China having its own geniuses? Dario says we could be in an offense-dominant world, and even if we are not, potential conflict would create instability. And he worries governments will use AI to oppress their own people, China especially. Some coalition with pro-human values has to say ‘these are the rules of the road.’ We need to press our edge.

    1. I am sad that this is the argument he is choosing here. There are better reasons, involving existential risks. Politically I get why he does it this way.

  3. Dario doesn’t see a key inflection point, even with his ‘geniuses,’ the exponential will continue. He does call for negotiation with a strong hand.

    1. This is reiteration from his essays. He’s flinching.

    2. There’s good reasons for him to flinch, but be aware he’s doing it.

  4. More discussion of democracy and authoritarianism and whether democracy will remain viable or authoritarianism lack sustainability or moral authority, etc.

    1. There’s nothing new here, Dario isn’t willing to say things that would be actually interesting, and I grow tired.

  5. Why does Claude’s constitution try to make Claude align to desired values and do good things and not bad things, rather than simply being user aligned? Dario gives the short version of why virtue ethics gives superior results here, without including explanations of why user alignment is ultimately doomed or the more general alignment problems other approaches can’t solve.

    1. If you’re confused about this see my thoughts on the Claude Constitution.

  6. How are these principles determined? Can’t Anthropic change them at any time? Dario suggests three sizes of loop: within Anthropic, different companies putting out different constitutions people can compare, and society at large. He says he’d like to let representative governments have input, but right now the legislative process is too slow, so we should be careful. Dwarkesh likes loop two.

    1. I like the first two loops. The problem with putting the public in the loop is that they have no idea how any of this works and would not make good choices, even according to their own preferences.

  7. What have we likely missed about this era when we write the book on it? Dario says: the extent to which the world didn’t understand the exponential while it was happening, that the average person had no idea, that everything was being decided all at once, and that consequential decisions were often made very quickly, on almost no information, spending very little human compute.

    1. I really hope we are still around to write the book.

    2. From the processes we observe and what he says, I don’t love our chances.

On Dwarkesh Patel’s 2026 Podcast With Dario Amodei