Physics


Microsoft lays out its path to useful quantum computing


Its platform needs error correction that works with different hardware.

Some of the optical hardware needed to make Atom Computing’s machines work. Credit: Atom Computing

On Thursday, Microsoft’s Azure Quantum group announced that it has settled on a plan for getting error correction on quantum computers. While the company pursues its own hardware efforts, the Azure team is a platform provider that currently gives access to several distinct types of hardware qubits. So it has chosen a scheme that is suitable for several different quantum computing technologies (notably excluding its own). The company estimates that the system it has settled on can take hardware qubits with an error rate of about 1 in 1,000 and use them to build logical qubits where errors are instead 1 in 1 million.

While it’s describing the scheme in terms of mathematical proofs and simulations, it hasn’t shown that it works using actual hardware yet. But one of its partners, Atom Computing, is accompanying the announcement with a description of how its machine is capable of performing all the operations that will be needed.

Arbitrary connections

There are similarities and differences between what the company is talking about today and IBM’s recent update of its roadmap, which described another path to error-resistant quantum computing. In IBM’s case, it makes both the software stack that will perform the error correction and the hardware needed to implement it. It uses chip-based hardware, with the connections among qubits mediated by wiring that’s laid out when the chip is fabricated. Since error correction schemes require a very specific layout of connections among qubits, once IBM decides on a quantum error correction scheme, it can design chips with the wiring needed to implement that scheme.

Microsoft’s Azure, in contrast, provides its users with access to hardware from several different quantum computing companies, each based on different technology. Some of them, like Rigetti and Microsoft’s own planned processor, are similar to IBM’s in that they have a fixed layout during manufacturing, and so can only handle codes that are compatible with their wiring layout. But others, such as those provided by Quantinuum and Atom Computing, store their qubits in atoms that can be moved around and connected in arbitrary ways. Those arbitrary connections allow very different types of error correction schemes to be considered.

It can be helpful to think of this using an analogy to geometry. A chip is like a plane, where it’s easiest to form the connections needed for error correction among neighboring qubits; longer connections are possible, but not as easy. Things like trapped ions and atoms provide a higher-dimensional system where far more complicated patterns of connections are possible. (Again, this is an analogy. IBM is using three-dimensional wiring in its processing chips, while Atom Computing stores all its atoms in a single plane.)

Microsoft’s announcement is focused on the sorts of processors that can form the more complicated, arbitrary connections. And, well, it’s taking full advantage of that, building an error correction system with connections that form a four-dimensional hypercube. “We really have focused on the four-dimensional codes due to their amenability to current and near term hardware designs,” Microsoft’s Krysta Svore told Ars.

The code not only describes the layout of the qubits and their connections, but also the purpose of each hardware qubit. Some of them are used to hang on to the value of the logical qubit(s) stored in a single block of code. Others are used for what are called “weak measurements.” These measurements tell us something about the state of the ones that are holding on to the data—not enough to know their values (a measurement that would end the entanglement), but enough to tell if something has changed. The details of the measurement allow corrections to be made that restore the original value.
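
To make the idea of weak measurements a bit more concrete, here is a deliberately simplified classical sketch: a three-bit repetition code in Python, where parity checks play the role Microsoft assigns to its measurement qubits. Real quantum codes (including the 4D codes described here) operate on superpositions and must avoid measuring the data directly, so treat this purely as an analogy.

```python
# A classical toy analogy (not Microsoft's 4D code): a 3-bit repetition code.
# The parity checks play the role of the "weak measurements" described above:
# they reveal whether something changed without revealing the stored value.

def syndrome(bits):
    """Compare neighboring bits; a 1 means that pair disagrees (an error happened)."""
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

def correct(bits):
    s = syndrome(bits)
    if s == (1, 0):      # only the first pair disagrees -> bit 0 flipped
        bits[0] ^= 1
    elif s == (1, 1):    # both pairs disagree -> the middle bit flipped
        bits[1] ^= 1
    elif s == (0, 1):    # only the second pair disagrees -> bit 2 flipped
        bits[2] ^= 1
    return bits

encoded = [1, 1, 1]          # a logical "1" spread across three bits
encoded[2] ^= 1              # a single error strikes
print(syndrome(encoded))     # (0, 1): says where the error is, not what the data is
print(correct(encoded))      # [1, 1, 1]: the original value is restored
```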

Microsoft’s error correction system is described in a preprint that the company recently released. It includes a family of related geometries, each of which provides different degrees of error correction, based on how many simultaneous errors they can identify and fix. The descriptions are about what you’d expect for complicated math and geometry—“Given a lattice Λ with an HNF L, the code subspace of the 4D geometric code C_Λ is spanned by the second homology H₂(T⁴_Λ, F₂) of the 4-torus T⁴_Λ”—but the gist is that all of them convert collections of physical qubits into six logical qubits that can be error corrected.

The more hardware qubits you add to host those six logical qubits, the greater error protection each of them gets. That becomes important because some more sophisticated algorithms will need more than the one-in-a-million error protection that Svore said Microsoft’s favored version will provide. That favorite is what’s called the Hadamard version, which bundles 96 hardware qubits to form six logical qubits, and has a distance of eight (distance being a measure of how many simultaneous errors it can tolerate). You can compare that with IBM’s announcement, which used 144 hardware qubits to host 12 logical qubits at a distance of 12 (so, more hardware, but more logical qubits and greater error resistance).
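
A quick back-of-the-envelope comparison, using only the numbers quoted above plus the standard rule that a distance-d code corrects up to ⌊(d−1)/2⌋ simultaneous errors, shows the trade-off in overhead per logical qubit:

```python
# Comparing the two codes using only the figures quoted in the article.
codes = {
    "Microsoft 4D (Hadamard version)": {"physical": 96, "logical": 6, "distance": 8},
    "IBM bivariate bicycle": {"physical": 144, "logical": 12, "distance": 12},
}

for name, c in codes.items():
    overhead = c["physical"] / c["logical"]
    correctable = (c["distance"] - 1) // 2   # a distance-d code fixes floor((d-1)/2) errors
    print(f"{name}: {overhead:.0f} physical qubits per logical qubit, "
          f"corrects up to {correctable} simultaneous errors")
```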

The other good stuff

On its own, a description of the geometry is not especially exciting. But Microsoft argues that this family of error correction codes has a couple of significant advantages. “All of these codes in this family are what we call single shot,” Svore said. “And that means that, with a very low constant number of rounds of getting information about the noise, one can decode and correct the errors. This is not true of all codes.”

Limiting the number of measurements needed to detect errors is important. For starters, measurements themselves can create errors, so making fewer makes the system more robust. In addition, in things like neutral atom computers, the atoms have to be moved to specific locations where measurements take place, and the measurements heat them up so that they can’t be reused until cooled. So, limiting the measurements needed can be very important for the performance of the hardware.

The second advantage of this scheme, as described in the draft paper, is the fact that you can perform all the operations needed for quantum computing on the logical qubits these schemes host. Just like in regular computers, all the complicated calculations performed on a quantum computer are built up from a small number of simple logical operations. But not every possible logical operation works well with any given error correction scheme. So it can be non-trivial to show that an error correction scheme is compatible with enough of the small operations to enable universal quantum computation.

So, the paper describes how some logical operations can be performed relatively easily, while a few others require manipulations of the error correction scheme in order to work. (These manipulations have names like lattice surgery and magic state distillation, which are good signs that the field doesn’t take itself that seriously.)

So, in sum, Microsoft feels that it has identified an error correction scheme that is fairly compact, can be implemented efficiently on hardware that stores qubits in photons, atoms, or trapped ions, and enables universal computation. What it hasn’t done, however, is show that it actually works. And that’s because it simply doesn’t have the hardware right now. Azure is offering trapped ion machines from IonQ and Quantinuum, but these top out at 56 qubits—well below the 96 needed for its favored version of these 4D codes. The largest it has access to is a 100-qubit machine from a company called PASQAL, which barely fits the 96 qubits needed, leaving no room for error.

While it should be possible to test smaller versions of codes in the same family, the Azure team has already demonstrated its ability to work with error correction codes based on hypercubes, so it’s unclear whether there’s anything to gain from that approach.

More atoms

Instead, it appears to be waiting for another partner, Atom Computing, to field its next-generation machine, one it’s designing in partnership with Microsoft. “This first generation that we are building together between Atom Computing and Microsoft will include state-of-the-art quantum capabilities, will have 1,200 physical qubits,” Svore said. “And then the next upgrade of that machine will have upwards of 10,000. And so you’re looking at then being able to go to upwards of a hundred logical qubits with deeper and more reliable computation available.”

So, today’s announcement was accompanied by an update on progress from Atom Computing, focusing on a process called “midcircuit measurement.” Normally, during quantum computing algorithms, you have to resist performing any measurements of the value of qubits until the entire calculation is complete. That’s because quantum calculations depend on things like entanglement and each qubit being in a superposition between its two values; measurements can cause all that to collapse, producing definitive values and ending entanglement.

Quantum error correction schemes, however, require that some of the hardware qubits undergo weak measurements multiple times while the computation is in progress. Those are quantum measurements taking place in the middle of a computation—midcircuit measurements, in other words. To show that its hardware will be up to the task that Microsoft expects of it, the company decided to demonstrate mid-circuit measurements on qubits implementing a simple error correction code.

The process highlights a couple of notable features that are specific to doing error correction with neutral atoms. To begin with, the atoms being used for error correction have to be moved to a location—the measurement zone—where they can be measured without disturbing anything else. Then, the measurement typically heats up the atom slightly, meaning it has to be cooled back down afterward. Neither of these processes is perfect, and so sometimes an atom gets lost and needs to be replaced with one from a reservoir of spares. Finally, the atom’s value needs to be reset, and it has to be sent back to its place in the logical qubit.

Testing revealed that about 1 percent of the atoms get lost each cycle, but the system successfully replaces them. In fact, the company set up a system where the entire collection of atoms is imaged during the measurement cycle, and any atom that goes missing is identified by an automated system and replaced.
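
Some rough arithmetic, assuming the roughly 1 percent per-cycle loss quoted above and treating losses as independent, shows why that automated refill matters over a long error-correction run:

```python
# Back-of-the-envelope illustration of why automatic replacement matters,
# assuming a ~1 percent loss per measurement cycle and independent losses.
loss_per_cycle = 0.01

for cycles in (25, 100, 500):
    surviving = (1 - loss_per_cycle) ** cycles
    print(f"after {cycles} measurement cycles: {surviving:.1%} of atoms remain "
          f"if nothing is replaced")
# After 100 cycles only ~37% of the original atoms would remain, which is why
# the imaging-and-refill system is essential for long error-correction runs.
```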

Overall, without all of this error correction machinery in place, qubit fidelity in this hardware is about 98 percent. With error correction turned on, even this simple logical qubit saw its fidelity rise to over 99.5 percent. All of which suggests Atom Computing’s next computer should be up to some significant tests of Microsoft’s error correction scheme.

Waiting for the lasers

The key questions are when that machine will be released, and when its successor, which should be capable of performing some real calculations, will follow. Those questions are challenging to answer because, more so than some other quantum computing technologies, neutral atom computing is dependent on something that’s not made by the people who build the computers: lasers. Everything about this system—holding atoms in place, moving them around, measuring, performing manipulations—is done with a laser. The lower the noise of the laser (in terms of things like frequency drift and energy fluctuations), the better the hardware will perform.

So, while Atom can explain its needs to its suppliers and work with them to get things done, it has less control over its fate than some other companies in this space.


John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.

Microsoft lays out its path to useful quantum computing Read More »


IBM now describing its first error-resistant quantum compute system


Company is moving past focus on qubits, shifting to functional compute units.

A rendering of what IBM expects will be needed to house a Starling quantum computer. Credit: IBM

On Tuesday, IBM released its plans for building a system that should push quantum computing into entirely new territory: a system that can perform useful calculations while catching and fixing errors, and that would be utterly impossible to model using classical computing methods. The hardware, which will be called Starling, is expected to be able to perform 100 million operations without error on a collection of 200 logical qubits. And the company expects to have it available for use in 2029.

Perhaps just as significant, IBM is also committing to a detailed description of the intermediate steps to Starling. These include a number of processors that will be configured to host a collection of error-corrected qubits, essentially forming a functional compute unit. This marks a major transition for the company, as it involves moving away from talking about collections of individual hardware qubits and focusing instead on units of functional computational hardware. If all goes well, it should be possible to build Starling by chaining a sufficient number of these compute units together.

“We’re updating [our roadmap] now with a series of deliverables that are very precise,” IBM VP Jay Gambetta told Ars, “because we feel that we’ve now answered basically all the science questions associated with error correction and it’s becoming more of a path towards an engineering problem.”

New architectures

Error correction on quantum hardware involves entangling a group of qubits in a way that distributes one or more quantum bit values among them and includes additional qubits that can be used to check the state of the system. It can be helpful to think of these as data and measurement qubits. Performing weak quantum measurements on the measurement qubits produces what’s called “syndrome data,” which can be interpreted to determine whether anything about the data qubits has changed (indicating an error) and how to correct it.

There are lots of potential ways to arrange different combinations of data and measurement qubits for this to work, each referred to as a code. But, as a general rule, the more hardware qubits committed to the code, the more robust it will be to errors, and the more logical qubits can be distributed among its hardware qubits.

Some quantum hardware, like that based on trapped ions or neutral atoms, is relatively flexible when it comes to hosting error-correction codes. The hardware qubits can be moved around so that any two can be entangled, so it’s possible to adopt a huge range of configurations, albeit at the cost of the time spent moving atoms around. IBM’s technology is quite different. It relies on qubits made of superconducting electronics laid out on a chip, with entanglement mediated by wiring that runs between qubits. The layout of this wiring is set during the chip’s manufacture, and so the chip’s design commits it to a limited number of potential error-correction codes.

Unfortunately, this wiring can also enable crosstalk between neighboring qubits, causing them to lose their state. To avoid this, existing IBM processors have their qubits wired in what they term a “heavy hex” configuration, named for its hexagonal arrangements of connections among its qubits. This has worked well to keep the error rate of its hardware down, but it also poses a challenge, since IBM has decided to go with an error-correction code that’s incompatible with the heavy hex geometry.

A couple of years back, an IBM team described a compact error correction code called a low-density parity check (LDPC). This requires a square grid of nearest-neighbor connections among its qubits, as well as wiring to connect qubits that are relatively distant on the chip. To get its chips and error-correction scheme in sync, IBM has made two key advances. The first is in its chip packaging, which now uses several layers of wiring sitting above the hardware qubits to enable all of the connections needed for the LDPC code.

We’ll see that first in a processor called Loon that’s on the company’s developmental roadmap. “We’ve already demonstrated these three things: high connectivity, long-range couplers, and couplers that break the plane [of the chip] and connect to other qubits,” Gambetta said. “We have to combine them all as a single demonstration showing that all these parts of packaging can be done, and that’s what I want to achieve with Loon.” Loon will be made public later this year.

On the left, the simple layout of the connections in a current-generation Heron processor. At right, the complicated web of connections that will be present in Loon. Credit: IBM

The second advance IBM has made is to eliminate the crosstalk that the heavy hex geometry was used to minimize, so heavy hex will be going away. “We are releasing this year a bird for near-term experiments that is a square array that has almost zero crosstalk,” Gambetta said, “and that is Nighthawk.” The more densely connected qubits cut the overhead needed to perform calculations by a factor of 15, Gambetta told Ars.

Nighthawk is a 2025 release on a parallel roadmap that you can think of as user-facing. Iterations on its basic design will be released annually through 2028, each enabling more operations without error (going from 5,000 gate operations this year to 15,000 in 2028). Each individual Nighthawk processor will host 120 hardware qubits, but 2026 will see three of them chained together and operating as a unit, providing 360 hardware qubits. That will be followed in 2027 by a machine with nine linked Nighthawk processors, boosting the hardware qubit number over 1,000.

Riding the bicycle

The real future of IBM’s hardware, however, will be happening over on the developmental line of processors, where talk about hardware qubit counts will become increasingly irrelevant. In a technical document released today, IBM is describing the specific LDPC code it will be using, termed a bivariate bicycle code due to some cylindrical symmetries in its details that vaguely resemble bicycle wheels. The details of the connections matter less than the overall picture of what it takes to use this error code in practice.

IBM describes two implementations of this form of LDPC code. In the first, 144 hardware qubits are arranged so that they play host to 12 logical qubits and all of the measurement qubits needed to perform error checks. The standard measure of a code’s ability to catch and correct errors is called its distance, and in this case, the distance is 12. As an alternative, they also describe a code that uses 288 hardware qubits to host the same 12 logical qubits but boost the distance to 18, meaning it’s more resistant to errors. IBM will make one of these collections of logical qubits available as a Kookaburra processor in 2026, which will use them to enable stable quantum memory.
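
For the curious, the check matrices of a bivariate bicycle code are straightforward to write down. The sketch below assumes the construction and parameters reported in the research literature for the [[144, 12, 12]] code (l = 12, m = 6, A = x³ + y + y², B = y³ + x + x²); the specific exponents should be treated as illustrative rather than as IBM’s definitive specification.

```python
# Sketch of a bivariate bicycle code's parity checks, with parameters as
# reported in the literature for the [[144, 12, 12]] code; treat the specific
# polynomial exponents as illustrative, not as IBM's official specification.
import numpy as np

def cyclic_shift(n):
    """n x n cyclic permutation matrix over GF(2)."""
    return np.roll(np.eye(n, dtype=int), 1, axis=1)

l, m = 12, 6
x = np.kron(cyclic_shift(l), np.eye(m, dtype=int))   # shift in the first variable
y = np.kron(np.eye(l, dtype=int), cyclic_shift(m))   # shift in the second variable

A = (np.linalg.matrix_power(x, 3) + y + np.linalg.matrix_power(y, 2)) % 2
B = (np.linalg.matrix_power(y, 3) + x + np.linalg.matrix_power(x, 2)) % 2

HX = np.hstack([A, B]) % 2       # X-type stabilizer checks
HZ = np.hstack([B.T, A.T]) % 2   # Z-type stabilizer checks

def gf2_rank(M):
    """Rank over GF(2) via Gaussian elimination."""
    M = M.copy() % 2
    rows, cols = M.shape
    rank = 0
    for col in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r, col]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]        # move the pivot row up
        for r in range(rows):
            if r != rank and M[r, col]:
                M[r] ^= M[rank]                    # clear the column elsewhere
        rank += 1
    return rank

n = 2 * l * m
k = n - gf2_rank(HX) - gf2_rank(HZ)
print("checks commute:", not ((HX @ HZ.T) % 2).any())      # True: a valid CSS code
print(f"n = {n} physical qubits, k = {k} logical qubits")  # expect 144 and 12 here
```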

The follow-on will bundle these with a handful of additional qubits that can produce quantum states that are needed for some operations. Those, plus hardware needed for the quantum memory, form a single, functional computation unit, built on a single chip, that is capable of performing all the operations needed to implement any quantum algorithm.

That will appear with the Cockatoo chip, which will also enable multiple processing units to be linked on a single bus, allowing the logical qubit count to grow beyond 12. (The company says that one of the dozen logical qubits in each unit will be used to mediate entanglement with other units and so won’t be available for computation.) That will be followed by the first test versions of Starling, which will allow universal computations on a limited number of logical qubits spread across multiple chips.

Separately, IBM is releasing a document that describes a key component of the system that will run on classical computing hardware. Full error correction requires evaluating the syndrome data derived from the state of all the measurement qubits in order to determine the state of the logical qubits and whether any corrections need to be made. As the complexity of the logical qubits grows, the computational burden of evaluating that data grows with it. If this evaluation can’t be executed in real time, then it becomes impossible to perform error-corrected calculations.

To address this, IBM has developed a message-passing decoder that can perform parallel evaluations of the syndrome data. The system explores more of the solution space by a combination of randomizing the weight given to the memory of past solutions and by handing any seemingly non-optimal solutions on to new instances for additional evaluation. The key thing is that IBM estimates that this can be run in real time using FPGAs, ensuring that the system works.
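
As a heavily simplified illustration of that ensemble idea (and emphatically not IBM’s actual decoder), here is a toy syndrome decoder: several randomized greedy instances attack the same syndrome, and the first one that fully explains it wins. A real LDPC decoder passes messages between checks and bits, but the flavor of "randomize, run many copies, keep what works" is the same.

```python
# Toy analogue of an ensemble decoder, NOT IBM's decoder. The code here is
# just a 5-bit repetition code so the example stays tiny.
import numpy as np

rng = np.random.default_rng(0)

# Parity-check matrix: adjacent bits must agree.
H = np.array([[1, 1, 0, 0, 0],
              [0, 1, 1, 0, 0],
              [0, 0, 1, 1, 0],
              [0, 0, 0, 1, 1]])

def greedy_decode(syndrome, max_iters=20):
    """Randomized greedy bit-flip: repeatedly flip a bit that touches the most
    unsatisfied checks, breaking ties at random."""
    correction = np.zeros(H.shape[1], dtype=int)
    s = syndrome.copy()
    for _ in range(max_iters):
        if not s.any():
            return correction               # syndrome fully explained
        scores = H.T @ s                    # unsatisfied checks touching each bit
        best = np.flatnonzero(scores == scores.max())
        flip = rng.choice(best)             # randomization = exploring more solutions
        correction[flip] ^= 1
        s = (s + H[:, flip]) % 2
    return None                             # this instance failed

error = np.array([0, 0, 1, 0, 0])           # a single bit flip on the data
syndrome = H @ error % 2

# "Parallel" instances; in hardware these would run concurrently on FPGAs.
for attempt in range(8):
    result = greedy_decode(syndrome)
    if result is not None:
        print("instance", attempt, "found correction", result)
        break
```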

A quantum architecture

There are a lot more details beyond those, as well. Gambetta described the linkage between each computational unit—IBM is calling it a Universal Bridge—which requires one microwave cable for each code distance of the logical qubits being linked. (In other words, a distance 12 code would need 12 microwave-carrying cables to connect each chip.) He also said that IBM is developing control hardware that can operate inside the refrigeration hardware, based on what they’re calling “cold CMOS,” which is capable of functioning at 4 Kelvin.

The company is also releasing renderings of what it expects Starling to look like: a series of dilution refrigerators, all connected by a single pipe that contains the Universal Bridge. “It’s an architecture now,” Gambetta said. “I have never put details in the roadmap that I didn’t feel we could hit, and now we’re putting a lot more details.”

The striking thing to me about this is that it marks a shift away from a focus on individual qubits, their connectivity, and their error rates. The hardware error rates are now good enough (4 × 10⁻⁴) for this to work, although Gambetta felt that a few more improvements should be expected. And connectivity will now be directed exclusively toward creating a functional computational unit.

That said, there’s still a lot of space beyond Starling on IBM’s roadmap. The 200 logical qubits it promises will be enough to handle some problems, but not enough to perform the complex algorithms needed to do things like break encryption. That will need to wait for something closer to Blue Jay, a 2033 system that IBM expects will have 2,000 logical qubits. And, as of right now, it’s the only thing listed beyond Starling.


IBM now describing its first error-resistant quantum compute system Read More »


Startup puts a logical qubit in a single piece of hardware

A bit over a year ago, Nord Quantique used a similar setup to show that it could be used to identify the most common form of error in these devices, one in which the system loses one of its photons. “We can store multiple microwave photons into each of these cavities, and the fact that we have redundancy in the system comes exactly from this,” said Nord Quantique’s CTO, Julien Camirand Lemyre. However, this system was unable to handle many of the less common errors that might also occur.

This time around, the company is showing that it can get an actual logical qubit into a variant of the same hardware. In the earlier version of its equipment, the resonator cavity had a single post and supported a single frequency. In the newer iteration, there were two posts and two frequencies. Each of those frequencies creates its own quantum resonator in the same cavity, with its own set of modes. “It’s this ensemble of photons inside this cavity that creates the logical qubit,” Lemyre told Ars.

The additional quantum information that can now be stored in the system enables it to identify more complex errors than the loss of a photon.

Catching, but not fixing errors

The company did two experiments with this new hardware. First, it ran multiple rounds of error detection on data stored in the logical qubit, essentially testing its ability to act like a quantum memory and retain the information stored there. Without correcting errors, the system rapidly decayed, with an error probability in each round of measurement of about 12 percent. By the time the system reached the 25th measurement, almost every instance had already encountered an error.
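
For a sense of scale, here is the arithmetic behind that observation, assuming a constant 12 percent error probability per round and independence between rounds:

```python
# Rough sanity check on the numbers quoted above.
p_error_per_round = 0.12

for rounds in (5, 10, 25):
    survival = (1 - p_error_per_round) ** rounds
    print(f"{rounds} rounds: {survival:.1%} of runs still error-free")
# 25 rounds -> roughly 4 percent, matching "almost every instance had already
# encountered an error" by the 25th measurement.
```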

The second time through, the company repeated the process, discarding any instances in which an error occurred. In almost every instance, that meant the results were discarded long before they got through two dozen rounds of measurement. But at these later stages, none of the remaining instances were in an erroneous state. That indicates that a successful correction of the errors—something the team didn’t try—would be able to fix all the detected problems.

Startup puts a logical qubit in a single piece of hardware Read More »


Research roundup: 7 stories we almost missed


Ping-pong bots, drumming chimps, picking styles of two jazz greats, and an ancient underground city’s soundscape

Time lapse photos show a new ping-pong-playing robot performing a top spin. Credit: David Nguyen, Kendrick Cancio and Sangbae Kim

It’s a regrettable reality that there is never time to cover all the interesting scientific stories we come across each month. In the past, we’ve featured year-end roundups of cool science stories we (almost) missed. This year, we’re experimenting with a monthly collection. May’s list includes a nifty experiment to make a predicted effect of special relativity visible; a ping-pong playing robot that can return hits with 88 percent accuracy; and the discovery of the rare genetic mutation that makes orange cats orange, among other highlights.

Special relativity made visible

The Terrell-Penrose-Effect: Fast objects appear rotated

Credit: TU Wien

Perhaps the most well-known features of Albert Einstein’s special theory of relativity are time dilation and length contraction. In 1959, two physicists predicted another feature of relativistic motion: an object moving near the speed of light should also appear to be rotated. It’s not been possible to demonstrate this experimentally, however—until now. Physicists at the Vienna University of Technology figured out how to reproduce this rotational effect in the lab using laser pulses and precision cameras, according to a paper published in the journal Communications Physics.

They found their inspiration in art, specifically an earlier collaboration with an artist named Enar de Dios Rodriguez, who collaborated with VUT and the University of Vienna on a project involving ultra-fast photography and slow light. For this latest research, they used objects shaped like a cube and a sphere and moved them around the lab while zapping them with ultrashort laser pulses, recording the flashes with a high-speed camera.

Getting the timing just right effectively mimics what the objects would look like if light traveled at just 2 meters per second. After photographing the objects many times using this method, the team then combined the still images into a single image. The results: the cube looked twisted and the sphere’s North Pole was in a different location—a demonstration of the rotational effect predicted back in 1959.

DOI: Communications Physics, 2025. 10.1038/s42005-025-02003-6  (About DOIs).

Drumming chimpanzees

A chimpanzee feeling the rhythm. Credit: Current Biology/Eleuteri et al., 2025.

Chimpanzees are known to “drum” on the roots of trees as a means of communication, often combining that action with what are known as “pant-hoot” vocalizations (see above video). Scientists have found that the chimps’ drumming exhibits key elements of musical rhythm much like humans’ does—specifically non-random timing and isochrony—according to a paper published in the journal Current Biology. And chimps from different geographical regions have different drumming rhythms.

Back in 2022, the same team observed that individual chimps had unique styles of “buttress drumming,” which served as a kind of communication, letting others in the same group know their identity, location, and activity. This time around they wanted to know if this was also true of chimps living in different groups and whether their drumming was rhythmic in nature. So they collected video footage of the drumming behavior among 11 chimpanzee communities across six populations in East Africa (Uganda) and West Africa (Ivory Coast), amounting to 371 drumming bouts.

Their analysis of the drum patterns confirmed their hypothesis. The western chimps drummed in regularly spaced hits, used faster tempos, and started drumming earlier during their pant-hoot vocalizations. Eastern chimps would alternate between shorter and longer spaced hits. Since this kind of rhythmic percussion is one of the earliest evolved forms of human musical expression and is ubiquitous across cultures, findings such as this could shed light on how our love of rhythm evolved.

DOI: Current Biology, 2025. 10.1016/j.cub.2025.04.019  (About DOIs).

Distinctive styles of two jazz greats

Wes Montgomery (left) and Joe Pass (right) playing guitars.

Jazz lovers likely need no introduction to Joe Pass and Wes Montgomery, 20th century guitarists who influenced generations of jazz musicians with their innovative techniques. Montgomery, for instance, didn’t use a pick, preferring to pluck the strings with his thumb—a method he developed because he practiced at night after working all day as a machinist and didn’t want to wake his children or neighbors. Pass developed his own range of picking techniques, including fingerpicking, hybrid picking, and “flat picking.”

Chirag Gokani and Preston Wilson, both with Applied Research Laboratories and the University of Texas, Austin, greatly admired both Pass and Montgomery and decided to explore the acoustics underlying their distinctive playing, modeling the interactions of the thumb, fingers, and pick with a guitar string. They described their research during a meeting of the Acoustical Society of America in New Orleans, LA.

Among their findings: Montgomery achieved his warm tone by playing closer to the bridge and mostly plucking at the string. Pass’s rich tone arose from a combination of using a pick and playing closer to the guitar neck. There were also differences in how much a thumb, finger, and pick slip off the string:  use of the thumb (Montgomery) produced more of a “pluck” compared to the pick (Pass), which produced more of a “strike.” Gokani and Wilson think their model could be used to synthesize digital guitars with a more realistic sound, as well as helping guitarists better emulate Pass and Montgomery.

Sounds of an ancient underground city

A collection of images from the underground tunnels of Derinkuyu.

Credit: Sezin Nas

Turkey is home to the underground city Derinkuyu, originally carved out inside soft volcanic rock around the 8th century BCE. It was later expanded to include four main ventilation channels (and some 50,000 smaller shafts) serving seven levels, which could be closed off from the inside with a large rolling stone. The city could hold up to 20,000 people and it  was connected to another underground city, Kaymakli, via tunnels. Derinkuyu helped protect Arab Muslims during the Arab-Byzantine wars, served as a refuge from the Ottomans in the 14th century, and as a haven for Armenians escaping persecution in the early 20th century, among other functions.

The tunnels were rediscovered in the 1960s and about half of the city has been open to visitors since 2016. The site is naturally of great archaeological interest, but there has been little to no research on the acoustics of the site, particularly the ventilation channels—one of Derinkuyu’s most unique features, according to Sezin Nas, an architectural acoustician at Istanbul Galata University in Turkey.  She gave a talk at a meeting of the Acoustical Society of America in New Orleans, LA, about her work on the site’s acoustic environment.

Nas analyzed a church, a living area, and a kitchen, measuring sound sources and reverberation patterns, among other factors, to create a 3D virtual soundscape. The hope is that a better understanding of this aspect of Derinkuyu could improve the design of future underground urban spaces—as well as one day using her virtual soundscape to enable visitors to experience the sounds of the city themselves.

MIT’s latest ping-pong robot

Robots playing ping-pong have been a thing since the 1980s, of particular interest to scientists because it requires the robot to combine the slow, precise ability to grasp and pick up objects with dynamic, adaptable locomotion. Such robots need high-speed machine vision, fast motors and actuators, precise control, and the ability to make accurate predictions in real time, not to mention being able to develop a game strategy. More recent designs use AI techniques to allow the robots to “learn” from prior data to improve their performance.

MIT researchers have built their own version of a ping-pong playing robot, incorporating a lightweight design and the ability to precisely return shots. They built on prior work developing the Humanoid, a small bipedal two-armed robot—specifically, modifying the Humanoid’s arm by adding an extra degree of freedom to the wrist so the robot could control a ping-pong paddle. They tested their robot by mounting it on a ping-pong table and lobbing 150 balls at it from the other side of the table, capturing the action with high-speed cameras.

The new bot can execute three different swing types (loop, drive, and chip) and during the trial runs it returned the ball with impressive accuracy across all three types: 88.4 percent, 89.2 percent, and 87.5 percent, respectively. Subsequent tweaks to their system brought the robot’s strike speed up to 19 meters per second (about 42 MPH), close to the 12 to 25 meters per second of advanced human players. The addition of control algorithms gave the robot the ability to aim. The robot still has limited mobility and reach because it has to be fixed to the ping-pong table, but the MIT researchers plan to rig it to a gantry or wheeled platform in the future to address that shortcoming.

Why orange cats are orange

an orange tabby kitten

Cat lovers know orange cats are special for more than their unique coloring, but that’s the quality that has intrigued scientists for almost a century. Sure, lots of animals have orange, ginger, or yellow hues, like tigers, orangutans, and golden retrievers. But in domestic cats that color is specifically linked to sex. Almost all orange cats are male. Scientists have now identified the genetic mutation responsible and it appears to be unique to cats, according to a paper published in the journal Current Biology.

Prior work had narrowed down the region on the X chromosome most likely to contain the relevant mutation. The scientists knew that females usually have just one copy of the mutation and in that case have tortoiseshell (partially orange) coloring, although in rare cases, a female cat will be orange if both X chromosomes have the mutation. Over the last five to ten years, there has been an explosion in genome resources (including complete sequenced genomes) for cats which greatly aided the team’s research, along with taking additional DNA samples from cats at spay and neuter clinics.

From an initial pool of 51 candidate variants, the scientists narrowed it down to three genes, only one of which was likely to play any role in gene regulation: Arhgap36. It wasn’t known to play any role in pigment cells in humans, mice, or non-orange cats. But orange cats are special; their mutation (sex-linked orange) turns on Arhgap36 expression in pigment cells (and only pigment cells), thereby interfering with the molecular pathway that controls coat color in other orange-shaded mammals. The scientists suggest that this is an example of how genes can acquire new functions, thereby enabling species to better adapt and evolve.

DOI: Current Biology, 2025. 10.1016/j.cub.2025.03.075  (About DOIs).

Not a Roman “massacre” after all

Two of the skeletons excavated by Mortimer Wheeler in the 1930s, dating from the 1st century AD.

Credit: Martin Smith

In 1936, archaeologists excavating the Iron Age hill fort Maiden Castle in the UK unearthed dozens of human skeletons, all showing signs of lethal injuries to the head and upper body—likely inflicted with weaponry. At the time, this was interpreted as evidence of a pitched battle between the Britons of the local Durotriges tribe and invading Romans. The Romans slaughtered the native inhabitants, thereby bringing a sudden violent end to the Iron Age. At least that’s the popular narrative that has prevailed ever since in countless popular articles, books, and documentaries.

But a paper published in the Oxford Journal of Archaeology calls that narrative into question. Archaeologists at Bournemouth University have re-analyzed those burials, incorporating radiocarbon dating into their efforts. They concluded that those individuals didn’t die in a single brutal battle. Rather, it was Britons killing other Britons over multiple generations between the first century BCE and the first century CE—most likely in periodic localized outbursts of violence in the lead-up to the Roman conquest of Britain. It’s possible there are still many human remains waiting to be discovered at the site, which could shed further light on what happened at Maiden Castle.

DOI: Oxford Journal of Archaeology, 2025. 10.1111/ojoa.12324  (About DOIs).


Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.

Research roundup: 7 stories we almost missed Read More »


Falcon 9 sonic booms can feel more like seismic waves

Could the similarities confuse California residents who might mistake a sonic boom for an earthquake?  Perhaps, at least until residents learn otherwise. “Since we’re often setting up in people’s backyard, they text us the results of what they heard,” said Gee. “It’s fantastic citizen science. They’ll tell us the difference is that the walls shake but the floors don’t. They’re starting to be able to tell the difference between an earthquake or a sonic boom from a launch.”

Launch trajectories of Falcon 9 rockets along the California coast. Credit: Kent Gee

A rocket’s trajectory also plays an important role. “Everyone sees the same thing, but what you hear depends on where you’re at and the rocket’s path or trajectory,” said Gee, adding that even the same flight path can nonetheless produce markedly different noise levels. “There’s a focal region in Ventura, Oxnard, and Camarillo where the booms are more impactful,” he said. “Where that focus occurs changes from launch to launch, even for the same trajectory.” That points to meteorology also being a factor: certain times of year could potentially have more impact than others as weather conditions shift, with wind shears, temperature gradients, and topography, for instance, potentially affecting the propagation of sonic booms.

In short, “If you can change your trajectory even a little under the right meteorological conditions, you can have a big impact on the sonic booms in this region of the country,” said Gee. And it’s only the beginning of the project; the team is still gathering data. “No two launches look the same right now,” said Gee. “It’s like trying to catch lightning.”

As our understanding improves, he sees the conversation shifting to more subjective social questions, possibly leading to the development of science-based local regulations, such as noise ordinances, to address any negative launch impacts. The next step is to model sonic booms under different weather conditions, which will be challenging due to coastal California’s microclimates. “If you’ve ever driven along the California coast, the weather changes dramatically,” said Gee. “You go from complete fog at Vandenberg to complete sun in Ventura County just 60 miles from the base.”

Falcon 9 sonic booms can feel more like seismic waves Read More »


The key to a successful egg drop experiment? Drop it on its side

There was a key difference, however, between how vertically and horizontally  squeezed eggs deformed in the compression experiments—namely, the former deformed less than the latter. The shell’s greater rigidity along its long axis was an advantage because the heavy load was distributed over the surface. (It’s why the one-handed egg-cracking technique targets the center of a horizontally held egg.)

But the authors found that this advantage when under static compression proved to be a disadvantage when dropping eggs from a height, with the horizontal position emerging as the optimal orientation.  It comes down to the difference between stiffness—how much force is needed to deform the egg—and toughness, i.e., how much energy the egg can absorb before it cracks.

Cohen et al.’s experiments showed that eggs are tougher when loaded horizontally along their equator, and stiffer when compressed vertically, suggesting that “an egg dropped on its equator can likely sustain greater drop heights without cracking,” they wrote. “Even if eggs could sustain a higher force when loaded in the vertical direction, it does not necessarily imply that they are less likely to break when dropped in that orientation. In contrast to static loading, to remain intact following a dynamic impact, a body must be able to absorb all of its kinetic energy by transferring it into reversible deformation.”
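
One way to see the distinction is to model each orientation as an idealized linear spring that fails at the same force, so the energy it can absorb before cracking is the area under its force-deflection curve, F²/(2k). All numbers below are invented for illustration; only the qualitative conclusion, that the more compliant orientation absorbs more energy, tracks the paper's argument.

```python
# Idealized comparison: two orientations modeled as linear springs with the
# same breaking force F_max (a simplification; real shells differ here too).
# Energy absorbed before failure = F_max**2 / (2 * k), so lower stiffness
# means more kinetic energy soaked up. All numbers are made up.
F_max = 50.0              # hypothetical breaking force, newtons
egg_mass_kg = 0.05        # ~50 g egg, assumed
g = 9.81

orientations = {
    "vertical (stiffer)": 1.0e5,            # spring constant k in N/m, illustrative
    "horizontal (more compliant)": 0.5e5,
}

for name, k in orientations.items():
    energy_j = F_max**2 / (2 * k)                        # joules absorbed before cracking
    drop_height_mm = 1000 * energy_j / (egg_mass_kg * g)
    print(f"{name}: absorbs {1000 * energy_j:.1f} mJ, "
          f"survives a drop of roughly {drop_height_mm:.0f} mm")
```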

“Eggs need to be tough, not stiff, in order to survive a fall,” Cohen et al. concluded, pointing to our intuitive understanding that we should bend our knees rather than lock them into a straightened position when landing after a jump, for example. “Our results and analysis serve as a cautionary tale about how language can affect our understanding of a system, and improper framing of a problem can lead to misunderstanding and miseducation.”

DOI: Communications Physics, 2025. 10.1038/s42005-025-02087-0  (About DOIs).

The key to a successful egg drop experiment? Drop it on its side Read More »


CERN gears up to ship antimatter across Europe

There’s a lot of matter around, which ensures that any antimatter produced experiences a very short lifespan. Studying antimatter, therefore, has been extremely difficult. But that’s changed a bit in recent years, as CERN has set up a facility that produces and traps antimatter, allowing for extensive studies of its properties, including entire anti-atoms.

Unfortunately, the hardware used to capture antiprotons also produces interference that limits the precision with which measurements can be made. So CERN decided that it might be good to determine how to move the antimatter away from where it’s produced. Since it was tackling that problem anyway, CERN decided to make a shipping container for antimatter, allowing it to be put on a truck and potentially taken to labs throughout Europe.

A shipping container for antimatter

The problem facing CERN comes from its own hardware. The antimatter it captures is produced by smashing a particle beam into a stationary target. As a result, all the anti-particles that come out of the debris carry a lot of energy. If you want to hold on to any of them, you have to slow them down, which is done using electromagnetic fields that can act on the charged antimatter particles. Unfortunately, as the team behind the new work notes, many of the measurements we’d like to do with the antimatter are “extremely sensitive to external magnetic field noise.”

In short, the hardware that slows the antimatter down limits the precision of the measurements you can take.

The obvious solution is to move the antimatter away from where it’s produced. But that gets tricky very fast. The antimatter containment device has to be maintained at an extreme vacuum and needs superconducting materials to produce the electromagnetic fields that keep the antimatter from bumping into the walls of the container. All of that means a significant power supply, along with a cache of liquid helium to keep the superconductors working. A standard shipping container just won’t do.

So the team at CERN built a two-meter-long portable containment device. On one end is a junction that allows it to be plugged into the beam of particles produced by the existing facility. That junction leads to the containment area, which is blanketed by a superconducting magnet. Elsewhere on the device are batteries to ensure an uninterrupted power supply, along with the electronics to run it all. The whole setup is encased in a metal frame that includes lifting points that can be used to attach it to a crane for moving around.

CERN gears up to ship antimatter across Europe Read More »


Physics of the perfect cacio e pepe sauce


The trick: Add corn starch separately to make the sauce rather than using pasta water.

Cacio e pepe is an iconic pasta dish that can be frustratingly difficult to make. Credit: Simone Frau

Nobody does pasta quite like the Italians, as anyone who has tasted an authentic “pasta alla cacio e pepe” can attest. It’s a simple dish: just tonnarelli pasta, pecorino cheese, and pepper. But its simplicity is deceptive. Cacio e pepe (“cheese and pepper”) is notoriously challenging to make because it’s so easy for the sauce to form unappetizing clumps with a texture more akin to stringy mozzarella rather than being smooth and creamy.

A team of Italian physicists has come to the rescue with a foolproof recipe based on their many scientific experiments, according to a new paper published in the journal Physics of Fluids. The trick: using corn starch for the cheese and pepper sauce instead of relying on however much starch leaches into the boiling water as the pasta is cooked.

“A true Italian grandmother or a skilled home chef from Rome would never need a scientific recipe for cacio e pepe, relying instead on instinct and years of experience,” the authors wrote. “For everyone else, this guide offers a practical way to master the dish. Preparing cacio e pepe successfully depends on getting the balance just right, particularly the ratio of starch to cheese. The concentration of starch plays a crucial role in keeping the sauce creamy and smooth, without clumps or separation.”

There has been a surprising amount of pasta-related physics research in recent years, particularly around spaghetti—the mechanics of slurping the pasta into one’s mouth, for instance, or spitting it out (aka, the “reverse spaghetti problem”). The most well-known is the question of how to get dry spaghetti strands to break neatly in two rather than three or more scattered pieces. French physicists successfully explained the dynamics in an Ig Nobel Prize-winning 2006 paper. They found that, counterintuitively, a dry spaghetti strand produces a “kick back” traveling wave as it breaks. This wave temporarily increases the curvature in other sections, leading to many more breaks.

In 2020, physicists provided an explanation for why a strand of spaghetti in a pot of boiling water will start to sag as it softens before sinking to the bottom of the pot and curling back on itself in a U shape. Physicists have also discovered a way to determine if one’s spaghetti is perfectly done by using a simple ruler (although one can always use the tried-and-true method of flinging a test strand against the wall). In 2021, inspired by flat-packed furniture, scientists came up with an ingenious solution to packaging differently shaped pastas: ship them in a flat 2D form that takes on the final 3D shape when cooked, thanks to carefully etched patterns in the pasta.

And earlier this year, physicists investigated how adding salt to a pasta pot to make it boil faster can leave a white ring on the bottom of the pot, with an eye toward identifying the factors that lead to the perfect salt ring. They found that particles released from a smaller height fall faster and form a pattern with a clean central region. Those released from a greater height take longer to fall to the bottom, and the cloud of particles expands radially until the particles are far enough apart not to be influenced by the wakes of neighboring particles, such that they no longer form a cloud. In the latter case, you end up with a homogeneous salt ring deposit.

Going through a phase (separation)

Comparing the effect of water alone, pasta water that retains some starch, and pasta water “risottata.” Credit: G. Bartolucci et al., 2025

So it shouldn’t be the least bit surprising that physicists have now turned their attention to the problem of the perfect cacio e pepe sauce. The authors are well aware that they are treading on sacred ground for Italian traditionalists. “I hope that eight Italian authors is enough [to quell skepticism],” co-author Ivan Di Terlizzi of the Max Planck Institute for the Physics of Complex Systems told The New York Times back in January. (An earlier version of the paper was posted to the physics preprint arXiv in January, prompting that earlier coverage.)

Terlizzi and his fellow authors are all living abroad and frequently meet for dinner. Cacio e pepe is among their favorite traditional dishes to make, and as physicists, they couldn’t help but want to learn more about the unique physics of the process, not to mention “the more practical aim to avoid wasting good pecorino,” said Terlizzi. They focused on the separation that often occurs when cheese and water are mixed, building on earlier culinary experiments.

As the pasta cooks in boiling water, the noodles release starch. Traditionally, the chef will extract part of the water and starch solution—which is cooled to a suitable temperature to avoid clumping as the cheese proteins “denaturate”—and mix it with the cheese to make the sauce, adding the pepper last, right before serving. But the authors note that temperature is not the only factor that can lead to this dreaded “mozzarella phase.”

According to the authors, if one tries to mix cheese and water without any starch, the clumping is more pronounced. There is less clumping with water containing a little starch, like water in which pasta has been cooked. And when one mixes the cheese with pasta water “risottata”—i.e., collected and heated in a pan so enough water evaporates that there is a higher concentration of starch—there is almost no clumping.

Effect of trisodium citrate on the stability of cacio e pepe sauce. Credit: G. Bartolucci et al., 2025

So starch plays a crucial role in the process of making cacio e pepe. The authors devised a set of experiments to scientifically investigate the phase behavior of water, starch, and cheese mixed together in various concentrations and at different temperatures. They primarily used standard kitchen tools to make sure home cooks could recreate their results (although not every kitchen has a sous vide machine). This enabled them to devise a phase diagram of what happens to the sauce as the conditions change.

The authors found that the correct starch ratio is between 2 to 3 percent of the cheese weight. Below that, you get the clumping phase separation; above that, and the sauce “becomes stiff and unappetizing as it cools,” they wrote. Pasta water alone contains too little starch. Using pasta water “risottata” may concentrate the starch, but the chef has less control over the precise amount of starch. So the authors recommend simply dissolving 4 grams of powdered potato or corn starch in 40 grams of water, heating it gently until it thickens—a transition known as starch gelatinization—and combining that gel with the cheese. They also recommend toasting the black pepper briefly before adding it to the mixture to enhance its flavors and aromas.
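
Translated into a tiny kitchen calculator, using only the ratios quoted above (the 4 g starch in 40 g water example implies water at roughly ten times the starch mass):

```python
def cacio_e_pepe_gel(cheese_g, starch_fraction=0.025):
    """Return (starch_g, water_g) for a given amount of pecorino,
    using the paper's 2-3 percent starch-to-cheese sweet spot."""
    if not 0.02 <= starch_fraction <= 0.03:
        raise ValueError("stay within the 2-3 percent range")
    starch_g = cheese_g * starch_fraction
    water_g = starch_g * 10       # mirrors the 4 g starch : 40 g water example
    return round(starch_g, 1), round(water_g, 1)

# 160 g of pecorino at 2.5 percent happens to land exactly on the 4 g / 40 g
# gel described in the article; scale up or down for your own batch.
print(cacio_e_pepe_gel(160))      # -> (4.0, 40.0)
```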

They ran the same set of experiments using trisodium citrate as an alternative stabilizer, which is widely used in the food industry as an emulsifier—including in the production of processed cheese, since it enhances smoothness and prevents unwanted clumping, exactly the properties one desires for a perfect cacio e pepe sauce. The trisodium citrate at concentrations above 2 percent worked just as well at avoiding the mozzarella phase, “though at a cost of deviating from strict culinary tradition,” the authors concluded. “However, while the sauce stabilization is more efficient, we found the taste of the cheese to be slightly blunted, likely due to the basic properties of the salt.”

The team’s next research goal is to conduct similar experiments with making pasta alla gricia—basically the same as cacio e pepe, with the addition of guanciale (cured pork cheek). “This recipe seems to be easier to perform, and we don’t know exactly why,” said co-author Daniel Maria Busiello, Terlizzi’s colleague at the Dresden Max Planck Institute. “This is one idea we might explore in the future.”

DOI: Physics of Fluids, 2025. 10.1063/5.0255841  (About DOIs).


Physics of the perfect cacio e pepe sauce Read More »


The physics of bowling strike after strike

More than 45 million people in the US are fans of bowling, with national competitions awarding millions of dollars. Bowlers usually rely on instinct and experience, earned through lots and lots of practice, to boost their strike percentage. A team of physicists has come up with a mathematical model to better predict ball trajectories, outlined in a new paper published in the journal AIP Advances. The resulting equations take into account such factors as the composition and resulting pattern of the oil used on bowling lanes, as well as the inevitable asymmetries of bowling balls and player variability.

The authors already had a strong interest in bowling. Three are regular bowlers and quite skilled at the sport; a fourth, Curtis Hooper of Loughborough University in the UK, is a coach for Team England at the European Youth Championships. Hooper has been studying the physics of bowling for several years, including an analysis of the 2017 Weber Cup, as well as papers devising mathematical models for the application of lane conditioners and oil patterns in bowling.

The calculations involved in such research are very complicated because there are so many variables that can affect a ball’s trajectory after being thrown. Case in point: the thin layer of oil that is applied to bowling lanes, which Hooper found can vary widely in volume and shape among different venues, plus the lack of uniformity in applying the layer, which creates an uneven friction surface.

Per the authors, most research to date has relied on statistically analyzing empirical data, such as a 2018 report by the US Bowling Congress that looked at data generated by 37 bowlers. (Hooper relied on ball-tracking data for his 2017 Weber Cup analysis.) A 2009 analysis showed that the optimal location for the ball to strike the headpin is about 6 centimeters off-center, while the optimal entry angle for the ball to hit is about 6 degrees. However, such an approach struggles to account for the inevitable player variability. No bowler hits their target 100 percent of the time, and per Hooper et al., while the best professionals can come within 0.1 degrees from the optimal launch angle, this slight variation can nonetheless result in a difference of several centimeters down-lane.
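
The "several centimeters" figure is easy to sanity-check. Assuming a regulation distance of roughly 18.29 meters (60 feet) from the foul line to the headpin, a figure not taken from the paper, a 0.1-degree error in launch angle translates to about 3 centimeters at the pins:

```python
# Quick check of the "0.1 degrees -> several centimeters" claim, assuming a
# 60 ft (18.29 m) foul-line-to-headpin distance (not a number from the paper).
import math

lane_length_m = 18.29
angle_error_deg = 0.1

offset_cm = 100 * lane_length_m * math.tan(math.radians(angle_error_deg))
print(f"{offset_cm:.1f} cm of drift at the pins")   # ~3.2 cm
```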

The physics of bowling strike after strike Read More »

quantum-hardware-may-be-a-good-match-for-ai

Quantum hardware may be a good match for AI

Quantum computers don’t have that sort of separation between memory and processing. While they could include some quantum memory, the data is generally housed directly in the qubits, while computation involves performing operations, called gates, directly on the qubits themselves. In fact, there has been a demonstration that, for supervised machine learning, where a system can learn to classify items after training on pre-classified data, a quantum system can outperform classical ones, even when the data being processed is housed on classical hardware.

This form of machine learning relies on what are called variational quantum circuits. These circuits are built around two-qubit gate operations that take an additional parameter, one that can be held on the classical side of the hardware and imparted to the qubits via the control signals that trigger the gate operation. You can think of this as analogous to the communications involved in a neural network, with the two-qubit gate operation equivalent to the passing of information between two artificial neurons and the parameter analogous to the weight given to the signal.
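
This isn’t the team’s circuit, just a minimal NumPy sketch of that building block: a two-qubit gate whose effect is set by a classical parameter, which a classical optimizer would adjust during training much as a neural network adjusts its weights.

```python
import numpy as np

def rzz(theta: float) -> np.ndarray:
    """Two-qubit ZZ-interaction gate exp(-i * theta/2 * Z⊗Z); theta is the classical parameter."""
    phase = np.exp(-1j * theta / 2)
    return np.diag([phase, np.conj(phase), np.conj(phase), phase])

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard on a single qubit
H0 = np.kron(H, np.eye(2))                     # Hadamard applied to the first of two qubits

theta = 0.7                                    # arbitrary example "weight" held on the classical side
state = np.zeros(4, dtype=complex)
state[0] = 1.0                                 # start in |00>
state = H0 @ rzz(theta) @ H0 @ state           # Hadamard, parameterized gate, Hadamard

# The outcome probabilities depend on theta: P(|00>) = cos^2(theta/2), P(|10>) = sin^2(theta/2),
# so tuning theta changes what the circuit computes.
print(np.round(np.abs(state) ** 2, 3))
```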

That’s exactly the system that a team from the Honda Research Institute worked on in collaboration with a quantum software company called Blue Qubit.

Pixels to qubits

The focus of the new work was mostly on how to get data from the classical world into the quantum system for characterization. But the researchers ended up testing the results on two different quantum processors.

The problem they were testing is one of image classification. The raw material was from the Honda Scenes dataset, which has images taken from roughly 80 hours of driving in Northern California; the images are tagged with information about what’s in the scene. And the question the researchers wanted the machine learning to handle was a simple one: Is it snowing in the scene?
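
The paper’s specific encoding scheme isn’t detailed above, but one common, simple way to move classical pixel data into qubits is angle encoding, where each (heavily downsampled) pixel intensity sets the rotation angle of a single qubit. A toy sketch of that idea, with all values hypothetical:

```python
import numpy as np

def angle_encode(pixels: np.ndarray) -> np.ndarray:
    """Map normalized pixel intensities (0..1) to single-qubit states via rotations.

    Each pixel p becomes cos(p * pi/2)|0> + sin(p * pi/2)|1>, one qubit per pixel.
    A real circuit would entangle these qubits afterward and measure to classify.
    """
    angles = np.asarray(pixels, dtype=float) * np.pi / 2
    return np.stack([np.cos(angles), np.sin(angles)], axis=-1)

# A toy 2x2 "image" (four normalized pixel values), standing in for a heavily
# downsampled frame from a driving scene.
tiny_image = np.array([0.0, 0.3, 0.8, 1.0])
print(angle_encode(tiny_image))
```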

Quantum hardware may be a good match for AI Read More »

fewer-beans-=-great-coffee-if-you-get-the-pour-height-right

Fewer beans = great coffee if you get the pour height right

Based on their findings, the authors recommend pouring hot water over your coffee grounds slowly to give the beans more time immersed in the water. But pour the water too slowly and the resulting jet will stick to the spout (the “teapot effect”) and there won’t be sufficient mixing of the grounds; they’ll just settle to the bottom instead, decreasing extraction yield. “If you have a thin jet, then it tends to break up into droplets,” said co-author Margot Young. “That’s what you want to avoid in these pour-overs, because that means the jet cannot mix the coffee grounds effectively.”

Smaller jet diameter impact on dynamics. Credit: E. Park et al., 2025

That’s where increasing the height from which you pour comes in. This imparts more energy from gravity, per the authors, increasing the mixing of the granular coffee grounds. But again, there’s such a thing as pouring from too great a height, causing the water jet to break apart. The ideal height is no more than 50 centimeters (about 20 inches) above the filter. The classic goosenecked tea kettle turns out to be ideal for achieving that optimal height. Future research might explore the effects of varying the grain size of the coffee grounds.
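
To put rough numbers on “more energy from gravity” (an illustration, not a calculation from the paper): ignoring the water’s speed as it leaves the spout, a falling jet hits the bed at about the free-fall speed for its drop height, roughly 1.4 m/s from 10 cm and about 3.1 m/s from the recommended maximum of 50 cm.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def jet_impact_speed(height_m: float, spout_speed_m_s: float = 0.0) -> float:
    """Approximate speed of the water jet at the coffee bed, ignoring air drag and jet thinning."""
    return math.sqrt(spout_speed_m_s ** 2 + 2 * G * height_m)

for h_cm in (10, 30, 50):
    print(f"{h_cm} cm pour height -> ~{jet_impact_speed(h_cm / 100):.1f} m/s at the grounds")
```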

Increasing extraction yields, and thereby reducing the amount of coffee grounds needed, matters because ongoing climate change is making it increasingly difficult to cultivate the most common species of coffee. “Coffee is getting harder to grow, and so, because of that, prices for coffee will likely increase in coming years,” co-author Arnold Mathijssen told New Scientist. “The idea for this research was really to see if we could help do something by reducing the amount of coffee beans that are needed while still keeping the same amount of extraction, so that you get the same strength of coffee.”

But the potential applications aren’t limited to brewing coffee. The authors note that this same liquid jet/submerged granular bed interplay is also involved in soil erosion from waterfalls, for example, as well as wastewater treatment—using liquid jets to aerate wastewater to enhance biodegradation of organic matter—and dam scouring, where the solid ground behind a dam is slowly worn away by water jets. “Although dams operate on a much larger scale, they may undergo similar dynamics, and finding ways to decrease the jet height in dams may decrease erosion and elongate dam health,” they wrote.

Physics of Fluids, 2025. DOI: 10.1063/5.0257924 (About DOIs).

Fewer beans = great coffee if you get the pour height right Read More »

first-tokamak-component-installed-in-a-commercial-fusion-plant

First tokamak component installed in a commercial fusion plant


A tokamak moves forward as two companies advance plans for stellarators.

There are a remarkable number of commercial fusion power startups, considering that it’s a technology that’s built a reputation for being perpetually beyond the horizon. Many of them focus on radically new technologies for heating and compressing plasmas, or fusing unusual combinations of isotopes. These technologies are often difficult to evaluate—they can clearly generate hot plasmas, but it’s tough to determine whether they can get hot enough, often enough to produce usable amounts of power.

On the other end of the spectrum are a handful of companies that are trying to commercialize designs that have been extensively studied in the academic world. And there have been some interesting signs of progress here. Recently, Commonwealth Fusion, which is building a demonstration tokamak in Massachusetts, started construction of the cooling system that will keep its magnets superconducting. And two companies that are hoping to build a stellarator did some important validation of their concepts.

Doing donuts

A tokamak is a donut-shaped fusion chamber that relies on intense magnetic fields to compress and control the plasma within it. A number of tokamaks have been built over the years, but the big one that is expected to produce more energy than required to run it, ITER, has faced many delays and now isn’t expected to achieve its potential until the 2040s. Back in 2015, however, some physicists calculated that high-temperature superconductors would allow ITER-style performance in a far smaller and easier-to-build package. That idea was commercialized as Commonwealth Fusion.

The company is currently trying to build an ITER equivalent: a tokamak that can achieve fusion but isn’t large enough and lacks some critical hardware needed to generate electricity from that reaction. The planned facility, SPARC, is already in progress, with most of the supporting facility in place and superconducting magnets being constructed. But in late March, the company took a major step by installing the first component of the tokamak itself, the cryostat base, which will support the hardware that keeps its magnets cool.

Alex Creely, Commonwealth Fusion’s tokamak operations director and SPARC’s chief engineer, told Ars that the cryostat’s materials have to handle temperatures in the area of 20 Kelvin and tolerate neutron exposure. Fortunately, stainless steel is still up to the task. The cryostat will also be part of a structure that has to handle an extreme temperature gradient. Creely said that it only takes about 30 centimeters to go from the hundreds of millions of degrees C of the plasma down to about 1,000° C, after which it becomes relatively simple to reach cryostat temperatures.
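
For a sense of how steep that gradient is, here’s a quick order-of-magnitude calculation based on the figures above (the plasma temperature of 10^8 °C is an assumption for illustration, not Commonwealth’s number):

```python
# Rough temperature gradient implied by the figures above.
plasma_temp_c = 1e8   # "hundreds of millions of degrees C" -- order-of-magnitude assumption
wall_temp_c = 1_000   # temperature roughly 30 cm from the plasma, per Creely
distance_m = 0.30

gradient_c_per_m = (plasma_temp_c - wall_temp_c) / distance_m
print(f"~{gradient_c_per_m:.1e} degrees C per meter")  # on the order of 3e8 degrees C per meter
```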

He said that construction is expected to wrap up about a year from now, after which there will be about a year of commissioning the hardware, with fusion experiments planned for 2027. And while ITER may be facing ongoing delays, Creely said the larger project has been critical for keeping Commonwealth on a tight schedule. Not only is most of the physics of SPARC the same as that of ITER, but some of the hardware will be as well. “We’ve learned a lot from their supply chain development,” Creely said. “So some of the same vendors that are supplying components for the ITER tokamak, we are also working with those same vendors, which has been great.”

Great in the sense that Commonwealth is now on track to see plasma well in advance of ITER. “Seeing all of this go from a bunch of sketches or boxes on slides—clip art effectively—to real metal and concrete that’s all coming together,” Creely said. “You’re transitioning from building the facility, building the plant around the tokamak to actually starting to build the tokamak itself. That is an awesome milestone.”

Seeing stars?

The plasma inside a tokamak is dynamic, meaning that it requires a lot of magnetic intervention to keep it stable, and fusion comes in pulses. There’s an alternative approach called a stellarator, which produces an extremely complex magnetic field that can support a simpler, stable plasma and steady fusion. As implemented by the Wendelstein 7-X stellarator in Germany, this meant a series of complex-shaped magnets manufactured with extremely low tolerance for deviation. But a couple of companies have decided they’re up for the challenge.

One of those, Type One Energy, has basically reached the stage that launched Commonwealth Fusion: It has made a detailed case for the physics underlying its stellarator design. In this instance, the case may even be considerably more detailed: six peer-reviewed articles in the Journal of Plasma Physics. The papers detail the structural design, the behavior of the plasma within it, handling of the helium produced by fusion, generation of tritium from the neutrons produced, and obtaining heat from the whole thing.

The company is partnering with Oak Ridge National Lab and the Tennessee Valley Authority to build a demonstration reactor on the site of a former fossil fuel power plant. (It’s also cooperating with Commonwealth on magnet development.) As with the SPARC tokamak, this will be a mix of technology demonstration and learning experience, rather than a functioning power plant.

Another company that’s pursuing a stellarator design is called Thea Energy. Brian Berzin, its CEO, told Ars that the company’s focus is on simplifying the geometry of the magnets needed for a stellarator and is using software to get them to produce an equivalent magnetic field. “The complexity of this device has always been really, really limiting,” he said, referring to the stellarator. “That’s what we’re really focused on: How can you make simpler hardware? Our way of allowing for simpler hardware is using really, really complicated software, which is something that has taken over the world.”

He said that the simplicity of the hardware will be helpful for an operational power plant, since it allows them to build multiple identical segments as spares, so things can be swapped out and replaced when maintenance is needed.

Like Commonwealth Fusion, Thea Energy is using high-temperature superconductors to build its magnets, with a flat array of smaller magnets substituting for the three-dimensional magnets used at Wendelstein. “We are able to really precisely recreate those magnetic fields required for a stellarator, but without any wiggly, complicated, precise, expensive, costly, time-consuming hardware,” Berzin said. And the company recently released a preprint of some testing with the magnet array.
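
Thea hasn’t published its optimization approach here, but the general flavor of “simpler hardware, more complicated software” can be sketched as a least-squares problem: given a response matrix that maps each small coil’s current to the magnetic field at a set of sample points, solve for the currents that best reproduce a target field. The toy version below uses random stand-in values; a real calculation would build the response matrix from the actual coil geometry.

```python
import numpy as np

rng = np.random.default_rng(0)

n_coils = 40     # small planar coils in the array (hypothetical)
n_points = 200   # points where the target field is specified (hypothetical)

# Response matrix: field contribution at each sample point per unit current in each coil.
# In practice this would come from a Biot-Savart calculation for the real coil layout.
response = rng.normal(size=(n_points, n_coils))

# Target field values at the sample points (stand-in for the stellarator's required field).
target_field = np.sin(np.linspace(0, 4 * np.pi, n_points))

# Solve for coil currents that best reproduce the target field in the least-squares sense.
currents, *_ = np.linalg.lstsq(response, target_field, rcond=None)

residual = response @ currents - target_field
print(f"RMS field error: {np.sqrt(np.mean(residual ** 2)):.3f}")
```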

Thea is also planning on building a test stellarator. In its case, however, it’s going to be using deuterium-deuterium fusion, which is much less efficient than the deuterium-tritium reaction that will be needed for a power plant. But Berzin said that the design will incorporate a layer of lithium that will form tritium when bombarded by neutrons from the stellarator. If things go according to plan, the reactor will both validate Thea’s design and serve as a tritium fuel source for the rest of the industry.

Of course, nobody will operate a fusion power plant until sometime in the next decade, probably at about the same time that we might expect some of the first small modular fission plants to be built. Given the vast expansion in renewable production that is in progress, it’s difficult to predict what the energy market will look like at that point. So these test reactors will be built in a very uncertain environment. But that uncertainty hasn’t stopped these companies from pursuing fusion.

John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.

First tokamak component installed in a commercial fusion plant Read More »