Physics


Scientists hid secret codes in light to combat video fakes

Hiding in the light

Previously, the Cornell team had figured out how to make small changes to specific pixels to tell if a video had been manipulated or created by AI. But that approach’s success depended on the creator of the video using a specific camera or AI model. Their new method, “noise-coded illumination” (NCI), addresses those and other shortcomings by hiding watermarks in the apparent noise of light sources. A small piece of software can do this for computer screens and certain types of room lighting, while off-the-shelf lamps can be coded via a small attached computer chip.

“Each watermark carries a low-fidelity time-stamped version of the unmanipulated video under slightly different lighting. We call these code videos,” Davis said. “When someone manipulates a video, the manipulated parts start to contradict what we see in these code videos, which lets us see where changes were made. And if someone tries to generate fake video with AI, the resulting code videos just look like random variations.” Because the watermark is designed to look like noise, it’s difficult to detect without knowing the secret code.
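The detection math isn’t spelled out here, but the general flavor of recovering a pseudorandom code hidden in light levels can be sketched in a few lines. The snippet below is a heavily simplified illustration rather than the Cornell team’s actual NCI algorithm; the frame stack, the secret ±1 code sequence, and every function name in it are hypothetical.

```python
# Minimal sketch (not the Cornell NCI algorithm): check whether a video's
# overall brightness fluctuations carry a secret pseudorandom code by
# correlating per-frame brightness against that code.
import numpy as np

def detect_noise_code(frames, code, threshold=0.3):
    """frames: array of shape (T, H, W) of pixel intensities.
    code: length-T pseudorandom +/-1 sequence known only to the verifier."""
    brightness = frames.reshape(len(frames), -1).mean(axis=1)
    # Remove slow lighting/exposure drift so only the fast, noise-like
    # coded component remains.
    drift = np.convolve(brightness, np.ones(15) / 15, mode="same")
    residual = brightness - drift
    residual = (residual - residual.mean()) / (residual.std() + 1e-9)
    correlation = float(np.dot(residual, code) / len(code))
    return correlation > threshold, correlation

# Toy demo: genuine footage lit by the coded source correlates strongly;
# footage generated without that lighting should correlate near zero.
rng = np.random.default_rng(0)
code = rng.choice([-1.0, 1.0], size=240)                # 240 frames of code
lit = 100 + 0.5 * code + rng.normal(0, 0.2, size=240)   # light carries the code
frames = lit[:, None, None] * np.ones((240, 4, 4))
print(detect_noise_code(frames, code))
```

In this toy setup the genuine clip clears the threshold easily, while AI-generated or re-lit footage would look like uncorrelated noise, mirroring the behavior Davis describes for the real code videos.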

The Cornell team tested their method against a broad range of manipulations: warp cuts, changes to speed and acceleration, compositing, and deepfakes, for instance. The technique proved robust under a variety of conditions: watermark signal levels below human perception; subject and camera motion; camera flash; human subjects with different skin tones; different levels of video compression; and indoor and outdoor settings.

“Even if an adversary knows the technique is being used and somehow figures out the codes, their job is still a lot harder,” Davis said. “Instead of faking the light for just one video, they have to fake each code video separately, and all those fakes have to agree with each other.” That said, Davis added, “This is an important ongoing problem. It’s not going to go away, and in fact it’s only going to get harder.”

DOI: ACM Transactions on Graphics, 2025. 10.1145/3742892  (About DOIs).



Research roundup: 7 cool science stories we almost missed


Other July stories: Solving a 150-year-old fossil mystery and the physics of tacking a sailboat.

150-year-old fossil of Palaeocampa anthrax isn’t a sea worm after all. Credit: Christian McCall

It’s a regrettable reality that there is never enough time to cover all the interesting scientific stories we come across each month. In the past, we’ve featured year-end roundups of cool science stories we (almost) missed. This year, we’re experimenting with a monthly collection. July’s list includes the discovery of the tomb of the first Maya king of Caracol in Belize, the fluid dynamics of tacking a sailboat, how to determine how fast blood was traveling when it stained cotton fabric, and how the structure of elephant ears could lead to more efficient indoor temperature control in future building designs, among other fun stories.

Tomb of first king of Caracol found

University of Houston provost and archaeologist Diane Chase in the newly discovered tomb of the first ruler of the ancient Maya city Caracol and the founder of its royal dynasty.

Credit: Caracol Archeological Project/University of Houston

Archaeologists Arlen and Diane Chase are the foremost experts on the ancient Maya city of Caracol in Belize and are helping to pioneer the use of airborne LiDAR to locate hidden structures in dense jungle, including a web of interconnected roadways and a cremation site in the center of the city’s Northeast Acropolis plaza. They have been painstakingly excavating the site since the mid-1980s. Their latest discovery is the tomb of Te K’ab Chaak, Caracol’s first ruler, who took the throne in 331 CE and founded a dynasty that lasted more than 460 years.

This is the first royal tomb the husband-and-wife team has found in their 40+ years of excavating the Caracol site. Te K’ab Chaak’s tomb (containing his skeleton) was found at the base of a royal family shrine, along with pottery vessels, carved bone artifacts, jadeite jewelry, and a mosaic jadeite death mask. The Chases estimate that the ruler likely stood about 5’7″ tall and was probably quite old when he died, given his lack of teeth. The Chases are in the process of reconstructing the death mask and conducting DNA and stable isotope analysis of the skeleton.

How blood splatters on clothing

Cast-off blood stain pattern

Credit: Jimmy Brown/CC BY 2.0

Analyzing blood splatter patterns is a key focus in forensic science, and physicists have been offering their expertise for several years now, including in two 2019 studies on splatter patterns from gunshot wounds. The latest insights gleaned from physics concern the distinct ways in which blood stains cotton fabrics, according to a paper published in Forensic Science International.

Blood is a surprisingly complicated fluid, in part because the red blood cells in human blood can form long chains, giving it the consistency of sludge. And blood starts to coagulate immediately once it leaves the body. Blood is also viscoelastic: not only does it deform slowly when exposed to an external force, but once that force has been removed, it will return to its original configuration. Add in coagulation and the type of surface on which it lands, and correctly interpreting the resulting spatter patterns becomes incredibly difficult.

The co-authors of the July study splashed five different fabric surfaces with pig’s blood at varying velocities, capturing the action with high-speed cameras. They found that when a blood stain has “fingers” spreading out from the center, the more fingers there are, the faster the blood was traveling when it struck the fabric. And the faster the blood was moving, the more “satellite droplets” there were—tiny stains surrounding the central stain. Finally, it’s much easier to estimate the velocity of blood splatter on plain-woven cotton than on other fabrics like twill. The researchers plan to extend future work to include a wider variety of fabrics, weaves, and yarns.

DOI: Forensic Science International, 2025. 10.1016/j.forsciint.2025.112543  (About DOIs).

Offshore asset practices of the uber-rich

The uber-rich aren’t like the rest of us in so many ways, including their canny exploitation of highly secretive offshore financial systems to conceal their assets and/or identities. Researchers at Dartmouth have used machine learning to analyze two public databases and identified distinct patterns in the strategies oligarchs and billionaires in 65 different countries employ when squirreling away offshore assets, according to a paper published in the journal PLoS ONE.

One database tracks offshore finance, while the other rates different countries on their “rule of law.” This enabled the team to study key metrics like how much of their assets elites move offshore, how much they diversify, and how much they make use of “blacklisted” offshore centers that are not part of the mainstream financial system. The researchers found three distinct patterns, all tied to where an oligarch comes from.

Billionaires from authoritarian countries are more likely to diversify their hidden assets across many different centers—a “confetti strategy”—perhaps because these are countries likely to exact political retribution. Others, from countries with effective government regulations—or where there is a pronounced lack of civil rights—are more likely to employ a “concealment strategy” that includes more blacklisted jurisdictions, relying more on bearer shares that protect their anonymity. Those elites most concerned about corruption and/or having their assets seized typically employ a hybrid strategy.

The work builds on an earlier 2023 study concluding that issuing sanctions on individual oligarchs in Russia, China, the US, and Hong Kong is less effective than targeting the small, secretive network of financial experts who manage that wealth on behalf of the oligarchs. That’s because sanctioning just one wealth manager effectively takes out several oligarchs at once, per the authors.

DOI: PLoS ONE, 2025. 10.1371/journal.pone.0326228  (About DOIs).

Medieval remedies similar to TikTok trends

Medieval manuscripts like the Cotton MS Vitellius C III highlight uses for herbs that reflect modern-day wellness trends.

Credit: The British Library

The Middle Ages are stereotypically described as the “Dark Ages,” with a culture driven by superstition—including its medical practices. But a perusal of the hundreds of medical manuscripts collected in the online Corpus of Early Medieval Latin Medicine (CEMLM) reveals that in many respects, medical practices were much more sophisticated; some of the remedies are not much different from alternative medicine remedies touted by TikTok influencers today. That certainly doesn’t make them medically sound, but it does suggest we should perhaps not be too hasty in who we choose to call backward and superstitious.

Per Binghamton University historian Meg Leja, people of the medieval era were not “anti-science.” In fact, they were often quite keen on learning from the natural world. And their health practices, however dubious they might appear to us—lizard shampoo, anyone?—were largely based on the best knowledge available at the time. There are detox cleanses and topical ointments, such as crushing the stone of a peach, mixing it with rose oil, and smearing it on one’s forehead to relieve migraine pain. (Rose oil may actually be an effective migraine pain reliever.) The collection is well worth perusing; pair it with the Wellcome-funded Curious Cures in Cambridge Libraries to learn even more about medieval medical recipes.

Physics of tacking a sailboat

The Courant Institute's Christiana Mavroyiakoumou, above at Central Park's Conservatory Water with model sailboats

Credit: Jonathan King/NYU

Possibly the most challenging basic move for beginner sailors is learning how to tack to sail upwind. Done correctly, the sail will flip around into a mirror image of its previous shape. And in competitive sailboat racing, a bad tack can lose the race. So physicists at the University of Michigan decided to investigate the complex fluid dynamics at play to shed more light on the tricky maneuver, according to a paper published in the journal Physical Review Fluids.

After modeling the maneuver and conducting numerical simulations, the physicists concluded that there are three primary factors that determine a successful tack: the stiffness of the sail, its tension before the wind hits, and the final sail angle in relation to the direction of the wind. Ideally, one wants a less flexible, less curved sail with high tension prior to hitting the wind and to end up with a 20-degree final sail angle. Other findings: It’s harder to flip a slack sail when tacking, and how fast one manages to flip the sail depends on the sail’s mass and the speed and acceleration of the turn.

DOI: Physical Review Fluids, 2025. 10.1103/37xg-vcff  (About DOIs).

Elephant ears inspire building design

African bush elephant with ears spread in a threat or attentive position and visible blood vessels

Maintaining a comfortable indoor temperature constitutes the largest fraction of energy usage for most buildings, with the surfaces of walls, windows, and ceilings contributing to roughly 63 percent of energy loss. Engineers at Drexel University have figured out how to make surfaces that help rather than hamper efforts to maintain indoor temperatures: using so-called phase-change materials that can absorb and release thermal energy as needed as they shift between liquid and solid states. They described the breakthrough in a paper published in the Journal of Building Engineering.

The Drexel group previously developed a self-warming concrete using a paraffin-based material, similar to the stuff used to make candles. The trick this time around, they found, was to create the equivalent of a vascular network within cement-based building materials. They used a printed polymer matrix to create a grid of channels in the surface of concrete and filled those channels with the same paraffin-based material. When temperatures drop, the material turns into a solid and releases heat energy; as temperatures rise, it shifts its phase to a liquid and absorbs heat energy.
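The appeal of a phase-change material comes down to latent heat: melting or solidifying exchanges a large amount of energy at a nearly constant temperature. As a rough illustration using typical handbook values rather than figures from the Drexel paper,

\[
Q = m\,L, \qquad L_{\text{paraffin}} \approx 2 \times 10^{5}\ \mathrm{J/kg},
\]

so each kilogram of paraffin in the channels can absorb or release on the order of 200 kJ as it changes phase, compared with roughly 1 kJ for each degree of temperature change in a kilogram of ordinary concrete.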

The group tested several different configurations and found that the most effective combination of strength and thermal regulation was realized with a diamond-shaped grid, which boasted the most vasculature surface area. This configuration successfully slowed the cooling and heating of its surface to between 1 and 1.2 degrees Celsius per hour, while holding up against stretching and compression tests. The structure is similar to that of jackrabbit and elephant ears, which have extensive vascular networks to help regulate body temperature.

DOI: Journal of Building Engineering, 2025. 10.1016/j.jobe.2025.112878  (About DOIs).

ID-ing a century-old museum specimen

Neotype of Palaeocampa anthrax from the Mazon Creek Lagerstätte and rediscovered in the Invertebrate Paleontology collection of the MCZ.

Credit: Richard J. Knecht

Natural history museums have lots of old specimens in storage, and revisiting those specimens can sometimes lead to new discoveries. That’s what happened to University of Michigan evolutionary biologist Richard J. Knecht as he was poring over a collection at Harvard’s Museum of Comparative Zoology while a grad student there. One of the fossils, originally discovered in 1865, was labeled a millipede. But Knecht immediately recognized it as a type of lobopod, according to a paper published in the journal Communications Biology. It’s the youngest lobopod yet found, and this particular species also marks an evolutionary leap since it’s the first known lobopod to be non-marine.

Lobopods are the evolutionary ancestors of arthropods (insects, spiders, and crustaceans), and their fossils are common in Paleozoic seabeds. Apart from tardigrades and velvet worms, however, they were thought to be confined to the oceans. But Palaeocampa anthrax has legs on every trunk segment, as well as almost 1,000 bristly spines covering its body, with orange halos at their tips. Infrared spectroscopy revealed traces of fossilized molecules—likely a chemical that emanated from the spine tips. Since any chemical defense would just disperse in water, limiting its effectiveness, Knecht concluded that Palaeocampa anthrax was most likely amphibious rather than solely aquatic.

DOI: Communications Biology, 2025. 10.1038/s42003-025-08483-0  (About DOIs).


Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.



Peacock feathers can emit laser beams

Peacock feathers are greatly admired for their bright iridescent colors, but it turns out they can also emit laser light when dyed multiple times, according to a paper published in the journal Scientific Reports. Per the authors, it’s the first example of a biolaser cavity within the animal kingdom.

As previously reported, the bright iridescent colors in things like peacock feathers and butterfly wings don’t come from any pigment molecules but from how they are structured. The scales of chitin (a polysaccharide common to insects) in butterfly wings, for example, are arranged like roof tiles. Essentially, they form a diffraction grating, except photonic crystals only produce certain colors, or wavelengths, of light, while a diffraction grating will produce the entire spectrum, much like a prism.

In the case of peacock feathers, it’s the regular, periodic nanostructures of the barbules—fiber-like components composed of ordered melanin rods coated in keratin—that produce the iridescent colors. Different colors correspond to different spacing of the barbules.

Both are naturally occurring examples of what physicists call photonic crystals. Also known as photonic bandgap materials, photonic crystals are “tunable,” which means they are precisely ordered in such a way as to block certain wavelengths of light while letting others through. Alter the structure by changing the size of the tiles, and the crystals become sensitive to a different wavelength. (In fact, the rainbow weevil can control both the size of its scales and how much chitin is used to fine-tune those colors as needed.)
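The article doesn’t give the relation, but the standard constructive-interference condition for a periodic structure captures why spacing sets the color:

\[
2\,n\,d\cos\theta = m\,\lambda, \qquad m = 1, 2, \ldots
\]

Here d is the period of the structure (the barbule’s melanin-rod lattice, or the tile-like spacing of butterfly scales), n is the effective refractive index, θ is the angle inside the material, and λ is the reflected wavelength; a larger period reinforces longer, redder wavelengths, while a smaller one favors blues.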

Even better (from an applications standpoint), the perception of color doesn’t depend on the viewing angle. And the scales are not just for aesthetics; they help shield the insect from the elements. There are several types of manmade photonic crystals, but gaining a better and more detailed understanding of how these structures grow in nature could help scientists design new materials with similar qualities, such as iridescent windows, self-cleaning surfaces for cars and buildings, or even waterproof textiles. Paper currency could incorporate encrypted iridescent patterns to foil counterfeiters.



Merger of two massive black holes is one for the record books

Physicists with the LIGO/Virgo/KAGRA collaboration have detected the gravitational wave signal (dubbed GW231123) of the most massive merger between two black holes yet observed, resulting in a new black hole that is 225 times more massive than our Sun. The results were presented at the Edoardo Amaldi Conference on Gravitational Waves in Glasgow, Scotland.

The LIGO/Virgo/KAGRA collaboration searches the universe for gravitational waves produced by the mergers of black holes and neutron stars. LIGO detects gravitational waves via laser interferometry, using high-powered lasers to measure tiny changes in the distance between two objects positioned kilometers apart. LIGO has detectors in Hanford, Washington, and in Livingston, Louisiana. A third detector in Italy, Advanced Virgo, came online in 2016. In Japan, KAGRA is the first gravitational-wave detector in Asia and the first to be built underground. Construction began on LIGO-India in 2021, and physicists expect it will turn on sometime after 2025.
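For a sense of scale (these are standard figures for the detectors, not numbers specific to GW231123): gravitational-wave signals are quoted as a dimensionless strain h, the fractional change in arm length,

\[
h = \frac{\Delta L}{L}, \qquad h \sim 10^{-21} \;\Rightarrow\; \Delta L \sim 10^{-21} \times 4\ \mathrm{km} \approx 4 \times 10^{-18}\ \mathrm{m},
\]

roughly a thousandth of a proton’s diameter over LIGO’s 4-kilometer arms, which is why the interferometry has to be so exquisitely sensitive.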

To date, the collaboration has detected dozens of merger events since its first Nobel Prize-winning discovery. Early detected mergers involved either two black holes or two neutron stars.  In 2021, LIGO/Virgo/KAGRA confirmed the detection of two separate “mixed” mergers between black holes and neutron stars.

A tour of Virgo. Credit: EGO-Virgo

LIGO/Virgo/KAGRA started its fourth observing run in 2023, and by the following year had announced the detection of a signal indicating a merger between two compact objects, one of which was most likely a neutron star. The other had an intermediate mass—heavier than a neutron star and lighter than a black hole. It was the first gravitational-wave detection of a mass-gap object paired with a neutron star and hinted that the mass gap might be less empty than astronomers previously thought.



Microsoft lays out its path to useful quantum computing


Its platform needs error correction that works with different hardware.

Some of the optical hardware needed to make Atom Computing’s machines work. Credit: Atom Computing

On Thursday, Microsoft’s Azure Quantum group announced that it has settled on a plan for getting error correction on quantum computers. While the company pursues its own hardware efforts, the Azure team is a platform provider that currently gives access to several distinct types of hardware qubits. So it has chosen a scheme that is suitable for several different quantum computing technologies (notably excluding its own). The company estimates that the system it has settled on can take hardware qubits with an error rate of about 1 in 1,000 and use them to build logical qubits where errors are instead 1 in 1 million.

While it’s describing the scheme in terms of mathematical proofs and simulations, it hasn’t shown that it works using actual hardware yet. But one of its partners, Atom Computing, is accompanying the announcement with a description of how its machine is capable of performing all the operations that will be needed.

Arbitrary connections

There are similarities and differences between what the company is talking about today and IBM’s recent update of its roadmap, which described another path to error-resistant quantum computing. In IBM’s case, it makes both the software stack that will perform the error correction and the hardware needed to implement it. It uses chip-based hardware, with the connections among qubits mediated by wiring that’s laid out when the chip is fabricated. Since error correction schemes require a very specific layout of connections among qubits, once IBM decides on a quantum error correction scheme, it can design chips with the wiring needed to implement that scheme.

Microsoft’s Azure, in contrast, provides its users with access to hardware from several different quantum computing companies, each based on different technology. Some of them, like Rigetti and Microsoft’s own planned processor, are similar to IBM’s in that they have a fixed layout during manufacturing, and so can only handle codes that are compatible with their wiring layout. But others, such as those provided by Quantinuum and Atom Computing, store their qubits in atoms that can be moved around and connected in arbitrary ways. Those arbitrary connections allow very different types of error correction schemes to be considered.

It can be helpful to think of this using an analogy to geometry. A chip is like a plane, where it’s easiest to form the connections needed for error correction among neighboring qubits; longer connections are possible, but not as easy. Things like trapped ions and atoms provide a higher-dimensional system where far more complicated patterns of connections are possible. (Again, this is an analogy. IBM is using three-dimensional wiring in its processing chips, while Atom Computing stores all its atoms in a single plane.)

Microsoft’s announcement is focused on the sorts of processors that can form the more complicated, arbitrary connections. And, well, it’s taking full advantage of that, building an error correction system with connections that form a four-dimensional hypercube. “We really have focused on the four-dimensional codes due to their amenability to current and near term hardware designs,” Microsoft’s Krysta Svore told Ars.

The code not only describes the layout of the qubits and their connections, but also the purpose of each hardware qubit. Some of them are used to hang on to the value of the logical qubit(s) stored in a single block of code. Others are used for what are called “weak measurements.” These measurements tell us something about the state of the ones that are holding on to the data—not enough to know their values (a measurement that would end the entanglement), but enough to tell if something has changed. The details of the measurement allow corrections to be made that restore the original value.
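A toy classical analogue makes the idea concrete. The sketch below is not Microsoft’s 4D code, just a three-bit repetition code in which parity checks play the role of those weak measurements: they reveal where a flip occurred without revealing the stored value itself.

```python
# Toy classical analogue (not Microsoft's 4D code): a 3-bit repetition code.
# The parity checks locate a single bit-flip error without revealing whether
# the encoded value is 0 or 1 -- the same division of labor as the "weak
# measurements" described above.
def syndrome(bits):
    """Return the parity checks (b0 XOR b1, b1 XOR b2) for a 3-bit codeword."""
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

def correct(bits):
    flip = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(syndrome(bits))
    if flip is not None:      # a syndrome of (0, 0) means no error detected
        bits[flip] ^= 1
    return bits

codeword = [1, 1, 1]          # logical "1" stored redundantly
codeword[2] ^= 1              # a single bit-flip error sneaks in
print(syndrome(codeword))     # (0, 1): the error is located, value untouched
print(correct(codeword))      # [1, 1, 1]: original codeword restored
```

Note that the parities are identical for [0, 0, 0] and [1, 1, 1], so reading them says nothing about the stored logical value, only whether and where something changed.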

Microsoft’s error correction system is described in a preprint that the company recently released. It includes a family of related geometries, each of which provides different degrees of error correction, based on how many simultaneous errors they can identify and fix. The descriptions are about what you’d expect for complicated math and geometry—“Given a lattice Λ with an HNF L, the code subspace of the 4D geometric code C_Λ is spanned by the second homology H₂(T⁴_Λ, F₂) of the 4-torus T⁴_Λ”—but the gist is that all of them convert collections of physical qubits into six logical qubits that can be error corrected.

The more hardware qubits you add to host those six logical qubits, the greater error protection each of them gets. That becomes important because some more sophisticated algorithms will need more than the one-in-a-million error protection that Svore said Microsoft’s favored version will provide. That favorite is what’s called the Hadamard version, which bundles 96 hardware qubits to form six logical qubits, and has a distance of eight (distance being a measure of how many simultaneous errors it can tolerate). You can compare that with IBM’s announcement, which used 144 hardware qubits to host 12 logical qubits at a distance of 12 (so, more hardware, but more logical qubits and greater error resistance).
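A quick back-of-the-envelope comparison of the two schemes, using only the numbers quoted above and the standard [[n, k, d]] shorthand for a code with n physical qubits, k logical qubits, and distance d:

```python
# Overhead comparison using the figures quoted in the article:
# Microsoft's "Hadamard" 4D code vs. the IBM code it is compared against.
codes = {
    "Microsoft 4D 'Hadamard' code": (96, 6, 8),      # [[n, k, d]]
    "IBM bivariate bicycle code":   (144, 12, 12),
}

for name, (n, k, d) in codes.items():
    print(f"{name}: {n} physical qubits -> {k} logical qubits "
          f"({n // k} physical per logical), distance {d}")
```

So IBM’s code spends more hardware in total but slightly less per logical qubit, while reaching a higher distance, which is the trade-off the paragraph above describes.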

The other good stuff

On its own, a description of the geometry is not especially exciting. But Microsoft argues that this family of error correction codes has a couple of significant advantages. “All of these codes in this family are what we call single shot,” Svore said. “And that means that, with a very low constant number of rounds of getting information about the noise, one can decode and correct the errors. This is not true of all codes.”

Limiting the number of measurements needed to detect errors is important. For starters, measurements themselves can create errors, so making fewer makes the system more robust. In addition, in things like neutral atom computers, the atoms have to be moved to specific locations where measurements take place, and the measurements heat them up so that they can’t be reused until cooled. So, limiting the measurements needed can be very important for the performance of the hardware.

The second advantage of this scheme, as described in the draft paper, is the fact that you can perform all the operations needed for quantum computing on the logical qubits these schemes host. Just like in regular computers, all the complicated calculations performed on a quantum computer are built up from a small number of simple logical operations. But not every possible logical operation works well with any given error correction scheme. So it can be non-trivial to show that an error correction scheme is compatible with enough of the small operations to enable universal quantum computation.

So, the paper describes how some logical operations can be performed relatively easily, while a few others require manipulations of the error correction scheme in order to work. (These manipulations have names like lattice surgery and magic state distillation, which are good signs that the field doesn’t take itself that seriously.)

So, in sum, Microsoft feels that it has identified an error correction scheme that is fairly compact, can be implemented efficiently on hardware that stores qubits in photons, atoms, or trapped ions, and enables universal computation. What it hasn’t done, however, is show that it actually works. And that’s because it simply doesn’t have the hardware right now. Azure is offering trapped-ion machines from IonQ and Quantinuum, but these top out at 56 qubits—well below the 96 needed for their favored version of these 4D codes. The largest machine it has access to is a 100-qubit system from a company called PASQAL, which barely fits the 96 qubits needed, leaving no room for error.

While it should be possible to test smaller versions of codes in the same family, the Azure team has already demonstrated its ability to work with error correction codes based on hypercubes, so it’s unclear whether there’s anything to gain from that approach.

More atoms

Instead, it appears to be waiting for another partner, Atom Computing, to field its next-generation machine, one it’s designing in partnership with Microsoft. “This first generation that we are building together between Atom Computing and Microsoft will include state-of-the-art quantum capabilities, will have 1,200 physical qubits,” Svore said. “And then the next upgrade of that machine will have upwards of 10,000. And so you’re looking at then being able to go to upwards of a hundred logical qubits with deeper and more reliable computation available.”

So, today’s announcement was accompanied by an update on progress from Atom Computing, focusing on a process called “midcircuit measurement.” Normally, during quantum computing algorithms, you have to resist performing any measurements of the value of qubits until the entire calculation is complete. That’s because quantum calculations depend on things like entanglement and each qubit being in a superposition between its two values; measurements can cause all that to collapse, producing definitive values and ending entanglement.

Quantum error correction schemes, however, require that some of the hardware qubits undergo weak measurements multiple times while the computation is in progress. Those are quantum measurements taking place in the middle of a computation—midcircuit measurements, in other words. To show that its hardware will be up to the task that Microsoft expects of it, the company decided to demonstrate mid-circuit measurements on qubits implementing a simple error correction code.

The process reveals a couple of notable features that are specific to doing this with neutral atoms. To begin with, the atoms being used for error correction have to be moved to a location—the measurement zone—where they can be measured without disturbing anything else. Then the measurement typically heats up the atoms slightly, meaning they have to be cooled back down afterward. Neither of these processes is perfect, and so sometimes an atom gets lost and needs to be replaced with one from a reservoir of spares. Finally, the atom’s value needs to be reset, and it has to be sent back to its place in the logical qubit.

Testing revealed that about 1 percent of the atoms get lost each cycle, but the system successfully replaces them. In fact, they set up a system where the entire collection of atoms is imaged during the measurement cycle, and any atom that goes missing is identified by an automated system and replaced.

Overall, without all these systems in place, the fidelity of a qubit is about 98 percent in this hardware. With error correction turned on, even this simple logical qubit saw its fidelity rise to over 99.5 percent. All of which suggests the company’s next computer should be up to some significant tests of Microsoft’s error correction scheme.

Waiting for the lasers

The key questions are when that machine will be released and when its successor, which should be capable of performing some real calculations, will follow. Those questions are challenging to answer because, more so than some other quantum computing technologies, neutral atom computing is dependent on something that’s not made by the people who build the computers: lasers. Everything about this system—holding atoms in place, moving them around, measuring, performing manipulations—is done with a laser. The lower the noise of the laser (in terms of things like frequency drift and energy fluctuations), the better the system will perform.

So, while Atom can explain its needs to its suppliers and work with them to get things done, it has less control over its fate than some other companies in this space.


John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.



IBM now describing its first error-resistant quantum compute system


Company is moving past focus on qubits, shifting to functional compute units.

A rendering of what IBM expects will be needed to house a Starling quantum computer. Credit: IBM

On Tuesday, IBM released its plans for building a system that should push quantum computing into entirely new territory: a system that can perform useful calculations while catching and fixing errors, and that will be utterly impossible to model using classical computing methods. The hardware, which will be called Starling, is expected to be able to perform 100 million operations without error on a collection of 200 logical qubits. And the company expects to have it available for use in 2029.

Perhaps just as significant, IBM is also committing to a detailed description of the intermediate steps to Starling. These include a number of processors that will be configured to host a collection of error-corrected qubits, essentially forming a functional compute unit. This marks a major transition for the company, as it involves moving away from talking about collections of individual hardware qubits and focusing instead on units of functional computational hardware. If all goes well, it should be possible to build Starling by chaining a sufficient number of these compute units together.

“We’re updating [our roadmap] now with a series of deliverables that are very precise,” IBM VP Jay Gambetta told Ars, “because we feel that we’ve now answered basically all the science questions associated with error correction and it’s becoming more of a path towards an engineering problem.”

New architectures

Error correction on quantum hardware involves entangling a group of qubits in a way that distributes one or more quantum bit values among them and includes additional qubits that can be used to check the state of the system. It can be helpful to think of these as data and measurement qubits. Performing weak quantum measurements on the measurement qubits produces what’s called “syndrome data,” which can be interpreted to determine whether anything about the data qubits has changed (indicating an error) and how to correct it.

There are lots of potential ways to arrange different combinations of data and measurement qubits for this to work, each referred to as a code. But, as a general rule, the more hardware qubits committed to the code, the more robust it will be to errors, and the more logical qubits that can be distributed among its hardware qubits.

Some quantum hardware, like that based on trapped ions or neutral atoms, is relatively flexible when it comes to hosting error-correction codes. The hardware qubits can be moved around so that any two can be entangled, so it’s possible to adopt a huge range of configurations, albeit at the cost of the time spent moving atoms around. IBM’s technology is quite different. It relies on qubits made of superconducting electronics laid out on a chip, with entanglement mediated by wiring that runs between qubits. The layout of this wiring is set during the chip’s manufacture, and so the chip’s design commits it to a limited number of potential error-correction codes.

Unfortunately, this wiring can also enable crosstalk between neighboring qubits, causing them to lose their state. To avoid this, existing IBM processors have their qubits wired in what they term a “heavy hex” configuration, named for its hexagonal arrangements of connections among its qubits. This has worked well to keep the error rate of its hardware down, but it also poses a challenge, since IBM has decided to go with an error-correction code that’s incompatible with the heavy hex geometry.

A couple of years back, an IBM team described a compact error correction code called a low-density parity check (LDPC) code. This requires a square grid of nearest-neighbor connections among its qubits, as well as wiring to connect qubits that are relatively distant on the chip. To get its chips and error-correction scheme in sync, IBM has made two key advances. The first is in its chip packaging, which now uses several layers of wiring sitting above the hardware qubits to enable all of the connections needed for the LDPC code.

We’ll see that first in a processor called Loon that’s on the company’s developmental roadmap. “We’ve already demonstrated these three things: high connectivity, long-range couplers, and couplers that break the plane [of the chip] and connect to other qubits,” Gambetta said. “We have to combine them all as a single demonstration showing that all these parts of packaging can be done, and that’s what I want to achieve with Loon.” Loon will be made public later this year.


On the left, the simple layout of the connections in a current-generation Heron processor. At right, the complicated web of connections that will be present in Loon. Credit: IBM

The second advance IBM has made is to eliminate the crosstalk that the heavy hex geometry was used to minimize, so heavy hex will be going away. “We are releasing this year a bird for near-term experiments that is a square array that has almost zero crosstalk,” Gambetta said, “and that is Nighthawk.” The more densely connected qubits cut the overhead needed to perform calculations by a factor of 15, Gambetta told Ars.

Nighthawk is a 2025 release on a parallel roadmap that you can think of as user-facing. Iterations on its basic design will be released annually through 2028, each enabling more operations without error (going from 5,000 gate operations this year to 15,000 in 2028). Each individual Nighthawk processor will host 120 hardware qubits, but 2026 will see three of them chained together and operating as a unit, providing 360 hardware qubits. That will be followed in 2027 by a machine with nine linked Nighthawk processors, boosting the hardware qubit count to over 1,000.

Riding the bicycle

The real future of IBM’s hardware, however, will be happening over on the developmental line of processors, where talk about hardware qubit counts will become increasingly irrelevant. In a technical document released today, IBM is describing the specific LDPC code it will be using, termed a bivariate bicycle code due to some cylindrical symmetries in its details that vaguely resemble bicycle wheels. The details of the connections matter less than the overall picture of what it takes to use this error code in practice.

IBM describes two implementations of this form of LDPC code. In the first, 144 hardware qubits are arranged so that they play host to 12 logical qubits and all of the measurement qubits needed to perform error checks. The standard measure of a code’s ability to catch and correct errors is called its distance, and in this case, the distance is 12. As an alternative, they also describe a code that uses 288 hardware qubits to host the same 12 logical qubits but boost the distance to 18, meaning it’s more resistant to errors. IBM will make one of these collections of logical qubits available as a Kookaburra processor in 2026, which will use them to enable stable quantum memory.
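In textbook coding-theory terms (a general relation, not something specific to IBM’s codes), a code with distance d can correct up to

\[
t = \left\lfloor \frac{d-1}{2} \right\rfloor
\]

arbitrary errors, so d = 12 gives t = 5 and d = 18 gives t = 8. The 288-qubit variant therefore doubles the physical-qubit overhead per logical qubit, from 12 to 24, in exchange for tolerating eight simultaneous errors rather than five.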

The follow-on will bundle these with a handful of additional qubits that can produce quantum states that are needed for some operations. Those, plus hardware needed for the quantum memory, form a single, functional computation unit, built on a single chip, that is capable of performing all the operations needed to implement any quantum algorithm.

That will appear with the Cockatoo chip, which will also enable multiple processing units to be linked on a single bus, allowing the logical qubit count to grow beyond 12. (The company says that one of the dozen logical qubits in each unit will be used to mediate entanglement with other units and so won’t be available for computation.) That will be followed by the first test versions of Starling, which will allow universal computations on a limited number of logical qubits spread across multiple chips.

Separately, IBM is releasing a document that describes a key component of the system that will run on classical computing hardware. Full error correction requires evaluating the syndrome data derived from the state of all the measurement qubits in order to determine the state of the logical qubits and whether any corrections need to be made. As the complexity of the logical qubits grows, the computational burden of evaluating the syndrome data grows with it. If this evaluation can’t be executed in real time, then it becomes impossible to perform error-corrected calculations.

To address this, IBM has developed a message-passing decoder that can perform parallel evaluations of the syndrome data. The system explores more of the solution space by a combination of randomizing the weight given to the memory of past solutions and by handing any seemingly non-optimal solutions on to new instances for additional evaluation. The key thing is that IBM estimates that this can be run in real time using FPGAs, ensuring that the system works.

A quantum architecture

There are a lot more details beyond those, as well. Gambetta described the linkage between each computational unit—IBM is calling it a Universal Bridge—which requires one microwave cable for each code distance of the logical qubits being linked. (In other words, a distance 12 code would need 12 microwave-carrying cables to connect each chip.) He also said that IBM is developing control hardware that can operate inside the refrigeration hardware, based on what they’re calling “cold CMOS,” which is capable of functioning at 4 Kelvin.

The company is also releasing renderings of what it expects Starling to look like: a series of dilution refrigerators, all connected by a single pipe that contains the Universal Bridge. “It’s an architecture now,” Gambetta said. “I have never put details in the roadmap that I didn’t feel we could hit, and now we’re putting a lot more details.”

The striking thing to me about this is that it marks a shift away from a focus on individual qubits, their connectivity, and their error rates. The hardware error rates are now good enough (4 x 10⁻⁴) for this to work, although Gambetta felt that a few more improvements should be expected. And connectivity will now be directed exclusively toward creating a functional computational unit.

That said, there’s still a lot of space beyond Starling on IBM’s roadmap. The 200 logical qubits it promises will be enough to handle some problems, but not enough to perform the complex algorithms needed to do things like break encryption. That will need to wait for something closer to Blue Jay, a 2033 system that IBM expects will have 2,000 logical qubits. And, as of right now, it’s the only thing listed beyond Starling.




Startup puts a logical qubit in a single piece of hardware

A bit over a year ago, Nord Quantique used a similar setup to show that it could identify the most common form of error in these devices, one in which the system loses one of its photons. “We can store multiple microwave photons into each of these cavities, and the fact that we have redundancy in the system comes exactly from this,” said Nord Quantique’s CTO, Julien Camirand Lemyre. However, this system was unable to handle many of the less common errors that might also occur.

This time around, the company is showing that it can get an actual logical qubit into a variant of the same hardware. In the earlier version of its equipment, the resonator cavity had a single post and supported a single frequency. In the newer iteration, there were two posts and two frequencies. Each of those frequencies creates its own quantum resonator in the same cavity, with its own set of modes. “It’s this ensemble of photons inside this cavity that creates the logical qubit,” Lemyre told Ars.

The additional quantum information that can now be stored in the system enables it to identify more complex errors than the loss of a photon.

Catching, but not fixing errors

The company did two experiments with this new hardware. First, it ran multiple rounds of error detection on data stored in the logical qubit, essentially testing its ability to act like a quantum memory and retain the information stored there. Without correcting errors, the system rapidly decayed, with an error probability in each round of measurement of about 12 percent. By the time the system reached the 25th measurement, almost every instance had already encountered an error.
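As a rough consistency check, treating the roughly 12 percent per-round error probability as independent from one round to the next,

\[
P(\text{error-free after 25 rounds}) \approx (1 - 0.12)^{25} \approx 0.04,
\]

so only about 4 percent of runs would be expected to survive 25 rounds cleanly, in line with the observation that almost every instance had failed by that point.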

The second time through, the company repeated the process, discarding any instances in which an error occurred. In almost every instance, that meant the results were discarded long before they got through two dozen rounds of measurement. But at these later stages, none of the remaining instances were in an erroneous state. That indicates that a successful correction of the errors—something the team didn’t try—would be able to fix all the detected problems.



Research roundup: 7 stories we almost missed


Ping-pong bots, drumming chimps, picking styles of two jazz greats, and an ancient underground city’s soundscape

Time lapse photos show a new ping-pong-playing robot performing a top spin. Credit: David Nguyen, Kendrick Cancio and Sangbae Kim

It’s a regrettable reality that there is never time to cover all the interesting scientific stories we come across each month. In the past, we’ve featured year-end roundups of cool science stories we (almost) missed. This year, we’re experimenting with a monthly collection. May’s list includes a nifty experiment to make a predicted effect of special relativity visible; a ping-pong playing robot that can return hits with 88 percent accuracy; and the discovery of the rare genetic mutation that makes orange cats orange, among other highlights.

Special relativity made visible

The Terrell-Penrose effect: Fast objects appear rotated

Credit: TU Wien

Perhaps the most well-known features of Albert Einstein’s special theory of relativity are time dilation and length contraction. In 1959, two physicists predicted another feature of relativistic motion: an object moving near the speed of light should also appear to be rotated. It had not been possible to demonstrate this experimentally, however—until now. Physicists at the Vienna University of Technology figured out how to reproduce this rotational effect in the lab using laser pulses and precision cameras, according to a paper published in the journal Communications Physics.

They found their inspiration in art: an earlier project on ultrafast photography and slow light, a collaboration between artist Enar de Dios Rodriguez, TU Wien, and the University of Vienna. For this latest research, they used objects shaped like a cube and a sphere and moved them around the lab while zapping them with ultrashort laser pulses, recording the flashes with a high-speed camera.

Getting the timing just right effectively mimics a speed of light of just 2 meters per second. After photographing the objects many times using this method, the team combined the still images into a single image. The results: the cube looked twisted and the sphere’s north pole was in a different location—a demonstration of the rotational effect predicted back in 1959.

DOI: Communications Physics, 2025. 10.1038/s42005-025-02003-6  (About DOIs).

Drumming chimpanzees

A chimpanzee feeling the rhythm. Credit: Current Biology/Eleuteri et al., 2025.

Chimpanzees are known to “drum” on the roots of trees as a means of communication, often combining that action with what are known as “pant-hoot” vocalizations (see above video). Scientists have found that the chimps’ drumming exhibits key elements of musical rhythm much as human drumming does, specifically non-random timing and isochrony, according to a paper published in the journal Current Biology. And chimps from different geographical regions have different drumming rhythms.

Back in 2022, the same team observed that individual chimps had unique styles of “buttress drumming,” which served as a kind of communication, letting others in the same group know their identity, location, and activity. This time around they wanted to know if this was also true of chimps living in different groups and whether their drumming was rhythmic in nature. So they collected video footage of the drumming behavior among 11 chimpanzee communities across six populations in East Africa (Uganda) and West Africa (Ivory Coast), amounting to 371 drumming bouts.

Their analysis of the drum patterns confirmed their hypothesis. The western chimps drummed in regularly spaced hits, used faster tempos, and started drumming earlier during their pant-hoot vocalizations. Eastern chimps would alternate between shorter and longer spaced hits. Since this kind of rhythmic percussion is one of the earliest evolved forms of human musical expression and is ubiquitous across cultures, findings such as this could shed light on how our love of rhythm evolved.
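For a sense of what isochrony versus alternating intervals means in practice, here is an illustrative sketch with made-up hit times (it is not the study’s actual analysis pipeline): rhythmic regularity can be summarized by the coefficient of variation of the gaps between successive hits.

```python
# Illustrative sketch (hypothetical data, not the study's analysis): measure
# how evenly spaced a drumming bout is via the coefficient of variation (CV)
# of its inter-hit intervals. Perfectly isochronous drumming has CV = 0;
# alternating short/long gaps push it well above zero.
import numpy as np

def interval_cv(hit_times):
    intervals = np.diff(np.sort(hit_times))
    return intervals.std() / intervals.mean()

isochronous = np.arange(0, 5, 0.5)          # evenly spaced hits ("western" pattern)
alternating = np.cumsum([0.3, 0.7] * 5)     # short/long alternation ("eastern" pattern)
print(f"evenly spaced: CV = {interval_cv(isochronous):.2f}")  # 0.00
print(f"alternating:   CV = {interval_cv(alternating):.2f}")  # ~0.38
```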

DOI: Current Biology, 2025. 10.1016/j.cub.2025.04.019  (About DOIs).

Distinctive styles of two jazz greats

Wes Montgomery (left) and Joe Pass (right) playing guitars

Jazz lovers likely need no introduction to Joe Pass and Wes Montgomery, 20th century guitarists who influenced generations of jazz musicians with their innovative techniques. Montgomery, for instance, didn’t use a pick, preferring to pluck the strings with his thumb—a method he developed because he practiced at night after working all day as a machinist and didn’t want to wake his children or neighbors. Pass developed his own range of picking techniques, including fingerpicking, hybrid picking, and “flat picking.”

Chirag Gokani and Preston Wilson, both with Applied Research Laboratories and the University of Texas at Austin, greatly admired both Pass and Montgomery and decided to explore the acoustics underlying their distinctive playing, modeling the interactions of the thumb, fingers, and pick with a guitar string. They described their research during a meeting of the Acoustical Society of America in New Orleans, LA.

Among their findings: Montgomery achieved his warm tone by playing closer to the bridge and mostly plucking at the string. Pass’s rich tone arose from a combination of using a pick and playing closer to the guitar neck. There were also differences in how much the thumb, fingers, and pick slip off the string: use of the thumb (Montgomery) produced more of a “pluck” compared to the pick (Pass), which produced more of a “strike.” Gokani and Wilson think their model could be used to synthesize digital guitars with a more realistic sound, as well as to help guitarists better emulate Pass and Montgomery.

Sounds of an ancient underground city

A collection of images from the underground tunnels of Derinkuyu.

Credit: Sezin Nas

Turkey is home to the underground city Derinkuyu, originally carved out inside soft volcanic rock around the 8th century BCE. It was later expanded to include four main ventilation channels (and some 50,000 smaller shafts) serving seven levels, and it could be closed off from the inside with a large rolling stone. The city could hold up to 20,000 people and was connected to another underground city, Kaymakli, via tunnels. Derinkuyu helped protect Arab Muslims during the Arab-Byzantine wars, served as a refuge from the Ottomans in the 14th century, and was a haven for Armenians escaping persecution in the early 20th century, among other functions.

The tunnels were rediscovered in the 1960s, and about half of the city has been open to visitors since 2016. The site is naturally of great archaeological interest, but there has been little to no research on its acoustics, particularly those of the ventilation channels—one of Derinkuyu’s most distinctive features, according to Sezin Nas, an architectural acoustician at Istanbul Galata University in Turkey. She gave a talk at a meeting of the Acoustical Society of America in New Orleans, LA, about her work on the site’s acoustic environment.

Nas analyzed a church, a living area, and a kitchen, measuring sound sources and reverberation patterns, among other factors, to create a 3D virtual soundscape. The hope is that a better understanding of this aspect of Derinkuyu could improve the design of future underground urban spaces, and that the virtual soundscape could one day let visitors experience the sounds of the city for themselves.

MIT’s latest ping-pong robot

Robots playing ping-pong have been a thing since the 1980s, of particular interest to scientists because it requires the robot to combine the slow, precise ability to grasp and pick up objects with dynamic, adaptable locomotion. Such robots need high-speed machine vision, fast motors and actuators, precise control, and the ability to make accurate predictions in real time, not to mention being able to develop a game strategy. More recent designs use AI techniques to allow the robots to “learn” from prior data to improve their performance.

MIT researchers have built their own version of a ping-pong playing robot, incorporating a lightweight design and the ability to precisely return shots. They built on prior work developing the Humanoid, a small bipedal two-armed robot—specifically, modifying the Humanoid’s arm by adding an extra degree of freedom to the wrist so the robot could control a ping-pong paddle. They tested their robot by mounting it on a ping-pong table and lobbing 150 balls at it from the other side of the table, capturing the action with high-speed cameras.

The new bot can execute three different swing types (loop, drive, and chip), and during the trial runs it returned the ball with impressive accuracy across all three: 88.4 percent, 89.2 percent, and 87.5 percent, respectively. Subsequent tweaks to their system brought the robot’s strike speed up to 19 meters per second (about 42 mph), comparable to the 12 to 25 meters per second achieved by advanced human players. The addition of control algorithms gave the robot the ability to aim. The robot still has limited mobility and reach because it has to be fixed to the ping-pong table, but the MIT researchers plan to rig it to a gantry or wheeled platform in the future to address that shortcoming.

Why orange cats are orange

an orange tabby kitten

Cat lovers know orange cats are special for more than their unique coloring, but that’s the quality that has intrigued scientists for almost a century. Sure, lots of animals have orange, ginger, or yellow hues, like tigers, orangutans, and golden retrievers. But in domestic cats that color is specifically linked to sex. Almost all orange cats are male. Scientists have now identified the genetic mutation responsible and it appears to be unique to cats, according to a paper published in the journal Current Biology.

Prior work had narrowed down the region on the X chromosome most likely to contain the relevant mutation. The scientists knew that females usually have just one copy of the mutation and in that case have tortoiseshell (partially orange) coloring, although in rare cases, a female cat will be orange if both X chromosomes have the mutation. Over the last five to ten years, there has been an explosion in genome resources (including complete sequenced genomes) for cats which greatly aided the team’s research, along with taking additional DNA samples from cats at spay and neuter clinics.

From an initial pool of 51 candidate variants, the scientists narrowed it down to three genes, only one of which was likely to play any role in gene regulation: Arhgap36. It wasn’t known to play any role in pigment cells in humans, mice, or non-orange cats. But orange cats are special; their mutation (sex-linked orange) turns on Arhgap36 expression in pigment cells (and only pigment cells), thereby interfering with the molecular pathway that controls coat color in other orange-shaded mammals. The scientists suggest that this is an example of how genes can acquire new functions, thereby enabling species to better adapt and evolve.

DOI: Current Biology, 2025. 10.1016/j.cub.2025.03.075  (About DOIs).

Not a Roman “massacre” after all

Two of the skeletons excavated by Mortimer Wheeler in the 1930s, dating from the 1st century AD.

Credit: Martin Smith

In 1936, archaeologists excavating the Iron Age hill fort Maiden Castle in the UK unearthed dozens of human skeletons, all showing signs of lethal injuries to the head and upper body—likely inflicted with weaponry. At the time, this was interpreted as evidence of a pitched battle between the Britons of the local Durotriges tribe and invading Romans. The Romans slaughtered the native inhabitants, thereby bringing a sudden violent end to the Iron Age. At least that’s the popular narrative that has prevailed ever since in countless popular articles, books, and documentaries.

But a paper published in the Oxford Journal of Archaeology calls that narrative into question. Archaeologists at Bournemouth University have re-analyzed those burials, incorporating radiocarbon dating into their efforts. They concluded that those individuals didn’t die in a single brutal battle. Rather, it was Britons killing other Britons over multiple generations between the first century BCE and the first century CE—most likely in periodic localized outbursts of violence in the lead-up to the Roman conquest of Britain. It’s possible there are still many human remains waiting to be discovered at the site, which could shed further light on what happened at Maiden Castle.

DOI: Oxford Journal of Archaeology, 2025. 10.1111/ojoa.12324  (About DOIs).


Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.

Research roundup: 7 stories we almost missed Read More »

falcon-9-sonic-booms-can-feel-more-like-seismic-waves

Falcon 9 sonic booms can feel more like seismic waves

Could the similarities confuse California residents who might mistake a sonic boom for an earthquake? Perhaps, at least until residents learn otherwise. “Since we’re often setting up in people’s backyard, they text us the results of what they heard,” said Gee. “It’s fantastic citizen science. They’ll tell us the difference is that the walls shake but the floors don’t. They’re starting to be able to tell the difference between an earthquake or a sonic boom from a launch.”

Launch trajectories of Falcon 9 rockets along the California coast. Credit: Kent Gee

A rocket’s trajectory also plays an important role. “Everyone sees the same thing, but what you hear depends on where you’re at and the rocket’s path or trajectory,” said Gee, adding that even the same flight path can produce markedly different noise levels. “There’s a focal region in Ventura, Oxnard, and Camarillo where the booms are more impactful,” he said. “Where that focus occurs changes from launch to launch, even for the same trajectory.” That points to meteorology as another factor: some times of year could have more impact than others as weather conditions shift, with wind shear, temperature gradients, and topography all potentially affecting how sonic booms propagate.

In short, “If you can change your trajectory even a little under the right meteorological conditions, you can have a big impact on the sonic booms in this region of the country,” said Gee. And it’s only the beginning of the project; the team is still gathering data. “No two launches look the same right now,” said Gee. “It’s like trying to catch lightning.”

As our understanding improves, he sees the conversation shifting toward more subjective social questions, possibly leading to the development of science-based local regulations, such as noise ordinances, to address any negative launch impacts. The next step is to model sonic booms under different weather conditions, which will be challenging given coastal California’s microclimates. “If you’ve ever driven along the California coast, the weather changes dramatically,” said Gee. “You go from complete fog at Vandenberg to complete sun in Ventura County just 60 miles from the base.”

Falcon 9 sonic booms can feel more like seismic waves Read More »

the-key-to-a-successful-egg-drop-experiment?-drop-it-on-its-side

The key to a successful egg drop experiment? Drop it on its side

There was a key difference, however, between how vertically and horizontally squeezed eggs deformed in the compression experiments—namely, the former deformed less than the latter. The shell’s greater rigidity along its long axis was an advantage because the heavy load was distributed over the surface. (It’s why the one-handed egg-cracking technique targets the center of a horizontally held egg.)

But the authors found that this advantage under static compression proved to be a disadvantage when dropping eggs from a height, with the horizontal position emerging as the optimal orientation. It comes down to the difference between stiffness—how much force is needed to deform the egg—and toughness, i.e., how much energy the egg can absorb before it cracks.

Cohen et al.’s experiments showed that eggs are tougher when loaded horizontally along their equator, and stiffer when compressed vertically, suggesting that “an egg dropped on its equator can likely sustain greater drop heights without cracking,” they wrote. “Even if eggs could sustain a higher force when loaded in the vertical direction, it does not necessarily imply that they are less likely to break when dropped in that orientation. In contrast to static loading, to remain intact following a dynamic impact, a body must be able to absorb all of its kinetic energy by transferring it into reversible deformation.”

“Eggs need to be tough, not stiff, in order to survive a fall,” Cohen et al. concluded, pointing to our intuitive understanding that we should bend our knees rather than lock them into a straightened position when landing after a jump, for example. “Our results and analysis serve as a cautionary tale about how language can affect our understanding of a system, and improper framing of a problem can lead to misunderstanding and miseducation.”
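
To make the stiffness-versus-toughness distinction concrete, here is a minimal Python sketch. The stiffness and crack-displacement numbers are purely illustrative assumptions chosen to mimic the qualitative finding (vertical loading stiffer but less tough); they are not measurements from Cohen et al.

```python
# Illustrative sketch only: the stiffness and crack-displacement values below are
# hypothetical, not measurements from the paper. They show why a stiffer loading
# direction can still absorb less energy before cracking if it fails at a smaller
# deformation.

def absorbed_energy(stiffness_n_per_m: float, crack_displacement_m: float) -> float:
    """Energy an idealized linear-elastic shell can store before cracking:
    the area under the force-displacement line, E = 0.5 * k * x^2."""
    return 0.5 * stiffness_n_per_m * crack_displacement_m**2

vertical = absorbed_energy(150_000, 0.15e-3)   # stiffer, but cracks at a smaller deformation
horizontal = absorbed_energy(80_000, 0.35e-3)  # more compliant, soaks up more energy

mass_kg, g = 0.050, 9.81  # a typical ~50 g egg
for height_m in (0.005, 0.010, 0.020):
    impact_mj = mass_kg * g * height_m * 1000  # kinetic energy at impact, E = m*g*h, in millijoules
    print(f"{height_m*1000:.0f} mm drop: {impact_mj:.1f} mJ to absorb | "
          f"vertical capacity {vertical*1000:.1f} mJ | horizontal capacity {horizontal*1000:.1f} mJ")
```

With these toy numbers, the more compliant horizontal orientation can absorb roughly three times as much energy before cracking, so it survives the shortest drop while the stiffer vertical orientation is already over its energy budget, which is the gist of the argument above.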

DOI: Communications Physics, 2025. 10.1038/s42005-025-02087-0  (About DOIs).

The key to a successful egg drop experiment? Drop it on its side Read More »

cern-gears-up-to-ship-antimatter-across-europe

CERN gears up to ship antimatter across Europe

There’s a lot of matter around, which ensures that any antimatter produced experiences a very short lifespan. Studying antimatter, therefore, has been extremely difficult. But that’s changed a bit in recent years, as CERN has set up a facility that produces and traps antimatter, allowing for extensive studies of its properties, including entire anti-atoms.

Unfortunately, the hardware used to capture antiprotons also produces interference that limits the precision with which measurements can be made. So CERN decided to figure out how to move the antimatter away from where it’s produced. And since it was tackling that problem anyway, it went ahead and built a shipping container for antimatter, allowing it to be put on a truck and potentially taken to labs throughout Europe.

A shipping container for antimatter

The problem facing CERN comes from its own hardware. The antimatter it captures is produced by smashing a particle beam into a stationary target. As a result, all the anti-particles that come out of the debris carry a lot of energy. If you want to hold on to any of them, you have to slow them down, which is done using electromagnetic fields that can act on the charged antimatter particles. Unfortunately, as the team behind the new work notes, many of the measurements we’d like to do with the antimatter are “extremely sensitive to external magnetic field noise.”

In short, the hardware that slows the antimatter down limits the precision of the measurements you can take.

The obvious solution is to move the antimatter away from where it’s produced. But that gets tricky very fast. The antimatter containment device has to be kept at an extreme vacuum and needs superconducting materials to produce the electromagnetic fields that keep the antimatter from bumping into the walls of the container. All of that means a significant power supply, along with a cache of liquid helium to keep the superconductors working. A standard shipping container just won’t do.

So the team at CERN built a two-meter-long portable containment device. On one end is a junction that allows it to be plugged into the beam of particles produced by the existing facility. That junction leads to the containment area, which is blanketed by a superconducting magnet. Elsewhere on the device are batteries to ensure an uninterrupted power supply, along with the electronics to run it all. The whole setup is encased in a metal frame that includes lifting points that can be used to attach it to a crane for moving around.

CERN gears up to ship antimatter across Europe Read More »

physics-of-the-perfect-cacio-e-pepe-sauce

Physics of the perfect cacio e pepe sauce


The trick: Add corn starch separately to make the sauce rather than using pasta water.

Cacio e pepe is an iconic pasta dish that can be frustratingly difficult to make. Credit: Simone Frau

Nobody does pasta quite like the Italians, as anyone who has tasted an authentic “pasta alla cacio e pepe” can attest. It’s a simple dish: just tonnarelli pasta, pecorino cheese, and pepper. But its simplicity is deceptive. Cacio e pepe (“cheese and pepper”) is notoriously challenging to make because it’s so easy for the sauce to form unappetizing clumps, with a texture more akin to stringy mozzarella than to a smooth, creamy sauce.

A team of Italian physicists has come to the rescue with a foolproof recipe based on their many scientific experiments, according to a new paper published in the journal Physics of Fluids. The trick: using corn starch for the cheese and pepper sauce instead of relying on however much starch leaches into the boiling water as the pasta is cooked.

“A true Italian grandmother or a skilled home chef from Rome would never need a scientific recipe for cacio e pepe, relying instead on instinct and years of experience,” the authors wrote. “For everyone else, this guide offers a practical way to master the dish. Preparing cacio e pepe successfully depends on getting the balance just right, particularly the ratio of starch to cheese. The concentration of starch plays a crucial role in keeping the sauce creamy and smooth, without clumps or separation.”

There has been a surprising amount of pasta-related physics research in recent years, particularly around spaghetti—the mechanics of slurping the pasta into one’s mouth, for instance, or spitting it out (aka the “reverse spaghetti problem”). The best-known question is how to get dry spaghetti strands to break neatly in two rather than into three or more scattered pieces. French physicists successfully explained the dynamics in a 2005 paper that won them an Ig Nobel Prize. They found that, counterintuitively, a dry spaghetti strand produces a “kick back” traveling wave as it breaks. This wave temporarily increases the curvature in other sections, leading to many more breaks.

In 2020, physicists provided an explanation for why a strand of spaghetti in a pot of boiling water will start to sag as it softens before sinking to the bottom of the pot and curling back on itself in a U shape. Physicists have also discovered a way to determine if one’s spaghetti is perfectly done by using a simple ruler (although one can always use the tried-and-true method of flinging a test strand against the wall). In 2021, inspired by flat-packed furniture, scientists came up with an ingenious solution to packaging differently shaped pastas: ship them in a flat 2D form that takes on the final 3D shape when cooked, thanks to carefully etched patterns in the pasta.

And earlier this year, physicists investigated how adding salt to a pasta pot to make it boil faster can leave a white ring on the bottom of the pot, with an eye toward identifying the factors that produce a perfect salt ring. They found that particles released from a smaller height fall faster and form a pattern with a clean central region. Those released from a greater height take longer to fall to the bottom, and the cloud of particles expands radially until the particles are far enough apart that they are no longer influenced by the wakes of neighboring particles and no longer form a cloud. In that case, you end up with a homogeneous salt ring deposit.

Going through a phase (separation)


Comparing the effect of water alone, pasta water that retains some starch, and pasta water “risottata.” Credit: G. Bartolucci et al., 2025

So it shouldn’t be the least bit surprising that physicists have now turned their attention to the problem of the perfect cacio e pepe sauce. The authors are well aware that they are treading on sacred ground for Italian traditionalists. “I hope that eight Italian authors is enough [to quell skepticism],” co-author Ivan Di Terlizzi of the Max Planck Institute for the Physics of Complex Systems told The New York Times back in January. (An earlier version of the paper was posted to the physics preprint arXiv in January, prompting that earlier coverage.)

Terlizzi and his fellow authors are all living abroad and frequently meet for dinner. Cacio e pepe is among their favorite traditional dishes to make, and as physicists, they couldn’t help but want to learn more about the unique physics of the process, not to mention “the more practical aim to avoid wasting good pecorino,” said Terlizzi. They focused on the separation that often occurs when cheese and water are mixed, building on earlier culinary experiments.

As the pasta cooks in boiling water, the noodles release starch. Traditionally, the chef will extract part of that water-and-starch solution—cooled to a suitable temperature so the cheese proteins don’t denature and clump—and mix it with the cheese to make the sauce, adding the pepper last, right before serving. But the authors note that temperature is not the only factor that can lead to this dreaded “mozzarella phase.”

According to the authors, if one tries to mix cheese and water without any starch, the clumping is more pronounced. There is less clumping with water containing a little starch, like water in which pasta has been cooked. And when one mixes the cheese with pasta water “risottata”—i.e., collected and heated in a pan so enough water evaporates that there is a higher concentration of starch—there is almost no clumping.


Effect of trisodium citrate on the stability of cacio e pepe sauce. Credit: G. Bartolucci et al., 2025

So starch plays a crucial role in the process of making cacio e pepe. The authors devised a set of experiments to scientifically investigate the phase behavior of water, starch, and cheese mixed together in various concentrations and at different temperatures. They primarily used standard kitchen tools to make sure home cooks could recreate their results (although not every kitchen has a sous vide machine). This enabled them to devise a phase diagram of what happens to the sauce as the conditions change.

The authors found that the correct starch ratio is between 2 and 3 percent of the cheese weight. Below that, you get the clumping phase separation; above that, the sauce “becomes stiff and unappetizing as it cools,” they wrote. Pasta water alone contains too little starch. Using pasta water “risottata” may concentrate the starch, but the chef has less control over the precise amount. So the authors recommend simply dissolving 4 grams of powdered potato or corn starch in 40 grams of water, heating it gently until it thickens—a transition known as starch gelatinization—and combining that gel with the cheese. They also recommend toasting the black pepper briefly before adding it to the mixture to enhance its flavors and aromas.
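
For cooks who want to scale that gel to a different amount of cheese, here is a minimal Python sketch of the arithmetic. The 2 to 3 percent starch-to-cheese ratio and the 4 g starch / 40 g water example come from the paper as described above; defaulting to the 2.5 percent midpoint and keeping the water at ten times the starch weight are our own assumptions for illustration, not prescriptions from the authors.

```python
# Hedged helper: computes starch and water for the gel from the cheese weight.
# The 2-3% starch-to-cheese ratio is from the paper; the 1:10 starch-to-water
# proportion simply mirrors the 4 g / 40 g example and is an assumption here.

def cacio_e_pepe_gel(cheese_g: float, starch_fraction: float = 0.025) -> dict:
    """Grams of starch and water to gelatinize for a given weight of pecorino."""
    if not 0.02 <= starch_fraction <= 0.03:
        raise ValueError("starch_fraction should stay within the recommended 2-3% range")
    starch_g = cheese_g * starch_fraction
    water_g = 10 * starch_g  # same proportion as 4 g starch in 40 g water
    return {"cheese_g": cheese_g, "starch_g": round(starch_g, 1), "water_g": round(water_g, 1)}

print(cacio_e_pepe_gel(160))  # -> {'cheese_g': 160, 'starch_g': 4.0, 'water_g': 40.0}
print(cacio_e_pepe_gel(240))  # -> {'cheese_g': 240, 'starch_g': 6.0, 'water_g': 60.0}
```

With 160 grams of pecorino, for instance, the 2.5 percent midpoint happens to reproduce the 4 grams of starch and 40 grams of water mentioned above.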

They ran the same set of experiments using trisodium citrate as an alternative stabilizer, which is widely used in the food industry as an emulsifier—including in the production of processed cheese, since it enhances smoothness and prevents unwanted clumping, exactly the properties one desires for a perfect cacio e pepe sauce. The trisodium citrate at concentrations above 2 percent worked just as well at avoiding the mozzarella phase, “though at a cost of deviating from strict culinary tradition,” the authors concluded. “However, while the sauce stabilization is more efficient, we found the taste of the cheese to be slightly blunted, likely due to the basic properties of the salt.”

The team’s next research goal is to conduct similar experiments with making pasta alla gricia—basically the same as cacio e pepe, with the addition of guanciale (cured pork cheek). “This recipe seems to be easier to perform, and we don’t know exactly why,” said co-author Daniel Maria Busiello, Terlizzi’s colleague at the Dresden Max Planck Institute. “This is one idea we might explore in the future.”

DOI: Physics of Fluids, 2025. 10.1063/5.0255841  (About DOIs).


Physics of the perfect cacio e pepe sauce Read More »