Physics

Physicists 3D-printed a Christmas tree of ice

Physicists at the University of Amsterdam came up with a really cool bit of Christmas decor: a miniature 3D-printed Christmas tree, a mere 8 centimeters tall, made of ice, without any refrigeration equipment or other freezing technology, and at minimal cost. The secret is evaporative cooling, according to a preprint posted to the physics arXiv.

Evaporative cooling is a well-known phenomenon; mammals use it to regulate body temperature. You can see it in your morning cup of hot coffee: the most energetic water molecules escape from the surface as steam, carrying heat away with them. It also plays a role (along with shock wave dynamics and various other factors) in the formation of “wine tears.” And it’s a key step in creating Bose-Einstein condensates, where the most energetic atoms are allowed to “jump out” of a magnetic trap, leaving the remaining atoms colder.

And evaporative cooling is also the main culprit behind the infamous “stall” that so frequently plagues aspiring BBQ pit masters eager to make a successful pork butt. The meat sweats as it cooks, releasing the moisture within, and that moisture evaporates and cools the meat, effectively canceling out the heat from the BBQ. That’s why a growing number of competitive pit masters wrap their meat in tinfoil after the first few hours (usually when the internal temperature hits 170° F).
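To get a sense of the energy budget involved, here is a rough back-of-envelope sketch of my own (standard textbook property values, not numbers from the preprint): in a vacuum, where evaporation is essentially the only way to shed heat, only a modest fraction of the water needs to evaporate to freeze everything that remains.

```python
# Back-of-envelope estimate (not from the paper): what fraction of a water jet
# must evaporate in a vacuum to freeze the rest? Uses standard property values.
L_VAP = 2.5e6   # J/kg, latent heat of vaporization of water near 0-20 C (approx.)
L_FUS = 3.34e5  # J/kg, latent heat of fusion of ice
C_W = 4186.0    # J/(kg*K), specific heat of liquid water
T0 = 20.0       # C, assumed starting temperature of the water

# Energy balance: heat carried off by the evaporating fraction f must equal the
# heat needed to chill and then freeze the remaining (1 - f):
#   f * L_VAP = (1 - f) * (C_W * T0 + L_FUS)
q_to_remove = C_W * T0 + L_FUS
f = q_to_remove / (L_VAP + q_to_remove)
print(f"Roughly {f:.0%} of the water must boil off to freeze the remainder.")
```

With these assumed numbers the answer comes out to roughly 14 percent, which helps explain why a thin jet sprayed into a vacuum can freeze itself without any external refrigeration.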

Ice-printing methods usually rely on cryogenics or on cooled substrates. Per the authors, this is the first time evaporative cooling principles have been applied to 3D printing. The trick was to house the 3D printer inside a vacuum chamber and use a jet nozzle as the print head—something they discovered serendipitously while trying to get rid of air drag by spraying water in a vacuum chamber. “The printer’s motion control guides the water jet layer-by-layer, building geometry on demand,” the authors wrote in a blog post for Nature, adding:


No sterile neutrinos after all, say MicroBooNE physicists

Since the 1990s, physicists have pondered the tantalizing possibility of an exotic fourth type of neutrino, dubbed the “sterile” neutrino, that doesn’t interact with regular matter at all—apart, perhaps, from its fellow neutrinos. But definitive experimental evidence for sterile neutrinos has remained elusive. Now it looks like the latest results from Fermilab’s MicroBooNE experiment have ruled out the sterile neutrino entirely, according to a paper published in the journal Nature.

How did the possibility of sterile neutrinos even become a thing? It all dates back to the so-called “solar neutrino problem”: physicists detected the first solar neutrinos in 1966, but far fewer of them turned up in detectors than theory predicted. Physicists had already discovered a second type (“flavor”) of neutrino, the muon neutrino, in 1962; a third flavor, the tau neutrino, followed in 2000.

Physicists already suspected that neutrinos might be able to switch from one flavor to another. In 2002, scientists at the Sudbury Neutrino Observatory (or SNO) announced that they had solved the solar neutrino problem. The missing solar (electron) neutrinos were just in disguise, having changed into a different flavor on the long journey between the Sun and the Earth. If neutrinos oscillate, then they must have a teensy bit of mass after all. That posed another knotty neutrino-related problem. There are three neutrino flavors, but none of them has a well-defined mass. Rather, different kinds of “mass states” mix together in various ways to produce electron, muon, and tau neutrinos. That’s quantum weirdness for you.
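For reference, the standard two-flavor approximation makes the oscillation-mass connection explicit (this is textbook physics, not something specific to the MicroBooNE paper): the probability that a neutrino born as flavor α is detected as flavor β after traveling a distance L with energy E is

\[
P(\nu_\alpha \to \nu_\beta) \;=\; \sin^2(2\theta)\,\sin^2\!\left(\frac{\Delta m^2\,L}{4E}\right),
\]

where θ is a mixing angle and Δm² is the difference of the squared masses of the two mass states. If every Δm² were zero, the probability would vanish, which is why observing oscillations forces at least some of the neutrino masses to be nonzero.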

And there was another conundrum, thanks to results from Los Alamos’ LSND experiment and Fermilab’s MiniBooNE (MicroBooNE’s predecessor). Both found evidence of muon neutrinos oscillating into electron neutrinos in a way that shouldn’t be possible if there were just three neutrino flavors. So physicists suggested there might be a fourth flavor: the sterile neutrino, so named because unlike the other three, it does not couple to a charged counterpart via the electroweak force. Its existence would also have big implications for the nature of dark matter. But despite the odd tantalizing hint, sterile neutrinos have proven to be maddeningly elusive.


Research roundup: 6 cool stories we almost missed


The assassination of a Hungarian duke, why woodpeckers grunt when they peck, and more.

Skull of remains found in a 13th century Dominican monastery on Margaret Island, Budapest, Hungary Credit: Eötvös Loránd University

It’s a regrettable reality that there is never enough time to cover all the interesting scientific stories we come across each month. In the past, we’ve featured year-end roundups of cool science stories we (almost) missed. This year, we’re experimenting with a monthly collection. November’s list includes forensic details of the medieval assassination of a Hungarian duke, why woodpeckers grunt when they peck, and more evidence that X’s much-maligned community notes might actually help combat the spread of misinformation after all.

An assassinated medieval Hungarian duke

The observed perimortem lesions on the human remains (CL=cranial lesion, PL= Postcranial lesion). The drawing of the skeleton was generated using OpenAI’s image generation tools (DALL·E) via ChatGPT.

Credit: Tamás Hajdu et al., 2026

Back in 1915, archaeologists discovered the skeletal remains of a young man in a Dominican monastery on Margaret Island in Budapest, Hungary. The remains were believed to be those of Duke Béla of Macsó, grandson of the medieval Hungarian King Béla IV. Per historical records, the young duke was brutally assassinated in 1272 by a rival faction; his mutilated remains were recovered by his sister and niece and buried in the monastery.

The identification of the remains was based on a contemporary osteological analysis, but the bones were subsequently lost and only rediscovered in 2018. A paper published in the journal Forensic Science International: Genetics has now confirmed that identification and shed more light on precisely how the duke died. (A preprint is available on bioRxiv.)

An interdisciplinary team of researchers performed various kinds of bioarchaeological analysis on the remains, including genetic testing, proteomics, 3D modeling, and radiocarbon dating. The resulting data definitively proves that the skeleton is indeed that of Duke Béla of Macsó.

The authors were also able to reconstruct the manner of the duke’s death, concluding that it was a coordinated attack by three people: one struck from the front while the other two attacked from the left and right sides. The duke was facing his assassins and tried to defend himself. The weapons used were most likely a saber and a long sword, and the assassins kept raining down blows even after the duke had fallen to the ground. The authors concluded that while the attack was clearly planned, it was also personal and fueled by rage or hate.

DOI: Forensic Science International: Genetics, 2025. 10.1016/j.fsigen.2025.103381  (About DOIs).

Why woodpeckers grunt when they peck

A male Pileated woodpecker foraging.

Woodpeckers energetically drum away at tree trunks all day long with their beaks and yet somehow never seem to get concussions, despite the fact that such drumming can produce deceleration forces as high as 1,200 g’s. (Humans suffer concussions with a sudden deceleration of just 100 g’s.) While popular myth holds that woodpecker heads are structured in such a way to absorb the shock, and there has been some science to back that up, more recent research found that their heads act more like hammers than shock absorbers. A paper published in the Journal of Experimental Biology sheds further light on the biomechanics of how woodpeckers essentially turn themselves into hammers and reveals that the birds actually grunt as they strike wood.

The authors caught eight wild downy woodpeckers and recorded them drilling and tapping on pieces of hardwood in the lab for three days, while also measuring electrical signals in their heads, necks, abdomens, tails, and leg muscles. Analyzing the footage, they found that woodpeckers use their hip flexors and front neck muscles to propel themselves forward as they peck while tipping their heads back and bracing themselves using muscles at the base of the skull and back of the neck. The birds use abdominal muscles for stability and brace for impact using their tail muscles to anchor their bodies against a tree. As for the grunting, the authors noted that it’s a type of breathing pattern used by tennis players (and martial artists) to boost the power of a strike.

DOI: Journal of Experimental Biology, 2025. 10.1242/jeb.251167  (About DOIs).

Raisins turn water into wine

wine glass half filled with raisins

Credit: Kyoto University

Fermentation has been around in some form for millennia, relying on alcohol-producing yeasts like Saccharomyces cerevisiae; cultured S. cerevisiae is still used by winemakers today. It’s long been thought that winemakers in ancient times stored fresh crushed grapes in jars and relied on natural fermentation to work its magic, but recent studies have called this into question by demonstrating that S. cerevisiae colonies usually don’t form on fresh grape skins. But the yeast does like raisins, as Kyoto University researchers recently discovered. They’ve followed up that earlier work with a paper published in Scientific Reports, demonstrating that it’s possible to use raisins to turn water into wine.

The authors harvested fresh grapes and dried them for 28 days. Some were dried using an incubator, some were sun-dried, and a third batch was dried using a combination of the two methods. The researchers then added the resulting raisins to bottles of water—three samples for each type of drying process—sealed the bottles, and stored them at room temperature for two weeks. One incubator-dried sample and two combo samples successfully fermented, but all three of the sun-dried samples did so, and at higher ethanol concentrations. Future research will focus on identifying the underlying molecular mechanisms. And for those interested in trying this at home, the authors warn that it only works with naturally sun-dried raisins, since store-bought varieties have oil coatings that block fermentation.

DOI: Scientific Reports, 2025. 10.1038/s41598-025-23715-3  (About DOIs).

An octopus-inspired pigment

An octopus camouflages itself with the seafloor.

Credit: Charlotte Seid

Octopuses, cuttlefish, and several other cephalopods can rapidly shift the colors in their skin thanks to that skin’s unique complex structure, including layers of chromatophores, iridophores, and leucophores. A color-shifting natural pigment called xanthommatin also plays a key role, but it’s been difficult to study because it’s hard to harvest enough directly from animals, and lab-based methods of making the pigment are labor-intensive and don’t yield much. Scientists at the University of California, San Diego, have developed a new method for making xanthommatin in substantially larger quantities, according to a paper published in Nature Biotechnology.

The issue is that trying to get microbes to make foreign compounds creates a metabolic burden, and the microbes hence resist the process, hindering yields. The UC San Diego team figured out how to trick the cells into producing more xanthommatin by genetically engineering them in such a way that making the pigment was essential to a cell’s survival. They achieved yields of between 1 and 3 grams per liter, compared to just 5 milligrams of pigment per liter using traditional approaches. While this work is proof of principle, the authors foresee such future applications as photoelectronic devices and thermal coatings, dyes, natural sunscreens, color-changing paints, and environmental sensors. It could also be used to make other kinds of chemicals and help industries shift away from older methods that rely on fossil fuel-based materials.

DOI: Nature Biotechnology, 2025. 10.1038/s41587-025-02867-7  (About DOIs).

A body-swap robot

Participant standing on body-swap balance robot

Credit: Sachi Wickramasinghe/UBC Media Relations

Among the most serious risks facing older adults is falling. According to the authors of a paper published in Science Robotics, standing upright requires the brain to coordinate signals from the eyes, inner ears, and feet to counter gravity, and there’s a natural lag in how fast this information travels back and forth between brain and muscles. Aging and certain diseases like diabetic neuropathy and multiple sclerosis can further delay that vital communication; the authors liken it to steering a car with a wheel that responds half a second late. And it’s a challenge to directly study the brain under such conditions.

That’s why researchers at the University of British Columbia built a large “body swap” robotic platform. Subjects stood on force plates attached to a motor-driven backboard to reproduce the physical forces at play when standing upright: gravity, inertia, and “viscosity,” which in this case describes the damping effect of muscles and joints that allow us to lean without falling. The platform is designed to subtly alter those forces and also add a 200-millisecond delay.

The authors tested 20 participants and found that lowering inertia and making the viscosity negative resulted in similar instability to that which resulted from a signal delay. They then brought in ten new subjects to study whether adjusting body mechanics could compensate for information delays. They found that adding inertia and viscosity could at least partially counter the instability that arose from signal delay—essentially giving the body a small mechanical boost to help the brain maintain balance. The eventual goal is to design wearables that offer gentle resistance when an older person starts to lose their balance, and/or help patients with MS, for example, adjust to slower signal feedback.
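To see why a feedback delay alone can destabilize standing, and why a bit of extra mechanical damping can help, here is a minimal toy simulation of my own (an inverted-pendulum stand-in for upright posture, not the study's actual model): the "brain" reacts to a 200-millisecond-old copy of the body's state, while any added passive viscosity acts instantly.

```python
# Toy sketch, not the Science Robotics model: an inverted pendulum balanced by
# delayed feedback (the "brain"), with optional undelayed passive damping (the
# mechanical assist a wearable might provide). All parameters are illustrative.
def simulate(delay_s=0.2, passive_damping=0.0, t_end=15.0, dt=0.001):
    g_over_h = 9.81          # gravity / center-of-mass height, 1/s^2 (assumed)
    kp, kd = 15.0, 2.0       # feedback gains on lean angle and lean rate
    theta, omega = 0.02, 0.0 # small initial lean (rad), zero angular velocity
    n_delay = int(delay_s / dt)
    history = [(theta, omega)] * (n_delay + 1)   # buffer of past states
    max_lean = abs(theta)
    for _ in range(int(t_end / dt)):
        th_old, om_old = history[0]              # the state the "brain" sees
        torque = -kp * th_old - kd * om_old      # delayed active feedback
        torque -= passive_damping * omega        # undelayed mechanical damping
        alpha = g_over_h * theta + torque        # linearized toppling dynamics
        omega += alpha * dt
        theta += omega * dt
        history.pop(0)
        history.append((theta, omega))
        max_lean = max(max_lean, abs(theta))
    return max_lean

print("delay, no assist:   max lean (rad) =", simulate(passive_damping=0.0))
print("delay plus damping: max lean (rad) =", simulate(passive_damping=4.0))
```

With these made-up gains, the delayed-only case oscillates with growing amplitude while the added damping keeps the lean bounded, which is the same qualitative effect the platform experiments probed by dialing viscosity and inertia up and down.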

DOI: Science Robotics, 2025. 10.1126/scirobotics.adv0496  (About DOIs).

X community notes might actually work

cropped image of phone screen showing an X post with a community note underneath

Credit: Huaxia Rui

Earlier this year, Elon Musk claimed that X’s community notes feature needed tweaking because it was being gamed by “government & legacy media” to contradict Trump—despite vigorously defending the robustness of the feature against such manipulation in the past. A growing body of research seems to back Musk’s earlier stance.

For instance, last year Bloomberg pointed to several studies suggesting that crowdsourcing worked just as well as using professional fact-checkers when assessing the accuracy of news stories. The latest evidence that crowdsourced fact checks can be effective at curbing misinformation comes from a paper published in the journal Information Systems Research, which found that X posts with public corrections were 32 percent more likely to be deleted by their authors.

Co-author Huaxia Rui of the University of Rochester pointed out that community notes must meet a scoring threshold before they appear publicly on posts; notes that fall short remain hidden from public view. Seeing a prime opportunity in that arrangement, Rui et al. analyzed 264,600 X posts that had received at least one community note and compared those just above and just below the threshold. The posts were collected from two different periods: June through August 2024, right before the US presidential election (when misinformation typically surges), and the post-election period of January and February 2025.
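The comparison logic is essentially a threshold (regression-discontinuity-style) design. Here is a toy sketch of my own, with made-up numbers rather than the study's data, just to show the shape of the analysis: posts whose note barely clears the display cutoff get a public note, posts just below do not, and deletion rates are compared within a narrow band around the cutoff.

```python
# Toy sketch (not the study's code or data): compare deletion rates for posts
# whose community note landed just above vs. just below the display threshold.
import random

random.seed(0)
THRESHOLD = 0.40          # helpfulness score needed for the note to go public
BANDWIDTH = 0.05          # how close to the cutoff a post must be to count

def simulate_post():
    score = random.uniform(0.0, 1.0)       # note helpfulness score
    shown = score >= THRESHOLD             # note displayed publicly?
    base_rate = 0.10                       # assumed baseline deletion probability
    boost = 0.03 if shown else 0.0         # assumed effect of a public note
    deleted = random.random() < base_rate + boost
    return score, shown, deleted

posts = [simulate_post() for _ in range(200_000)]
near = [(shown, deleted) for score, shown, deleted in posts
        if abs(score - THRESHOLD) <= BANDWIDTH]
above = [d for s, d in near if s]
below = [d for s, d in near if not s]
print("deletion rate just above cutoff:", sum(above) / len(above))
print("deletion rate just below cutoff:", sum(below) / len(below))
```

Because posts a hair above and a hair below the cutoff are otherwise nearly identical, any gap between the two rates can be attributed to the note actually being shown.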

The fact that roughly one-third of authors responded to public community notes by deleting the post suggests that the built-in dynamics of social media (e.g., status, visibility, peer feedback) might actually help curb the spread of misinformation, as intended. The authors concluded that crowd-checking “strikes a balance between First Amendment rights and the urgent need to curb misinformation.” Letting AI write the community notes, however, is probably still a bad idea.

DOI: Information Systems Research, 2025. 10.1287/isre.2024.1609  (About DOIs).

Photo of Jennifer Ouellette

Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.


Quantum roundup: Lots of companies announcing new tech


More superposition, less supposition

IBM follows through on its June promises, plus more trapped ion news.

IBM has moved to large-scale manufacturing of its Quantum Loon chips. Credit: IBM

The end of the year is usually a busy time in the quantum computing arena, as companies often try to announce that they’ve reached major milestones before the year wraps up. This year has been no exception. And while not all of these announcements involve interesting new architectures like the one we looked at recently, they’re a good way to mark progress in the field, and they often involve the sort of smaller, incremental steps needed to push the field forward.

What follows is a quick look at a handful of announcements from the past few weeks that struck us as potentially interesting.

IBM follows through

IBM is one of the companies announcing a brand-new architecture this year. That’s not a surprise, given that the company promised to do so back in June; this week, it confirmed that it has built the two processors it said it would. These include one called Loon, which is focused on the architecture that IBM will use to host error-corrected logical qubits. Loon represents two major changes for the company: a shift to nearest-neighbor connections and the addition of long-distance connections.

IBM had previously used what it termed the “heavy hex” architecture, in which alternating qubits were connected to either two or three of their neighbors, forming a set of overlapping hexagonal structures. In Loon, the company is using a square grid, with each qubit having connections to its four closest neighbors. This higher density of connections can enable more efficient use of the qubits during computations. But qubits in Loon have additional long-distance connections to other parts of the chip, which will be needed for the specific type of error correction that IBM has committed to. They’re there to let users test out a critical future feature.
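As a rough illustration of what the connectivity change means (my own sketch, not IBM's tooling), here is the coupling map of a small square-grid device, where every interior qubit touches four neighbors; in the older heavy-hex layout, qubits alternate between two and three neighbors, so the average connectivity is noticeably lower.

```python
# Sketch (not IBM code): nearest-neighbor coupling map for a square-grid chip.
def square_grid_couplers(rows, cols):
    """Return (qubit, qubit) pairs for horizontal and vertical nearest neighbors."""
    idx = lambda r, c: r * cols + c
    pairs = []
    for r in range(rows):
        for c in range(cols):
            if c + 1 < cols:
                pairs.append((idx(r, c), idx(r, c + 1)))  # link to right neighbor
            if r + 1 < rows:
                pairs.append((idx(r, c), idx(r + 1, c)))  # link to lower neighbor
    return pairs

pairs = square_grid_couplers(4, 4)
avg_degree = 2 * len(pairs) / 16
print(f"4x4 patch: {len(pairs)} couplers, average degree {avg_degree:.1f} "
      "(interior qubits have four neighbors)")
```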

The second processor, Nighthawk, is focused on the now. It also has the nearest-neighbor connections and a square grid structure, but it lacks the long-distance connections. Instead, the focus with Nighthawk is to get error rates down so that researchers can start testing algorithms for quantum advantage—computations where quantum computers have a clear edge over classical algorithms.

In addition, the company is launching a GitHub repository that will allow the community to deposit code and performance data for both classical and quantum algorithms, enabling rigorous evaluations of relative performance. Right now, those are broken down into three categories of algorithms that IBM expects are most likely to demonstrate a verifiable quantum advantage.

This isn’t the only follow-up to IBM’s June announcement, which also saw the company describe the algorithm it would use to identify errors in its logical qubits and the corrections needed to fix them. In late October, the company said it had confirmed that the algorithm could work in real time when run on an FPGA made in collaboration with AMD.

Record lows

A few years back, we reported on a company called Oxford Ionics, which had just announced that it achieved a record low error rate in some qubit operations using trapped ions. Most trapped-ion quantum computers move qubits by manipulating electromagnetic fields, but they perform computational operations using lasers. Oxford Ionics figured out how to perform operations using electromagnetic fields, meaning more of their processing benefited from our ability to precisely manufacture circuitry (lasers were still needed for tasks like producing a readout of the qubits). And as we noted, it could perform these computational operations extremely effectively.

But Oxford Ionics never made a major announcement that would give us a good excuse to describe its technology in more detail. The company was ultimately acquired by IonQ, a competitor in the trapped-ion space.

Now, IonQ is building on what it gained from Oxford Ionics, announcing a new record for two-qubit gates: a fidelity greater than 99.99 percent, or an error rate below 0.01 percent. That could be critical for the company, as a low error rate for hardware qubits means fewer of them are needed to get good performance from error-corrected qubits.

But the details of the two-qubit gates are perhaps more interesting than the error rate. Two-qubit gates require bringing the two participating ions into close proximity, which often means moving them. That motion pumps a bit of energy into the system, raising the ions’ temperature and leaving them slightly more prone to errors. As a result, any movement of the ions is generally followed by cooling, in which lasers are used to bleed energy back out of the qubits.

This process, which involves two distinct cooling steps, is slow. So slow, in fact, that as much as two-thirds of the time spent in operations involves the hardware waiting around while recently moved ions are cooled back down. The new IonQ announcement includes a description of a method for performing two-qubit gates that doesn’t require the ions to be fully cooled, allowing one of the two cooling steps to be skipped entirely. Coupled with earlier work involving one-qubit gates, it raises the possibility that the entire machine could operate with its ions at a still very cold but slightly elevated temperature, doing away with that cooling step altogether.

That would shorten operation times and let researchers do more before the limit of a quantum system’s coherence is reached.

State of the art?

The last announcement comes from another trapped-ion company, Quantum Art. A couple of weeks back, it announced a collaboration with Nvidia that resulted in a more efficient compiler for operations on its hardware. On its own, this isn’t especially interesting. But it’s emblematic of a trend that’s worth noting, and it gives us an excuse to look at Quantum Art’s technology, which takes a distinct approach to boosting the efficiency of trapped-ion computation.

First, the trend: Nvidia’s interest in quantum computing. The company isn’t interested in the quantum aspects (at least not publicly); instead, it sees an opportunity to get further entrenched in high-performance computing. There are three areas where the computational capacity of GPUs can play a role here. One is small-scale modeling of quantum processors so that users can do some initial testing of algorithms without committing to paying for access to the real thing. Another is what Quantum Art is announcing: using GPUs as part of a compiler chain to do all the computations needed to find more efficient ways of executing an algorithm on specific quantum hardware.

Finally, there’s a potential role in error correction. Error correction involves some indirect measurements of a handful of hardware qubits to determine the most likely state that a larger collection (called a logical qubit) is in. This requires modeling a quantum system in real time, which is quite difficult—hence the computational demands that Nvidia hopes to meet. Regardless of the precise role, there has been a steady flow of announcements much like Quantum Art’s: a partnership with Nvidia that will keep the company’s hardware involved if the quantum technology takes off.

In Quantum Art’s case, that technology is a bit unusual. The trapped-ion companies we’ve covered so far are all taking different routes to the same place: moving one or two ions into a location where operations can be performed and then executing one- or two-qubit gates. Quantum Art’s approach is to perform gates with much larger collections of ions. At the compiler level, it would be akin to figuring out which qubits need a specific operation performed, clustering them together, and doing it all at once. Obviously, there are potential efficiency gains here.

The challenge would normally be moving so many qubits around to create these clusters. But Quantum Art uses lasers to “pin” ions in a row so they act to isolate the ones to their right from the ones to their left. Each cluster can then be operated on separately. In between operations, the pins can be moved to new locations, creating different clusters for the next set of operations. (Quantum Art is calling each cluster of ions a “core” and presenting this as multicore quantum computing.)
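Conceptually, the pinning scheme turns one long chain into several independently addressable groups, and moving the pins between rounds regroups the ions without shuttling each one individually. A toy sketch of that bookkeeping (my own illustration of the idea as described, not Quantum Art's software):

```python
# Conceptual sketch: "pinning" certain ions in a row splits the chain into
# independently addressable clusters ("cores"). Moving the pins between rounds
# regroups the ions for the next set of operations.
def split_into_cores(num_ions, pinned):
    """Split ion indices 0..num_ions-1 into cores separated by pinned ions."""
    cores, current = [], []
    for ion in range(num_ions):
        if ion in pinned:
            if current:
                cores.append(current)
            current = []
        else:
            current.append(ion)
    if current:
        cores.append(current)
    return cores

# Round 1: pins at ions 3 and 7 create three cores for parallel multi-qubit gates.
print(split_into_cores(10, pinned={3, 7}))   # [[0, 1, 2], [4, 5, 6], [8, 9]]
# Round 2: move the pins to regroup the ions for the next round of operations.
print(split_into_cores(10, pinned={2, 6}))   # [[0, 1], [3, 4, 5], [7, 8, 9]]
```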

At the moment, Quantum Art is behind some of its competitors in terms of qubit count and performing interesting demonstrations, and it’s not pledging to scale quite as fast. But the company’s founders are convinced that the complexity of doing so many individual operations and moving so many ions around will catch up with those competitors, while the added efficiency of multi-qubit gates will allow Quantum Art to scale better.

This is just a small sampling of all the announcements from this fall, but it should give you a sense of how rapidly the field is progressing—from technology demonstrations to identifying cases where quantum hardware has a real edge and exploring ways to sustain progress beyond those first successes.

Photo of John Timmer

John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.


Next-generation black hole imaging may help us understand gravity better

Right now, we probably don’t have the ability to detect these small changes in phenomena. However, that may change, as a next-generation version of the Event Horizon Telescope is being considered, along with a space-based telescope that would operate on similar principles. So the team (four researchers based in Shanghai and CERN) decided to repeat an analysis they did shortly before the Event Horizon Telescope went operational, and consider whether the next-gen hardware might be able to pick up features of the environment around the black hole that might discriminate among different theorized versions of gravity.

Theorists have been busy, and there are a lot of potential replacements for general relativity out there. So, rather than working their way through the list, they used a model of gravity (the parametric Konoplya–Rezzolla–Zhidenko metric) that isn’t specific to any given hypothesis. Instead, it allows some of its parameters to be changed, thus allowing the team to vary the behavior of gravity within some limits. To get a sense of the sort of differences that might be present, the researchers swapped two different parameters between zero and one, giving them four different options. Those results were compared to the Kerr metric, which is the standard general relativity version of the event horizon.

Small but clear differences

Using those five versions of gravity, the researchers modeled the three-dimensional environment near the event horizon with hydrodynamic simulations, including infalling matter, the magnetic fields it produces, and the jets of matter that those magnetic fields power.

The results resemble the sorts of images that the Event Horizon Telescope produced. These include a bright ring with substantial asymmetry, where one side is significantly brighter due to the rotation of the black hole. And while the differences among the variations of gravity are subtle, they’re there. One extreme version produced the smallest but brightest ring; another had a reduced contrast between the bright and dim sides of the ring. There were also differences in the width of the jets produced in these models.


New quantum hardware puts the mechanics in quantum mechanics


As a test case, the machine was used to test a model of superconductivity.

Quantum computers based on ions or atoms have one major advantage: The qubits themselves aren’t manufactured, so there’s no device-to-device variability among them. Every atom is the same and should perform similarly every time. And since the qubits themselves can be moved around, it’s theoretically possible to entangle any atom or ion with any other in the system, allowing for a lot of flexibility in how algorithms and error correction are performed.

This combination of consistent, high-fidelity performance with all-to-all connectivity has led many key demonstrations of quantum computing to be done on trapped-ion hardware. Unfortunately, the hardware has been held back a bit by relatively low qubit counts—a few dozen compared to the hundred or more seen in other technologies. But on Wednesday, a company called Quantinuum announced a new version of its trapped-ion hardware that significantly boosts the qubit count and uses some interesting technology to manage their operation.

Trapped-ion computing

Both neutral atom and trapped-ion computers store their qubits in the spin of the nucleus. That spin is somewhat shielded from the environment by the cloud of electrons around the nucleus, giving these qubits a relatively long coherence time. While neutral atoms are held in place by a network of lasers, trapped ions are manipulated via electromagnetic control based on the ion’s charge. This means that key components of the hardware can be built using standard electronic manufacturing, although lasers are still needed for manipulations and readout.

While the electronics are static—they stay wherever they were manufactured—they can be used to move the ions around. That means that as long as the trackways the ions can move on enable it, any two ions can be brought into close proximity and entangled. This all-to-all connectivity can enable more efficient implementation of algorithms performed directly on the hardware qubits or the use of error-correction codes that require a complicated geometry of connections. That’s one reason why Microsoft used a Quantinuum machine to demonstrate an error-correction code based on a tesseract.

But arranging the trackways so that any two qubits can be next to each other can become increasingly complicated. Moving ions around is a relatively slow process, so retrieving two ions from the far ends of a chip too often can cause a system to start pushing up against the coherence time of the qubits. In the long term, Quantinuum plans to build chips with a square grid reminiscent of the street layout of many cities. But doing so will require a mastery of controlling the flow of ions through four-way intersections.

And that’s what Quantinuum is doing in part with its new chip, named Helios. It has a single intersection that couples two ion-storage areas, enabling operations as ions slosh from one end of the chip to the other. And it comes with significantly more qubits than its earlier hardware, moving from 56 to 96 qubits without sacrificing performance. “We’ve kept and actually even improved the two qubit gate fidelity,” Quantinuum VP Jenni Strabley told Ars. “So we’re not seeing any degradation in the two-qubit gate fidelity as we go to larger and larger sizes.”

Doing the loop

The image below is taken using the fluorescence of the atoms in the hardware itself. As you can see, the layout is dominated by two features: A loop at the left and two legs extending to the right. They’re connected by a four-way intersection. The Quantinuum staff described this intersection as being central to the computer’s operation.

A black background on which a series of small blue dots trace out a circle and two parallel lines connected by an x-shaped junction.

The actual ions trace out the physical layout of the Helios system, featuring a storage ring and two legs that contain dedicated operation sites. Credit: Quantinuum

The system works by rotating the ions around the loop. As an ion reaches the intersection, the system chooses whether to kick it into one of the legs and, if so, which leg. “We spin that ring almost like a hard drive, really, and whenever the ion that we want to gate gets close to the junction, there’s a decision that happens: Either that ion goes [into the legs], or it kind of makes a little turn and goes back into the ring,” said David Hayes, Quantinuum’s director of Computational Design and Theory. “And you can make that decision with just a few electrodes that are right at that X there.”

Each leg has a region where operations can take place, so this system can ensure that the right qubits are present together in the operation zones for things like two-qubit gates. Once the operations are complete, the qubits can be moved into the leg storage regions, and new qubits can be shuffled in. When the legs fill up, the qubits can be sent back to the loop, and the process is restarted.
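In software terms, the scheduling problem looks something like the toy model below (my own sketch of the routing idea as described, not Quantinuum's control code): ions circulate around the ring, and each time one reaches the junction, the controller either diverts it into a leg because it is needed for an upcoming gate or sends it back around.

```python
# Toy model of the ring-and-legs routing: rotate the ring once, pulling the
# ions needed for upcoming gates into the legs when they pass the junction.
from collections import deque

def route(ring, wanted, leg_capacity=4):
    """Rotate the ring once, diverting wanted ions into the legs at the junction."""
    ring = deque(ring)
    legs = []
    for _ in range(len(ring)):
        ion = ring.popleft()          # this ion is now at the junction
        if ion in wanted and len(legs) < leg_capacity:
            legs.append(ion)          # kick it into a leg for gating
        else:
            ring.append(ion)          # send it back around the ring
    return list(ring), legs

ring, legs = route(ring=list(range(12)), wanted={2, 5, 9, 11})
print("in legs for the next gates:", legs)
print("still circulating:", ring)
```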

“You get less traffic jams if all the traffic is running one way going through the gate zones,” Hayes told Ars. “If you had to move them past each other, you would have to do kind of physical swaps, and you want to avoid that.”

Obviously, issuing all the commands to control the hardware will be quite challenging for anything but the simplest operations. That puts an increasing emphasis on the compilers that add a significant layer of abstraction between what you want a quantum computer to do and the actual hardware commands needed to implement it. Quantinuum has developed its own compiler to take user-generated code and produce something that the control system can convert into the sequence of commands needed.

The control system now incorporates a real-time engine that can read data from Helios and update the commands it issues based on the state of the qubits. Quantinuum has this portion of the system running on GPUs rather than requiring customized hardware.

Quantinuum’s SDK for users is called Guppy and is based on Python, which has been modified to allow users to describe what they’d like the system to do. Helios is being accompanied by a new version of Guppy that includes some traditional programming tools like FOR loops and IF-based conditionals. These will be critical for the sorts of things we want to do as we move toward error-corrected qubits. This includes testing for errors, fixing them if they’re present, or repeatedly attempting initialization until it succeeds without error.
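To make the role of those control structures concrete, here is a hypothetical sketch in plain Python. It is emphatically not Guppy's actual API, and the hardware calls are random stand-ins, but it shows the repeat-until-success initialization and conditional-correction patterns that real-time loops and conditionals enable.

```python
# Hypothetical sketch (NOT Guppy's actual API): the hardware calls below are
# random stand-ins so the control flow can run as ordinary Python.
import random

def prepare_qubit(q):
    """Stand-in for a reset/initialization operation on the hardware."""
    pass

def verify(q):
    """Stand-in for a mid-circuit verification measurement."""
    return random.random() > 0.2      # pretend initialization succeeds 80% of the time

def measure_syndrome(block):
    """Stand-in for an indirect error-detection measurement."""
    return random.choice([0, 0, 0, 1])

def apply_correction(block, syndrome):
    print(f"correcting block {block}, syndrome {syndrome}")

def initialize_with_verification(q, max_attempts=5):
    for _ in range(max_attempts):     # FOR loop: retry until the qubit verifies
        prepare_qubit(q)
        if verify(q):                 # IF conditional on a real-time measurement
            return True
    return False

print("initialization verified:", initialize_with_verification(q=0))
syndrome = measure_syndrome(block=0)
if syndrome != 0:                     # apply a fix only when an error is flagged
    apply_correction(block=0, syndrome=syndrome)
```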

Hayes said the new version is also moving toward error correction. Thanks to Guppy’s ability to dynamically reassign qubits, Helios will be able to operate as a machine with 94 qubits while detecting errors on any of them. Alternatively, the 96 hardware qubits can be configured as a single unit that hosts 48 error-corrected qubits. “It’s actually a concatenated code,” Hayes told Ars. “You take two error detection codes and weave them together… it’s a single code block, but it has 48 logical qubits housed inside of it.” (Hayes said it’s a distance-four code, meaning it can fix up to two errors that occur simultaneously.)

Tackling superconductivity

While Quantinuum hardware has always had low error rates relative to most of its competitors, there was only so much you could do with 56 qubits. With 96 now at their disposal, researchers at the company decided to build a quantum implementation of a model (called the Fermi-Hubbard model) that’s meant to help study the electron pairing that takes place during the transition to superconductivity.
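For reference, the single-band Fermi-Hubbard model has a deceptively simple Hamiltonian (this is the standard textbook form, not a detail taken from Quantinuum's paper):

\[
H \;=\; -t \sum_{\langle i,j \rangle,\,\sigma} \left( c^{\dagger}_{i\sigma} c_{j\sigma} + c^{\dagger}_{j\sigma} c_{i\sigma} \right) \;+\; U \sum_{i} n_{i\uparrow} n_{i\downarrow},
\]

where the first term lets electrons hop between neighboring lattice sites and the second assigns an energy cost (or gain, for attractive U) when two electrons of opposite spin share a site. Everything else about the electrons' mutual repulsion is thrown away, which is the simplification Dreyer alludes to below.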

“There are definitely terms that the model doesn’t capture,” Quantinuum’s Henrik Dreyer acknowledged. “They neglect the electric repulsion that [the electrons] still have—I mean, they’re still negatively charged; they are still repelling. On the other hand, I should say that this Fermi-Hubbard model—it has many of the features that a superconductor has.”

Superconductivity occurs when electrons join to form what are called Cooper pairs, overcoming their normal repulsion. And the model can tell that apart from normal conductivity in the same material.

“You ask the question ‘What’s the chance that one of the charged particles spontaneously disappears because of quantum fluctuations and goes over here?’” Dreyer said, describing what happens when simulating a conductor. “What people do in superconductivity is they take this concept, but instead of asking what’s the chance of a single-charge particle to tunnel over there spontaneously, they’re asking what is the chance of a pair to tunnel spontaneously?”

Even in its simplified form, however, it’s still a model of a quantum system, with all the computational complexity that comes with that. So the Quantinuum team modeled a few systems that classical computers struggle with. One was simply looking at a larger grid of atoms than most classical simulations have done; another expanded the grid in an additional dimension, modeling layers of a material. Perhaps the most complicated simulation involved what happens when a laser pulse of the right wavelength hits a superconductor at room temperature, an event that briefly induces a superconducting state.

And the system produced results, even without error correction. “It’s maybe a technical point, but I think it’s a very important technical point, which is [that] the circuits that we ran, they all had errors,” Dreyer told Ars. “Maybe on the average of three or so errors, and for some reason, that is not very fully understood for this application, it doesn’t matter. You still get almost the perfect result in some of these cases.”

That said, he also indicated that having higher-fidelity hardware would help the team do a better job of putting the system in a ground state or running the simulation for longer. But those will have to wait for future hardware.

What’s next

If you look at Quantinuum’s roadmap for that future hardware, Helios would appear to be the last of its kind. It and earlier versions of the processors have loops and large straight stretches; everything in the future features a grid of squares. But both Strabley and Hayes said that Helios has several key transitional features. “Those ions are moving through that junction many, many times over the course of a circuit,” Strabley told Ars. “And so it’s really enabled us to work on the reliability of the junction, and that will translate into the large-scale systems.”

Image of a product roadmap, with years from 2020 to 2029 noted across the top. There are five processors arrayed from left to right, each with increasingly complex geometry.

Helios sits at the pivot between the simple geometries of earlier Quantinuum processors and the grids of future designs. Credit: Quantinuum

The collection of squares seen in future processors will also allow the same sorts of operations that are currently done with the loop and legs of Helios. Some squares can serve as the equivalent of a loop in terms of storage and sorting, while some of the straight lines nearby can be used for operations.

“What will be common to both of them is kind of the general concept that you can have a storage and sorting region and then gating regions on the side and they’re separated from one another,” Hayes said. “It’s not public yet, but that’s the direction we’re heading: a storage region where you can do really fast sorting in these 2D grids, and then gating regions that have parallelizable logical operations.”

In the meantime, we’re likely to see improvements made to Helios—ideas that didn’t quite make today’s release. “There’s always one more improvement that people want to make, and I’m the person that says, ‘No, we’re going to go now. Put this on the market, and people are going to go use it,’” Strabley said. “So there is a long list of things that we’re going to add to improve the performance. So expect that over the course of Helios, the performance is going to get better and better and better.”

That performance is likely to be used for the sort of initial work done on superconductivity or the algorithm recently described by Google, which is at or a bit beyond what classical computers can manage and may start providing some useful insights. But it will still be a generation or two before we start seeing quantum computing fulfill some of its promise.


Google has a useful quantum algorithm that outperforms a supercomputer


An approach it calls “quantum echoes” takes 13,000 times longer on a supercomputer.

The work relied on Google’s current-generation quantum hardware, the Willow chip. Credit: Google

A few years back, Google made waves when it claimed that some of its hardware had achieved quantum supremacy, performing operations that would be effectively impossible to simulate on a classical computer. That claim didn’t hold up especially well, as mathematicians later developed methods to help classical computers catch up, leading the company to repeat the work on an improved processor.

While this back-and-forth was unfolding, the field became less focused on quantum supremacy and more on two additional measures of success. The first is quantum utility, in which a quantum computer performs computations that are useful in some practical way. The second is quantum advantage, in which a quantum system completes calculations in a fraction of the time it would take a typical computer. (IBM and a startup called Pasqal have published a useful discussion about what would be required to verifiably demonstrate a quantum advantage.)

Today, Google and a large collection of academic collaborators are publishing a paper describing a computational approach that demonstrates a quantum advantage compared to current algorithms—and may actually help us achieve something useful.

Out of time

Google’s latest effort centers on something it’s calling “quantum echoes.” The approach could be described as a series of operations on the hardware qubits that make up its machine. These qubits hold a single bit of quantum information in a superposition between two values, with probabilities of finding the qubit in one value or the other when it’s measured. Each qubit is entangled with its neighbors, allowing its probability to influence those of all the qubits around it. The operations that allow computation, called gates, are ways of manipulating these probabilities. Most current hardware, including Google’s, performs manipulations on one or two qubits at a time (termed one- and two-qubit gates, respectively).

For quantum echoes, the operations involved performing a set of two-qubit gates, altering the state of the system, and later performing the reverse set of gates. On its own, this would return the system to its original state. But for quantum echoes, Google inserts single-qubit gates performed with a randomized parameter. This alters the state of the system before the reverse operations take place, ensuring that the system won’t return to exactly where it started. That explains the “echoes” portion of the name: You’re sending an imperfect copy back toward where things began, much like an echo involves the imperfect reversal of sound waves.

That’s what the process looks like in terms of operations performed on the quantum hardware. But it’s probably more informative to think of it in terms of a quantum system’s behavior. As Google’s Tim O’Brien explained, “You evolve the system forward in time, then you apply a small butterfly perturbation, and then you evolve the system backward in time.” The forward evolution is the first set of two-qubit gates, the small perturbation is the randomized one-qubit gate, and the second set of two-qubit gates is the equivalent of sending the system backward in time.

Because this is a quantum system, however, strange things happen. “On a quantum computer, these forward and backward evolutions, they interfere with each other,” O’Brien said. One way to think about that interference is in terms of probabilities. The system has multiple paths between its start point and the point of reflection—where it goes from evolving forward in time to evolving backward—and from that reflection point back to a final state. Each of those paths has a probability associated with it. And since we’re talking about quantum mechanics, those paths can interfere with each other, increasing some probabilities at the expense of others. That interference ultimately determines where the system ends up.

(Technically, these are termed “out of time order correlations,” or OTOCs. If you read the Nature paper describing this work, prepare to see that term a lot.)
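Schematically, an OTOC compares forward evolution, a small perturbation, and backward evolution in a single expectation value (this is the generic textbook form, not notation taken from the Nature paper):

\[
C(t) \;=\; \big\langle\, W^{\dagger}(t)\, V^{\dagger}\, W(t)\, V \,\big\rangle,
\qquad
W(t) \;=\; U^{\dagger}(t)\, W\, U(t),
\]

where U(t) is the forward time evolution (the first set of two-qubit gates), V is the butterfly perturbation (the randomized single-qubit gate), and the U†(t) factors supply the backward evolution. How far C(t) falls from its unperturbed value measures how thoroughly the perturbation has scrambled through the system.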

Demonstrating advantage

So how do you turn quantum echoes into an algorithm? On its own, a single “echo” can’t tell you much about the system—the probabilities ensure that any two runs might show different behaviors. But if you repeat the operations multiple times, you can begin to understand the details of this quantum interference. And performing the operations on a quantum computer ensures that it’s easy to simply rerun the operations with different random one-qubit gates and get many instances of the initial and final states—and thus a sense of the probability distributions involved.

This is also where Google’s quantum advantage comes from. Everyone involved agrees that the precise behavior of a quantum echo of moderate complexity can be modeled using any leading supercomputer. But doing so is very time-consuming, so repeating those simulations a few times becomes unrealistic. The paper estimates that a measurement that took its quantum computer 2.1 hours to perform would take the Frontier supercomputer approximately 3.2 years. Unless someone devises a far better classical algorithm than what we have today, this represents a pretty solid quantum advantage.
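That comparison is where the roughly 13,000-fold figure quoted above comes from:

\[
3.2\ \text{years} \;\approx\; 3.2 \times 8{,}766\ \text{hours} \;\approx\; 28{,}000\ \text{hours},
\qquad
\frac{28{,}000\ \text{hours}}{2.1\ \text{hours}} \;\approx\; 1.3\times 10^{4}.
\]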

But is it a useful algorithm? The repeated sampling can act a bit like the Monte Carlo sampling done to explore the behavior of a wide variety of physical systems. Typically, however, we don’t view algorithms as modeling the behavior of the underlying hardware they’re being run on; instead, they’re meant to model some other physical system we’re interested in. That’s where Google’s announcement stands apart from its earlier work—the company believes it has identified an interesting real-world physical system with behaviors that the quantum echoes can help us understand.

That system is a small molecule in a Nuclear Magnetic Resonance (NMR) machine. In a second draft paper being published on the arXiv later today, Google has collaborated with a large collection of NMR experts to explore that use.

From computers to molecules

NMR is based on the fact that the nucleus of every atom has a quantum property called spin. When nuclei are held near each other, such as when they’re in the same molecule, these spins can influence one another. NMR uses magnetic fields and photons to manipulate these spins and can be used to infer structural details, like how far apart two given atoms are. But as molecules get larger, these spin networks can extend over greater distances and become increasingly complicated to model. So NMR has been limited to focusing on the interactions of relatively nearby spins.

For this work, though, the researchers figured out how to use an NMR machine to create the physical equivalent of a quantum echo in a molecule. The work involved synthesizing the molecule with a specific isotope of carbon (carbon-13) in a known location in the molecule. That isotope could be used as the source of a signal that propagates through the network of spins formed by the molecule’s atoms.

“The OTOC experiment is based on a many-body echo, in which polarization initially localized on a target spin migrates through the spin network, before a Hamiltonian-engineered time-reversal refocuses to the initial state,” the team wrote. “This refocusing is sensitive to perturbations on distant butterfly spins, which allows one to measure the extent of polarization propagation through the spin network.”

Naturally, something this complicated needed a catchy nickname. The team came up with TARDIS, or Time-Accurate Reversal of Dipolar InteractionS. While that name captures the “out of time order” aspect of OTOC, it’s simply a set of control pulses sent to the NMR sample that starts a perturbation of the molecule’s network of nuclear spins. A second set of pulses then reflects an echo back to the source.

The reflections that return are imperfect, with noise coming from two sources. The first is simply imperfections in the control sequence, a limitation of the NMR hardware. But the second is the influence of fluctuations happening in distant atoms along the spin network. These happen at a certain frequency at random, or the researchers could insert a fluctuation by targeting a specific part of the molecule with randomized control signals.

The influence of what’s going on in these distant spins could allow us to use quantum echoes to tease out structural information at greater distances than we currently do with NMR. But to do so, we need an accurate model of how the echoes will propagate through the molecule. And again, that’s difficult to do with classical computations. But it’s very much within the capabilities of quantum computing, which the paper demonstrates.

Where things stand

For now, the team stuck to demonstrations on very simple molecules, making this work mostly a proof of concept. But the researchers are optimistic that there are many ways the system could be used to extract structural information from molecules at distances that are currently unobtainable using NMR. In the paper’s discussion section, they list a lot of potential upsides worth exploring, and there are plenty of smart people who would love to find new ways of using their NMR machines, so the field is likely to figure out pretty quickly which of these approaches turns out to be practically useful.

The fact that the demonstrations were done with small molecules, however, means that the modeling run on the quantum computer could also have been done on classical hardware (it only required 15 hardware qubits). So Google is claiming both quantum advantage and quantum utility, but not at the same time. The sorts of complex, long-distance interactions that would be out of range of classical simulation are still a bit beyond the reach of the current quantum hardware. O’Brien estimated that the hardware’s fidelity would have to improve by a factor of three or four to model molecules that are beyond classical simulation.

The quantum advantage issue should also be seen as a work in progress. Google has collaborated with enough researchers at enough institutions that there’s unlikely to be a major improvement in algorithms that could allow classical computers to catch up. Until the community as a whole has some time to digest the announcement, though, we shouldn’t take that as a given.

The other issue is verifiability. Some quantum algorithms will produce results that can be easily verified on classical hardware—situations where it’s hard to calculate the right result but easy to confirm a correct answer. Quantum echoes isn’t one of those, so we’ll need another quantum computer to verify the behavior Google has described.

But Google told Ars nothing is up to the task yet. “No other quantum processor currently matches both the error rates and number of qubits of our system, so our quantum computer is the only one capable of doing this at present,” the company said. (For context, Google says that the algorithm was run on up to 65 qubits, but the chip has 105 qubits total.)

There’s a good chance that other companies would disagree with that contention, but it hasn’t been possible to ask them ahead of the paper’s release.

In any case, even if this claim proves controversial, Google’s Michel Devoret, a recent Nobel winner, hinted that we shouldn’t have long to wait for additional ones. “We have other algorithms in the pipeline, so we will hopefully see other interesting quantum algorithms,” Devoret said.

Nature, 2025. DOI: 10.1038/s41586-025-09526-6  (About DOIs).


How Easter Island’s giant statues “walked” to their final platforms


Workers with ropes could make the moai “walk” in zig-zag motion along roads tailor-made for the purpose.

Easter Island is famous for its giant monumental statues, called moai, built some 800 years ago and typically mounted on platforms called ahu. Scholars have puzzled over the moai on Easter Island for decades, pondering their cultural significance, as well as how a Stone Age culture managed to carve and transport statues weighing as much as 92 tons. One hypothesis, championed by archaeologist Carl Lipo of Binghamton University, among others, is that the statues were transported in a vertical position, with workers using ropes to essentially “walk” the moai onto their platforms.

The oral traditions of the people of Rapa Nui certainly include references to the moai “walking” from the quarry to their platforms, such as a song that tells of an early ancestor who made the statues walk. While there have been rudimentary field tests showing it might have been possible, the hypothesis has also generated a fair amount of criticism. So Lipo has co-authored a new paper published in the Journal of Archaeological Science offering fresh experimental evidence of “walking” moai, based on 3D modeling of the physics and new field tests to recreate that motion.

The first Europeans arrived in the 17th century and found only a few thousand inhabitants on the tiny island (just 14 by 7 miles across) thousands of miles away from any other land. In order to explain the presence of so many moai, the assumption has been that the island was once home to tens of thousands of people. But Lipo thought perhaps the feat could be accomplished with fewer workers. In 2012, Lipo and his colleague, Terry Hunt of the University of Arizona, showed that you could transport a 10-foot, 5-ton moai a few hundred yards with just 18 people and three strong ropes by employing a rocking motion.

In 2018, Lipo followed up with an intriguing hypothesis for how the islanders placed red hats on top of some moai; those can weigh up to 13 tons. He suggested the inhabitants used ropes to roll the hats up a ramp. Lipo and his team later concluded (based on quantitative spatial modeling) that the islanders likely chose the statues’ locations based on the availability of fresh water sources, per a 2019 paper in PLOS One.

The 2012 experiment demonstrated proof of principle, so why is Lipo revisiting it now? “I always felt that the [original] experiment was disconnected to some degree of theory—that we didn’t have particular expectations about numbers of people, rate of transport, road slope that could be walked, and so on,” Lipo told Ars. There were also time constraints because the attempt was being filmed for a NOVA documentary.

“That experiment was basically a test to see if we could make it happen or not,” he explained. “Fortunately, we did, and our joy in doing so is pretty well represented by our hoots and hollers when it started to walk with such limited efforts. Some of the limitation of the work was driven by the nature of TV. [The film crew] just wanted us—in just a day and a half—to give it a shot. It was 4:30 on the last day when it finally worked, so we really didn’t get a lot of time to explore variability. We also didn’t have any particular predictions to test.”

Example of a road moai that fell and was abandoned after an attempt to re-erect it by excavating under its base, leaving it partially buried at an angle. Credit: Carl Lipo

This time around, “We wanted to explore a bit of the physics: to show that what we did was pretty easily predicted by the physical properties of the moai—its shape, size, height, number of people on ropes, etc.—and that our success in terms of team size and rate of walking was consistent with predictions,” said Lipo. “This enables us to address one of the central critiques that always comes up: ‘Well, you did this with a 5-ton version that was 10 feet tall, but it would never work with a 30-ft-tall version that weighs 30 tons or more.'”

All about that base

You can have ahu (platforms) without moai (statues), and moai without ahu; the latter are usually found along the roads leading to ahu and were likely abandoned mid-transport, never reaching their destinations. Lipo and Hunt have amassed a database of 962 moai across the island, compiled through field surveys and photogrammetric documentation. They were particularly interested in 62 statues located along ancient transport roads that seemed to have been abandoned where they fell.

Their analysis revealed that these road moai had significantly wider bases relative to shoulder width, compared to statues mounted on platforms. This creates a stable foundation and lowers the center of mass, making the statue well suited to the side-to-side rocking of walking transport without toppling over. Platform statues, by contrast, have shoulders wider than the base for a more top-heavy configuration.

The road moai also have a consistent and pronounced forward lean of between 6 and 15 degrees from vertical, which moves the center of mass close to or just beyond the base’s front edge. Lipo and Hunt think this was due to careful engineering, not coincidence. The lean is not conducive to stable vertical display, but it is a boon during walking transport, because it causes the statue to fall forward when tilted laterally, with the rounded front base edge serving as a crucial pivot point. So every lateral rocking motion results in a forward “step.”
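
For a feel of why a lean of that size matters, here is a rough geometry check. It is only a sketch: the formula (horizontal shift = height of the center of mass × tangent of the lean angle) is basic trigonometry, but the statue dimensions below are placeholders, not measurements from the paper.

```python
import math

def com_shift(com_height_m: float, lean_deg: float) -> float:
    """Horizontal forward shift of the center of mass for a given lean angle."""
    return com_height_m * math.tan(math.radians(lean_deg))

# Placeholder dimensions for a road moai a few meters tall (illustrative only).
com_height = 1.3   # assumed height of the center of mass above the base, in meters
front_edge = 0.35  # assumed distance from the base centerline to its front edge, in meters

for lean in (6, 10, 15):
    shift = com_shift(com_height, lean)
    print(f"{lean:>2} degree lean -> center of mass shifts {shift:.2f} m toward the {front_edge} m front edge")
```

With numbers in that ballpark, the top of the reported range puts the center of mass essentially at the front edge of the base—exactly where you would want it for a controlled forward topple onto the rounded pivot.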

Per the authors, there is strong archaeological evidence that carvers reworked the statues once they arrived at their platform destinations, removing material from the front of the base to eliminate the lean. This shifted the center of mass over the base area for a stable upright position. The road moai even lack the carved eye sockets designed to hold white coral eyes with obsidian or red scoria for pupils—a final post-transport step once the statues had been mounted on their platforms.

Based on 3D modeling, Lipo and his team created a precisely scaled replica of one of the road moai, weighing 4.35 metric tons with the same proportions and mass distribution of the original statue. “Of course, we’d love to build a 30-foot-tall version, but the physical impossibility of doing so makes it a challenging task, nor is it entirely necessary,” said Lipo. “Through physics, we can now predict how many people it would take and how it would be done. That is key.”

Lipo’s team created 3D models of moai to determine the unique characteristics that made them able to be “walked” across Rapa Nui. Credit: Carl Lipo

The new field trials required 18 people, four on each lateral rope and 10 on a rear rope, to achieve the side-to-side walking motion, and they were efficient enough in coordinating their efforts to move the statue forward 100 meters in just 40 minutes. That’s because the method operates on basic pendulum dynamics, per the authors, which minimizes friction between the base and the ground. It’s also a technique that exploits the gradual build-up of amplitude, which “suggests a sophisticated understanding of resonance principles,” Lipo and Hunt wrote.

So the actual statues could have been moved several kilometers over the course of weeks with only modest-sized crews of 20 to 50 people, i.e., roughly the size of an extended family or “small lineage group” on Easter Island. Once the crew gets the statue rocking side to side—which can require between 15 and 60 people, depending on the size and weight of the moai—the resulting oscillation needs only minimal energy input from a smaller team of rope handlers to maintain that motion. They mostly provide guidance.
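
The reported trial rate makes that claim easy to sanity-check. The sketch below is back-of-the-envelope arithmetic only; the hours of walking per day are an assumption, not a figure from the paper.

```python
# Back-of-the-envelope check of the reported walking rate.
distance_m = 100                               # field trial: 100 meters...
time_min = 40                                  # ...covered in 40 minutes
rate_m_per_hr = distance_m / (time_min / 60)   # 150 m per hour

hours_per_day = 3                              # assumed hours of coordinated walking per day
daily_m = rate_m_per_hr * hours_per_day        # 450 m per day

for km in (5, 10, 15):
    days = km * 1000 / daily_m
    print(f"{km} km at {daily_m:.0f} m/day -> roughly {days:.0f} working days")
```

Even with generous allowances for rough ground, rest days, and pauses along segmented roads, a journey of several kilometers lands comfortably in the range of weeks.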

Lipo was not the first to test the walking hypothesis. Earlier work includes that of Czech experimental archaeologist Pavel Pavel, who conducted similar practical experiments on Easter Island in the 1980s after being inspired by Thor Heyerdahl’s Kon Tiki. (Heyerdahl even participated in the experiments.) Pavel’s team was able to demonstrate a kind of “shuffling” motion, and he concluded that just 16 men and one leader were sufficient to transport the statues.

Per Lipo and Hunt, Pavel’s demonstration didn’t result in broad acceptance of the walking hypothesis because it still required a huge amount of effort to tilt the statue, producing more of a twisting motion rather than efficient forward movement. This would only have moved a large statue 100 meters a day under ideal conditions. The base was also likely to be damaged from friction with the ground. Lipo and Hunt maintain this is because Pavel (and others who later tried to reproduce his efforts) used the wrong form of moai for those earlier field tests: those erected on the platforms, already modified for vertical stability and permanent display, and not the road moai with shapes more conducive to vertical transport.

“Pavel deserves recognition for taking oral traditions seriously and challenging the dominant assumption of horizontal transport, a move that invited ridicule from established scholars,” Lipo and Hunt wrote. “His experiments suggested that vertical transport was feasible and consistent with cultural memory. Our contribution builds on this by showing that ancestral engineers intentionally designed statues for walking. Those statues were later modified to stand erect on ceremonial platforms, a transformation that effectively erased the morphological features essential for movement.”

The evidence of the roadways

Lipo and Hunt also analyzed the roadways, noting that these ancient roadbeds had concave cross sections that would have been problematic for moving the statues horizontally using wooden rollers or frames laid across those roads. But that concave shape would help constrain rocking movement during vertical transport. And the moai roads were remarkably level, with slopes averaging 2 to 3 percent. For the occasional steeper slopes, such as walking a moai up a ramp to the top of an ahu, Lipo and Hunt’s field experiments showed that these could be navigated successfully through controlled stepping.

Furthermore, the distribution pattern of the roadways is consistent with the road moai being left due to mechanical failure. “Arguments that the moai were placed ceremonially in preparation for quarrying have become more common,” said Lipo. “The algorithm there is to claim that positions are ritual, without presenting anything that is falsifiable. There is no reason why the places the statues fell due to mechanical reasons couldn’t later become ‘ritual,’ in the same way that everything on the island could be claimed to be ritual—a circular argument. But to argue that they were placed there purposefully for ritual purposes demands framing the explanation in a way that is falsifiable.”

Schematic representation of the moai transport method using coordinated rope pulling to achieve a “walking” motion. Credit: Carl Lipo and Terry Hunt, 2025

“The only line of evidence that is presented in this way is the presence of ‘platforms’ that were found beneath the base of one moai, which is indeed intriguing,” Lipo continued. “However, those platforms can be explained in other ways, given that the moai certainly weren’t moved from the quarry to the ahu in one single event. They were paused along the way, as is clear from the fact that the roads appear to have been constructed in segments with different features. Their construction appears to be part of the overall transport process.”

Lipo’s work has received a fair share of criticism from other scholars over the years, and his and Hunt’s paper includes a substantial section rebutting the most common of those critiques. “Archaeologists tend to reject (in practice) the idea that the discipline can construct cumulative knowledge,” said Lipo. “In the case of moai transport, we’ve strived to assemble as much empirical evidence as possible and have forwarded an explanation that best accounts for what we can observe. Challenges to these ideas, however, do not come from additional studies with new data but rather just new assertions.”

“This leads the public to believe that we (as a discipline) can never really figure anything out and are always going to be a speculative enterprise, spinning yarns and arguing with each other,” Lipo continued. “With the erosion of trust in science, this is fairly catastrophic to archaeology as a whole but also the whole scientific enterprise. Summarizing the results in the way we do here is an attempt to point out that we can build falsifiable accounts and can make contributions to cumulative knowledge that have empirical consequences—even with something as remarkable as the transport of moai.”

Experimental archaeology is a relatively new field that some believe could be the future of archaeology. “I think experimental archaeology has potential when it’s tied to physics and chemistry,” said Lipo. “It’s not just recreating something and then arguing it was done in the same way in the past. Physics and chemistry are our time machines, allowing us to explain why things are the way they are in the present in terms of the events that occurred in the past. The more we can link the theory needed to explain the present, the better we can explain the past.”

Journal of Archaeological Science, 2025. DOI: 10.1016/j.jas.2025.106383  (About DOIs).

Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.

How Easter Island’s giant statues “walked” to their final platforms Read More »

floating-electrons-on-a-sea-of-helium

Floating electrons on a sea of helium

By now, a handful of technologies are leading contenders for producing a useful quantum computer. Companies have used them to build machines with dozens to hundreds of qubits, the error rates are coming down, and they’ve largely shifted from worrying about basic scientific problems to dealing with engineering challenges.

Yet even at this apparently late date in the field’s development, there are companies that are still developing entirely new qubit technologies, betting the company that they have identified something that will let them scale in ways that enable a come-from-behind story. Recently, one of those companies published a paper that describes the physics of their qubit system, which involves lone electrons floating on top of liquid helium.

Trapping single electrons

So how do you get an electron to float on top of helium? To find out, Ars spoke with Johannes Pollanen, the chief scientific officer of EeroQ, the company that accomplished the new work. He said that it’s actually old physics, with the first demonstrations of it having been done half a century ago.

“If you bring a charged particle like an electron near the surface, because the helium is dielectric, it’ll create a small image charge underneath in the liquid,” said Pollanen. “A little positive charge, much weaker than the electron charge, but there’ll be a little positive image there. And then the electron will naturally be bound to its own image. It’ll just see that positive charge and kind of want to move toward it, but it can’t get to it, because the helium is completely chemically inert, there are no free spaces for electrons to go.”
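
The “image charge” Pollanen describes is standard electrostatics, and it is easy to see just how weak the attraction is. The sketch below is a textbook estimate, not a calculation from the EeroQ paper; the dielectric constant of liquid helium is taken as roughly 1.06.

```python
# Textbook electrostatics sketch: a point charge q near a dielectric half-space
# induces an image charge of opposite sign,
#   q_image = -q * (eps - 1) / (eps + 1)
eps_helium = 1.057   # approximate relative permittivity of liquid helium

image_fraction = (eps_helium - 1) / (eps_helium + 1)
print(f"Image charge is about {image_fraction:.3f} of an electron charge")  # ~0.03 e
```

That few-percent image charge is enough to bind the electron weakly to the surface from above, while—as Pollanen notes—the chemically inert helium gives the electron nowhere to go below the surface.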

Obviously, to get the helium liquid in the first place requires extremely low temperatures. But it can actually remain liquid up to temperatures of 4 Kelvin, which doesn’t require the extreme refrigeration technologies needed for things like transmons. Those temperatures also provide a natural vacuum, since pretty much anything else will also condense out onto the walls of the container.

The chip itself, along with diagrams of its organization. The trap is set by the gold electrode on the left. Dark channels allow liquid helium and electrons to flow into and out of the trap. And the bluish electrodes at the top and bottom read the presence of the electrons. Credit: EeroQ

Liquid helium is also a superfluid, meaning it flows without viscosity. This allows it to easily flow up tiny channels cut into the surface of silicon chips that the company used for its experiments. A tungsten filament next to the chip was used to load the surface of the helium with electrons at what you might consider the equivalent of a storage basin.

Floating electrons on a sea of helium Read More »

2025-nobel-prize-in-physics-awarded-for-macroscale-quantum-tunneling

2025 Nobel Prize in Physics awarded for macroscale quantum tunneling


John Clarke, Michel H. Devoret, and John Martinis built an electrical circuit-based oscillator on a microchip.

A device consisting of four transmon qubits, four quantum buses, and four readout resonators fabricated by IBM in 2017. Credit: Jay M. Gambetta, Jerry M. Chow & Matthias Steffen/CC BY 4.0

The 2025 Nobel Prize in Physics has been awarded to John Clarke, Michel H. Devoret, and John M. Martinis “for the discovery of macroscopic quantum tunneling and energy quantization in an electrical circuit.” The Nobel committee said during a media briefing that the laureates’ work provides opportunities to develop “the next generation of quantum technology, including quantum cryptography, quantum computers, and quantum sensors.” The three men will split the $1.1 million (11 million Swedish kronor) prize money. The presentation ceremony will take place in Stockholm on December 10, 2025.

“To put it mildly, it was the surprise of my life,” Clarke told reporters by phone during this morning’s press conference. “Our discovery in some ways is the basis of quantum computing. Exactly at this moment where this fits in is not entirely clear to me. One of the underlying reasons that cellphones work is because of all this work.”

When physicists began delving into the strange new realm of subatomic particles in the early 20th century, they discovered a realm where the old, deterministic laws of classical physics no longer apply. Instead, uncertainty reigns supreme. It is a world governed not by absolutes, but by probabilities, where events that would seem impossible on the macroscale occur on a regular basis.

For instance, subatomic particles can “tunnel” through seemingly impenetrable energy barriers. Imagine the electron as a water wave rolling toward a tall barrier. Unlike water, even when the wave is too low to crest the barrier, there is still a small probability that the electron will seep through to the other side.

This neat little trick has been experimentally verified many times. In the 1950s, physicists devised a system in which the flow of electrons would hit an energy barrier and stop because they lacked sufficient energy to surmount that obstacle. But some electrons didn’t follow the established rules of behavior. They simply tunneled right through the energy barrier.
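
The effect is also exquisitely sensitive to the barrier itself. As a rough illustration—a standard textbook estimate, not a calculation from the Nobel-winning work—the probability of an electron slipping through a simple rectangular barrier falls off exponentially with the barrier’s width and height:

```python
import math

# Rough WKB-style estimate: the probability of an electron tunneling through a
# rectangular barrier of height V (above its energy) and width L scales as
#   P ~ exp(-2 * kappa * L), with kappa = sqrt(2 * m * V) / hbar.
hbar = 1.054571817e-34   # J*s
m_e = 9.1093837e-31      # kg
eV = 1.602176634e-19     # J

def tunneling_probability(barrier_eV: float, width_nm: float) -> float:
    kappa = math.sqrt(2 * m_e * barrier_eV * eV) / hbar   # 1/m
    return math.exp(-2 * kappa * width_nm * 1e-9)

# Illustrative numbers only: a 1 eV barrier of varying width.
for width_nm in (0.5, 1.0, 2.0):
    print(f"{width_nm} nm barrier -> P ~ {tunneling_probability(1.0, width_nm):.1e}")
```

Doubling the width of a nanometer-scale barrier drops the odds by several orders of magnitude, which is why tunneling is ubiquitous at atomic scales and essentially invisible in everyday life.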

(l-r): John Clarke, Michel H. Devoret, and John M. Martinis. Credit: Niklas Elmehed/Nobel Prize Outreach

From subatomic to the macroscale

Clarke, Devoret, and Martinis were the first to demonstrate that quantum effects, such as quantum tunneling and energy quantization, can operate on macroscopic scales, not just one particle at a time.

After earning his PhD from the University of Cambridge, Clarke came to the University of California, Berkeley, as a postdoc, eventually joining the faculty in 1969. By the mid-1980s, Devoret and Martinis had joined Clarke’s lab as a postdoc and graduate student, respectively. The trio decided to look for evidence of macroscopic quantum tunneling using a specialized circuit called a Josephson junction—a macroscopic device that takes advantage of a tunneling effect that is now widely used in quantum computing, quantum sensing, and cryptography.

A Josephson junction—named after British physicist Brian Josephson, who won the 1973 Nobel Prize in physics—is basically two pieces of superconductor separated by a thin insulating barrier. Despite this small gap between the two superconductors, electrons can still tunnel through the insulator and create a current. That occurs at sufficiently low temperatures, when the junction becomes superconducting as electrons form so-called “Cooper pairs.”

The team built an electrical circuit-based oscillator on a microchip measuring about 1 centimeter in size—essentially a quantum version of the classic pendulum. Their biggest challenge was figuring out how to reduce the noise in their experimental apparatus. For their experiments, they first fed a weak current into the junction and measured the voltage—initially zero. Then they increased the current and measured how long it took for the system to tunnel out of its trapped, zero-voltage state and produce a voltage.
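
The pendulum analogy can be made a bit more concrete. In the standard textbook picture (a sketch, not a recipe taken from the team’s paper), a current-biased junction behaves like a particle in a “tilted washboard” potential: as long as the phase across the junction stays trapped in one of the washboard’s dips, the voltage is zero, and raising the bias current tilts the washboard and shrinks the barrier holding the phase in place.

```python
import math

# Textbook "tilted washboard" picture for a current-biased Josephson junction:
#   U(phi) = -E_J * (cos(phi) + (I / Ic) * phi)
# The barrier confining the phase shrinks as the bias current I approaches the
# critical current Ic, making escape (and hence a voltage) ever more likely.
def barrier_over_EJ(bias_ratio: float) -> float:
    """Barrier height divided by the Josephson energy E_J, for x = I/Ic < 1."""
    x = bias_ratio
    return 2 * (math.sqrt(1 - x**2) - x * math.acos(x))

for x in (0.5, 0.9, 0.99):
    print(f"I/Ic = {x:<4} -> barrier is about {barrier_over_EJ(x):.3f} E_J")
```

Running the bias current close to the critical current makes the barrier tiny, which is what lets experimenters watch the system escape on laboratory timescales.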

Credit: Johan Jarnestad/The Royal Swedish Academy of Sciences

They took many measurements and found that the average current increases as the device’s temperature falls, as expected. But at some point, the temperature got so low that the average current became independent of the device’s temperature—a telltale signature of macroscopic quantum tunneling.
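
Why temperature independence is the smoking gun comes down to how the two escape routes scale. In the standard treatment—a textbook result rather than anything specific to this experiment—thermal activation over the barrier and quantum tunneling through it proceed at roughly

\[
\Gamma_{\text{thermal}} \sim \frac{\omega_p}{2\pi}\, e^{-\Delta U / k_B T},
\qquad
\Gamma_{\text{tunnel}} \sim \frac{\omega_p}{2\pi}\, e^{-7.2\, \Delta U / \hbar \omega_p},
\]

where \( \Delta U \) is the barrier height and \( \omega_p \) is the junction’s plasma frequency. The thermal rate collapses as the temperature drops, while the tunneling rate does not depend on temperature at all, so escape statistics that stop changing as the device gets colder point to tunneling.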

The team also demonstrated that the Josephson junction exhibited quantized energy levels—meaning the energy of the system was limited to only certain allowed values, just like subatomic particles can gain or lose energy only in fixed, discrete amounts—confirming the quantum nature of the system. Their discovery effectively revolutionized quantum science, since other scientists could now test precise quantum physics on silicon chips, among other applications.

Lasers, superconductors, and superfluid liquids exhibit quantum mechanical effects at the macroscale, but these arise by combining the behavior of microscopic components. Clarke, Devoret, and Martinis were able to create a macroscopic effect—a measurable voltage—from a single macroscopic quantum state. Their system contained billions of Cooper pairs filling the entire superconductor on the chip, yet all of them were described by a single wave function. Together they behave like a large-scale artificial atom.

In fact, their circuit was basically a rudimentary qubit. Martinis showed in a subsequent experiment that such a circuit could be an information-bearing unit, with the lowest energy state and the first step upward functioning as a 0 and a 1, respectively. This paved the way for such advances as the transmon in 2007: a superconducting charge qubit with reduced sensitivity to noise.

“That quantization of the energy levels is the source of all qubits,” said Irfan Siddiqi, chair of UC Berkeley’s Department of Physics and one of Devoret’s former postdocs. “This was the grandfather of qubits. Modern qubit circuits have more knobs and wires and things, but that’s just how to tune the levels, how to couple or entangle them. The basic idea that Josephson circuits could be quantized and were quantum was really shown in this experiment. The fact that you can see the quantum world in an electrical circuit in this very direct way was really the source of the prize.”

So perhaps it is not surprising that Martinis left academia in 2014 to join Google’s quantum computing efforts, helping to build a quantum computer the company claimed had achieved “quantum supremacy” in 2019. Martinis left in 2020 and co-founded a quantum computing startup, Qolab, in 2022. His fellow Nobel laureate, Devoret, now leads Google’s quantum computing division and is also a faculty member at the University of California, Santa Barbara. As for Clarke, he is now a professor emeritus at UC Berkeley.

“These systems bridge the gap between microscopic quantum behavior and macroscopic devices that form the basis for quantum engineering,” Gregory Quiroz, an expert in quantum information science and quantum algorithms at Johns Hopkins University, said in a statement. “The rapid progress in this field over the past few decades—in part fueled by their critical results—has allowed superconducting qubits to go from small-scale laboratory experiments to large-scale, multi-qubit devices capable of realizing quantum computation. While we are still on the hunt for undeniable quantum advantage, we would not be where we are today without many of their key contributions to the field.”

As is often the case with fundamental research, none of the three physicists realized at the time how significant their discovery would be in terms of its impact on quantum computing and other applications.

“This prize really demonstrates what the American system of science has done best,” Jonathan Bagger, CEO of the American Physical Society, told The New York Times. “It really showed the importance of the investment in research for which we do not yet have an application, because we know that sooner or later, there will be an application.”

2025 Nobel Prize in Physics awarded for macroscale quantum tunneling Read More »

meet-the-2025-ig-nobel-prize-winners

Meet the 2025 Ig Nobel Prize winners


The annual award ceremony features miniature operas, scientific demos, and the 24/7 lectures.

The Ig Nobel Prizes honor “achievements that first make people laugh and then make them think.” Credit: Aurich Lawson / Getty Images

Does alcohol enhance one’s foreign language fluency? Do West African lizards have a preferred pizza topping? And can painting cows with zebra stripes help repel biting flies? These and other unusual research questions were honored tonight in a virtual ceremony to announce the 2025 recipients of the annual Ig Nobel Prizes. Yes, it’s that time of year again, when the serious and the silly converge—for science.

Established in 1991, the Ig Nobels are a good-natured parody of the Nobel Prizes; they honor “achievements that first make people laugh and then make them think.” The unapologetically campy awards ceremony features miniature operas, scientific demos, and the 24/7 lectures, in which experts must explain their work twice: once in 24 seconds and again in just seven words.

Acceptance speeches are limited to 60 seconds. And as the motto implies, the research being honored might seem ridiculous at first glance, but that doesn’t mean it’s devoid of scientific merit. In the weeks following the ceremony, the winners will also give free public talks, which will be posted on the Improbable Research website.

Without further ado, here are the winners of the 2025 Ig Nobel prizes.

Biology

Example of the area of legs and body used to count biting flies on cows.

Credit: Tomoki Kojima et al., 2019

Citation: Tomoki Kojima, Kazato Oishi, Yasushi Matsubara, Yuki Uchiyama, Yoshihiko Fukushima, Naoto Aoki, Say Sato, Tatsuaki Masuda, Junichi Ueda, Hiroyuki Hirooka, and Katsutoshi Kino, for their experiments to learn whether cows painted with zebra-like striping can avoid being bitten by flies.

Any dairy farmer can tell you that biting flies are a pestilent scourge for cattle herds, which is why one so often sees cows throwing their heads, stamping their feet, flicking their tails, and twitching their skin—desperately trying to shake off the nasty creatures. There’s an economic cost as well, since the flies cause the cattle to graze and feed less, bed down for shorter times, and start bunching together, which increases heat stress and risks injury to the animals. That results in lower milk yields from dairy cows and lower beef yields from feedlot cattle.

You know who isn’t much bothered by biting flies? The zebra. Scientists have long debated the function of the zebra’s distinctive black-and-white striped pattern. Is it for camouflage? Confusing potential predators? Or is it to repel those pesky flies? Tomoki Kojima et al. decided to put the latter hypothesis to the test, painting zebra stripes on six pregnant Japanese black cows at the Aichi Agricultural Research Center in Japan. They used water-borne lacquers that washed away after a few days, so the cows could take turns being in three different groups: zebra stripes, just black stripes, or no stripes (as a control).

The results: the zebra stripes significantly decreased both the number of biting flies on the cattle and the animals’ fly-repelling behaviors compared to those with black stripes or no stripes. The one exception was for skin twitching—perhaps because it is the least energy intensive of those behaviors. Why does it work? The authors suggest it might have something to do with modulated brightness or polarized light confusing the insects’ motion detection system, which the flies use to control their approach when landing on a surface. But that’s a topic for further study.

Chemistry

Freshly cooked frozen blintzes in a non-stick frying pan coated with Teflon

Credit: Andrevan/CC BY-SA 2.5

Citation: Rotem Naftalovich, Daniel Naftalovich, and Frank Greenway, for experiments to test whether eating Teflon [a form of plastic more formally called “polytetrafluoroethylene”] is a good way to increase food volume and hence satiety without increasing calorie content.

Diet sodas and other zero-calorie drinks are a mainstay of the modern diet, thanks to the development of artificial sweeteners whose molecules can’t be metabolized by the human body. The authors of this paper are intrigued by the notion of zero-calorie foods, which they believe could be achieved by increasing the satisfying volume and mass of food without increasing the calories. And they have just the additive for that purpose: polytetrafluoroethylene (PTFE), more commonly known as Teflon.

Yes, the stuff they use on nonstick cookware. They insist that Teflon is inert, heat-resistant, impervious to stomach acid, tasteless, cost-effective, and available in handy powder form for easy mixing into food. They recommend a ratio of three parts food to one part Teflon powder.

The authors understand that to the average layperson, this is going to sound like a phenomenally bad idea—no thank you, I would prefer not to have powdered Teflon added to my food. So they spend many paragraphs citing all the scientific studies on the safety of Teflon—it didn’t hurt rats in feeding trials!—as well as the many applications for which it is already being used. These include Teflon-coated stirring rods used in labs and coatings on medical devices like bladder catheters and gynecological implants, as well as the catheters used for in vitro fertilization. And guys, you’ll be happy to know that Teflon doesn’t seem to affect sperm motility or viability. I suspect this will still be a hard sell in the consumer marketplace.

Physics

Cacio e pepe is an iconic pasta dish that is also frustratingly difficult to make

Credit: Simone Frau

Citation: Giacomo Bartolucci, Daniel Maria Busiello, Matteo Ciarchi, Alberto Corticelli, Ivan Di Terlizzi, Fabrizio Olmeda, Davide Revignas, and Vincenzo Maria Schimmenti, for discoveries about the physics of pasta sauce, especially the phase transition that can lead to clumping, which can be a cause of unpleasantness.

“Pasta alla cacio e pepe” is a simple dish: just tonnarelli pasta, pecorino cheese, and pepper. But its simplicity is deceptive. The dish is notoriously challenging to make because it’s so easy for the sauce to form unappetizing clumps with a texture more akin to stringy mozzarella rather than being smooth and creamy. As we reported in April, Italian physicists came to the rescue with a foolproof recipe based on their many scientific experiments, according to a new paper published in the journal Physics of Fluids. The trick: using corn starch for the cheese and pepper sauce instead of relying on however much starch leaches into the boiling water as the pasta is cooked.

Traditionally, the chef will extract part of the water and starch solution—which is cooled to a suitable temperature to avoid clumping as the cheese proteins “denaturate”—and mix it with the cheese to make the sauce, adding the pepper last, right before serving. But the authors note that temperature is not the only factor that can lead to this dreaded “mozzarella phase.” If one tries to mix cheese and water without any starch, the clumping is more pronounced. There is less clumping with water containing a little starch, like water in which pasta has been cooked. And when one mixes the cheese with pasta water “risottata”—i.e., collected and heated in a pan so enough water evaporates that there is a higher concentration of starch—there is almost no clumping.

The authors found that the correct starch ratio is between 2 and 3 percent of the cheese weight. Below that, you get the clumping phase separation; above it, the sauce becomes stiff and unappetizing as it cools. Pasta water alone contains too little starch. Using pasta water “risottata” may concentrate the starch, but the chef has less control over the precise amount of starch. So the authors recommend simply dissolving 4 grams of powdered potato or corn starch in 40 grams of water, heating it gently until it thickens, and combining that gel with the cheese. They also recommend toasting the black pepper briefly before adding it to the mixture to enhance its flavors and aromas.
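
The ratio is simple enough to scale in the kitchen. As a quick arithmetic sketch (illustrative only; consult the paper for the full recipe), take starch at roughly 2.5 percent of the cheese weight and dissolve it in about ten parts water, the same starch-to-water proportion as the 4-grams-in-40-grams example above:

```python
def cacio_e_pepe_starch(cheese_g: float) -> tuple[float, float]:
    """Return (starch_g, water_g): starch at ~2.5% of the cheese weight,
    dissolved in roughly ten parts water (mirroring the 4 g in 40 g example)."""
    starch = 0.025 * cheese_g
    return starch, 10 * starch

for cheese in (160, 240, 320):
    starch, water = cacio_e_pepe_starch(cheese)
    print(f"{cheese} g cheese -> about {starch:.0f} g starch in {water:.0f} g water")
```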

Engineering Design

Experimental set-up (a) cardboard enclosure (b) UV-C tube light (c) SMPS

Credit: Vikash Kumar and Sarthak Mittal

Citation: Vikash Kumar and Sarthak Mittal, for analyzing, from an engineering design perspective, “how foul-smelling shoes affects the good experience of using a shoe-rack.”

Shoe odor is a universal problem, even in India, according to the authors of this paper, who hail from Shiv Nadar University (SNU) in Uttar Pradesh. All that heat and humidity means people perspire profusely when engaging even in moderate physical activity. Add in a lack of proper ventilation and washing, and shoes become a breeding ground for odor-causing bacteria called Kytococcus sedentarius. Most Indians make use of shoe racks to store their footwear, and the odors can become quite intense in that closed environment.

Yet nobody has really studied the “smelly shoe” problem when it comes to shoe racks. Enter Kumar and Mittal, who conducted a pilot study with the help of 149 first-year SNU students. More than half reported feeling uncomfortable about their own or someone else’s smelly shoes, and 90 percent kept their shoes in a shoe rack. Common methods to combat the odor included washing the shoes and drying them in the sun; using spray deodorant; or sprinkling the shoes with an antibacterial powder. They were unaware of many current odor-combatting products on the market, such as tea tree and coconut oil solutions, thyme oil, or isopropyl alcohol.

Clearly, there is an opportunity to make a killing in the odor-resistant shoe rack market. So naturally Kumar and Mittal decided to design their own version. They opted to use bacteria-killing UV rays (via a UV-C tube light) as their built-in “odor eater,” testing their device on the shoes of several SNU athletes, “which had a very strong noticeable odor.” They concluded that an exposure time of two to three minutes was sufficient to kill the bacteria and get rid of the odor.

Aviation

Wing membranes (patagia) of Townsend's big-eared bat, Corynorhinus townsendii

Credit: Public domain

Citation: Francisco Sánchez, Mariana Melcón, Carmi Korine, and Berry Pinshow, for studying whether ingesting alcohol can impair bats’ ability to fly and also their ability to echolocate.

Nature is rife with naturally occurring ethanol, particularly from ripening fruit, and that fruit in turn is consumed by various microorganisms and animal species. There are occasional rare instances of some mammals, birds, and even insects consuming fruit rich in ethanol and becoming intoxicated, making those creatures more vulnerable to potential predators or more accident-prone due to lessened motor coordination. Sánchez et al. decided to look specifically at the effects of ethanol on Egyptian fruit bats, which have been shown to avoid high-ethanol fruit. The authors wondered if this might be because the bats wanted to avoid becoming inebriated.

They conducted their experiments on adult male fruit bats kept in an outdoor cage that served as a long flight corridor. The bats were given liquid food with varying amounts of ethanol and then released in the corridor, with the authors timing how long it took each bat to fly from one end to the other. A second experiment followed the same basic protocol, but this time the authors recorded the bats’ echolocation calls with an ultrasonic microphone. The results: The bats that received liquid food with the highest ethanol content took longer to fly the length of the corridor, evidence of impaired flight ability. The quality of those bats’ echolocation was also adversely affected, putting them at a higher risk of colliding with obstacles mid-flight.

Psychology

Narcissus (1597–99) by Caravaggio; the man in love with his own reflection

Credit: Public domain

Citation: Marcin Zajenkowski and Gilles Gignac, for investigating what happens when you tell narcissists—or anyone else—that they are intelligent.

Not all narcissists are created equal. There are vulnerable narcissists who tend to be socially withdrawn, have low self-esteem, and are prone to negative emotions. And then there are grandiose narcissists, who exhibit social boldness, high self-esteem, and are more likely to overestimate their own intelligence. The prevailing view is that this overconfidence stems from narcissism. The authors wanted to explore whether this effect might also work in reverse, i.e., that believing one has superior intelligence due to positive external feedback can lead to at least a temporary state of narcissism.

Zajenkowski et al. recruited 361 participants from Poland who were asked to rate their level of intelligence compared to other people; complete the Polish version of the Narcissistic Personality Inventory; and take an IQ test to compare their perceptions of their own intelligence with an objective measurement. The participants were then randomly assigned to one of two groups. One group received positive feedback—telling them they did indeed have a higher IQ than most people—while the other received negative feedback.

The results confirmed most of the researchers’ hypotheses. In general, participants gave lower estimates of their relative intelligence after completing the IQ test, which provided an objective check of sorts. But the type of feedback they received had a measurable impact. Positive feedback enhanced their feelings of uniqueness (a key aspect of grandiose narcissism). Those who received negative feedback rated their own intelligence as being lower, and that negative feedback had a larger effect than positive feedback. The authors concluded that external feedback helped shape the subjects’ perception of their own intelligence, regardless of the accuracy of that feedback.

Nutrition

Rainbow lizards eating ‘four cheese’ pizza at a seaside tourist resort in Togo.

Credit: Daniele Dendi et al, 2022

Citation: Daniele Dendi, Gabriel H. Segniagbeto, Roger Meek, and Luca Luiselli, for studying the extent to which a certain kind of lizard chooses to eat certain kinds of pizza.

Move over, Pizza Rat, here come the Pizza Lizards—rainbow lizards, to be precise. This is a species common to urban and suburban West Africa. The lizards primarily live off insects and arthropods, but their proximity to humans has led to some developing a more omnivorous approach to their foraging. Bread is a particular favorite. Case in point: One fine sunny day at a Togo seaside resort, the authors noticed a rainbow lizard stealing a tourist’s slice of four-cheese pizza and happily chowing down.

Naturally, they wanted to know if this was an isolated incident or whether the local rainbow lizards routinely feasted on pizza slices. And did the lizards have a preferred topping? Inquiring minds need to know. So they monitored the behavior of nine particular lizards, giving them the choice between a plate of four-cheese pizza and a plate of “four seasons” pizza, spaced about 10 meters apart.

It only took 15 minutes for the lizards to find the pizza and eat it, sometimes fighting over the remaining slices. But they only ate the four-cheese pizza. For the authors, this suggests there might be some form of chemical cues that attract them to the cheesy pizzas, or perhaps it’s easier for them to digest. I’d love to see how the lizards react to the widely derided Canadian bacon and pineapple pizza.

Pediatrics

Pumped breast milk in bottles

Citation: Julie Mennella and Gary Beauchamp, for studying what a nursing baby experiences when the baby’s mother eats garlic.

Mennella and Beauchamp designed their experiment to investigate two questions: whether the consumption of garlic altered the odor of a mother’s breast milk, and if so, whether those changes affected the behavior of nursing infants. (Garlic was chosen because it is known to produce off flavors in dairy cow milk and affect human body odor.) They recruited eight women who were exclusively breastfeeding their infants, taking samples of their breast milk over a period when the participants abstained from eating sulfurous foods (garlic, onion, asparagus), and more samples after the mothers consumed either a garlic capsule or a placebo.

The results: Mothers who ingested the garlic capsules produced milk with a perceptibly more intense odor, as evaluated by several adult panelists brought in to sniff the breast milk samples. The strong odor peaked at two hours after ingestion and decreased thereafter, which is consistent with prior research on cows that ingested highly odorous feeds. As for the infants, those whose mothers ingested garlic attached to the breast for longer periods and sucked more when the milk smelled like garlic. This could be relevant to ongoing efforts to determine whether sensory experiences during breastfeeding can influence how readily infants accept new foods upon weaning, and perhaps even their later food preferences.

Literature

closeup of a hand with clubbed fingernails

Credit: William B. Bean

Citation: The late Dr. William B. Bean, for persistently recording and analyzing the rate of growth of one of his fingernails over a period of 35 years.

If you’re surprised to see a study on fingernail growth rates under the Literature category, it will all make sense once you read the flowery prose stylings of Dr. Bean. He really did keep detailed records of how fast his fingernails grew for 35 years, claiming in his final report that “the nail provides a slowly moving keratin kymograph that measures age on the inexorable abscissa of time.” He sprinkles his observations with ponderous references to medieval astrology, James Boswell, and Moby Dick, with a dash of curmudgeonly asides bemoaning the sterile modern medical teaching methods that permeate “the teeming mass of hope and pain, technical virtuosity, and depersonalization called a ‘health center.'”

So what did our pedantic doctor discover in those 35 years, not just studying his own nails, but meticulously reviewing all the available scientific literature? Well, for starters, the rate of fingernail growth diminishes as one ages; Bean noted that his growth rates remained steady early on, but “slowed down a trifle” over the last five years of his project. Nails grow faster in children than adults. A warm environment can also accelerate growth, as does biting one’s fingernails—perhaps, he suggests, because the biting stimulates blood flow to the area. And he debunks the folklore of hair and nails growing even after death: it’s just the retraction and contraction of the skin post-mortem that makes it seem like the nails are growing.

Peace

Citation: Fritz Renner, Inge Kersbergen, Matt Field, and Jessica Werthmann, for showing that drinking alcohol sometimes improves a person’s ability to speak in a foreign language.

Alcohol is well-known to have detrimental effects on what’s known in psychological circles as “executive functioning,” impacting things like working memory and inhibitory control. But there’s a widespread belief among bilingual people that a little bit of alcohol actually improves one’s fluency in a foreign language, which also relies on executive functioning. So wouldn’t being intoxicated actually have an adverse effect on foreign language fluency? Renner et al. decided to investigate further.

They recruited 50 native German-speaking undergrad psychology students at Maastricht University in the Netherlands who were also fluent in Dutch. They were randomly divided into two groups. One group received an alcoholic drink (vodka with bitter lemon), and the other received water. Each participant consumed enough to be slightly intoxicated after 15 minutes, and then engaged in a discussion in Dutch with a native Dutch speaker. Afterward, they were asked to rate their self-perception of their skill at Dutch, with the Dutch speakers offering independent observer ratings.

The researchers were surprised to find that intoxication improved the participants’ Dutch fluency, based on the independent observer reports. (Self-evaluations were largely unaffected by intoxication levels.) One can’t simply attribute this to so-called “Dutch courage,” i.e., increased confidence associated with intoxication. Rather, the authors suggest that intoxication lowers language anxiety, thereby increasing one’s foreign language proficiency, although further research would be needed to support that hypothesis.

Meet the 2025 Ig Nobel Prize winners Read More »

scientists-unlock-secret-to-thick,-stable-beer-foams

Scientists unlock secret to thick, stable beer foams

For many beer lovers, a nice thick head of foam is one of life’s pure pleasures, and the longer that foam lasts, the better the beer-drinking experience. A team of Swiss researchers spent seven years studying why some beer foams last longer than others and found that the degree of fermentation—i.e., whether a given beer has been singly, doubly, or triply fermented—is crucial, according to a new paper published in the journal Physics of Fluids.

As previously reported, foams are ubiquitous in everyday life, found in foods (whipped cream), beverages (beer, cappuccino), shaving cream and hair-styling mousse, packing peanuts, building insulation, flame-retardant materials, and so forth. All foams are the result of air being beaten into a liquid formula that contains some kind of surfactant (active surface agent), usually fats or proteins in edible foams, or chemical additives in non-edible products. That surfactant strengthens the liquid film walls of the bubbles to keep them from collapsing.

Individual bubbles typically form a sphere because that’s the shape with the minimum surface area for any volume and hence is the most energy-efficient. One reason for the minimizing principle when it comes to a bubble’s shape is that many bubbles can then tightly pack together to form a foam. But bubbles “coarsen” over time, the result of gravity pulling down on the liquid and thinning out the walls. Eventually, they start to look more like soccer balls (polyhedrons). In a coarsening foam, smaller bubbles are gradually absorbed by larger ones. There is less and less liquid to separate the individual bubbles, so they press together to fill the space.
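
The “minimum surface area” point is easy to quantify. As a quick numerical aside (not taken from the foam paper), compare a sphere and a cube holding the same volume:

```python
import math

# For a fixed enclosed volume, a sphere has less surface area -- and hence less
# surface energy -- than other shapes, which is why isolated bubbles are round.
V = 1.0  # arbitrary volume units

r = (3 * V / (4 * math.pi)) ** (1 / 3)
sphere_area = 4 * math.pi * r**2   # about 4.84
cube_area = 6 * V ** (2 / 3)       # exactly 6.00

saving = 100 * (1 - sphere_area / cube_area)
print(f"sphere: {sphere_area:.2f}, cube: {cube_area:.2f} ({saving:.0f}% less area for the sphere)")
```

The sphere encloses the same volume with nearly 20 percent less surface, and that advantage erodes once bubbles are squeezed together in a draining foam—which is why they drift toward polyhedral shapes.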

This “jamming” is why foams are typically far more rigid than their gas (95 percent) and liquid (5 percent) components. The more tightly the bubbles jam together, the less they can move around and the greater the pressure inside them becomes, giving them properties of a solid.

Various factors can affect foam stability. For instance, in 2019, Japanese researchers investigated a phenomenon known as “collective bubble collapse,” or CBC, in which breaking one bubble at the edge of a foam results in a cascading effect as the breakage spreads to other bubbles in the foam. They identified two distinct mechanisms for the resulting CBCs: a so-called “propagating mode,” in which a broken bubble is absorbed into the liquid film, and a “penetrating mode,” in which the breakage of a bubble causes droplets to shoot off and hit other bubbles, causing them to break in turn.

Scientists unlock secret to thick, stable beer foams Read More »